Transformers: Converting TensorFlow Checkpoints

By Hugging Face | Compiled by VK | Source: GitHub

A command-line interface is provided to convert the original TensorFlow checkpoints of BERT / GPT / GPT-2 / Transformer-XL / XLNet / XLM into PyTorch models, which can then be loaded with the library's from_pretrained method.

Note: since version 2.3.0, the conversion scripts are part of the transformers CLI (transformers-cli), which is available in any installation of transformers >= 2.3.0. The documentation below reflects the transformers-cli convert command format.
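
Once a checkpoint has been converted, loading it back is what from_pretrained is for. Below is a minimal Python sketch, assuming the converted files have been gathered into one directory laid out the way from_pretrained expects (config.json, pytorch_model.bin and, for BERT, vocab.txt); the directory path is a placeholder:

from transformers import BertModel, BertTokenizer

# hypothetical directory holding the converted config.json, pytorch_model.bin and vocab.txt
model_dir = "/path/to/converted/model"

model = BertModel.from_pretrained(model_dir)
tokenizer = BertTokenizer.from_pretrained(model_dir)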

BERT

You can convert any BERT TensorFlow checkpoint (in particular the pre-trained models released by Google, https://github.com/google-research/bert#pre-trained-models) into a PyTorch save file by using the convert_tf_checkpoint_to_pytorch.py script.

This CLI takes a TensorFlow checkpoint (three files starting with bert_model.ckpt) and the associated configuration file (bert_config.json) as input, creates a PyTorch model for that configuration, loads the weights from the TensorFlow checkpoint into the PyTorch model, and saves the resulting model to a standard PyTorch file that can be imported with torch.load() (see the run_bert_extract_features.py, run_bert_classifier.py and run_bert_squad.py examples).

You only need to run this conversion script once to get the PyTorch model. You can then ignore the TensorFlow checkpoint (the three files starting with bert_model.ckpt), but be sure to keep the configuration file (bert_config.json) and the vocabulary file (vocab.txt), because the PyTorch model needs them as well.
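
If you prefer to work with the files exactly as the conversion leaves them, here is a minimal sketch of loading the dump by hand; it assumes the $BERT_BASE_DIR layout from the example further down and that the dump contains a BertForPreTraining-style state dict (both are assumptions, adjust for your checkpoint):

import torch
from transformers import BertConfig, BertForPreTraining, BertTokenizer

base_dir = "/path/to/bert/uncased_L-12_H-768_A-12"  # same path as $BERT_BASE_DIR below

# the configuration and vocabulary files are still needed on the PyTorch side
config = BertConfig.from_json_file(f"{base_dir}/bert_config.json")
tokenizer = BertTokenizer(vocab_file=f"{base_dir}/vocab.txt")

# the conversion saves a plain state dict, so torch.load() can read it directly
state_dict = torch.load(f"{base_dir}/pytorch_model.bin", map_location="cpu")

# build a model from the config and load the converted weights into it
model = BertForPreTraining(config)
model.load_state_dict(state_dict, strict=False)  # strict=False in case some head weights differ
model.eval()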

To run this specific conversion script you will need to have both TensorFlow and PyTorch installed (pip install tensorflow). The rest of the repository only requires PyTorch.

Here is an example of the conversion process for a pre-trained BERT-Base, Uncased model:

export BERT_BASE_DIR=/path/to/bert/uncased_L-12_H-768_A-12

transformers-cli convert --model_type bert \
  --tf_checkpoint $BERT_BASE_DIR/bert_model.ckpt \
  --config $BERT_BASE_DIR/bert_config.json \
  --pytorch_dump_output $BERT_BASE_DIR/pytorch_model.bin

You can download Google's pre-trained models here: https://github.com/google-research/bert#pre-trained-models

OpenAI GPT

Here is an example of the conversion process for a pre-trained OpenAI GPT model, assuming that your NumPy checkpoint is saved in the same format as the OpenAI pre-trained model (see https://github.com/openai/finetune-transformer-lm):

export OPENAI_GPT_CHECKPOINT_FOLDER_PATH=/path/to/openai/pretrained/numpy/weights

transformers-cli convert --model_type gpt \
  --tf_checkpoint $OPENAI_GPT_CHECKPOINT_FOLDER_PATH \
  --pytorch_dump_output $PYTORCH_DUMP_OUTPUT \
  [--config OPENAI_GPT_CONFIG] \
  [--finetuning_task_name OPENAI_GPT_FINETUNED_TASK]

OpenAI GPT-2

Here is an example of the conversion process for a pre-trained OpenAI GPT-2 model (see https://github.com/openai/gpt-2):

export OPENAI_GPT2_CHECKPOINT_PATH=/path/to/gpt2/pretrained/weights

transformers-cli convert --model_type gpt2 \
  --tf_checkpoint $OPENAI_GPT2_CHECKPOINT_PATH \
  --pytorch_dump_output $PYTORCH_DUMP_OUTPUT \
  [--config OPENAI_GPT2_CONFIG] \
  [--finetuning_task_name OPENAI_GPT2_FINETUNED_TASK]

Transformer-XL

Here is an example of the conversion process for a pre-trained Transformer-XL model (see https://github.com/kimiyoung/transformer-xl/tree/master/tf#obtain-and-evaluate-pretrained-sota-models):

export TRANSFO_XL_CHECKPOINT_FOLDER_PATH=/path/to/transfo/xl/checkpoint

transformers-cli convert --model_type transfo_xl \
  --tf_checkpoint $TRANSFO_XL_CHECKPOINT_FOLDER_PATH \
  --pytorch_dump_output $PYTORCH_DUMP_OUTPUT \
  [--config TRANSFO_XL_CONFIG] \
  [--finetuning_task_name TRANSFO_XL_FINETUNED_TASK]

XLNet

Here is an example of the conversion process for a pre-trained XLNet model:

export XLNET_CHECKPOINT_PATH=/path/to/xlnet/checkpoint
export XLNET_CONFIG_PATH=/path/to/xlnet/config

transformers-cli convert --model_type xlnet \
  --tf_checkpoint $XLNET_CHECKPOINT_PATH \
  --config $XLNET_CONFIG_PATH \
  --pytorch_dump_output $PYTORCH_DUMP_OUTPUT \
  [--finetuning_task_name XLNET_FINETUNED_TASK]

XLM

Here is an example of the conversion process for a pre-trained XLM model:

export XLM_CHECKPOINT_PATH=/path/to/xlm/checkpoint

transformers-cli convert --model_type xlm \
  --tf_checkpoint $XLM_CHECKPOINT_PATH \
  --pytorch_dump_output $PYTORCH_DUMP_OUTPUT \
  [--config XLM_CONFIG] \
  [--finetuning_task_name XLM_FINETUNED_TASK]

Original link: https://huggingface.co/transformers/converting_tensorflow_models.html

