Transformers: Is it possible to load a local model?

Created on 7 Jan 2020 · 4 comments · Source: huggingface/transformers

โ“ Questions & Help


For some reason (the GFW), I need to download the pretrained model first and then load it locally. But the source code documentation tells me the following:

pretrained_model_name_or_path: either:
    - a string with the `shortcut name` of a pre-trained model to load from cache or download, e.g.: ``bert-base-uncased``.
    - a string with the `identifier name` of a pre-trained model that was user-uploaded to our S3, e.g.: ``dbmdz/bert-base-german-cased``.
    - a path to a `directory` containing model weights saved using :func:`~transformers.PreTrainedModel.save_pretrained`, e.g.: ``./my_model_directory/``.
    - a path or url to a `tensorflow index checkpoint file` (e.g. `./tf_model/model.ckpt.index`). In this case, ``from_tf`` should be set to True and a configuration object should be provided as ``config`` argument. This loading path is slower than converting the TensorFlow checkpoint in a PyTorch model using the provided conversion scripts and loading the PyTorch model afterwards.
    - None if you are both providing the configuration and state dictionary (resp. with keyword arguments ``config`` and ``state_dict``)

I want to download a pretrained model and then load it locally with the from_pretrained API. How can I do that?


All 4 comments

You can use that third option and pass a directory path. Alternatively, I think you can also do:

import torch
from transformers import DistilBertConfig, DistilBertModel

model = DistilBertModel(DistilBertConfig())
model.load_state_dict(torch.load(<path>))
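
For completeness, here is a rough sketch of how such a state dict file could be produced in the first place (the output file name is just an illustration):

import torch
from transformers import DistilBertModel

# Download the pretrained weights once (they are cached), then dump them to a
# local file that load_state_dict can read later.
model = DistilBertModel.from_pretrained("distilbert-base-uncased")
torch.save(model.state_dict(), "distilbert_state_dict.pt")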

Thanks for your advice, I'll give it a try!

I found a solution. If you want to use a pretrained model offline, you can download all of the model's files. For example, if you want to use "chinese-xlnet-mid", you can find its files listed under https://s3.amazonaws.com/models.huggingface.co/.
You can then download each file you need by typing its URL into your browser, e.g. https://s3.amazonaws.com/models.huggingface.co/bert/hfl/chinese-xlnet-mid/added_tokens.json (a scripted version is sketched below).
Put all of these files into a single folder, and then you can use the model offline.
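
For anyone who prefers scripting the download, here is a minimal sketch using requests; the exact file list is an assumption, so check the S3 listing for your model:

import os
import requests

base_url = "https://s3.amazonaws.com/models.huggingface.co/bert/hfl/chinese-xlnet-mid/"
# Assumed file names; verify them against the actual S3 listing for your model.
files = ["config.json", "spiece.model", "added_tokens.json",
         "special_tokens_map.json", "pytorch_model.bin"]

os.makedirs("chinese-xlnet-mid", exist_ok=True)
for name in files:
    response = requests.get(base_url + name)
    response.raise_for_status()
    with open(os.path.join("chinese-xlnet-mid", name), "wb") as f:
        f.write(response.content)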

tokenizer = XLNetTokenizer.from_pretrained('your-folder-name')
model = XLNetModel.from_pretrained('your-folder-name')

If anyone has the same problem, maybe you can try this method. I'll close this issue. Thanks!

It can be done as the documentation suggests.
Once you've loaded the pretrained tokenizer and model for the first time (say, for T5):

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = TFAutoModelWithLMHead.from_pretrained("t5-small")

You can then save them locally via:

tokenizer.save_pretrained('./local_model_directory/')
model.save_pretrained('./local_model_directory/')

And then simply load from the directory:

tokenizer = AutoTokenizer.from_pretrained('./local_model_directory/')
model = TFAutoModelWithLMHead.from_pretrained('./local_model_directory/')
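
For anyone curious what actually ends up on disk, a quick sanity check (the exact file names depend on the model and library version):

import os

# Typically contains config.json, the weights file (tf_model.h5 or pytorch_model.bin),
# and tokenizer files such as spiece.model and special_tokens_map.json.
print(sorted(os.listdir('./local_model_directory/')))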