Transformers: How to use gpt-2-xl with run_generation.py

Created on 6 Nov 2019 · 4 comments · Source: huggingface/transformers

โ“ Questions & Help


I am currently running run_generation.py with the model set to gpt2, but I am not satisfied with the results from the small model.

How can I insert the big model instead?

wontfix


All 4 comments

You're probably using the command:

python run_generation.py --model_type=gpt2 --model_name_or_path=gpt2

To use GPT2-XL for generation, change the last argument to gpt2-xl:

python run_generation.py --model_type=gpt2 --model_name_or_path=gpt2-xl
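As a quick reference, all four published GPT-2 checkpoints can be passed to --model_name_or_path the same way. Here is a small sketch (the helper function and the approximate parameter counts are my own additions, not part of the script) that maps checkpoint names to sizes and assembles the command line:

```python
# Checkpoint names accepted by --model_name_or_path when --model_type=gpt2.
# Parameter counts are approximate, from the published GPT-2 model sizes.
GPT2_CHECKPOINTS = {
    "gpt2": "~124M parameters (small)",
    "gpt2-medium": "~355M parameters",
    "gpt2-large": "~774M parameters",
    "gpt2-xl": "~1.5B parameters",
}

def build_command(checkpoint: str) -> str:
    """Assemble the run_generation.py invocation for a given checkpoint."""
    if checkpoint not in GPT2_CHECKPOINTS:
        raise ValueError(f"unknown GPT-2 checkpoint: {checkpoint}")
    return (
        "python run_generation.py "
        f"--model_type=gpt2 --model_name_or_path={checkpoint}"
    )

print(build_command("gpt2-xl"))
# python run_generation.py --model_type=gpt2 --model_name_or_path=gpt2-xl
```

Note that gpt2-xl is roughly a 6 GB download, so the first run will take a while while the weights are fetched and cached.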

Ah, it's because the latest version isn't available on PyPI yet, and I installed it with pip. 2.1.1 is the version I've got installed.

Is there any way to force pip to install the latest version that was committed 21 hours ago?

Ah yes, we have yet to release a PyPI version with GPT2-XL. To install from source you would have to do:

pip install git+https://github.com/huggingface/transformers
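After the source install, you can confirm you are past the last PyPI release. A minimal sketch of such a check, assuming (per this thread) that 2.1.1 is the last release without the gpt2-xl checkpoint; the helper name is hypothetical:

```python
def supports_gpt2_xl(version: str) -> bool:
    """Return True if a transformers version string is newer than 2.1.1,
    the last PyPI release before gpt2-xl landed (assumption from this thread)."""
    parts = tuple(int(p) for p in version.split(".")[:3])
    return parts > (2, 1, 1)

# Compare against the installed version, e.g.:
#   import transformers
#   print(supports_gpt2_xl(transformers.__version__))
print(supports_gpt2_xl("2.1.1"))  # False: still the old PyPI release
print(supports_gpt2_xl("2.2.0"))  # True: source/newer install
```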

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
