Although I can use a fine-tuned GPT-2 model from code, the model page complains about the config file (which is already uploaded).
At https://huggingface.co/akhooli/gpt2-small-arabic-poetry (when submitting a prompt), I get:
Can't load config for 'akhooli/gpt2-small-arabic-poetry'. Make sure that: - 'akhooli/gpt2-small-arabic-poetry' is a correct model identifier listed on 'https://huggingface.co/models' - or 'akhooli/gpt2-small-arabic-poetry' is the correct path to a directory containing a config.json file
Currently working on a fix. Will update here.
Should be fixed now
Hey, I'm facing the same issue with two models I uploaded today:
https://huggingface.co/rohanrajpal/bert-base-en-hi-codemix-cased?text=I+like+you.+I+love+you
https://huggingface.co/rohanrajpal/bert-base-en-es-codemix-cased?text=I+like+you.+I+love+you
Here are the config files:
https://s3.amazonaws.com/models.huggingface.co/bert/rohanrajpal/bert-base-en-es-codemix-cased/config.json
https://s3.amazonaws.com/models.huggingface.co/bert/rohanrajpal/bert-base-en-hi-codemix-cased/config.json
pinging @mfuntowicz on this
I am seeing the same issue with a new model I just uploaded (akhooli/xlm-r-large-arabic-sent). It works if called in code through the HF pipeline.
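For reference, the "call it in code" path that works while the widget fails is just a plain transformers pipeline. A minimal sketch, using the model id from the comment above (the import is done lazily so the sketch can be read without transformers installed):

```python
def build_sentiment_pipeline(model_id="akhooli/xlm-r-large-arabic-sent"):
    """Build a sentiment-analysis pipeline for the given hub model id."""
    # Imported lazily; the first call downloads config, weights and
    # tokenizer from the hub -- the same files the widget fails to load.
    from transformers import pipeline
    return pipeline("sentiment-analysis", model=model_id)

# Usage (downloads the model on first run):
#   nlp = build_sentiment_pipeline()
#   nlp("some Arabic text")
```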
Seeing the same issue, pinging @mfuntowicz, @julien-c
https://huggingface.co/donal/Pro_Berta?text=The+goal+of+life+is+%3Cmask%3E.
All the files seem to be in the right place.
Your files now live at:
https://s3.amazonaws.com/models.huggingface.co/bert/donal/Pro_Berta/merges.txt
https://s3.amazonaws.com/models.huggingface.co/bert/donal/Pro_Berta/special_tokens_map.json
https://s3.amazonaws.com/models.huggingface.co/bert/donal/Pro_Berta/training_args.bin
https://s3.amazonaws.com/models.huggingface.co/bert/donal/Pro_Berta/pytorch_model.bin
https://s3.amazonaws.com/models.huggingface.co/bert/donal/Pro_Berta/config.json
https://s3.amazonaws.com/models.huggingface.co/bert/donal/Pro_Berta/tokenizer_config.json
https://s3.amazonaws.com/models.huggingface.co/bert/donal/Pro_Berta/vocab.json
You are probably already aware of this, but inference worked for a bit and then broke again (Can't load config for [model]).
I had to revert my change because it was breaking one of our workflows on the api-inference side. I'm working on a stable patch for today; sorry for the inconvenience.
Should be fixed now. Let us know 👍
Yup, works.
Confirmed (with an existing model that was then updated). Thanks for the fix and for HF's great work!
It works now. Thanks! @mfuntowicz