ConnectionError: HTTPSConnectionPool(host='s3.eu-central-1.amazonaws.com', port=443): Max retries exceeded with url: /alan-nlp/resources/models-v0.4/TEXT-CLASSIFICATION_imdb/imdb.pt (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7efc06e96438>: Failed to establish a new connection: [Errno 110] Connection timed out'))
I'm getting this error when loading the model. I'm not allowed to connect to "s3.eu-central-1.amazonaws.com". Is there any way I can download the models and embeddings and load them offline?
I've got the same issue when I tried to use embedding = BertEmbeddings('bert-base-uncased'). After uninstalling and reinstalling pyopenssl, my error became:
Model name 'bert-base-uncased' was not found in model name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese). We assumed 'https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt' was a path or url but couldn't find any file associated to this path or url.
Hello @sam1064max does this problem still persist? Generally, downloading models should work - maybe there was a temporary connection issue?
@xinru43 this seems like the same error as issue #594 - could it be that you do not have the current version of pytorch-pretrained-bert installed?
I got this error when I ran out of disk space while downloading the models. There's also an open issue here.
Hello @alanakbik, I am facing the same error while using a downloaded model. Has this issue been fixed?
Please advise.
error:
path or url but couldn't find any file associated to this path or url.
Thanks
Mahesh
@search4mahesh did you manage to solve the problem? We cannot reproduce the error, downloading and loading models seems to work on our end.
Closing for now, but feel free to reopen if you have more questions/comments.
requests.exceptions.SSLError: HTTPSConnectionPool(host='s3.eu-central-1.amazonaws.com', port=443): Max retries exceeded with url: /alan-nlp/resources/models-v0.4/TEXT-CLASSIFICATION_imdb/imdb.pt (Caused by SSLError(SSLError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:852)'),))
I'm facing an SSLError. Also, should the downloaded models be placed at .flair/embeddings/?
Hi @Barath19 if you download the models you can place them wherever you want and then pass the full path to the constructor. So if you download to /path/on/your/machine/IMDB.pt you can load the model like this (in Flair 0.4.2):
imdb_classifier = TextClassifier.load('/path/on/your/machine/IMDB.pt')
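For completeness, a minimal sketch of loading and querying a locally saved classifier, assuming Flair 0.4.2 and the placeholder path from the reply above:

```python
def load_local_classifier(model_path="/path/on/your/machine/IMDB.pt"):
    """Load a TextClassifier from a local file instead of downloading it.
    The path is the placeholder from the reply above; the import is
    deferred so the helper can be defined without Flair installed."""
    from flair.models import TextClassifier
    return TextClassifier.load(model_path)

# Usage (requires the model file to exist on disk):
# from flair.data import Sentence
# classifier = load_local_classifier()
# sentence = Sentence("A wonderful, heartfelt film.")
# classifier.predict(sentence)
# print(sentence.labels)
```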
Same issue while trying to run sample code post installation of flair
requests.exceptions.SSLError: HTTPSConnectionPool(host='s3.eu-central-1.amazonaws.com', port=443): Max retries exceeded with url: /alan-nlp/resources/models-v0.4/TEXT-CLASSIFICATION_imdb/imdb.pt (Caused by SSLError(SSLError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:833)'),))
Same issue while trying to run the sample code; I cannot access the url https://s3.eu-central-1.amazonaws.com/alan-nlp/resources/models-v0.2.
Can anyone successfully run the sample code?
I think the folder itself is not open, but you still can download all the old models. Which model do you want to download?
Please bear in mind that there was a bug in most of those old models (dropout was not turned off during prediction), which is why we've retrained them over time and are currently retraining the last of them for the next release (see #974).
Thank you for rapid response.
I am trying to reproduce the Context String Embedding at https://github.com/zalandoresearch/flair/tree/v0.2.0. Therefore, I ran train.py and got the following problem:

What should I do to deal with it?
Thank you very much.
Unfortunately it looks like you cannot connect to AWS. You need to download the word embeddings and point the embeddings classes to the downloaded models (see #965). The error message tells you the url from which you can download the models.
Use a url of this form to download the model directly from the web, and don't forget your vpn or other access tools if you need them.
https://s3.eu-central-1.amazonaws.com/alan-nlp/resources/models-v0.4/...
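If the machine running Flair has no direct AWS access, a small stdlib helper can fetch the file from another route and cache it locally (the URL and cache directory below are illustrative, not the only valid layout):

```python
import os
import urllib.request

def download_model(url: str, cache_dir: str = "models") -> str:
    """Fetch a model file over HTTPS and cache it locally, skipping
    the download if the file is already present. Substitute the url
    from the error message for the placeholder in the usage below."""
    os.makedirs(cache_dir, exist_ok=True)
    local_path = os.path.join(cache_dir, os.path.basename(url))
    if not os.path.exists(local_path):
        urllib.request.urlretrieve(url, local_path)
    return local_path

# e.g. path = download_model(
#     "https://s3.eu-central-1.amazonaws.com/alan-nlp/resources/"
#     "models-v0.4/TEXT-CLASSIFICATION_imdb/imdb.pt")
```

The returned path can then be passed straight to TextClassifier.load or the embeddings constructors.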
@alanakbik I got the same problem and cannot download the embeddings (glove.gensim.vectors.npy, glove.gensim, news-backward-0.4.1.pt, news-forward-0.4.1.pt).
I've now managed to download them into a local folder; is there any way to load these files manually when training?
Yes, you can pass the path to these files to the init method of the embeddings.
embedding_1 = WordEmbeddings('path/to/glove.gensim')
embedding_2 = FlairEmbeddings('path/to/news-forward-0.4.1.pt')
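Putting the reply above together, a sketch of combining the locally downloaded files into stacked embeddings (the paths are placeholders; the imports are deferred so Flair is only needed when the function is actually called):

```python
def build_stacked_embeddings(glove_path="path/to/glove.gensim",
                             fwd_path="path/to/news-forward-0.4.1.pt",
                             bwd_path="path/to/news-backward-0.4.1.pt"):
    """Combine locally downloaded word and Flair embeddings,
    mirroring the two-line example in the reply above."""
    from flair.embeddings import (WordEmbeddings, FlairEmbeddings,
                                  StackedEmbeddings)
    return StackedEmbeddings([
        WordEmbeddings(glove_path),
        FlairEmbeddings(fwd_path),
        FlairEmbeddings(bwd_path),
    ])
```

The resulting StackedEmbeddings object can then be passed on to a SequenceTagger or document embedding as usual.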
@alanakbik thank you, let me try this
Hi, I have the same problem. I saved the glove embeddings inside the folder embeddings/ and used WordEmbeddings('embeddings/glove.gensim.vectors.npy'), but get this error:
File "train.py", line 34, in <module>
WordEmbeddings('embeddings/glove.gensim.vectors.npy'),
File "/home/xxx/anaconda3/lib/python3.7/site-packages/flair/embeddings/token.py", line 188, in __init__
str(embeddings)
File "/home/xxx/anaconda3/lib/python3.7/site-packages/gensim/models/keyedvectors.py", line 1553, in load
model = super(WordEmbeddingsKeyedVectors, cls).load(fname_or_handle, **kwargs)
File "/home/xx/anaconda3/lib/python3.7/site-packages/gensim/models/keyedvectors.py", line 228, in load
return super(BaseKeyedVectors, cls).load(fname_or_handle, **kwargs)
File "/home/xxx/anaconda3/lib/python3.7/site-packages/gensim/utils.py", line 435, in load
obj = unpickle(fname)
File "/home/xxx/anaconda3/lib/python3.7/site-packages/gensim/utils.py", line 1398, in unpickle
return _pickle.load(f, encoding='latin1')
_pickle.UnpicklingError: STACK_GLOBAL requires str
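The traceback suggests the .npy companion file was passed instead of the main gensim file: as in the reply above, WordEmbeddings should be given 'embeddings/glove.gensim', and gensim then picks up glove.gensim.vectors.npy from the same directory on its own. A small helper can normalize the path (the suffix handling is a sketch based on gensim's file-naming convention):

```python
def gensim_main_file(path: str) -> str:
    """gensim saves large KeyedVectors as a main pickle plus a
    '.vectors.npy' array file; WordEmbeddings expects the main file
    and loads the companion array automatically."""
    suffix = ".vectors.npy"
    return path[:-len(suffix)] if path.endswith(suffix) else path

# WordEmbeddings(gensim_main_file('embeddings/glove.gensim.vectors.npy'))
# then behaves like WordEmbeddings('embeddings/glove.gensim')
```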
Unable to download the model from
https://nlp.informatik.hu-berlin.de/resources/models/pt-pos-clinical/pucpr-flair-clinical-pos-tagging-best-model.pt
A webpage appears instead of downloading the model.