Describe the bug
AttributeError: 'LSTM' object has no attribute 'proj_size' while using NER tagger
To Reproduce
Created new conda environment to test the new update.
pip install flair
Taking the example code:
from flair.data import Sentence
from flair.models import SequenceTagger
# make a sentence
sentence = Sentence('I love Berlin .')
# load the NER tagger
tagger = SequenceTagger.load('ner')
# run NER over sentence
tagger.predict(sentence)
Expected behavior
Should produce NER tags as explained at https://github.com/flairNLP/flair; instead it throws the error above.
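For reference, printing the predicted tags as in the README example would look roughly like this (the print calls below are added here for illustration):
print(sentence.to_tagged_string())
# or iterate over the predicted entity spans
for entity in sentence.get_spans('ner'):
    print(entity)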
Environment (please complete the following information):
The update broke my environment (plenty of errors with tensorflow, torch, flair, etc.), so I had to create a new environment to get started again. I got the attribute error when trying the fresh installation.
Any suggestion on how to solve it?
Which torch version are you using?
I think the torch 1.8.0 release may have broken it. Could you try the 'ner-large' model to see if that works? And could you also try the 'ner' model with torch 1.7.0?
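(For reference, the installed torch version can be checked with a quick snippet like this:)
import torch
print(torch.__version__)  # e.g. 1.7.1 or 1.8.0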
Had the same problem today.
flair==0.8
pytorch==1.8.0
Using ner-german-large mitigated the problem for now.
This likely affects all RNN models trained with torch 1.7.0. The best immediate solution is to use torch 1.7.0 for now until we fix this. The "-large" models will work on torch 1.8.0 because they don't use RNNs.
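For anyone who cannot downgrade right away, here is an untested workaround sketch (not an official fix; it assumes the only problem is the missing attribute on RNN modules unpickled from a pre-1.8 checkpoint): torch 1.8.0 added a proj_size attribute to its RNN modules, so one can patch the default value onto the loaded modules before calling predict.
import torch
from flair.models import SequenceTagger

tagger = SequenceTagger.load('ner')

# RNN modules unpickled from a pre-1.8 checkpoint lack the proj_size
# attribute introduced in torch 1.8.0; set its default (0 = no projection)
for module in tagger.modules():
    if isinstance(module, torch.nn.RNNBase) and not hasattr(module, 'proj_size'):
        module.proj_size = 0

tagger.predict(sentence)  # sentence from the snippet above; should no longer raise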
It is torch 1.8.0, which gets installed while installing flair. Using the -large models helps; however, pre-update transformer models no longer work, and I had to download the models again.
What is the best way to save a model such as dbert_model = TransformerDocumentEmbeddings('distilbert-base-german-cased', fine_tune=True) to a local folder?
Currently I store it as a pickle file.
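One option (a sketch, not an official flair API; it assumes the embedding behaves like a regular torch.nn.Module) is to save only the weights and re-create the object on load, rather than pickling the whole thing:
import torch
from flair.embeddings import TransformerDocumentEmbeddings

dbert_model = TransformerDocumentEmbeddings('distilbert-base-german-cased', fine_tune=True)
# ... fine-tune ...

# save only the weights instead of pickling the whole object
torch.save(dbert_model.state_dict(), 'dbert_model.pt')

# reload by re-creating the embedding with the same arguments and loading the weights
reloaded = TransformerDocumentEmbeddings('distilbert-base-german-cased', fine_tune=True)
reloaded.load_state_dict(torch.load('dbert_model.pt'))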
Ahhh 😩
Well, maybe we can release an emergency release that pins pytorch<=1.7 :thinking:
I encountered the same issue when running the latest version of flair (v0.8.0) on Mac (v10.15.7). Although far from optimal (due to size), I was able to run the NER sequence tagging with the larger English model (i.e. ner-large).
As an additional note, the issue seems to be related to a number of other models; e.g. frame, frame-fast, ner-fast, etc.
Code used for testing
from flair.data import Sentence
from flair.models import MultiTagger
sentence = Sentence("Let's meet with George Washington on the 20th of February 2020 in London.")
tagger = MultiTagger.load(['frame-fast', 'ner-fast'])
tagger.predict(sentence)
print(sentence)
print(sentence.to_tagged_string())
I can confirm that the demo works for me if I do this:
pip install flair
pip uninstall torch
pip install torch==1.7.1
I've pushed a hotfix to pip that restricts the torch version to a maximum of 1.7.1 for now. If you install a fresh version through pip it should work now.
A more thorough fix, which also solves the issue for torch 1.8.0, is being prepared as well.
Although the correct version of torch is installed when using the following command
pip install flair
it seems that torch 1.8.0 is still being resolved when running the pip-compile command. Is this expected?
Hi @qhreul, could you please provide the exact command that you ran? :thinking:
I checked both pip3 install flair and pip3 install git+https://github.com/flairNLP/flair.git in a clean docker container, and in both cases torch 1.7.1 is downloaded.
Hi @stefan-it I was running the following command
pip-compile --no-emit-index-url requirements\common.in
In the common.in file, there is a reference to the flair library; i.e.
flair==0.8.0
When I look at the .txt file generated by pip-compile, I see the following:
flair==0.8.0
    # via -r requirements/common.in
...
torch==1.8.0
    # via flair
Ah, I see. Could you try using the 0.8.0.post1 identifier? :thinking:
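That is, pinning the hotfix release in the .in file before re-running pip-compile, roughly:
# requirements/common.in
flair==0.8.0.post1   # the hotfix release mentioned above, which caps torch at 1.7.1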
Tried pip install flair in Google Colab and got torchvision, torchtext, and konoha errors:
ERROR: torchvision 0.9.0+cu101 has requirement torch==1.8.0, but you'll have torch 1.7.1 which is incompatible.
ERROR: torchtext 0.9.0 has requirement torch==1.8.0, but you'll have torch 1.7.1 which is incompatible.
ERROR: konoha 4.6.4 has requirement requests<3.0.0,>=2.25.1, but you'll have requests 2.23.0 which is incompatible.
Looks like a chicken-and-egg problem.
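A possible way out (untested; it assumes torchvision 0.8.2 and torchtext 0.8.1 are the builds that pair with torch 1.7.1) is to pin the companion packages to matching versions alongside flair:
pip install flair
pip install torch==1.7.1 torchvision==0.8.2 torchtext==0.8.1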
FYI, this issue is on PyTorch discussion as well: https://github.com/pytorch/pytorch/issues/53359
The issue is that somewhere in the Flair codebase the whole model is saved and loaded with torch.save(model) / torch.load() instead of going through the state_dict. This needs to be fixed, as saving the whole model is not the right way to go...
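For illustration only, here is a toy example of why pickling the whole object breaks across library versions (plain pickle here; torch.save uses the same mechanism for the object graph): __init__ is not re-run on unpickling, so attributes added in a newer class definition are simply missing.
import pickle

class Module:                       # stand-in for the "old" torch.nn.LSTM
    def __init__(self):
        self.hidden_size = 256

old_bytes = pickle.dumps(Module())

class Module:                       # "upgraded" class with a new attribute
    def __init__(self):
        self.hidden_size = 256
        self.proj_size = 0          # new in the later release

restored = pickle.loads(old_bytes)  # restores the old __dict__, skips __init__
print(restored.proj_size)           # raises AttributeError, just like the bug above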
@z-a-f thanks a lot for pointing to this. Our master branch fixes this by changing serialization (see #2151) but this unfortunately requires all already-trained flair-models to be re-serialized.
If PyTorch fixes this themselves for 1.8.1 I am thinking of holding back the fix until that release. This would have the advantage that all previously serialized models would still run (i.e. minimal friction), but all newly saved models in Flair would be "future-proof" against such changes.