Spacy: AttributeError: 'FunctionLayer' object has no attribute 'W'

Created on 3 Jan 2020 · 8 Comments · Source: explosion/spaCy

Hello, I have a problem using pretrained vectors with command line API. Steps to reproduce:

python -m spacy pretrain fulltext.jsonl vectors/hr_vectors_web_md models/hr/language --use-vectors --use-char --dropout 0.3 --n-iter 60

After that trying to train NER tagger with this command:

python -m spacy train hr models/ner train-ner.json dev-ner.json -v vectors/hr_vectors_web_md --init-tok2vec models/hr/language/model59.bin -p ner

Produces:

Traceback (most recent call last):
  File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/usr/local/lib/python3.6/dist-packages/spacy/__main__.py", line 33, in <module>
    plac.call(commands[command], sys.argv[1:])
  File "/usr/local/lib/python3.6/dist-packages/plac_core.py", line 328, in call
    cmd, result = parser.consume(arglist)
  File "/usr/local/lib/python3.6/dist-packages/plac_core.py", line 207, in consume
    return cmd, self.func(*(args + varargs + extraopts), **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/spacy/cli/train.py", line 244, in train
    components = _load_pretrained_tok2vec(nlp, init_tok2vec)
  File "/usr/local/lib/python3.6/dist-packages/spacy/cli/train.py", line 551, in _load_pretrained_tok2vec
    component.tok2vec.from_bytes(weights_data)
  File "/usr/local/lib/python3.6/dist-packages/thinc/neural/_classes/model.py", line 375, in from_bytes
    dest = getattr(layer, name)
AttributeError: 'FunctionLayer' object has no attribute 'W'

I'm using the Google Colaboratory environment.

Info about spaCy

  • spaCy version: 2.2.3
  • Platform: Linux-4.14.137+-x86_64-with-Ubuntu-18.04-bionic
  • Python version: 3.6.9
Labels: bug, feat / cli, training

All 8 comments

Hi @danielvasic, thanks for the report!

It looks like this is caused by the --use-char argument you used for pretrain: it should be replicated when you run the train command, but that option isn't supported there yet. PR https://github.com/explosion/spaCy/pull/5021 should hopefully fix that. See also my more extensive comment here.

Dear @svlandeg, thanks for the response.

Looking forward to support for the --use-char option in the train command, as it's very important for morphologically rich languages such as Croatian. I will try to train the model without this option for now.

All the best,
Daniel

As a quick hack, you could also set the environment variable 'subword_features' to False (it needs to be the negation of use-char), then the parser will hopefully find it here.

All of this will be thoroughly refactored and much easier to work with, from spaCy v.3 onwards ;-)
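
In case it helps to see that hack spelled out: the sketch below re-runs the exact train command from this issue with the variable set for the child process. The detail that spaCy v2.x picks this setting up through spacy.util.env_opt (which coerces the value to an integer, so 0 stands in for False) is my reading of the code and should be treated as an assumption, not documented behaviour.

import os
import subprocess

# Sketch only: spaCy v2.x reads several tok2vec hyperparameters via
# spacy.util.env_opt(), which coerces values to int, so "0" plays the role
# of False here. The variable name must match the hyperparameter exactly.
env = dict(os.environ, subword_features="0")  # negation of the --use-char flag used for pretrain

# Re-run the same train command as above, with the variable visible to it.
subprocess.run(
    [
        "python", "-m", "spacy", "train", "hr", "models/ner",
        "train-ner.json", "dev-ner.json",
        "-v", "vectors/hr_vectors_web_md",
        "--init-tok2vec", "models/hr/language/model59.bin",
        "-p", "ner",
    ],
    env=env,
    check=True,
)

Equivalently, you could export subword_features=0 in the shell before invoking python -m spacy train directly.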

Actually, one more thought. If you want character embeddings, you should try running without vectors. Have a look at the code here: CharacterEmbed is only used when pretrained_vectors==None and subword_features is False!
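
To make that condition concrete, here is a minimal sketch against the internal model builder. spacy._ml.Tok2Vec is an internal of spaCy v2.2 rather than a public API, and the exact keyword names below are assumptions taken from the code linked above, so treat this as an illustration only.

# Sketch of the branch described above (spaCy v2.2 internals, not a public API;
# keyword names are assumptions based on the linked code).
from spacy._ml import Tok2Vec

tok2vec = Tok2Vec(
    96,    # token vector width (illustrative value)
    2000,  # embedding table rows (illustrative value)
    char_embed=True,          # what --use-char asks for
    subword_features=False,   # the CharacterEmbed branch is skipped if this is True
    pretrained_vectors=None,  # ...and also skipped if a vectors name is given
)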

I'm afraid I don't have enough annotated data to do without pretrained word vectors. I have already trained a model using FastText word vectors, but I will try this and compare the results; maybe I'm wrong. Many thanks for the support :-)

And yet another note: you can add a block for CharacterEmbed combined with a vector, but there's a bug in concatenate_lists on GPU, so you can only use it on CPU. (Which is probably part of why CharacterEmbed isn't used in any spaCy v2.2 models by default.)

@adrianeboyd thank you very much for the suggestion, will give it a try.

All the best,
Daniel

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
