I downloaded the raw code of pretrained_word_embeddings.py to try it out. I can't run it: the provided weight list does not match what the layer expects. Here is what I see; is there any way to fix this bug?
Using TensorFlow backend.
Indexing word vectors.
Found 400000 word vectors.
Processing text dataset
Found 19997 texts.
Found 214873 unique tokens.
Shape of data tensor: (19997, 1000)
Shape of label tensor: (19997, 20)
Preparing embedding matrix.
Training model.
Traceback (most recent call last):
File "[the path]/pretrained_word_embeddings.py", line 126, in
embedded_sequences = embedding_layer(sequence_input)
File "/usr/local/lib/python3.5/dist-packages/keras/engine/topology.py", line 543, in __call__
self.build(input_shapes[0])
File "/usr/local/lib/python3.5/dist-packages/keras/layers/embeddings.py", line 101, in build
self.set_weights(self.initial_weights)
File "/usr/local/lib/python3.5/dist-packages/keras/engine/topology.py", line 966, in set_weights
str(weights)[:50] + '...')
ValueError: You called set_weights(weights) on layer "embedding_1" with a weight list of length 1, but the layer was expecting 0 weights. Provided weights: [array([[ 0. , 0. , 0. , .....
Are you using Keras 1.2.0? I hit the same error when testing the newly released 1.2.0 version. After switching back to 1.1.2, the issue is gone.
Thanks @danielhers. Of course I could uninstall my Keras 1.2.0 and reinstall 1.1.2, but what changed between these two versions to cause the issue, from a programmer's point of view?
Any clues? Should I compare the two versions' source code?
I tried replacing individual files and found that the bug goes away if I replace embeddings.py with the 1.1.2 version. However, another problem then comes up, and I do see why rolling back to 1.1.2 entirely might be the smartest way. I cannot do it now, as I am running another job that lasts 10 days.
Using TensorFlow backend.
Indexing word vectors.
Found 400000 word vectors.
Processing text dataset
Found 19997 texts.
Found 214873 unique tokens.
Shape of data tensor: (19997, 1000)
Shape of label tensor: (19997, 20)
Preparing embedding matrix.
Training model.
Traceback (most recent call last):
File "/home/bnpp/ReadingForms/03_wordembedding/pretrained_word_embeddings.py", line 127, in
x = Conv1D(128, 5, activation='relu')(embedded_sequences)
File "/usr/local/lib/python3.5/dist-packages/keras/engine/topology.py", line 491, in __call__
self.build(input_shapes[0])
File "/usr/local/lib/python3.5/dist-packages/keras/layers/convolutional.py", line 117, in build
self.W = self.add_weight(self.W_shape,
AttributeError: 'Convolution1D' object has no attribute 'add_weight'
Turns out my problem wasn't specifically with 1.2.0 or 1.1.2. I was passing both weights= and trainable=False to an Embedding layer, and they are incompatible.
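The clash can be illustrated without Keras. Below is a minimal sketch (NOT the real Keras source; all names are made up) of the mechanism behind the ValueError: a weight created on a non-trainable layer never appears in the list that set_weights() compares against, so a provided list of length 1 meets an expected length of 0.

```python
import numpy as np

class FakeEmbedding:
    """Toy stand-in for the 1.2.0 Embedding layer's weight bookkeeping."""

    def __init__(self, trainable=True):
        self.trainable = trainable
        self._trainable_weights = []

    def build(self, shape):
        # The weight is tracked only as a *trainable* weight.
        self._trainable_weights.append(np.zeros(shape))

    @property
    def weights(self):
        # With trainable=False the trainable list is hidden entirely.
        return self._trainable_weights if self.trainable else []

    def set_weights(self, weights):
        if len(weights) != len(self.weights):
            raise ValueError(
                "You called set_weights(weights) with a weight list of "
                "length %d, but the layer was expecting %d weights."
                % (len(weights), len(self.weights)))
        self._trainable_weights[:] = weights

layer = FakeEmbedding(trainable=False)
layer.build((4, 3))
err = None
try:
    layer.set_weights([np.ones((4, 3))])  # length 1 vs. expected 0
except ValueError as e:
    err = e
print(err)  # reproduces the "expecting 0 weights" mismatch
```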
I'm hitting the same issue, no solution yet. It's true that build in embeddings.py is different; I had to roll back to version 1.1.2. Does anyone have further suggestions?
Removing trainable=False from the Embedding layer and adding model.layers[1].trainable = False before the model is compiled worked for me.
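A plain-Python sketch of why the ordering in that workaround matters (illustrative names, not the Keras API): load the pretrained weights while the layer still reports them via set_weights(), and only freeze the layer afterwards.

```python
import numpy as np

class FakeLayer:
    """Toy layer whose weight list vanishes once it is frozen."""

    def __init__(self):
        self.trainable = True
        self._weights = []

    def build(self, shape):
        self._weights.append(np.zeros(shape))

    @property
    def weights(self):
        return self._weights if self.trainable else []

    def set_weights(self, values):
        if len(values) != len(self.weights):
            raise ValueError("weight list length mismatch")
        self._weights[:] = values

pretrained = np.ones((4, 3))
layer = FakeLayer()               # no trainable=False at construction
layer.build(pretrained.shape)
layer.set_weights([pretrained])   # succeeds: list lengths match
layer.trainable = False           # freeze only afterwards, before compiling
```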
Thank you @SiWorgan !!! Your solution is working!
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs, but feel free to re-open it if needed.
I am getting this same issue with Keras 2.0.6, and I cannot get it to work with trainable set to either True or False.
Same issue here with Keras 2.1.5
I found a way to set the weights using a self-defined initializer.
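The initializer route can be simulated without Keras. A minimal sketch (names are illustrative): instead of handing the layer finished weights, give it a callable that returns the pretrained matrix when asked for an initial value of a given shape; Keras-style initializers are invoked as init(shape).

```python
import numpy as np

# Stand-in for a pretrained embedding matrix (4 words, 3 dimensions).
embedding_matrix = np.arange(12, dtype="float32").reshape(4, 3)

def pretrained_initializer(shape, dtype=None):
    # The framework passes the weight shape at build time; we simply
    # return our matrix instead of a random initialization.
    assert tuple(shape) == embedding_matrix.shape, "shape must match"
    return embedding_matrix

# Simulate the framework asking the initializer for the initial value.
initial = pretrained_initializer((4, 3))
```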
+1
I hit the same problem with Convolution1D. Instead of calling layer.set_weights() afterwards, pass the weights in the constructor:

Convolution1D(filters=filter_num,
              kernel_size=ks,
              padding="valid",
              activation="relu",
              weights= **,  # this solves the problem
              strides=1)(z)
I have faced a similar problem and found the solution is to add the layer to an existing model first, and then invoke set_weights().