I found that the code below raises the error in the title, but if I add a dropout layer with model.add(Dropout(dropout)), it works. Does anyone know why? The backend is TensorFlow 1.0, with Keras 2.0.2:
```python
from keras.models import Sequential
from keras.layers import Dense, Dropout, LSTM, Bidirectional, Merge  # Merge is the deprecated Keras 1 layer, still importable in 2.0.x

def prep_model1(embedding_layer1, embedding_layer2, dropout=0.5):
    model0 = Sequential()
    model0.add(embedding_layer1)
    model0.add(Bidirectional(LSTM(128, return_sequences=False, dropout=dropout)))

    model1 = Sequential()
    model1.add(embedding_layer2)
    model1.add(Bidirectional(LSTM(128, return_sequences=False, dropout=dropout)))

    model = Sequential()
    model.add(Merge([model0, model1], mode='concat', concat_axis=1))
    # model.add(Dropout(dropout))  # uncommenting this line makes it work
    model.add(Dense(1, activation='sigmoid'))
    return model
```
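As an aside, Merge is the deprecated Keras 1 interface; in Keras 2 the same two-branch concatenation is usually written with the functional API. A minimal sketch, assuming both branches are plain embeddings over integer sequences (the function name and shape parameters here are hypothetical, not from the code above):

```python
from keras.models import Model
from keras.layers import Input, Embedding, LSTM, Bidirectional, Dense, concatenate

def prep_model1_functional(vocab_size, embed_dim, seq_len, dropout=0.5):
    # Two integer-sequence inputs, one per branch.
    input1 = Input(shape=(seq_len,))
    input2 = Input(shape=(seq_len,))
    x1 = Bidirectional(LSTM(128, dropout=dropout))(Embedding(vocab_size, embed_dim)(input1))
    x2 = Bidirectional(LSTM(128, dropout=dropout))(Embedding(vocab_size, embed_dim)(input2))
    merged = concatenate([x1, x2], axis=1)  # replaces Merge(mode='concat', concat_axis=1)
    output = Dense(1, activation='sigmoid')(merged)
    return Model(inputs=[input1, input2], outputs=output)
```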
I also got this error when not using a Dropout layer.
Keras 2.0.0
Node 6.10.3
```python
model = Sequential()
model.add(TimeDistributed(Dense(num_mfcc_coefficients),
                          input_shape=(max_mfcc_frames, num_mfcc_coefficients)))
for _ in range(ENCODE_LAYERS):
    model.add(Bidirectional(RNN(HIDDEN_SIZE, return_sequences=True, dropout=RNN_DROPOUT)))
    model.add(Dropout(DROPOUT))
model.add(Bidirectional(RNN(4 * max_sentence_length, return_sequences=False, dropout=RNN_DROPOUT)))
model.add(Reshape((max_sentence_length, -1)))
for _ in range(DECODE_LAYERS):
    model.add(Bidirectional(RNN(HIDDEN_SIZE, return_sequences=True, dropout=RNN_DROPOUT)))
model.add(TimeDistributed(Dense(num_output_classes)))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'],
              sample_weight_mode='temporal')
model.summary()
```
EDIT: This seems to be fixed in 2.0.4. It is working as expected! @wolfshow I think you can close this issue ;)
I have version 2.0.4 but am still getting the same error for my first LSTM layer combined with the Bidirectional wrapper. Any ideas?
```python
self.model = Sequential()
self.model.add(Embedding(input_dim=vocab_size,
                         output_dim=embedding_size,
                         input_length=max_len))
lstm_model = LSTM(num_units, dropout=dropout)
if useBiDirection:
    lstm_model = Bidirectional(lstm_model)
self.model.add(lstm_model)
self.model.add(Dense(n_classes, activation='softmax'))
self.model.summary()
self.model.compile(loss='categorical_crossentropy',
                   optimizer=Adam(lr=learning_rate),
                   metrics=['accuracy'])
```
I also see the same error if I try to fit this model:

```
InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'bidirectional_1/keras_learning_phase' with dtype bool
```
I solved it by calling K.set_learning_phase(1) before building the model. Previously I was calling it after compile().
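A minimal sketch of that ordering, assuming a Keras 2.0.x / TF 1.x setup (the layer sizes here are illustrative, not from the code above):

```python
from keras import backend as K
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Bidirectional, Dense

# Set the learning phase BEFORE any layers are built, so every op that
# depends on it (e.g. LSTM dropout) is created with a concrete value
# instead of an unfed boolean placeholder.
K.set_learning_phase(1)  # 1 = training

model = Sequential()
model.add(Embedding(input_dim=10000, output_dim=128, input_length=50))
model.add(Bidirectional(LSTM(64, dropout=0.5)))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam')
```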
Are you all using the TensorBoard callback? I think there is a bug there.
I'm also experiencing this issue, and I'm not using the TensorBoard callback. Like everyone else, adding a Dropout layer fixes it for me; removing the LSTM dropout parameter or adding a final Dense layer also works, as sketched below.
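For reference, a compact sketch of those workarounds (my own illustration; the sizes are arbitrary):

```python
from keras.models import Sequential
from keras.layers import Embedding, LSTM, Bidirectional, Dropout, Dense

model = Sequential()
model.add(Embedding(input_dim=10000, output_dim=128, input_length=50))

# Workaround 1: keep the LSTM `dropout` argument, but follow the
# wrapped layer with an explicit Dropout layer.
model.add(Bidirectional(LSTM(64, dropout=0.5)))
model.add(Dropout(0.5))

# Workaround 2 (alternative): omit the `dropout` argument entirely:
#   model.add(Bidirectional(LSTM(64)))

model.add(Dense(1, activation='sigmoid'))  # a final Dense layer also reportedly helps
model.compile(loss='binary_crossentropy', optimizer='adam')
```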
This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 30 days if no further activity occurs, but feel free to re-open a closed issue if needed.
I am getting a similar error. Anyone have an idea?

```
InvalidArgumentError: You must feed a value for placeholder tensor 'embedding_layer_input' with dtype float
	 [[Node: embedding_layer_input = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/gpu:0"]()]]
	 [[Node: output_layer_2/bias/read/_237 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:0", send_device_incarnation=1, tensor_name="edge_1546_output_layer_2/bias/read", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
```
Code:

```python
def model_param(self):
    # Method to do deep learning
    from math import sqrt
    from keras.models import Sequential
    from keras.layers import Dense, Flatten, Dropout, Activation
    from keras.layers import LSTM
    from keras.layers.embeddings import Embedding
    from keras.initializers import TruncatedNormal

    tn = TruncatedNormal(mean=0.0,
                         stddev=1 / sqrt(self.x_train.shape[1] * self.x_train.shape[1]),
                         seed=2)
    self.model = Sequential()
    self.model.add(Embedding(self.len_vocab, 300, input_length=self.x_train.shape[1]))
    # Adding LSTM cell
    self.model.add(LSTM(self.num_units, dropout=0.30, kernel_initializer=tn, name="lstm_1"))
    # Adding the dense output layer for output
    self.model.add(Dense(1, activation="sigmoid", name="output_layer"))
    # sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
    self.model.compile(loss='binary_crossentropy',
                       optimizer="adam",
                       metrics=['accuracy'])
    self.model.summary()

def fit(self):
    # Training the deep learning network on the training data
    # Adding the callbacks for logging
    import keras
    logger_tb = keras.callbacks.TensorBoard(
        log_dir="logs_sentiment_lstm",
        write_graph=True,
        histogram_freq=5
    )
    self.model.fit(self.x_train, self.y_train,
                   validation_split=0.20,
                   epochs=10,
                   batch_size=128,
                   callbacks=[logger_tb])
```
I'm having this with the TensorBoard callback. It worked fine on the first run, but stopped working once I used a different data file.
I'm running via Hydrogen in Atom. After restarting the kernel it runs fine again ...
Also noticing this when training a second model in a Jupyter notebook while using the TensorBoard callback. The error seems to be thrown at the end of the first epoch of training (when evaluating on the validation set, I believe). Once I remove the TensorBoard callback, my second model trains/evaluates fine.
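If the TensorBoard callback is the trigger, one mitigation worth trying (my assumption, not confirmed in this thread) is to disable histogram logging, since with histogram_freq > 0 the callback evaluates summary tensors that depend on the learning-phase placeholder:

```python
import keras

# histogram_freq=0 skips the histogram summaries that require feeding
# the learning phase; graph writing and scalar logging still work.
logger_tb = keras.callbacks.TensorBoard(
    log_dir="logs_sentiment_lstm",
    write_graph=True,
    histogram_freq=0,
)
```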
Closing, as this is resolved.