Hi all,
quick question: in the examples on the applications webpage (https://keras.io/applications/) there is a section on extracting features from a trained model. In that case you pick the input and the layer of interest in the architecture and build the model as follows:
```
from keras.applications.vgg19 import VGG19
from keras.models import Model

base_model = VGG19(weights='imagenet')
model = Model(inputs=base_model.input, outputs=base_model.get_layer('block4_pool').output)
```
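For context, the rest of that example extracts the block4_pool features roughly like this (the image path is just a placeholder):

```
from keras.preprocessing import image
from keras.applications.vgg19 import preprocess_input
import numpy as np

img = image.load_img('elephant.jpg', target_size=(224, 224))  # placeholder path
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
block4_pool_features = model.predict(x)
```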
So far so good. The problem is when one would like to do the exact same thing on a model that has been trained as a submodel of another network. Here is an example:
```
from keras import layers, models
from keras.applications.vgg16 import VGG16

# Get a trained model
model = VGG16(weights='imagenet', include_top=False)

# embed it into another model
inflow = layers.Input((256, 256, 3))
x = ... some layers ... (inflow)
latent = model(x)
outflow = layers.Dense(128)(latent)
new_model = models.Model(inflow, outflow)

# training
[ ... ]

# extract the submodel
old_model = new_model.get_layer('vgg16')
old_model = models.Model(inputs=old_model.input, outputs=old_model.get_layer('block5_pool').output)
```
An error message pops up on the screen:
```
ValueError                                Traceback (most recent call last)
[...]
C:\Anaconda2\envs\tensorflow\lib\site-packages\keras\engine\network.py in _init_graph_network(self, inputs, outputs, name)
    163                                  'must come from `keras.layers.Input`. '
    164                                  'Received: ' + str(x) +
--> 165                                  ' (missing previous layer metadata).')
    166         # Check that x is an input tensor.
    167         layer, node_index, tensor_index = x._keras_history

ValueError: Input tensors to a Model must come from `keras.layers.Input`. Received: <keras.engine.input_layer.InputLayer object at 0x0000000006712EB8> (missing previous layer metadata).
```
I really cannot figure out what the problem is. Any help?
This is an edge case, but I believe it's a bug.
@gabrieldemarmiesse do you know any way around it?
Try to reference the original model. Don't use model.layers.
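If it helps, a minimal sketch of what "reference the original model" could look like here, assuming the VGG16 object from the example above is still in scope. Since the layers are shared objects, the extractor below sees whatever weights `model` has after `new_model` was trained:

```
# Sketch: build the extractor from the original VGG16 object itself, whose
# input really is a keras.layers.Input, instead of fetching it back
# via new_model.get_layer('vgg16').
from keras import models

feature_extractor = models.Model(inputs=model.input,
                                 outputs=model.get_layer('block5_pool').output)
```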
@gabrieldemarmiesse unfortunately I can't do that in the project I'm working on. Anyhow, thanks for the help so far ;-)
@HansLeonardVanBrueggemann any luck? I'm facing the same issue
Same here, this seems like something that would need to be fixed
I am facing the same issue. How do I fix it?
Heya, facing the same issue here. I'm using a variation of this: a word-level sequence-to-sequence model with embeddings. Training works perfectly, and if I run the code right after training the model, it works. But if I load the model after training, I run into this issue.
```
encoder_inputs = model.get_layer('input_1')
decoder_inputs = model.get_layer('input_2')
decoder_embedding = model.get_layer('embedding_2')
encoder_lstm = model.get_layer('lstm_1')
decoder_lstm = model.get_layer('lstm_2')
decoder_dense = model.get_layer('dense_1')

_, enc_state_h, enc_state_c = encoder_lstm.output
encoder_states = [enc_state_h, enc_state_c]

# Error occurs at this line.
encoder_model = Model(encoder_inputs, encoder_states)
```
ValueError: Input tensors to a Model must come from keras.layers.Input. Received:
Edit: Seems like this was unrelated. I solved the issue: I was passing the layer itself instead of the input tensor into the function. For anyone who finds this via Google or something: it should actually be encoder_inputs.output, or alternatively model.input[0]
Refer: https://github.com/keras-team/keras/blob/master/examples/lstm_seq2seq_restore.py
On a side note, am I right in assuming that model.input[0] = input_layer.input = input_layer.output?
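In case someone trips over the same thing, a rough sketch of the corrected construction (the layer names are whatever the loaded model happens to use):

```
# Corrected sketch: pass input *tensors*, not InputLayer objects, to Model().
from keras.models import Model

encoder_inputs = model.get_layer('input_1').output   # or model.input[0]
encoder_lstm = model.get_layer('lstm_1')

_, enc_state_h, enc_state_c = encoder_lstm.output
encoder_states = [enc_state_h, enc_state_c]

encoder_model = Model(encoder_inputs, encoder_states)  # no longer raises
```

And as far as I can tell, yes: for an InputLayer the input and output are the same tensor, and that tensor is also what ends up in model.inputs.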
I encountered the same problem while implementing YOLO. I don't know the exact reason for this error, but I used the Sequential API instead of the Functional API, and it worked for me. Maybe you could convert your code to its Sequential counterpart. It should look something like this:
```
# Get a trained model
model = VGG16(weights='imagenet', include_top=False)

new_model = Sequential()
input = Input(shape=(256, 256, 3), name='input')
x = ...
... Add other layers
outflow = Dense(128)

new_model.add(input)
new_model.add(x)
... add other layers ...
for layer in model.layers:
    new_model.add(layer)
new_model.add(outflow)

[ ... ]

start_index = ... the index of the first layer of VGG in the new_model.layers list ...
end_index = ... the index of 'block5_pool' in new_model.layers ...

old_model = Sequential()
for layer in new_model.layers[start_index:end_index + 1]:
    old_model.add(layer)
```
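For what it's worth, here is a slightly more concrete (but untested in this exact setup) sketch of that workaround. The GlobalAveragePooling2D head stands in for the elided custom layers, and the cut point is found by layer name rather than by hard-coded index:

```
# Hedged sketch of the Sequential workaround; the pooling/Dense head is a
# placeholder for the elided layers, and this assumes Keras 2.x behaviour.
from keras.applications.vgg16 import VGG16
from keras.models import Sequential
from keras.layers import InputLayer, Dense, GlobalAveragePooling2D

base = VGG16(weights='imagenet', include_top=False)

new_model = Sequential()
new_model.add(InputLayer(input_shape=(256, 256, 3), name='input'))
for layer in base.layers[1:]:          # skip VGG16's own InputLayer
    new_model.add(layer)
new_model.add(GlobalAveragePooling2D())
new_model.add(Dense(128))

# ... train new_model ...

# Rebuild the feature extractor up to and including 'block5_pool'.
old_model = Sequential()
old_model.add(InputLayer(input_shape=(256, 256, 3)))
for layer in new_model.layers:
    if isinstance(layer, InputLayer):
        continue
    old_model.add(layer)
    if layer.name == 'block5_pool':
        break
```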
I was getting the same error with the Functional API, but I solved it by making sure that the variable holding the Input layer (in my case model_input) and the variable holding the last layer's output (in my case preds) were the ones actually passed to
keras.Model(model_input, preds)
Just this small check solved it for me. I don't think the Keras API is the problem; this seems like an edge case.
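In other words, something like this minimal sketch (the layer sizes are arbitrary):

```
# Minimal sketch of the pattern described above: model_input must be the
# actual keras.layers.Input tensor, and preds the output tensor of the last layer.
from keras.layers import Input, Dense
from keras.models import Model

model_input = Input(shape=(32,))
x = Dense(64, activation='relu')(model_input)
preds = Dense(10, activation='softmax')(x)

model = Model(model_input, preds)   # both arguments are tensors, not layers
```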
I was trying to put two nets together with the Model command:
model = Model(inputs=base_model, outputs=top)
and got the same error.
So I solved it using Sequential:
```
model = Sequential()
model.add(base_model)
model.add(top)
```
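For completeness, a self-contained version of that nesting might look like this (the Flatten + 10-way softmax head is only an example):

```
# Hedged sketch of nesting a pretrained base and a small head inside a
# Sequential model; the head architecture here is only an example.
from keras.applications.vgg16 import VGG16
from keras.models import Sequential
from keras.layers import Flatten, Dense

base_model = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))

top = Sequential()
top.add(Flatten(input_shape=base_model.output_shape[1:]))
top.add(Dense(10, activation='softmax'))

model = Sequential()
model.add(base_model)
model.add(top)
```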