Hi,
Keras is quite amazing, thanks. But I can't find the right way to get the output of intermediate layers. Suppose I have trained a convolutional network; after training I want to discard the fully connected layers and use the output of the last convolutional layer. Is that possible?
I've thought of using multiple outputs with the Graph model, but that requires training targets for all outputs, which are not available in my case. Also, the load_weights method only works on models with identical architectures, so it can't be used to load the learned weights of the convolutional layers into another model that contains only those layers.
Hi,
See this: https://github.com/fchollet/keras/issues/431#issuecomment-124175958
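In short, the idea from that thread is to build a second model that maps the original input to the intermediate layer's output, so the trained convolutional layers are reused and the fully connected layers are simply left out. A minimal sketch (the toy architecture, the layer name 'conv', and the shapes are invented for illustration; with a real trained model you would just reference its own input and layers):

```python
import numpy as np
from tensorflow import keras

# Toy convolutional model standing in for the trained network.
inputs = keras.Input(shape=(8, 8, 1))
x = keras.layers.Conv2D(4, 3, activation='relu', name='conv')(inputs)
x = keras.layers.Flatten()(x)
outputs = keras.layers.Dense(10, activation='softmax')(x)
model = keras.Model(inputs, outputs)

# New model that stops at the conv layer; it shares the trained weights.
feature_extractor = keras.Model(model.input, model.get_layer('conv').output)

x_batch = np.random.rand(2, 8, 8, 1).astype('float32')
features = feature_extractor.predict(x_batch, verbose=0)
print(features.shape)  # (2, 6, 6, 4)
```

Because the extractor shares layers with the original model, no weight copying or load_weights gymnastics are needed.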
thanks, it solved my issue :)
Hi @smohsensh, I use the Graph model and want to get the output of intermediate layers, like this:
model = Graph()
model.add_input(name='input0', input_shape=())
model.add_node(Convolution2D(), name='c1', input='input0')
.......
I want to see the output of c1, so I tried:
getFeatureMap = theano.function(model.inputs['input0'].input,
                                model.nodes['c1'].get_output(train=False),
                                allow_input_downcast=True)
But it shows:
TypeError: list indices must be integers, not str
Could you give me some advice? Thanks.
More generally you can visualise the output/activations of every layer of your model. I wrote an example with MNIST to show how here:
https://github.com/philipperemy/keras-visualize-activations
So far it's the least painful approach I've seen.
You can cut the model from the input to the layer you want:
from keras.models import Model

def build_bottleneck_model(model, layer_name):
    # Walk the layers until we find the one to cut at.
    for layer in model.layers:
        if layer.name == layer_name:
            output = layer.output
    bottleneck_model = Model(model.input, output)
    bottleneck_model.compile(loss='categorical_crossentropy',
                             optimizer='adam',
                             metrics=['accuracy'])
    return bottleneck_model
This will return a model that takes the old model's input and outputs the activations of the target layer.
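A quick usage sketch of that helper (repeated here so the snippet runs standalone; the toy model and the layer name 'hidden' are invented for illustration):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras.models import Model

def build_bottleneck_model(model, layer_name):
    # Find the named layer and cut the model there.
    for layer in model.layers:
        if layer.name == layer_name:
            output = layer.output
    bottleneck_model = Model(model.input, output)
    bottleneck_model.compile(loss='categorical_crossentropy',
                             optimizer='adam',
                             metrics=['accuracy'])
    return bottleneck_model

# Toy model standing in for a trained network.
inputs = keras.Input(shape=(16,))
h = keras.layers.Dense(8, activation='relu', name='hidden')(inputs)
outputs = keras.layers.Dense(3, activation='softmax')(h)
full_model = keras.Model(inputs, outputs)

bottleneck = build_bottleneck_model(full_model, 'hidden')
feats = bottleneck.predict(np.random.rand(4, 16).astype('float32'), verbose=0)
print(feats.shape)  # (4, 8)
```

Note the compile step is only needed if you intend to call evaluate or fit on the bottleneck model; for plain predict it can be skipped.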
@Shawn-Shan try to use https://github.com/philipperemy/keras-visualize-activations
No need to cut the model (not very elegant!).