Hi,
In the Keras blog, the author demonstrates how to build an autoencoder. However, the definition of the decoder model seems to me to fit only the case with a single hidden layer:
# create a placeholder for an encoded (32-dimensional) input
encoded_input = Input(shape=(encoding_dim,))
# retrieve the last layer of the autoencoder model
decoder_layer = autoencoder.layers[-1]
# create the decoder model
decoder = Model(input=encoded_input, output=decoder_layer(encoded_input))
I have been trying to figure out how to define the decoder model in the general case with multiple hidden layers, but nothing has worked.
For instance, decoder = Model(input=encoded_input, output=decoded) gives an error message such as:
Traceback (most recent call last):
File "train.py", line 37, in <module>
decoder = Model(input=encoded_input, output=decoded)
File "tfw/lib/python3.4/site-packages/Keras-1.0.3-py3.4.egg/keras/engine/topology.py", line 1713, in __init__
str(layers_with_complete_input))
Exception: Graph disconnected: cannot obtain value for tensor Tensor("input_1:0", shape=(?, 784), dtype=float32) at layer "input_1". The following previous layers were accessed without issue: []
You can take a look at the Deep autoencoder section of the blog post; there they use multiple hidden layers.
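One way to make the decoder work for the deep case is to re-apply the trained decoder layers, in order, to a fresh Input. A minimal sketch (not the blog author's exact code; the layer sizes and the use of the `tf.keras` API are assumptions):

```python
# Sketch: deep autoencoder plus a standalone decoder.
# Layer widths (784 -> 128 -> 64 -> 32 -> 64 -> 128 -> 784) are assumed
# for illustration; substitute your own architecture.
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

input_img = Input(shape=(784,))
encoded = Dense(128, activation='relu')(input_img)
encoded = Dense(64, activation='relu')(encoded)
encoded = Dense(32, activation='relu')(encoded)
decoded = Dense(64, activation='relu')(encoded)
decoded = Dense(128, activation='relu')(decoded)
decoded = Dense(784, activation='sigmoid')(decoded)
autoencoder = Model(inputs=input_img, outputs=decoded)

# Standalone decoder: chain the last three layers onto a new placeholder.
# Passing `decoded` directly as the output fails because that tensor is
# wired to `input_img`, not to the new Input -- hence "Graph disconnected".
encoded_input = Input(shape=(32,))
x = encoded_input
for layer in autoencoder.layers[-3:]:  # the three decoder Dense layers
    x = layer(x)
decoder = Model(inputs=encoded_input, outputs=x)
```

The key point is that a Model must be defined from tensors that trace back to its own inputs, so each decoder layer has to be called again on the new 32-dimensional input rather than reused via the `decoded` tensor.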
Most helpful comment
https://stackoverflow.com/questions/44472693/how-to-decode-encoded-data-from-deep-autoencoder-in-keras-unclarity-in-tutorial