I get an unexpected KeyError when saving a model with several inputs where some of the inputs are not used in the computation graph, as in the following minimal example (A):
(A)
```python
from keras.layers import Input, merge
from keras.models import Model

input1 = Input((10,))
input2 = Input((10,))

# create an output that only uses input1
output = merge([input1, input1])

model = Model(input=[input1, input2], output=[output])

# this works
model.save_weights("weights.hdf5")

# this raises an error
model.save("weights.hdf5")
```
```
KeyError: 'input_2_ib-0'
```
When changing the code to incorporate `input2` into the model, everything works as expected:
(B)
```python
# ...
output = merge([input1, input2])
# ...
```
The underlying reason seems to be that in (A) the dangling input layer is pruned from `model.layers` at build time, but it is still present in `model.input_layers`, which raises the error in `model.get_config()`:
(A)
```python
print([lay.name for lay in model.layers])
# ['input_1', 'merge_1']
print([lay.name for lay in model.input_layers])
# ['input_1', 'input_2']
```
I would not necessarily call this a bug (I could just restrict the model's inputs to those that are actually used), but I find the behaviour a bit unintuitive.
Maybe I'm missing something, or is there an easy workaround?
Thanks!
OSX 10.11.6, keras 1.2.1 master, theano backend, GeForce GT 750M (CNMeM is disabled, cuDNN 5105)
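For reference, the workaround I have in mind is to save only the weights (which works, as shown above) and rebuild the architecture from code when loading. A sketch, where `build_model()` is a hypothetical helper that repeats the `Model(...)` definition above:
```python
# persist only the weights, since save_weights() does not hit the KeyError
model.save_weights("weights.hdf5")

# later: rebuild the same architecture from code and load the weights;
# build_model() is a hypothetical helper repeating the Model(...) definition above
model = build_model()
model.load_weights("weights.hdf5")
```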
- [x] Check that you are up-to-date with the master branch of Keras. You can update with:
  ```
  pip install git+git://github.com/fchollet/keras.git --upgrade --no-deps
  ```
- [ ] If running on TensorFlow, check that you are up-to-date with the latest version. The installation instructions can be found here.
- [x] If running on Theano, check that you are up-to-date with the master branch of Theano. You can update with:
  ```
  pip install git+git://github.com/Theano/Theano.git --upgrade --no-deps
  ```
- [x] Provide a link to a GitHub Gist of a Python script that can reproduce your issue (or just copy the script here if it is short).
any news?
It's an existing issue:
https://github.com/fchollet/keras/issues/2790
Francois' response mirrors the joke: "Doctor, it hurts when I press my finger on my head," to which the doctor responds: "Then stop pressing your head!"
It's a fair response, I suppose, but there are real advantages to supporting this sort of "dangling" network topology. For example, you may want to know whether the secondary input provides any benefit, you have the good habit of saving every network you try, and you don't want to switch the generator, model, etc. every time you toggle that input. I'd also like to see this feature, but I'm not holding my breath.
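In the meantime, one workaround along these lines (just a sketch using the Keras 1.x API from the example above, not an official fix) is to tie the otherwise-unused input into the graph with a contribution that is always zero, so it is never pruned and `model.save()` can serialize it:
```python
from keras.layers import Input, Lambda, merge
from keras.models import Model

input1 = Input((10,))
input2 = Input((10,))

# the real computation only uses input1
output = merge([input1, input1], mode='sum')

# tie input2 into the graph with an always-zero contribution,
# so the input layer is not pruned at build time
zeroed = Lambda(lambda t: 0 * t)(input2)
output = merge([output, zeroed], mode='sum')

model = Model(input=[input1, input2], output=[output])
model.save("model.hdf5")  # input2 is now part of the graph, so saving should work
```
The `0 * t` computation is wasted work, but it lets you toggle the secondary input on and off without changing the model's input signature or the data generator.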
Note that this also happens when setting initial states for recurrent layers:
```python
from keras.layers import Input, LSTM
from keras.models import Model

input1 = Input((10, 10,))
# an LSTM with 32 units needs two initial states (h and c), each of shape (32,)
input2 = [Input((32,)), Input((32,))]
output = LSTM(32)(input1, initial_state=input2)
model = Model(input=[input1] + input2, output=[output])

# this works
model.save_weights("weights.hdf5")

# this raises an error
model.save("weights.hdf5")
```
```
KeyError: 'input_2_ib-0'
```
This is happening to me even though I do use the input: it's closed over in the lambda passed to a Lambda layer. I'm doing something like this (with the usual backend/layer imports added):
```python
from keras import backend as K
from keras import layers as kl

# i_x, i_y and FGSM_EPS are defined elsewhere in the script
x_fgsm_layer = kl.Lambda(
    lambda y_p: i_x + FGSM_EPS * K.sign(
        K.gradients(
            K.sum(K.categorical_crossentropy(i_y, y_p)),
            i_x,
        )
    )
)
```
where `i_y` is the input that's causing the crash. No Layer is applied to `i_y` directly. This might work if the internal functions were wrapped in Lambdas too, but that seems like a lot of hassle (it looks like I'd need a Merge to wrap the cross-entropy in a Lambda) for something that trains just fine and only fails to save.
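A possible way out (just a sketch; `i_x`, `i_y`, `FGSM_EPS` and the `y_p` tensor are the names from the snippet above and are assumed to be defined elsewhere) is to pass every tensor the lambda closes over as an explicit input to the `Lambda` layer, so that `i_y` is wired into the tracked layer graph rather than captured implicitly:
```python
from keras import backend as K
from keras import layers as kl

def fgsm_perturb(tensors):
    # unpack the explicit inputs instead of closing over them
    y_p, x, y = tensors
    grads = K.gradients(K.sum(K.categorical_crossentropy(y, y_p)), x)
    # K.gradients may return a list depending on the backend
    grad = grads[0] if isinstance(grads, (list, tuple)) else grads
    return x + FGSM_EPS * K.sign(grad)

# the output has the same shape as i_x (the second tensor in the input list)
x_fgsm = kl.Lambda(fgsm_perturb,
                   output_shape=lambda shapes: shapes[1])([y_p, i_x, i_y])
```
The idea is that Keras only records inputs that are passed directly into a layer call, so once `i_y` is a direct input to the `Lambda` it should appear in the serialized topology and saving should no longer fail on it.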
This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 30 days if no further activity occurs, but feel free to re-open a closed issue if needed.
Still having this issue, any chance of fixing it? It'd actually be useful...
Same error here
~same error~
Update: in my case the error was due to a discontinuity in the graph. Once I fixed that, the model saved successfully.
same error