When loading a model with a Lambda layer, Keras throws the error TypeError: arg 5 (closure) must be tuple. But after implementing the changes outlined in https://github.com/explosion/spaCy/issues/767#issuecomment-278651113, load_model() works as expected.
The regression was introduced by https://github.com/fchollet/keras/commit/edae1785327dd7a418ac06c2fe85a8c1f6ea05b7#diff-56dc3cc42e1732fdb3a3c2c3c8efa32a. That commit removed func_reconstruct_closure(), but when I add the function back and call it from func_load(), loading works again.
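For context, the error occurs because FunctionType() requires its `closure` argument to be a tuple of cell objects, while the deserialized closure arrives as a list of plain values. Below is a minimal, illustrative sketch of what a reconstruction helper and its use could look like; the names and signatures are mine, not the exact Keras implementation:

```python
import types


def reconstruct_closure(values):
    """Rebuild a tuple of cell objects from plain values.

    Illustrative stand-in for the removed func_reconstruct_closure():
    FunctionType() needs real cells, not a list of raw values.
    """
    def make_cell(value):
        # A nested function that captures `value` yields a genuine cell.
        def inner():
            return value
        return inner.__closure__[0]

    return tuple(make_cell(v) for v in values)


def func_load_sketch(code_object, globs, defaults=None, closure_values=None):
    # Hedged sketch of how func_load() could use the helper above.
    closure = reconstruct_closure(closure_values) if closure_values else None
    return types.FunctionType(code_object, globs,
                              argdefs=defaults, closure=closure)
```

The key point is that the tuple of cells must match the function's co_freevars, which is exactly what passing the raw list broke.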
Is there a fix planned for this?
[x] Check that you are up-to-date with the master branch of Keras. You can update with:
pip install git+git://github.com/fchollet/keras.git --upgrade --no-deps
[x] If running on TensorFlow, check that you are up-to-date with the latest version. The installation instructions can be found here.
[ ] If running on Theano, check that you are up-to-date with the master branch of Theano. You can update with:
pip install git+git://github.com/Theano/Theano.git --upgrade --no-deps
[ ] Provide a link to a GitHub Gist of a Python script that can reproduce your issue (or just copy the script here if it is short).
This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 30 days if no further activity occurs, but feel free to re-open a closed issue if needed.
I am having a similar issue and tried downgrading my pip installation to 2.0.0, which still didn't fix it.
You'll have to make the code changes detailed above manually. It seems @fchollet removed that section of the code because it had no unit tests. I might have some time in the future to familiarise myself with that part of the codebase and submit a PR, but that may be a while. In the meantime, the workaround I've found is to use load_weights() instead of load_model().
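The load_weights() workaround amounts to rebuilding the architecture in code, so the Lambda function never has to be deserialized, and restoring only the numeric weights. A sketch, assuming a user-supplied build_model() function (hypothetical name) that recreates the original architecture and a weights file produced by save_weights():

```python
def load_via_weights(build_model, weights_path):
    """Workaround sketch: avoid load_model() for Lambda-layer models.

    `build_model` is assumed to be a user-supplied callable that
    reconstructs the exact architecture in code (Lambda layers and
    all); only the weights are deserialized from `weights_path`, so
    no closure ever needs to be rebuilt.
    """
    model = build_model()             # Lambda layers defined in code
    model.load_weights(weights_path)  # only numeric weights are loaded
    return model
```

The trade-off is that the architecture must be kept in sync with the saved weights yourself, since nothing about the topology is read from disk.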
@nigeljyng thanks for the work on this!
Hm, so to be clear: the current state is that we can't use load_model() for models with Lambda layers in Keras 2.0.6?
To be clear, the changes you're suggesting are to bring back func_reconstruct_closure() and rename it to func_load(), correct?
@adalca No, I'm suggesting bringing back func_reconstruct_closure() and calling it from func_load(). I don't think I'll have time to look into this, so hopefully someone else can contribute.
@nigeljyng thanks for the quick clarification.
That makes a lot more sense! I had misread your original comment. Thank you.
This is still an issue in 2.0.8.
Is my understanding correct that all that needs to happen is to add unit tests to this PR?
How many do we need and exactly what do they need to cover? Would a model consisting only of a simple lambda layer saved, then loaded suffice?
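A save/load round-trip test along those lines could look like the sketch below. It is only an outline of the idea (imports are kept inside the function so the sketch stands alone); the exact fixtures and placement in the test suite would be up to the maintainers:

```python
def test_lambda_model_roundtrip(save_dir="/tmp"):
    """Sketch of a unit test: save a model containing a Lambda layer,
    reload it with load_model(), and compare predictions."""
    import os
    import numpy as np
    from keras.models import Sequential, load_model
    from keras.layers import Lambda

    model = Sequential([Lambda(lambda x: x * 2.0, input_shape=(3,))])
    path = os.path.join(save_dir, "lambda_model.h5")
    model.save(path)

    reloaded = load_model(path)
    x = np.random.random((4, 3))
    np.testing.assert_allclose(model.predict(x), reloaded.predict(x))
```

Covering a Lambda whose function closes over an outer variable would exercise the closure-reconstruction path specifically, which is where the reported TypeError originates.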
This indeed still appears to be an issue. Is someone working on this? @gw0 ?
@hgaiser Unfortunately, no. At the moment I am doing some embedded programming for IoT.
My understanding is also that what's needed is unit tests verifying that func_reconstruct_closure() works correctly, i.e. a few examples covering the different kinds of functions it can encounter.
To add to this, re-adding the code is not sufficient on its own; my network also fails during recurrent-dropout reconstruction:
~/keras/keras/layers/recurrent.py in _generate_dropout_mask(self, inputs, training)
1588 import code; code.interact(local=dict(globals(), **locals()))
1589
-> 1590 ones = K.ones_like(K.squeeze(inputs[:, 0:1, :], axis=1))
1591
1592 def dropped_inputs():
TypeError: list indices must be integers, not tuple
The inputs causing the problem:
Python 3.4.3 (default, Nov 17 2016, 01:08:31)
[GCC 4.8.4] on linux
Type "help", "copyright", "credits" or "license" for more information.
(InteractiveConsole)
In : inputs
Out[2]:
[<tf.Tensor 'lambda_2/strided_slice:0' shape=(32, 1, 100) dtype=float32>,
<tf.Tensor 'encoder_0/while/Exit_2:0' shape=(32, 150) dtype=float32>,
<tf.Tensor 'encoder_0/while/Exit_3:0' shape=(32, 150) dtype=float32>]
(Removing recurrent dropout does, however, allow the model to load.)
Seeing this on 2.0.8. Is there a workaround?
@dvaldivia This PR should provide working model loads with Lambda layers.
@soaxelbrooke the change did work for me
Got the same error...