Using TensorFlow v1.11.0.
Please see here to reproduce the error: https://github.com/esdu/misc/blob/master/bug_report_lstm_freeze.ipynb
In the above notebook, I train a simple LSTM, try to freeze the graph, and load the frozen graph. Upon loading the frozen graph, I see an error:
InvalidArgumentError: Input 0 of node import/lstm/while/ReadVariableOp/Enter was passed float from import/lstm/kernel:0 incompatible with expected resource.
The lstm/kernel node was frozen into a Const, and lstm/while/ReadVariableOp/Enter cannot read it in. The same thing is observed for other variables related to the LSTM: lstm/bias and lstm/recurrent_kernel.
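For reference, a minimal sketch of the freezing flow that triggers this, written against the TF 1.x session API (via `tf.compat.v1` so it also runs on TF 2.x with the v1 Keras); the tiny model and random data are placeholders, not the notebook's exact code:

```python
import numpy as np
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# Tiny LSTM model standing in for the one in the notebook.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(5, 3)),
    tf.keras.layers.LSTM(4),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(np.random.rand(8, 5, 3), np.random.rand(8, 1), epochs=1, verbose=0)

# Freeze: every variable (lstm/kernel, lstm/bias, ...) becomes a Const node.
sess = tf.compat.v1.keras.backend.get_session()
output_node = model.output.name.split(":")[0]
frozen = tf.compat.v1.graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(), [output_node])

# Re-importing `frozen` with tf.import_graph_def is what raises the
# InvalidArgumentError above: the while-loop's Enter node still expects
# a resource, but lstm/kernel is now a float Const.
```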
Any hints on how to freeze an LSTM trained in Keras?
I have the exact same behaviour.
Same behaviour here. convert_variables_to_constants changes ReadVariableOp nodes into Identity nodes after converting the variables to constants. However, some variables are attached to Enter nodes, which are incompatible with the generated constants (e.g. they expect a resource but receive a float). Any idea how we can fix the Enter nodes?
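To see which ops are involved, one can trace the layer into a graph and list the op types. A sketch under TF 2.x tracing (the node layout differs from the TF 1.x graph in this issue, but the variable reads still show up as ReadVariableOp nodes, which is what the freezing pass rewrites):

```python
import tensorflow as tf

# Build a small LSTM model and trace it into a concrete function graph.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(5, 3)),
    tf.keras.layers.LSTM(4),
])
fn = tf.function(lambda x: model(x)).get_concrete_function(
    tf.TensorSpec([None, 5, 3], tf.float32))

# Collect the op types appearing in the traced graph; the LSTM's kernel,
# recurrent_kernel, and bias are read through ReadVariableOp nodes.
op_types = {op.type for op in fn.graph.get_operations()}
```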
Is there any hope to have a fix on this?
For information, the problem comes from the symbolic loop used in the LSTM layer. If the sequence is short enough, setting the layer's "unroll" option to True might be a workaround.
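A minimal sketch of that workaround (the shapes and layer sizes here are arbitrary examples):

```python
import numpy as np
import tensorflow as tf

# unroll=True expands the recurrence into a fixed chain of cell applications,
# so the graph contains no symbolic while-loop (and hence no Enter nodes for
# the freezing pass to break). The trade-off: the sequence length must be
# fixed, and graph size/memory grows with it.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(5, 3)),        # fixed sequence length required
    tf.keras.layers.LSTM(8, unroll=True),
    tf.keras.layers.Dense(1),
])
out = model.predict(np.random.rand(2, 5, 3), verbose=0)
```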
When you ignore the error by passing variable_names_blacklist=['lstm/kernel', 'lstm/bias', 'lstm/recurrent_kernel'], the resulting .pb file cannot be parsed in C#. CNN models are fine; however, I tried a simple GRU and an LSTM and both give the same error. Waiting for a fix on this.
Same issue here with GRU: Invalid argument: Input 0 of node gru/while/ReadVariableOp/Enter was passed float from gru/kernel:0 incompatible with expected resource.
Same here, please could someone have a look!
Did anyone find a solution for this?
Have a look at @yangw1234 's solution.
https://github.com/tensorflow/tensorflow/issues/25721
I tried @yangw1234's solution and was able to freeze the model, but I am getting a different set of errors when trying to tf.import_graph_def the frozen graph: "Node 'lstm/while/ReadVariableOp/Enter' has an _output_shapes attribute inconsistent with the GraphDef for output #0: Shapes must be equal rank, but are 2 and 0". Any clues for a workaround?
The same error occurs when using AutoGraph with an LSTM layer. Running the current master branch at commit 7c5157667006181f16efa3b70468ec1bd62cb070. Any news on how to fix this?
I have the same error
As pointed out by @gbaulieu, using the unroll=True option on the GRU layer fixes the problem for me. No need to patch or copy anything; freezing and loading the frozen graph works fine with Keras and TensorFlow 1.13.1.