Python 3.6
TensorFlow: 2.1.0rc0
Keras: 2.2.4-tf
After starting training, I get the following traceback:
File "C:\project\maskRCNN\model.py", line 349, in compile
self.keras_model.add_loss(loss)
File "C:\python36\lib\site-packages\tensorflow_core\python\keras\engine\base_layer.py", line 1081, in add_loss
self._graph_network_add_loss(symbolic_loss)
File "C:\python36\lib\site-packages\tensorflow_core\python\keras\engine\network.py", line 1484, in _graph_network_add_loss
self._insert_layers(new_layers, new_nodes)
File "C:\python36\lib\site-packages\tensorflow_core\python\keras\engine\network.py", line 1439, in _insert_layers
layer_set = set(self._layers)
File "C:\python36\lib\site-packages\tensorflow_core\python\training\tracking\data_structures.py", line 598, in __hash__
raise TypeError("unhashable type: 'ListWrapper'")
TypeError: unhashable type: 'ListWrapper'
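For context, the crash itself is easy to reproduce in plain Python: `set()` requires hashable elements, and TF 2.x's `ListWrapper` (like the built-in `list`) is unhashable, so `set(self._layers)` fails as soon as one wrapped entry ends up among the layers. A minimal stand-in with no TensorFlow involved (the `ListWrapper` class here is illustrative, not the real TF class):

```python
# Minimal pure-Python stand-in for the failure in _insert_layers:
# a list subclass inherits __hash__ = None from list, so putting an
# instance into a set raises the same TypeError seen in the traceback.
class ListWrapper(list):  # illustrative stand-in, not TF's class
    pass

layers = ["conv1", ListWrapper(["rpn"])]  # one wrapped entry is enough
try:
    layer_set = set(layers)  # mirrors: layer_set = set(self._layers)
except TypeError as err:
    message = str(err)
```

Running this sets `message` to the exact error text from the traceback above.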
Any estimates on this issue?
How are you running this with TF 2.0? Are there updates or documentation on conversion? Am I missing something??
Sorry for such an open question...
@taylormcclenny
Yes, I tried running my Mask R-CNN code with tf.keras on TF 1.14, 1.15, 2.0, and 2.1rc0.
Here is more info about this issue: https://github.com/tensorflow/tensorflow/issues/34962
The "ListWrapper" bug appeared after fixing the output layer shape: https://github.com/tensorflow/tensorflow/issues/33785
@kiflowb777 & @dankor - My understanding is that Mask R-CNN won't run on TF 2.0 out of the box. See the comments posted here since TF 2.0's release.
I've been attempting to convert this model to run on TF 2.0, but I just get endless errors. Again, I apologize for a question so much broader than your original post, but I can't find the info elsewhere. Is there somewhere else I can look for an updated Mask R-CNN that works (more or less) on TF 2.0?
It also seems to require heavy rework rather than a single run of a conversion script that renames methods. As far as I can see, @tomgross is currently working on the migration, since he has referenced this bug here.
I found the cause and the solution. This is the responsible tensorflow / keras commit: https://github.com/tensorflow/tensorflow/commit/45df90d5c2d6b125a10cb0809899c254d49412e6#diff-8eb7e20502209f082d0cb15119a50413R781
As documented, you need to wrap the loss function in an empty lambda when adding it to the model. I've added the fix to my TensorFlow 2.0 compatibility PR here:
https://github.com/matterport/Mask_RCNN/pull/1896/files#diff-312c7e001d14bbb7ce5f8978f7b04cc3R2171
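The change is mechanical: every tensor previously passed to `add_loss` directly gets wrapped in a zero-argument lambda. A pure-Python mock of the idea (the `FakeModel` class is an illustrative stand-in, not the tf.keras API):

```python
# Pure-Python mock: newer tf.keras accepts callables in add_loss and
# evaluates them later; wrapping the loss in an empty lambda defers
# evaluation past the graph-network bookkeeping that crashed above.
class FakeModel:  # illustrative stand-in for self.keras_model
    def __init__(self):
        self._losses = []

    def add_loss(self, loss):
        # mimic tf.keras: callables are stored and invoked at
        # loss-collection time rather than at add_loss time
        self._losses.append(loss)

    def total_loss(self):
        return sum(l() if callable(l) else l for l in self._losses)

model = FakeModel()
loss = 0.25  # stand-in for tf.reduce_mean(input_tensor=layer.output, keepdims=True)
model.add_loss(lambda: loss)  # the empty-lambda wrapping from the PR
```

Calling `model.total_loss()` then evaluates the deferred lambda and returns the loss value.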
I think the offending lines might be where these protected variables of keras_model are accessed directly:
self.keras_model._losses = []
self.keras_model._per_input_losses = {}
Removing those allowed me to proceed with training without setting those empty lambdas.
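The likely mechanism (my assumption, modelled on tf.keras attribute auto-tracking, not confirmed against the TF source): assigning a plain list to an attribute of a tracked object silently wraps it in a `ListWrapper`, which is what later blows up in `set(self._layers)`. A pure-Python sketch of that behaviour:

```python
# Pure-Python sketch of tf.keras auto-tracking: lists assigned to a
# tracked object's attributes get wrapped, and the wrapper (a list
# subclass) is unhashable.  Both classes are illustrative stand-ins.
class ListWrapper(list):  # stand-in for TF's unhashable ListWrapper
    pass

class Tracked:  # stand-in for a tf.keras Model with auto-tracking
    def __setattr__(self, name, value):
        if isinstance(value, list):
            value = ListWrapper(value)  # wrap on assignment, as TF tracking does
        object.__setattr__(self, name, value)

model = Tracked()
model._losses = []  # the direct assignment from Mask R-CNN's compile()
wrapped = type(model._losses).__name__
```

After the assignment, `model._losses` is no longer a plain list but a `ListWrapper`, which explains why removing the direct assignments avoids the crash.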
Removing the brackets worked for me. Change
loss = (tf.reduce_mean(input_tensor=layer.output, keepdims=True))
to
loss = tf.reduce_mean(input_tensor=layer.output, keepdims=True)
> I think the offending lines might be where these protected variables of keras_model are accessed directly:
> self.keras_model._losses = []
> self.keras_model._per_input_losses = {}
> Removing those allowed me to proceed with training without setting those empty lambdas.
When I removed these lines, I got the following error:
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 479, in _disallow_in_graph_mode
" this function with @tf.function.".format(task))
tensorflow.python.framework.errors_impl.OperatorNotAllowedInGraphError: using a `tf.Tensor` as a Python `bool` is not allowed in Graph execution. Use Eager execution or decorate this function with @tf.function.