When I set first_stage_only = true in my config file, training works fine. However, when I try to use export_inference_graph.py to export the trained model for inference, it fails:
Traceback (most recent call last):
File "export_inference_graph.py", line 101, in
tf.app.run()
File "/home/linhb/anaconda2/envs/tensorflow/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "export_inference_graph.py", line 97, in main
FLAGS.export_as_saved_model)
File "/home/linhb/horse/models/object_detection/exporter.py", line 339, in export_inference_graph
export_as_saved_model)
File "/home/linhb/horse/models/object_detection/exporter.py", line 310, in _export_inference_graph
outputs = _add_output_tensor_nodes(postprocessed_tensors)
File "/home/linhb/horse/models/object_detection/exporter.py", line 184, in _add_output_tensor_nodes
classes = postprocessed_tensors.get('detection_classes') + label_id_offset
TypeError: unsupported operand type(s) for +: 'NoneType' and 'int'
That's because the detection_classes output is None when using first_stage_only. I changed the code in exporter.py, line 184, from,
classes = postprocessed_tensors.get('detection_classes') + label_id_offset
to,
classes = postprocessed_tensors.get('detection_classes')
if classes:
    classes += label_id_offset
else:
    classes = tf.constant(1, tf.int32, scores.shape)
After that change, the export runs correctly.
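For anyone skimming: the failure mode is just dict.get() returning None for a missing key, after which None + int raises the TypeError in the traceback. A minimal, framework-free sketch of the guard pattern (names and the placeholder fallback are illustrative, not the actual exporter code):

```python
label_id_offset = 1

def add_offset(postprocessed_tensors):
    # dict.get() returns None when 'detection_classes' is absent,
    # which is exactly what happens with first_stage_only.
    classes = postprocessed_tensors.get('detection_classes')
    if classes is not None:
        classes = [c + label_id_offset for c in classes]
    else:
        classes = [1]  # placeholder, analogous to the tf.constant fallback
    return classes

print(add_offset({'detection_classes': [0, 2]}))  # → [1, 3]
print(add_offset({}))                             # → [1]
```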
Hey guys,
Any news on this subject? Personally, I also face issues when I try to run eval.py.
I get the following error
Traceback (most recent call last):
File "object_detection/eval.py", line 161, in <module>
tf.app.run()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "object_detection/eval.py", line 157, in main
FLAGS.checkpoint_dir, FLAGS.eval_dir)
File "/home/ubuntu/models-tf/object_detection/evaluator.py", line 132, in evaluate
ignore_groundtruth=eval_config.ignore_groundtruth)
File "/home/ubuntu/models-tf/object_detection/evaluator.py", line 70, in _extract_prediction_tensors
tf.squeeze(detections['detection_classes'], axis=0) +
KeyError: 'detection_classes'
I made it work with the same tweak @shzygmyx used.
Thanks for the above fix. I had to make one small change because I couldn't use a tensor as a python boolean. Here's what I replaced line 184 with (in case anyone else encounters this issue):
if classes is not None:
    classes += label_id_offset
else:
    classes = tf.constant(1, tf.int32, scores.shape)
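The reason the original if classes: version breaks is that a TF graph tensor refuses to be used as a Python boolean at graph-construction time. A small stand-in class (not TensorFlow itself, just mimicking its behavior) shows the difference between the two checks:

```python
class SymbolicTensor:
    """Stand-in for a TF graph tensor: its truth value is undefined."""
    def __bool__(self):
        raise TypeError(
            "Using a tf.Tensor as a Python bool is not allowed.")

classes = SymbolicTensor()

# 'if classes:' calls __bool__ and raises on a symbolic tensor...
try:
    if classes:
        pass
except TypeError as e:
    print("truthiness check failed:", e)

# ...while 'is not None' is a plain identity test and never calls __bool__.
if classes is not None:
    print("identity check is safe")
```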
@slsd123 @tombstone @shzygmyx after applying your fix I am still running into a new error:
File "export_inference_graph.py", line 119, in
tf.app.run()
File "/opt/conda/lib/python3.5/site-packages/tensorflow/python/platform/app.py", line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "export_inference_graph.py", line 115, in main
FLAGS.output_directory, input_shape)
File "/code/dent_obj_detection/TF/object_detection/exporter.py", line 434, in export_inference_graph
input_shape, optimize_graph, output_collection_name)
File "/code/dent_obj_detection/TF/object_detection/exporter.py", line 362, in _export_inference_graph
output_collection_name)
File "/code/dent_obj_detection/TF/object_detection/exporter.py", line 240, in _add_output_tensor_nodes
classes = tf.constant(1, tf.int32, scores.shape)
File "/opt/conda/lib/python3.5/site-packages/tensorflow/python/framework/constant_op.py", line 208, in constant
value, dtype=dtype, shape=shape, verify_shape=verify_shape))
File "/opt/conda/lib/python3.5/site-packages/tensorflow/python/framework/tensor_util.py", line 380, in make_tensor_proto
if shape is not None and np.prod(shape, dtype=np.int64) == 0:
File "/opt/conda/lib/python3.5/site-packages/numpy/core/fromnumeric.py", line 2518, in prod
out=out, **kwargs)
File "/opt/conda/lib/python3.5/site-packages/numpy/core/_methods.py", line 35, in _prod
return umr_prod(a, axis, dtype, out, keepdims)
TypeError: __int__ returned non-int (type NoneType)
Any idea how to fix this? I think the error is related to the TensorFlow version.
Update:
The above error is fixed by providing values for the input_shape argument in export_inference_graph.py
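For reference, this is the kind of invocation that works once input_shape is given; all paths and the exact shape below are placeholders for your own setup, so treat this as a sketch rather than a copy-paste command:

```shell
# Hypothetical paths; the point is supplying a fully specified --input_shape
# so the exported graph contains no unknown dimensions.
python export_inference_graph.py \
    --input_type image_tensor \
    --pipeline_config_path path/to/pipeline.config \
    --trained_checkpoint_prefix path/to/model.ckpt \
    --output_directory path/to/exported \
    --input_shape 1,300,300,3
```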
Is this still an issue or can I close?
@skye yes, the issue still persists! See the update in my previous comment: even after the fix mentioned above, it works only if we pass a value for input_shape.
The problem is that your constant needs to have a fully known shape at creation time. The way around it is to build the tensor dynamically with tf.tile:
if classes:
    classes += label_id_offset
else:
    one = tf.constant(1, dtype=tf.int32, shape=(1, 1), name='classes_dummy')
    classes = tf.tile(one, tf.shape(scores))
I am going to submit a pull request if that works for you as well.
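The underlying failure can be reproduced without TensorFlow. The traceback bottoms out converting each dimension to an int, and an unknown dimension (an object whose __int__ yields None, which is how tf.Dimension(None) behaves) breaks that. The stand-in class below mimics this; the NumPy tiling at the end only illustrates the run-time-shape idea behind the tf.tile workaround:

```python
import numpy as np

class UnknownDim:
    """Mimics tf.Dimension(None): converting it to int yields None."""
    def __int__(self):
        return None

# A static shape with an unknown batch dimension, as in scores.shape
# when no input_shape is given at export time.
static_shape = (UnknownDim(), 100)

# tf.constant(1, tf.int32, scores.shape) eventually needs int(dim) for
# every dimension, which blows up on the unknown one:
try:
    int(static_shape[0])
except TypeError as e:
    print(e)  # e.g. "__int__ returned non-int (type NoneType)"

# The tf.tile workaround sidesteps this by reading the shape at run time,
# when it is fully known. The same idea in NumPy terms:
scores = np.zeros((3, 100))
classes = np.tile(np.ones((1, 1), dtype=np.int32), scores.shape)
print(classes.shape)  # (3, 100)
```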
Automatically closing due to lack of recent activity. Please update the issue when new information becomes available, and we will reopen the issue. Thanks!
Thanks for the above fix. I had to make one small change because I couldn't use a tensor as a python boolean. Here's what I replaced line 184 with (in case anyone else encounters this issue):
if classes is not None:
    classes += label_id_offset
else:
    classes = tf.constant(1, tf.int32, scores.shape)
This fix is the correct one; if you use if classes, you will encounter problems when exporting the whole detection graph.