Hello guys, I successfully trained an SSD MobileNet model on my own dataset, but now I would like to perform model quantization, and the script requires the output node of the graph.
I know the output node of SSD MobileNet is not softmax, so could anyone please help me find it?
You can use the summarize_graph tool to get possible inputs and outputs: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/graph_transforms/README.md
The outputs should be similar to what is shown in the tutorial: https://github.com/tensorflow/models/blob/master/research/object_detection/object_detection_tutorial.ipynb
detection_boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
# Each score represents the level of confidence for each of the objects.
# The score is shown on the result image, together with the class label.
detection_scores = detection_graph.get_tensor_by_name('detection_scores:0')
detection_classes = detection_graph.get_tensor_by_name('detection_classes:0')
num_detections = detection_graph.get_tensor_by_name('num_detections:0')
Not sure you need num_detections, as you might be able to handle your output with just the other three (in that case you can use the Graph Transform tool to keep only the paths to the outputs that are relevant to you; a sketch follows below).
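For reference, a minimal Python sketch of that pruning step, using the Graph Transform tool's Python binding (TF 1.x API; the file paths are placeholders and the node names are the standard Object Detection API ones):

import tensorflow as tf
from tensorflow.tools.graph_transforms import TransformGraph

# Load the frozen graph (placeholder path).
graph_def = tf.GraphDef()
with tf.gfile.GFile('frozen_inference_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

# Keep only the paths that feed the three outputs we care about.
pruned = TransformGraph(
    graph_def,
    inputs=['image_tensor'],
    outputs=['detection_boxes', 'detection_scores', 'detection_classes'],
    transforms=['strip_unused_nodes'])

with tf.gfile.GFile('pruned_graph.pb', 'wb') as f:
    f.write(pruned.SerializeToString())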
Thanks for your reply; I have one more question.
The summarize_graph tool requires Bazel to execute the command. I installed TensorFlow using pip, not from source, so I didn't install Bazel. My question is: if I install Bazel now, will I be able to run the commands below normally?
bazel build tensorflow/tools/graph_transforms:summarize_graph
bazel-bin/tensorflow/tools/graph_transforms/summarize_graph --in_graph=tensorflow_inception_graph.pb
You need to have the TensorFlow source checked out to have access to the C++ tools.
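That said, if all you need is the node names, a rough pure-Python sketch can scan a frozen GraphDef without any Bazel build (assumes the TF 1.x API of this thread's era; the path is a placeholder):

import tensorflow as tf

# Read the frozen graph.
graph_def = tf.GraphDef()
with tf.gfile.GFile('frozen_inference_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

# Placeholders are input candidates; nodes that no other node consumes
# are output candidates. Control inputs start with '^' and tensor
# inputs may carry a ':<index>' suffix, so strip both before comparing.
consumed = {i.lstrip('^').split(':')[0]
            for n in graph_def.node for i in n.input}
print('Possible inputs: ',
      [n.name for n in graph_def.node if n.op == 'Placeholder'])
print('Possible outputs:',
      [n.name for n in graph_def.node if n.name not in consumed])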
@phongnhhn92 : It seems your questions have been answered? (Thanks @AndreaPisoni )
Yeah, I figured this one out, so I will close this for now. Thanks for the support, guys!
@phongnhhn92 Hi, how did you solve your problem? I want to find the input and output node names as well.
@phongnhhn92 Hi, I also ran into this problem just now. You can solve it with:
bazel build tensorflow/tools/graph_transforms:summarize_graph
bazel-bin/tensorflow/tools/graph_transforms/summarize_graph --in_graph=tensorflow_inception_graph.pb
as described in https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/graph_transforms/README.md
Thank you.
I need the input/output names to create the .pb file. How do you find the output/input names when the .pb file is not available? It seems summarize_graph needs a .pb file.
I could only find the input/output names after converting the hdf5 file to a .pb file.
To find the input/output names in a .pb file, I followed this: https://developer.arm.com/technologies/machine-learning-on-arm/developer-material/how-to-guides/optimizing-neural-networks-for-mobile-and-embedded-devices-with-tensorflow/determine-the-names-of-input-and-output-nodes
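For anyone else coming from Keras, a minimal sketch of reading the names before (or instead of) freezing, assuming a model saved as HDF5 ('model.h5' is a placeholder):

from tensorflow import keras

# Load the HDF5 model; input/output tensor names carry a ':0' suffix.
model = keras.models.load_model('model.h5')
print('Input tensors: ', [t.name for t in model.inputs])
print('Output tensors:', [t.name for t in model.outputs])

# Node names for freezing drop the ':0' suffix.
print('Output node names:', [t.name.split(':')[0] for t in model.outputs])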
@phongnhhn92 How did you solve the problem? Could you please give me some advice? Thank you.
https://developer.arm.com/technologies/machine-learning-on-arm/developer-material/how-to-guides/optimizing-neural-networks-for-mobile-and-embedded-devices-with-tensorflow/determine-the-names-of-input-and-output-nodes
From there you can visualize your graph and easily trace the input and output node names; normally they are located at the bottom and top of the graph. A sketch of loading a graph into TensorBoard follows below.
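A minimal sketch of loading a frozen graph into TensorBoard for that kind of visual inspection (TF 1.x API; paths are placeholders):

import tensorflow as tf

# Load the frozen graph.
graph_def = tf.GraphDef()
with tf.gfile.GFile('frozen_inference_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

# Import it into a fresh graph and write that graph for TensorBoard.
with tf.Session() as sess:
    tf.import_graph_def(graph_def, name='')
    tf.summary.FileWriter('./logdir', sess.graph).close()

# Then inspect it with: tensorboard --logdir ./logdir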
If you are using Object Detection API:
https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/exporting_models.md
@hanako94 I have found the output nodes using the method you mentioned. Thank you.
@lechatthecat I'm working on converting the .pb file to a .tflite model instead of converting .ckpt to a .pb model. Thank you all the same.
Hey everyone. By running:
bazel-bin/tensorflow/tools/graph_transforms/summarize_graph --in_graph=/home/lorenzo/Downloads/ssd_mobilenet_v1_coco_2017_11_17/frozen_inference_graph.pb
I get:
Found 1 possible inputs: (name=image_tensor, type=uint8(4), shape=[?,?,?,3])
No variables spotted.
Found 4 possible outputs: (name=detection_boxes, op=Identity) (name=detection_scores, op=Identity) (name=num_detections, op=Identity) (name=detection_classes, op=Identity)
Found 6818239 (6.82M) const parameters, 0 (0) variable parameters, and 1680 control_edges
Op types used: 1878 Const, 549 Gather, 452 Minimum, 360 Maximum, 305 Reshape, 197 Sub, 184 Cast, 183 Greater, 180 Split, 180 Where, 146 Add, 135 Mul, 128 StridedSlice, 124 Shape, 114 Pack, 108 ConcatV2, 97 Squeeze, 93 Slice, 93 Unpack, 90 ZerosLike, 90 NonMaxSuppressionV2, 35 Relu6, 34 Conv2D, 28 Switch, 27 Identity, 23 Enter, 13 Tile, 13 DepthwiseConv2dNative, 13 Merge, 13 RealDiv, 12 BiasAdd, 10 Range, 9 ExpandDims, 9 TensorArrayV3, 7 NextIteration, 5 Assert, 5 TensorArrayWriteV3, 5 TensorArraySizeV3, 5 Exit, 5 TensorArrayGatherV3, 4 TensorArrayScatterV3, 4 TensorArrayReadV3, 4 Fill, 3 Transpose, 3 Equal, 2 Exp, 2 GreaterEqual, 2 Less, 2 LoopCond, 1 TopKV2, 1 All, 1 Size, 1 Sigmoid, 1 ResizeBilinear, 1 Placeholder, 1 LogicalAnd
To use with tensorflow/tools/benchmark:benchmark_model try these arguments:
bazel run tensorflow/tools/benchmark:benchmark_model -- --graph=/home/lorenzo/Downloads/ssd_mobilenet_v1_coco_2017_11_17/frozen_inference_graph.pb --show_flops --input_layer=image_tensor --input_layer_type=uint8 --input_layer_shape=-1,-1,-1,3 --output_layer=detection_boxes,detection_scores,num_detections,detection_classes
But when I try to run this line:
uff_model = uff.from_tensorflow_frozen_model("/home/lorenzo/Downloads/ssd_mobilenet_v1_coco_2017_11_17/frozen_inference_graph.pb", ['detection_boxes'])
I get:
Using output node detection_boxes
Converting to UFF graph
Warning: No conversion function registered for layer: Identity yet.
Converting as custom op Identity detection_boxes
name: "detection_boxes"
op: "Identity"
input: "Postprocessor/BatchMultiClassNonMaxSuppression/map/TensorArrayStack/TensorArrayGatherV3"
attr {
  key: "T"
  value {
    type: DT_FLOAT
  }
}
Traceback (most recent call last):
  File "tensorRT_conversion.py", line 10, in <module>
    uff_model = uff.from_tensorflow_frozen_model("/home/lorenzo/Downloads/ssd_mobilenet_v1_coco_2017_11_17/frozen_inference_graph.pb", ['detection_boxes'])
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 113, in from_tensorflow_frozen_model
    return from_tensorflow(tf_graphdef, output_nodes, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/conversion_helpers.py", line 77, in from_tensorflow
    name="main")
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py", line 74, in convert_tf2uff_graph
    uff_graph, input_replacements)
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py", line 61, in convert_tf2uff_node
    op, name, tf_node, inputs, uff_graph, tf_nodes=tf_nodes)
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py", line 31, in convert_layer
    fields = cls.parse_tf_attrs(tf_node.attr)
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py", line 201, in parse_tf_attrs
    for key, val in attrs.items()}
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py", line 201, in <dictcomp>
    for key, val in attrs.items()}
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py", line 196, in parse_tf_attr_value
    return cls.convert_tf2uff_field(code, val)
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py", line 170, in convert_tf2uff_field
    return TensorFlowToUFFConverter.convert_tf2numpy_dtype(val)
  File "/usr/local/lib/python2.7/dist-packages/uff/converters/tensorflow/converter.py", line 87, in convert_tf2numpy_dtype
    return np.dtype(dt[dtype])
TypeError: list indices must be integers, not AttrValue
What's wrong with it? Thanks!
As the error message says: TypeError: list indices must be integers, not AttrValue.
Please check that first. The traceback ends in convert_tf2numpy_dtype, which suggests the converter hit a list-valued node attribute that it cannot index with an integer; a debugging sketch follows below.
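If it helps, the AttrValue proto is a oneof, and entries whose active field is 'list' are the ones that trip the UFF dtype lookup. This hedged sketch scans a frozen graph for such attributes (TF 1.x API; placeholder path):

import tensorflow as tf

# Load the frozen graph.
graph_def = tf.GraphDef()
with tf.gfile.GFile('frozen_inference_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

# Report every node attribute whose active oneof field is 'list'.
for node in graph_def.node:
    for key, val in node.attr.items():
        if val.WhichOneof('value') == 'list':
            print(node.op, node.name, 'attr', key, 'is list-valued')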
Yeah, I figured this one out, so I will close this for now. Thanks for the support, guys!
How did you figure it out? I have the same problem.
@asimshankar @AndreaPisoni What's the best solution you guys can propose for me, given that I have already installed TensorFlow using Anaconda (pip)? Thanks in advance.
Hi guys,
I am facing the same problem when I try to convert the following model to UFF format:
http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v2_coco_2018_03_29.tar.gz
Kindly help me out. Thanks in advance.
@atishay14
If you are using bazel and toco to convert your model, you can try output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' when doing quantization. But this op is not supported by the Edge TPU, so if you want to run your quantized model on an Edge TPU it will degrade inference performance. I'm still trying to figure out how to make it fully supported by the Edge TPU.
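For comparison, a sketch of the same conversion through the Python TFLiteConverter rather than bazel/toco (assumes TF 1.13+ and a graph exported with export_tflite_ssd_graph.py; the file names, input name, and 300x300 shape are assumptions for a typical SSD MobileNet):

import tensorflow as tf

# Convert the exported TFLite-ready frozen graph to a .tflite model.
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file='tflite_graph.pb',
    input_arrays=['normalized_input_image_tensor'],
    output_arrays=['TFLite_Detection_PostProcess',
                   'TFLite_Detection_PostProcess:1',
                   'TFLite_Detection_PostProcess:2',
                   'TFLite_Detection_PostProcess:3'],
    input_shapes={'normalized_input_image_tensor': [1, 300, 300, 3]})
converter.allow_custom_ops = True  # the postprocess op is a custom op
with open('detect.tflite', 'wb') as f:
    f.write(converter.convert())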
Hi, I'm still confused: summarize_graph and import_pb_to_tensorboard.py need a .pb file to find the output node, but I need to specify the output node name to make the .pb file. It's a chicken-and-egg problem. So how can I get the .pb first without knowing the output node name?
@heibaidaolx123
You are using TensorFlow's Object Detection API, right? Use this method.
First, move to the correct directory:
$ cd tensorflow/models/research/
Then
INPUT_TYPE=image_tensor
PIPELINE_CONFIG_PATH={path to pipeline config file}
TRAINED_CKPT_PREFIX={path to model.ckpt}
EXPORT_DIR={path to folder that will be used for export}
python object_detection/export_inference_graph.py \
--input_type=${INPUT_TYPE} \
--pipeline_config_path=${PIPELINE_CONFIG_PATH} \
--trained_checkpoint_prefix=${TRAINED_CKPT_PREFIX} \
--output_directory=${EXPORT_DIR}
https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/exporting_models.md
@lechatthecat Hi, no, I'm not using the Object Detection API. What I'm asking is a general question.
Thanks anyway.