Hi, while running
export CONFIG_FILE=~/ssdlite_mobilenet_v2_coco_2018_05_09/pipeline.config
export CHECKPOINT_PATH=~/ssdlite_mobilenet_v2_coco_2018_05_09/model.ckpt
bazel run -c opt tensorflow/contrib/lite/toco:toco -- --input_file=$OUTPUT_DIR/tflite_graph.pb --output_file=$OUTPUT_DIR/detect.tflite --input_shapes=1,300,300,3 --input_arrays=normalized_input_image_tensor --output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' --inference_type=QUANTIZED_UINT8 --mean_values=128 --std_values=128 --change_concat_input_ranges=false --allow_custom_ops
INFO: Analysed target //tensorflow/contrib/lite/toco:toco (0 packages loaded).
INFO: Found 1 target…
Target //tensorflow/contrib/lite/toco:toco up-to-date:
bazel-bin/tensorflow/contrib/lite/toco/toco
INFO: Elapsed time: 0.392s, Critical Path: 0.01s
INFO: 0 processes.
INFO: Build completed successfully, 1 total action
INFO: Running command line: bazel-bin/tensorflow/contrib/lite/toco/toco '--input_file=/home/ubuntu/tflite/tflite_graph.pb' '--output_file=/home/ubuntu/tflite/detect.tflite' '--input_shapes=1,300,300,3' '--input_arrays=normalized_input_image_tensor' '--output_arrays=TFLite_Detection_PostProcess,TFLite_Detection_PostProcess:1,TFLite_Detection_PostProcess:2,TFLite_Detection_PostProcess:3'
2018-07-19 15:32:40.725811: I tensorflow/contrib/lite/toco/import_tensorflow.cc:1048] Converting unsupported operation: TFLite_Detection_PostProcess
2018-07-19 15:32:40.742690: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 1070 operators, 1570 arrays (0 quantized)
2018-07-19 15:32:40.791000: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 1070 operators, 1570 arrays (0 quantized)
2018-07-19 15:32:40.849051: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 1: 116 operators, 310 arrays (1 quantized)
2018-07-19 15:32:40.852027: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before pre-quantization graph transformations: 116 operators, 310 arrays (1 quantized)
2018-07-19 15:32:40.853807: F tensorflow/contrib/lite/toco/tooling_util.cc:1621] Array FeatureExtractor/MobilenetV2/Conv/Relu6, which is an input to the DepthwiseConv operator producing the output array FeatureExtractor/MobilenetV2/expanded_conv/depthwise/Relu6, is lacking min/max data, which is necessary for quantization. Either target a non-quantized output format, or change the input graph to contain min/max information, or pass --default_ranges_min= and --default_ranges_max= if you do not care about the accuracy of results.
It seems like you trained an FP32 model, not an 8-bit one.
Run:
bazel run -c opt tensorflow/contrib/lite/toco:toco -- \
  --input_file=$OUTPUT_DIR/tflite_graph.pb \
  --output_file=$OUTPUT_DIR/detect.tflite \
  --input_shapes=1,300,300,3 \
  --input_arrays=normalized_input_image_tensor \
  --output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
  --mean_values=128 \
  --std_values=128 \
  --change_concat_input_ranges=false \
  --allow_custom_ops
i.e., skip the quantization flag (--inference_type=QUANTIZED_UINT8).
I am just trying to export the default checkpoint model.
That checkpoint is not quantized.
On Thu 19 Jul, 2018, 10:28 PM Ricky singh, notifications@github.com wrote:
I am just trying to export the default checkpoint model.
Thank you,
Varun.
Please consider using the float instructions in that case:
https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_on_mobile_tensorflowlite.md
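The float path in those instructions boils down to two steps: export a TFLite-compatible frozen graph, then run toco without any quantization flags. A sketch, with assumed paths you should adjust to your own checkout; the `run` wrapper only prints each command so you can inspect it before executing for real:

```shell
# Dry-run sketch of the float export path (paths are assumptions).
OUTPUT_DIR=/tmp/tflite
CONFIG_FILE=$HOME/ssdlite_mobilenet_v2_coco_2018_05_09/pipeline.config
CHECKPOINT_PATH=$HOME/ssdlite_mobilenet_v2_coco_2018_05_09/model.ckpt

# Print each command instead of executing it; delete this wrapper to run for real.
run() { echo "+ $*"; }

# Step 1: export a TFLite-compatible frozen graph from the checkpoint.
run python object_detection/export_tflite_ssd_graph.py \
  --pipeline_config_path="$CONFIG_FILE" \
  --trained_checkpoint_prefix="$CHECKPOINT_PATH" \
  --output_directory="$OUTPUT_DIR" \
  --add_postprocessing_op=true

# Step 2: convert to a FLOAT .tflite model. Note: no quantization flags.
run bazel run -c opt tensorflow/contrib/lite/toco:toco -- \
  --input_file="$OUTPUT_DIR/tflite_graph.pb" \
  --output_file="$OUTPUT_DIR/detect.tflite" \
  --input_shapes=1,300,300,3 \
  --input_arrays=normalized_input_image_tensor \
  --output_arrays='TFLite_Detection_PostProcess,TFLite_Detection_PostProcess:1,TFLite_Detection_PostProcess:2,TFLite_Detection_PostProcess:3' \
  --inference_type=FLOAT \
  --allow_custom_ops
```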
Hi, how do I use QUANTIZED_UINT8 quantization?
I used ssd_mobilenet_v1_coco_11_06_2017 or ssdlite_mobilenet_v2_coco_2018_05_09,
following https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_on_mobile_tensorflowlite.md,
and hit the same problem:
Either target a non-quantized output format, or change the input graph to contain min/max information, or pass --default_ranges_min= and --default_ranges_max= if you do not care about the accuracy of results.
Can you give me some advice?
Same problem! Did anybody find a solution?
@MohammadMoradi I solved it.
Use tensorflow r1.10 and models r1.10.
I retrained the "ssd_mobilenet_v1_quantized_300x300_coco14_sync_2018_07_18" model successfully,
following https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_on_mobile_tensorflowlite.md
@jackweiwang @MohammadMoradi Is this still an open issue?
@achowdhery The problem has been solved. I had simply been using the wrong version before.
@jackweiwang @MohammadMoradi
I have the same problem: if I use the "ssd_mobilenet_v1_coco_2018_01_28" model, it can be converted to float type but not to uint8.
I also found that several models provided in "https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md" convert successfully, such as "ssd_mobilenet_v1_quantized_coco" and "ssd_mobilenet_v1_0.75_depth_quantized_coco", while the others can't be converted.
Can you find a way to convert the "ssd_mobilenet_v2_coco" or "ssdlite_mobilenet_v2_coco" models to .tflite?
You need to add

graph_rewriter {
  quantization {
    ...
    ...
  }
}

to the ".config" file and retrain the model.
Details can be found in the following config files:
ssd_mobilenet_v1_quantized_coco.config and ssd_mobilenet_v1_0.75_depth_quantized_coco.config
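For context, in those sample configs the graph_rewriter block sits at the top level of the pipeline config, alongside the existing sections. A minimal sketch, assuming the Object Detection API's quantization proto; the field values here mirror a later comment in this thread and the shipped quantized configs:

```
model { ... }            # existing sections stay unchanged
train_config { ... }
eval_config { ... }

# Added at the top level, typically at the end of the file:
graph_rewriter {
  quantization {
    delay: 1800          # float-training steps before fake quantization starts
    weight_bits: 8
    activation_bits: 8
  }
}
```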
@jackweiwang
Hi, thank you for your reply!
I still have a question: is it for the same reason that I can't convert the "ssd_mobilenet_v2_coco" and "ssdlite_mobilenet_v2_coco" models to .tflite? I get an error when I use the "object_detection/export_tflite_ssd_graph.py" script to convert the two models:
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1334, in _do_call
return fn(*args)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1319, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1407, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [1,1,256,546] rhs shape= [1,1,1280,546]
[[{{node save/Assign_22}} = Assign[T=DT_FLOAT, _class=["loc:@BoxPredictor_1/ClassPredictor/weights"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](BoxPredictor_1/ClassPredictor/weights, save/RestoreV2:22)]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 1546, in restore
{self.saver_def.filename_tensor_name: save_path})
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 929, in run
run_metadata_ptr)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1152, in _run
feed_dict_tensor, options, run_metadata)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1328, in _do_run
run_metadata)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 1348, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Assign requires shapes of both tensors to match. lhs shape= [1,1,256,546] rhs shape= [1,1,1280,546]
[[node save/Assign_22 (defined at /home/models/research/object_detection/export_tflite_ssd_graph_lib.py:255) = Assign[T=DT_FLOAT, _class=["loc:@BoxPredictor_1/ClassPredictor/weights"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](BoxPredictor_1/ClassPredictor/weights, save/RestoreV2:22)]]
Caused by op 'save/Assign_22', defined at:
File "object_detection/export_tflite_ssd_graph.py", line 137, in <module>
tf.app.run(main)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/platform/app.py", line 125, in run
_sys.exit(main(argv))
File "object_detection/export_tflite_ssd_graph.py", line 133, in main
FLAGS.max_classes_per_detection)
File "/home/models/research/object_detection/export_tflite_ssd_graph_lib.py", line 255, in export_tflite_graph
saver = tf.train.Saver(*saver_kwargs)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 1102, in __init__
self.build()
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 1114, in build
self._build(self._filename, build_save=True, build_restore=True)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 1151, in _build
build_save=build_save, build_restore=build_restore)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 795, in _build_internal
restore_sequentially, reshape)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 428, in _AddRestoreOps
assign_ops.append(saveable.restore(saveable_tensors, shapes))
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/training/saver.py", line 119, in restore
self.op.get_shape().is_fully_defined())
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/state_ops.py", line 221, in assign
validate_shape=validate_shape)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/gen_state_ops.py", line 61, in assign
use_locking=use_locking, name=name)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/op_def_library.py", line 787, in _apply_op_helper
op_def=op_def)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/util/deprecation.py", line 488, in new_func
return func(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 3274, in create_op
op_def=op_def)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 1770, in __init__
self._traceback = tf_stack.extract_stack()
InvalidArgumentError (see above for traceback): Assign requires shapes of both tensors to match. lhs shape= [1,1,256,546] rhs shape= [1,1,1280,546]
[[node save/Assign_22 (defined at /home/models/research/object_detection/export_tflite_ssd_graph_lib.py:255) = Assign[T=DT_FLOAT, _class=["loc:@BoxPredictor_1/ClassPredictor/weights"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"](BoxPredictor_1/ClassPredictor/weights, save/RestoreV2:22)]]
How can I fix that?
@jackweiwang
I converted the "ssd_mobilenet_v2_coco" and "ssdlite_mobilenet_v2_coco" models to .tflite (float type) successfully using another method, mentioned in "https://github.com/freedomtan/tensorflow/blob/object_detection_tflite_object_dtection_python/tensorflow/contrib/lite/examples/python/object_detection_ssd_coco.md".
My other question is: can I quantize the "ssdlite_mobilenet_v2_coco" model with the tensorflow quantize tools, and then convert that quantized model to .tflite? Retraining a model would take too much time and effort.
@jackweiwang
I had tried the method mentioned in "https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_on_mobile_tensorflowlite.md",
but I got the error described above, so I tried "https://github.com/freedomtan/tensorflow/blob/object_detection_tflite_object_dtection_python/tensorflow/contrib/lite/examples/python/object_detection_ssd_coco.md" instead.
I found that only the "ssd_mobilenet_v1_quantized_coco" and "ssd_mobilenet_v1_0.75_depth_quantized_coco" models convert successfully with the "running_on_mobile_tensorflowlite.md" method;
the other models provided in "https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md" couldn't be converted.
Got the same problem.
I am trying to quantize "ssd_mobilenet_v2_coco" using the "https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/running_on_mobile_tensorflowlite.md" method, and I get the same error; see below:
2018-11-29 14:05:37.007198: F tensorflow/lite/toco/tooling_util.cc:1698] Array FeatureExtractor/MobilenetV2/Conv/Relu6, which is an input to the DepthwiseConv operator producing the output array FeatureExtractor/MobilenetV2/expanded_conv/depthwise/Relu6, is lacking min/max data, which is necessary for quantization. If accuracy matters, either target a non-quantized output format, or run quantized training with your model from a floating point checkpoint to change the input graph to contain min/max information. If you don't care about accuracy, you can pass --default_ranges_min= and --default_ranges_max= for easy experimentation.
./cheng_quantize_tflite_ssd.sh: line 27: 22993 Aborted
How can I solve it without retraining the model with
graph_rewriter {
  quantization {
    ...
    ...
  }
}
@varun19299 Do you mean it's not possible to convert a float model to a quantized (byte/uint8) one?
@NorwayLobster Hi, you can set --default_ranges_min=0 and --default_ranges_max=1; however, it will affect the accuracy of your results. Otherwise, you need to add the following
graph_rewriter {
  quantization {
    delay: 1800
    activation_bits: 8
    weight_bits: 8
  }
}
to your config file.
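If you only want to experiment without retraining, the error message's escape hatch can be spelled out like this. The range is a guess, not a calibrated value (the comment above suggests [0, 1]; [0, 6] is another common guess since it matches ReLU6 activations), so expect degraded accuracy either way. As before, the `run` wrapper only prints the command:

```shell
# Dry-run sketch of a conversion that fills missing min/max info with
# --default_ranges_min/max. For experimentation only; accuracy will suffer.
OUTPUT_DIR=/tmp/tflite

run() { echo "+ $*"; }   # print instead of execute; remove to run for real

run bazel run -c opt tensorflow/contrib/lite/toco:toco -- \
  --input_file="$OUTPUT_DIR/tflite_graph.pb" \
  --output_file="$OUTPUT_DIR/detect.tflite" \
  --input_shapes=1,300,300,3 \
  --input_arrays=normalized_input_image_tensor \
  --output_arrays='TFLite_Detection_PostProcess,TFLite_Detection_PostProcess:1,TFLite_Detection_PostProcess:2,TFLite_Detection_PostProcess:3' \
  --inference_type=QUANTIZED_UINT8 \
  --mean_values=128 \
  --std_values=128 \
  --default_ranges_min=0 \
  --default_ranges_max=6 \
  --allow_custom_ops
```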
@burui11087 Where in the pipeline.config file should I insert graph_rewriter { quantization { delay: 1800 activation_bits: 8 weight_bits: 8 } }?
It goes at the end of the pipeline.config file. However, this is only suitable for full training with TensorFlow 1.x.