Models: Default outputs (detection_masks/detection_multiclass_scores/detection_features) of saved model cannot be disabled

Created on 3 Dec 2019 · 2 comments · Source: tensorflow/models

Please go to Stack Overflow for help and support:

http://stackoverflow.com/questions/tagged/tensorflow

Also, please understand that many of the models included in this repository are experimental and research-style code. If you open a GitHub issue, here is our policy:

  1. It must be a bug, a feature request, or a significant problem with documentation (for small docs fixes please send a PR instead).
  2. The form below must be filled out.

Here's why we have that policy: TensorFlow developers respond to issues. We want to focus on work that benefits the whole community, e.g., fixing bugs and adding features. Support only helps individuals. GitHub also notifies thousands of people when issues are filed. We want them to see you communicating an interesting problem, rather than being redirected to Stack Overflow.


System information

  • What is the top-level directory of the model you are using: models/research/object_detection
  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): no
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04
  • TensorFlow installed from (source or binary): source
  • TensorFlow version (use command below): 1.13
  • Bazel version (if compiling from source):
  • CUDA/cuDNN version: CUDA 10
  • GPU model and memory: 16 GB
  • Exact command to reproduce:

You can collect some of this information using our environment capture script:

https://github.com/tensorflow/tensorflow/tree/master/tools/tf_env_collect.sh

You can obtain the TensorFlow version with

python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"

Describe the problem

When I run export_inference_graph.py to produce a saved model and then start a TensorFlow Serving service, the default outputs listed below contain far too much data, especially detection_features: a test image smaller than 30 KB yields a detection result of more than 100 MB. The comments say that detection_masks, detection_multiclass_scores, and detection_features are optional, but I have found no place to disable them. (One way to check which outputs an export actually exposes is sketched after the list.)

detection_classes
raw_detection_boxes
detection_boxes
detection_scores
raw_detection_scores
num_detections
detection_multiclass_scores
detection_features
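
To double-check which tensors a given export exposes, the serving signature can be inspected directly. A minimal sketch, assuming a TF 2.x-style SavedModel and a hypothetical export path:

import tensorflow as tf

# List every output tensor of the serving signature, including the large
# optional ones such as detection_features. The path is a placeholder.
loaded = tf.saved_model.load("exported/saved_model")
serve_fn = loaded.signatures["serving_default"]
for name, tensor in serve_fn.structured_outputs.items():
    print(name, tensor.shape, tensor.dtype)

For a TF 1.x export, saved_model_cli show --dir <export_dir> --all reports the same signature information.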

Source code / logs

Include any logs or source code that would be helpful to diagnose the problem. If including tracebacks, please include the full traceback. Large logs and files should be attached. Try to provide a reproducible test case that is the bare minimum necessary to generate the problem.

Labels: research, bug

All 2 comments

Because of this, the saved model can't be deployed online (on ML Engine, for example), since the prediction output is too large.
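
One possible stopgap until the exporter gains a proper switch (a sketch, not part of the Object Detection API; the paths, the kept-output list, and the class name are assumptions): load the exported SavedModel in TF 2.x, wrap its serving signature in a module that drops the oversized tensors, and re-save the result for serving.

import tensorflow as tf

EXPORT_DIR = "exported/saved_model"         # hypothetical path to the export
PRUNED_DIR = "exported_pruned/saved_model"  # hypothetical output path
KEEP = ("detection_boxes", "detection_scores",
        "detection_classes", "num_detections")

loaded = tf.saved_model.load(EXPORT_DIR)
serve_fn = loaded.signatures["serving_default"]
# Recover the single input name and spec from the signature instead of
# hard-coding them.
(input_name,) = serve_fn.structured_input_signature[1].keys()
input_spec = serve_fn.structured_input_signature[1][input_name]


class PrunedDetector(tf.Module):

    def __init__(self, model, fn):
        super().__init__()
        self._model = model  # keep a reference so the variables stay alive
        self._fn = fn

    @tf.function
    def serve(self, images):
        outputs = self._fn(**{input_name: images})
        # Drop detection_features and the other oversized optional outputs.
        return {k: v for k, v in outputs.items() if k in KEEP}


pruned = PrunedDetector(loaded, serve_fn)
concrete = pruned.serve.get_concrete_function(input_spec)
tf.saved_model.save(pruned, PRUNED_DIR,
                    signatures={"serving_default": concrete})

The pruned model serves exactly the tensors in KEEP, which keeps the prediction payload small enough for hosted backends.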

@carlobambo @alexqdh @pkulzc @tombstone
So I found the source of the bug. Below is the solution for SSD detectors, but it could easily be adapted to R-CNN and other architectures.

In models/research/object_detection/meta_architectures/ssd_meta_arch.py, the SSDMetaArch class (which is used to build the exported model when we run object_detection/exporter_main_v2.py) has a self._return_raw_detections_during_predict attribute. This attribute is used correctly in the predict method, but it is ignored entirely in the postprocess method.

So the solution is to replace lines 772-785:

detection_dict = {
    fields.DetectionResultFields.detection_boxes:
        nmsed_boxes,
    fields.DetectionResultFields.detection_scores:
        nmsed_scores,
    fields.DetectionResultFields.detection_classes:
        nmsed_classes,
    fields.DetectionResultFields.num_detections:
        tf.cast(num_detections, dtype=tf.float32),
    fields.DetectionResultFields.raw_detection_boxes:
        tf.squeeze(detection_boxes, axis=2),
    fields.DetectionResultFields.raw_detection_scores:
        detection_scores_with_background
}

by:

detection_dict = {
    fields.DetectionResultFields.detection_boxes:
        nmsed_boxes,
    fields.DetectionResultFields.detection_scores:
        nmsed_scores,
    fields.DetectionResultFields.detection_classes:
        nmsed_classes,
    fields.DetectionResultFields.num_detections:
        tf.cast(num_detections, dtype=tf.float32)
}
if self._return_raw_detections_during_predict:
    detection_dict[fields.DetectionResultFields.raw_detection_boxes] = (
        tf.squeeze(detection_boxes, axis=2))
    detection_dict[fields.DetectionResultFields.raw_detection_scores] = (
        detection_scores_with_background)

Once this is done, re-run object_detection/exporter_main_v2.py to regenerate the saved_model from the checkpoints.
I tested this with the SSD_MobileNet_V2 pre-trained model from the TF2 model zoo, running TF 2.2.
With this change the output JSON shrank from 2600 KB to 16 KB (using TF Serving).
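
To verify the size reduction against a running TF Serving instance, here is a minimal client sketch over the REST API; the model name detector, the port, and the test image path are assumptions:

import base64
import json

import requests

# encoded_image_string_tensor inputs are passed as {"b64": ...}, following
# the TF Serving REST API convention for binary values.
with open("test.jpg", "rb") as f:  # hypothetical test image
    encoded = base64.b64encode(f.read()).decode("utf-8")

resp = requests.post(
    "http://localhost:8501/v1/models/detector:predict",
    data=json.dumps({"instances": [{"b64": encoded}]}),
)
prediction = resp.json()["predictions"][0]
print(sorted(prediction.keys()))   # should list only the kept outputs
print(len(resp.content), "bytes")  # overall response size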

This is the command I used to run exporter_main_v2.py:

python exporter_main_v2.py \
        --input_type encoded_image_string_tensor \
        --pipeline_config_path ${CONFIG} \
        --trained_checkpoint_dir ${CHECKPOINT} \
        --output_directory ${OUTPUT} \
        --config_override " \
            model{ \
              ssd { \
                return_raw_detections_during_predict: false \
                post_processing { \
                  batch_non_max_suppression { \
                    score_threshold: 0.5 \
                    max_detections_per_class: 20 \
                    max_total_detections: 20 \
                  } \
                } \
              } \
            }"

The postprocess method also exposes other outputs that could be made optional (e.g., anchor indices and detection_multiclass_scores); the same gating pattern would apply, as sketched below.
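
Gating those would follow the identical pattern. In the fragment below, both flag names and both source tensor names are purely hypothetical, since no such options exist in the current code:

# Hypothetical extension of the same gating pattern; the flags and the
# nmsed_* tensor names below do not exist in ssd_meta_arch.py today.
if getattr(self, "_return_multiclass_scores_during_postprocess", False):
    detection_dict[
        fields.DetectionResultFields.detection_multiclass_scores] = (
            nmsed_multiclass_scores)
if getattr(self, "_return_anchor_indices_during_postprocess", False):
    detection_dict[
        fields.DetectionResultFields.detection_anchor_indices] = (
            nmsed_anchor_indices)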
