Models: Object Detection API: How to use inference graphs on Tensorflow Inference Interface? (Android)

Created on 12 Jul 2017 · 6 comments · Source: tensorflow/models

System information

  • What is the top-level directory of the model you are using: Object Detection API
  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): Yes
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 16.04
  • TensorFlow installed from (source or binary): Source
  • TensorFlow version (use command below): 1.1
  • Bazel version (if compiling from source): 0.5.2
  • CUDA/cuDNN version: 8.0 / 5
  • GPU model and memory: Tesla K80, 12GB

Describe the problem

I am trying to use the API's models in an Android application, via the TensorFlow Inference Interface (following the same procedure as these provided examples). In the mobile app, I modified the image preprocessing phase to add an equivalent of the "KeepAspectRatioResizer" method, and I changed the inputs to match the shape [1,-1,-1,3] (uint8).
Then, when I try to run the model, it fails with a No OpKernel was registered to support Op error that varies depending on the model.
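
For reference, the feed/fetch part of my app looks roughly like this (a trimmed sketch assuming the feed/run/fetch API of org.tensorflow.contrib.android.TensorFlowInferenceInterface from newer TF builds; the asset name, width/height, and MAX_RESULTS are placeholders):

import org.tensorflow.contrib.android.TensorFlowInferenceInterface;

// Load the frozen graph from the APK assets (inside an Activity, so getAssets() works).
TensorFlowInferenceInterface tf = new TensorFlowInferenceInterface(
    getAssets(), "file:///android_asset/frozen_inference_graph_optimized.pb");

// RGB bytes of the resized camera frame, row-major.
byte[] byteValues = new byte[height * width * 3];
// ... fill byteValues from the bitmap ...

// uint8 input of shape [1, H, W, 3], matching the image_tensor placeholder.
tf.feed("image_tensor", byteValues, 1, height, width, 3);
tf.run(new String[] {
    "detection_boxes", "detection_scores", "detection_classes", "num_detections"});

float[] boxes = new float[MAX_RESULTS * 4];
float[] scores = new float[MAX_RESULTS];
float[] classes = new float[MAX_RESULTS];
float[] num = new float[1];
tf.fetch("detection_boxes", boxes);
tf.fetch("detection_scores", scores);
tf.fetch("detection_classes", classes);
tf.fetch("num_detections", num);
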
What would be the right procedure to follow to avoid these errors?

What I've tried

Using the "frozen_inference_graph.pb" (in _faster_rcnn_resnet101_coco_, from the ZOO), I got the No OpKernel was registered to support Op 'Round' with these attrs. error.

From there, I tried to remove the unused operators using:

bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
   --in_graph="frozen_inference_graph.pb" \
   --out_graph="frozen_inference_graph_optimized.pb" \
   --inputs="image_tensor" \
   --outputs="detection_boxes,detection_scores,detection_classes,num_detections" \
   --transforms='
strip_unused_nodes(type=float, shape="1,-1,-1,3")
fold_constants(ignore_errors=true)
fold_batch_norms
fold_old_batch_norms
'

But I still got the same error (No OpKernel was registered to support Op 'Round' with these attrs.), so I tried to force-delete the offending nodes by renaming them:

[...] \
   --transforms='
rename_op(old_op_name=Round, new_op_name=Identity)
strip_unused_nodes
fold_constants(ignore_errors=true)
fold_batch_norms
fold_old_batch_norms
'

and this seems to work (I also had to fix the Switch issue using this method).

I am now trying to do the same with the faster_rcnn_inception_resnet_v2_atrous_coco model (still from the ZOO), but it produces a new error (No OpKernel was registered to support Op 'FloorMod' with these attrs.), which makes me think I am doing something wrong.
Any hint?

All 6 comments

You're not doing anything wrong; many ops are simply missing from the mobile TensorFlow libraries as a consequence of needing to keep the build size low. Also, if you end up using the graph transform tool to remove ops, you should test the new graph on your machine to make sure it's still capable of inference.
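
For example, a quick sanity check with the TensorFlow Java API could look like the sketch below (it assumes a recent libtensorflow jar with the typed Tensor/UInt8 API; the 300x300 dummy image is only there to exercise the kernels):

import java.nio.ByteBuffer;
import java.nio.file.Files;
import java.nio.file.Paths;
import org.tensorflow.Graph;
import org.tensorflow.Session;
import org.tensorflow.Tensor;
import org.tensorflow.types.UInt8;

public class GraphCheck {
  public static void main(String[] args) throws Exception {
    byte[] graphDef = Files.readAllBytes(Paths.get("frozen_inference_graph_optimized.pb"));
    try (Graph g = new Graph()) {
      g.importGraphDef(graphDef);
      // Dummy uint8 image of shape [1, 300, 300, 3]; any size works for a smoke test.
      try (Session s = new Session(g);
           Tensor<UInt8> image = Tensor.create(
               UInt8.class, new long[] {1, 300, 300, 3},
               ByteBuffer.wrap(new byte[300 * 300 * 3]));
           Tensor<?> boxes = s.runner()
               .feed("image_tensor", image)
               .fetch("detection_boxes")
               .run()
               .get(0)) {
        System.out.println("detection_boxes shape: "
            + java.util.Arrays.toString(boxes.shape()));
      }
    }
  }
}
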

Perhaps a process for adding ops should be documented in the Makefile?

This question is better asked on StackOverflow since it is not a bug or feature request. There is also a larger community that reads questions there.

Also note:

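// Op registration for Round (from tensorflow/core/ops/math_ops.cc); the UNARY macro below lists the supported types: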
REGISTER_OP("Round").UNARY().Doc(R"doc(
Rounds the values of a tensor to the nearest integer, element-wise.

Rounds half to even.  Also known as bankers rounding. If you want to round
according to the current system rounding mode use std::cint.
)doc");
// Declares cwise unary operations signature: 't -> 't
#define UNARY()                                                              \
  Input("x: T")                                                              \
      .Output("y: T")                                                        \
      .Attr("T: {half, float, double, int32, int64, complex64, complex128}") \
      .SetShapeFn(shape_inference::UnchangedShape)

Hey guys! I've been having the same problem trying to make this work on Android, and I've used some of the solutions from above, but none seem to work. Here are some of the errors I get back from the various methods of trying to solve this problem. Any help would be appreciated!

The models work when I run them in Python but not when I run them on Android.

I'm using the SSD_Mobilenet model from the TensorFlow Object Detection API.

This is from the optimize_for_inference script:
NodeDef expected inputs '' do not match 1 inputs specified; Op<name=Const; signature= -> output:dtype; attr=value:tensor; attr=dtype:type>; NodeDef: Preprocessor/map/while/add/y = Const[dtype=DT_INT32, value=Tensor<...>]

This is from renaming Switch to Identity:
Not a valid TensorFlow Graph serialization: Node 'Preprocessor/map/while/add/y': Connecting to invalid output 1 of source node Preprocessor/map/while/Switch which has 1 outputs

This last one loads the model but crashes when processing the inputs:
java.lang.IllegalArgumentException: No OpKernel was registered to support Op 'Switch' with these attrs. Registered devices: [CPU], Registered kernels:
device='GPU'; T in [DT_STRING]
device='GPU'; T in [DT_BOOL]
device='GPU'; T in [DT_INT32]
device='GPU'; T in [DT_FLOAT]
device='CPU'; T in [DT_FLOAT]
device='CPU'; T in [DT_INT32]

                                                             [[Node: Postprocessor/BatchMultiClassNonMaxSuppression/PadOrClipBoxList/cond/Switch = Switch[T=DT_BOOL](Postprocessor/BatchMultiClassNonMaxSuppression/PadOrClipBoxList/Greater, Postprocessor/BatchMultiClassNonMaxSuppression/PadOrClipBoxList/Greater)]]

Any ideas, guys? I've also tried to recompile TensorFlow with the changes from https://stackoverflow.com/questions/40855271/no-opkernel-was-registered-to-support-op-switch-with-these-attrs-on-ios/43627334#43627334 but bazel crashes while compiling :(

I also ran into this problem!

How can I add the Round operator when I rebuild TensorFlow?
Should I add a file to the bazel BUILD file under tensorflow/core/kernels/ for the Android section?
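
My guess (untested): for the Makefile-based Android build, the missing kernel's source file would be appended to tensorflow/contrib/makefile/tf_op_files.txt before rebuilding, e.g. by adding the line:

tensorflow/core/kernels/cwise_op_round.cc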

As far as I know, inference with this model on Android is very slow, as it is not optimized for GPU or DSP processing; TensorFlow will just use whatever CPU processing capability the device has.
Correct me if I'm wrong?
