Tfjs: Tensorflow Object Detection API Model - Unsupported Ops in ssd_mobilenet_v2 model

Created on 13 Apr 2018 · 43 comments · Source: tensorflow/tfjs

To get help from the community, check out our Google group.

TensorFlow.js version

@tensorflow/tfjs-converter: 0.1.1
(this was installed through pip)

Browser version

N/A (issue is related to tensorflowjs_converter)

Describe the problem or feature request

Unsupported Ops

I tried running the tensorflowjs_converter as follows:
tensorflowjs_converter \
    --input_format=tf_saved_model \
    --output_node_names='detection_boxes,detection_classes,detection_scores,num_detections' \
    --saved_model_tags=serve \
    ~/workspace/model/saved_model \
    ~/workspace/model/web_model

Once it is complete, I get the following output of unsupported ops:
All, Assert, Enter, Exit, LoopCond, Merge, NextIteration, NonMaxSuppressionV2, Rank, ResizeBilinear, Size, Split, StridedSlice, Switch, TensorArrayGatherV3, TensorArrayReadV3, TensorArrayScatterV3, TensorArraySizeV3, TensorArrayV3, TensorArrayWriteV3, TopKV2, Unpack, Where
This is an ssd_mobilenet_v2_coco model trained through the TensorFlow Object Detection API. It performs well in TensorFlow, but it contains ops not supported by TensorFlow.js. I have tried several other models from the TensorFlow model zoo, and they all have similar unsupported ops.

Code to reproduce the bug / link to feature request

I found this GIST describing the exact issue: Convert Tensorflow SavedModel to WebModel for TF-JS
From this GIST I got the following:

# Download the model files.
wget http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v2_coco_2018_03_29.tar.gz

# Untar the model.
tar -xzvf ssd_mobilenet_v2_coco_2018_03_29.tar.gz

pip install tensorflow-gpu # Or just tensorflow for CPU
pip install tensorflowjs

saved_model_cli show --dir ssd_mobilenet_v2_coco_2018_03_29/saved_model --tag_set serve --signature_def serving_default

tensorflowjs_converter \
    --input_format=tf_saved_model \
    --output_node_names='detection_boxes,detection_scores,num_detections,detection_classes' \
    --saved_model_tags=serve \
    ./ssd_mobilenet_v2_coco_2018_03_29/saved_model \
    ./ssd_mobilenet_v2_coco_2018_03_29/web_model

Thanks!

converter core

Most helpful comment

We are actively working on this. I expect we'll get this done in 1-2 weeks. Stay tuned.

All 43 comments

Thanks for the really valuable feedback. This will help us prioritize adding new ops.
Thanks to @pyu10055's amazing work, we are almost done adding support for control-flow ops, so we are getting close!

Q: Do you know how many of these are custom ops? Custom ops are ops whose names are not in this ops.pbtxt. Currently, we don't support custom ops, since they ship with their own Python code and are therefore not portable.

cc our amazing contributors if there is interest for implementing some of these missing ops in tfjs-core, assuming the ops are in this list:
@ManrajGrover @Lewuathe @jgartman

Thanks for the feedback. Some ops (resizeBilinear, where, split) appear to be implemented in tfjs-core already. The remaining ops missing from tfjs-core are:

  • All
  • Assert
  • Enter
  • Exit
  • LoopCond
  • Merge
  • NextIteration
  • NonMaxSuppressionV2
  • Rank
  • Size
  • StridedSlice
  • Switch
  • TensorArrayGatherV3
  • TensorArrayReadV3
  • TensorArrayScatterV3
  • TensorArraySizeV3
  • TensorArrayV3
  • TensorArrayWriteV3
  • TopKV2
  • Unpack

I went through the provided list of ops, and compared it to the list of unsupported ops output by tensorflowjs_converter for this ssd_mobilenet_v2 model, and it appears that none of the ops used are custom.

rank and size look like they would be pretty trivial to implement, since those are already attributes of the Tensor class.
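To illustrate why those two ops are trivial, here is a minimal plain-JavaScript sketch (not the actual tfjs-core implementation) showing that Rank and Size are pure functions of a tensor's shape:

```javascript
// Illustrative only: in tfjs-core these correspond to the `rank` and
// `size` properties already present on the Tensor class.
function rank(shape) {
  return shape.length;                               // number of dimensions
}

function size(shape) {
  return shape.reduce((acc, dim) => acc * dim, 1);   // product of all dims
}

console.log(rank([1, 300, 300, 3]));  // 4
console.log(size([1, 300, 300, 3]));  // 270000
```

Since neither op touches the tensor's data, a kernel implementation reduces to reading metadata the runtime already tracks.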

@threedayoldcoffee it is surprising to see the Assert op. Can you make sure the output nodes do not contain nodes from the training graph? Thanks.

An idea: what's the downside of implementing it in tfjs but emitting a console.warn() message that it affects performance? Basically, being a little more flexible when encountering an Assert op.

Also, Assert might be good to have if you are debugging why the activations in Python and tfjs don't match.

Is there a way to convert an object detection API model to a tfjs model with the current unsupported ops?

I have a similar issue converting this MNIST TF tutorial. I saved it with the SavedModel builder, and while trying to convert it to tfjs I get an error about unsupported ops: ScalarSummary, SparseSoftmaxCrossEntropyWithLogits.
I've managed to change SparseSoftmaxCrossEntropyWithLogits to tf.losses.softmax_cross_entropy, and now the converter complains about SoftmaxCrossEntropyWithLogits, StopGradient, and ScalarSummary. That's odd, as tfjs already supports softmax_cross_entropy -> tfjs docs.
Should I start another issue about these ops or leave it here?
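For reference, the math behind SparseSoftmaxCrossEntropyWithLogits is small; here is a numerically stable plain-JavaScript sketch of the single-example case (illustrative only, not the tfjs kernel):

```javascript
// Sparse softmax cross-entropy with logits for one example.
// `logits` is an array of raw class scores, `label` is the index of the
// true class. Uses the log-sum-exp trick for numerical stability.
function sparseSoftmaxCrossEntropy(logits, label) {
  const max = Math.max(...logits);
  const logSumExp = max + Math.log(
    logits.reduce((acc, l) => acc + Math.exp(l - max), 0));
  return logSumExp - logits[label];  // equals -log(softmax(logits)[label])
}
```

For example, sparseSoftmaxCrossEntropy([2, 1, 0], 0) evaluates to about 0.408, and with uniform logits the loss is log of the number of classes.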

It would be really cool to have support for TF Object Detection API models. As of now, I installed tfjs-converter (through pip) and got the following unsupported ops (including some that were already reported earlier) for an ssdlite mobilenet v2:

  • TensorArrayWriteV3
  • TensorArrayV3
  • TensorArrayGatherV3
  • TensorArrayReadV3
  • TopKV2
  • Where
  • All
  • Rank
  • NonMaxSuppressionV2
  • Assert
  • TensorArraySizeV3
  • Size
  • Unpack
  • TensorArrayScatterV3

Same thing here.
I ran:
tensorflowjs_converter \
    --input_format=tf_frozen_model \
    --output_node_names='detection_classes' \
    ./hand_inference_graph/frozen_inference_graph.pb \
    hand-mobile

from the GitHub repo https://github.com/victordibia/handtracking, which contains this frozen_inference_graph.pb file.

But tensorflowjs_converter reports these unsupported ops:
Unsupported Ops in the model
TensorArraySizeV3, TensorArrayV3, All, TensorArrayReadV3, NonMaxSuppressionV2, TopKV2, TensorArrayGatherV3, Size, Unpack, TensorArrayWriteV3, Rank, Where, TensorArrayScatterV3, Assert

I'm also trying to convert a frozen model using the TF object detection API to TF.js and I'm running into problems with unsupported ops, specifically:

  • TensorArrayWriteV3
  • TensorArrayV3
  • TensorArrayGatherV3
  • TensorArrayReadV3
  • TopKV2
  • Where
  • All
  • Rank
  • NonMaxSuppressionV2
  • Assert
  • TensorArraySizeV3
  • Size
  • Unpack
  • TensorArrayScatterV3

It is possible to implement ssd mobilenet without all the operations mentioned above. It seems they are all applied in the post-processing layer that filters the boxes.

You can remove the post-processing layer and filter the boxes manually; at least that's how I did it.

Edit: One also has to remove all Assert ops, which might be located all over the graph, but they shouldn't be in the inference graph.
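The manual filtering described above can be sketched in plain JavaScript as score thresholding followed by greedy non-max suppression. The box format ([x1, y1, x2, y2]) and the threshold defaults are assumptions for illustration, not the poster's actual code:

```javascript
// Intersection-over-union of two axis-aligned boxes [x1, y1, x2, y2].
function iou(a, b) {
  const ix = Math.max(0, Math.min(a[2], b[2]) - Math.max(a[0], b[0]));
  const iy = Math.max(0, Math.min(a[3], b[3]) - Math.max(a[1], b[1]));
  const inter = ix * iy;
  const areaA = (a[2] - a[0]) * (a[3] - a[1]);
  const areaB = (b[2] - b[0]) * (b[3] - b[1]);
  return inter / (areaA + areaB - inter);
}

// Keep boxes above the score threshold, then greedily suppress any box
// that overlaps an already-kept, higher-scoring box too much.
function nonMaxSuppression(boxes, scores, scoreThresh = 0.5, iouThresh = 0.5) {
  const order = scores
    .map((s, i) => i)
    .filter(i => scores[i] >= scoreThresh)
    .sort((i, j) => scores[j] - scores[i]);  // best scores first
  const keep = [];
  for (const i of order) {
    if (keep.every(j => iou(boxes[i], boxes[j]) < iouThresh)) keep.push(i);
  }
  return keep;  // indices of surviving boxes
}
```

Running this on the raw box/score outputs of the truncated graph replaces the NonMaxSuppressionV2, TopKV2, and Where ops of the stripped post-processing layer.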

Do you have code for SSD? Or how fast does it run on your machine in tfjs?

On Jun 7, 2018, at 11:15 PM, justadudewhohacks notifications@github.com wrote the comment quoted above.

Inference time for a first implementation is about 100-150 ms on average on my GPU (AMD Radeon R9 200). There's probably still room for optimization.

I implemented it from scratch with the tfjs-core API for face detection in this repo, but it's probably possible to convert your graph without the post-processing layer using tfjs-converter and do non-max suppression and score filtering manually.

@dsmilkov is there a branch where the control-flow ops are being implemented?

The code for the converter lives in the tfjs-converter repo. The PR that added control-flow ops was merged on April 20, so the latest version of the tfjs-converter should support merge/switch/enter/exit/nextIteration/loopCond ops.

Hi,
same problem as @threedayoldcoffee; the unsupported ops are present in the ops.pbtxt file.

This is the command:
tensorflowjs_converter \
    --input_format=tf_saved_model \
    --output_node_names='detection_boxes,detection_scores,num_detections,detection_classes' \
    --saved_model_tags=serve \
    /home/user/ssd_inception_v2_coco/saved_model \
    /home/user/ssd_inception_v2_coco/web_model

and this is the output
Converted 0 variables to const ops. Unsupported Ops in the model TensorArrayWriteV3, All, TensorArrayGatherV3, TensorArrayReadV3, TensorArrayV3, NonMaxSuppressionV2, Assert, TensorArraySizeV3, TopKV2, Where, TensorArrayScatterV3

Any suggestions?
Best,
Andrea

Small update: TensorArray* ops are being added in this PR

@dsmilkov So helpful. When will a version with TensorArray* support be installable from npm?

Version 0.12.0+ is available from npm now, and includes TensorArray* ops.

Can anyone tell me how to get the latest version (0.12.0+) working with tfjs-converter?

It looks like the converter currently uses 0.12.0:
https://github.com/tensorflow/tfjs-converter/blob/master/package.json

"peerDependencies": {
    "@tensorflow/tfjs-core": "~0.12.0"
  },

Is it not working?

I don't know what I did wrong. But following the straightforward way:

$ git clone https://github.com/tensorflow/tfjs-converter.git
$ cd tfjs-converter
$ yarn

It worked fine, but then I tried to convert a frozen model:

$ tensorflowjs_converter \
    --input_format=tf_frozen_model \
    --output_node_names='num_detections,detection_boxes,detection_scores,detection_classes' \
    ./mobilenet/frozen_inference_graph.pb ./mobilenet/web_model

And as result I got this:
Unsupported Ops in the model
TensorArrayReadV3, NonMaxSuppressionV2, TopKV2, TensorArrayScatterV3, All, TensorArrayWriteV3, TensorArraySizeV3, TensorArrayGatherV3, Assert, Where, TensorArrayV3

Since 0.12.0 includes the TensorArray* ops, I thought at least those would disappear from the list of unsupported ops.
What am I missing?

@kor0 I got this problem too. I guess the ops list hasn't been updated yet.

@kor0, @xiaocode, ah, I see. The ops are supported in core, but not in the converter.

cc @pyu10055 Should TensorArray* ops be supported in tfjs-converter 0.5.0 given that https://github.com/tensorflow/tfjs-converter/pull/163 is in? Thanks!

@dsmilkov not yet. PR #163 only added the base TensorArray class; I am adding the op implementations right now, and they should be available fairly soon.

hi @pyu10055, is there a plan for supporting All, Assert, and TopKV2?

@116050423 All is already supported; we are looking into Assert and TopKV2 support.
TensorArray ops are under review https://github.com/tensorflow/tfjs-converter/pull/170

I can take the topK op in tfjs-core, which the converter will later call into. The implementation will run on the CPU for both the webgl and cpu backends, since sorting on the GPU is tricky.
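A CPU topK of the kind described here amounts to sorting indices by value and taking the first k. A minimal plain-JavaScript sketch (illustrative only, not the tfjs-core implementation):

```javascript
// topK over a flat array: returns the k largest values and their original
// indices, mirroring the (values, indices) output pair of TopKV2.
function topK(values, k) {
  const indices = values
    .map((v, i) => i)
    .sort((a, b) => values[b] - values[a])  // descending by value
    .slice(0, k);
  return { values: indices.map(i => values[i]), indices };
}

console.log(topK([1, 9, 8, 5], 2));  // { values: [9, 8], indices: [1, 2] }
```

This O(n log n) approach is simple and exact, which is why a CPU implementation is an easy first step even when the rest of the graph runs on the webgl backend.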

Hi folks, thanks for working hard on bringing parity with TF.
With the latest version, the unsupported ops are down to:

Where, TopKV2, Assert, NonMaxSuppression

@startupgurukul In the latest tfjs-core and tfjs-converter, the where op seems to be supported.

Could you try that?

@Lewuathe I tried both the Python package (0.5.2) and the tfjs-converter source (0.3.1); the where op is still not supported.

Thank you

Can confirm, the latest version seems to be missing:
Where, NonMaxSuppression, TopKV2, Assert

Yes, those 4 are still not supported :(

Looks like tfjs-core's where op is mapped to the Select op in TensorFlow, which makes sense for legacy reasons, but it should also be mapped to the Where op. (Select was deprecated in the v0.12.0 release.)

I can take this up.

EDIT: Opened a PR here https://github.com/tensorflow/tfjs-converter/pull/174

cc. @nsthorat @dsmilkov @pyu10055
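The distinction matters because TF's condition-only Where returns the coordinates of true elements, while Select chooses elementwise between two inputs. A plain-JavaScript sketch of the 1-D case (an illustrative assumption, not the converter's code):

```javascript
// TF `Where` with only a condition input: return the indices of the
// entries that are true.
function where(condition) {
  const out = [];
  condition.forEach((c, i) => { if (c) out.push(i); });
  return out;
}

// TF `Select`: pick elementwise from `a` where the condition is true,
// otherwise from `b`.
function select(condition, a, b) {
  return condition.map((c, i) => (c ? a[i] : b[i]));
}
```

So a converter mapping that only handles the Select form cannot express the index-returning behavior the object detection graphs rely on, which is why a separate Where mapping is needed.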

@pyu10055 Thanks very much, the TensorArray ops work now!
But there are still 4 remaining ops when I'm converting OD models:
TopKV2, Assert, NonMaxSuppressionV2, Where
I wish these 4 could be supported :D

@abhiped Do you have any plan for these 4?

We are actively working on this. I expect we'll get this done in 1-2 weeks. Stay tuned.

Please add support for these operations also.
Unsupported Ops in the model
AudioSpectrogram, DecodeWav, Mfcc

This is done and ported as part of the tfjs-models repository!

If there are any specific ops, please file an issue per op.

Great help with this solution! Thanks. :D

SparseSoftmaxCrossEntropyWithLogits op +1

... maybe a new issue should be created for supporting this op? It's one I've found missing in my own conversions.

https://www.tensorflow.org/api_docs/cc/class/tensorflow/ops/sparse-softmax-cross-entropy-with-logits

ValueError: Unsupported Ops in the model before optimization
SparseSoftmaxCrossEntropyWithLogits
