Models: Performance issue: speed is very slow, around 0.8 seconds per frame on pre-trained model

Created on 12 Jan 2018 · 4 comments · Source: tensorflow/models

I am using the pre-trained model "ssd_mobilenet_v1_coco_2017_11_17" and the object-detection results for vehicles are pretty good. However, the performance is extremely slow, so I am unable to use it for my purpose. I resized the image to a smaller one and even chopped the horizon out of the image so that detection would be faster, but it doesn't seem to help. Is this a known issue? Any suggestions?
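For reference, the cropping and downsampling described above can be done before the frame ever reaches the detector. A minimal NumPy sketch (the frame size and crop row are illustrative placeholders, not values from the original post):

```python
import numpy as np

# Simulated 1080p frame (contents are placeholders).
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)

# Chop the sky/horizon: keep only the bottom 600 rows.
road = frame[480:, :, :]

# Naive 2x downsample by striding (fast, no interpolation).
small = road[::2, ::2, :]

print(small.shape)  # (300, 960, 3)
```

Note that this mainly reduces decode/copy overhead; as discussed later in the thread, the SSD pipeline resizes its input internally anyway.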

bug

Most helpful comment

I am having similarly bad performance.
On my MacBook Pro with TensorFlow in CPU mode I get:
Inference time 0.13113617897033691 sec
On an AWS p2.xlarge K80 GPU instance I'm getting:
Inference time 0.08756685256958008 sec

TensorFlow GPU is installed, it reports the GPU as connected, and I can see the process in the nvidia-smi tool.

Does anyone have an idea why inference is so slow on the GPU?
Is it simply not possible to go faster? That's only about 12 fps on a K80...
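One common cause of numbers like these is timing a single `session.run` call: the first call absorbs one-off costs such as graph optimization and CUDA kernel loading. A hedged benchmarking sketch (the `infer` callable is a stand-in for the actual detection call, not code from this thread) that warms up first and then averages:

```python
import time

def benchmark(infer, n_warmup=5, n_runs=50):
    """Average per-call latency in seconds, excluding warm-up runs."""
    for _ in range(n_warmup):
        infer()  # first calls absorb one-off setup costs; discard them
    start = time.time()
    for _ in range(n_runs):
        infer()
    return (time.time() - start) / n_runs

# Dummy workload standing in for sess.run(...) on a real frame:
latency = benchmark(lambda: sum(range(10000)))
print("Inference time %f sec" % latency)
```

If the averaged steady-state number is still around 0.08 s, the bottleneck is likely per-call overhead (feeding one image at a time) rather than raw GPU compute; batching several frames per `session.run` is the usual mitigation.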

All 4 comments

@MeghaMaheshwari, are you using TensorFlow or TensorFlow Lite?
What hardware are you running this on? You could try training a smaller SSD network (fewer layers) for faster inference; I don't know whether a smaller MobileNet variant that does this already exists. @petewarden.

I am using TensorFlow. The GPU is a Quadro M2200 and the CUDA version is 8. I tried resizing the image to a smaller size and it helped a bit, but not significantly enough for me to use. Could you advise?
Another question: for the pre-trained network, does the network resize the image internally? If so, to what size?
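On the resizing question: the SSD pipeline does resize internally. In the pipeline.config shipped with ssd_mobilenet_v1_coco, the image_resizer stage is a fixed_shape_resizer, so every input is scaled to 300x300 before the network runs, which is why pre-shrinking frames yields little speedup. The relevant config fragment looks roughly like this (sketch of the shipped config, worth verifying against your checkpoint's own pipeline.config):

```
model {
  ssd {
    image_resizer {
      fixed_shape_resizer {
        height: 300
        width: 300
      }
    }
  }
}
```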


Hi there,
We are checking to see if you still need help on this, as it is a considerably old issue. Please update this issue with the latest information, a code snippet to reproduce your issue, and the error you are seeing.
If we don't hear from you in the next 7 days, this issue will be closed automatically. If you no longer need help with this issue, please consider closing it.
