ssd_keras: Slow detection

Created on 11 Nov 2016 · 5 comments · Source: rykov8/ssd_keras

Great work! Thanks a lot!

Detection takes around 2 seconds per image on a Mac using only the CPU.
This is quite different from the test performance reported in the paper.
Apart from hardware, could this be caused by the overhead of Keras?
Also, is it possible to shrink the network somehow?
Thank you.
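For anyone reproducing this measurement, here is a minimal timing sketch (using a stock VGG16 as a stand-in for the SSD net, and assuming the TensorFlow backend's channels-last input shape). The warm-up call matters because the first `predict` includes one-time setup:

```python
import time
import numpy as np
from keras.applications.vgg16 import VGG16

model = VGG16(weights=None)                    # random weights are fine for timing
x = np.random.rand(1, 224, 224, 3).astype('float32')

model.predict(x)                               # warm-up: excludes one-time setup cost

n_runs = 10
start = time.time()
for _ in range(n_runs):
    model.predict(x)
print('%.3f s per image' % ((time.time() - start) / n_runs))
```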

All 5 comments

The inference performance in the paper is measured on NVIDIA K40 GPUs, and the input is a batch of images. You can replace the VGG module with AlexNet; AlexNet is smaller than VGG.
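The batching point matters for throughput: one `predict` call over a stacked batch amortizes the per-call overhead. A minimal sketch, where `model` and `images` are assumed to be the loaded net and a list of preprocessed arrays:

```python
import numpy as np

# Assumes `model` is the loaded SSD300 net and `images` is a list of
# preprocessed arrays with identical shape, e.g. (300, 300, 3).
batch = np.stack(images)                       # shape: (N, 300, 300, 3)
preds = model.predict(batch, batch_size=8)     # one call amortizes the overhead
```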

@xindongzhang thanks for your comment, but I believe the authors state the following:
_We measure the speed with batch size 8 using Titan X and cuDNN v4 with Intel Xeon E5-2667v3@3.20GHz._ In any case, it doesn't change the point: they report performance on a GPU.

@MrXu, I measured the forward pass on my PC with a Titan, and for 5 pictures (as in SSD.ipynb) I got the results shown in the screenshot.
_[screenshot of forward-pass timings, 12 Nov 2016]_
This means it takes around 50 ms per image to get a prediction. I haven't measured the original Caffe code, but I'm sure my NMS implementation is slower than the original one. Moreover, some custom layers may also be inefficient; this is something to improve in the future, because I also need real-time performance on GPU for my problem. Any ideas on how to speed up the code are welcome! I've also heard that Keras is sometimes slower than other frameworks, but I can't bear Caffe, so for me Keras is the best choice.
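On the NMS point, a common way to speed up a per-box Python loop is to vectorize the IoU computation with NumPy. A minimal greedy-NMS sketch (not this repository's implementation), assuming boxes as (x1, y1, x2, y2) with per-box confidence scores:

```python
import numpy as np

def nms(boxes, scores, iou_threshold=0.45):
    """Greedy non-maximum suppression; boxes is (N, 4) as x1, y1, x2, y2."""
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]               # indices by descending confidence
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the top box with all remaining boxes, vectorized.
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_threshold]  # drop boxes that overlap too much
    return keep
```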

As for network shrinkage, apart from replacing VGG with AlexNet (after which you will have to retrain the net), you can think about the scales of your detections. For example, if you know you won't have large objects in your images, you probably don't need the final layers and can delete them.
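As an illustration of the mechanism only (not a drop-in change to this repository), Keras lets you build a sub-model that ends at an intermediate layer; for SSD you would also have to rebuild the prediction heads and prior boxes to match the remaining source layers. The layer name below is hypothetical:

```python
from keras.models import Model

# Hypothetical: cut the network after an intermediate feature layer.
# 'conv6_2' is a placeholder name; check model.summary() for the real names.
sub_model = Model(model.input, model.get_layer('conv6_2').output)
```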

@xindongzhang thanks for the suggestion. I'd prefer to avoid retraining the model.
@rykov8, thanks for the clarification. I did read that Keras is slower than other frameworks like TensorLayer or TFLearn. I am trying to run prediction on a Raspberry Pi; it seems achieving real-time detection with only a CPU is really hard...

@MrXu as for training, I'm working on that part and hope to release the code this week. I also had to change some things in the architecture in order to be able to train the net. I will test it only on my problem, but I'm trying to implement training as universally as possible. I hope it will be useful.
As for real-time detection, I'm quite sure that, unfortunately, it is nearly impossible nowadays to run deep nets on a CPU with real-time performance. If you need real time on a CPU, you might consider simpler methods, at some loss of quality.
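As one example of what "simpler methods" could mean here (an illustration, not something suggested in this thread), OpenCV's HOG + linear SVM pedestrian detector runs in real time on many CPUs, at much lower accuracy and for a single class only:

```python
import cv2

# HOG + linear SVM person detector shipped with OpenCV; CPU-only and fast,
# but limited to one class and far less accurate than a deep detector.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

img = cv2.imread('frame.jpg')                  # any test image
rects, weights = hog.detectMultiScale(img, winStride=(8, 8), scale=1.05)
for (x, y, w, h) in rects:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
```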

@rykov8, thanks for the code! It works perfectly. I wanted to know if you have tried anything to improve the FPS for real-time detection. I have been trying to implement multithreading, but no luck so far.
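On the multithreading idea: a single forward pass is compute-bound, so threads rarely speed up `predict` itself, but a common pattern is to overlap frame grabbing with inference so the camera never stalls the model. A minimal sketch, assuming OpenCV capture; `detect` is a placeholder for the real pipeline:

```python
import threading
import cv2

def detect(frame):
    # Placeholder for the real pipeline: preprocess + model.predict + NMS.
    return []

latest = {'frame': None}
lock = threading.Lock()

def grab(cap):
    # Producer thread: keep only the newest frame, so slow inference
    # never builds up a backlog of stale frames.
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        with lock:
            latest['frame'] = frame

cap = cv2.VideoCapture(0)
t = threading.Thread(target=grab, args=(cap,))
t.daemon = True
t.start()

while True:
    with lock:
        frame = latest['frame']
    if frame is None:
        continue
    detections = detect(frame)   # inference stays on the main thread
```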
