Yolov5: The inference time of yolov5s.pt is 0.28s on jetson nano (use python detect.py). Is this normal speed?

Created on 13 Jun 2020 · 41 comments · Source: ultralytics/yolov5

Stale


All 41 comments

Hello @DENESTY, thank you for your interest in our work! Please visit our Custom Training Tutorial to get started, and see our Jupyter Notebook Open In Colab, Docker Image, and Google Cloud Quickstart Guide for example environments.

If this is a bug report, please provide screenshots and minimum viable code to reproduce your issue; otherwise we cannot help you.

If this is a custom model or data training question, please note that Ultralytics does not provide free personal support. As a leader in vision ML and AI, we do offer professional consulting, from simple expert advice up to delivery of fully customized, end-to-end production solutions for our clients, such as:

  • Cloud-based AI systems operating on hundreds of HD video streams in realtime.
  • Edge AI integrated into custom iOS and Android apps for realtime 30 FPS video inference.
  • Custom data training, hyperparameter evolution, and model exportation to any destination.

For more information please visit https://www.ultralytics.com.

That doesn't sound right; see this benchmark from when people converted YOLOv4 to run in the YOLOv3 repo.

That is YOLOv4 at 10FPS on Jetson Nano. YOLOv5s should be faster.

https://www.seeedstudio.com/blog/2020/06/03/accelerate-yolov4-real-time-object-detection-on-jetson-nano/

@Jacobsolawetz
Using a USB camera, the output is "1/1: 0... success (640x480 at 30 FPS). ....512x640 Done. (0.276)". Can you tell me the difference between (640x480 at 30 FPS) and (512x640 Done. (0.276))? Many thanks.

@DENESTY inference is executed on 32-stride multiple letterboxed images. width-height may be transposed in your printed output.

@glenn-jocher
Thanks for your response. I am wondering about the 30 FPS and the 0.267 s; what is the relationship between them?

@DENESTY source information is shown unredacted. For example, if you connect to an RTSP feed, the FPS displayed is a characteristic of the feed.

This has nothing to do with YOLOv5; the values are simply shown for convenience.

@glenn-jocher
So the 0.267 seconds is the inference time, meaning it processes about 3.6 images per second on the Jetson Nano?
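The arithmetic here is just the reciprocal of per-frame latency; a minimal sketch using the 0.276 s time from the detect.py output above:

```python
# Throughput is the reciprocal of per-image latency (single stream, no batching).
inference_s = 0.276        # seconds per image, as reported by detect.py above
fps = 1.0 / inference_s
print(round(fps, 1))       # -> 3.6
```

The 30 FPS printed next to the source is the camera's capture rate, which is independent of this number.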

@DENESTY I've never used that hardware, suggest you look for community support.

@glenn-jocher
So the 0.267 seconds is the inference time, meaning about 3.6 images per second on the Jetson Nano?

Which model are you using? I get 0.1 s with YOLOv5s.

Thanks glenn for directing my comment here. Hi all, may I know how to make YOLOv5 run properly on a Jetson Nano with JetPack 4.4? I followed the steps, but my Jetson Nano freezes after the output below:
python3 detect.py --source ./inference/images/ --weights ./weights/yolov5s.pt --conf 0.4

"Namespace(agnostic_nms=False, augment=False, classes=None, conf_thres=0.4, device='', fourcc='mp4v', half=False, img_size=640, iou_thres=0.5, output='inference/output', save_txt=False, source='./inference/images/', view_img=False, weights='./weights/yolov5s.pt')
Using CUDA device0 _CudaDeviceProperties(name='NVIDIA Tegra X1', total_memory=3956MB)"

Which model are you using? I get 0.1 s with YOLOv5s.

@PankajJ08
You got 0.1 s on the Jetson Nano with YOLOv5s? Can you share your repo for this?


Can confirm YOLOv5 works on the Jetson Nano with JetPack 4.4.
I also get about 0.28 s per frame with the v5s model. Looking for ways to speed it up.

Jetson Nano power mode:
5W: 0.15 s (inference time)
MAX: 0.1 s (inference time)

Mobile phone (Qualcomm Snapdragon 845):
5 s (inference time)


Can you share your repo for this? I get about 0.2 s per frame, like @aljohn0422 and @DENESTY.

I flashed the image below for YOLOv5 and then installed the dependencies.
https://github.com/NVIDIA-AI-IOT/jetbot/wiki/Software-Setup

cuda: 10.0
pytorch: 1.3

@timaker mobile phone inference is much faster than 5 whole seconds. iPhone 11 inference time << 0.03s. iDetection shows this clearly.


Hi, yes, I managed to run it using JetPack 4.4. Same inference time as well, about 0.26-0.28 s.


Thanks, will try it.


Nice work! Can it run on Android too?

Not yet, but hopefully one day!

@glenn-jocher the iDetection app is a custom build, not open source yet.

@timaker yes this is true. It shows the exciting possibilities for mobile inference though (30+ FPS for full sized YOLO models in the palm of your hand), and every year new performance improvements arrive from Cupertino like clockwork.

We are working on an Android version as well, if we can find time one day.

@glenn-jocher any updates on ONNX-to-TensorRT conversion for the NVIDIA Xavier?

@yshvrdhn for tensorrt this may be useful:
https://github.com/TrojanXu/yolov5-tensorrt


Hello,
Can you please tell me how you made YOLOv5 work on your Jetson Nano? Mine gives a lot of errors and I need some help with this.
Thanking you in advance.
Best Regards

Yeah, sure. Send me the error log.


I was getting an attribute error. I have uninstalled everything and reinstalled JetPack 4.4 and Python 3.6; can you please help me with the proper installation steps?


How did you manage to make it work on JetPack 4.4? What installations are needed? What Python version did you use? :)

@timaker
I am wondering whether you have ever tried using the DSP to accelerate inference when running the model on the Snapdragon 845.


Not yet, just using the 845 CPU.

I have a very hard time believing anything would take 5 seconds to process one image. Our iDetection app on iOS takes about 20-30 ms for one YOLOv5l frame using the ANE on any iPhone of the last few years (X, XS, 11).

In terms of CPU performance, you can test this on any hardware that runs PyTorch:
[screenshot: CPU benchmark results, 13 Aug 2020]
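A minimal timing sketch along these lines; the benchmark helper and the stand-in workload below are illustrative, not part of the repo (swap the lambda for a real model(img) call on your device):

```python
import time

def benchmark(fn, warmup=3, iters=20):
    """Average seconds per call, excluding warm-up iterations."""
    for _ in range(warmup):
        fn()                      # warm-up: caches, JIT, first-call overhead
    t0 = time.perf_counter()
    for _ in range(iters):
        fn()
    return (time.perf_counter() - t0) / iters

# Stand-in workload; on real hardware this would be e.g. model(img).
avg_s = benchmark(lambda: sum(i * i for i in range(10_000)))
print(f"{avg_s * 1000:.2f} ms/iter, {1.0 / avg_s:.1f} it/s")
```

Excluding the warm-up iterations matters on devices like the Nano, where the first inference call is much slower than steady state.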

Anyone got better results on the Jetson Nano? I am reaching 3 FPS on the Nano using yolov5s.

I could not make it work, I don't know why :)

Here are a few guidelines for the Jetson Nano:


Hello sir,

https://jkjung-avt.github.io/jetpack-4.4/
This link covers up to YOLOv3, I believe. Is it the same for YOLOv5? So do I just have to use JetPack 4.4, add CUDA to my path, and then run YOLOv5 directly by cloning the repository?

Best Regards,
Nitish.


Hi there,

You will need to install some dependencies for this repository to work. Start by ensuring all the basics are on the required versions: for v3.0 you will need PyTorch 1.6 and the corresponding torchvision.
Then work from there: by cloning the repository and attempting to run the Python code, the missing dependencies will surface, and you can install them individually with pip or by finding the right wheel online.

Making it run is not hard at all. The issue comes when you want to increase performance; you will then have to compile some things yourself to get all the functionality (e.g. CMake + OpenCV).


Hello sir,
YOLOv3 works perfectly on my Jetson Nano; the issue is with YOLOv5 only, and I was looking for help with an installation guide for YOLOv5 on the Jetson Nano. I am fine with 0.3 seconds inference time; that is not the issue. The main issue is that it does not really work and just gives errors, or "Aborted" as the error.

Best Regards,
Nitish.


I would say, first ensure the requirement versions are OK. For YOLOv5 v1.0 you will need PyTorch 1.5.1 and the corresponding torchvision; for YOLOv5 v3.0 you will need PyTorch 1.6.0 and the corresponding torchvision.

Once you have cuDNN and the basics, the errors should point you in a specific direction. Otherwise, post the exact error here (as a new issue) so people can help.
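The pairing can be sketched as a small lookup. The helper below and the torchvision versions (0.6.1 for torch 1.5.1, 0.7.0 for torch 1.6.0) are my assumptions, though 0.7.0 matches the setup reported elsewhere in this thread:

```python
# Hypothetical helper (not part of the YOLOv5 repo): check that the installed
# torch/torchvision pair matches what a given YOLOv5 release expects.
REQUIRED = {
    "v1.0": ("1.5.1", "0.6.1"),  # torchvision 0.6.1 pairs with torch 1.5.1
    "v3.0": ("1.6.0", "0.7.0"),  # torchvision 0.7.0 pairs with torch 1.6.0
}

def versions_ok(release, torch_ver, torchvision_ver):
    """True if the installed versions start with the expected pair."""
    want_torch, want_tv = REQUIRED[release]
    return torch_ver.startswith(want_torch) and torchvision_ver.startswith(want_tv)

print(versions_ok("v3.0", "1.6.0", "0.7.0"))  # -> True
```

On a real system you would pass torch.__version__ and torchvision.__version__ as the arguments.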

Hello everyone,
I tried to install PyCUDA but it is giving errors; can someone please help with this? I am attaching a photo of it.

Best Regards,
Nitish.

[attached photo: PyCUDA installation error, 6 Sep 2020]

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

What is the conclusion of this issue? It is full of unrelated setup and mobile questions, but what about performance on the Jetson?
With JetPack 4.4 + torch 1.6 + torchvision 0.7.0 + OpenCV > 4 + YOLOv5 v3.0,
I am reaching 3 FPS on the Nano using yolov5s, like some others here. Is this normal, or is something wrong with our setups?

