Yolov5: Question: is there a way to apply yolov5 to multiple streaming sources?

Created on 8 Oct 2020 · 13 comments · Source: ultralytics/yolov5

Hi! I would like to apply YOLOv5 to multiple cameras. The approach I had in mind is to stitch the images together and run the algorithm once, but is there a better way to do it? Is Ultralytics considering this feature?

Any suggestion will be welcome. Thanks in advance,

H.

All 13 comments

Hello @hdnh2006, thank you for your interest in our work! Please visit our Custom Training Tutorial to get started, and see our Jupyter Notebook Open In Colab, Docker Image, and Google Cloud Quickstart Guide for example environments.

If this is a bug report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we can not help you.

If this is a custom model or data training question, please note Ultralytics does not provide free personal support. As a leader in vision ML and AI, we do offer professional consulting, from simple expert advice up to delivery of fully customized, end-to-end production solutions for our clients, such as:

  • Cloud-based AI systems operating on hundreds of HD video streams in realtime.
  • Edge AI integrated into custom iOS and Android apps for realtime 30 FPS video inference.
  • Custom data training, hyperparameter evolution, and model exportation to any destination.

For more information please visit https://www.ultralytics.com.

@hdnh2006 multi-stream capability is already built in; we've created a multithreaded stream loader that feeds detect.py:
https://github.com/ultralytics/yolov5/blob/77940c3f42d0f0542d346bfe5fa913f8b0033b5c/utils/datasets.py#L255

To use multiple streams, simply create a text file with the stream addresses (HTTP, RTSP, etc.), one per line, and pass it as the source. For example, for 16 simultaneous streams:

python detect.py --batch 16 --source streams.txt
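To illustrate the idea only (this is not the actual LoadStreams implementation), here is a minimal stdlib-only sketch of such a loader: one daemon thread per source keeps only its latest frame, and the main loop composes a batch whose size equals the number of streams. The `FakeSource` class is a hypothetical stand-in for `cv2.VideoCapture` so the sketch stays self-contained.

```python
import threading
import time

class FakeSource:
    """Hypothetical stand-in for cv2.VideoCapture; yields dummy 'frames'."""
    def __init__(self, name):
        self.name = name
        self.counter = 0

    def read(self):
        self.counter += 1
        return True, (self.name, self.counter)

class MultiStreamLoader:
    """Sketch of a per-stream-thread loader; batch size == number of streams."""
    def __init__(self, sources):
        self.frames = [None] * len(sources)
        for i, src in enumerate(sources):
            # One dedicated thread per stream, as recommended in the thread.
            t = threading.Thread(target=self._update, args=(i, FakeSource(src)), daemon=True)
            t.start()

    def _update(self, i, cap):
        while True:
            ok, frame = cap.read()
            if ok:
                self.frames[i] = frame  # keep only the most recent frame
            time.sleep(0.01)

    def next_batch(self):
        while any(f is None for f in self.frames):
            time.sleep(0.01)  # wait until every stream has produced a frame
        return list(self.frames)

loader = MultiStreamLoader(["rtsp://cam-a", "rtsp://cam-b", "rtsp://cam-c"])
batch = loader.next_batch()
print(len(batch))  # batch size tracks the stream count: 3
```

Dropping stale frames instead of queueing them keeps inference running on the most recent image from each camera, which is usually what you want for live sources.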

It works! You guys are awesome! Thanks for this fantastic tool; it has helped me a lot!

Thanks @glenn-jocher !

Just for anyone who needs it, I set up my streams.txt file as follows:

http://192.168.0...
rtsp://admin:...
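A small sketch of that setup, writing a streams.txt and building the detect.py command; the stream addresses below are illustrative placeholders only, not the ones from the comment above.

```python
import pathlib
import shlex

# Placeholder stream addresses (illustrative only).
streams = [
    "http://192.168.0.10/video",
    "rtsp://user:pass@192.168.0.11/stream1",
]

# One address per line, as the stream loader expects.
path = pathlib.Path("streams.txt")
path.write_text("\n".join(streams) + "\n")

cmd = ["python", "detect.py", "--source", str(path)]
print(shlex.join(cmd))  # → python detect.py --source streams.txt
```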

Maybe this is another question @glenn-jocher, but I cannot see the batch parameter in the detect.py code. Am I wrong?

@hdnh2006 great, glad it works well!

--batch is an abbreviation of --batch-size; argparse accepts unambiguous abbreviations of full argument names.
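This prefix matching is standard argparse behavior (`allow_abbrev` defaults to True), which can be demonstrated in isolation:

```python
import argparse

# allow_abbrev=True is the argparse default, so unambiguous
# prefixes of long options are accepted on the command line.
parser = argparse.ArgumentParser()
parser.add_argument("--batch-size", type=int, default=1)

args = parser.parse_args(["--batch", "16"])
print(args.batch_size)  # → 16
```

If another option shared the `--batch` prefix (say `--batch-count`), the abbreviation would become ambiguous and argparse would raise an error instead.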

Yes @glenn-jocher, but it seems this argument is only defined in train.py, not in detect.py; that's why I was confused.

@hdnh2006 ah of course. Yes you are right.

The streamloader automatically composes a batch of the right size, so you don't need to take any action there. If you have 2 streams it will build you a batch-size 2 input automatically. If you have 16 streams it will build a batch size 16 input, etc.

One recommendation from our own testing: dedicate a CPU thread per stream so that cv2 can decode the incoming streams efficiently.

Thank you so much again for this fantastic tool you have created @glenn-jocher.

I will close the issue.

Hi, is there a way we can send the output to a stream / front-end UI rather than displaying them directly?
Thanks!

@imabhijit can you be more specific?

So I have some older code that takes an RTSP stream and pipes the frames out using FFmpeg. A second FFmpeg process then takes those images from the pipe and outputs them as an HLS stream, after using yolov4 with cv2.dnn to do some object detection. The HLS stream is then captured and displayed in the front-end.
I have trained a yolov5 model and would like to use it to replace the yolov4 model; however, I saw that export for use with dnn is not yet supported.
Now, it seems detect.py already has many of the features I need, such as reading RTSP streams directly and running detection on them.
So my question is: how can I run detect.py so that it outputs frames to a pipe or stream instead of displaying them directly (as it currently does)?
Thanks :)

@imabhijit all of the YOLOv5 predictions, regardless of source, are available in detect.py as Python variables, so you can add logic of your own to operate on those values as you see fit within the detection for-loop:

You can access predictions here:
https://github.com/ultralytics/yolov5/blob/c8c5ef36c9a19c7843993ee8d51aebb685467eca/detect.py#L71-L78
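As one illustration of that idea (this is not code from detect.py), the per-image results inside such a loop could be serialized and written to a pipe or socket for a front-end to consume. The row format below, (x1, y1, x2, y2, conf, cls), mirrors YOLOv5's post-NMS detection convention, but the `emit_detections` helper itself is hypothetical:

```python
import io
import json

def emit_detections(det_rows, names, out):
    """Hypothetical helper: write one JSON line per frame of detections.

    det_rows: iterable of (x1, y1, x2, y2, conf, cls) tuples, matching the
    per-image detection rows YOLOv5 produces after NMS.
    names: mapping from class index to label.
    out: any writable text stream (a pipe to FFmpeg, a websocket wrapper...).
    """
    payload = [
        {"box": [x1, y1, x2, y2], "conf": conf, "label": names[int(cls)]}
        for x1, y1, x2, y2, conf, cls in det_rows
    ]
    out.write(json.dumps(payload) + "\n")

names = {0: "person", 2: "car"}
buf = io.StringIO()  # stand-in for a real pipe or network stream
emit_detections([(10, 20, 110, 220, 0.9, 0), (5, 5, 50, 40, 0.7, 2)], names, buf)
print(buf.getvalue().strip())
```

Writing annotated frames rather than JSON would work the same way: instead of serializing boxes, encode the drawn image and write the bytes to the FFmpeg subprocess's stdin.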

Ok, I see. Thank you!

