Darknet: Is it possible to test the YOLO V3 network on several images with only one run of the network?

Created on 17 Feb 2019  ·  10 Comments  ·  Source: AlexeyAB/darknet

How can it be done?

Solved

Most helpful comment

https://github.com/AlexeyAB/darknet#how-to-use

To process a list of images data/train.txt and save the detection results to result.txt, use:
darknet.exe detector test cfg/coco.data yolov3.cfg yolov3.weights -dont_show -ext_output < data/train.txt > result.txt

All 10 comments

https://github.com/AlexeyAB/darknet#how-to-use

To process a list of images data/train.txt and save the detection results to result.txt, use:
darknet.exe detector test cfg/coco.data yolov3.cfg yolov3.weights -dont_show -ext_output < data/train.txt > result.txt
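
Here data/train.txt is just a plain text file with one image path per line; the paths below are placeholders for illustration, not files shipped with the repo:

data/obj/img_0001.jpg
data/obj/img_0002.jpg
data/obj/img_0003.jpg

Darknet loads the config and weights once, then reads the paths from stdin one at a time, so the whole list is processed in a single run and all detections end up in result.txt.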

Thanks Alexey.

I don’t want to test the images one by one and run the whole network for detection separately. I would like to run a test batch with size larger than 1.
How can it be done?


I don’t want to test the images one by one and run the whole network for detection separately.

The network will be loaded only once.

If you want to test 64 images at a time instead of 1 by 1, batch testing isn't implemented yet, since the current aim is low latency for real-time object detection on a single video stream.

I have been trying to do exactly the same thing but can't seem to get it to actually run. I am simply trying to run it on the same data I trained on (I was able to train successfully with all of the following .cfg/.data files, etc.). However, I still get output such as:

Cannot load image "data/labels/32_0.png"
Cannot load image "data/labels/33_0.png"
Cannot load image "data/labels/34_0.png"
Cannot load image "data/labels/35_0.png"
...

These messages appear prior to the model loading; then it simply hangs after the following output:

Total BFLOPS 65.297
Allocate additional workspace_size = 33.55 MB
Loading weights from gemini_final.weights...Done!

This is despite having also changed the beginning of my .cfg as follows:

[net]
# Testing
batch=1
subdivisions=1
# Training
#batch=64
#subdivisions=64

My arguments to start it are as follows:
./darknet detector test data/gemini.data gemini.cfg gemini_final.weights -dont_show -ext_output < data/gemini_train.txt > results.txt

I don't suppose you have any insight?

EDIT: Oh I'm dumb, it redirected output to results.txt... Everything seems to be working perfectly!

@Stoltec

However, I still get output such as:

Cannot load image "data/labels/32_0.png"
Cannot load image "data/labels/33_0.png"
Cannot load image "data/labels/34_0.png"
Cannot load image "data/labels/35_0.png"

Because you removed this directory: https://github.com/AlexeyAB/darknet/tree/master/data/labels

Hi Alexey.
Thank you for the details.
We used a solution provided by another post on the net that externally advances the output pointer of each layer to get the results of the different batch slots (a rough sketch of the idea is included after this comment).
We still have some issues that you might be able to help with:

  1. Currently, we would like to work with a dual stream. For some reason, there are several functions within darknet that check specifically for batch size two and in that case flip the input. When we try to run the network with batch size 2, it crashes. To overcome this, we run with batch size 3 or 4 but feed only the first two images and look at the results of the first two batch slots. The downside is that working with 3 or 4 batches increases memory allocation and run time.
    Do you know what is going on with batch size two, and is it possible to prevent that behavior?
  2. When working on the GPU, we have an issue where the Debug and Release builds behave differently depending on the allocation size. If the memory allocation of the process is above 1.5 GB in Release, the application crashes, while in Debug it does not. We have verified that with a smaller input (by changing the input size in the configuration file) the Release version also works fine. We have also verified that when running on the CPU, the Release version works properly even when the allocation is above 3 GB. Do you know why the Release build has a lower GPU memory limit than the Debug build, and is there any way to overcome it? I know the compiler has the /HEAP flag for CPU allocation, but I haven't seen anything similar for the CUDA compiler.

Thanks,
Avia & Guy.
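
For anyone else trying this, here is a minimal sketch of the "advance the output pointer of each layer" idea described in the comment above. The struct fields net->n, net->layers, l.output and l.outputs come from darknet.h; the helper names and the surrounding flow are illustrative assumptions, not the actual code used here.

#include "darknet.h"

/* Sketch only: after one forward pass of a network built with batch = N,
 * each layer's output buffer holds N result sets of l.outputs floats laid
 * out back to back. Pointing l.output at slot b lets the usual
 * box-extraction helpers (e.g. get_network_boxes(), whose exact signature
 * differs between forks) read the detections of image b. Restore the
 * pointers before selecting another slot. */
static void select_batch_slot(network *net, int b, float **saved)
{
    for (int i = 0; i < net->n; ++i) {
        layer *l = &net->layers[i];
        saved[i] = l->output;                 /* remember the original pointer */
        l->output += (size_t)b * l->outputs;  /* jump to batch slot b */
    }
}

static void restore_batch_slots(network *net, float **saved)
{
    for (int i = 0; i < net->n; ++i)
        net->layers[i].output = saved[i];     /* undo the pointer shift */
}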


@aviaisr

  1. This is related to this line; you can just comment it out: https://github.com/AlexeyAB/darknet/blob/5e850c24897a5eb65941703059a85ead2ea5ff8c/src/yolo_layer.c#L407

If the memory allocation of the process is above 1.5 GB in Release, the application crashes

  2. I have never encountered it, and I can't reproduce it. All my models work well until all 8 GB of my RTX 2070 GPU are used.

Thank you Alexey,

  1. Commenting out the line you mentioned did work. Can you please explain why this line was there in the first place?
    We would like to enable it only when it's relevant.

BR,
Guy & Avia


@aviaisr It was just an experiment to increase accuracy by running detection on both non-flipped and flipped images. You can leave it commented out.
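
In other words, the batch == 2 check apparently treats the second batch slot as a horizontally flipped copy of the first image and averages the two predictions. If you want a similar accuracy boost outside the library, you can do it at the detection level instead; a rough sketch, assuming relative box coordinates and leaving out the merging step (detection and box come from darknet.h, the helper name is made up):

#include "darknet.h"

/* Illustrative only: map detections obtained on a horizontally flipped copy
 * of an image back into the original frame. With relative coordinates a
 * horizontal flip only moves the x-center; y, width and height are unchanged.
 * The two detection sets (original and un-flipped mirror) can then be merged,
 * e.g. with NMS or by averaging matched boxes. */
static void unflip_detections(detection *dets, int n)
{
    for (int i = 0; i < n; ++i)
        dets[i].bbox.x = 1.0f - dets[i].bbox.x;
}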

@aviaisr

We used some solution provided by another post on the net that externally
advances the output pointer of each layer to get the result of different
batches.

Hi, I have the same need as you: to test the YOLO V3 network on several images with only one forward pass and get detection results for all of those images at once.
You mentioned you accomplished this with a solution from another post on the net; could you please share it?

Thanks!
