Darknet: How to run YOLO on multiple images and save predictions to a txt file

Created on 23 Jun 2018  ·  32 comments  ·  Source: pjreddie/darknet

The demonstration of YOLO is impressive! However, I'm wondering if there is a way to get predictions for a batch of images, say from a given directory, and save the names of the detected classes to a txt file? I think it should be possible, but I'm unfamiliar with Darknet, so any advice will be much appreciated!

Most helpful comment

You can use this repo: https://github.com/AlexeyAB/darknet

To process the list of images in data/train.txt and save the detection results to result.txt, use:

./darknet detector test cfg/voc.data yolo-voc.cfg yolo-voc.weights -dont_show -ext_output < data/train.txt > result.txt

All 32 comments

You can use this repo: https://github.com/AlexeyAB/darknet

To process the list of images in data/train.txt and save the detection results to result.txt, use:

./darknet detector test cfg/voc.data yolo-voc.cfg yolo-voc.weights -dont_show -ext_output < data/train.txt > result.txt
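If you then need the detections in a structured form, here is a minimal sketch of a parser for the -ext_output lines of result.txt. It assumes the line layout shown later in this thread (e.g. `trigger: 100% (left_x: 504 top_y: 650 width: 105 height: 85)`); the exact format can vary between darknet versions, so treat it as a starting point.

```python
import re

# Sketch of a parser for result.txt produced with -dont_show -ext_output.
# The exact line layout can vary between darknet versions.
DET_RE = re.compile(
    r"^(?P<label>[^:]+): (?P<conf>\d+)%\s*"
    r"\(left_x:\s*(?P<x>-?\d+)\s*top_y:\s*(?P<y>-?\d+)\s*"
    r"width:\s*(?P<w>\d+)\s*height:\s*(?P<h>\d+)\)"
)

def parse_result(lines):
    """Yield (image_path, label, confidence, (x, y, w, h)) per detection."""
    image = None
    for line in lines:
        if "Enter Image Path:" in line:
            # e.g. "Enter Image Path: /path/00082.png: Predicted in 0.016813 seconds."
            image = line.split("Enter Image Path:")[1].split(":")[0].strip()
        else:
            m = DET_RE.match(line.strip())
            if m:
                yield (image, m["label"], int(m["conf"]) / 100.0,
                       (int(m["x"]), int(m["y"]), int(m["w"]), int(m["h"])))
```

Usage: `for image, label, conf, box in parse_result(open("result.txt")): ...`. Note the image-path split on ":" assumes Unix-style paths without colons.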

Hi @AlexeyAB, is there a way to create a separate txt file for each prediction? Parsing the result.txt file is not so easy. I would like to use this mAP project, which requires all predictions in separate files. Thanks!
P.S.: I'm using YOLOv3

@EscVM Hi,
To get mAP, you can use this repo: https://github.com/AlexeyAB/darknet
with a command like:
./darknet detector map data/obj.data yolo-obj.cfg backup\yolo-obj_7000.weights


If you want to get mAP by this repo https://github.com/Cartucho/mAP then try to ask how to obtain predictions in separated files in the Issues: https://github.com/Cartucho/mAP/issues

Ok, thank you. So, no one that you know of has already written code that parses that result.txt file?

Try this command: ./darknet detector valid cfg/voc.data yolo-voc.cfg yolo-voc.weights
If you take a quick look at the function "validate_detector" in darknet/src/detector.c, you'll see that it saves detection results for every image in the validation list defined in your data cfg file. So you can simply modify your data cfg file to point to your own batch of images. Its output is much cleaner than that of ./darknet detector test.

Here is an example output from the valid command:
00082 0.999969 504.637390 651.370789 610.118347 736.534363
00083 0.999979 524.560852 676.153137 664.041809 758.356995
00084 0.999882 556.716858 706.351868 727.970886 782.629456
00085 0.999651 588.336853 747.259827 803.692688 815.355164
00086 0.999325 641.701050 805.085388 901.960693 843.564392
00087 0.999820 730.968018 745.703369 953.834717 817.448608
00088 0.999969 810.657593 706.231934 989.481201 785.879639

Whereas here is what the test command gives you:
Enter Image Path: /usr/local/faststorage/chuwang/suncg/prepared_data/yolo_2d_full_trigger/JPEGImages/00082.png: Predicted in 0.016813 seconds.
trigger: 100% (left_x: 504 top_y: 650 width: 105 height: 85)
Enter Image Path: /usr/local/faststorage/chuwang/suncg/prepared_data/yolo_2d_full_trigger/JPEGImages/00083.png: Predicted in 0.016660 seconds.
trigger: 100% (left_x: 524 top_y: 675 width: 139 height: 82)
Enter Image Path: /usr/local/faststorage/chuwang/suncg/prepared_data/yolo_2d_full_trigger/JPEGImages/00084.png: Predicted in 0.016925 seconds.
trigger: 100% (left_x: 556 top_y: 705 width: 171 height: 76)
Enter Image Path: /usr/local/faststorage/chuwang/suncg/prepared_data/yolo_2d_full_trigger/JPEGImages/00085.png: Predicted in 0.017895 seconds.
trigger: 100% (left_x: 587 top_y: 746 width: 215 height: 68)
Enter Image Path: /usr/local/faststorage/chuwang/suncg/prepared_data/yolo_2d_full_trigger/JPEGImages/00086.png: Predicted in 0.017027 seconds.
trigger: 100% (left_x: 641 top_y: 804 width: 260 height: 38)
Enter Image Path: /usr/local/faststorage/chuwang/suncg/prepared_data/yolo_2d_full_trigger/JPEGImages/00087.png: Predicted in 0.016829 seconds.
trigger: 100% (left_x: 730 top_y: 745 width: 223 height: 72)

Might be useful to someone who is new to yolo like me :)
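If you need this valid output split into one file per image (e.g. for the Cartucho/mAP layout mentioned above), a rough sketch, assuming each line is `image_id score x1 y1 x2 y2` as in the example output (the column layout and per-class output files can differ between darknet versions):

```python
import os
from collections import defaultdict

def split_by_image(valid_output_path, out_dir):
    """Split darknet 'valid' output (lines of: image_id score x1 y1 x2 y2)
    into one <image_id>.txt file per image. Sketch only: adjust the column
    count and layout to match your darknet version."""
    per_image = defaultdict(list)
    with open(valid_output_path) as f:
        for line in f:
            parts = line.split()
            if len(parts) == 6:  # id, score, x1, y1, x2, y2
                per_image[parts[0]].append(" ".join(parts[1:]))
    os.makedirs(out_dir, exist_ok=True)
    for image_id, rows in per_image.items():
        with open(os.path.join(out_dir, image_id + ".txt"), "w") as out:
            out.write("\n".join(rows) + "\n")
```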

@fate3439 But how will it work in the case of multiple objects per frame?
00082 0.999969 504.637390 651.370789 610.118347 736.534363

I know 00082 is the frame number, 0.999969 is the confidence score, and the rest are bbox coordinates.
Just consider the case below:
Enter Image Path: /home/akhan/yolo_data/labels/001471.jpg: Predicted in 0.000000 milli-seconds.
handicap: 87%
car: 91%
handicap: 84%
0.411916 0.405841 0.209267 0.074270
0.375321 0.649909 0.324759 0.150611
0.442939 0.200636 0.348126 0.348058
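The normalized rows above can be converted back to pixel boxes if the image size is known; a sketch assuming the usual YOLO convention that the four values are normalized (center_x, center_y, width, height):

```python
def yolo_to_pixels(cx, cy, w, h, img_w, img_h):
    """Convert normalized YOLO (center_x, center_y, width, height) to pixel
    (left_x, top_y, width, height). Assumes all four inputs are fractions
    of the image size, the usual YOLO convention."""
    pw, ph = w * img_w, h * img_h
    left = cx * img_w - pw / 2
    top = cy * img_h - ph / 2
    return (round(left), round(top), round(pw), round(ph))
```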

And when I execute ./darknet detector valid cfg/obj.data cfg/yolo.cfg backup/yolo_10000.weights result.txt

I get this:
[screenshot]

@fate3439 Hi, now I understand: it saves results there, including detections whose confidence score is below a given threshold. The only confusing part in my case is how to get the proper image paths; it seems to be selecting images randomly, as you can see in the screenshot.
[screenshot]


Where is the folder in which the results are saved?

mask_scale: Using default '1.000000'
Total BFLOPS 62.669
Loading weights from yolov2_30000.weights...
seen 64
Done!
Learning Rate: 0.001, Momentum: 0.9, Decay: 0.0005
eval: Using default 'voc'
4
Segmentation fault (core dumped)

I got this error. Has anyone else run into it?

Try this command: ./darknet detector valid cfg/voc.data yolo-voc.cfg yolo-voc.weights

I could not get these outputs.

I got a 'Segmentation fault (core dumped)' error. Can you please help me?

./darknet detector test cfg/voc.data yolo-voc.cfg yolo-voc.weights -dont_show -ext_output < data/train.txt > result.txt

I tried this and got "Cannot load image "-dont_show"" as an error.

I tried this and got "Cannot load image "-dont_show"" as an error.

I also had the same error. Try reinstalling darknet and running make again; that helped me.

./darknet detector test cfg/voc.data yolo-voc.cfg yolo-voc.weights -dont_show -ext_output < data/train.txt > result.txt

I am running this command for my network and weights, but I'm getting an error. This is the command I tried running, with the output:
[screenshot]

This is what the test.txt file contains:
[screenshot]

It would be great if you could help me out.

I am currently using another repo that I forked from AlexeyAB's YOLOv3 darknet; it makes it much easier to store all your input images in one folder, get your output images in another folder, and get a text file with the confidence percentages of all predictions.

https://github.com/Vic-TheGreat/VG_AlexeyAB_darknet.git

It's quite easy to follow.

CHEERS!!

@Vic-TheGreat But it doesn't work with custom weights when saving the images with the predicted bboxes, right?

./darknet detector map data/obj.data yolo-obj.cfg backup\yolo-obj_7000.weights

I get this error:

calculation mAP (mean average precision)...
Couldn't open file: coco_testdev

How do I make it write a different image with the bounding boxes?

(So it saves as prediction-1.jpg, prediction-2.jpg, prediction-3.jpg... and so forth)


I tried to get the mAP in a txt file using ./darknet detector map obj.data obj.cfg backup/obj_100.weights -map | tee result_map.txt, but it doesn't work. Any ideas?

My training of custom objects is ongoing with this repo. Can anyone help me figure out how to stop the training and where to find the trained weights? (P.S.: Sorry if I posted this in the wrong place; I'm new to this section.)

Try this command: ./darknet detector valid cfg/voc.data yolo-voc.cfg yolo-voc.weights

Just to expand on this answer with a more explicit description of usage. Note that I am using the most recent version of YOLO as of git commit 0ff2343.

  1. Create a text file containing the absolute paths of the images to run inference on.
find /path/to/images -type f > /path/to/images_list.txt
  2. Modify the coco.data config file for your task; it defines where to find the list of images for evaluation and where to store the results.
# From darknet root directory
sed -e 's-coco_testdev-/path/to/images_list.txt-' \
    -e 's-/home/pjreddie/backup/-/directory/to/save/results-' \
    cfg/coco.data > cfg/my_eval_config.data
  3. Modify src/detector.c to include the image names as the id:
    change this line (at or near line number 449)
    sprintf(buff, "{\"image_id\":%d, \"category_id\":%d, \"bbox\":[%f, %f, %f, %f], \"score\":%f},\n", image_id, coco_ids[j], bx, by, bw, bh, dets[i].prob[j]);
    to
    sprintf(buff, "{\"image_id\":\"%s\", \"category_id\":%d, \"bbox\":[%f, %f, %f, %f], \"score\":%f},\n", image_path, coco_ids[j], bx, by, bw, bh, dets[i].prob[j]);
  4. Remake.
make
  5. Run inference on the images using your config file.
./darknet detector valid cfg/my_eval_config.data cfg/yolov3.cfg yolov3.weights
  6. The output will be saved as a JSON-like file named coco_results.json in the results path you specified in cfg/my_eval_config.data in step 2; in this case we called it /directory/to/save/results.
  7. The output JSON includes all detected objects regardless of their probability score, so it can be parsed at your leisure with whatever method you please. Nevertheless, here is a post-processing step to extract objects above a given threshold. You will need jq for this step.
jq '[ .[] | select(.score >= 0.1) ]' results/coco_results.json > results/yolo_objects_0p1thresh.json
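If jq is unavailable, the same threshold filter can be sketched in Python. This assumes the results file parses as plain JSON; since the output above is described as "JSON-like", you may need to clean it up (e.g. a trailing comma) first.

```python
import json

def filter_by_score(in_path, out_path, threshold=0.1):
    """Keep only detections with score >= threshold, mirroring the jq
    filter above. Assumes in_path contains a valid JSON array of
    {"image_id": ..., "category_id": ..., "bbox": [...], "score": ...}."""
    with open(in_path) as f:
        detections = json.load(f)
    kept = [d for d in detections if d.get("score", 0.0) >= threshold]
    with open(out_path, "w") as f:
        json.dump(kept, f, indent=2)
```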

I am running this command for my network and weights, but I'm getting an error. This is what the test.txt file contains:
[screenshot]

The text file should contain paths like data/obj/... and so on; everything before that should be removed.


Thank you very much, Sir. It helped a lot, and my training and testing completed successfully.

Can you help with how to crop the detected region during testing? I found a solution in another thread that modifies image.c here:
https://github.com/pjreddie/darknet/issues/1673#issuecomment-531499894

But even after modifying it, it didn't work for me (I rebuilt the project after the changes).

Thanks in Advance.

Try this command: ./darknet detector valid cfg/voc.data yolo-voc.cfg yolo-voc.weights

How do I do this with custom data?

What should eval=coco be replaced with?

thanks

./darknet detector test cfg/voc.data yolo-voc.cfg yolo-voc.weights -dont_show -ext_output < data/train.txt > result.txt

Hi @AlexeyAB, result.txt is successfully generated, but how do I visualise the results? I mean, is there any existing script to read this result.txt?
Lazy me... ^_^

Thank you

@AlexeyAB I have tried running the command you suggested:

"./darknet detector test cfg/voc.data yolo-voc.cfg yolo-voc.weights -dont_show -ext_output < data/train.txt > result.txt"

I made modifications to get it to run on the system I'm using:
"./darknet detector test cfg/voc.data cfg/yolo-voc.cfg ../../YOLO-weights/yolov4.weights -dont_show -ext_output < data/train.txt > result.txt"

The result:
https://drive.google.com/drive/folders/1O1JtsnLQrh2MNKPGz8Nn76GJ2ej4oXyK?usp=sharing - 'see result.png'

My build:
https://drive.google.com/drive/folders/1O1JtsnLQrh2MNKPGz8Nn76GJ2ej4oXyK?usp=sharing -- see 'build.png'

After I run "cat result.txt", the file output is empty.

Thank you for your help, and please let me know what I can do to resolve this error.

Duplicate of #

./darknet detector test cfg/voc.data yolo-voc.cfg yolo-voc.weights -dont_show -ext_output < data/train.txt > result.txt

Hi Sir, I would like to ask whether I can get the total number of detections of a specific class after running all the images at once. How do I get the total number of detections across those images? Please help; I need it for a project.

I created a simple command that saves all predictions for the images in the folder test_set:
for i in test_set/*.jpg; do ./darknet detector test obj.data yolov3.cfg yolov3_10000.weights "$i" -dont_show; mv predictions.jpg "${i%.jpg}"_det.jpg; done
When it has finished executing, the folder test_set contains the images with predictions, whose names start with the original image name and end with "_det.jpg". You can then move these images to a folder "predictions", for example with mv test_set/*_det.jpg predictions. This was run on Ubuntu.


Hi Sir, can you please give me the command for Windows, as I'm running darknet on Windows? What I want: I'm detecting a class called cans. Once I run the command you mentioned above for a bunch of images, it creates the txt file, but I would then like to go through that txt file and output the total number of cans at the bottom of it. Can you please help me with that?
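A sketch of what the counting could look like, independent of the OS. The class name "cans" and the -ext_output line format (`cans: 90% (...)`) are taken from this thread; adjust the pattern to your actual result.txt layout.

```python
import re

def count_class(result_path, class_name):
    """Count -ext_output detection lines for one class in result.txt and
    append the total at the bottom of the file. Sketch only: assumes each
    detection line starts with '<class_name>:' as shown in this thread."""
    pattern = re.compile(r"^\s*" + re.escape(class_name) + r":")
    with open(result_path) as f:
        total = sum(1 for line in f if pattern.match(line))
    with open(result_path, "a") as f:
        f.write("\ntotal %s detected: %d\n" % (class_name, total))
    return total
```

Usage: `count_class("result.txt", "cans")` returns the count and appends e.g. "total cans detected: 7" to the file.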
