Hello there! Is it really possible to improve the quality of detection? I have a trained model; I tried setting the network height and width to 448x448, but I no longer do that because it decreases the FPS. It doesn't detect very well and basically only sees close objects. -thresh is not exactly what I need. Is it possible to improve the quality? The model is trained as well as possible.
Thanks!
Hi,
yolov3.cfg? Train with batch=64, random=1, and a good dataset.
More: https://github.com/AlexeyAB/darknet#how-to-improve-object-detection
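For illustration, a minimal sketch of the relevant cfg lines (the subdivisions value and the exact sizes are assumptions, adjust them to your GPU memory):
[net]
batch=64
subdivisions=16
width=416
height=416
...
[yolo]
random=1    # randomly resizes the network during training for better multi-scale detection
random=1 goes in the [yolo] layers (or the [region] layer for YOLOv2). For detection you can then raise width and height in the cfg to a larger multiple of 32 (e.g. 608x608) without retraining, which usually helps with small or far objects at the cost of FPS.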
@AlexeyAB
Nope, I'm using YOLOv2.
Btw, the default video resolution in YOLO is 640x480. Can I improve the quality of the video from my web camera? My web camera supports HD resolution.
Do you want to increase quality of the video or quality of detection?
@AlexeyAB, the quality of the video, because I'm using an HD web camera, but in darknet I only get a 640x480 resolution.
@AshleyRoth, see the last post in this topic:
https://github.com/AlexeyAB/darknet/issues/751
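I'm not sure this matches that post exactly, but one common approach is a small change in the camera setup code to request a higher capture resolution from OpenCV. A hedged sketch, assuming your darknet build opens the camera through the legacy OpenCV C API (as older demo code does); cam_index stands for whatever camera index variable your build uses:
#include "opencv2/highgui/highgui_c.h"   /* legacy C API headers, assumed to be available */
/* hypothetical placement inside the camera setup code: */
CvCapture* cap = cvCaptureFromCAM(cam_index);
cvSetCaptureProperty(cap, CV_CAP_PROP_FRAME_WIDTH, 1280);   /* request HD capture from the webcam */
cvSetCaptureProperty(cap, CV_CAP_PROP_FRAME_HEIGHT, 720);
Whether the camera actually delivers that resolution depends on the driver; otherwise OpenCV silently falls back to a supported mode.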
@AlexeyAB, I came across this technique called pseudo labeling in fast.ai, where the predictions are fed back into the training data and the model is trained again, which resulted in better accuracy. Using some script here, can I get the bounding box coordinates in a .txt file (like the YOLO-mark .txt output) for each predicted output image, so that I can use those to re-train the model?
@kmsravindra
Do you want to train on the labeled Dataset1, then do pseudo labeling on the non-labeled Dataset2, and then train on this Dataset2?
Try to compare the mAP after training on Dataset1 and after training on Dataset2, to see how much it improves accuracy.
Just add this code before this line: https://github.com/AlexeyAB/darknet/blob/94c84da85015f0dcc6deaff53b24ff46a730711c/src/detector.c#L1111
// pseudo labeling concept - fast.ai
// Writes the current image's detections as a YOLO-format .txt label file next to the
// image, so the predictions can be fed back into the training set.
// Uses variables from the surrounding detector.c code: input (image path),
// dets / nboxes (detections), l (last layer), thresh (detection threshold).
{
    char labelpath[4096];
    // derive the label path from the image path by swapping the extension for .txt
    find_replace(input, ".jpg", ".txt", labelpath);
    find_replace(labelpath, ".png", ".txt", labelpath);
    find_replace(labelpath, ".bmp", ".txt", labelpath);
    find_replace(labelpath, ".JPG", ".txt", labelpath);
    find_replace(labelpath, ".JPEG", ".txt", labelpath);
    find_replace(labelpath, ".ppm", ".txt", labelpath);
    FILE* fw = fopen(labelpath, "wb");
    int i, j;
    for (i = 0; i < nboxes; ++i) {
        char buff[1024];
        int class_id = -1;
        float prob = 0;
        // keep the highest-probability class above the threshold for this box
        for (j = 0; j < l.classes; ++j) {
            if (dets[i].prob[j] > thresh && dets[i].prob[j] > prob) {
                prob = dets[i].prob[j];
                class_id = j;
            }
        }
        // one line per detection: class_id x_center y_center width height (relative coordinates)
        if (class_id >= 0) {
            sprintf(buff, "%d %2.4f %2.4f %2.4f %2.4f\n", class_id, dets[i].bbox.x, dets[i].bbox.y, dets[i].bbox.w, dets[i].bbox.h);
            fwrite(buff, sizeof(char), strlen(buff), fw);
        }
    }
    fclose(fw);
}
And run the command below, where images.txt is a list of paths to images:
./darknet detector test data/obj.data yolo-obj.cfg yolo-obj_8000.weights -dont_show < images.txt
You will get txt-files with Yolo labels in the same directory as the images.
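For reference, each generated txt-file contains one line per detection in the YOLO-mark format, i.e. class_id followed by the relative x_center, y_center, width and height; the values below are made up:
0 0.5123 0.4311 0.2040 0.3375
2 0.1254 0.7702 0.0981 0.1120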
Yes. Will do this and let you know how it improves. Thanks a lot!
@kmsravindra I added a -save_labels flag for this.
With the latest code you can do it without changing the code:
./darknet detector test data/obj.data yolo-obj.cfg yolo-obj_8000.weights -dont_show -save_labels < images.txt
@AlexeyAB, Thanks for adding this!
@kmsravindra Hi, did you get good results using the pseudo labeling concept?
@AlexeyAB, Thanks for the reminder. I am yet to try this enhancement and will keep you posted (maybe in a week's time).
@AlexeyAB, I owe you the performance results after running it through pseudo labeling... And here they are: the mAP jumped by almost 10%, from 80% to 90%. But that was on the same test data that I pseudo labeled. So I compared this new model and the earlier model on data that was totally unseen by both models. Even then, I saw an observable improvement in objects detected and reduced FPs and FNs! So this worked for me. Thanks for this enhancement!
@kmsravindra Thanks!
So I compared this new model and the earlier model on data that was totally unseen by both models. Even then, I saw an observable improvement in objects detected and reduced FPs and FNs! So this worked for me. Thanks for this enhancement!
What mAP improvement did you get for unseen data for both models?
@AlexeyAB, Right now it is a visible, subjective inference. I will need to annotate the unseen data and report that metric for an objective inference. Will post here once done.
Hi @AlexeyAB,
Currently the pseudo-labelling implemented here gives us hard targets (depending on the threshold we set for the teacher network), meaning that if the teacher network has a threshold of 0.25, then all bounding boxes above 0.25 are pseudo-labelled as belonging to that particular class, and all bounding boxes below a probability of 0.25 get dropped.
Is there any way we can include the bounding box probabilities in addition to the pseudo labels (soft targets) to train the student network (assuming the student network is darknet)?
A student network trained on soft targets (rather than hard targets) in the way described above has been shown to improve accuracy in general. Would this be a big change to the darknet architecture itself, or is it a doable change?
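As a rough illustration only, not an existing darknet feature: the label-writing snippet above could emit the detection probability as an extra column, which would give label files with a soft score per box. Note that the stock darknet label loader reads five values per line, so the training side would also have to be modified to consume this format:
// hedged sketch: write the class probability as a 6th column to keep a soft score per box
if (class_id >= 0) {
    sprintf(buff, "%d %2.4f %2.4f %2.4f %2.4f %2.4f\n", class_id,
        dets[i].bbox.x, dets[i].bbox.y, dets[i].bbox.w, dets[i].bbox.h, prob);
    fwrite(buff, sizeof(char), strlen(buff), fw);
}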