Darknet: bounding box and offsets are slightly wrong

Created on 8 Nov 2019 · 11 comments · Source: AlexeyAB/darknet

Hi all. I've used darknet for a few different things and never had this issue before. I tried both the default and the calculated anchors.

The base image is roughly 250 x 150 pixels.

Default anchors:
detection-wrong.png

Calculated anchors:
detection-wrong-anchors.png

The bounding box is marked tightly and correctly in yolo_mark (sorry for the colors):
yolo-mark-bb.png

The cfg with the calculated anchors (yolov3-tiny):

[net]
# Testing
batch=1
subdivisions=1
# Training
#batch=64
#subdivisions=16
width=832
height=832
channels=3
momentum=0.9
decay=0.0005
angle=0
saturation = 1.5
exposure = 1.5
hue=.1

learning_rate=0.001
burn_in=1000
max_batches=12000
policy=steps
steps=9600,10800
scales=.1,.1

[convolutional]
batch_normalize=1
filters=16
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=32
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=64
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=128
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=2

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[maxpool]
size=2
stride=1

[convolutional]
batch_normalize=1
filters=1024
size=3
stride=1
pad=1
activation=leaky

###########

[convolutional]
batch_normalize=1
filters=256
size=1
stride=1
pad=1
activation=leaky

[convolutional]
batch_normalize=1
filters=512
size=3
stride=1
pad=1
activation=leaky

[convolutional]
size=1
stride=1
pad=1
filters=33
activation=linear



[yolo]
mask = 3,4,5
anchors =  19,5,  25,50,  34,45,  30,54,  34,55,  32,60,  37,56,  35,61,  38,69
classes=6
num=6
jitter=.3
ignore_thresh = .7
truth_thresh = 1
random=1

[route]
layers = -4

[convolutional]
batch_normalize=1
filters=128
size=1
stride=1
pad=1
activation=leaky

[upsample]
stride=2

[route]
layers = -1, 8

[convolutional]
batch_normalize=1
filters=256
size=3
stride=1
pad=1
activation=leaky

[convolutional]
size=1
stride=1
pad=1
filters=33
activation=linear

[yolo]
mask = 0,1,2
anchors =  19,5,  25,50,  34,45,  30,54,  34,55,  32,60,  37,56,  35,61,  38,69
classes=6
num=6
jitter=.3
ignore_thresh = .7
truth_thresh = 1
random=1

What is the problem and how can I solve it?

THX <3

All 11 comments

  • What is the problem? Not a very accurate bbox?
  • Use the default anchors
  • Set ignore_thresh=0.9 in the [yolo] layers
  • Add the following lines to each [yolo] layer (see the example block right after this list):
iou_normalizer=0.5
iou_loss=giou
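For reference, a minimal sketch of one [yolo] layer with these suggestions applied, assuming the default yolov3-tiny anchors; the remaining values are copied from the cfg posted above, and both [yolo] layers would need the same additions:

[yolo]
mask = 0,1,2
# default yolov3-tiny anchors
anchors = 10,14,  23,27,  37,58,  81,82,  135,169,  344,319
classes=6
num=6
jitter=.3
ignore_thresh = .9
iou_normalizer=0.5
iou_loss=giou
truth_thresh = 1
random=1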

Thanks @AlexeyAB. Yes, my problem is that the bboxes are not accurate. I need very accurate offsets in my application for a correct calculation.

  • I used default anchors
  • added ignore_thresh=0.9 to both of my yolo layers
  • added iou_normalizer=0.5 and iou_loss=giou to both of my yolo layers

The detection is still the same. Do I need to retrain with these values?

@ycelik #4125

Yes, you should train from scratch.
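A minimal sketch of what retraining from scratch could look like, assuming placeholder file names (data/obj.data, cfg/yolov3-tiny-obj.cfg) and the pre-trained partial weights yolov3-tiny.conv.15 described in the repo README:

# one-time step: extract the pre-trained convolutional weights for tiny YOLO
./darknet partial cfg/yolov3-tiny.cfg yolov3-tiny.weights yolov3-tiny.conv.15 15

# retrain from scratch with the edited cfg
./darknet detector train data/obj.data cfg/yolov3-tiny-obj.cfg yolov3-tiny.conv.15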

Unfortunately the issue isn't solved. I didn't use the most recent darknet code though. I'll check it again after updating and report back. Thanks for the help so far, @zpmmehrdad and @AlexeyAB :)

Unfortunately the changes (default anchors, ignore_thresh=0.9, iou_normalizer=0.5 and iou_loss=giou) didn't fix the problem. I did update my local darknet installation to the most recent version. I didn't use yolov3-spp.cfg though; I stuck with tiny YOLO. Any further ideas?

Edit: this is not very important to me anymore, since the real offsets seem to be accurate. Only the display in the OpenCV window seems to be buggy; in yolo_mark the offsets are good. I'll close the ticket, but feel free to suggest possible solutions :)

@AlexeyAB, @zpmmehrdad FYI

@ycelik Hi,

I had the same problem. You should continue training even if the mAP is high. For example, I trained only 1 class and got about 100% mAP at 2k iterations, but I continued training until I got the highest mAP at -iou_thresh 0.9. So keep training.

Iteration 2k: mAP ~100%, mAP at -iou_thresh 0.9: 20%
Iteration 8k: mAP ~100%, mAP at -iou_thresh 0.9: 78%
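(For anyone reproducing this check: the mAP at a stricter IoU threshold can be measured with darknet's map command; the file names below are placeholders.)

./darknet detector map data/obj.data cfg/yolov3-tiny-obj.cfg backup/yolov3-tiny-obj_last.weights -iou_thresh 0.9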


@zpmmehrdad thank you very much, I'm gonna test it and report back =)
If you don't mind: how did you continue training? Did you add a new train set and change max_batches, or did you train from scratch?

@ycelik Hi,
You can either continue the training or train from scratch starting from your last weights (fine-tuning).


Just add the -clear flag at the end of the training command.
Reference: https://github.com/AlexeyAB/darknet/issues/4353
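A sketch of what that could look like, continuing from the last saved weights (file names are placeholders):

./darknet detector train data/obj.data cfg/yolov3-tiny-obj.cfg backup/yolov3-tiny-obj_last.weights -clear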


Thanks for the info, brother, appreciate it.
