Darknet: Self-adversarial training - data augmentation

Created on 26 Mar 2020 · 32 comments · Source: AlexeyAB/darknet

I added Self-adversarial training.
How to use:

```
[net]
adversarial_lr=1
#attention=1  # just to show attention
```

Note for the Classifier: it seems to make training unstable at a high learning rate, so you should first train the model as usual for 50% of the iterations, then add adversarial_lr=0.05 and continue training.

Explanation: if we run an attention algorithm or an adversarial-attack algorithm, we find that the network looks only at a few small areas of the object, since it considers them the most important. But the network is often wrong: these parts are not actually the most important, or do not belong to the object at all, and the network fails to notice the other details of the object.

Our goal: to make the network take a larger area of the object into account.

A way to achieve this goal: during training, on every second iteration the network conducts an adversarial attack on itself (see the sketch after this list):

  1. In the first forward-backward pass, the network modifies the input image rather than the weights: it tries to remove from the image all the details that relate to the objects, making itself believe that there is not a single object in the image.
  2. In the second forward-backward pass, the network trains its weights on this altered image with the original labels, learning that the objects are still there even though it no longer "sees" them.
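A minimal PyTorch-style sketch of one such SAT iteration, just to illustrate the two passes. This is not Darknet's actual implementation: `model` (assumed to return raw objectness logits) and `detection_loss` (the usual detector loss) are hypothetical names, and the attack is approximated by a single FGSM-style step.

```python
import torch

def sat_step(model, images, targets, detection_loss, optimizer,
             adversarial_lr=0.05):
    # Pass 1 (attack): compute the gradient w.r.t. the input image and
    # nudge the image so the network stops "seeing" any objects.
    images = images.clone().detach().requires_grad_(True)
    obj_logits = model(images)  # assumption: raw objectness logits
    attack_loss = torch.nn.functional.binary_cross_entropy_with_logits(
        obj_logits, torch.zeros_like(obj_logits))  # target: "no objects"
    attack_loss.backward()
    with torch.no_grad():
        adv_images = images - adversarial_lr * images.grad.sign()
        adv_images.clamp_(0.0, 1.0)  # keep a valid image

    # Pass 2 (train): update the weights on the attacked image using the
    # ORIGINAL labels, so the network learns the objects are still there.
    optimizer.zero_grad()  # also clears weight grads left over from pass 1
    loss = detection_loss(model(adv_images), targets)
    loss.backward()
    optimizer.step()
    return loss
```

The key point is that the first backward pass changes only the image, while the second changes only the weights.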

For example: default yolov3.cfg/weights
| Adversarial attack | Attention during training | Attention during training on the adversarial-attacked image |
|---|---|---|
| Train an already-trained model for 500 iterations, but optimize the input image instead of the weights (weights are frozen): https://github.com/AlexeyAB/darknet/issues/5105 | `[net] adversarial_lr=0.05 attention=1` ; the network sees the dog/bicycle/car | `[net] adversarial_lr=0.05 attention=1` (image from the first column); the network sees a cat here, without the dog/bicycle/car |
| *(image)* | *(image)* | *(image)* |

As you can see in the edited image (the adversarial attack) in the 1st and 3rd columns, the network pays no attention to the dog/bicycle/car, because it thinks they are not there and that there is a cat instead of the dog. So the network should be trained on this augmented image to learn to pay attention to the more obvious details, since a human can still clearly see the dog/bicycle/car in it.
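For reference, the attack from the first column of the table can be sketched the same way (again a hypothetical PyTorch illustration, under the same assumption that the model returns objectness logits): the trained weights are frozen and only the input image is optimized, for about 500 iterations as in the linked issue.

```python
import torch

def adversarial_attack(model, image, steps=500, lr=0.01):
    model.eval()
    for p in model.parameters():
        p.requires_grad_(False)            # weights are frozen
    image = image.clone().detach().requires_grad_(True)
    opt = torch.optim.SGD([image], lr=lr)  # optimize the input, not weights
    for _ in range(steps):
        opt.zero_grad()
        obj_logits = model(image)
        # Drive all objectness predictions toward "there is nothing here".
        loss = torch.nn.functional.binary_cross_entropy_with_logits(
            obj_logits, torch.zeros_like(obj_logits))
        loss.backward()
        opt.step()
        with torch.no_grad():
            image.clamp_(0.0, 1.0)         # keep a valid image
    return image.detach()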


Train https://github.com/AlexeyAB/darknet/blob/master/cfg/yolov3-tiny_3l.cfg on this small dataset for 10000 iterations: https://github.com/AlexeyAB/darknet/issues/3114#issuecomment-494148968

| | default model | `[net] adversarial_lr=0.05` |
|---|---|---|
| 1st try | chart_simple_1 *(image)* | chart_adversarial_with_burnin *(image)* |
| 2nd try | chart_simple2 *(image)* | chart_adversarial_without_burnin *(image)* |

Label: enhancement


All 32 comments

@WongKinYiu You can try to train some small model, e.g. yolov3-tiny-prn.cfg, with `[net] adversarial_lr=0.05` and compare the mAP.

@AlexeyAB OK, I will also add this to the ablation-study waiting list.

@WongKinYiu I improved Self-adversarial training in the latest code: https://github.com/AlexeyAB/darknet/commit/4f62a01cd338f4a9039559eba45b293264b6dbd8

So use the latest code
and `[net] adversarial_lr=1` in the cfg-file.

*(image: chart_adversarial_new_1)*

@AlexeyAB OK, I will retrain the model.

@WongKinYiu What model do you train?

@AlexeyAB yolov3-tiny-prn.

@WongKinYiu Sorry, one more fix: https://github.com/AlexeyAB/darknet/commit/9a2344759b119faf9df98dbeed81650c03650ecd
Please run the training again.


Is training of CSResNext/Darknet + PANet + MISH currently in progress?

@AlexeyAB
OK.
512x512: 42.3/64.2/45.8 - a little bit lower than w/o mish.

@WongKinYiu

> 512x512: 42.3/64.2/45.8 - a little bit lower than w/o mish.

Is it CSPResNeXt-PANet or CSPDarknet-PANet?
Is it trained using the top model from https://github.com/WongKinYiu/CrossStagePartialNetworks/blob/master/imagenet/results.md ?
Is it trained with higher subdivisions (lower mini-batch) than the model without mish?

@AlexeyAB

| Model (all with optimal settings) | Size | AP | AP50 | AP75 |
| :---- | :--: | :--: | :--: | :--: |
| CSPResNeXt50-PANet-SPP | 512×512 | 42.4 | 64.4 | 45.9 |
| CSPResNeXt50-PANet-SPP (better imagenet) | 512×512 | 42.3 | 64.3 | 45.7 |
| CSPResNeXt50-PANet-SPP (better imagenet+mish) | 512×512 | 42.3 | 64.2 | 45.8 |
| CSPDarknet53-PANet-SPP (better imagenet) | 512×512 | 42.4 | 64.5 | 46.0 |
| CSPDarknet53-PANet-SPP (better imagenet+mish) | 512×512 | 43.0 | 64.9 | 46.5 |

@WongKinYiu Thanks!

  • Is CSPResNeXt-50 (better imagenet) == CutMix + Mosaic + Label Smoothing = 78.5% / 94.8%?

  • What is the result of CSPDarknet53-PANet-SPP without better imagenet?

  • Does it mean that CutMix + Mosaic + Label Smoothing and/or mish worsen the results of CSPResNeXt50 but improve the results of CSPDarknet53?

@AlexeyAB

  • yes.

  • I have not trained it.

  • I think for cspresnext50 all of the models get almost the same results,
    but for cspdarknet53, yes.

@WongKinYiu

So at the moment it is unclear whether features such as CBN, DropBlock and Adversarial training improve accuracy:

  • It seems that CBN works fine for the Detector, but doesn't work for the Classifier.
  • The Classifier with DropBlock was trained with broken weighted-[shortcut] layers (without constraints/burnin_update/lrelu/softmax), so the results are inconclusive.
  • Adversarial training is still in progress.

  1. How long does it take to train yolov3-tiny-prn with Adversarial training?
    I have recently added a display of the remaining training time:
    *(image)*

  2. You can try to train

    • csdarknet53-omega-mi.cfg.txt = (better imagenet+mish) + weighted-[shortcut]-multi-input-softmax

    • csdarknet53-omega-mi-db.cfg.txt = (better imagenet+mish) + weighted-[shortcut]-multi-input-softmax + dropblock (since weighted-[shortcut]-multi-input-softmax works very well with csresnext50, we will see whether dropblock really works well)

  3. Then you can try to train CSPDarknet53-PANet-SPP (better imagenet+mish) + CBN + maybe Adversarial training, with the best backbone of: csdarknet53-omega.cfg.txt / csdarknet53-omega-mi.cfg.txt / csdarknet53-omega-mi-db.cfg.txt

  4. Is the training of csresnext50morelayers-spp-asff-bifpn-rfb-db.cfg going well?

@AlexeyAB

  1. 85~90.

  2. OK, I will get some free GPUs in about a week.

  3. I will design the experiments according to the ablation studies.
    By the way, since the training process of Darknet is really slow, I may develop new methods using PyTorch and, if they work, move them to Darknet.

  4. currently 30k epochs.

@WongKinYiu

It's just that most PyTorch models have noticeably lower accuracy than their Darknet counterparts, based on these tables: *(tables not shown)*


I think these are the last 2 models that we can train on Darknet before reproducing them in PyTorch:

  1. Classifier - csdarknet53-omega-mi.cfg.txt
  2. Detector - CSPDarknet53-omega-mi-PANet-SPP (better imagenet+mish) + CBN + May be Adversarial-training + without-iou_thresh

Then we can use Darknet just for low-level optimizations (xnor-models/...), or for new recurrent layers (changed gru/lstm/... layers), ...


@AlexeyAB

I will modify some of the ultralytics code and examine the performance.
Since training in PyTorch takes about 1 week while Darknet takes more than 1 month, I think I can check faster whether new features are suitable for our models.
For example: anchor-free methods, instance segmentation, simultaneous detection and tracking...
Then I will move the new features that perform well into Darknet.

OK, will do these experiments as soon as possible.

@WongKinYiu Hi,

Why did you use Leaky instead of Mish for the PANet head of CSPDarknet53-PANet-SPP (better imagenet+mish)? Mish is used for the backbone, and Leaky for the PANet head: https://github.com/AlexeyAB/darknet/issues/5117#issuecomment-605405985

I think Mish can get better accuracy.

When you train CSPDarknet53-PANet-SPP (better imagenet+mish) + CBN (batch_normalize=2) + maybe adversarial training (adversarial_lr=1) + maybe label_smooth_eps=0.1, based on csdarknet53-omega-mi.cfg.txt, try to train with the Mish activation instead of Leaky for the PANet head too. https://github.com/AlexeyAB/darknet/issues/5117#issuecomment-605460850
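For reference, the two activations under discussion, using their standard definitions (an illustration in PyTorch, not the Darknet source):

```python
import torch
import torch.nn.functional as F

def mish(x):
    # Mish (Misra, 2019): x * tanh(softplus(x)); smooth and unbounded above.
    return x * torch.tanh(F.softplus(x))

def leaky(x):
    # Leaky ReLU; Darknet's `leaky` activation uses slope 0.1.
    return F.leaky_relu(x, negative_slope=0.1)
```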

OK, thanks.

@WongKinYiu Also, I added an iou_thresh_kind= parameter to the [yolo] and [Gaussian_yolo] layers.
Now you can use it without changing the source code:

```
[yolo]
iou_thresh_kind=giou  # by default: iou
iou_thresh=0.213
```
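For reference, `giou` here refers to Generalized IoU (Rezatofighi et al., 2019). A minimal illustration for two axis-aligned boxes with positive area, assuming (x1, y1, x2, y2) format (this is not the Darknet source):

```python
def giou(a, b):
    # GIoU = IoU - (C - U) / C, where C is the area of the smallest box
    # enclosing both a and b, and U is the union area.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    iou = inter / union
    # Smallest enclosing box of a and b.
    cx1, cy1 = min(a[0], b[0]), min(a[1], b[1])
    cx2, cy2 = max(a[2], b[2]), max(a[3], b[3])
    c = (cx2 - cx1) * (cy2 - cy1)
    return iou - (c - union) / c
```

Unlike plain IoU, GIoU stays informative even when the boxes do not overlap, which is why it is sometimes preferred for the iou_thresh matching.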

@WongKinYiu Hi, have you finished training the yolov3-tiny-prn model with `[net] adversarial_lr=1`, and what result did you get?

@AlexeyAB

Training will finish in 10 min; currently the AP50 on the val data is 30.22%.

update:
with adversarial: 29.83% val AP50.
without: 32.78% val AP50.

update:
fine-tuned adversarial: 30.03% val AP50.

@WongKinYiu Thanks.

So Self-adversarial training decreases AP50 by about 3%, at least for this small model.


Have you checked CBN again on a small model like Tiny-PRN?

@WongKinYiu Can you share the cfg and weights files of yolov3-tiny-prn with Self-adversarial training?

A model trained with Self-adversarial training (data augmentation) is more robust to a self-adversarial attack and requires much larger image changes than the default model:

| adversarial-trained yolov3-tiny-prn-adversarial.weights | default yolov3-tiny-prn.weights |
|---|---|
| *(image; click to enlarge)* | *(image; click to enlarge)* |


I can see some noise on the left cat. Could you please explain the difference between these two cats? What does the non-attacked image look like?

  • Left - how much noise is required to trick a neural network trained with self-adversarial training (you can tune the hyperparameters so that it requires even more noise)
  • Right - how much noise is required to trick a neural network trained in the usual way

A non-attacked image contains no noise at all.

Many thanks!
Since the left cat is under the _default yolov3-tiny-prn.weights_ heading, I thought it was the one you described as RIGHT.
On the right cat, the noise is mainly in this part, right?
*(image)*
On the non-attacked image, a cat rather than a person can be detected, right?

The captions for the images were mixed up; I have fixed them. :)

@AlexeyAB Did you use adversarial_lr in the yolov4 training?
I can't find any source code or cfg related to adversarial_lr.
Can anyone help me?

How can I visualise SAT while training? E.g., by writing images with predicted boxes to a path?

Set this in the cfg-file:

```
[net]
adversarial_lr=1.0
```

@AlexeyAB
What value of adversarial_lr should be set for yolov4-tiny and yolov4-custom? Is it dataset-dependent, and how does the value affect self-adversarial training? Thanks!

