Darknet: Adversarial attack on Detector

Created on 24 Mar 2020 · 6 comments · Source: AlexeyAB/darknet

Try to find the dog, car, and bicycle in this image: drawn.zip

Using the YOLOv3 model:
./darknet detector test cfg/coco.data yolov3.cfg yolov3.weights drawn.png

[image]


How to use:

  1. Build Darknet with GPU=1 CUDNN=1 OPENCV=1
  2. Run command:
    ./darknet detector draw cfg/coco.data yolov3.cfg yolov3.weights -thresh 0.25 dog.jpg

  3. Move the trackbars:

    • iterations=200
    • learning_rate exp = 10
    • class_id = 15
  4. Select any area on the image with the left mouse button

  5. The resulting image will be saved as drawn.png

[image]

enhancement


All 6 comments

@AlexeyAB Hi,

I could detect the dog with these parameters. Can you explain more about this feature?


@zpmmehrdad

Can you explain more about this feature?

  • Regular training - network during training changes their weights to required detect objects
  • Adversarial attack - network during training changes initial image to required detect objects, so you can make this neural network detect the objects you need, and not detect unnecessary
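
The distinction above can be sketched with a hypothetical single-score "detector" in plain NumPy (an illustration, not Darknet's actual code): regular training ascends the gradient of the target score with respect to the *weights*, while the adversarial attack freezes the weights and ascends the gradient with respect to the *image*.

```python
import numpy as np

# Toy "detector": score = sigmoid(w . x). All names are illustrative;
# Darknet's real attack back-propagates through the full YOLO network.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
w = rng.normal(size=8)          # pretrained weights
x = rng.normal(size=8)          # the input image (flattened)

def train_step(w, x, lr=0.1):
    # Regular training: move the WEIGHTS to raise the target score.
    p = sigmoid(w @ x)
    return w + lr * (1.0 - p) * x   # gradient of log p w.r.t. w

def attack_step(w, x, lr=0.1):
    # Adversarial attack: move the IMAGE; the weights stay frozen.
    p = sigmoid(w @ x)
    return x + lr * (1.0 - p) * w   # gradient of log p w.r.t. x

x_adv = x.copy()
for _ in range(200):                # cf. the iterations trackbar
    x_adv = attack_step(w, x_adv)

print(sigmoid(w @ x), sigmoid(w @ x_adv))   # target score rises after the attack
```

To suppress an unwanted class instead, the same step would descend (subtract the gradient) on that class's score.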

I improved the adversarial attack and training. Download the latest Darknet code and use the default settings.

[image]

Is this a kind of transfer learning or fine-tuning? Won't it affect global performance (improving one class while degrading others)?

@HagegeR This is a utility to fool the detector. It doesn't train the model; it trains the image.
To train the model use this: https://github.com/AlexeyAB/darknet/issues/5117


Should "to required detect objects" be "to detect required objects" ?
By adding some noise to the dog so it looks like a cat (to the machine but not to a human), then training the image accordingly, "you can make this neural network detect the objects you need (dog)", right?
Similar to this?
[image]

@sisrfeng Yes, but since we have access to the structure and weights of the neural network (white-box, not just black-box), we can make such an attack more efficient, with smaller changes - like the side effects of image compression.
But we can also train the network to defend itself against such attacks.
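
A minimal sketch of why white-box access helps, again using a hypothetical toy logistic detector in NumPy (not Darknet's real code): with exact input gradients available, a single FGSM-style signed step raises the target score while changing each pixel by at most a small budget eps - barely visible, much like compression artifacts.

```python
import numpy as np

# White-box signed-gradient step on a toy model (illustrative only).
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
w = rng.normal(size=64)             # known weights (white-box access)
x = rng.normal(size=64)             # input image (flattened)
eps = 0.05                          # per-pixel perturbation budget

grad = (1.0 - sigmoid(w @ x)) * w   # exact gradient of log p w.r.t. x
x_adv = x + eps * np.sign(grad)     # one signed step; |change| <= eps per pixel

print(np.abs(x_adv - x).max())      # at most eps: a compression-scale change
print(sigmoid(w @ x), sigmoid(w @ x_adv))
```

The defense side works the same way in reverse: adversarial training mixes such perturbed images, with their correct labels, back into the weight updates.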

