With the development of artificial intelligence, object detection models based on deep learning have achieved remarkable results, and detection has evolved from traditional hand-crafted feature extraction to feature extraction by neural networks. The YOLO series is representative of the classic single-stage detection models. However, continued research has shown that detection models based on deep neural networks inherit the weaknesses of neural networks and are vulnerable to adversarial attacks. This paper proposes an optimized attack algorithm based on PGD, which realizes an adversarial attack on the YOLOv4 object detection model. Experiments show that the proposed attack reduces the mAP from 87.61% to 0.12% on the VOC dataset and from 69.17% to 0.37% on the COCO dataset. Compared with the original PGD, it achieves a measurable improvement in the evaluation metrics PSNR and SSIM as well as a stronger attack effect, and the generated adversarial examples are of better quality.
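As background for the attack the abstract builds on, a minimal sketch of standard L-infinity PGD is given below. It is not the paper's optimized variant: the step size, radius, and the toy quadratic loss (standing in for the detector's loss) are illustrative assumptions, with NumPy used in place of a real YOLOv4 model and its gradients.

```python
import numpy as np

def pgd_attack(x0, grad_fn, eps=0.1, alpha=0.02, steps=20):
    """Untargeted L-inf PGD: repeatedly ascend the loss gradient, project
    back into the eps-ball around x0, and clip to the valid image range."""
    x = x0.copy()
    for _ in range(steps):
        x = x + alpha * np.sign(grad_fn(x))  # signed gradient-ascent step
        x = np.clip(x, x0 - eps, x0 + eps)   # project into the eps-ball
        x = np.clip(x, 0.0, 1.0)             # keep pixel values valid
    return x

# Toy stand-in for a detector loss: L(x) = sum((x - c)^2), gradient 2*(x - c).
c = np.full((3, 4, 4), 0.5)
grad = lambda x: 2.0 * (x - c)

x0 = np.full((3, 4, 4), 0.51)                # clean "image"
adv = pgd_attack(x0, grad)                   # adversarial example
```

In a real attack, `grad_fn` would be the gradient of the detection loss with respect to the input image, obtained by backpropagation through the model; the projection step is what keeps the perturbation imperceptible, which is also what PSNR and SSIM quantify.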