Sparse Adversarial Attack to Object Detection

Adversarial examples have gained much attention in recent years. Many adversarial attacks have been proposed against image classifiers, but few works shift attention to object detectors. In this paper, we propose the Sparse Adversarial Attack (SAA), which enables adversaries to perform an effective evasion attack on detectors with a bounded \emph{l$_{0}$}-norm perturbation. We select fragile positions of the image and design an evasion loss function for the task. Experimental results on YOLOv4 and FasterRCNN reveal the effectiveness of our method. In addition, our SAA shows great transferability across different detectors in the black-box attack setting. Code is available at \emph{https://github.com/THUrssq/Tianchi04}.
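To make the \emph{l$_{0}$} constraint concrete, the sketch below shows how a sparse perturbation can be confined to a small set of selected pixel positions, so that the \emph{l$_{0}$} norm of the change is bounded by the mask size. This is a minimal illustrative sketch in NumPy; the function and parameter names are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def apply_sparse_perturbation(image, mask, delta):
    """Overwrite only the masked pixels with perturbed values.

    Hypothetical helper: the l0 norm of (adv - image) is bounded by
    the number of nonzero entries in `mask` (the sparsity budget).
    """
    mask = mask.astype(bool)
    adv = image.copy()
    adv[mask] = np.clip(delta[mask], 0.0, 1.0)  # keep pixels in valid range
    return adv

# Toy example: perturb 2 of 16 pixels in a 4x4 grayscale image.
x = np.zeros((4, 4))
m = np.zeros((4, 4))
m[0, 0] = m[3, 3] = 1          # assumed "fragile" positions
d = np.full((4, 4), 0.5)       # candidate perturbation values
x_adv = apply_sparse_perturbation(x, m, d)
print(int(np.count_nonzero(x_adv - x)))  # l0 norm of the change → 2
```

In an attack like SAA, the mask would be chosen at positions where the detector is most sensitive, and `delta` would be optimized against an evasion loss rather than fixed.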
