
Color Channel Perturbation Attacks for Fooling Convolutional Neural Networks and A Defense Against Such Attacks

Convolutional Neural Networks (CNNs) have emerged as a very powerful data-dependent hierarchical feature extraction method and are widely used in several computer vision problems. CNNs learn the important visual features from training samples automatically, but it is observed that the network overfits the training samples very easily. Several regularization methods have been proposed to avoid this overfitting. Despite this, the network remains sensitive to the color distribution within the images, which is ignored by existing approaches. In this paper, we expose the color robustness problem of CNNs by proposing a Color Channel Perturbation (CCP) attack to fool them. In the CCP attack, new images are generated whose channels are stochastic linear combinations of the original color channels. Experiments are carried out over the widely used CIFAR10, Caltech256 and TinyImageNet datasets in the image classification framework. The VGG, ResNet and DenseNet models are used to test the impact of the proposed attack. It is observed that the performance of the CNNs degrades drastically under the proposed CCP attack, demonstrating the effect of this simple attack on the robustness of trained CNN models. The results are also compared with existing CNN fooling approaches in terms of the accuracy drop. We also propose a primary defense mechanism for this problem by augmenting the training dataset with the proposed CCP attack. Experiments show state-of-the-art performance with the proposed defense in terms of CNN robustness under the CCP attack. The code is made publicly available at \url{https://github.com/jayendrakantipudi/Color-Channel-Perturbation-Attack}.
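To make the channel-mixing idea concrete, below is a minimal sketch of a CCP-style transform, assuming each output channel is a random convex combination of the original R, G, B channels. The function name `ccp_attack`, the uniform weight sampling, and the per-channel normalization are illustrative assumptions; the exact sampling scheme is defined in the paper and the linked repository.

```python
import numpy as np

def ccp_attack(image, rng=None):
    """Apply a CCP-style color channel perturbation to one image.

    A minimal sketch: each output channel is a stochastic linear
    combination of the original R, G, B channels. The authors' exact
    weight distribution may differ; see the official repository for
    the reference implementation.
    """
    rng = np.random.default_rng() if rng is None else rng
    img = image.astype(np.float32)                 # H x W x 3
    weights = rng.uniform(0.0, 1.0, size=(3, 3))   # weights[n, c]: old channel c -> new channel n
    weights /= weights.sum(axis=1, keepdims=True)  # normalize rows so intensities stay in range
    perturbed = np.einsum('hwc,nc->hwn', img, weights)
    return np.clip(perturbed, 0, 255).astype(image.dtype)

# Defense (hypothetical usage): augment the training set with CCP images
# so the model sees perturbed color distributions during training.
# x_train: uint8 images of shape (N, H, W, 3)
# x_aug = np.stack([ccp_attack(x) for x in x_train])
```

Because the rows of the weight matrix sum to one, each new channel remains a valid intensity map; the perturbation changes the color distribution of the image while leaving its spatial structure intact, which is why a human still recognizes the object even when the CNN is fooled.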

