AEPH
Industry Science and Engineering, Vol. 2 No. 10 (ISE 2025)
Research on Image Adversarial Example Generation Method Based on Diff-AIGAN
DOI: https://doi.org/10.62381/I255A06
Author(s)
Hanqi Liu, Jiaming Xu, Yanbing Liang
Affiliation(s)
School of Science, North China University of Technology, Tangshan, Hebei, China
Abstract
To address the problems of perturbations drifting away from key image regions and insufficient controllability in adversarial example generation methods based on Generative Adversarial Networks (GANs), such as AIGAN, which lead to suboptimal attack effectiveness and low visual authenticity, this paper proposes Diff-AIGAN, an adversarial example generation method. First, a Convolutional Block Attention Module (CBAM) is introduced to recalibrate the feature maps with a channel-first, spatial-later attention mechanism, guiding the network to focus automatically on the more important channels and spatial positions. Next, the fused feature maps are fed into the generator to produce an initial perturbation, and a Stochastic Differential Guide Module (SDGM) is used to enhance the controllability of that perturbation, yielding higher-quality adversarial examples. Finally, the adversarial examples are passed to the discriminator and the target model; the resulting loss is computed iteratively and fed back to the generator, which learns to produce more effective perturbations. Experimental results show that Diff-AIGAN achieves attack success rates above 99% against LeNetC and VGG11 on the MNIST dataset, and 96.15% and 96.43% against ResNet18 and ResNet32 on the CIFAR-10 dataset, respectively. Moreover, the generated perturbations concentrate on key image regions, exhibit high sparsity and small magnitude, and outperform the comparison methods across all reported metrics.
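The paper itself provides no code; as a rough illustration of the channel-first, spatial-later recalibration that the abstract attributes to CBAM, the following NumPy sketch computes channel attention from average- and max-pooled descriptors passed through a shared MLP, then spatial attention from channel-pooled maps passed through a single convolution. All weights here are random placeholders standing in for learned parameters, not the authors' trained model.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(fmap, w1, w2):
    # fmap: (C, H, W). Squeeze the spatial dims with both average- and
    # max-pooling, pass each descriptor through a shared 2-layer MLP
    # (ReLU in between), sum, and squash to (0, 1).
    avg = fmap.mean(axis=(1, 2))                   # (C,)
    mx = fmap.max(axis=(1, 2))                     # (C,)
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)   # shared bottleneck MLP
    return sigmoid(mlp(avg) + mlp(mx))             # (C,)

def spatial_attention(fmap, kernel):
    # Pool across the channel axis, stack the two maps, and apply one
    # k x k convolution (naive zero-padded implementation).
    avg = fmap.mean(axis=0)
    mx = fmap.max(axis=0)
    stacked = np.stack([avg, mx])                  # (2, H, W)
    k = kernel.shape[-1]
    pad = k // 2
    padded = np.pad(stacked, ((0, 0), (pad, pad), (pad, pad)))
    H, W = avg.shape
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(padded[:, i:i + k, j:j + k] * kernel)
    return sigmoid(out)                            # (H, W)

def cbam(fmap, w1, w2, kernel):
    # "Channel-first, spatial-later": recalibrate channels, then positions.
    fmap = fmap * channel_attention(fmap, w1, w2)[:, None, None]
    return fmap * spatial_attention(fmap, kernel)[None, :, :]
```

Because both attention maps lie in (0, 1), the output never exceeds the input feature map in magnitude; positions and channels deemed unimportant are suppressed, which is what steers the subsequent perturbation toward key regions.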
Keywords
Adversarial Examples; Generative Adversarial Networks (GANs); Diffusion Models; Perturbations; Image Generation
Copyright © 2020-2035 Academic Education Publishing House. All Rights Reserved.