Proceedings Paper

XGAN: adversarial attacks with GAN
Author(s): Xiaoyu Fang; Guoxu Cao; Huapeng Song; Zhiyou Ouyang

Paper Abstract

Recent studies have demonstrated that deep neural networks can be attacked by adding small pixel-level perturbations to the input data. Such perturbations are generally imperceptible to the human eye, yet they can completely subvert the output of a deep neural network classifier, enabling untargeted or targeted attacks. The common practice is to generate a perturbation for the target network and superimpose it on the original image. In this paper, we instead attack deep neural networks by using a GAN to generate adversarial images directly, rather than perturbing existing inputs. This method performs well in black-box settings and satisfies the preconditions of most neural network attacks. Using it, we achieved an 82% success rate for black-box targeted attacks on the CIFAR-10 and MNIST datasets, while keeping the generated images comparable to the originals.

Paper Details

Date Published: 27 November 2019
PDF: 6 pages
Proc. SPIE 11321, 2019 International Conference on Image and Video Processing, and Artificial Intelligence, 113211G (27 November 2019); doi: 10.1117/12.2543218
Author Affiliations:
Xiaoyu Fang, Nanjing Univ. of Posts and Telecommunications (China)
Guoxu Cao, Nanjing Univ. of Posts and Telecommunications (China)
Huapeng Song, Nanjing Univ. of Posts and Telecommunications (China)
Zhiyou Ouyang, Nanjing Univ. of Posts and Telecommunications (China)


Published in SPIE Proceedings Vol. 11321:
2019 International Conference on Image and Video Processing, and Artificial Intelligence
Ruidan Su, Editor(s)

© SPIE.