
Proceedings Paper

Understanding adversarial attack and defense towards deep compressed neural networks
Author(s): Qi Liu; Tao Liu; Wujie Wen

Paper Abstract

Modern deep neural networks (DNNs) have demonstrated phenomenal success in many exciting applications such as computer vision, speech recognition, and natural language processing, thanks to recent machine learning model innovation and computing hardware advancement. However, recent studies show that state-of-the-art DNNs can be easily fooled by carefully crafted input perturbations that are even imperceptible to human eyes, namely "adversarial examples", raising emerging security concerns for DNN-based intelligent systems. Moreover, to ease the intensive computation and memory resource requirements imposed by the fast-growing DNN model size, aggressively pruning the redundant model parameters through various hardware-favorable DNN techniques (e.g., hashing, deep compression, circulant projection) has become a necessity. This procedure further complicates the security issues of DNN systems. In this paper, we first study the vulnerabilities of hardware-oriented deep compressed DNNs under various adversarial attacks. Then we survey the existing mitigation approaches such as defensive distillation, which was originally tailored to software-based DNN systems. Inspired by defensive distillation and weight reshaping, we further develop a near zero-cost but effective gradient silence (GS) method to protect both software- and hardware-based DNN systems against adversarial attacks. Compared with defensive distillation, our gradient silence method can achieve better resilience to adversarial attacks without additional training, while still maintaining very high accuracies across small and large DNN models for various image classification benchmarks like MNIST and CIFAR10.
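To make the notion of "carefully crafted input perturbations" concrete, the sketch below illustrates the standard Fast Gradient Sign Method (FGSM) on a toy logistic-regression "network" using only NumPy. This is a generic illustration of gradient-based adversarial example generation, not the paper's gradient silence (GS) defense or its exact attack setup; all names and model details here are assumptions for demonstration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x, w, b, y):
    """Binary cross-entropy loss of a logistic model on a single input."""
    p = sigmoid(float(w @ x + b))
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm_perturb(x, w, b, y, eps):
    """FGSM: push each input feature eps in the direction that increases loss.

    For a logistic model, the gradient of the cross-entropy loss with
    respect to the input x is (p - y) * w, so the attack needs only one
    forward pass and one analytic gradient.
    """
    p = sigmoid(float(w @ x + b))
    grad_x = (p - y) * w                     # dLoss/dx
    return x + eps * np.sign(grad_x)         # bounded perturbation per feature

# Hypothetical toy model and input, for illustration only.
rng = np.random.default_rng(0)
w = rng.normal(size=8)
b = 0.1
x = rng.normal(size=8)
y = 1

x_adv = fgsm_perturb(x, w, b, y, eps=0.1)
```

By construction the perturbation is bounded by eps in every coordinate yet strictly increases the loss on this convex toy model; on deep networks the same first-order step is what makes imperceptibly small perturbations so effective, and defenses like distillation or gradient silencing work by degrading the gradient signal this attack relies on.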

Paper Details

Date Published: 3 May 2018
PDF: 12 pages
Proc. SPIE 10630, Cyber Sensing 2018, 106300Q (3 May 2018); doi: 10.1117/12.2305226
Author Affiliations:
Qi Liu, Florida International Univ. (United States)
Tao Liu, Florida International Univ. (United States)
Wujie Wen, Florida International Univ. (United States)


Published in SPIE Proceedings Vol. 10630:
Cyber Sensing 2018
Igor V. Ternovskiy; Peter Chin, Editor(s)

© SPIE.