
Proceedings Paper

Utilizing full neuronal states for adversarial robustness
Author(s): Alex Gain; Hava T. Siegelmann

Paper Abstract

Small, imperceptible perturbations of the input data can lead to deep neural networks (DNNs) making egregious errors during inference, such as misclassifying an image of a dog as a cat with high probability. Defending against adversarial examples is therefore of great interest to the sensing and machine learning communities, as it bears directly on the security of practical systems in which DNNs are deployed. While many approaches to defending against adversarial attacks have been explored, few make use of the full state of the entire network, considering instead only the output layer and gradient information. We develop several motivated techniques that make use of the full network state, improving adversarial robustness. We provide principled motivation for our techniques via analysis of attractor dynamics, shown to occur in the highly recurrent human brain, and validate our improvements via empirical results on standard datasets and white-box attacks.
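The paper itself is only three pages and does not include code; the following is a minimal, hypothetical PyTorch sketch of what "using the full network state" can mean in practice, namely capturing every layer's activations during a forward pass rather than inspecting only the output logits and gradients. SmallNet, capture_full_state, and all parameter choices are illustrative assumptions, not the authors' implementation; an attractor-style defense of the kind the abstract describes would operate on states collected in this way.

# Illustrative sketch only (assumed PyTorch setup); NOT the authors' method.
import torch
import torch.nn as nn

class SmallNet(nn.Module):
    """Toy fully connected classifier used purely for illustration."""
    def __init__(self, in_dim=784, hidden=256, n_classes=10):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, hidden)
        self.fc3 = nn.Linear(hidden, n_classes)

    def forward(self, x):
        h1 = torch.relu(self.fc1(x))
        h2 = torch.relu(self.fc2(h1))
        return self.fc3(h2)

def capture_full_state(model, x):
    """Run one forward pass while recording every Linear layer's activation."""
    states, hooks = {}, []

    def make_hook(layer_name):
        def hook(module, inputs, output):
            # Store a detached copy of this layer's activations.
            states[layer_name] = output.detach()
        return hook

    for name, module in model.named_modules():
        if isinstance(module, nn.Linear):
            hooks.append(module.register_forward_hook(make_hook(name)))
    logits = model(x)
    for h in hooks:
        h.remove()
    return logits, states

if __name__ == "__main__":
    net = SmallNet()
    x = torch.randn(4, 784)  # a batch of 4 hypothetical inputs
    logits, states = capture_full_state(net, x)
    # `states` now holds every linear layer's activations, i.e. the "full
    # network state" that a defense could examine beyond the output layer.
    for name, act in states.items():
        print(name, tuple(act.shape))

The forward-hook mechanism is chosen here only because it collects intermediate activations without modifying the model; how those states are then used for robustness is specific to the paper and is not reproduced in this sketch.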

Paper Details

Date Published: 12 November 2019
PDF: 3 pages
Proc. SPIE 11197, SPIE Future Sensing Technologies, 1119712 (12 November 2019); doi: 10.1117/12.2542804
Author Affiliations:
Alex Gain, Johns Hopkins Univ. (United States)
Hava T. Siegelmann, Univ. of Massachusetts Amherst (United States)


Published in SPIE Proceedings Vol. 11197:
SPIE Future Sensing Technologies
Masafumi Kimata; Christopher R. Valenta, Editor(s)
