
Proceedings Paper

Using image quality metrics to identify adversarial imagery for deep learning networks
Author(s): Josh Harguess; Jeremy Miclat; Julian Raheema

Paper Abstract

Deep learning has continued to gain momentum in applications across many critical areas of research in computer vision and machine learning. In particular, deep learning networks have had much success in image classification, especially when training data are abundantly available, as is the case with the ImageNet project. However, several researchers have exposed potential vulnerabilities of these networks to carefully crafted adversarial imagery. Additionally, researchers have shown the sensitivity of these networks to some types of noise and distortion. In this paper, we investigate the use of no-reference image quality metrics to identify adversarial imagery and images of poor quality that could potentially fool a deep learning network or dramatically reduce its accuracy. Results are shown on several adversarial image databases with comparisons to popular image classification databases.
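To illustrate the general idea of screening inputs with a no-reference quality score, here is a minimal sketch. It is not the metric used in the paper; it assumes a simple stand-in score (variance of the Laplacian, a common blur/noise indicator) and placeholder thresholds that would in practice be calibrated on a trusted reference set such as ImageNet validation images. The function and threshold names are hypothetical.

import numpy as np
from scipy import ndimage

def laplacian_variance(image: np.ndarray) -> float:
    # Simple no-reference quality score: variance of the Laplacian response.
    # Very low values suggest blur; unusually high values can indicate
    # high-frequency noise, one symptom of some adversarial perturbations.
    gray = image.mean(axis=2) if image.ndim == 3 else image
    return float(ndimage.laplace(gray.astype(float)).var())

def flag_suspicious(images, low=5.0, high=500.0):
    # Flag any image whose score falls outside a trusted range.
    # The thresholds here are placeholders for illustration only.
    return [not (low <= laplacian_variance(img) <= high) for img in images]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # A smooth gradient image (low score) vs. a noise-perturbed copy (higher score).
    smooth = np.tile(np.linspace(0.0, 255.0, 224), (224, 1))
    noisy = smooth + rng.normal(0.0, 20.0, smooth.shape)
    print(laplacian_variance(smooth), laplacian_variance(noisy))
    print(flag_suspicious([smooth, noisy]))

A detector of this kind would sit in front of the classifier and reject or re-examine images whose quality scores deviate from those of known-clean data; the paper evaluates purpose-built no-reference image quality metrics for this role rather than the toy score above.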

Paper Details

Date Published: 1 May 2017
PDF: 7 pages
Proc. SPIE 10199, Geospatial Informatics, Fusion, and Motion Video Analytics VII, 1019907 (1 May 2017); doi: 10.1117/12.2263584
Author Affiliations:
Josh Harguess, Space and Naval Warfare Systems Ctr. Pacific (United States)
Jeremy Miclat, Space and Naval Warfare Systems Ctr. Pacific (United States)
Julian Raheema, Space and Naval Warfare Systems Ctr. Pacific (United States)


Published in SPIE Proceedings Vol. 10199:
Geospatial Informatics, Fusion, and Motion Video Analytics VII
Kannappan Palaniappan; Peter J. Doucette; Gunasekaran Seetharaman; Anthony Stefanidis, Editor(s)

© SPIE.