Proceedings Paper

Design of adversarial targets: fooling deep ATR systems

Paper Abstract

Deep Convolutional Neural Networks (DCNNs) have proven to be an exceptional tool for object recognition in various computer vision applications. However, recent findings have shown that such state-of-the-art models can be easily deceived by adding slight, imperceptible perturbations to key pixels in the input image. In this paper, we focus on deceiving Automatic Target Recognition (ATR) classifiers. These classifiers are built to recognize specified targets in a scene and simultaneously identify their class types. In our work, we explore the vulnerabilities of DCNN-based target classifiers. We demonstrate significant progress in developing infrared adversarial targets by adding small perturbations to the input image such that the perturbation cannot be easily detected. The algorithm adapts to both targeted and non-targeted adversarial attacks. Our findings reveal promising results that reflect the serious implications of adversarial attacks.
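The abstract does not spell out the perturbation method, so the following is only a minimal, generic sketch of a gradient-sign attack (in the spirit of FGSM) against a DCNN classifier, not the authors' algorithm. The names model, x, y, epsilon, and fgsm_perturb are illustrative assumptions; a pretrained PyTorch classifier and a normalized input batch are presumed.

    # Illustrative gradient-sign perturbation (assumption: PyTorch model, inputs in [0, 1]).
    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, y, epsilon=0.01, targeted=False):
        """Craft a small additive perturbation from the loss gradient sign.

        Non-targeted: step *up* the loss w.r.t. the true label y.
        Targeted: step *down* the loss w.r.t. the desired label y.
        """
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        sign = -1.0 if targeted else 1.0
        # Keep epsilon small so the perturbation stays hard to detect visually.
        x_adv = x_adv + sign * epsilon * x_adv.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()

Switching the targeted flag reverses the gradient step, which mirrors the abstract's claim that a single attack formulation can cover both targeted and non-targeted settings.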

Paper Details

Date Published: 14 May 2019
PDF: 10 pages
Proc. SPIE 10988, Automatic Target Recognition XXIX, 109880F (14 May 2019); doi: 10.1117/12.2518945
Author Affiliations:
Uche M. Osahor, West Virginia Univ. (United States)
Nasser M. Nasrabadi, West Virginia Univ. (United States)


Published in SPIE Proceedings Vol. 10988:
Automatic Target Recognition XXIX
Riad I. Hammoud; Timothy L. Overman, Editors

© SPIE.