
Proceedings Paper

Deep adversarial attack on target detection systems

Paper Abstract

Target detection systems identify targets by localizing their coordinates on an input image of interest. Ideally, this is achieved by labeling each pixel in the image as either background or a potential target. Deep Convolutional Neural Network (DCNN) classifiers have proven to be successful tools for computer vision applications. However, prior research confirms that even state-of-the-art classifier models are susceptible to adversarial attacks. In this paper, we show how to generate adversarial infrared images that deceive a DCNN-based target detector by adding small perturbations to the target regions. We demonstrate significant progress in developing visually imperceptible adversarial infrared images in which the targets remain visually recognizable to an expert, yet a DCNN-based target detector cannot detect them.
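The core idea described in the abstract — adding a small, spatially restricted perturbation to the target region so that a detector's confidence drops while the image stays visually intact — can be sketched in miniature. The code below is an illustrative assumption, not the paper's method: it uses a toy linear "detector" and a single FGSM-style signed-gradient step, with all names (`detector_score`, `masked_fgsm`, the mask coordinates) invented for this sketch.

```python
import numpy as np

def detector_score(image, weights):
    """Toy linear detector: higher score = more confident a target is present."""
    return float(np.sum(image * weights))

def masked_fgsm(image, weights, target_mask, epsilon=0.05):
    """One FGSM-style step restricted to the target region.

    Moves pixels against the gradient of the score, but only where
    target_mask is True, so the perturbation stays localized and its
    magnitude is bounded by epsilon per pixel.
    """
    grad = weights                              # gradient of a linear score w.r.t. the image
    perturbation = -epsilon * np.sign(grad) * target_mask
    return np.clip(image + perturbation, 0.0, 1.0)

rng = np.random.default_rng(0)
image = rng.uniform(0.2, 0.8, size=(8, 8))      # stand-in for an infrared image
weights = rng.normal(size=(8, 8))               # stand-in for detector gradients
mask = np.zeros((8, 8), dtype=bool)
mask[2:5, 2:5] = True                           # assumed target region

adv = masked_fgsm(image, weights, mask, epsilon=0.05)
```

Because the perturbation is zero outside the mask and bounded by `epsilon` inside it, pixels outside the target region are untouched and the change is visually small; a real attack on a DCNN would replace the linear gradient with backpropagated gradients of the detection loss.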

Paper Details

Date Published: 10 May 2019
PDF: 9 pages
Proc. SPIE 11006, Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications, 110061Q (10 May 2019); doi: 10.1117/12.2518970
Uche M. Osahor, West Virginia Univ. (United States)
Nasser M. Nasrabadi, West Virginia Univ. (United States)


Published in SPIE Proceedings Vol. 11006:
Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications
Tien Pham, Editor(s)

© SPIE.