
Proceedings Paper

Integrating visual learning within a model-based ATR system

Paper Abstract

Automatic target recognition (ATR) systems, like human photo-interpreters, rely on a variety of visual information for detecting, classifying, and identifying man-made objects in aerial imagery. We describe the integration of a visual learning component into the Image Data Conditioner (IDC) for target/clutter and other visual classification tasks. The component is based on an implementation of a model of the visual cortex developed by Serre, Wolf, and Poggio. Visual learning in an ATR context requires the ability to recognize objects independent of location, scale, and rotation. Our method uses IDC to extract, rotate, and scale image chips at candidate target locations. A bootstrap learning method effectively extends the operation of the classifier beyond the training set and provides a measure of confidence. We show how the classifier can be used to learn other features that are difficult to compute from imagery, such as target direction, and to assess the performance of the visual learning process itself.
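The full paper is behind the paywall, but the chip-normalization step the abstract describes (extracting, rotating, and scaling image chips at candidate target locations so the classifier sees a canonical view) can be sketched in a few lines. This is our own minimal illustration, not code from the paper: the function name, nearest-neighbor resampling, and parameter conventions are all assumptions.

```python
import numpy as np

def extract_chip(image, center, angle_deg, scale, chip_size):
    """Illustrative sketch (not the paper's IDC code): pull a
    chip_size x chip_size chip from a 2-D image, rotated by
    angle_deg about `center` (row, col) and resampled by `scale`,
    using nearest-neighbor sampling with edge clamping."""
    h, w = image.shape
    cy, cx = center
    half = chip_size / 2.0
    theta = np.deg2rad(angle_deg)
    cos_t, sin_t = np.cos(theta), np.sin(theta)
    # Output pixel grid, as offsets from the chip center,
    # scaled back to the source image's resolution.
    vs, us = np.mgrid[0:chip_size, 0:chip_size]
    dy = (vs - half) * scale
    dx = (us - half) * scale
    # Rotate the offsets back into the source frame and sample.
    sy = cy + (sin_t * dx + cos_t * dy)
    sx = cx + (cos_t * dx - sin_t * dy)
    iy = np.clip(np.rint(sy).astype(int), 0, h - 1)
    ix = np.clip(np.rint(sx).astype(int), 0, w - 1)
    return image[iy, ix]
```

With angle 0 and scale 1 this reduces to a plain crop centered on the candidate location; nonzero angle and scale produce the rotated, rescaled chips the classifier would be trained on.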

Paper Details

Date Published: 2 May 2017
PDF: 11 pages
Proc. SPIE 10200, Signal Processing, Sensor/Information Fusion, and Target Recognition XXVI, 102000Z (2 May 2017); doi: 10.1117/12.2264806
Author Affiliations:
Mark Carlotto, General Dynamics Mission Systems (United States)
Mark Nebrich, General Dynamics Mission Systems (United States)

Published in SPIE Proceedings Vol. 10200:
Signal Processing, Sensor/Information Fusion, and Target Recognition XXVI
Ivan Kadar, Editor(s)

© SPIE.