
Proceedings Paper

Characterization of CNN classifier performance with respect to variation in optical contrast, using synthetic electro-optical data
Author(s): Christopher Menart; Colin Leong; Olga Mendoza-Schrock; Edmund Zelnio

Paper Abstract

Deep neural networks demonstrate high performance at classifying high-dimensional signals, but often fail to generalize to data that is different from the data they were trained on. In this paper, we investigate the resilience of convolutional neural networks (CNNs) to unforeseen operating conditions. Specifically, we empirically evaluate the ability of CNN models to generalize across changes in image contrast. Multiple models are trained on electro-optical (EO) or near-infrared (IR) data, and are evaluated in environments with degraded contrast compared to training. Experiments are replicated across varying architectures, including state-of-the-art classification models such as ResNet-152, and across both synthetic and measured datasets. In comparison to models trained and evaluated on identically distributed data, these models can generalize well when contrast invariance is built up through data augmentation. Future work will investigate CNN ability to generalize to other changes in operating conditions.
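The abstract attributes the models' robustness to contrast invariance "built up through data augmentation." The paper's exact augmentation scheme is not given here; as an illustration only, one common approach is to randomly rescale pixel values about the image mean during training, which is a minimal sketch of the general idea (the function name, parameter ranges, and NumPy-based formulation below are assumptions, not the authors' method):

```python
import numpy as np

def random_contrast(image, low=0.3, high=1.0, rng=None):
    """Randomly degrade image contrast for training-time augmentation.

    Scales deviations from the image mean by a factor drawn uniformly
    from [low, high]; factors below 1.0 reduce contrast. Assumes pixel
    values in [0, 1]. Illustrative only -- not the paper's scheme.
    """
    rng = np.random.default_rng() if rng is None else rng
    factor = rng.uniform(low, high)
    mean = image.mean()
    # Shift to zero mean, scale, shift back, and clip to the valid range.
    return np.clip(mean + factor * (image - mean), 0.0, 1.0)
```

Applying such a transform to each training batch exposes the network to a range of contrast conditions, so that test-time contrast degradation no longer falls outside the training distribution.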

Paper Details

Date Published: 14 May 2019
PDF: 11 pages
Proc. SPIE 10988, Automatic Target Recognition XXIX, 109880N (14 May 2019); doi: 10.1117/12.2519494
Author Affiliations:
Christopher Menart, AFRL/RYA (United States)
Colin Leong, Univ. of Dayton Research Institute (United States)
Olga Mendoza-Schrock, AFRL/RYA (United States)
Edmund Zelnio, AFRL/RYA (United States)

Published in SPIE Proceedings Vol. 10988:
Automatic Target Recognition XXIX
Riad I. Hammoud; Timothy L. Overman, Editor(s)

© SPIE.