
Proceedings Paper

Efficient generation of image chips for training deep learning algorithms
Author(s): Sanghui Han; Alex Fafard; John Kerekes; Michael Gartley; Emmett Ientilucci; Andreas Savakis; Charles Law; Jason Parhan; Matt Turek; Keith Fieldhouse; Todd Rovito

Paper Abstract

Training deep convolutional networks for satellite or aerial image analysis often requires a large amount of training data. For a more robust algorithm, the training data need variation not only in the background and target, but also radiometric variation in the image, such as shadowing, illumination changes, atmospheric conditions, and imaging platforms with different collection geometries. Data augmentation is a common approach to generating additional training data, but it is often insufficient to account for real-world changes in lighting, location, or viewpoint outside of the original collection geometry. Alternatively, image simulation can be an efficient way to augment training data with all of these variations, such as changing backgrounds, that may be encountered in real data. The Digital Imaging and Remote Sensing Image Generation (DIRSIG) model is a tool that produces synthetic imagery using a suite of physics-based radiation propagation modules. DIRSIG can simulate images taken from different sensors with variation in collection geometry, spectral response, solar elevation angle, atmospheric model, target, and background. Simulation of Urban Mobility (SUMO) is a multi-modal traffic simulation tool that explicitly models vehicles moving through a given road network. The output of the SUMO model was incorporated into DIRSIG to generate scenes with moving vehicles; the same approach, with slight modifications, was used for helicopter targets. Using the combination of DIRSIG and SUMO, we quickly generated many small images with the target at the center and with different backgrounds. The simulations produced images with vehicles and helicopters as targets, along with corresponding images without targets. Using parallel computing, 120,000 training images were generated in about an hour. Preliminary results show an improvement in the deep learning algorithm when real training images are augmented with the simulated images, especially in cases where sufficient real data were difficult to obtain.
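
As a rough illustration of how SUMO trajectories can feed a scene generator, the sketch below runs SUMO headlessly with floating-car-data (FCD) output and parses per-timestep vehicle poses. The configuration and file names are placeholders, and the hand-off into the DIRSIG scene description is only indicated in a comment; this is not the authors' actual pipeline code.

```python
import subprocess
import xml.etree.ElementTree as ET

# Run SUMO headlessly; --fcd-output records every vehicle's position,
# heading, and speed at each simulation step. "scenario.sumocfg" is a
# placeholder configuration file.
subprocess.run(
    ["sumo", "-c", "scenario.sumocfg", "--fcd-output", "vehicles.xml"],
    check=True,
)

# The FCD file holds one <timestep> element per step, each containing
# <vehicle id=... x=... y=... angle=... speed=.../> records.
for timestep in ET.parse("vehicles.xml").getroot().iter("timestep"):
    t = float(timestep.get("time"))
    for veh in timestep.iter("vehicle"):
        pose = (float(veh.get("x")), float(veh.get("y")),
                float(veh.get("angle")))  # angle in degrees, SUMO convention
        # In the paper's workflow, poses like these would be written into
        # the scene description that DIRSIG renders at each frame.
        print(t, veh.get("id"), *pose)
```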
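
The reported throughput (roughly 120,000 chips in about an hour) implies an embarrassingly parallel render loop, since each target-centered chip is independent of the others. A minimal sketch of that pattern follows; render_chip and the per-chip .sim files are hypothetical stand-ins for the actual DIRSIG invocation, which is not given on this page.

```python
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def render_chip(task_id: int) -> Path:
    """Hypothetical wrapper that renders one target-centered image chip.

    In the paper's setup this would invoke DIRSIG on a per-chip
    simulation file with a sampled collection geometry, atmosphere,
    and background; the commented call below is illustrative only.
    """
    out = Path("chips") / f"chip_{task_id:06d}.img"
    # subprocess.run(["dirsig", f"configs/chip_{task_id:06d}.sim"], check=True)
    return out

if __name__ == "__main__":
    # Chips are independent, so the work scales across all cores
    # (or, as in the paper, across many machines).
    with ProcessPoolExecutor() as pool:
        paths = list(pool.map(render_chip, range(120_000), chunksize=256))
    print(f"rendered {len(paths)} chips")
```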

Paper Details

Date Published: 1 May 2017
PDF: 9 pages
Proc. SPIE 10202, Automatic Target Recognition XXVII, 1020203 (1 May 2017); doi: 10.1117/12.2261702
Author Affiliations:
Sanghui Han, Rochester Institute of Technology (United States)
Alex Fafard, Rochester Institute of Technology (United States)
John Kerekes, Rochester Institute of Technology (United States)
Michael Gartley, Rochester Institute of Technology (United States)
Emmett Ientilucci, Rochester Institute of Technology (United States)
Andreas Savakis, Rochester Institute of Technology (United States)
Charles Law, Kitware Inc. (United States)
Jason Parhan, Rensselaer Polytechnic Institute (United States)
Matt Turek, Kitware Inc. (United States)
Keith Fieldhouse, Kitware Inc. (United States)
Todd Rovito, Air Force Research Lab. (United States)


Published in SPIE Proceedings Vol. 10202:
Automatic Target Recognition XXVII
Firooz A. Sadjadi; Abhijit Mahalanobis, Editor(s)
