
Proceedings Paper

Augmented reality data generation for training deep learning neural network
Author(s): Kevin Payumo; Alexander Huyen; Landan Seguin; Thomas T. Lu; Edward Chow; Gil Torres

Paper Abstract

One of the major challenges in deep learning is obtaining sufficiently large labeled training datasets, which can be expensive and time-consuming to collect. This paper presents an approach to training Deep Neural Network (DNN) segmentation models from a minimal number of initial labeled training samples. The procedure creates synthetic data and uses image registration to calculate affine transformations that are applied to the synthetic data. From a small dataset, the method generates a high-quality augmented reality synthetic dataset with strong variance while remaining consistent with real cases. Results show improved segmentation of various target features and increased average target confidence.
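The core idea of the abstract — multiplying a small labeled set by applying sampled affine transformations to synthetic copies — can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the paper derives its affine parameters via image registration, whereas here the scale, rotation, and translation ranges (and the helper names `make_affine` and `augment_points`) are hypothetical choices for demonstration.

```python
import numpy as np

def make_affine(scale=1.0, angle_deg=0.0, tx=0.0, ty=0.0):
    """Build a 3x3 homogeneous affine matrix (uniform scale, rotation,
    translation). In the paper these parameters come from image
    registration; here they are supplied directly for illustration."""
    theta = np.deg2rad(angle_deg)
    c, s = np.cos(theta) * scale, np.sin(theta) * scale
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0.0, 0.0, 1.0]])

def augment_points(points, rng, n_variants=5):
    """Generate augmented copies of labeled 2-D points (e.g. segmentation
    polygon vertices) by sampling random affine transforms. The sampling
    ranges below are illustrative assumptions."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coords
    variants = []
    for _ in range(n_variants):
        A = make_affine(scale=rng.uniform(0.9, 1.1),
                        angle_deg=rng.uniform(-10.0, 10.0),
                        tx=rng.uniform(-5.0, 5.0),
                        ty=rng.uniform(-5.0, 5.0))
        variants.append((pts_h @ A.T)[:, :2])  # back to Cartesian coords
    return variants

# Example: augment the corner labels of a 64x64 synthetic target patch.
rng = np.random.default_rng(0)
corners = np.array([[0.0, 0.0], [64.0, 0.0], [64.0, 64.0], [0.0, 64.0]])
variants = augment_points(corners, rng)
print(len(variants), variants[0].shape)
```

In a full pipeline the same sampled matrix would also warp the synthetic image (e.g. with an image library's affine-warp routine) so that pixels and labels stay aligned.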

Paper Details

Date Published: 30 April 2018
PDF: 12 pages
Proc. SPIE 10649, Pattern Recognition and Tracking XXIX, 106490U (30 April 2018); doi: 10.1117/12.2305202
Author Affiliations:
Kevin Payumo, Univ. of California, Irvine (United States)
Alexander Huyen, Jet Propulsion Lab. (United States)
Landan Seguin, Georgia Institute of Technology (United States)
Thomas T. Lu, Jet Propulsion Lab. (United States)
Edward Chow, Jet Propulsion Lab. (United States)
Gil Torres, Naval Air Warfare Ctr. (United States)


Published in SPIE Proceedings Vol. 10649:
Pattern Recognition and Tracking XXIX
Mohammad S. Alam, Editor(s)

© SPIE.