
Proceedings Paper

Reducing the cost of visual DL datasets
Author(s): Philip R. Osteen; Jason L. Owens; Brian Kaukeinen

Paper Abstract

Intelligent military systems require perception capabilities that are flexible, dynamic, and robust to unstructured environments and new situations. However, current state-of-the-art algorithms are based on deep learning and require large amounts of data, along with a proportionally large human effort for collection and annotation. To help improve this situation, we define a method of comparing 3D environment reconstructions without ground truth that exploits available reflexive information, and we use the method to evaluate existing RGBD mapping algorithms in an effort to generate a large, fully annotated dataset for visual learning tasks. In addition, we describe algorithms and software that support rapid manual annotation of these reconstructed 3D environments for a variety of vision tasks. Our results show that we can use existing datasets as well as synthetic data to bootstrap tools that allow us to quickly and efficiently label larger datasets without ground truth, making efficient use of human effort without requiring crowdsourcing techniques.
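
As an illustration only (the paper's reflexive-information metric is not detailed in the abstract, so this is not the authors' method), one common way to compare two 3D reconstructions of the same scene without ground truth is a symmetric nearest-neighbor (chamfer-style) consistency score between their point clouds. A minimal Python sketch, assuming numpy and scipy are available and that both reconstructions are provided as (N, 3) arrays of points:

    # Illustrative sketch only: symmetric nearest-neighbor consistency between
    # two point-cloud reconstructions of the same scene (not the paper's metric).
    import numpy as np
    from scipy.spatial import cKDTree

    def reconstruction_consistency(points_a: np.ndarray, points_b: np.ndarray) -> float:
        """Mean symmetric nearest-neighbor distance between two (N, 3) point clouds."""
        tree_a, tree_b = cKDTree(points_a), cKDTree(points_b)
        d_ab, _ = tree_b.query(points_a)  # distance from each point in A to its nearest neighbor in B
        d_ba, _ = tree_a.query(points_b)  # distance from each point in B to its nearest neighbor in A
        return 0.5 * (d_ab.mean() + d_ba.mean())

    if __name__ == "__main__":
        # Toy example: two noisy samplings of the same region (lower score = more consistent).
        rng = np.random.default_rng(0)
        cloud_a = rng.random((5000, 3))
        cloud_b = rng.random((5000, 3)) + rng.normal(0.0, 0.005, (5000, 3))
        print(f"consistency score: {reconstruction_consistency(cloud_a, cloud_b):.4f}")

Scores like this can rank reconstructions of the same environment relative to one another even when no ground-truth model exists, which is the general setting the abstract describes.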

Paper Details

Date Published: 10 May 2019
PDF: 19 pages
Proc. SPIE 11006, Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications, 110060F (10 May 2019); doi: 10.1117/12.2519114
Author Affiliations:
Philip R. Osteen, U.S. Army Research Lab. (United States)
Jason L. Owens, U.S. Army Research Lab. (United States)
Brian Kaukeinen, U.S. Army Research Lab. (United States)


Published in SPIE Proceedings Vol. 11006:
Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications
Tien Pham, Editor
