
Proceedings Paper

Toward a large-scale multimodal event-based dataset for neuromorphic deep learning applications
Author(s): Sarah Leung; E. Jared Shamwell; Christopher Maxey; William D. Nothwang

Paper Abstract

We discuss our efforts with event-based vision and describe our large-scale, heterogeneous robotic dataset, which will add to the growing number of publicly available event-based datasets. Our dataset comprises over 10 hours of runtime from a mobile robot equipped with two DAVIS240C cameras and an Astra depth camera as it randomly wanders an indoor environment while two other independently moving robots wander the same scene. Vicon ground-truth pose is provided for all three robots. To our knowledge, this is the largest event-based dataset with ground-truthed, independently moving entities.
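For readers planning to work with recordings like these, the sketch below illustrates the standard DAVIS event representation and a common preprocessing step for neuromorphic deep learning. The (timestamp, x, y, polarity) event tuple and the 240x180 DAVIS240C resolution are standard for the sensor; the structured dtype, field names, and accumulation function are illustrative assumptions, not the dataset's published format.

```python
import numpy as np

# DAVIS240C sensor resolution (standard for this camera).
WIDTH, HEIGHT = 240, 180

# Assumed in-memory layout for an event stream; each event is the
# standard (timestamp, x, y, polarity) tuple emitted by DAVIS sensors.
event_dtype = np.dtype([
    ("t", np.float64),   # timestamp in seconds
    ("x", np.uint16),    # pixel column, 0..WIDTH-1
    ("y", np.uint16),    # pixel row, 0..HEIGHT-1
    ("p", np.int8),      # polarity: +1 brightness increase, -1 decrease
])

def events_to_frame(events, t_start, t_end):
    """Accumulate signed event polarities over a time window into a
    2D frame -- a common way to feed event data to conventional
    deep networks. A hypothetical helper, not the paper's method."""
    frame = np.zeros((HEIGHT, WIDTH), dtype=np.float32)
    window = events[(events["t"] >= t_start) & (events["t"] < t_end)]
    np.add.at(frame, (window["y"], window["x"]), window["p"])
    return frame

if __name__ == "__main__":
    # Synthetic events standing in for a real recording.
    rng = np.random.default_rng(0)
    n = 10_000
    events = np.empty(n, dtype=event_dtype)
    events["t"] = np.sort(rng.uniform(0.0, 1.0, n))
    events["x"] = rng.integers(0, WIDTH, n)
    events["y"] = rng.integers(0, HEIGHT, n)
    events["p"] = rng.choice([-1, 1], n).astype(np.int8)

    frame = events_to_frame(events, 0.0, 0.033)  # ~one 30 Hz frame
    print(frame.shape, frame.min(), frame.max())
```

Accumulating polarities into frames is only one of several event representations (voxel grids and time surfaces are alternatives); the choice depends on the downstream network architecture.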

Paper Details

Date Published: 14 May 2018
PDF: 10 pages
Proc. SPIE 10639, Micro- and Nanotechnology Sensors, Systems, and Applications X, 106391T (14 May 2018); doi: 10.1117/12.2305504
Author Affiliations
Sarah Leung, U.S. Army Research Lab. (United States) and General Technical Services, LLC (United States)
E. Jared Shamwell, U.S. Army Research Lab. (United States) and General Technical Services, LLC (United States)
Christopher Maxey, U.S. Army Research Lab. (United States), Oak Ridge Associated Univs. (United States), and Univ. of Maryland, College Park (United States)
William D. Nothwang, U.S. Army Research Lab. (United States)


Published in SPIE Proceedings Vol. 10639:
Micro- and Nanotechnology Sensors, Systems, and Applications X
Thomas George; Achyut K. Dutta; M. Saif Islam, Editor(s)

© SPIE.