
Proceedings Paper

The Johns Hopkins University multimodal dataset for human action recognition
Author(s): Thomas S. Murray; Daniel R. Mendat; Philippe O. Pouliquen; Andreas G. Andreou

Paper Abstract

The Johns Hopkins University MultiModal Action (JHUMMA) dataset contains twenty-one actions recorded with four sensor systems in three different modalities. The data were collected with an acquisition system comprising three independent active sonar devices operating at three different frequencies and a Microsoft Kinect sensor that provides both RGB and depth data. We have developed algorithms for human action recognition from active acoustics and report baseline recognition performance as a benchmark.

Paper Details

Date Published: 21 May 2015
PDF: 16 pages
Proc. SPIE 9461, Radar Sensor Technology XIX; and Active and Passive Signatures VI, 94611U (21 May 2015); doi: 10.1117/12.2189349
Author Affiliations:
Thomas S. Murray, The Johns Hopkins Univ. (United States)
Daniel R. Mendat, The Johns Hopkins Univ. (United States)
Philippe O. Pouliquen, The Johns Hopkins Univ. (United States)
Andreas G. Andreou, The Johns Hopkins Univ. (United States)


Published in SPIE Proceedings Vol. 9461:
Radar Sensor Technology XIX; and Active and Passive Signatures VI
G. Charmaine Gilbreath; Kenneth I. Ranney; Armin Doerry; Chadwick Todd Hawley, Editor(s)
