
Proceedings Paper
Video-based convolutional neural networks for activity recognition from robot-centric videos
Paper Abstract
In this evaluation paper, we discuss convolutional neural network (CNN)-based approaches for human activity recognition. In particular, we investigate CNN architectures designed to capture temporal information in videos and their applications to the human activity recognition problem. There have been multiple previous works using CNN features for videos. These include CNNs using 3-D XYT convolutional filters, CNNs using pooling operations on top of per-frame image-based CNN descriptors, and recurrent neural networks that learn temporal changes in per-frame CNN descriptors. We experimentally compare some of these representative CNNs using first-person human activity videos. We especially focus on videos from a robot's viewpoint, captured during its operations and human-robot interactions.
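The two non-recurrent families of video CNNs mentioned above can be illustrated with a minimal sketch. This is a hypothetical toy example (random vectors stand in for real CNN features; the filter and clip sizes are arbitrary assumptions, not the paper's settings): temporal pooling collapses per-frame descriptors into one fixed-length video descriptor, while a 3-D XYT convolution slides a filter over space and time jointly.

```python
import numpy as np

# Hypothetical stand-ins: T per-frame CNN descriptors of dimension D
# (random vectors here; a real pipeline would use features from an
# image CNN applied to each frame).
rng = np.random.default_rng(0)
T, D = 16, 128
frame_features = rng.standard_normal((T, D))

# Family 1: pooling over per-frame descriptors. Max/average pooling over
# the time axis yields a single D-dim video descriptor regardless of T.
video_max = frame_features.max(axis=0)    # shape (D,)
video_avg = frame_features.mean(axis=0)   # shape (D,)

# Family 2: a 3-D XYT convolution. One 3x3x3 filter slid over a toy
# clip of T 8x8 "frames", valid mode (no padding, stride 1).
X = rng.standard_normal((T, 8, 8))
w = rng.standard_normal((3, 3, 3))
out = np.zeros((T - 2, 6, 6))
for t in range(T - 2):
    for i in range(6):
        for j in range(6):
            out[t, i, j] = np.sum(X[t:t + 3, i:i + 3, j:j + 3] * w)

print(video_max.shape, video_avg.shape, out.shape)
```

Note the key contrast: pooling discards frame ordering entirely, whereas the XYT filter responds to local spatio-temporal patterns, which is why the paper compares these families on first-person activity videos.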
Paper Details
Date Published: 13 May 2016
PDF: 6 pages
Proc. SPIE 9837, Unmanned Systems Technology XVIII, 98370R (13 May 2016); doi: 10.1117/12.2229531
Published in SPIE Proceedings Vol. 9837:
Unmanned Systems Technology XVIII
Robert E. Karlsen; Douglas W. Gage; Charles M. Shoemaker; Grant R. Gerhart, Editor(s)
Author Affiliations
M. S. Ryoo, Indiana Univ. (United States)
Larry Matthies, Jet Propulsion Lab. (United States)
© SPIE. Terms of Use
