
Proceedings Paper

Spatial and temporal segmented dense trajectories for gesture recognition
Author(s): Kaho Yamada; Takeshi Yoshida; Kazuhiko Sumi; Hitoshi Habe; Ikuhisa Mitsugami

Paper Abstract

Recently, dense trajectories [1] have been shown to be a successful video representation for action recognition, and have demonstrated state-of-the-art results on a variety of datasets. However, when these trajectories are applied to gesture recognition, recognizing similar and fine-grained motions remains problematic. In this paper, we propose a new method in which dense trajectories are computed only in segmented regions around detected human body parts. Spatial segmentation is achieved by body part detection [2]; temporal segmentation is performed over a fixed number of video frames. The proposed method removes background motion noise and can distinguish similar and fine-grained motions. Because only a few video datasets are available for gesture classification, we have constructed a new gesture dataset and evaluated the proposed method on it. The experimental results show that the proposed method outperforms the original dense trajectories.
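To make the two segmentation steps concrete, the sketch below illustrates how trajectories might be restricted to detected body-part regions (spatial segmentation) and grouped into fixed-length frame windows (temporal segmentation). This is a minimal illustration of the idea under stated assumptions, not the authors' implementation: the helper names, bounding-box format, and the 15-frame window are all hypothetical.

```python
import numpy as np

def inside_any_box(point, boxes):
    """Return True if the (x, y) point lies inside any body-part box."""
    x, y = point
    return any(x0 <= x <= x1 and y0 <= y <= y1 for (x0, y0, x1, y1) in boxes)

def segment_trajectories(trajectories, part_boxes, window=15):
    """Spatially and temporally segment dense trajectories (illustrative).

    trajectories: list of (start_frame, points), points an (L, 2) array
                  of (x, y) positions tracked over L frames.
    part_boxes:   dict mapping frame index -> list of (x0, y0, x1, y1)
                  boxes produced by a body-part detector (assumed given).
    window:       fixed number of frames per temporal segment.
    Returns a dict mapping temporal-segment index -> kept trajectories.
    """
    segments = {}
    for start, points in trajectories:
        # Spatial segmentation: keep a trajectory only if every tracked
        # point stays inside a detected body-part region, which discards
        # background motion.
        if not all(inside_any_box(p, part_boxes.get(start + t, []))
                   for t, p in enumerate(points)):
            continue
        # Temporal segmentation: assign the trajectory to the fixed-length
        # frame window that contains its starting frame.
        segments.setdefault(start // window, []).append(points)
    return segments

# Toy usage: one 3-frame trajectory lying inside a single body-part box.
traj = [(0, np.array([[12.0, 20.0], [13.0, 21.0], [14.0, 22.0]]))]
boxes = {f: [(0, 0, 50, 50)] for f in range(3)}
print(segment_trajectories(traj, boxes, window=15))
```

Descriptors would then be aggregated per body part and per temporal segment rather than over the whole frame, which is what lets similar, fine-grained gestures be told apart.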

Paper Details

Date Published: 14 May 2017
PDF: 8 pages
Proc. SPIE 10338, Thirteenth International Conference on Quality Control by Artificial Vision 2017, 103380F (14 May 2017); doi: 10.1117/12.2266859
Author Affiliations
Kaho Yamada, Aoyama Gakuin Univ. (Japan)
Takeshi Yoshida, Aoyama Gakuin Univ. (Japan)
Kazuhiko Sumi, Aoyama Gakuin Univ. (Japan)
Hitoshi Habe, Kindai Univ. (Japan)
Ikuhisa Mitsugami, Osaka Univ. (Japan)


Published in SPIE Proceedings Vol. 10338:
Thirteenth International Conference on Quality Control by Artificial Vision 2017
Hajime Nagahara; Kazunori Umeda; Atsushi Yamashita, Editor(s)

© SPIE.