
Proceedings Paper

Human action recognition using Kinect multimodal information
Author(s): Chao Tang; Miao-hui Zhang; Xiao-feng Wang; Wei Li; Feng Cao; Chun-ling Hu

Paper Abstract

Since its successful introduction and popularization, the Kinect sensor has been widely applied in intelligent surveillance, human-machine interaction, human action recognition, and related areas. This paper presents a human action recognition method based on multimodal information captured by the Kinect sensor. First, three features are extracted to represent a human action: a HOG feature from the RGB modality, a space-time interest point feature from the depth modality, and a relative-joint-position feature from the skeleton modality. Then, three nearest-neighbor classifiers, each using a different distance measure, predict the class label of a test sample from its three modal feature representations. Experimental results on public datasets show that the proposed method is simple, fast, and efficient compared with other action recognition algorithms.
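The pipeline described in the abstract — three modal features, each classified by its own nearest-neighbor classifier under a different distance measure — can be sketched roughly as follows. The feature dimensions, the specific distance formulas, and the majority-vote fusion rule are illustrative assumptions for this sketch, not the paper's exact choices.

```python
import numpy as np

# Hypothetical toy data: each action sample is described by three modal
# feature vectors (RGB HOG, depth space-time interest points, skeleton
# relative joint positions). Dimensions here are illustrative only.
rng = np.random.default_rng(0)
train_feats = {
    "hog":      rng.normal(size=(10, 36)),   # RGB modality
    "stip":     rng.normal(size=(10, 64)),   # depth modality
    "skeleton": rng.normal(size=(10, 30)),   # skeleton modality
}
train_labels = np.array([0, 0, 1, 1, 2, 2, 0, 1, 2, 0])

# One distance measure per modality (an assumption; the paper only states
# that each nearest-neighbor classifier uses a different distance formula).
def euclidean(a, b):
    return np.linalg.norm(a - b)

def manhattan(a, b):
    return np.abs(a - b).sum()

def chi_square(a, b):
    # Chi-square-style distance; abs() guards against negative toy values.
    return 0.5 * np.sum((a - b) ** 2 / (np.abs(a) + np.abs(b) + 1e-9))

distances = {"hog": euclidean, "stip": manhattan, "skeleton": chi_square}

def nn_predict(modality, query):
    """1-nearest-neighbor prediction within a single modality."""
    dist = distances[modality]
    d = [dist(x, query) for x in train_feats[modality]]
    return int(train_labels[int(np.argmin(d))])

def classify(sample):
    """Fuse the three per-modality predictions by majority vote
    (the fusion rule is an assumption, not stated in the abstract)."""
    votes = [nn_predict(m, sample[m]) for m in train_feats]
    return int(np.bincount(votes).argmax())

# A test sample close to training sample 0 should recover its label.
test = {m: f[0] + 0.01 for m, f in train_feats.items()}
print(classify(test))  # → 0
```

Keeping the three modalities in separate classifiers, rather than concatenating features, lets each modality use the distance measure best suited to its feature type.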

Paper Details

Date Published: 31 August 2018
PDF: 12 pages
Proc. SPIE 10835, Global Intelligence Industry Conference (GIIC 2018), 1083507 (31 August 2018); doi: 10.1117/12.2505416
Author Affiliations
Chao Tang, Hefei Univ. of Technology (China)
Miao-hui Zhang, Jiangxi Academy of Sciences (China)
Xiao-feng Wang, Hefei Univ. of Technology (China)
Wei Li, Xiamen Univ. of Technology (China)
Feng Cao, Shanxi Univ. (China)
Chun-ling Hu, Hefei Univ. of Technology (China)

Published in SPIE Proceedings Vol. 10835:
Global Intelligence Industry Conference (GIIC 2018)
Yueguang Lv, Editor(s)

© SPIE.