
Proceedings Paper

Weighted fusion of depth and inertial data to improve view invariance for real-time human action recognition
Author(s): Chen Chen; Huiyan Hao; Roozbeh Jafari; Nasser Kehtarnavaz

Paper Abstract

This paper presents an extension to our previously developed fusion framework [10], which combines a depth camera and an inertial sensor, in order to improve its view invariance for real-time human action recognition applications. A computationally efficient view estimation based on skeleton joints is used to select the most relevant depth training data when recognizing test samples. Two collaborative representation classifiers, one for depth features and one for inertial features, are appropriately weighted to generate a decision-making probability. Experimental results on a multi-view human action dataset show that this weighted extension improves recognition performance by about 5% over the equally weighted fusion deployed in our previous framework.
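The abstract describes fusing the class probabilities of two collaborative representation classifiers with unequal weights. The paper itself is behind the paywall here, so the following is only a minimal illustrative sketch, not the authors' exact formulation: it assumes per-class CRC reconstruction residuals are mapped to probabilities with a softmax over negated residuals, and that fusion is a convex combination with a scalar weight `w_depth` (both the residual-to-probability mapping and the weight value are assumptions for illustration).

```python
import numpy as np

def crc_class_probabilities(residuals):
    """Map per-class CRC reconstruction residuals to a probability
    vector via a softmax over negated residuals (smaller residual ->
    higher probability). Illustrative assumption, not the paper's
    exact decision rule."""
    scores = -np.asarray(residuals, dtype=float)
    exp = np.exp(scores - scores.max())  # subtract max for numerical stability
    return exp / exp.sum()

def weighted_fusion(p_depth, p_inertial, w_depth=0.6):
    """Fuse two class-probability vectors with complementary weights
    and return the fused vector plus the winning class index."""
    fused = w_depth * p_depth + (1.0 - w_depth) * p_inertial
    return fused, int(np.argmax(fused))

# Example with 3 hypothetical action classes:
p_d = crc_class_probabilities([2.1, 0.4, 1.8])  # depth-feature classifier
p_i = crc_class_probabilities([1.9, 1.0, 0.3])  # inertial-feature classifier
fused, label = weighted_fusion(p_d, p_i, w_depth=0.6)
```

Because the two weights sum to one, the fused vector remains a valid probability distribution; the equally weighted fusion of the earlier framework [10] corresponds to `w_depth = 0.5`.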

Paper Details

Date Published: 1 May 2017
PDF: 9 pages
Proc. SPIE 10223, Real-Time Image and Video Processing 2017, 1022307 (1 May 2017); doi: 10.1117/12.2261823
Author Affiliations:
Chen Chen, The Univ. of Texas at Dallas (United States)
Huiyan Hao, The Univ. of Texas at Dallas (United States) and North Univ. of China (China)
Roozbeh Jafari, Texas A&M Univ. (United States)
Nasser Kehtarnavaz, The Univ. of Texas at Dallas (United States)

Published in SPIE Proceedings Vol. 10223:
Real-Time Image and Video Processing 2017
Nasser Kehtarnavaz; Matthias F. Carlsohn, Editor(s)

© SPIE. Terms of Use