
Proceedings Paper

Human body tracking using LMS-VSMM from monocular video sequences
Author(s): Hong Han; Zhichao Chen; LC Jiao; Youjian Fan

Paper Abstract

A new model-based human body tracking framework incorporating learning-based methods is proposed in this paper. The framework introduces a likely-model set variable structure multiple model (LMS-VSMM) approach to track articulated human motion in monocular image sequences. Key joint points are selected as image features and detected automatically; undetected points are estimated with particle filters. Multiple motion models are learned from the CMU motion capture database using ridge regression to guide tracking. During tracking, the motion models currently in effect switch from one to another to match the present human motion mode. A motion model is activated according to the change in the projection angle of the kinematic chain and the topological and compatibility relationships among the models, and is terminated according to its model probability. The likely-model set scheme of VSMM is then used to estimate the quaternion vectors of the joint rotations. Experiments on two videos demonstrate that this tracking framework is efficient with respect to both 3D pose and 2D projection.
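The model-switching step described above — updating each active motion model's probability and terminating models whose probability falls too low — can be sketched as a standard Bayesian multiple-model update. This is a simplified illustration, not the paper's implementation; the likelihood values, threshold, and function name are hypothetical:

```python
def update_model_set(probs, likelihoods, min_prob=0.05):
    """VSMM-style update of motion-model probabilities (illustrative sketch).

    probs:       prior probability of each active motion model
    likelihoods: measurement likelihood of each model for the current frame
    min_prob:    termination threshold (hypothetical value)

    Models whose posterior probability falls below min_prob are terminated,
    and the surviving probabilities are renormalized to sum to one.
    """
    # Bayesian update: posterior ∝ prior × likelihood
    posterior = [p * l for p, l in zip(probs, likelihoods)]
    total = sum(posterior)
    posterior = [p / total for p in posterior]

    # Terminate low-probability models, keep the "likely model set"
    survivors = {i: p for i, p in enumerate(posterior) if p >= min_prob}
    norm = sum(survivors.values())
    return {i: p / norm for i, p in survivors.items()}

# Example: three candidate motion models; the third matches the
# current motion mode poorly and is dropped from the likely set.
active = update_model_set([0.4, 0.4, 0.2], [0.9, 0.8, 0.01])
print(active)
```

In the paper's framework the likelihoods would come from comparing each model's predicted joint projections against the detected image features; here they are just fixed numbers to show the mechanics of set adaptation.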

Paper Details

Date Published: 19 May 2011
PDF: 10 pages
Proc. SPIE 8049, Automatic Target Recognition XXI, 80490K (19 May 2011);
Author Affiliations:
Hong Han, Xidian Univ. (China)
Zhichao Chen, Xidian Univ. (China)
LC Jiao, Xidian Univ. (China)
Youjian Fan, Xidian Univ. (China)

Published in SPIE Proceedings Vol. 8049:
Automatic Target Recognition XXI
Firooz A. Sadjadi; Abhijit Mahalanobis, Editor(s)

© SPIE.