
Proceedings Paper

Multilevel model for 2D human motion analysis and description

Paper Abstract

This paper proposes a model for human motion analysis in video. Its main characteristic is that it adapts automatically to the current resolution, the actual picture quality, or the level of precision required by a given application, thanks to its possible decomposition into several hierarchical levels. The model is region-based to address some analysis processing needs. The top level of the model is defined with only 5 ribbons, which can be split into sub-ribbons according to a given (or expected) level of detail. The matching process between the model and the current picture consists in comparing the extracted subject shape with a graphical rendering of the model built from computed parameters. The comparison is performed with a chamfer matching algorithm. In our developments, we intend to build a platform for interaction between a dancer and tools synthesizing abstract motion pictures and music, under the conditions of a real-time dialogue between a human and a computer. Consequently, we use this model for motion description rather than motion recognition: no a priori gestures are supposed to be recognized, since no a priori application is specifically targeted. The resulting description will follow a Description Scheme compliant with the movement notation known as "Labanotation".
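The abstract's comparison step rests on chamfer matching: measuring how far each edge point of the rendered model lies from the nearest edge point of the extracted subject shape. The snippet below is a minimal illustrative sketch of that distance measure, not the paper's implementation; the point sets and the `chamfer_score` helper are hypothetical names introduced here for illustration.

```python
import numpy as np

def chamfer_score(subject_pts, model_pts):
    """Mean distance from each model edge point to its nearest
    subject edge point (lower means a better shape match)."""
    # Pairwise Euclidean distances: shape (n_model, n_subject).
    d = np.linalg.norm(model_pts[:, None, :] - subject_pts[None, :, :], axis=2)
    # For each model point, keep only the closest subject point.
    return d.min(axis=1).mean()

# Toy example: a vertical edge and the same edge shifted by one pixel.
subject = np.array([[y, 3] for y in range(8)], dtype=float)
model = np.array([[y, 4] for y in range(8)], dtype=float)
print(chamfer_score(subject, model))  # 1.0: every model point is one pixel away
```

In practice the subject's distance map is usually precomputed once with a distance transform, so that scoring each candidate model rendering is a cheap lookup rather than a pairwise comparison.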

Paper Details

Date Published: 10 January 2003
PDF: 11 pages
Proc. SPIE 5018, Internet Imaging IV, (10 January 2003); doi: 10.1117/12.476183
Author Affiliations:
Thomas Foures, IRIT/Univ. Paul Sabatier (France)
Philippe Joly, IRIT/Univ. Paul Sabatier (France)

Published in SPIE Proceedings Vol. 5018:
Internet Imaging IV
Simone Santini; Raimondo Schettini, Editor(s)

© SPIE.