
Proceedings Paper

Transform invariant based motion segmentation
Author(s): Yufeng Chen; Fengxia Li; Peng Lu

Paper Abstract

Motion segmentation is receiving growing attention in computer vision, driven by the increasing demand for content-based coding, motion-based recognition, and related applications. However, achieving motion segmentation that is both robust and efficient remains a challenging problem. In this paper we propose a novel motion segmentation method based on transform invariants of local motion, aiming to segment motion features efficiently. A complex motion can generally be viewed as a combination of local rigid motions, and certain relationships between features belonging to the same rigid part remain unchanged under arbitrary transforms. Once a set of feature points is identified as belonging to the same moving part by these invariants, the transform parameters of that motion can be recovered. By considering the motion segmentation globally, the segmentation process can be refined and the corresponding feature point sets can be separated. Experiments on segmenting human body parts demonstrate the computational efficiency of the method and yield satisfactory results compared with traditional methods.
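
The abstract does not specify which invariant is used; as a rough illustration of the underlying idea (not the authors' implementation), the sketch below groups tracked 2-D feature points into rigid parts by testing pairwise-distance invariance between two frames, then recovers each part's rotation and translation with a least-squares (Kabsch) fit. The point data, tolerance, and helper names are illustrative assumptions.

```python
# Minimal sketch (not the authors' method): group tracked 2-D feature points
# into rigid parts by checking that pairwise distances -- an invariant of
# rigid motion -- stay constant between two frames, then recover each part's
# rotation and translation with a least-squares (Kabsch) fit.
import numpy as np

def rigid_groups(p0, p1, tol=1e-3):
    """Greedy grouping: points i, j belong to the same rigid part if their
    pairwise distance changes by less than tol between the two frames."""
    n = len(p0)
    d0 = np.linalg.norm(p0[:, None] - p0[None, :], axis=-1)
    d1 = np.linalg.norm(p1[:, None] - p1[None, :], axis=-1)
    consistent = np.abs(d0 - d1) < tol            # pairwise invariance test
    groups, unassigned = [], set(range(n))
    while unassigned:
        seed = unassigned.pop()
        group = [seed]
        for j in list(unassigned):
            if all(consistent[j, k] for k in group):
                group.append(j)
                unassigned.remove(j)
        groups.append(group)
    return groups

def rigid_transform(a, b):
    """Least-squares rotation R and translation t such that b ~ a @ R.T + t."""
    ca, cb = a.mean(0), b.mean(0)
    u, _, vt = np.linalg.svd((a - ca).T @ (b - cb))
    r = (u @ vt).T
    if np.linalg.det(r) < 0:                      # guard against reflections
        vt[-1] *= -1
        r = (u @ vt).T
    return r, cb - ca @ r.T

if __name__ == "__main__":
    # Synthetic example: part A rotates 30 degrees and shifts, part B only translates.
    rng = np.random.default_rng(0)
    part_a = rng.uniform(0, 1, (5, 2))
    part_b = rng.uniform(2, 3, (5, 2))
    theta = np.deg2rad(30)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    frame0 = np.vstack([part_a, part_b])
    frame1 = np.vstack([part_a @ rot.T + [0.5, 0.2], part_b + [1.0, -0.5]])
    for g in rigid_groups(frame0, frame1):
        r, t = rigid_transform(frame0[g], frame1[g])
        angle = np.degrees(np.arctan2(r[1, 0], r[0, 0]))
        print("part", g, "rotation(deg) =", round(angle, 1))
```

A full system would replace the greedy grouping with the global refinement the abstract describes, and use tracked features from real video rather than synthetic points.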

Paper Details

Date Published: 30 October 2009
PDF: 8 pages
Proc. SPIE 7495, MIPPR 2009: Automatic Target Recognition and Image Analysis, 749526 (30 October 2009); doi: 10.1117/12.833496
Author Affiliations:
Yufeng Chen, Beijing Institute of Technology (China)
Fengxia Li, Beijing Institute of Technology (China)
Peng Lu, Beijing Univ. (China)


Published in SPIE Proceedings Vol. 7495:
MIPPR 2009: Automatic Target Recognition and Image Analysis
Tianxu Zhang; Bruce Hirsch; Zhiguo Cao; Hanqing Lu, Editor(s)
