
Proceedings Paper

Video objects segmentation based on spatio-temporal information and its realization in CNNUM
Author(s): Qingli Chang; Yulong Mo; Xiaomei Lin

Paper Abstract

In this paper, we propose a new segmentation method for separating moving objects from the background in a generic video sequence using Cellular Neural Networks (CNN). This task supports the functionalities foreseen by new multimedia scenarios, in particular the content-based functionalities addressed by the MPEG-4 activity. Because extracting motion information from a video sequence is computationally expensive, the proposed scheme extracts moving objects using both motion and spatial information. First, a symmetrical inter-frame difference is computed over a group of gray-level images to obtain the approximate area of the video object; this area is then divided into flat zones with continuous grey-scale information. Finally, some of these zones are merged to form the object according to a merging rule, while the others are discarded. The above applies to a stationary background; in the case of a moving background, motion estimation is performed first. To reduce computational effort, part of the work is realized by the CNN Universal Machine (CNNUM). At the end of this paper, typical results obtained on MPEG-4 test sequences are shown to illustrate the performance of the segmentation algorithm using the Aladdin V1.3 simulator system.
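The symmetrical inter-frame difference mentioned in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the paper's exact formulation: the function name, the threshold value, and the AND-combination of the backward and forward differences are assumptions.

```python
import numpy as np

def symmetric_frame_difference(prev_frame, curr_frame, next_frame, threshold=15):
    """Approximate the moving-object area from three consecutive gray frames.

    Hypothetical sketch: pixels whose absolute difference exceeds the
    threshold against BOTH the previous and the next frame are flagged,
    which suppresses uncovered-background artifacts that a one-sided
    difference would produce.
    """
    # Work in a signed type so subtraction of uint8 frames cannot wrap around.
    d_back = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    d_fwd = np.abs(next_frame.astype(np.int16) - curr_frame.astype(np.int16))
    # Keep only pixels that change in both temporal directions.
    return (d_back > threshold) & (d_fwd > threshold)
```

The resulting binary mask gives only the approximate object area; the flat-zone division and merging steps described in the abstract would refine it.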

Paper Details

Date Published: 28 May 2004
PDF: 9 pages
Proc. SPIE 5298, Image Processing: Algorithms and Systems III, (28 May 2004); doi: 10.1117/12.525009
Author Affiliations:
Qingli Chang, Shanghai Univ. (China)
Yulong Mo, Shanghai Univ. (China)
Xiaomei Lin, Changchun Univ. of Technology (China)

Published in SPIE Proceedings Vol. 5298:
Image Processing: Algorithms and Systems III
Edward R. Dougherty; Jaakko T. Astola; Karen O. Egiazarian, Editor(s)

© SPIE.