
Proceedings Paper

An unsupervised video foreground co-localization and segmentation process by incorporating motion cues and frame features
Author(s): Chao Zhang; Qian Zhang; Chi Zheng; Guoping Qiu

Paper Abstract

Video foreground segmentation is one of the key problems in video processing. In this paper, we propose a novel, fully unsupervised approach for foreground object co-localization and segmentation in unconstrained videos. We first compute both the actual edges and the motion boundaries of the video frames, and then align them by their HOG feature maps. By filling the occlusions generated by the aligned edges, we obtain more precise masks of the foreground object. These motion-based masks are used to derive a motion-based likelihood; a color-based likelihood is also adopted in the segmentation process. Experimental results show that our approach outperforms most state-of-the-art algorithms.
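
As a rough illustration only, the following is a minimal Python sketch of the pipeline outlined in the abstract, assuming OpenCV and scikit-image are available. The Canny edges, Farneback optical flow, HOG cosine-similarity score, and morphological closing used below are simplified stand-ins for the paper's actual edge alignment and occlusion-filling steps, not its implementation.

# Minimal sketch of the abstract's pipeline (NOT the paper's implementation):
# frame edges + motion boundaries -> HOG-based alignment cue -> filled foreground mask.
import cv2
import numpy as np
from skimage.feature import hog

def motion_based_mask(prev_frame, frame):
    """Return a rough foreground mask and a crude HOG alignment score for two
    consecutive BGR frames. All thresholds below are illustrative choices."""
    gray_prev = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # 1. Actual edges of the current frame.
    edges = cv2.Canny(gray, 100, 200)

    # 2. Motion boundaries: gradient magnitude of a dense optical flow field.
    flow = cv2.calcOpticalFlowFarneback(gray_prev, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    gx = cv2.Sobel(mag, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(mag, cv2.CV_32F, 0, 1)
    boundary = np.hypot(gx, gy)
    thresh = boundary.mean() + boundary.std()
    motion_edges = (boundary > thresh).astype(np.uint8) * 255

    # 3. HOG descriptors of the two edge maps; their cosine similarity stands in
    #    for the paper's HOG-feature-map alignment step.
    h_edges = hog(edges, pixels_per_cell=(16, 16), cells_per_block=(1, 1))
    h_motion = hog(motion_edges, pixels_per_cell=(16, 16), cells_per_block=(1, 1))
    align_score = float(np.dot(h_edges, h_motion) /
                        (np.linalg.norm(h_edges) * np.linalg.norm(h_motion) + 1e-8))

    # 4. Combine the edge maps and close enclosed gaps (a rough proxy for the
    #    occlusion-filling step); the mask can then feed a motion-based likelihood.
    combined = cv2.bitwise_or(edges, motion_edges)
    closed = cv2.morphologyEx(combined, cv2.MORPH_CLOSE, np.ones((15, 15), np.uint8))
    mask = cv2.morphologyEx(closed, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    return mask, align_score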

Paper Details

Date Published: 10 April 2018
PDF: 8 pages
Proc. SPIE 10615, Ninth International Conference on Graphic and Image Processing (ICGIP 2017), 1061529 (10 April 2018); doi: 10.1117/12.2303460
Author Affiliations:
Chao Zhang, The Univ. of Nottingham Ningbo (China)
International Doctor Innovation Ctr. (China)
Qian Zhang, The Univ. of Nottingham Ningbo (China)
Chi Zheng, Ningbo Yongxin Optics Co., Ltd. (China)
Guoping Qiu, The Univ. of Nottingham (United Kingdom)


Published in SPIE Proceedings Vol. 10615:
Ninth International Conference on Graphic and Image Processing (ICGIP 2017)
Hui Yu; Junyu Dong, Editor(s)

© SPIE.