
Proceedings Paper

Semi-automatic video semantic annotation based on active learning
Author(s): Yan Song; Xian-Sheng Hua; Li-Rong Dai; Ren-Hua Wang

Paper Abstract

In this paper, we propose a novel semi-automatic annotation scheme for home videos based on active learning. It is well known that there is a large gap between semantics and low-level features. To narrow this gap, relevance feedback has been introduced in a number of studies. Furthermore, to accelerate convergence to the optimal result, several active learning schemes have been proposed in the literature, in which the most informative samples are chosen for annotation rather than selected at random. In this paper, a representative active learning method is proposed in which the local consistency of video content is effectively taken into consideration. The main idea is to exploit the global and local statistical characteristics of videos, as well as the temporal relationships between shots. The global model is trained offline on a small pre-labeled video dataset, while the local information is obtained online during active learning and is used to adjust the initial global model adaptively. The experimental results show that the proposed active learning scheme significantly improves annotation performance compared with random selection and a common active learning method.
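The abstract does not spell out the selection procedure, so the Python sketch below is only one plausible reading of it: a global classifier trained on a small pre-labeled set, an uncertainty (margin) criterion for informativeness, and a simple moving-average smoothing over temporally adjacent shots standing in for the paper's local-consistency adjustment. The model choice (SVM), window size, batch size, and synthetic data are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVC

def temporal_consistency(scores, window=2):
    """Smooth per-shot uncertainty scores over neighboring shots,
    reflecting the assumption that temporally adjacent shots tend
    to share semantic labels (the 'local consistency' idea)."""
    smoothed = np.copy(scores)
    for i in range(len(scores)):
        lo, hi = max(0, i - window), min(len(scores), i + window + 1)
        smoothed[i] = scores[lo:hi].mean()
    return smoothed

def select_informative_shots(model, X_unlabeled, batch_size=5):
    """Pick the shots the current model is least certain about,
    after adjusting raw SVM margins with temporal smoothing."""
    margins = np.abs(model.decision_function(X_unlabeled))
    adjusted = temporal_consistency(margins)
    # Smallest adjusted margin = most informative sample to annotate.
    return np.argsort(adjusted)[:batch_size]

# Global model trained offline on a small pre-labeled set (synthetic here).
rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(40, 16))
y_labeled = rng.integers(0, 2, 40)
X_pool = rng.normal(size=(200, 16))  # unlabeled shots, in temporal order

model = SVC(kernel="rbf").fit(X_labeled, y_labeled)

for _round in range(3):
    query = select_informative_shots(model, X_pool)
    # In a real system the user labels these shots; here we simulate it.
    y_new = rng.integers(0, 2, len(query))
    X_labeled = np.vstack([X_labeled, X_pool[query]])
    y_labeled = np.concatenate([y_labeled, y_new])
    X_pool = np.delete(X_pool, query, axis=0)
    # Retrain, adapting the global model with the newly labeled samples.
    model = SVC(kernel="rbf").fit(X_labeled, y_labeled)
```

In this reading, the margin criterion plays the role of the global model's statistics, while the smoothing step injects local, shot-level context into the selection, which is one way the paper's global/local combination could be realized.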

Paper Details

Date Published: 31 July 2006
PDF: 8 pages
Proc. SPIE 5960, Visual Communications and Image Processing 2005, 59600R (31 July 2006); doi: 10.1117/12.631380
Author Affiliations:
Yan Song, Univ. of Science and Technology of China (China)
Xian-Sheng Hua, Microsoft Research Asia (China)
Li-Rong Dai, Univ. of Science and Technology of China (China)
Ren-Hua Wang, Univ. of Science and Technology of China (China)


Published in SPIE Proceedings Vol. 5960:
Visual Communications and Image Processing 2005

© SPIE.