
Optical Engineering

Cross-domain active learning for video concept detection
Author(s): Huan Li; Chao Li; Yuan Shi; Zhang Xiong; Alexander G. Hauptmann

Paper Abstract

As video data from different domains (e.g., news, documentaries, entertainment) have distinctive data distributions, cross-domain video concept detection becomes an important task, in which one reuses the labeled data of one domain to benefit the learning task in another domain with insufficient labeled data. In this paper, we approach this problem by proposing a cross-domain active learning method that iteratively queries labels of the most informative samples in the target domain. Traditional active learning assumes that the training data (source domain) and test data (target domain) come from the same distribution. It may fail when the two distributions differ, because samples queried according to a base learner initially trained on the source domain may no longer be informative for the target domain. We therefore use the Gaussian random field model as the base learner, which has the advantage of exploiting the data distributions of both domains, and adopt uncertainty sampling as the query strategy. Additionally, we present an instance weighting trick to accelerate the adaptation of the base learner, and develop an efficient model updating method that significantly speeds up the active learning process. Experimental results on TRECVID collections demonstrate the effectiveness of the proposed method.
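The core loop described in the abstract — a Gaussian random field (GRF) base learner scored over labeled and unlabeled data, with uncertainty sampling picking the next sample to label — can be illustrated with a minimal sketch. This is not the authors' implementation: it uses the standard harmonic-function form of the GRF (solving f_u = L_uu⁻¹ W_ul f_l over the graph Laplacian) and queries the unlabeled point whose posterior is closest to 0.5; the instance weighting trick and fast model update from the paper are omitted, and all function names and the toy data are illustrative assumptions.

```python
import numpy as np

def rbf_affinity(X, gamma=1.0):
    # Pairwise RBF affinities W[i, j] = exp(-gamma * ||x_i - x_j||^2),
    # zero on the diagonal (no self-edges in the graph).
    sq = np.sum(X ** 2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    W = np.exp(-gamma * d2)
    np.fill_diagonal(W, 0.0)
    return W

def harmonic_solution(W, labeled_idx, y_labeled):
    # Gaussian random field / harmonic function solution:
    # clamp labeled nodes to their labels and solve
    #   L_uu f_u = W_ul y_l,  where L = D - W is the graph Laplacian.
    n = W.shape[0]
    unlabeled_idx = np.setdiff1d(np.arange(n), labeled_idx)
    L = np.diag(W.sum(axis=1)) - W
    f_u = np.linalg.solve(L[np.ix_(unlabeled_idx, unlabeled_idx)],
                          W[np.ix_(unlabeled_idx, labeled_idx)] @ y_labeled)
    return unlabeled_idx, f_u

def query_most_uncertain(f_u, unlabeled_idx):
    # Uncertainty sampling: the posterior closest to 0.5 is the
    # most informative sample to send to the human annotator.
    return unlabeled_idx[np.argmin(np.abs(f_u - 0.5))]

# Tiny two-cluster demo: points 0 and 2 are labeled
# (concept absent = 0.0, concept present = 1.0); 1 and 3 are unlabeled.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
W = rbf_affinity(X, gamma=1.0)
unlabeled_idx, f_u = harmonic_solution(W, np.array([0, 2]),
                                       np.array([0.0, 1.0]))
q = query_most_uncertain(f_u, unlabeled_idx)
```

In an actual active-learning round, the queried sample `q` would be labeled by an annotator, moved into the labeled set, and the harmonic solution recomputed; the paper's contribution includes doing that update efficiently rather than re-solving the linear system from scratch.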

Paper Details

Date Published: 1 August 2011
PDF: 9 pages
Opt. Eng. 50(8) 087203 doi: 10.1117/1.3615236
Published in: Optical Engineering Volume 50, Issue 8
Author Affiliations
Huan Li, BeiHang Univ. (China)
Chao Li, BeiHang Univ. (China)
Yuan Shi, The Univ. of Southern California (United States)
Zhang Xiong, BeiHang Univ. (China)
Alexander G. Hauptmann, Carnegie Mellon Univ. (United States)

© SPIE.