
Journal of Electronic Imaging

Self-paced model learning for robust visual tracking
Author(s): Wenhui Huang; Jason J. Gu; Xin Ma; Yibin Li

Paper Abstract

In visual tracking, learning a robust and efficient appearance model is a challenging task. Model learning determines both the strategy and the frequency of model updating, details that can strongly affect tracking results. Self-paced learning (SPL) has recently attracted considerable interest in machine learning and computer vision. SPL is inspired by the learning principle underlying human cognition, in which learning generally proceeds from easier samples to more complex aspects of a task. We propose a tracking method that integrates the SPL paradigm into visual tracking so that reliable samples can be selected automatically for model learning. In contrast to many existing model learning strategies in visual tracking, we identify the missing link between sample selection and model learning and combine the two into a single objective function; sample weights and model parameters are learned jointly by minimizing it. Additionally, to obtain real-valued sample weights, an error-tolerant self-paced function that accounts for the characteristics of visual tracking is proposed. We demonstrate the robustness and efficiency of our tracker on a recent tracking benchmark data set of 50 video sequences.
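The joint sample-selection/model-learning scheme the abstract describes can be sketched generically. The snippet below is an illustrative soft-weighted SPL loop on a simple least-squares model, not the authors' tracker: the linear self-paced regularizer used here, f(v; λ) = λ(v²/2 − v), which yields real-valued weights v = max(0, 1 − L/λ), is a standard choice from the SPL literature and stands in for the paper's error-tolerant self-paced function. The alternation between updating sample weights and refitting model parameters, with a gradually rising pace parameter λ, is the part that mirrors the single-objective formulation.

```python
import numpy as np

def spl_weights(losses, lam):
    # Soft (real-valued) self-paced weights from the linear SPL
    # regularizer f(v; lam) = lam * (0.5*v**2 - v).  Minimizing
    # v*L + f(v; lam) over v in [0, 1] gives v = max(0, 1 - L/lam),
    # so easy samples (low loss) get weight near 1 and hard
    # samples (loss >= lam) get weight 0.
    return np.clip(1.0 - losses / lam, 0.0, 1.0)

def spl_fit(X, y, lam=1.0, growth=1.5, iters=8):
    # Alternating minimization of the joint SPL objective:
    #   min_{w, v} sum_i v_i * L_i(w) + f(v; lam)
    # (1) fix w, update per-sample weights v in closed form;
    # (2) fix v, refit w by weighted least squares;
    # then raise lam so harder samples are admitted over time.
    w = np.linalg.lstsq(X, y, rcond=None)[0]  # warm start (plain OLS)
    v = np.ones(len(y))
    for _ in range(iters):
        losses = (X @ w - y) ** 2             # per-sample loss L_i(w)
        v = spl_weights(losses, lam)          # step (1): weight samples
        if v.sum() > 0:
            sw = np.sqrt(v)                   # step (2): weighted refit
            w = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
        lam *= growth                         # self-paced schedule
    return w, v

# Toy usage: a line y = 2x with two gross outliers.  SPL drives the
# outliers' weights toward 0 and recovers the clean slope.
x = np.linspace(0.0, 1.0, 50)
X = x[:, None]
y = 2.0 * x
y[5] += 5.0   # outliers
y[20] -= 5.0
w, v = spl_fit(X, y)
```

In a tracking setting, the least-squares loss would be replaced by the appearance model's loss on candidate samples, but the alternation between sample weighting and model refitting is the same.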

Paper Details

Date Published: 22 February 2017
PDF: 13 pages
J. Electron. Imag. 26(1), 013016 (2017). doi: 10.1117/1.JEI.26.1.013016
Published in: Journal of Electronic Imaging Volume 26, Issue 1
Author Affiliations
Wenhui Huang, Shandong Univ. (China)
Jason J. Gu, Dalhousie Univ. (Canada)
Xin Ma, Shandong Univ. (China)
Yibin Li, Shandong Univ. (China)

© SPIE.