
Proceedings Paper

Robust visual tracking based on online learning of joint sparse dictionary
Author(s): Qiaozhe Li; Yu Qiao; Jie Yang; Li Bai

Paper Abstract

In this paper, we propose a robust visual tracking algorithm based on online learning of a joint sparse dictionary. The joint sparse dictionary consists of positive and negative sub-dictionaries, which model foreground and background objects respectively. An online dictionary learning method is developed to update the joint sparse dictionary by selecting both positive and negative bases from bags of positive and negative image patches/templates during tracking. A linear classifier is trained with sparse coefficients of image patches in the current frame, which are calculated using the joint sparse dictionary. This classifier is then used to locate the target in the next frame. Experimental results show that our tracking method is robust against object variation, occlusion and illumination change.
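The pipeline the abstract describes can be illustrated with a minimal toy sketch: a joint dictionary built from positive (foreground) and negative (background) sub-dictionaries, sparse coefficients computed for each image patch, and a linear classifier trained on those coefficients. This is not the authors' implementation; the data is synthetic, ISTA stands in for whatever sparse solver the paper uses, and least squares stands in for the linear classifier. All variable names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: patch dimension and atoms per sub-dictionary.
d, n_pos, n_neg = 16, 8, 8
D_pos = rng.normal(size=(d, n_pos))  # positive (foreground) sub-dictionary
D_neg = rng.normal(size=(d, n_neg))  # negative (background) sub-dictionary
D = np.hstack([D_pos, D_neg])
D /= np.linalg.norm(D, axis=0)       # unit-norm atoms

def sparse_code(x, D, lam=0.1, n_iter=200):
    """ISTA stand-in solver: min_a 0.5*||x - D a||^2 + lam*||a||_1."""
    L = np.linalg.norm(D, 2) ** 2    # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ a - x)
        a = a - g / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a

# Toy patches: foreground patches lie in the span of the positive atoms,
# background patches in the span of the negative atoms, plus small noise.
X_fg = (D_pos @ rng.normal(size=(n_pos, 20))).T + 0.01 * rng.normal(size=(20, d))
X_bg = (D_neg @ rng.normal(size=(n_neg, 20))).T + 0.01 * rng.normal(size=(20, d))
X = np.vstack([X_fg, X_bg])
y = np.array([1.0] * 20 + [-1.0] * 20)   # +1 = foreground, -1 = background

# Sparse coefficients of every patch under the joint dictionary.
A = np.array([sparse_code(x, D) for x in X])

# Linear classifier on the coefficients (least-squares stand-in).
w, *_ = np.linalg.lstsq(A, y, rcond=None)

# During tracking, candidate patches in the next frame would be scored
# the same way and the highest-scoring one taken as the target location.
acc = np.mean(np.sign(A @ w) == y)
```

Foreground patches load mainly on positive atoms and background patches on negative atoms, so even this crude classifier separates the two classes; the paper's online dictionary update, which selects new bases from bags of positive and negative templates during tracking, is not modeled here.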

Paper Details

Date Published: 24 December 2013
PDF: 5 pages
Proc. SPIE 9067, Sixth International Conference on Machine Vision (ICMV 2013), 90671E (24 December 2013); doi: 10.1117/12.2051541
Author Affiliations:
Qiaozhe Li, Shanghai Jiao Tong Univ. (China)
Yu Qiao, Shanghai Jiao Tong Univ. (China)
Jie Yang, Shanghai Jiao Tong Univ. (China)
Li Bai, Univ. of Nottingham (United Kingdom)


Published in SPIE Proceedings Vol. 9067:
Sixth International Conference on Machine Vision (ICMV 2013)
Branislav Vuksanovic; Antanas Verikas; Jianhong Zhou, Editor(s)

© SPIE.