
Proceedings Paper

Robust visual tracking using an ensemble cascade of convolutional neural networks
Author(s): Dan Hu; Xingshe Zhou; Xiaohao Yu; Zhiqiang Hou

Paper Abstract

Convolutional Neural Networks (CNNs) have dramatically boosted the performance of various computer vision tasks, with the exception of visual tracking, due to the lack of training data. In this paper, we pre-train a deep CNN offline to classify 1 million images from 256 classes, using very leaky non-saturating neurons to accelerate training; the network is then transformed into a discriminative classifier by adding a classification layer. In addition, we propose a novel approach for incrementally combining our CNN classifiers in a "cascade" structure through a modification of the AdaBoost framework, and transfer the discriminative features selected by the ensemble of CNN classifiers to the visual tracking task. The tracker is updated online to robustly discard background regions from promising object-like regions, coping with appearance changes of the target. Extensive experimental evaluations on an open tracking benchmark demonstrate the outstanding performance of our tracker, which improves tracking success rate and tracking precision by at least 9.2% and 13.9% on average over other state-of-the-art trackers.
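The abstract describes combining classifiers through a modified AdaBoost framework. As an illustrative sketch only (not the authors' implementation), the standard AdaBoost weighted-combination step they build on can be shown with simple decision stumps standing in for the CNN classifiers; all function names here are hypothetical:

```python
import numpy as np

def train_stump(X, y, w):
    """Fit a decision stump (one feature, one threshold) minimizing weighted error."""
    best = None  # (error, feature, threshold, sign)
    for f in range(X.shape[1]):
        for thr in np.unique(X[:, f]):
            for sign in (1, -1):
                pred = np.where(X[:, f] > thr, sign, -sign)
                err = np.sum(w * (pred != y))
                if best is None or err < best[0]:
                    best = (err, f, thr, sign)
    return best

def adaboost_ensemble(X, y, rounds=5):
    """AdaBoost: reweight samples each round, weight each weak learner by its accuracy."""
    n = len(y)
    w = np.full(n, 1.0 / n)        # uniform sample weights to start
    ensemble = []
    for _ in range(rounds):
        err, f, thr, sign = train_stump(X, y, w)
        err = max(err, 1e-10)      # avoid division by zero for perfect stumps
        alpha = 0.5 * np.log((1 - err) / err)   # learner weight
        pred = np.where(X[:, f] > thr, sign, -sign)
        w *= np.exp(-alpha * y * pred)          # up-weight misclassified samples
        w /= w.sum()
        ensemble.append((alpha, f, thr, sign))
    return ensemble

def predict(ensemble, X):
    """Weighted vote of all weak learners."""
    score = np.zeros(len(X))
    for alpha, f, thr, sign in ensemble:
        score += alpha * np.where(X[:, f] > thr, sign, -sign)
    return np.sign(score)
```

In the paper's setting, each weak learner would instead be a CNN classifier scoring candidate regions, with the cascade discarding background regions early; the reweighting and weighted-vote machinery above is the generic AdaBoost part.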

Paper Details

Date Published: 9 December 2015
PDF: 11 pages
Proc. SPIE 9817, Seventh International Conference on Graphic and Image Processing (ICGIP 2015), 98170W (9 December 2015); doi: 10.1117/12.2228001
Author Affiliations:
Dan Hu, Northwestern Polytechnical Univ. (China); Air Force Engineering Univ. (China)
Xingshe Zhou, Northwestern Polytechnical Univ. (China)
Xiaohao Yu, Equipment Academy of Air Force (China)
Zhiqiang Hou, Air Force Engineering Univ. (China)


Published in SPIE Proceedings Vol. 9817:
Seventh International Conference on Graphic and Image Processing (ICGIP 2015)
Yulin Wang; Xudong Jiang, Editor(s)

© SPIE.