
Proceedings Paper

Tracker fusion for robustness in visual feature tracking
Author(s): Kentaro Toyama; Gregory D. Hager

Paper Abstract

Task-directed vision obviates the need for general image comprehension by focusing attention only on features which contribute useful information to the task at hand. Window-based visual tracking fits into this paradigm, as motion tracking becomes a problem of local search in a small image region. While the gains in speed from such methods allow for real-time feature tracking on off-the-shelf hardware, they lose robustness by giving up a more global perspective: window-based feature trackers are prone to problems such as distraction, illumination changes, fast feature motion, and so forth. To add robustness to feature tracking, we present 'tracker fusion,' where multiple trackers simultaneously track the same feature while watching for various problematic circumstances, and combine their estimates in a meaningful way. By categorizing different situations in which mistracking occurs, finding appropriate trackers to deal with each such situation, and fusing the resulting trackers together, we construct robust feature trackers which maintain the speed of simple window-based trackers, yet afford greater resistance to mistracking.
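
To make the fusion idea concrete, here is a minimal sketch, not code from the paper: each tracker reports a position estimate plus a confidence that drops when it detects a problematic circumstance (distraction, illumination change, fast motion), and the fused estimate is a confidence-weighted combination with a veto on low-confidence trackers. The Estimate structure, the fuse function, the threshold, and the weighted-average rule are all assumptions about one plausible way to "combine estimates in a meaningful way."

```python
# Hypothetical sketch of tracker fusion; not the authors' implementation.
# All names (Estimate, fuse, min_confidence) are illustrative, and
# confidence-weighted averaging is only one plausible fusion rule.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Estimate:
    x: float           # estimated feature position (image column)
    y: float           # estimated feature position (image row)
    confidence: float  # in [0, 1]; near 0 means the tracker suspects mistracking

def fuse(estimates: list[Optional[Estimate]],
         min_confidence: float = 0.1) -> Optional[Estimate]:
    """Combine per-tracker estimates of the same feature's position.

    Trackers that flag a problematic circumstance (low confidence) are
    vetoed, so e.g. a distracted correlation-window tracker cannot drag
    the fused position away while an edge-based tracker still holds."""
    live = [e for e in estimates
            if e is not None and e.confidence >= min_confidence]
    if not live:
        return None  # every tracker reports trouble: declare the feature lost
    total = sum(e.confidence for e in live)
    return Estimate(
        x=sum(e.confidence * e.x for e in live) / total,
        y=sum(e.confidence * e.y for e in live) / total,
        confidence=max(e.confidence for e in live),
    )

if __name__ == "__main__":
    # A window tracker distracted by a similar nearby patch reports low
    # confidence; an illumination-insensitive tracker is still locked on.
    distracted = Estimate(x=120.0, y=80.0, confidence=0.05)
    locked_on  = Estimate(x=103.2, y=77.9, confidence=0.90)
    print(fuse([distracted, locked_on]))  # dominated by the confident tracker
```

Under this reading, robustness comes from redundancy: each tracker's own problem detector decides when to veto its estimate, and the fused result degrades gracefully to whichever trackers remain reliable, at little cost beyond running the simple window-based trackers themselves.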

Paper Details

Date Published: 15 September 1995
PDF: 12 pages
Proc. SPIE 2589, Sensor Fusion and Networked Robotics VIII, (15 September 1995); doi: 10.1117/12.220965
Author Affiliations:
Kentaro Toyama, Yale Univ. (United States)
Gregory D. Hager, Yale Univ. (United States)


Published in SPIE Proceedings Vol. 2589:
Sensor Fusion and Networked Robotics VIII
Paul S. Schenker; Gerard T. McKee, Editors
