
Proceedings Paper

Motion integration in visual attention models for predicting simple dynamic scenes
Author(s): A. Bur; P. Wurtz; R. M. Müri; H. Hügli

Paper Abstract

Visual attention models mimic the ability of a visual system to detect potentially relevant parts of a scene. This process of attentional selection is a prerequisite for higher-level tasks such as object recognition. Given the high relevance of temporal aspects in human visual attention, computer models of visual attention must consider dynamic as well as static information. While some models have been proposed that extend the classical static model to motion, a comparison of the performance of models that integrate motion in different ways is still not available. In this article, we present a comparative study of several visual attention models that combine static and dynamic features. The models are compared by measuring their respective performance with respect to the eye movement patterns of human subjects. Simple synthetic video sequences containing static and moving objects are used to assess the suitability of each model. Qualitative and quantitative results provide a ranking of the different models.
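
As a rough illustration of the kind of fusion such models perform, the sketch below combines a static saliency map with a motion map obtained by frame differencing using a weighted sum. The function name, the weights, and the frame-difference motion cue are illustrative assumptions only; they do not reproduce the specific integration schemes compared in the paper.

    import numpy as np

    def combine_saliency(static_map, motion_map, w_static=0.5, w_motion=0.5):
        """Fuse a static and a dynamic (motion) saliency map by weighted summation.
        Each map is peak-normalized before fusion; the weights are illustrative."""
        s = static_map / (static_map.max() + 1e-8)
        m = motion_map / (motion_map.max() + 1e-8)
        return w_static * s + w_motion * m

    # Example with synthetic data: a stand-in static map and a crude motion cue
    # computed as the absolute difference between two consecutive frames.
    rng = np.random.default_rng(0)
    frame_t = rng.random((120, 160))
    frame_t1 = rng.random((120, 160))
    static_saliency = rng.random((120, 160))       # placeholder for intensity/color/orientation conspicuity
    motion_saliency = np.abs(frame_t1 - frame_t)   # placeholder for a motion conspicuity map
    saliency = combine_saliency(static_saliency, motion_saliency)
    print(saliency.shape, float(saliency.max()))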

Paper Details

Date Published: 16 February 2007
PDF: 11 pages
Proc. SPIE 6492, Human Vision and Electronic Imaging XII, 649219 (16 February 2007); doi: 10.1117/12.704185
Author Affiliations:
A. Bur, Univ. of Neuchâtel (Switzerland)
P. Wurtz, Univ. of Bern (Switzerland)
R. M. Müri, Univ. of Bern (Switzerland)
H. Hügli, Univ. of Neuchâtel (Switzerland)


Published in SPIE Proceedings Vol. 6492:
Human Vision and Electronic Imaging XII
Bernice E. Rogowitz; Thrasyvoulos N. Pappas; Scott J. Daly, Editor(s)

© SPIE.