
Proceedings Paper

Data-driven approach to dynamic visual attention modelling
Author(s): Dubravko Culibrk; Srdjan Sladojevic; Nicolas Riche; Matei Mancas; Vladimir Crnojevic

Paper Abstract

Visual attention deployment mechanisms allow the Human Visual System to cope with an overwhelming amount of visual data by dedicating most of its processing power to objects of interest. The ability to automatically detect the areas of a visual scene that humans will attend to is of interest for a large number of applications, from video coding and video quality assessment to scene understanding. Consequently, visual saliency (bottom-up attention) models have generated significant scientific interest in recent years. Most recent work in this area concerns dynamic models of attention that handle moving stimuli (videos) instead of the traditionally used still images. Visual saliency models are usually evaluated against ground-truth eye-tracking data collected from human subjects. However, there are precious few recently published approaches that try to learn saliency from eye-tracking data and, to the best of our knowledge, none that do so for dynamic saliency. This paper attempts to fill that gap and describes an approach to data-driven dynamic saliency model learning. A framework is proposed that enables the use of eye-tracking data to train an arbitrary machine learning algorithm, using arbitrary features derived from the scene. We evaluate the methodology using features from a state-of-the-art dynamic saliency model and show how simple machine learning algorithms can be trained to distinguish between visually salient and non-salient parts of the scene.
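The abstract does not specify the learning setup in code. A minimal sketch of the general idea, assuming logistic regression as the "simple machine learning algorithm," per-patch scene features as inputs, and binary labels obtained by thresholding an eye-tracking fixation map (the synthetic data below stands in for real features and fixations, and all function names are hypothetical):

```python
import numpy as np

def train_saliency_classifier(features, fixated, lr=0.1, epochs=200):
    """Fit a logistic-regression classifier by gradient descent.

    features : (n_samples, n_features) array of per-patch scene features.
    fixated  : 0/1 vector, 1 where eye-tracking fixation density was high.
    """
    X = np.hstack([features, np.ones((features.shape[0], 1))])  # add bias column
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # predicted saliency probability
        w -= lr * X.T @ (p - fixated) / len(fixated)  # cross-entropy gradient step
    return w

def predict_saliency(w, features):
    """Return a saliency probability for each patch."""
    X = np.hstack([features, np.ones((features.shape[0], 1))])
    return 1.0 / (1.0 + np.exp(-X @ w))

# Synthetic stand-in data: "salient" patches drawn with higher feature values
# (e.g. stronger motion energy) than "non-salient" ones.
rng = np.random.default_rng(0)
salient = rng.normal(2.0, 0.5, (100, 2))
non_salient = rng.normal(0.0, 0.5, (100, 2))
X = np.vstack([salient, non_salient])
y = np.concatenate([np.ones(100), np.zeros(100)])

w = train_saliency_classifier(X, y)
acc = np.mean((predict_saliency(w, X) > 0.5) == y)
```

Because the framework treats both the learner and the features as interchangeable, the logistic regression here could be swapped for any classifier that accepts feature vectors and binary fixation labels.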

Paper Details

Date Published: 30 April 2012
PDF: 11 pages
Proc. SPIE 8436, Optics, Photonics, and Digital Technologies for Multimedia Applications II, 84360N (30 April 2012); doi: 10.1117/12.923559
Author Affiliations:
Dubravko Culibrk, Univ. of Novi Sad (Serbia)
Srdjan Sladojevic, Univ. of Novi Sad (Serbia)
Nicolas Riche, Univ. of Mons (Belgium)
Matei Mancas, Univ. of Mons (Belgium)
Vladimir Crnojevic, Univ. of Novi Sad (Serbia)

Published in SPIE Proceedings Vol. 8436:
Optics, Photonics, and Digital Technologies for Multimedia Applications II
Peter Schelkens; Touradj Ebrahimi; Gabriel Cristóbal; Frédéric Truchetet; Pasi Saarikko, Editor(s)

© SPIE.