
Proceedings Paper

Learned saliency transformations for gaze guidance
Author(s): Eleonora Vig; Michael Dorr; Erhardt Barth

Paper Abstract

The saliency of an image or video region indicates how likely a viewer is to fixate that region because of its conspicuity. An intriguing question is how a video region can be changed to make it more or less salient. Here, we address this problem with a machine learning framework that learns, from a large set of eye movements collected on real-world dynamic scenes, how to alter the saliency of a video region locally. We derive saliency transformation rules by performing spatio-temporal contrast manipulations (on a spatio-temporal Laplacian pyramid) on the particular video region. Our goal is to improve visual communication by designing gaze-contingent interactive displays that change, in real time, the saliency distribution of the scene.
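The core operation the abstract describes, rescaling local contrast in the bands of a Laplacian decomposition, can be illustrated with a minimal sketch. This is not the authors' implementation: it uses a 2-D undecimated Laplacian stack on a single frame (the paper uses a spatio-temporal pyramid on video), a binomial blur in place of their filter, and a hypothetical Gaussian window `mask` to localize the manipulation. Because the band-pass layers telescope, summing them reconstructs the input exactly, so a gain of 1 leaves the frame unchanged while gains above or below 1 raise or lower contrast in the windowed region.

```python
import numpy as np

def blur(img, k=np.array([1., 4., 6., 4., 1.]) / 16.0):
    """Separable 5-tap binomial blur (a cheap Gaussian approximation)."""
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 0, img)
    return np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, out)

def laplacian_stack(img, levels=4):
    """Undecimated Laplacian decomposition: band-pass layers + low-pass residual.

    Each band is the difference between successive blur levels, so the
    layers telescope and sum back to the original image exactly.
    """
    bands, cur = [], img.astype(float)
    for _ in range(levels):
        low = blur(cur)
        bands.append(cur - low)
        cur = low
    bands.append(cur)  # low-pass residual
    return bands

def modulate_contrast(img, gain, center, sigma, band_idx=1, levels=4):
    """Rescale one band's contrast inside a Gaussian window around `center`.

    gain > 1 boosts local contrast (more salient), gain < 1 suppresses it.
    """
    bands = laplacian_stack(img, levels)
    yy, xx = np.mgrid[:img.shape[0], :img.shape[1]]
    mask = np.exp(-((yy - center[0]) ** 2 + (xx - center[1]) ** 2)
                  / (2.0 * sigma ** 2))
    bands[band_idx] = bands[band_idx] * (1.0 + (gain - 1.0) * mask)
    return sum(bands)  # recombine band-pass layers and residual
```

A gaze-contingent display along the lines the abstract sketches would re-run `modulate_contrast` per frame, with `center` driven by the tracked gaze position and `gain` chosen per region by the learned transformation rules.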

Paper Details

Date Published: 2 February 2011
PDF: 11 pages
Proc. SPIE 7865, Human Vision and Electronic Imaging XVI, 78650W (2 February 2011); doi: 10.1117/12.876377
Author Affiliations:
Eleonora Vig, Univ. of Lübeck (Germany)
Michael Dorr, Univ. of Lübeck (Germany); Harvard Medical School (United States)
Erhardt Barth, Univ. of Lübeck (Germany)

Published in SPIE Proceedings Vol. 7865:
Human Vision and Electronic Imaging XVI
Bernice E. Rogowitz; Thrasyvoulos N. Pappas, Editor(s)

© SPIE