Proceedings Paper

Depth in the visual attention modelling from the egocentric perspective of view

Paper Abstract

Extensive research has been conducted in the field of visual attention modelling over the past years. However, egocentric visual attention in real environments has not yet been thoroughly studied. We propose a method for conducting automated user studies on egocentric visual attention in a laboratory setting. The goal of our method is to study the distance of objects from the observer (their depth) and its influence on egocentric visual attention. User studies based on the proposed method were conducted on a sample of 37 participants, and our own egocentric dataset was created. The whole experimental and evaluation process was designed and realized using advanced methods of computer vision. The results of our research are ground-truth values of egocentric visual attention and their relation to scene depth, approximated as a depth-weighting saliency function. The depth-weighting function was applied to state-of-the-art models and evaluated. Our enhanced models outperformed current depth-weighting saliency models.
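The abstract describes re-weighting a saliency map by a function of per-pixel depth. As a minimal sketch of that idea (the paper's fitted weighting function is not reproduced here; the Gaussian form, the function name `apply_depth_weighting`, and the parameters `d0` and `sigma` are illustrative assumptions):

```python
import numpy as np

def apply_depth_weighting(saliency, depth, weight_fn):
    """Re-weight a saliency map by a depth-dependent function.

    saliency  : 2-D array of per-pixel saliency values
    depth     : 2-D array of per-pixel depth, same shape as saliency
    weight_fn : maps depth values to multiplicative weights
    """
    weighted = saliency * weight_fn(depth)
    # Renormalize to [0, 1] so the enhanced map stays comparable
    # to the original model's output.
    return weighted / weighted.max()

# Hypothetical Gaussian weighting centred on a preferred depth d0;
# the actual function in the paper is estimated from user-study data.
d0, sigma = 2.0, 1.0
gaussian_weight = lambda d: np.exp(-((d - d0) ** 2) / (2 * sigma ** 2))

sal = np.random.rand(4, 4)                    # toy saliency map
dep = np.random.uniform(0.5, 5.0, (4, 4))     # toy depth map (metres)
out = apply_depth_weighting(sal, dep, gaussian_weight)
```

Any existing saliency model's output can be passed in as `saliency`, which matches the abstract's claim that the weighting was applied on top of state-of-the-art models.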

Paper Details

Date Published: 15 March 2019
PDF: 11 pages
Proc. SPIE 11041, Eleventh International Conference on Machine Vision (ICMV 2018), 110411A (15 March 2019); doi: 10.1117/12.2523059
Author Affiliations:
Miroslav Laco, Slovak Univ. of Technology (Slovakia)
Wanda Benesova, Slovenska Technicka Univ. (Slovakia)


Published in SPIE Proceedings Vol. 11041:
Eleventh International Conference on Machine Vision (ICMV 2018)
Antanas Verikas; Dmitry P. Nikolaev; Petia Radeva; Jianhong Zhou, Editor(s)

© SPIE.