
Proceedings Paper
Visual attention in egocentric field-of-view using RGB-D data

Format | Member Price | Non-Member Price
---|---|---
PDF | $17.00 | $21.00
Paper Abstract
Most existing solutions for predicting visual attention focus solely on 2D images and disregard any depth information. This has always been a weak point, since depth is an inseparable part of biological vision. This paper presents a novel method of saliency map generation based on the results of our experiments with egocentric visual attention and an investigation of its correlation with perceived depth. We propose a model that predicts attention using a superpixel representation, under the assumption that contrasting objects are usually salient and have a sparser spatial distribution of superpixels than their background. To incorporate depth information into this model, we propose three different depth techniques. The evaluation is done on our new RGB-D dataset created with SMI eye-tracker glasses and a KinectV2 device.
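The abstract does not give the model's equations, but the core idea (superpixel-level contrast fused with a depth cue) can be illustrated with a minimal sketch. All names, parameters, and the particular fusion scheme below are assumptions for illustration, not the authors' method:

```python
import numpy as np

def superpixel_saliency(colors, positions, depths, sigma_p=0.25, w_depth=0.5):
    """Toy contrast-based saliency over superpixels (illustrative sketch).

    colors:    (N, 3) mean color per superpixel (e.g. Lab or RGB, in [0, 1])
    positions: (N, 2) normalized superpixel centroids in [0, 1]
    depths:    (N,)   normalized mean depth per superpixel

    Returns a saliency score in [0, 1] per superpixel.
    """
    n = len(colors)
    sal = np.zeros(n)
    for i in range(n):
        # Color contrast against every other superpixel.
        dc = np.linalg.norm(colors - colors[i], axis=1)
        # Spatial distance: nearby contrast is weighted more heavily.
        dp = np.linalg.norm(positions - positions[i], axis=1)
        # Depth contrast, one possible way to fold in the depth channel.
        dd = np.abs(depths - depths[i])
        weights = np.exp(-dp / sigma_p)
        sal[i] = np.sum(weights * (dc + w_depth * dd))
    # Normalize scores to [0, 1].
    sal -= sal.min()
    if sal.max() > 0:
        sal /= sal.max()
    return sal
```

A superpixel that differs in color and depth from a uniform background receives the highest score, matching the abstract's assumption that contrasting, spatially sparse regions attract attention.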
Paper Details
Date Published: 17 March 2017
PDF: 8 pages
Proc. SPIE 10341, Ninth International Conference on Machine Vision (ICMV 2016), 103410T (17 March 2017); doi: 10.1117/12.2268617
Published in SPIE Proceedings Vol. 10341:
Ninth International Conference on Machine Vision (ICMV 2016)
Antanas Verikas; Petia Radeva; Dmitry P. Nikolaev; Wei Zhang; Jianhong Zhou, Editor(s)
Author Affiliations
Veronika Olesova, Slovenska Technicka Univ. (Slovakia)
Wanda Benesova, Slovenska Technicka Univ. (Slovakia)
Patrik Polatsek, Slovenska Technicka Univ. (Slovakia)
© SPIE. Terms of Use
