
Proceedings Paper

An evaluation of attention models for use in SLAM
Author(s): Samuel Dodge; Lina Karam

Paper Abstract

In this paper we study the application of visual saliency models to the simultaneous localization and mapping (SLAM) problem. We consider visual SLAM, where the location of the camera and a map of the environment can be generated using images from a single moving camera. In visual SLAM, the interest point detector is of key importance. This detector must be invariant to certain image transformations so that features can be matched across different frames. Recent work has used a model of human visual attention to detect interest points; however, it is unclear which attention model is best suited for this purpose. To this end, we compare the performance of interest points from four saliency models (Itti, GBVS, RARE, and AWS) with the performance of four traditional interest point detectors (Harris, Shi-Tomasi, SIFT, and FAST). We evaluate these detectors under several different types of image transformation and find that the Itti saliency model, in general, achieves the best performance in terms of keypoint repeatability.
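The repeatability criterion used in the abstract can be illustrated with a small, self-contained sketch: detect keypoints in a frame and in a transformed copy, then measure the fraction of keypoints re-detected near their expected positions. This is not the paper's evaluation code; the Harris implementation, the checkerboard test image, the cyclic shift standing in for a camera motion, and all thresholds below are illustrative assumptions.

```python
import numpy as np

def box_filter(a, r=2):
    """Sum over a (2r+1)x(2r+1) window using shifted copies of a zero-padded array."""
    p = np.pad(a, r)
    out = np.zeros_like(a, dtype=float)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out

def harris_keypoints(img, k=0.04, thresh_rel=0.1, margin=3):
    """Harris corner response with 3x3 non-maximum suppression; returns (y, x) tuples."""
    Ix = np.gradient(img.astype(float), axis=1)
    Iy = np.gradient(img.astype(float), axis=0)
    Sxx, Syy, Sxy = box_filter(Ix * Ix), box_filter(Iy * Iy), box_filter(Ix * Iy)
    R = Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2  # det(M) - k * trace(M)^2
    t = thresh_rel * R.max()
    H, W = R.shape
    kps = []
    for y in range(margin, H - margin):
        for x in range(margin, W - margin):
            if R[y, x] > t and R[y, x] >= R[y - 1:y + 2, x - 1:x + 2].max():
                kps.append((y, x))
    return kps

def repeatability(kps_ref, kps_warp, shift, shape, eps=2.0):
    """Fraction of reference keypoints re-detected within eps pixels of their shifted position."""
    if not kps_ref or not kps_warp:
        return 0.0
    kw = np.asarray(kps_warp, float)
    hits = 0
    for y, x in kps_ref:
        ty, tx = (y + shift[0]) % shape[0], (x + shift[1]) % shape[1]
        if np.hypot(kw[:, 0] - ty, kw[:, 1] - tx).min() <= eps:
            hits += 1
    return hits / len(kps_ref)

# Demo: a synthetic checkerboard and a cyclically shifted copy stand in for two frames.
n, sq = 64, 8
img = ((np.arange(n)[:, None] // sq + np.arange(n)[None, :] // sq) % 2).astype(float)
shift = (5, 3)
shifted = np.roll(np.roll(img, shift[0], axis=0), shift[1], axis=1)
rep = repeatability(harris_keypoints(img), harris_keypoints(shifted), shift, img.shape)
```

The same scoring loop applies unchanged when `harris_keypoints` is swapped for a saliency-based detector (e.g. taking local maxima of an Itti saliency map), which is how the abstract's comparison across detectors can be framed.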

Paper Details

Date Published: 3 February 2014
PDF: 7 pages
Proc. SPIE 9025, Intelligent Robots and Computer Vision XXXI: Algorithms and Techniques, 90250M (3 February 2014); doi: 10.1117/12.2043042
Author Affiliations:
Samuel Dodge, Arizona State Univ. (United States)
Lina Karam, Arizona State Univ. (United States)

Published in SPIE Proceedings Vol. 9025:
Intelligent Robots and Computer Vision XXXI: Algorithms and Techniques
Juha Röning; David Casasent, Editor(s)

© SPIE.