
Proceedings Paper

A cognitive approach to vision for a mobile robot
Author(s): D. Paul Benjamin; Christopher Funk; Damian Lyons

Paper Abstract

We describe a cognitive vision system for a mobile robot. This system works in a manner similar to the human vision system, using saccadic, vergence and pursuit movements to extract information from visual input. At each fixation, the system builds a 3D model of a small region, combining information about distance, shape, texture and motion. These 3D models are embedded within an overall 3D model of the robot's environment. This approach turns the computer vision problem into a search problem, with the goal of constructing a physically realistic model of the entire environment. At each step, the vision system selects a point in the visual input to focus on. The distance, shape, texture and motion information are computed in a small region and used to build a mesh in a 3D virtual world. Background knowledge is used to extend this structure as appropriate, e.g. if a patch of wall is seen, it is hypothesized to be part of a larger wall and the entire wall is created in the virtual world, or if part of an object is recognized, the whole object's mesh is retrieved from the library of objects and placed into the virtual world. The input from the real camera is then compared with the input from the virtual camera using local Gaussians, producing an error mask that indicates the main differences between them. This mask is used to select the next points to focus on. This approach permits us to apply computationally expensive algorithms to small localities, generating very accurate models. It is also task-oriented, permitting the robot to use its knowledge about its task and goals to decide which parts of the environment need to be examined. The software components of this architecture include PhysX for the 3D virtual world, OpenCV and the Point Cloud Library for visual processing, and the Soar cognitive architecture, which controls the perceptual processing and robot planning. The hardware is a custom-built pan-tilt stereo color camera. We describe experiments using both static and moving objects.
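The error-mask step described in the abstract — comparing the real camera image against the virtual camera's render with local Gaussians and using the largest discrepancy to pick the next fixation point — can be sketched roughly as follows. This is an illustrative NumPy implementation, not the authors' code: the function names, kernel size, and argmax selection rule are assumptions, and the paper's actual pipeline uses OpenCV rather than a hand-rolled convolution.

```python
import numpy as np

def gaussian_kernel(size=7, sigma=1.5):
    """Normalized 2D Gaussian kernel used to smooth the difference image locally."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def next_fixation(real, virtual, size=7, sigma=1.5):
    """Compare real and virtual camera images; return ((row, col), error_mask).

    The error mask is the absolute difference smoothed by a local Gaussian;
    the next fixation point is taken as its maximum (an assumed selection rule).
    """
    diff = np.abs(real.astype(float) - virtual.astype(float))
    kernel = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(diff, pad)          # zero-pad so output matches input size
    mask = np.zeros_like(diff)
    h, w = diff.shape
    for i in range(h):                  # naive direct convolution, fine for a sketch
        for j in range(w):
            mask[i, j] = np.sum(padded[i:i + size, j:j + size] * kernel)
    fix = np.unravel_index(np.argmax(mask), mask.shape)
    return fix, mask

if __name__ == "__main__":
    # Toy example: the virtual world is empty, but the real image contains a
    # small bright patch the model has not yet explained.
    real = np.zeros((32, 32))
    virtual = np.zeros((32, 32))
    real[10:14, 20:24] = 1.0
    fix, mask = next_fixation(real, virtual)
    print("next fixation:", fix)        # lands inside the unexplained patch
```

In the full system this selection would be repeated after each fixation, with the virtual render updated as new mesh is added, so the mask shrinks as the model converges on the real scene.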

Paper Details

Date Published: 29 May 2013
PDF: 7 pages
Proc. SPIE 8756, Multisensor, Multisource Information Fusion: Architectures, Algorithms, and Applications 2013, 87560I (29 May 2013); doi: 10.1117/12.2018856
Author Affiliations:
D. Paul Benjamin, Pace Univ. (United States)
Christopher Funk, Pace Univ. (United States)
Damian Lyons, Fordham Univ. (United States)


Published in SPIE Proceedings Vol. 8756:
Multisensor, Multisource Information Fusion: Architectures, Algorithms, and Applications 2013
Jerome J. Braun, Editor(s)

© SPIE.