
Proceedings Paper

Fusion without Representation
Author(s): Monnett Hanvey Soldo

Paper Abstract

The topic of this conference is how various sensors can be used together to support robot mobility and related tasks. The support needed - what the sensors are used to create - is an understanding of the layout of the environment, the nature of its (other) mobile elements, and so on, so that the robot can at least navigate and avoid collisions. This "understanding" may take the form of an explicit, symbolic representation (model) whose symbols can be manipulated by a planner and eventually used to direct robot motion. We demonstrate, however, that it is possible and sometimes desirable to bypass this representation phase, allowing the sensors to influence robot behavior directly (i.e., allowing the "understanding" to be procedural), and we show that this approach achieves effective robot motion. We present results obtained on a real robot that procedurally integrates odometry, sonar, and vision - fusing not only different sensors but also data from the same sensors over time - in real-time navigation and exploration.
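The style of procedural fusion the abstract describes can be illustrated with a minimal sketch: sensor readings drive a motion command directly, with temporal smoothing standing in for fusion over time and no intermediate symbolic map. This is not the paper's implementation; the sensor layout, distances, and gains below are illustrative assumptions.

```python
# Minimal sketch of representation-free (procedural) sensor fusion:
# fused sonar readings map straight to a motion command, with no
# symbolic model of the environment in between. All geometry and
# gain values are illustrative assumptions, not the paper's.

from collections import deque


class ReactiveController:
    """Fuses left/right sonar ranges over time and maps them to motion."""

    def __init__(self, history=3):
        # A short per-sensor history gives temporal fusion (smoothing
        # of data from the same sensor over time).
        self.left = deque(maxlen=history)
        self.right = deque(maxlen=history)

    def update(self, left_range, right_range):
        """Record the latest sonar ranges (in meters)."""
        self.left.append(left_range)
        self.right.append(right_range)

    def command(self):
        """Return (forward_speed, turn_rate) directly from fused readings."""
        l = sum(self.left) / len(self.left)
        r = sum(self.right) / len(self.right)
        nearest = min(l, r)
        # Slow down as obstacles approach; stop inside 0.2 m.
        speed = max(0.0, min(1.0, (nearest - 0.2) / 2.0))
        # Turn away from the closer side, proportional to the asymmetry
        # (positive turn = turn left, away from a right-side obstacle).
        turn = 0.5 * (l - r)
        return speed, turn


ctrl = ReactiveController()
ctrl.update(2.0, 0.5)  # obstacle close on the right
speed, turn = ctrl.command()
```

With the hypothetical readings above (2.0 m on the left, 0.5 m on the right), the controller slows down and turns left, all without ever constructing a map for a planner to consult.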

Paper Details

Date Published: 1 March 1990
PDF: 7 pages
Proc. SPIE 1198, Sensor Fusion II: Human and Machine Strategies, (1 March 1990); doi: 10.1117/12.970005
Author Affiliations:
Monnett Hanvey Soldo, Columbia University (United States)

Published in SPIE Proceedings Vol. 1198:
Sensor Fusion II: Human and Machine Strategies
Paul S. Schenker, Editor(s)

© SPIE.