
Proceedings Paper

Cooperative Integration of Vision and Touch
Author(s): Peter K. Allen

Paper Abstract

Vision and touch have proved to be powerful sensing modalities in humans. In order to build robots capable of complex behavior, analogues of human vision and taction need to be created, along with strategies for the intelligent use of these sensors in tasks such as object recognition. Two overriding principles dictate a good strategy for the cooperative use of these sensors: 1) the sensors should complement each other in the kind and quality of data they report, and 2) each sensor system should be used in the most robust manner possible. We demonstrate this with a contour-following algorithm that recovers the shape of surfaces of revolution from sparse tactile sensor data. The absolute location of an object in depth can be found more accurately through touch than through vision, but the global properties of where to actively explore with the hand are better found through vision.
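The division of labor described in the abstract can be illustrated with a minimal sketch: assume vision has already supplied the global information (an estimated axis of revolution and a region to explore), and sparse tactile contacts then supply accurate 3D points on the surface. Projecting each contact onto the axis yields a radius-versus-height profile of the surface of revolution. This is an illustrative assumption of mine, not the paper's actual contour-following algorithm; the function name `recover_profile` and the cylinder example are hypothetical.

```python
import numpy as np

def recover_profile(contacts, axis_origin, axis_dir):
    """Estimate a radius-vs-height profile of a surface of revolution
    from sparse 3D contact points, given an axis estimate (e.g. from
    vision). A sketch of the idea, not the paper's algorithm."""
    axis_dir = axis_dir / np.linalg.norm(axis_dir)
    rel = contacts - axis_origin
    heights = rel @ axis_dir                    # position along the axis
    radial = rel - np.outer(heights, axis_dir)  # component off the axis
    radii = np.linalg.norm(radial, axis=1)      # distance from the axis
    order = np.argsort(heights)                 # sort profile by height
    return heights[order], radii[order]

# Example: contacts sampled on a cylinder of radius 2 about the z-axis
theta = np.linspace(0, 2 * np.pi, 8, endpoint=False)
pts = np.column_stack([2 * np.cos(theta),
                       2 * np.sin(theta),
                       np.linspace(0, 1, 8)])
h, r = recover_profile(pts, np.zeros(3), np.array([0.0, 0.0, 1.0]))
# for a cylinder, all recovered radii should be ~2
```

Note how the sketch mirrors the abstract's two principles: the coarse, global quantity (the axis) comes from one modality, while the precise, local quantity (each contact's depth) comes from the other.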

Paper Details

Date Published: 1 March 1990
PDF: 6 pages
Proc. SPIE 1198, Sensor Fusion II: Human and Machine Strategies, (1 March 1990); doi: 10.1117/12.969990
Peter K. Allen, Columbia University (United States)

Published in SPIE Proceedings Vol. 1198:
Sensor Fusion II: Human and Machine Strategies
Paul S. Schenker, Editor(s)

© SPIE.