
Proceedings Paper

Explorations In A Sensor Fusion Space
Author(s): Peter K. Allen; Kenneth S. Roberts; Paul Michelman

Paper Abstract

Using a dextrous hand involves understanding the relationships between the three kinds of sensor outputs available: joint space positions, tendon forces, and tactile contacts. Dextrous manipulation, then, is a fusion of these three sensing modalities. This paper is an exploration of using a dextrous, multi-fingered hand (Utah/MIT hand) for high-level object recognition tasks. The paradigm is model-based recognition in which the objects are modeled and recovered as superquadrics, which are shown to have a number of important attributes that make them well suited for such a task. Experiments have been performed to recover the shape of objects using sparse contact point data from the hand kinematics and forces, with promising results. We also present our approach to using tactile data in conjunction with the dextrous hand to build a library of grasping and exploration primitives that can be used in recognizing and grasping more complex multi-part objects.
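The superquadric models used for recognition in the abstract are commonly defined by an inside-outside function (Barr's formulation); a point lies on the surface when the function equals 1, inside when it is below 1, and outside when above. A minimal sketch follows; the parameter names (a1, a2, a3 for the axis extents, e1, e2 for the shape exponents) are illustrative conventions, not taken from this paper, and the fitting procedure the authors used on sparse contact data is not reproduced here.

```python
def superquadric_f(p, a1, a2, a3, e1, e2):
    """Inside-outside function of a superquadric.

    F(p) < 1: p is inside the shape
    F(p) = 1: p lies on the surface
    F(p) > 1: p is outside the shape

    a1, a2, a3 are the extents along x, y, z; e1, e2 control the
    squareness (both 1 gives an ellipsoid, values near 0 give a box).
    """
    x, y, z = p
    # Squaring before the fractional exponent keeps the base non-negative.
    xy_term = ((x / a1) ** 2) ** (1.0 / e2) + ((y / a2) ** 2) ** (1.0 / e2)
    return xy_term ** (e2 / e1) + ((z / a3) ** 2) ** (1.0 / e1)


# With a1 = a2 = a3 = 1 and e1 = e2 = 1, the shape is a unit sphere:
print(superquadric_f((1.0, 0.0, 0.0), 1, 1, 1, 1, 1))  # on the surface: 1.0
print(superquadric_f((0.2, 0.1, 0.1), 1, 1, 1, 1, 1))  # inside: < 1
```

In a recovery setting like the one described, each sparse contact point from the hand kinematics would contribute one residual of the form F(p) - 1, and the shape parameters would be found by minimizing those residuals, e.g. with a nonlinear least-squares solver.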

Paper Details

Date Published: 5 January 1989
PDF: 6 pages
Proc. SPIE 1003, Sensor Fusion: Spatial Reasoning and Scene Interpretation (5 January 1989)
Author Affiliations
Peter K. Allen, Columbia University (United States)
Kenneth S. Roberts, Columbia University (United States)
Paul Michelman, Columbia University (United States)

Published in SPIE Proceedings Vol. 1003:
Sensor Fusion: Spatial Reasoning and Scene Interpretation
Paul S. Schenker, Editor(s)

© SPIE