
Proceedings Paper

Fusion Of Vision And Touch For Spatio-Temporal Reasoning In Learning Manipulation Tasks
Author(s): Jan M. Zytkow; Peter W. Pachowicz

Paper Abstract

This paper presents a framework for the fusion of vision and touch, useful to a robot arm learning various manipulation tasks. Initially the robot has poor knowledge of the laws that govern the behavior of objects and incomplete knowledge of the physical features of individual objects. We analyse the fusion of vision and touch for learning object manipulation tasks, various methods of feature acquisition, and an architecture that provides feedback between sensing, manipulation, and learning. Simple control loops allow the system to execute manipulation tasks and to learn to select control-parameter values that prevent faults and object damage. The main emphasis is on learning. In sections 5 and 6 we demonstrate how the system discovers new regularities, how it recognizes new and useful object properties, and how performance on similar tasks improves through application of newly acquired knowledge. Sections 1-4 describe a preliminary design of an architecture that supports sensor fusion and learning by improving the manipulation skills of a robot arm.
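The control loop the abstract describes can be pictured as a minimal sketch: sensory feedback (vision detecting slip, touch detecting excessive force) drives adjustment of a single control parameter until a fault-free value is found. All names here (`execute_grasp`, `learn_grip_force`) and the fault thresholds are illustrative assumptions for exposition, not the paper's actual implementation.

```python
# Hedged sketch of a sensing-manipulation-learning feedback loop.
# The simulated sensor model and thresholds are hypothetical.

def execute_grasp(force):
    """Simulated manipulation step: stand-ins for vision (slip cue)
    and touch (deformation cue) feedback at a given grip force."""
    slip = force < 3.0      # too little force: object slips
    damage = force > 7.0    # too much force: object is damaged
    return slip, damage

def learn_grip_force(initial_force=1.0, step=0.5, max_trials=50):
    """Adjust the control parameter (grip force) from sensory feedback
    until a fault-free value is found; that value is the learned knowledge
    reused on similar tasks."""
    force = initial_force
    for _ in range(max_trials):
        slip, damage = execute_grasp(force)
        if slip:
            force += step        # feedback: grip harder to stop slipping
        elif damage:
            force -= step / 2    # feedback: ease off to avoid damage
        else:
            return force         # fault-free parameter value learned
    raise RuntimeError("no safe grip force found")
```

On the toy sensor model above, the loop converges to a force in the fault-free band and could store it for reuse, mirroring the paper's idea of improving performance on similar tasks with acquired knowledge.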

Paper Details

Date Published: 1 March 1990
PDF: 12 pages
Proc. SPIE 1198, Sensor Fusion II: Human and Machine Strategies, (1 March 1990); doi: 10.1117/12.969994
Author Affiliations:
Jan M. Zytkow, George Mason University (United States)
Peter W. Pachowicz, George Mason University (United States)

Published in SPIE Proceedings Vol. 1198:
Sensor Fusion II: Human and Machine Strategies
Paul S. Schenker, Editor(s)

© SPIE.