Proceedings Paper

Visually guided touching and manual tracking
Author(s): Peter A. Sandon
Paper Abstract

Animate vision depends on an ability to choose a region of the visual environment for task-specific processing. This processing may involve extraction of image features for object classification or identification, or it may involve extraction of viewpoint parameters, such as position, scale, and orientation, for guiding movement. It is the role of selective attention to choose the region to be processed in a task-dependent way. This paper describes a real-time implementation of a vision-robotics system that uses the location information provided by the attention mechanism to guide eye movements and arm movements in touching and manual tracking behaviors. The approach makes use of a 3-D retinocentric coordinate frame for representing position information, and differential kinematics for relating the eye and arm motor systems to this retinocentric sensory frame.
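The abstract's closing idea, relating the arm motor system to a 3-D retinocentric sensory frame via differential kinematics, can be illustrated with a minimal visual-servoing sketch. This is an assumption-laden illustration, not the paper's implementation: the Jacobian `J`, the function name `joint_velocity_command`, and the proportional gain are all hypothetical.

```python
import numpy as np

def joint_velocity_command(J, target_xyz, hand_xyz, gain=1.0):
    """Map a 3-D retinocentric position error to joint velocities.

    J          : (3, n) Jacobian relating joint velocities to the rate
                 of change of the hand's retinocentric position.
    target_xyz : attended target position in the retinocentric frame.
    hand_xyz   : current hand position in the same frame.
    """
    error = np.asarray(target_xyz, dtype=float) - np.asarray(hand_xyz, dtype=float)
    # The pseudoinverse maps the desired Cartesian velocity
    # (gain * error) back into joint space, driving the hand
    # toward the attended location.
    return gain * np.linalg.pinv(J) @ error
```

Under this kind of scheme, the same retinocentric error signal could drive either the eye or the arm motor system, which is consistent with the paper's use of one attention-derived location for both touching and tracking behaviors.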

Paper Details

Date Published: 11 March 1993
PDF: 12 pages
Proc. SPIE 1964, Applications of Artificial Intelligence 1993: Machine Vision and Robotics, (11 March 1993); doi: 10.1117/12.141774
Author Affiliations:
Peter A. Sandon, Dartmouth College (United States)

Published in SPIE Proceedings Vol. 1964:
Applications of Artificial Intelligence 1993: Machine Vision and Robotics
Kim L. Boyer; Louise Stark, Editor(s)

© SPIE.