
Proceedings Paper

General learning algorithm for robot vision
Author(s): Shree K. Nayar; Hiroshi Murase; Sameer A. Nene

Paper Abstract

The problem of vision-based robot positioning and tracking is addressed. A general learning algorithm is presented for determining the mapping between robot position and object appearance. The robot is first moved through several displacements with respect to its desired position, and a large set of object images is acquired. This image set is compressed using principal component analysis to obtain a low-dimensional subspace. Variations in object images due to robot displacements are represented as a compact parametrized manifold in the subspace. While positioning or tracking, errors in end-effector coordinates are efficiently computed from a single brightness image using the parametric manifold representation. The learning component enables accurate visual control without any prior hand-eye calibration. Several experiments have been conducted to demonstrate the practical feasibility of the proposed positioning/tracking approach and its relevance to industrial applications.
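The pipeline described in the abstract — acquire images at known displacements, compress them with principal component analysis, and recover a displacement from a single new image via the parametrized manifold — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: it assumes a 1-D displacement and synthetic 1-D "brightness images", uses SVD for the PCA step, and approximates the continuous manifold by nearest-neighbor search over the training projections.

```python
import numpy as np

# Training: synthetic "images" acquired at known displacements.
# (Assumption: a single scalar displacement for illustration; the paper
# considers end-effector coordinates and real brightness images.)
displacements = np.linspace(-1.0, 1.0, 21)

def render(d):
    # Stand-in for an object image that varies smoothly with displacement:
    # a Gaussian intensity profile whose peak shifts with d.
    x = np.linspace(0.0, 1.0, 64)
    return np.exp(-((x - 0.5 - 0.3 * d) ** 2) / 0.02)

images = np.stack([render(d) for d in displacements])   # (21, 64)
mean = images.mean(axis=0)
X = images - mean

# Principal component analysis via SVD: keep a low-dimensional subspace.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
k = 3
basis = Vt[:k]                                          # (k, 64) eigenimages

# Parametrized manifold: projections of the training images into the
# subspace, indexed by the displacement at which each was acquired.
manifold = X @ basis.T                                  # (21, k)

def estimate_displacement(image):
    """Project a single brightness image and return the displacement of
    the closest point on the learned manifold."""
    p = (image - mean) @ basis.T
    i = np.argmin(np.linalg.norm(manifold - p, axis=1))
    return displacements[i]

# Positioning/tracking: one image at an unknown displacement yields the
# positioning error directly, with no hand-eye calibration.
d_true = 0.37
d_est = estimate_displacement(render(d_true))
```

In the paper the manifold is a continuous parametrized curve interpolated through the projections, so the recovered coordinates are not limited to the training grid; the discrete nearest-neighbor lookup above is a simplification.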

Paper Details

Date Published: 30 June 1994
PDF: 8 pages
Proc. SPIE 2304, Neural and Stochastic Methods in Image and Signal Processing III, (30 June 1994); doi: 10.1117/12.179225
Author Affiliations:
Shree K. Nayar, Columbia Univ. (United States)
Hiroshi Murase, Columbia Univ. (United States)
Sameer A. Nene, Columbia Univ. (United States)

Published in SPIE Proceedings Vol. 2304:
Neural and Stochastic Methods in Image and Signal Processing III
Su-Shing Chen, Editor(s)

© SPIE.