
Proceedings Paper

Pragmatic approach to visual servoing of robots in a vision-oriented workcell
Author(s): W. S. Wijesoma; D. F. H. Wolfe; R. J. Richards

Paper Abstract

The application of robots to automated assembly tasks has been hampered by an inability to operate robots within an unstructured and unconstrained environment at acceptable cost and performance. Such operation necessitates the integration of high-level sensory capabilities -- and in particular vision sensing -- into the overall control system at various levels of the control hierarchy. The paper summarizes past and present approaches to robot guidance using vision from the perspective of the classification due to Sanderson and Weiss. A two-pronged approach is suggested as a promising strategy for integrating vision into a robot for visual servoing: to feed vision-derived spatial information (optimized for robot position control) directly back into a robot controller cast in task or operational space, and to use vision as the primary sensing medium for all critical spatial measurement tasks in the workspace. This offers the potential for low-cost, high-performance visual servoing through a greatly reduced reliance on the fidelity of the individual components. The paper presents this new strategy for visual guidance, supported by experimental results; it is based on a new model of natural vision systems and on a strategy that addresses the identified performance needs of the diverse vision tasks within a vision-oriented robotic workcell. A practical implementation of this generalized approach has been developed which offers sub-pixel resolution at low cost. This system has, in turn, been integrated into a direct vision feedback control application (involving a two-link rig and 2 degrees of freedom of a PUMA 560 arm) and has been used to demonstrate the execution of pick-and-place tasks. End-effector placement has been demonstrated with positioning accuracy limited only by the camera, and significant robustness to kinematic model and vision calibration errors has been observed.
Overall, therefore, it is shown that low-cost direct vision feedback is possible and that it can offer significant improvements over existing strategies.
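The direct vision feedback idea in the abstract can be illustrated with a minimal sketch (a hypothetical example, not the authors' implementation): a proportional task-space servo loop for a two-link planar arm in which the camera measurement of the end-effector, rather than the kinematic model, closes the loop. All names, link lengths, and gains below are assumptions for illustration; the Jacobian is given deliberately miscalibrated link lengths to show why model errors affect only the transient, not the final placement.

```python
import math

# Hedged sketch (not the paper's implementation): direct vision feedback for a
# 2-link planar arm. The "camera" measurement of the end-effector closes the
# loop, while the control Jacobian uses deliberately wrong link lengths --
# illustrating the claimed robustness to kinematic model errors, which slow
# convergence but do not bias the final position.

L1_TRUE, L2_TRUE = 0.4, 0.3        # actual link lengths (m)
L1_MODEL, L2_MODEL = 0.44, 0.27    # miscalibrated model lengths (10% error)
GAIN, DT, STEPS = 5.0, 0.01, 3000  # proportional gain, time step, iterations

def camera_measurement(q1, q2):
    """Vision-derived end-effector position (true forward kinematics)."""
    x = L1_TRUE * math.cos(q1) + L2_TRUE * math.cos(q1 + q2)
    y = L1_TRUE * math.sin(q1) + L2_TRUE * math.sin(q1 + q2)
    return x, y

def model_jacobian_transpose(q1, q2, ex, ey):
    """Map the Cartesian error into joint space via the (erroneous) model J^T."""
    j11 = -L1_MODEL * math.sin(q1) - L2_MODEL * math.sin(q1 + q2)
    j12 = -L2_MODEL * math.sin(q1 + q2)
    j21 = L1_MODEL * math.cos(q1) + L2_MODEL * math.cos(q1 + q2)
    j22 = L2_MODEL * math.cos(q1 + q2)
    return j11 * ex + j21 * ey, j12 * ex + j22 * ey

def servo_to(target, q1=0.5, q2=0.5):
    """Proportional task-space visual servo loop."""
    for _ in range(STEPS):
        x, y = camera_measurement(q1, q2)       # feedback comes from vision
        ex, ey = target[0] - x, target[1] - y   # task-space error
        dq1, dq2 = model_jacobian_transpose(q1, q2, ex, ey)
        q1 += GAIN * dq1 * DT                   # integrate joint-rate command
        q2 += GAIN * dq2 * DT
    return camera_measurement(q1, q2)
```

Because the error is formed from the measured, not the modelled, end-effector position, the loop settles on the target despite the link-length errors in the Jacobian; with the kinematic model in the feedback path, the same errors would appear directly as placement bias.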

Paper Details

Date Published: 13 October 1994
PDF: 12 pages
Proc. SPIE 2354, Intelligent Robots and Computer Vision XIII: 3D Vision, Product Inspection, and Active Vision, (13 October 1994); doi: 10.1117/12.189079
W. S. Wijesoma, Univ. of Moratuwa (Sri Lanka)
D. F. H. Wolfe, Univ. of Cambridge (United Kingdom)
R. J. Richards, Univ. of Cambridge (United Kingdom)


Published in SPIE Proceedings Vol. 2354:
Intelligent Robots and Computer Vision XIII: 3D Vision, Product Inspection, and Active Vision
David P. Casasent, Editor(s)

© SPIE.