
Proceedings Paper

Knowledge representation and knowledge module structure for uncalibrated vision-guided robots
Author(s): Minh-Chinh Nguyen; Doan-Trong Bui

Paper Abstract

A new concept for knowledge representation and for the structure of the knowledge module of vision-guided robots is introduced. It allows the robot to acquire, accumulate, and adapt automatically whatever knowledge it may need and to gain experience in the course of its normal operation, i.e., learning by doing, and thus to improve its skills and operating speed over time. The knowledge module is structured into a set of fairly independent submodules, each performing a limited task, and sub-knowledge bases, each containing limited knowledge. Such a structure allows the acquired knowledge to be used flexibly and efficiently. It also makes it easy to extend the knowledge base when the number of degrees of freedom the robot must control increases. The concept was realized and evaluated in real-world experiments with an uncalibrated vision-guided 5-DOF manipulator grasping a variety of differently shaped objects.
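As an illustration only (the paper does not give code, and every class and method name below is hypothetical), a knowledge module decomposed into fairly independent submodules, each owning its own sub-knowledge base and handling one limited task, might be sketched roughly as follows:

```python
# Minimal sketch of the described structure, not the authors' implementation.
# Assumed names: SubKnowledgeBase, Submodule, KnowledgeModule.

class SubKnowledgeBase:
    """Holds the limited knowledge one submodule needs, keyed by situation."""
    def __init__(self):
        self._entries = {}

    def lookup(self, situation):
        return self._entries.get(situation)

    def store(self, situation, action_knowledge):
        # "Learning by doing": knowledge acquired during normal operation is
        # accumulated so that later lookups become faster and more reliable.
        self._entries[situation] = action_knowledge


class Submodule:
    """Performs one limited task (e.g. controlling one degree of freedom)."""
    def __init__(self, name):
        self.name = name
        self.kb = SubKnowledgeBase()

    def act(self, situation, acquire):
        known = self.kb.lookup(situation)
        if known is not None:
            return known                   # fast path: reuse accumulated experience
        result = acquire(situation)        # slow path: acquire new knowledge now
        self.kb.store(situation, result)
        return result


class KnowledgeModule:
    """A set of fairly independent submodules with their own sub-knowledge bases."""
    def __init__(self, task_names):
        self.submodules = {name: Submodule(name) for name in task_names}

    def add_task(self, name):
        # Extending the knowledge base to more controlled degrees of freedom
        # only means adding a submodule, not restructuring the whole module.
        self.submodules[name] = Submodule(name)
```

Under this reading, the flexibility claimed in the abstract comes from each submodule answering from its own small sub-knowledge base, while extension to additional degrees of freedom reduces to adding further submodules rather than reworking one monolithic knowledge base.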

Paper Details

Date Published: 11 October 2000
PDF: 7 pages
Proc. SPIE 4197, Intelligent Robots and Computer Vision XIX: Algorithms, Techniques, and Active Vision, (11 October 2000); doi: 10.1117/12.403776
Author Affiliations:
Minh-Chinh Nguyen, Bundeswehr Univ. Munich (Germany)
Doan-Trong Bui, National Ctr. for Science and Technology of Vietnam (Vietnam)


Published in SPIE Proceedings Vol. 4197:
Intelligent Robots and Computer Vision XIX: Algorithms, Techniques, and Active Vision
David P. Casasent, Editor(s)

© SPIE.