
Proceedings Paper

Vision-based docking under variable lighting conditions
Author(s): Robin R. Murphy; Jeffrey A. Hyams; Brian W. Minten; Mark Micire

Paper Abstract

This paper describes our progress in near-range (within 0 to 2 meters) ego-centric docking using vision under variable lighting conditions (indoors, outdoors, dusk). The docking behavior is fully autonomous and reactive: the robot responds directly to the ratio of the pixel counts of two colored fiducials, without constructing an explicit model of the landmark. The approach is similar to visual homing in insects, has a low computational complexity of O(n²), and supports a fast update rate. To segment the colored fiducials accurately under variable lighting conditions, the spherical coordinate transform (SCT) color space is used, rather than RGB or HSV, in conjunction with an adaptive segmentation algorithm. Experiments were conducted with a daughter robot docking with a mother robot. Results showed that (1) vision-based docking is faster than teleoperation yet equivalent in performance, and (2) adaptive segmentation is more robust under challenging lighting conditions, including outdoors.
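The abstract names the technique (reactive homing on the pixel-count ratio of two SCT-segmented fiducials) but not its implementation. The sketch below, in Python with NumPy, is a minimal illustration under stated assumptions: the angle windows, the gain, and all function names are hypothetical, and the paper's adaptive segmentation (which would adjust the color windows online as lighting changes) is not reproduced here.

import numpy as np

def rgb_to_sct(img):
    # Treat each (R, G, B) pixel as a 3-D vector. rho is the vector
    # magnitude (brightness); the two angles encode chromaticity and are
    # largely invariant to changes in illumination intensity.
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    rho = np.sqrt(r**2 + g**2 + b**2) + 1e-9            # avoid divide-by-zero
    angle_a = np.arccos(np.clip(b / rho, -1.0, 1.0))    # polar angle from the B axis
    angle_b = np.arctan2(g, r)                          # azimuth in the R-G plane
    return rho, angle_a, angle_b

def count_fiducial_pixels(img, a_range, b_range):
    # Count pixels whose SCT angles fall inside a fixed color window.
    # (Fixed windows are an illustrative simplification of the paper's
    # adaptive segmentation.)
    _, angle_a, angle_b = rgb_to_sct(img)
    mask = ((angle_a >= a_range[0]) & (angle_a <= a_range[1]) &
            (angle_b >= b_range[0]) & (angle_b <= b_range[1]))
    return int(mask.sum())

def docking_command(img, left_window, right_window, gain=1.0):
    # Reactive steering from the imbalance of the two fiducial pixel
    # counts: no explicit landmark model is built. A value of 0.0 means
    # the robot is centered on the dock.
    n_left = count_fiducial_pixels(img, *left_window)
    n_right = count_fiducial_pixels(img, *right_window)
    total = n_left + n_right
    if total == 0:
        return None                                     # fiducials lost: trigger a search
    return gain * (n_left - n_right) / total            # >0 turn left, <0 turn right

Scanning the image once per frame is linear in the number of pixels, i.e. O(n²) for an n-by-n image, matching the complexity quoted above; thresholding on the two SCT angles rather than on raw RGB is what buys tolerance to brightness changes.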

Paper Details

Date Published: 10 July 2000
PDF: 7 pages
Proc. SPIE 4024, Unmanned Ground Vehicle Technology II, (10 July 2000); doi: 10.1117/12.391616
Author Affiliations:
Robin R. Murphy, Univ. of South Florida (United States)
Jeffrey A. Hyams, Univ. of South Florida (United States)
Brian W. Minten, Univ. of South Florida (United States)
Mark Micire, Univ. of South Florida (United States)


Published in SPIE Proceedings Vol. 4024:
Unmanned Ground Vehicle Technology II
Grant R. Gerhart; Robert W. Gunderson; Chuck M. Shoemaker, Editor(s)

© SPIE.