Proceedings Paper
Eye-hand relations for sensor placement and object location determination
The eye-on-hand configuration is an important way to build an active vision system, and most tasks such a system can perform rely on an estimate of the eye-hand relation. Traditionally, the eye-hand relation is defined as a 3D-to-3D coordinate transformation. This definition treats the eye (camera) only as a coordinate frame and is useful for sensor placement. When the same relation is used for object location determination (after the sensor has been actively placed), it causes much larger errors than a more direct definition, which in this situation takes the eye-hand relation to be the 3D-to-2D perspective transformation between the last-joint coordinate frame and the camera image plane. In this paper the meaning of the eye-hand relation is therefore extended to cover different tasks: one definition for sensor placement and one for object location determination. We also present a new method for calculating the eye-hand relations by making the last-joint coordinate frame `touchable.' We call it the direct method because specially designed motions of the robot arm are used to estimate the relation between the robot base frame and the world frame. When only the rotation matrix is obtained, moving the camera twice and calibrating the 3D camera pose at the three stations yields the eye-hand relations; when the full transformation matrix is obtained, calibrating the camera at a single station can already yield the solution. Experimental results with real data are included. The advantages of the direct method are its efficiency, accuracy, and reproducibility.
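The distinction between the two definitions can be sketched numerically. The snippet below is a minimal illustration, not the paper's method: all matrices (rotation, translation, pinhole intrinsics) are made-up example values. It contrasts the 3D-to-3D rigid transform from the last-joint frame to the camera frame with the 3D-to-2D perspective transform that maps last-joint coordinates directly to image-plane pixels.

```python
import numpy as np

# --- Definition 1: 3D-to-3D rigid transform (sensor placement) ---
# X_cam = R @ X_hand + t, written as a 4x4 homogeneous matrix.
R = np.eye(3)                      # example rotation (identity)
t = np.array([0.05, 0.0, 0.10])    # example translation, metres
T_hand_to_cam = np.eye(4)
T_hand_to_cam[:3, :3] = R
T_hand_to_cam[:3, 3] = t

X_hand = np.array([0.2, 0.1, 0.5, 1.0])  # point in last-joint frame
X_cam = T_hand_to_cam @ X_hand           # same point in camera frame

# --- Definition 2: 3D-to-2D perspective transform (object location) ---
# Fold the camera intrinsics K into the relation, so last-joint
# coordinates map straight to image pixels: x ~ K [R | t] X.
K = np.array([[800.0,   0.0, 320.0],     # example pinhole intrinsics
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
P = K @ T_hand_to_cam[:3, :]             # 3x4 projection matrix
x = P @ X_hand
u, v = x[0] / x[2], x[1] / x[2]          # pixel coordinates
```

Composing definition 1 with a separately calibrated camera model gives the same pixel as definition 2; the paper's point is that estimating the combined 3D-to-2D relation directly avoids accumulating the errors of the two-step route.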