
Proceedings Paper

Using scale-invariant feature points in visual servoing

Paper Abstract

In this paper, we focus on robust feature selection and investigate the application of the scale-invariant feature transform (SIFT) to robotic visual servoing (RVS). We consider a camera mounted on the endpoint of an anthropomorphic manipulator (eye-in-hand configuration). The objective of such an RVS system is to control the pose of the camera so that a desired relative pose between the camera and the object of interest is maintained. Because SIFT feature-point correspondences are not always unique, feature points with more than one candidate match are disregarded. As the endpoint moves along a trajectory, the robust SIFT feature points are identified; for a similar trajectory, the same selected feature points are then used to track the object in the current view. The correspondences of the remaining robust feature points provide the epipolar geometry of the two scenes, from which, given the camera calibration, the motion of the camera is recovered. The robot joint-angle vector is then determined by solving the inverse kinematics of the manipulator. We show how to select a set of robust features appropriate for the task of visual servoing. Robust SIFT feature points are scale- and rotation-invariant, and remain effective when the current endpoint position is far from, and rotated with respect to, the desired position.
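The ambiguity-rejection step described in the abstract (discarding feature points whose best match is not unique) can be sketched as follows. This is a minimal illustration using toy NumPy descriptor arrays and a nearest-vs-second-nearest distance-ratio criterion; the function name `unique_matches` and the 0.8 threshold are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def unique_matches(desc_a, desc_b, ratio=0.8):
    """Match each descriptor in desc_a to its nearest neighbour in desc_b
    by Euclidean distance, keeping only unambiguous matches: the best
    distance must be clearly smaller than the second-best distance."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # Discard feature points with more than one plausible match.
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches

# Toy descriptors: point 0 has a clearly unique match in desc_b,
# while point 1 is equidistant from two candidates and is rejected.
a = np.array([[1.0, 0.0], [0.025, 1.0]])
b = np.array([[1.0, 0.1], [0.0, 1.0], [0.05, 1.0]])
print(unique_matches(a, b))  # → [(0, 0)]
```

In a real pipeline, the surviving correspondences would feed the epipolar-geometry estimation the abstract mentions (e.g. an essential-matrix computation given the camera calibration), but that step is outside this sketch.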

Paper Details

Date Published: 25 October 2004
PDF: 8 pages
Proc. SPIE 5603, Machine Vision and its Optomechatronic Applications, (25 October 2004); doi: 10.1117/12.580714
Author Affiliations:
Azad Shademan, Ryerson Univ. (Canada)
Farrokh Janabi-Sharifi, Ryerson Univ. (Canada)


Published in SPIE Proceedings Vol. 5603:
Machine Vision and its Optomechatronic Applications
Shun'ichi Kaneko; Hyungsuck Cho; George K. Knopf; Rainer Tutsch, Editor(s)

© SPIE.