
Proceedings Paper

Occlusion-free next view planning
Author(s): Jianchao Zeng; Guang-you Xu

Paper Abstract

In this paper, a new approach is proposed for planning occlusion-free next views for a light stripe range finder. To expand its viewing scope, the range finder is mounted on the gripper of a manipulator so that it can take range images at any point in space from any direction. To avoid self-occlusion, we keep the light plane of each striping orthogonal to the tangent plane at a representative point (RP) on the object, and maintain a constant distance between the view point and the RP. Instead of blindly scanning to obtain dense range images of the object, we use the already-acquired data as knowledge to plan the next scanning view purposively. First, the range finder is guided directly above the object by processing an intensity image taken from above the worktable, and an optimal initial scanning direction is determined through test stripings. Second, four initial stripings along the scanning direction with a default displacement are carried out, and their images are segmented at abrupt and sharp turning points. The longest corresponding segments are fitted with a B-spline surface, and the middle point of its boundary along the scanning direction is taken as the RP of the initial patch. The next view point is determined by approximating the surface with a cylindrical surface within a small neighborhood of the RP and calculating the curvature and torsion of the spiral curve on the cylindrical surface passing through the RP. Third, the initial patch is extended by merging the stripe taken from the determined view point. This procedure is repeated until the object's contact with the worktable is reached. Finally, a complete description is obtained by connecting all the patches grown from each initial segment. As can be seen from the above, the proposed approach closely mimics the human perception process.
We use a simulation system to demonstrate the effectiveness of our approach and its advantages over existing ones.
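The cylindrical-approximation step of the abstract can be sketched in code. The following is a minimal illustration under stated assumptions, not the authors' implementation: we assume the local patch around the RP is approximated by a circular cylinder of radius r, that the scan path through the RP is modeled as a helix with pitch parameter c (whose curvature and torsion have the standard closed forms κ = r/(r² + c²) and τ = c/(r² + c²)), and that the next view point is placed at a fixed standoff along the surface normal at the RP, reflecting the constant view-point-to-RP distance described above. The function names and parameters are hypothetical.

```python
import numpy as np

def helix_curvature_torsion(r, c):
    """Curvature and torsion of the helix (r*cos t, r*sin t, c*t)
    lying on a cylinder of radius r.

    Standard closed forms: kappa = r / (r^2 + c^2),
                           tau   = c / (r^2 + c^2).
    """
    d = r * r + c * c
    return r / d, c / d

def next_view_point(rp, normal, standoff):
    """Hypothetical helper: place the sensor at a fixed standoff
    distance from the representative point (RP) along the outward
    surface normal, keeping the view-point-to-RP distance constant."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)          # normalize the surface normal
    return np.asarray(rp, dtype=float) + standoff * n

# Example: for r = 3, c = 4 the denominator is 25,
# so kappa = 0.12 and tau = 0.16.
kappa, tau = helix_curvature_torsion(3.0, 4.0)

# Place the next view 0.5 units along the normal from an RP at (0, 0, 1).
vp = next_view_point([0.0, 0.0, 1.0], [0.0, 0.0, 2.0], standoff=0.5)
```

Note that when c = 0 the helix degenerates to a circle (κ = 1/r, τ = 0), recovering the case of striping around a purely cylindrical patch with no advance along its axis.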

Paper Details

Date Published: 6 August 1993
PDF: 11 pages
Proc. SPIE 2056, Intelligent Robots and Computer Vision XII: Active Vision and 3D Methods, (6 August 1993); doi: 10.1117/12.150224
Author Affiliations:
Jianchao Zeng, Tsinghua Univ. (China)
Guang-you Xu, Tsinghua Univ. (China)


Published in SPIE Proceedings Vol. 2056:
Intelligent Robots and Computer Vision XII: Active Vision and 3D Methods
David P. Casasent, Editor(s)

© SPIE. Terms of Use