
Journal of Electronic Imaging

Dense point-cloud representation of a scene using monocular vision
Author(s): Yakov Diskin; Vijayan Asari

Paper Abstract

We present a three-dimensional (3-D) reconstruction system designed to support various autonomous navigation applications. The system focuses on 3-D reconstruction of a scene using only a single moving camera. Video frames captured at different points in time allow us to determine depths in the scene, so the system can construct a point-cloud model of its unknown surroundings. We present the step-by-step methodology and analysis used in developing the 3-D reconstruction technique. The reconstruction framework first generates a primitive point cloud computed from feature matching and depth triangulation. To populate the reconstruction, we then use optical flow features to create an extremely dense representation model. As a third algorithmic modification, we add a preprocessing step of nonlinear single-image super-resolution; because the depth accuracy of the point cloud relies on precise disparity measurement, this addition significantly increases accuracy. Our final contribution is a postprocessing step that filters out noise points and mismatched features, yielding the complete dense point-cloud representation (DPR) technique. We measure the success of DPR by evaluating visual appeal, density, accuracy, and computational expense, and compare it with two state-of-the-art techniques.
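The primitive point cloud described in the abstract is built from feature matching followed by depth triangulation across frames of the moving camera. As a minimal illustration of that triangulation step only (a generic linear DLT solve with NumPy, not the authors' implementation; the camera matrices and scene point below are made-up values), two pixel observations of the same point in two views with known projection matrices can be lifted to a 3-D coordinate:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one scene point seen in two views.

    P1, P2 : 3x4 camera projection matrices (K [R | t]).
    x1, x2 : (u, v) pixel coordinates of the matched feature in each frame.
    Returns the Euclidean 3-D point minimizing the algebraic error.
    """
    # Each view contributes two homogeneous linear equations in X.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Solution is the right singular vector of A with smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # homogeneous -> Euclidean

# Synthetic example: camera translates 0.5 units along x between frames.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

X_true = np.array([0.2, -0.1, 4.0])  # ground-truth scene point
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]

X_est = triangulate(P1, P2, x1, x2)
print(X_est)  # recovers [0.2, -0.1, 4.0] up to numerical precision
```

In the paper's pipeline this solve would run once per matched feature pair (and, after the optical-flow modification, per dense flow correspondence), which is why disparity precision directly controls depth accuracy.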

Paper Details

Date Published: 6 March 2015
PDF: 25 pages
J. Electron. Imag. 24(2) 023003 doi: 10.1117/1.JEI.24.2.023003
Published in: Journal of Electronic Imaging Volume 24, Issue 2
Author Affiliations:
Yakov Diskin, Univ. of Dayton (United States)
Vijayan Asari, Univ. of Dayton (United States)

© SPIE. Terms of Use