
Proceedings Paper

A point cloud based pipeline for depth reconstruction from autostereoscopic sets
Author(s): Cédric Niquin; Stéphanie Prévost; Yannick Remion

Paper Abstract

We present a three-step pipeline that constructs a 3D mesh of a scene from a set of N images intended for viewing on autostereoscopic displays. The first step matches pixels across the views to create a point cloud, using a new graph-cut-based algorithm. It exploits the data redundancy of the N images both to enforce the geometric consistency of the scene and to reduce the complexity of the graph, which speeds up the computation. It detects occlusions accurately, and its results can be used directly in applications such as view synthesis. The second step refines the point cloud by slightly shifting the points along the Z-axis. It relies on a new cost that combines the occlusion positions and the light variations deduced from the matching, and it selects the Z values with a dynamic programming algorithm. The resulting point cloud is fine enough for applications such as augmented reality. From either of the two point clouds, the last step creates a colored mesh, a convenient data structure for graphics APIs. It also generates N depth maps, allowing the results of our method to be compared with those of other methods.
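
The abstract does not give implementation details. As a rough illustration of the last step's depth-map output, the sketch below projects a colored point cloud into one view with a simple z-buffer. The pinhole parameters (K, R, t) and all names are assumptions made for illustration, not the authors' code or cost formulation.

import numpy as np

def point_cloud_to_depth_map(points, colors, K, R, t, width, height):
    """Project a colored 3D point cloud into one camera view and keep,
    per pixel, the nearest point (z-buffer). Returns a depth map and a
    color image. Camera model and parameter names are illustrative."""
    depth = np.full((height, width), np.inf)
    image = np.zeros((height, width, 3), dtype=np.uint8)

    # Transform points into the camera frame, then project with intrinsics K.
    cam = (R @ points.T + t.reshape(3, 1)).T          # shape (N, 3)
    z = cam[:, 2]
    valid = z > 0                                     # keep points in front of the camera
    uvw = (K @ cam[valid].T).T
    u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
    v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
    z = z[valid]
    col = colors[valid]

    inside = (u >= 0) & (u < width) & (v >= 0) & (v < height)
    for ui, vi, zi, ci in zip(u[inside], v[inside], z[inside], col[inside]):
        if zi < depth[vi, ui]:                        # z-buffer test: keep the closest point
            depth[vi, ui] = zi
            image[vi, ui] = ci
    return depth, image

Repeating this projection for each of the N calibrated views yields the N depth maps mentioned in the abstract, which can then be compared pixel-wise against the output of other depth-estimation methods.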

Paper Details

Date Published: 25 February 2010
PDF: 12 pages
Proc. SPIE 7524, Stereoscopic Displays and Applications XXI, 75241O (25 February 2010); doi: 10.1117/12.838844
Author Affiliations:
Cédric Niquin, TéléRelief (France)
Univ. de Reims Champagne-Ardenne (France)
Stéphanie Prévost, Univ. de Reims Champagne-Ardenne (France)
Yannick Remion, TéléRelief (France)
Univ. de Reims Champagne-Ardenne (France)


Published in SPIE Proceedings Vol. 7524:
Stereoscopic Displays and Applications XXI
Andrew J. Woods; Nicolas S. Holliman; Neil A. Dodgson, Editor(s)
