
Proceedings Paper

Temporally consistent virtual camera generation from stereo image sequences
Author(s): Simon R. Fox; Julien Flack; Juliang Shao; Phil Harman

Paper Abstract

The recent emergence of auto-stereoscopic 3D viewing technologies has increased demand for the creation of 3D video content. A range of glasses-free multi-viewer screens have been developed that require as many as nine views to be generated for each frame of video. This presents difficulties in both view generation and transmission bandwidth. This paper examines the use of stereo video capture as a means of generating multiple scene views via disparity analysis. A machine learning approach is applied to learn relationships between disparity-generated depth information and source footage, and to generate depth information in a temporally smooth manner for both left- and right-eye image sequences. A view morphing approach to multiple-view rendering is described that provides an excellent 3D effect on a range of glasses-free displays while remaining robust to inaccurate stereo disparity calculations.
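The core idea of rendering intermediate views from a stereo pair plus disparity can be illustrated with a minimal sketch. This is not the paper's view morphing algorithm; it is a simplified forward-warping example assuming a per-pixel horizontal disparity map, where an interpolation factor `alpha` slides a virtual camera between the left (`alpha = 0`) and right (`alpha = 1`) viewpoints:

```python
import numpy as np

def morph_view(left, disparity, alpha):
    """Forward-warp the left image toward a virtual viewpoint.

    A hypothetical illustration, not the authors' renderer:
    each pixel shifts horizontally by a fraction of its disparity,
    and gaps left by occlusions stay at zero (a real renderer
    would fill them, e.g. by blending with the warped right view).
    """
    h, w = left.shape[:2]
    out = np.zeros_like(left)
    xs = np.arange(w)
    for y in range(h):
        # Target column for each source pixel at this interpolation level.
        tx = np.clip(np.round(xs - alpha * disparity[y]).astype(int), 0, w - 1)
        out[y, tx] = left[y]
    return out
```

Generating the nine views needed by a multi-viewer display then amounts to evaluating `morph_view` at nine evenly spaced `alpha` values; because each view is a blend between two real captures, errors in the disparity map tend to produce small horizontal offsets rather than gross artifacts, which is consistent with the robustness the abstract claims.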

Paper Details

Date Published: 21 May 2004
PDF: 10 pages
Proc. SPIE 5291, Stereoscopic Displays and Virtual Reality Systems XI, (21 May 2004); doi: 10.1117/12.527895
Author Affiliations:
Simon R. Fox, Dynamic Digital Depth Inc. (Australia)
Julien Flack, Dynamic Digital Depth Inc. (Australia)
Juliang Shao, Dynamic Digital Depth Inc. (Australia)
Phil Harman, Dynamic Digital Depth Inc. (Australia)


Published in SPIE Proceedings Vol. 5291:
Stereoscopic Displays and Virtual Reality Systems XI
Mark T. Bolas; Andrew J. Woods; John O. Merritt; Stephen A. Benton, Editor(s)

© SPIE.