
Proceedings Paper

Depth-based 2D-3D combined scene images for 3D multiview displays
Author(s): Vikas Ramachandra; Matthias Zwicker; Truong Q. Nguyen

Paper Abstract

Due to the limited display capacity of multiview/automultiscopic 3D displays (and other 3D display methods that recreate light fields), regions and objects at greater depths from the zero-disparity plane appear aliased. One solution, prefiltering, renders the scene visually very blurry. An alternative approach is proposed in this paper, in which regions at large depths are identified in each view. The 3D scene points corresponding to these regions are rendered as 2D only, while the rest of the scene retains parallax (and hence depth perception). The advantages are that both aliasing and blur are removed and the resolution of such regions is greatly improved. The combination of 2D and 3D visual cues still makes the scene look realistic, and the relative depth information between objects in the scene is preserved. Our method can prove particularly useful for 3D video conferencing, where the participants are shown as 3D objects while the background is displayed as a 2D object at high spatial resolution.
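The core idea of the abstract can be illustrated with a minimal sketch: per-pixel disparity is taken as proportional to depth from the zero-disparity plane, and pixels beyond a depth threshold are forced to zero disparity so they appear identical (i.e., 2D) in every synthesized view, while nearer pixels keep their parallax. The function name, the linear depth-to-disparity mapping, the `gain` and `depth_threshold` parameters, and the simple forward warp (no hole filling or occlusion handling) are all assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def render_views(image, depth, n_views, gain, depth_threshold):
    """Hypothetical sketch of depth-based 2D-3D combination.

    Pixels whose depth magnitude exceeds depth_threshold are assigned
    zero disparity, so they land at the same position in every view
    (rendered as 2D); the remaining pixels are shifted per view to
    retain parallax. Forward warp only; holes are left as zeros.
    """
    h, w = image.shape
    disparity = gain * depth                            # assumed linear mapping
    disparity[np.abs(depth) > depth_threshold] = 0.0    # flatten deep regions to 2D
    views = []
    for v in range(n_views):
        cam = v - (n_views - 1) / 2.0                   # camera offset from center view
        view = np.zeros_like(image)
        for y in range(h):
            for x in range(w):
                xs = int(round(x + cam * disparity[y, x]))
                if 0 <= xs < w:
                    view[y, xs] = image[y, x]           # naive forward warp
        views.append(view)
    return views
```

With this sketch, a pixel in a "deep" region occupies the same column in all views (no inter-view aliasing, full spatial resolution), whereas a shallow pixel shifts between views and so still produces a depth percept.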

Paper Details

Date Published: 19 January 2009
PDF: 8 pages
Proc. SPIE 7257, Visual Communications and Image Processing 2009, 72570I (19 January 2009); doi: 10.1117/12.805449
Author Affiliations:
Vikas Ramachandra, Univ. of California, San Diego (United States)
Matthias Zwicker, Univ. of California, San Diego (United States)
Truong Q. Nguyen, Univ. of California, San Diego (United States)

Published in SPIE Proceedings Vol. 7257:
Visual Communications and Image Processing 2009
Majid Rabbani; Robert L. Stevenson, Editor(s)

© SPIE.