
Proceedings Paper

Automatic 2D-to-3D image conversion using 3D examples from the internet
Author(s): J. Konrad; G. Brown; M. Wang; P. Ishwar; C. Wu; D. Mukherjee

Paper Abstract

The availability of 3D hardware has so far outpaced the production of 3D content. Although many methods have been proposed to convert 2D images to 3D stereopairs, the most successful ones involve human operators and are therefore time-consuming and costly, while fully-automatic methods have not yet achieved the same level of quality. This subpar performance stems from the fact that automatic methods usually rely on assumptions about the captured 3D scene that are often violated in practice.

In this paper, we explore a radically different approach inspired by our work on saliency detection in images. Instead of relying on a deterministic scene model for the input 2D image, we propose to "learn" the model from a large dictionary of stereopairs, such as YouTube 3D. Our new approach is built upon a key observation and an assumption. The key observation is that among the millions of stereopairs available on-line, there likely exist many whose 3D content matches that of the 2D input (query). The assumption is that two stereopairs whose left images are photometrically similar are likely to have similar disparity fields. Our approach first finds a number of on-line stereopairs whose left image is a close photometric match to the 2D query and then extracts depth information from these stereopairs. Since the disparities of the selected stereopairs differ due to differences in underlying image content, level of noise, distortions, etc., we combine them using the median. We apply the resulting median disparity field to the 2D query to obtain the corresponding right image, handling occlusions and newly-exposed areas in the usual way.

We have applied our method in two scenarios. First, we used YouTube 3D videos in search of the most similar frames. Then, we repeated the experiments on a small, but carefully-selected, dictionary of stereopairs closely matching the query; this, to a degree, emulates the results one would expect from an extremely large 3D repository. While far from perfect, the presented results demonstrate that on-line repositories of 3D content can be used for effective 2D-to-3D image conversion. With the continuously increasing amount of 3D data on-line and the rapidly growing computing power in the cloud, the proposed framework appears to be a promising alternative to operator-assisted 2D-to-3D conversion.
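The two rendering steps the abstract describes (median fusion of the retrieved disparity fields, then warping the query into a right view while handling occlusions and newly-exposed areas) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names are hypothetical, photometric retrieval of matching stereopairs is assumed to have happened already, and the hole-filling is the simplest nearest-neighbor variant of the "usual" occlusion handling the abstract refers to.

```python
import numpy as np

def fuse_disparities(disparity_maps):
    """Combine per-pixel disparity estimates from several retrieved
    stereopairs by taking the pixelwise median (the fusion step)."""
    return np.median(np.stack(disparity_maps, axis=0), axis=0)

def render_right_view(left, disparity):
    """Warp the 2D query (treated as the left view) into a right view.

    Each pixel shifts left by its disparity; a disparity z-buffer makes
    nearer pixels (larger disparity) win at collisions (occlusions), and
    newly-exposed holes are filled from the nearest rendered pixel on
    the same row -- a simple stand-in for more careful inpainting.
    """
    h, w = left.shape[:2]
    right = np.zeros_like(left)
    zbuf = np.full((h, w), -np.inf)  # tracks the winning disparity per pixel
    for y in range(h):
        for x in range(w):
            d = disparity[y, x]
            xt = int(round(x - d))  # target column in the right view
            if 0 <= xt < w and d > zbuf[y, xt]:
                right[y, xt] = left[y, x]
                zbuf[y, xt] = d
        # fill holes (never-written pixels) from their left neighbor
        for x in range(1, w):
            if np.isinf(zbuf[y, x]):
                right[y, x] = right[y, x - 1]
                zbuf[y, x] = zbuf[y, x - 1]
    return right
```

With a zero disparity field the right view equals the query, and the median makes the fused field robust to a minority of poorly matched stereopairs, which is the rationale the abstract gives for choosing it over the mean.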

Paper Details

Date Published: 22 February 2012
PDF: 12 pages
Proc. SPIE 8288, Stereoscopic Displays and Applications XXIII, 82880F (22 February 2012); doi: 10.1117/12.910601
Author Affiliations:
J. Konrad, Boston Univ. (United States)
G. Brown, Boston Univ. (United States)
M. Wang, Boston Univ. (United States)
P. Ishwar, Boston Univ. (United States)
C. Wu, Google Inc. (United States)
D. Mukherjee, Google Inc. (United States)


Published in SPIE Proceedings Vol. 8288:
Stereoscopic Displays and Applications XXIII
Andrew J. Woods; Nicolas S. Holliman; Gregg E. Favalora, Editor(s)

© SPIE.