3D ‘Monster’ Work

Research on 3D displays and applications is cutting edge.
29 June 2012

Cutting-edge research on 3D displays and applications, including advances in video games and YouTube3D and the development of a camera that focuses after the picture is taken, forms a “monster” proceedings volume for the 2012 Stereoscopic Displays and Applications conference (spie.org/SDA2012) at the IS&T/SPIE Electronic Imaging symposium.

Conference co-chair Andrew Woods says the 3D conference broke several records this year, with 92 manuscripts accepted for publication in the SPIE Digital Library and nearly 40 videos of presentations from the January meeting in Burlingame, CA (USA), posted online.

A “monster volume” of the proceedings “has a wealth of great 3D information,” says Woods, a research engineer at Curtin University of Technology (Australia) and an SPIE Fellow.

The annual Stereoscopic Displays and Applications conference focuses on recent advances in stereoscopic systems, including 3D display hardware, computer software, image acquisition, electro-holography, standards, and algorithms.

It also included a demonstration session on 3D gaming; two joint sessions with the Human Vision and Electronic Imaging conference on quantifying stereoscopic perception and comfort; and two other sessions at which prizes were awarded.

At the popular 3D Theater session, where attendees viewed clips and full videos on a 5.6-meter (18-foot) diagonal stereoscopic projection screen, stereographer Eric Kurland accepted this year’s “Best of Show” award for “All Is Not Lost,” by the musical group OK Go and the Pilobolus dance company.


  “All Is Not Lost” 3D video

Hirotsugu Yamamoto, Hiroki Bando, and Shiro Suyama of the University of Tokushima, authors of “Design of cross-mirror array to form floating 3D LED signs,” won the top prize at the 3D demonstration session, open to all attendees of Electronic Imaging.

“The energy in the demonstration session was astounding and many demonstrators were still going strong after two and a half hours,” Woods says.

Selected conference presentations were video-recorded and are available at www.stereoscopic.org/2012. “Making these presentations freely available to the 3D community is a good way to extend the reach of the conference,” Woods says.

After-image focusing

Among the technical research presentations captured on video is one on the development of an image-capturing technique that enables computational refocus and depth estimation — after the image is acquired.


  The dice at left are shown in focus. At center, moving the camera focal plane in front of the foreground blurs the image. The same picture, computationally refocused using the Cornell technique, is shown at right.

Cornell University graduate student Albert Wang explains how a light-field camera that he developed with colleagues Sheila S. Hemami and Alyosha Molnar captures both the intensity and the incident angle of incoming light, allowing synthetic refocus and depth-map computation (see figure, above). The innovative design could lead to the next generation of 3D cameras as well as the ability to focus photos after they are taken.

Their paper, “Angle-sensitive pixels: a new paradigm for low-power, low-cost 2D and 3D sensing,” presents a mathematical framework for analyzing the behavior of a newly developed class of sensor pixels.

The camera is a standard complementary metal-oxide semiconductor (CMOS) image sensor composed of angle-sensitive pixels and a conventional camera lens. “Because these pixels acquire a richer description of incident light than conventional intensity-sensitive pixels, our sensor requires only a simple camera objective to recover light-field information from a visual scene,” the authors say.
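The refocusing step the authors describe can be illustrated with the standard shift-and-add light-field algorithm: each angular view of the scene is translated in proportion to its offset within the aperture, then all views are averaged. This is a generic sketch, not Cornell's implementation; the `light_field` array layout and the `alpha` refocus parameter are illustrative assumptions.

```python
import numpy as np

def refocus(light_field, alpha):
    """Synthetically refocus a 4D light field L[u, v, y, x] by
    shift-and-add. Each angular sample (u, v) is shifted in
    proportion to its offset from the aperture center, then all
    samples are averaged; alpha selects the virtual focal depth
    (alpha = 0 reproduces the as-captured focus)."""
    U, V, H, W = light_field.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Angular offset of this view from the aperture center.
            du = u - (U - 1) / 2
            dv = v - (V - 1) / 2
            # Integer-pixel shift for simplicity; a real
            # implementation would interpolate sub-pixel shifts.
            shifted = np.roll(light_field[u, v],
                              (int(round(alpha * du)), int(round(alpha * dv))),
                              axis=(0, 1))
            out += shifted
    return out / (U * V)

# A constant light field refocuses to the same constant image.
lf = np.ones((3, 3, 8, 8))
refocused = refocus(lf, 1.0)
```

Depth can then be estimated from the same data by finding, per pixel, the `alpha` that maximizes local sharpness, which is one way a single exposure yields both a refocusable image and a depth map.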

Other presentations captured on video include SPIE Fellow Lenny Lipton, founder of StereoGraphics International, discussing a polarizing-aperture stereoscopic cinema camera and Didier Doyen of Technicolor discussing 3D-cinema-to-3DTV-content adaptation. There were also talks on focus mismatch detection; diagnosing perceptual distortion; the Cornsweet illusion; and perceived depth of multi-parallel, overlapping, transparent, stereoscopic surfaces.

Keynote and plenary presentation videos include:

• Panasonic Corp.’s stereoscopic 3D technologies and business strategy, by Masayuki Kozuka, general manager of media and content alliance at Panasonic

• Development and future of YouTube3D, by Pete Bradshaw and Debargha Mukherjee of Google

• Object recognition, by David Forsyth, University of Illinois, Urbana-Champaign

“Anyone working in 3D will find inspiration and new insights from these presentation videos,” Woods says.

PlayStation goes 3D

A highlight of the presentation videos is “Case study: the introduction of stereoscopic games on the Sony PlayStation 3,” in which Ian Bickerstaff from Sony Computer Entertainment (UK) outlines the steps leading up to Sony introducing 3D games to PlayStation 3 in 2010.

Woods and conference co-chairs Nicolas Holliman of Durham University (UK) and Gregg E. Favalora of Optics for Hire (USA) chose Bickerstaff’s presentation for the prize for best use of the stereoscopic projection tools during the technical presentations.

Bickerstaff presented a range of techniques that Sony developed to compensate for the dynamic, unpredictable environments in games and to render twice as many images as the 2D version of a game without excessively compromising frame rate or image quality. A firmware update for the PlayStation 3 console “provides the potential to increase enormously the popularity of stereoscopic 3D in the home,” Bickerstaff says. “New technologies such as viewer head tracking and head-mounted displays should increase the appeal of 3D in the home still further.”
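The talk's specific techniques are not detailed here, but as general background, stereoscopic game renderers typically generate the two views from horizontally offset cameras with asymmetric ("off-axis") view frusta that converge on a chosen screen plane, rather than toeing the cameras inward, which would introduce vertical parallax. A minimal sketch; all parameter names and values are illustrative, not Sony's:

```python
import numpy as np

def stereo_frustums(eye_sep, near, far, fov_deg, aspect, convergence):
    """Left/right asymmetric view frusta for off-axis stereo rendering.
    The cameras sit +/- eye_sep/2 from center, and each frustum is
    skewed horizontally so both views converge on a plane at distance
    `convergence` (objects on that plane appear at screen depth)."""
    top = near * np.tan(np.radians(fov_deg) / 2)
    bottom = -top
    half_width = top * aspect
    # Horizontal frustum skew at the near plane for a half-eye offset,
    # scaled back from the convergence plane by similar triangles.
    shift = (eye_sep / 2) * near / convergence
    frusta = {}
    for name, sign in (("left", +1), ("right", -1)):
        frusta[name] = (-half_width + sign * shift,   # left edge
                        half_width + sign * shift,    # right edge
                        bottom, top, near, far)
    return frusta

frusta = stereo_frustums(eye_sep=0.06, near=0.1, far=100.0,
                         fov_deg=60.0, aspect=16 / 9, convergence=2.0)
```

Each frustum drives a full render pass, which is why naive stereo doubles the cost; engines typically claw some of that back by sharing work such as scene culling and shadow-map generation between the two eyes.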

For more highlights from the Stereoscopic Displays and Applications conference, go to the SPIE Digital Library.


VR device refined for archaeologists

A team at the University of California, San Diego (UCSD) is working to refine virtual reality devices so that archaeologists can have a new, portable tool to generate detailed and reliable 3D models of spaces, people, and objects at excavation sites.

Daniel Tenedorio et al. presented “Capturing geometry in real time using a tracked Microsoft Kinect” at the Engineering Reality of Virtual Reality conference in January at IS&T/SPIE Electronic Imaging.

The team demonstrated how a Microsoft Kinect camera from the Xbox video game console, combined with a new geometry-scanning system, can acquire 3D models at human scale.

Researchers used a tracked Kinect to scan a stuffed bear on a box (left), producing a textured triangle mesh (right). The system developed at UCSD previews the model during the scanning process (center) to allow the user to find and fill holes in real time. 



Join the SPIE group and/or the Stereoscopic Displays and Applications group on LinkedIn and network with others developing 3D systems.

 


Call for Papers

Abstracts are due 23 July for IS&T/SPIE Electronic Imaging 2013, to be held 4-7 February. Go to spie.org/ei to submit your abstract and find more information about new conference topics, courses, and special events.

 


Have a question or comment about this article? Write to us at spieprofessional@spie.org.

To receive a print copy of SPIE Professional, the SPIE member magazine, become an SPIE member.


Recent News
PREMIUM CONTENT
Sign in to read the full article
Create a free SPIE account to get access to
premium articles and original research