
Proceedings Paper

Efficient streaming of stereoscopic depth-based 3D videos
Author(s): Dogancan Temel; Mohammed Aabed; Mashhour Solh; Ghassan AlRegib

Paper Abstract

In this paper, we propose a method to extract depth from motion, texture, and intensity. We first analyze the depth map to extract a set of depth cues. Then, based on these depth cues, we process the color reference video, using its texture, motion, luminance, and chrominance content, to extract the depth map. Each channel in the YCbCr color space is processed separately. We tested this approach on video sequences with different monocular properties. Our simulation results show that the extracted depth maps generate a 3D video with quality close to the video rendered using the ground-truth depth map. We report objective results using 3VQM and a subjective analysis via comparison of rendered images. Furthermore, we analyze the bitrate savings that result from eliminating the need for two video codecs, one for the reference color video and one for the depth map. In this case, only the depth cues are sent as side information alongside the color video.
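The abstract describes processing each channel of the YCbCr color space separately to derive depth cues. A minimal sketch of that channel separation is shown below; the BT.601 RGB-to-YCbCr conversion is standard, but the `per_channel_depth_cues` function is a hypothetical stand-in (a simple local-contrast statistic) and not the authors' actual cue-extraction method.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert an RGB image (H x W x 3, floats in [0, 1]) to YCbCr
    using the ITU-R BT.601 coefficients, with chroma offset to [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 0.5
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 0.5
    return np.stack([y, cb, cr], axis=-1)

def per_channel_depth_cues(ycbcr):
    """Toy per-channel 'depth cue': the standard deviation of each
    channel, standing in for the paper's texture/luminance/chrominance
    analysis (hypothetical, for illustration only)."""
    return {name: float(ycbcr[..., i].std())
            for i, name in enumerate(["Y", "Cb", "Cr"])}

# Example: extract per-channel statistics from a random test frame.
frame = np.random.default_rng(0).random((4, 4, 3))
cues = per_channel_depth_cues(rgb_to_ycbcr(frame))
```

Because the luminance (Y) and chrominance (Cb, Cr) channels are decorrelated by the transform, each can be analyzed with a cue extractor suited to its content, which is the motivation for the per-channel processing described above.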

Paper Details

Date Published: 21 February 2013
PDF: 10 pages
Proc. SPIE 8666, Visual Information Processing and Communication IV, 86660I (21 February 2013); doi: 10.1117/12.2005161
Author Affiliations:
Dogancan Temel, Georgia Institute of Technology (United States)
Mohammed Aabed, Georgia Institute of Technology (United States)
Mashhour Solh, Texas Instruments Inc. (United States)
Ghassan AlRegib, Georgia Institute of Technology (United States)

Published in SPIE Proceedings Vol. 8666:
Visual Information Processing and Communication IV
Amir Said; Onur G. Guleryuz; Robert L. Stevenson, Editor(s)

© SPIE