
Optical Engineering

Efficient multiview depth video coding using depth synthesis prediction
Author(s): Cheon Lee; Yo-Sung Ho; Byeongho Choi

Paper Abstract

The view synthesis prediction (VSP) method exploits inter-view correlations in multiview video coding by generating an additional reference frame from neighboring views. This paper describes a multiview depth video coding scheme that incorporates depth view synthesis and additional prediction modes. In the proposed scheme, we use a reconstructed neighboring depth frame to generate an additional reference depth image for the current viewpoint by means of the depth image-based rendering (DIBR) technique. To generate a high-quality reference depth image, we apply depth pre-processing, depth image warping, and one of two hole-filling methods depending on the number of available reference views. After synthesizing the additional depth image, we encode the depth video using the proposed prediction modes, named VSP modes, which refer to the synthesized depth image. In particular, the VSP_SKIP mode refers to the co-located block of the synthesized frame without coding motion vectors or residual data, which yields most of the coding gain. Experimental results demonstrate that the proposed depth view synthesis method provides high-quality depth images for the current view and that the proposed VSP modes achieve high coding gains, especially on anchor frames.
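To illustrate the synthesis step described in the abstract, the following is a minimal sketch of DIBR-based depth warping with single-reference hole filling, not the authors' implementation. It assumes a rectified, horizontally arranged camera setup, the MPEG 8-bit depth quantization with near/far clipping planes, and hypothetical names and parameters (synthesize_depth_reference, f, baseline, z_near, z_far). The paper's scheme additionally uses depth pre-processing and a second hole-filling method when two reference views are available; neither is shown here.

```python
import numpy as np

def synthesize_depth_reference(ref_depth, f, baseline, z_near, z_far):
    """Warp a reconstructed depth frame from one neighboring view to the
    current viewpoint and fill disoccluded holes, producing an extra
    reference frame for VSP-style prediction (sketch only).

    ref_depth      : HxW uint8 depth map (0..255, MPEG-style quantization)
    f, baseline    : focal length (pixels) and camera spacing
    z_near, z_far  : clipping planes used to de-quantize depth values
    """
    h, w = ref_depth.shape
    # De-quantize 8-bit depth to metric Z (255 -> z_near, 0 -> z_far).
    z = 1.0 / (ref_depth / 255.0 * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far)
    # Convert depth to integer disparity for a parallel camera setup.
    # The sign of the shift depends on whether the reference view lies
    # to the left or right of the current view; '+' is assumed here.
    disparity = np.round(f * baseline / z).astype(np.int32)

    warped = np.full((h, w), -1, dtype=np.int32)  # -1 marks holes
    # Forward-warp pixel by pixel; on conflicts keep the foreground,
    # i.e. the larger depth value under this quantization.
    for y in range(h):
        for x in range(w):
            xt = x + disparity[y, x]
            if 0 <= xt < w and ref_depth[y, x] > warped[y, xt]:
                warped[y, xt] = ref_depth[y, x]

    # Single-reference hole filling: fill each hole with the more
    # background-like (smaller) of the nearest valid values on its row,
    # so foreground depth does not leak into disocclusions.
    out = warped.copy()
    for y in range(h):
        row = out[y]
        for x in np.where(row < 0)[0]:
            left = row[:x][row[:x] >= 0]
            right = row[x + 1:][row[x + 1:] >= 0]
            candidates = []
            if left.size:
                candidates.append(int(left[-1]))
            if right.size:
                candidates.append(int(right[0]))
            row[x] = min(candidates) if candidates else 0
    return out.astype(np.uint8)
```

In an encoder, the synthesized frame would be placed in the reference list so that the VSP modes (including VSP_SKIP, which copies the co-located block without motion vectors or residual) can refer to it.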

Paper Details

Date Published: 1 July 2011
PDF: 15 pages
Opt. Eng. 50(7), 077004 (2011). doi: 10.1117/1.3600575
Published in: Optical Engineering Volume 50, Issue 7
Author Affiliations
Cheon Lee, Gwangju Institute of Science and Technology (Korea, Republic of)
Yo-Sung Ho, Gwangju Institute of Science and Technology (Korea, Republic of)
Byeongho Choi, Korea Electronics Technology Institute (Korea, Republic of)

