
Proceedings Paper

A content-based method for perceptually driven joint color/depth compression
Author(s): E. Bosc; L. Morin; M. Pressigout

Paper Abstract

Multi-view Video plus Depth (MVD) data refer to a set of conventional color video sequences and an associated set of depth video sequences, all acquired at slightly different viewpoints. This huge amount of data requires a reliable compression method, yet no standardized compression method exists for MVD sequences. The H.264/MVC compression method, standardized for the Multi-View Video (MVV) representation, has been the subject of many adaptations to MVD. However, it has been shown that MVC is not well suited to encoding multi-view depth data. We propose a novel method for compressing MVD data whose main purpose is to preserve joint color/depth consistency. The originality of the proposed method lies in the use of the decoded color data as a prior for the associated depth compression, which is meant to ensure consistency between both types of data after decoding. Our strategy is motivated by previous studies of artifacts occurring in synthesized views: the most annoying distortions are located around strong depth discontinuities, and these distortions are due to misalignment of depth and color edges in decoded images. The method is therefore designed to preserve edges and to ensure consistent localization of color edges and depth edges. To ensure compatibility, color sequences are encoded with H.264. Depth map compression is based on a 2D still-image codec, namely LAR (locally adapted resolution), which relies on a quad-tree representation of the image. The quad-tree representation contributes to the preservation of edges in both color and depth data. The adopted strategy is meant to be more perceptually driven than state-of-the-art methods. The proposed approach is compared to H.264 encoding of depth images. Objective metric scores are similar for H.264 and for the proposed method, while the visual quality of synthesized views is improved with the proposed approach.
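To illustrate the quad-tree idea underlying LAR, the sketch below recursively splits a depth map into square blocks until each block is nearly flat. Smooth regions end up as large blocks, while small blocks concentrate around depth discontinuities, which is how edges are preserved. This is a simplified, hypothetical illustration of a quad-tree decomposition, not the actual LAR codec; the function name and threshold are assumptions for the example.

```python
# Simplified quad-tree decomposition of a depth map (illustrative only,
# not the LAR codec): a block is split into four quadrants whenever its
# depth range exceeds a flatness threshold.

def quadtree_blocks(img, x, y, size, threshold, min_size=1):
    """Return the (x, y, size) leaf blocks covering the square region."""
    block = [row[x:x + size] for row in img[y:y + size]]
    values = [v for row in block for v in row]
    if size <= min_size or max(values) - min(values) <= threshold:
        return [(x, y, size)]          # block is flat enough: keep as leaf
    half = size // 2
    blocks = []
    for dy in (0, half):               # recurse into the four quadrants
        for dx in (0, half):
            blocks += quadtree_blocks(img, x + dx, y + dy,
                                      half, threshold, min_size)
    return blocks

# Tiny 4x4 depth map with a vertical discontinuity before the last column.
depth = [
    [10, 10, 10, 90],
    [10, 10, 10, 90],
    [10, 10, 10, 90],
    [10, 10, 10, 90],
]
blocks = quadtree_blocks(depth, 0, 0, 4, threshold=5)
```

On this toy input, the two left quadrants stay as 2x2 leaves while the two quadrants containing the discontinuity are split down to 1x1 blocks, so the finest subdivision lands exactly on the depth edge.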

Paper Details

Date Published: 27 February 2012
PDF: 12 pages
Proc. SPIE 8288, Stereoscopic Displays and Applications XXIII, 82882C (27 February 2012); doi: 10.1117/12.906642
Author Affiliations:
E. Bosc, IETR, CNRS, Institut National des Sciences Appliquées de Rennes (France)
L. Morin, IETR, CNRS, Institut National des Sciences Appliquées de Rennes (France)
M. Pressigout, IETR, CNRS, Institut National des Sciences Appliquées de Rennes (France)

Published in SPIE Proceedings Vol. 8288:
Stereoscopic Displays and Applications XXIII
Andrew J. Woods; Nicolas S. Holliman; Gregg E. Favalora, Editor(s)

© SPIE.