
Proceedings Paper

Real-time 3D video compression for tele-immersive environments
Author(s): Zhenyu Yang; Yi Cui; Zahid Anwar; Robert Bocchino; Nadir Kiyanclar; Klara Nahrstedt; Roy H. Campbell; William Yurcik

Paper Abstract

Tele-immersive systems can improve productivity and aid communication by allowing distributed parties to exchange information via a shared immersive experience. The TEEVE research project at the University of Illinois at Urbana-Champaign and the University of California at Berkeley seeks to foster the development and use of tele-immersive environments by a holistic integration of existing components that capture, transmit, and render three-dimensional (3D) scenes in real time to convey a sense of immersive space. However, the transmission of 3D video poses significant challenges. First, it is bandwidth-intensive, as it requires the transmission of multiple large-volume 3D video streams. Second, existing schemes for 2D color video compression such as MPEG, JPEG, and H.263 cannot be applied directly because the 3D video data contains depth as well as color information. Our goal is to explore a different region of the 3D compression design space, trading off factors including complexity, compression ratio, quality, and real-time performance. To investigate these trade-offs, we present and evaluate two simple 3D compression schemes. For the first scheme, we use color reduction to compress the color information, which we then compress along with the depth information using zlib. For the second scheme, we use motion JPEG to compress the color information and run-length encoding followed by Huffman coding to compress the depth information. We apply both schemes to 3D videos captured from a real tele-immersive environment. Our experimental results show that: (1) the compressed data preserves enough information to communicate the 3D images effectively (minimum PSNR > 40 dB) and (2) even without inter-frame motion estimation, very high compression ratios (avg. > 15) are achievable at speeds sufficient to allow real-time communication (avg. ≈ 13 ms per 3D video frame).
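The first scheme (color reduction followed by zlib compression of color plus depth) can be sketched roughly as follows. This is an illustrative approximation only, not the paper's implementation: the frame resolution, the 4-bit-per-channel quantization, the zlib level, and the random test frame are all assumptions made here for demonstration.

```python
import zlib
import numpy as np

def compress_frame(color, depth, bits_per_channel=4):
    """Reduce color precision, then zlib-compress color and depth together.

    color: (H, W, 3) uint8 RGB image; depth: (H, W) uint16 depth map.
    bits_per_channel is an illustrative palette-reduction parameter,
    not a value taken from the paper.
    """
    shift = 8 - bits_per_channel
    reduced = (color >> shift) << shift        # drop low-order color bits
    payload = reduced.tobytes() + depth.tobytes()
    return zlib.compress(payload, level=6)

# Illustrative frame; random data compresses far worse than a real scene,
# so the paper's reported ratios (avg. > 15) will not be reproduced here.
rng = np.random.default_rng(0)
color = rng.integers(0, 256, size=(240, 320, 3), dtype=np.uint8)
depth = rng.integers(0, 4096, size=(240, 320), dtype=np.uint16)

blob = compress_frame(color, depth)
ratio = (color.nbytes + depth.nbytes) / len(blob)
```

Dropping low-order color bits is lossy, but it makes the byte stream far more repetitive, which is what lets the lossless zlib stage achieve a useful ratio; the depth map is passed through unreduced.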

Paper Details

Date Published: 16 January 2006
PDF: 12 pages
Proc. SPIE 6071, Multimedia Computing and Networking 2006, 607102 (16 January 2006); doi: 10.1117/12.642513
Zhenyu Yang, Univ. of Illinois at Urbana-Champaign (United States)
Yi Cui, Univ. of Illinois at Urbana-Champaign (United States)
Zahid Anwar, Univ. of Illinois at Urbana-Champaign (United States)
Robert Bocchino, Univ. of Illinois at Urbana-Champaign (United States)
Nadir Kiyanclar, Univ. of Illinois at Urbana-Champaign (United States)
Klara Nahrstedt, Univ. of Illinois at Urbana-Champaign (United States)
Roy H. Campbell, Univ. of Illinois at Urbana-Champaign (United States)
William Yurcik, National Ctr. for Supercomputing Applications (United States)

Published in SPIE Proceedings Vol. 6071:
Multimedia Computing and Networking 2006
Surendar Chandra; Carsten Griwodz, Editors

© SPIE.