
Proceedings Paper

Temporally coherent 4D video segmentation for teleconferencing
Authors: Jana Ehmann; Onur G. Guleryuz

Paper Abstract

We develop an algorithm for 4-D (RGB+Depth) video segmentation targeting immersive teleconferencing applications on emerging mobile devices. Our algorithm extracts users from their environments and places them onto virtual backgrounds similar to green-screening. The virtual backgrounds increase immersion and interactivity, relieving the users of the system from distractions caused by disparate environments. Commodity depth sensors, while providing useful information for segmentation, result in noisy depth maps with a large number of missing depth values. By combining depth and RGB information, our work significantly improves the otherwise very coarse segmentation. Further imposing temporal coherence yields compositions where the foregrounds seamlessly blend with the virtual backgrounds with minimal flicker and other artifacts. We achieve said improvements by correcting the missing information in depth maps before fast RGB-based segmentation, which operates in conjunction with temporal coherence. Simulation results indicate the efficacy of the proposed system in video conferencing scenarios.
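
The abstract outlines a three-stage pipeline: filling missing depth values, refining the resulting coarse segmentation with RGB information, and enforcing temporal coherence across frames. The sketch below is a minimal Python (OpenCV/NumPy) illustration of such a pipeline, not the authors' implementation; the inpainting-based hole filling, the GrabCut refinement, the depth thresholds, and the exponential smoothing factor are assumptions chosen only to make the stages concrete.

import cv2
import numpy as np

def fill_depth_holes(depth_u8):
    # Inpaint missing (zero) depth values so the coarse mask has no gaps.
    holes = (depth_u8 == 0).astype(np.uint8)
    return cv2.inpaint(depth_u8, holes, 3, cv2.INPAINT_TELEA)

def coarse_depth_mask(depth_u8, near=30, far=140):
    # Coarse foreground guess: pixels inside an assumed user depth range.
    return ((depth_u8 > near) & (depth_u8 < far)).astype(np.uint8)

def refine_with_rgb(bgr, coarse):
    # Refine the coarse depth mask with RGB information; GrabCut stands in
    # here for the paper's fast RGB-based segmentation step.
    gc = np.where(coarse == 1, cv2.GC_PR_FGD, cv2.GC_PR_BGD).astype(np.uint8)
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(bgr, gc, None, bgd, fgd, 2, cv2.GC_INIT_WITH_MASK)
    return ((gc == cv2.GC_FGD) | (gc == cv2.GC_PR_FGD)).astype(np.float32)

def segment_frame(bgr, depth_u8, prev_prob, alpha=0.7):
    # One frame: depth fix-up -> RGB refinement -> temporal smoothing (EMA).
    depth = fill_depth_holes(depth_u8)
    prob = refine_with_rgb(bgr, coarse_depth_mask(depth))
    if prev_prob is not None:
        prob = alpha * prev_prob + (1.0 - alpha) * prob
    mask = (prob > 0.5).astype(np.uint8)
    return prob, mask  # carry prob to the next frame; composite with mask

With the binary mask in hand, a green-screen-style composition onto a virtual background can be formed per frame as np.where(mask[..., None] == 1, bgr, virtual_bg).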

Paper Details

Date Published: 26 September 2013
PDF: 11 pages
Proc. SPIE 8856, Applications of Digital Image Processing XXXVI, 88560L (26 September 2013); doi: 10.1117/12.2031740
Author Affiliations:
Jana Ehmann, FutureWei Technologies, Inc. (United States)
Onur G. Guleryuz, LG Electronics USA (United States)


Published in SPIE Proceedings Vol. 8856:
Applications of Digital Image Processing XXXVI
Andrew G. Tescher, Editor

© SPIE