
Proceedings Paper

Innovative hole-filling method for depth-image-based rendering (DIBR) based on context learning

Paper Abstract

A new convolutional neural network is proposed for hole filling in the synthesized virtual view generated by depth-image-based rendering (DIBR). A context encoder in the network is trained to predict the hole region from the rendered virtual view, while an adversarial discriminator reduces errors and produces sharper, more precise results. A texture network at the end of the framework extracts the image style and yields a natural output that is closer to reality. Experimental results demonstrate, both subjectively and objectively, that the proposed method obtains better 3D video quality than previous methods; the average peak signal-to-noise ratio (PSNR) increases by 0.36 dB.
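The abstract names three components: a context encoder that predicts the missing hole region, an adversarial discriminator, and a texture (style) network. The PyTorch-style snippet below is only a minimal illustrative sketch of that kind of pipeline; the layer counts, channel widths, and the Gram-matrix texture statistic are assumptions for illustration, not the authors' exact configuration.

```python
import torch
import torch.nn as nn


class ContextEncoder(nn.Module):
    """Encoder-decoder generator that predicts hole content from the rendered virtual view.
    (Illustrative depths/widths only; not the paper's exact architecture.)"""
    def __init__(self, in_ch=3, feat=64):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(in_ch, feat, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(feat, feat * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(feat * 2, feat * 4, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(feat * 4, feat * 2, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(feat * 2, feat, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(feat, in_ch, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, rendered_view):
        return self.decode(self.encode(rendered_view))


class PatchDiscriminator(nn.Module):
    """Adversarial discriminator that scores filled regions as real or synthesized."""
    def __init__(self, in_ch=3, feat=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(feat, feat * 2, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(feat * 2, 1, 4, stride=1, padding=1), nn.Sigmoid(),
        )

    def forward(self, image):
        return self.net(image)


def gram_matrix(features):
    """Gram matrix of feature maps: the usual statistic a texture/style network matches.
    features: (B, C, H, W) tensor of activations."""
    b, c, h, w = features.shape
    flat = features.view(b, c, h * w)
    return flat.bmm(flat.transpose(1, 2)) / (c * h * w)
```

In a GAN-style setup of this sort, the generator is typically trained with a reconstruction loss on the hole region plus the adversarial loss from the discriminator, and a style term computed from Gram matrices of feature maps; how these losses are weighted here is described in the full paper, not the abstract.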

Paper Details

Date Published: 6 November 2018
PDF: 5 pages
Proc. SPIE 10817, Optoelectronic Imaging and Multimedia Technology V, 1081706 (6 November 2018); doi: 10.1117/12.2500779
Author Affiliations:
Chao Li, Beijing Univ. of Posts and Telecommunications (China)
Xinzhu Sang, Beijing Univ. of Posts and Telecommunications (China)
Duo Chen, Beijing Univ. of Posts and Telecommunications (China)
Di Zhang, Beijing Univ. of Posts and Telecommunications (China)


Published in SPIE Proceedings Vol. 10817:
Optoelectronic Imaging and Multimedia Technology V
Qionghai Dai; Tsutomu Shimura, Editor(s)

© SPIE.