
Proceedings Paper

Semantic image inpainting with dense and dilated deep convolutional autoencoder adversarial network

Paper Abstract

Developments in generative adversarial networks (GANs) have made it possible to fill missing regions in damaged images with convincing details. However, many existing approaches fail to keep the inpainted content and structure consistent with their surroundings. In this paper, we propose a GAN-based inpainting model that restores semantically damaged images in a visually reasonable and coherent way. In our model, the generative network has an autoencoder architecture and the discriminator network is a CNN classifier. Unlike the classic autoencoder, we design a novel bottleneck layer in the middle of the autoencoder composed of four dense-net blocks, each containing vanilla convolution layers and dilated convolution layers. The kernels of the dilated convolutions are spread out, effectively enlarging the receptive field. The model can therefore capture semantic information over a wider area, ensuring the consistency of inpainted images. Furthermore, the reuse of features from different levels in each dense-net block helps the model understand the whole image better and produce convincing results. We evaluate our model on the public datasets CelebA and Stanford Cars with randomly positioned masks of different ratios. The effectiveness of our model is verified by qualitative and quantitative experiments.
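The bottleneck described above can be sketched as follows. This is a minimal PyTorch illustration, not the authors' code: the channel widths, growth rate, and dilation rates (1, 1, 2, 4) are assumptions chosen only to show how dense (concatenative) connections and dilated convolutions combine in one block.

```python
import torch
import torch.nn as nn

class DilatedDenseBlock(nn.Module):
    """Sketch of one bottleneck block: 3x3 convolutions with increasing
    dilation, densely connected so each layer sees all earlier features.
    Hyperparameters are illustrative assumptions, not from the paper."""

    def __init__(self, channels=64, growth=16, dilations=(1, 1, 2, 4)):
        super().__init__()
        self.layers = nn.ModuleList()
        in_ch = channels
        for d in dilations:
            # padding = dilation keeps the spatial size fixed for 3x3 kernels,
            # while larger dilation enlarges the receptive field
            self.layers.append(nn.Sequential(
                nn.Conv2d(in_ch, growth, kernel_size=3, padding=d, dilation=d),
                nn.ReLU(inplace=True),
            ))
            in_ch += growth  # dense connectivity: feature maps accumulate

        # 1x1 convolution fuses the concatenated features back to `channels`
        self.fuse = nn.Conv2d(in_ch, channels, kernel_size=1)

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            # each layer receives the concatenation of all earlier outputs
            feats.append(layer(torch.cat(feats, dim=1)))
        return self.fuse(torch.cat(feats, dim=1))
```

Stacking four such blocks in sequence would form the bottleneck the abstract describes; because padding matches dilation, each block preserves the spatial resolution of the encoder's feature maps.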

Paper Details

Date Published: 18 November 2019
PDF: 9 pages
Proc. SPIE 11187, Optoelectronic Imaging and Multimedia Technology VI, 1118712 (18 November 2019); doi: 10.1117/12.2538756
Author Affiliations
Kun Ren, Beijing Univ. of Technology (China)
Ministry of Education of the People's Republic of China (China)
Beijing Lab. for Urban Mass Transit (China)
Chunqi Fan, Beijing Univ. of Technology (China)
Ministry of Education of the People's Republic of China (China)
Beijing Lab. for Urban Mass Transit (China)
Lisha Meng, Beijing Univ. of Technology (China)
Ministry of Education of the People's Republic of China (China)
Beijing Lab. for Urban Mass Transit (China)
Hong Huang, Beijing Univ. of Technology (China)
Ministry of Education of the People's Republic of China (China)
Beijing Lab. for Urban Mass Transit (China)


Published in SPIE Proceedings Vol. 11187:
Optoelectronic Imaging and Multimedia Technology VI
Qionghai Dai; Tsutomu Shimura; Zhenrong Zheng, Editor(s)

© SPIE. Terms of Use