
Proceedings Paper

Single image super resolution based on generative adversarial networks

Paper Abstract

SRGAN-based deep neural networks for single-image super-resolution reconstruction can generate more realistic images than CNN-based super-resolution networks. However, as the network becomes deeper and more complex, unpleasant artifacts can appear. Extensive experiments show that the ESRGAN model avoids such problems. Yet when ESRGAN is used for super-resolution reconstruction, the perceptual index of the results does not reach a sufficiently low value. There are two reasons for this: (1) ESRGAN does not enlarge the feature map: by default it extracts image features from 128x128 crops and therefore cannot capture more of the image's information. (2) ESRGAN does not re-optimize the generated image. We therefore propose ESRGAN-Pro, which optimizes ESRGAN in these two respects; combined with a large amount of training data, it achieves a better perceptual index and texture.
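The first modification above concerns the size of the training crops from which features are extracted. As a minimal sketch (not the authors' implementation), the idea of replacing ESRGAN's default 128x128 HR crop with a larger one can be illustrated with a plain-Python patch extractor; the crop size 192 and the stride-based downsampling are assumptions for illustration, since real SR pipelines typically use bicubic resizing to produce the LR input:

```python
import random

def extract_hr_lr_pair(image, hr_crop=192, scale=4, rng=None):
    """Crop an HR training patch and derive its LR counterpart.

    ESRGAN trains on 128x128 HR crops by default; a larger crop
    (hr_crop=192 here, a hypothetical value) gives the network more
    spatial context per sample. The LR patch is produced by naive
    strided subsampling purely for illustration.
    """
    rng = rng or random.Random()
    h, w = len(image), len(image[0])
    top = rng.randrange(h - hr_crop + 1)
    left = rng.randrange(w - hr_crop + 1)
    # Crop the high-resolution patch.
    hr = [row[left:left + hr_crop] for row in image[top:top + hr_crop]]
    # Subsample every `scale`-th pixel to get the low-resolution input.
    lr = [row[::scale] for row in hr[::scale]]
    return hr, lr
```

For example, on a 256x256 image this yields a 192x192 HR patch paired with a 48x48 LR patch at a x4 scale factor, versus the 128x128 / 32x32 pairs of the default setting.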

Paper Details

Date Published: 14 August 2019
PDF: 8 pages
Proc. SPIE 11179, Eleventh International Conference on Digital Image Processing (ICDIP 2019), 111790T (14 August 2019); doi: 10.1117/12.2539692
Author Affiliations:
Kai Li, Qinghai Univ. (China)
Liang Ye, Rocket Force Univ. of Engineering (China)
Shenghao Yang, Qinghai Univ. (China)
Jinfang Jia, Qinghai Univ. (China)
Jianqiang Huang, Qinghai Univ. (China)
Xiaoying Wang, Qinghai Univ. (China)


Published in SPIE Proceedings Vol. 11179:
Eleventh International Conference on Digital Image Processing (ICDIP 2019)
Jenq-Neng Hwang; Xudong Jiang, Editor(s)

© SPIE.