
Proceedings Paper

Attention-guided GANs for human pose transfer
Author(s): Jinsong Zhang; Yuyang Zhao; Kun Li; Yebin Liu; Jingyu Yang; Qionghai Dai

Paper Abstract

This paper presents a novel generative adversarial network for the task of human pose transfer, which aims at transferring the pose of a given person to a target pose. To deal with pixel-to-pixel misalignment caused by pose differences, we introduce an attention mechanism and propose Pose-Guided Attention Blocks. With these blocks, the generator learns how to transfer details from the conditional image to the target image based on the target pose, so that the target pose truly guides the transfer of features. The effectiveness of the proposed network is validated on the DeepFashion and Market-1501 datasets. Compared with state-of-the-art methods, our generated images are more realistic and have better facial details.
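The abstract describes blocks in which target-pose features guide which appearance features the generator passes on. The paper's actual architecture is not given here, so the following is only a minimal NumPy sketch of one plausible gating form: the pose features produce a soft attention mask that modulates the conditional-image features (the function name and the exact formulation are assumptions, not the authors' implementation).

```python
import numpy as np

def sigmoid(x):
    """Elementwise logistic function, mapping activations into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def pose_guided_attention(image_feat, pose_feat):
    """Hypothetical pose-guided attention gate (illustrative only).

    image_feat: appearance features from the conditional image,
                shape (batch, channels, height, width).
    pose_feat:  features derived from the target pose, same shape.

    The pose features are squashed into a soft mask in (0, 1); the
    appearance features are multiplied by that mask, so the target
    pose decides which details are transferred at each location.
    """
    mask = sigmoid(pose_feat)
    return image_feat * mask
```

In a real network the mask would come from learned convolutions over pose-encoder features rather than the raw pose tensor, but the gating idea is the same: attention weights near 1 let conditional-image details through, weights near 0 suppress them.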

Paper Details

Date Published: 18 November 2019
PDF: 8 pages
Proc. SPIE 11187, Optoelectronic Imaging and Multimedia Technology VI, 111870W (18 November 2019); doi: 10.1117/12.2538638
Author Affiliations:
Jinsong Zhang, Tianjin Univ. (China)
Yuyang Zhao, Tianjin Univ. (China)
Kun Li, Tianjin Univ. (China)
Yebin Liu, Tsinghua Univ. (China)
Jingyu Yang, Tianjin Univ. (China)
Qionghai Dai, Tsinghua Univ. (China)


Published in SPIE Proceedings Vol. 11187:
Optoelectronic Imaging and Multimedia Technology VI
Qionghai Dai; Tsutomu Shimura; Zhenrong Zheng, Editor(s)

© SPIE. Terms of Use