
Proceedings Paper

View synthesis by shared conditional adversarial autoencoder
Author(s): Xingya Chang; Dongyue Chen; Qiusheng Chen; Tong Jia; Hongyu Wang

Paper Abstract

Synthesizing novel views of a 3D object is an important problem in both image processing and computer vision. We propose a shared conditional adversarial autoencoder (SCAAE) network that is trained end-to-end to render novel views of a previously unseen object given a single image of that object. The model builds its generator on the GAN framework by introducing a U-Net, which generates a novel-view image from the input image and a controllable condition signal. A fully convolutional network (FCN) serves as the discriminator to distinguish real images from generated ones. We also propose a new objective function that accounts for both distribution consistency and transformation persistence. The SCAAE network generates multi-view images of objects directly, rather than relying on three-dimensional physical models, which overcomes the limitations of manual modeling. Experiments demonstrate that the proposed network structure outperforms existing approaches.
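As a rough illustration of the architecture the abstract describes (not the authors' code, which is not reproduced on this page), the sketch below pairs a U-Net-style generator conditioned on a target-view signal with a fully convolutional patch discriminator in PyTorch. All layer sizes, channel counts, and the 18-way viewpoint code are assumptions made for exposition only.

```python
# Illustrative sketch only -- not the authors' implementation. Layer sizes,
# the 18-way viewpoint code, and all names are assumptions for exposition.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Downsampling block: stride-2 conv + batch norm + LeakyReLU.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 4, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.LeakyReLU(0.2, inplace=True),
    )

def deconv_block(in_ch, out_ch):
    # Upsampling block: stride-2 transposed conv + batch norm + ReLU.
    return nn.Sequential(
        nn.ConvTranspose2d(in_ch, out_ch, 4, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class UNetGenerator(nn.Module):
    """U-Net-style generator: encodes the input view, injects the target-view
    condition as extra channels, and decodes with skip connections."""
    def __init__(self, img_ch=3, cond_dim=18, base=64):
        super().__init__()
        self.enc1 = conv_block(img_ch + cond_dim, base)   # 64x64 -> 32x32
        self.enc2 = conv_block(base, base * 2)            # 32x32 -> 16x16
        self.enc3 = conv_block(base * 2, base * 4)        # 16x16 -> 8x8
        self.dec3 = deconv_block(base * 4, base * 2)      # 8x8   -> 16x16
        self.dec2 = deconv_block(base * 4, base)          # skip-concatenated input
        self.dec1 = nn.Sequential(
            nn.ConvTranspose2d(base * 2, img_ch, 4, stride=2, padding=1),
            nn.Tanh(),
        )

    def forward(self, x, cond):
        # Broadcast the condition vector over the spatial grid and concatenate.
        c = cond.view(cond.size(0), -1, 1, 1).expand(-1, -1, x.size(2), x.size(3))
        e1 = self.enc1(torch.cat([x, c], dim=1))
        e2 = self.enc2(e1)
        e3 = self.enc3(e2)
        d3 = self.dec3(e3)
        d2 = self.dec2(torch.cat([d3, e2], dim=1))
        return self.dec1(torch.cat([d2, e1], dim=1))

class FCNDiscriminator(nn.Module):
    """Fully convolutional discriminator producing a patch-wise real/fake map."""
    def __init__(self, img_ch=3, base=64):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(img_ch, base),
            conv_block(base, base * 2),
            nn.Conv2d(base * 2, 1, 4, stride=1, padding=1),
        )

    def forward(self, x):
        return self.net(x)

# Smoke test with a 64x64 input and an assumed 18-way viewpoint code.
if __name__ == "__main__":
    g, d = UNetGenerator(), FCNDiscriminator()
    img = torch.randn(2, 3, 64, 64)
    view = torch.zeros(2, 18)
    view[:, 5] = 1.0
    fake = g(img, view)   # (2, 3, 64, 64) synthesized target view
    score = d(fake)       # patch-wise real/fake scores
    print(fake.shape, score.shape)
```

In this sketch the condition signal is tiled spatially and concatenated with the input image, one common way to condition an image-to-image generator; the paper's actual conditioning mechanism and loss terms (distribution consistency and transformation persistence) are not shown here.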

Paper Details

Date Published: 31 January 2020
PDF: 6 pages
Proc. SPIE 11427, Second Target Recognition and Artificial Intelligence Summit Forum, 114270T (31 January 2020); doi: 10.1117/12.2550551
Author Affiliations:
Xingya Chang, Northeastern Univ. (China)
Dongyue Chen, Northeastern Univ. (China)
Qiusheng Chen, Northeastern Univ. (China)
Tong Jia, Northeastern Univ. (China)
Hongyu Wang, Northeastern Univ. (China)


Published in SPIE Proceedings Vol. 11427:
Second Target Recognition and Artificial Intelligence Summit Forum
Tianran Wang; Tianyou Chai; Huitao Fan; Qifeng Yu, Editor(s)

© SPIE.