
Proceedings Paper

Examining performance of sketch-to-image translation models with multiclass automatically generated paired training data
Author(s): Dichao Hu

Paper Abstract

Image translation is a computer vision task in which one representation of a scene is converted into another. Many approaches have been proposed and achieve strong results, but they typically require abundant paired training data, which are expensive to acquire: translation models are usually trained on carefully and laboriously constructed sets of paired examples. Our work focuses instead on learning from automatically generated paired data. We propose a method that generates fake sketches from images using an adversarial network, then pairs each image with its corresponding fake sketch to form a large-scale, multi-class paired training set for a sketch-to-image translation model. Our model is an encoder-decoder architecture in which the encoder generates fake sketches from images and the decoder performs sketch-to-image translation. Qualitative results show that the encoder can be used to generate large-scale multi-class paired data under low supervision. Our current dataset contains 61,255 image-(fake) sketch pairs from 256 categories, and these figures can grow substantially in the future because the method relies only weakly on manually labelled data.
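The paper itself does not publish code; the following is a minimal schematic of the encoder-decoder data flow the abstract describes, under stated assumptions: image and sketch sizes (64x64), the per-pixel channel-mixing "networks", and all variable names are hypothetical stand-ins for the adversarially trained encoder and the translation decoder.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 64, 64  # assumed image resolution, not taken from the paper

def encoder(image, w):
    """Schematic 'image -> fake sketch' mapping: a 1x1-convolution-style
    per-pixel channel mix followed by a sigmoid, standing in for the
    adversarially trained encoder. Output is a single-channel sketch."""
    s = image @ w                       # (H, W, 3) @ (3, 1) -> (H, W, 1)
    return 1.0 / (1.0 + np.exp(-s))

def decoder(sketch, w):
    """Schematic 'sketch -> image' mapping back to 3 RGB channels,
    standing in for the sketch-to-image translation decoder."""
    x = sketch @ w                      # (H, W, 1) @ (1, 3) -> (H, W, 3)
    return 1.0 / (1.0 + np.exp(-x))

image = rng.random((H, W, 3))           # a stand-in training image
w_enc = rng.standard_normal((3, 1))     # hypothetical encoder weights
w_dec = rng.standard_normal((1, 3))     # hypothetical decoder weights

# The encoder output, paired with `image`, is one automatically
# generated (image, fake sketch) training example.
fake_sketch = encoder(image, w_enc)     # shape (64, 64, 1)
reconstruction = decoder(fake_sketch, w_dec)  # shape (64, 64, 3)
```

The design point this illustrates is only the pairing trick: because the encoder produces a sketch for any input image, every unlabelled image yields one (image, fake sketch) pair for free, which is what lets the dataset scale with low supervision.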

Paper Details

Date Published: 22 March 2019
PDF: 6 pages
Proc. SPIE 11049, International Workshop on Advanced Image Technology (IWAIT) 2019, 110490F (22 March 2019); doi: 10.1117/12.2521309
Author Affiliations:
Dichao Hu, Georgia Institute of Technology (United States)


Published in SPIE Proceedings Vol. 11049:
International Workshop on Advanced Image Technology (IWAIT) 2019
Qian Kemao; Kazuya Hayase; Phooi Yee Lau; Wen-Nung Lie; Yung-Lyul Lee; Sanun Srisuk; Lu Yu, Editor(s)

© SPIE.