Improved ArtGAN for Conditional Synthesis of Natural Image and Artwork

Document Type

Article

Publication Date

1-1-2019

Abstract

This paper proposes a series of new approaches to improve generative adversarial networks (GANs) for conditional image synthesis; we name the proposed model 'ArtGAN'. One of the key innovations of ArtGAN is that the gradient of the loss function w.r.t. the label (randomly assigned to each generated image) is back-propagated from the categorical discriminator to the generator. With this feedback from the label information, the generator learns more efficiently and generates images of better quality. Inspired by recent works, an autoencoder is incorporated into the categorical discriminator to provide additional complementary information. Last but not least, we introduce a novel strategy to further improve image quality. In the experiments, we evaluate ArtGAN on CIFAR-10 and STL-10 via ablation studies. The empirical results show that the proposed model outperforms the state-of-the-art on CIFAR-10 in terms of Inception score. Qualitatively, we demonstrate that ArtGAN is able to generate plausible-looking images on Oxford-102 and CUB-200, as well as to draw realistic artworks based on style, artist, and genre. The source code and models are available at: https://github.com/cs-chan/ArtGAN.
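To make the label-feedback idea in the abstract concrete, the following is a minimal sketch, assuming PyTorch; the toy layer sizes and the names ToyGenerator and ToyCategoricalDiscriminator are illustrative placeholders and do not reproduce the authors' architecture. The essential point it shows is that the generator's loss is a categorical loss on the discriminator's class prediction for the randomly assigned label, so the label gradient flows from the discriminator back into the generator.

# Minimal sketch (not the authors' code) of the label-feedback mechanism,
# assuming PyTorch. Sizes and class names are illustrative placeholders.
import torch
import torch.nn as nn

NUM_CLASSES = 10          # e.g. CIFAR-10
LATENT_DIM = 100
IMG_PIXELS = 3 * 32 * 32

class ToyGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        # Input: noise vector concatenated with a one-hot class label.
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + NUM_CLASSES, 512), nn.ReLU(),
            nn.Linear(512, IMG_PIXELS), nn.Tanh(),
        )

    def forward(self, z, onehot):
        return self.net(torch.cat([z, onehot], dim=1))

class ToyCategoricalDiscriminator(nn.Module):
    """Outputs K+1 logits: K real classes plus one 'fake' class."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(IMG_PIXELS, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, NUM_CLASSES + 1),
        )

    def forward(self, x):
        return self.net(x)

G, D = ToyGenerator(), ToyCategoricalDiscriminator()
ce = nn.CrossEntropyLoss()

# One generator step: randomly assign a label to each fake image, then
# back-propagate the categorical loss w.r.t. that label through D into G.
z = torch.randn(16, LATENT_DIM)
labels = torch.randint(0, NUM_CLASSES, (16,))
onehot = torch.eye(NUM_CLASSES)[labels]
fake = G(z, onehot)
g_loss = ce(D(fake), labels)   # label gradient flows from D back to G
g_loss.backward()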

Keywords

ArtGAN, artwork synthesis, deep learning, generative adversarial networks, image synthesis

Divisions

fsktm

Funders

Fundamental Research Grant Scheme (FRGS) MoHE from the Ministry of Education Malaysia under Grant FP004-2016; UM Frontier Research from University of Malaya under Grant FG002-17AFR

Publication Title

IEEE Transactions on Image Processing

Volume

28

Issue

1

Publisher

Institute of Electrical and Electronics Engineers

