
Least Squares Generative Adversarial Networks



Abstract:

Unsupervised learning with generative adversarial networks (GANs) has proven hugely successful. Regular GANs hypothesize the discriminator as a classifier with the sigmoid cross entropy loss function. However, we found that this loss function may lead to the vanishing gradients problem during the learning process. To overcome such a problem, we propose in this paper the Least Squares Generative Adversarial Networks (LSGANs), which adopt the least squares loss function for the discriminator. We show that minimizing the objective function of LSGAN is equivalent to minimizing the Pearson χ² divergence. There are two benefits of LSGANs over regular GANs. First, LSGANs are able to generate higher quality images than regular GANs. Second, LSGANs learn more stably during the training process. We evaluate LSGANs on the LSUN and CIFAR-10 datasets, and the experimental results show that the images generated by LSGANs are of better quality than those generated by regular GANs. We also conduct two comparison experiments between LSGANs and regular GANs to illustrate the stability of LSGANs.
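The least squares objectives described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it uses the common 0-1 target coding (fake target a = 0, real target b = 1, and generator target c = 1), and the function names and use of NumPy arrays in place of network outputs are illustrative assumptions.

```python
import numpy as np

def lsgan_discriminator_loss(d_real, d_fake, a=0.0, b=1.0):
    # Least squares loss for the discriminator: push outputs on real
    # samples toward b and outputs on generated samples toward a.
    return 0.5 * np.mean((d_real - b) ** 2) + 0.5 * np.mean((d_fake - a) ** 2)

def lsgan_generator_loss(d_fake, c=1.0):
    # Least squares loss for the generator: push the discriminator's
    # outputs on generated samples toward c.
    return 0.5 * np.mean((d_fake - c) ** 2)
```

With the a = 0, b = 1, c = 1 coding, both losses reach zero when the discriminator outputs exactly its targets; unlike the sigmoid cross entropy loss, the quadratic penalty still provides gradients for fake samples that lie far from the decision boundary, which is the vanishing-gradients issue the paper addresses.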
Date of Conference: 22-29 October 2017
Date Added to IEEE Xplore: 25 December 2017
Electronic ISSN: 2380-7504
Conference Location: Venice, Italy

1. Introduction

Deep learning has driven profound change and has been applied to many real-world tasks, such as image classification [7], object detection [27] and segmentation [18]. These tasks fall within the scope of supervised learning, meaning that large amounts of labeled data are provided for the learning process. Compared with supervised learning, however, unsupervised learning tasks, such as learning generative models, have benefited less from deep learning. Although some deep generative models, e.g., RBMs [8], DBMs [28] and VAEs [14], have been proposed, these models suffer from intractable likelihood functions or intractable inference, which in turn restricts their effectiveness.

