On Positive-Unlabeled Classification in GAN


Abstract:

This paper defines a positive and unlabeled classification problem for standard GANs, which then leads to a novel technique for stabilizing the training of the discriminator in GANs. Traditionally, real data are taken as positive while generated data are taken as negative. This positive-negative classification criterion is kept fixed throughout the discriminator's learning process, without accounting for the gradually improving quality of the generated data, even though generated samples can at times be more realistic than real ones. It is more reasonable to treat the generated data as unlabeled: depending on their quality, they could be either positive or negative. The discriminator then becomes a classifier for this positive and unlabeled classification problem, from which we derive a new Positive-Unlabeled GAN (PUGAN). We theoretically analyze the global optimum that the proposed model attains and its equivalent optimization objective. Empirically, we find that PUGAN achieves performance comparable to, or even better than, sophisticated discriminator stabilization methods.
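As context for treating generated data as unlabeled, the non-negative PU risk estimator from the PU learning literature [4], [15] can be sketched as follows, here instantiated with real data as the positive sample and generated data as the unlabeled sample (the class prior π, the fraction of effectively-positive samples among generated data, is an assumption of this sketch, not notation from the paper):

```latex
\mathcal{R}_{\mathrm{pu}}(D)
  = \pi \, \mathbb{E}_{x \sim p_{\mathrm{real}}}\big[\ell(D(x), +1)\big]
  + \max\!\Big(0,\;
      \mathbb{E}_{x \sim p_{g}}\big[\ell(D(x), -1)\big]
      - \pi \, \mathbb{E}_{x \sim p_{\mathrm{real}}}\big[\ell(D(x), -1)\big]
    \Big)
```

The second term estimates the negative-class risk of the unlabeled (generated) data by subtracting out its positive component, and the clamp at zero keeps the estimate non-negative.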
Date of Conference: 13-19 June 2020
Date Added to IEEE Xplore: 05 August 2020
Conference Location: Seattle, WA, USA

1. Introduction

Recently, deep generative models have achieved remarkable results in image generation tasks [14], [22], [25], [5]. As a representative generative model, GANs [5] approximate a target distribution by playing a min-max game. In the standard GAN framework [5], [23], a generator takes noise vectors drawn from a prior distribution (e.g., a Gaussian distribution) as input and aims to produce data that follow the distribution of the reference natural images, while the discriminator aims to distinguish the generated data from the real data. Various GAN methods have been developed for many interesting applications. For example, in the image-to-image translation task, the generator maps an input image to an output image; representative methods include Pix2pix [10], which uses paired training images, and CycleGAN [30], which works in an unsupervised way.
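The fixed positive-negative labeling described above, and the positive-unlabeled alternative motivating this paper, can be contrasted in a minimal sketch. This is not the paper's implementation: the non-negative risk clamp follows Kiryo et al. [15], and the class prior `pi` is an assumed hyperparameter introduced only for illustration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def mean(xs):
    return sum(xs) / len(xs)

def standard_d_loss(d_real, d_fake):
    """Standard GAN discriminator loss: real outputs d_real are always
    labeled positive, generator outputs d_fake always negative."""
    pos = mean([-math.log(sigmoid(d)) for d in d_real])
    neg = mean([-math.log(1.0 - sigmoid(d)) for d in d_fake])
    return pos + neg

def pu_d_loss(d_real, d_fake, pi):
    """PU-style discriminator loss, a sketch after the non-negative PU
    risk estimator of Kiryo et al. [15]: generated data are unlabeled,
    a mixture containing a fraction pi of effectively-positive samples."""
    pos_risk = pi * mean([-math.log(sigmoid(d)) for d in d_real])
    # Negative-class risk on the unlabeled (generated) data, with the
    # positive component subtracted out using real data as its proxy.
    neg_risk = (mean([-math.log(1.0 - sigmoid(d)) for d in d_fake])
                - pi * mean([-math.log(1.0 - sigmoid(d)) for d in d_real]))
    # Clamp at zero so the risk estimate stays non-negative.
    return pos_risk + max(0.0, neg_risk)
```

With `pi = 0` the PU loss degenerates to the plain negative-class loss on generated data; as `pi` grows, more of the generated batch is credited to the positive class, which softens the fixed positive-negative criterion.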

References
[1] Martin Arjovsky, Soumith Chintala and Léon Bottou, "Wasserstein generative adversarial networks", Proc. of International Conference on Machine Learning, pp. 214-223, 2017.
[2] Javad Behboodian, "On the modes of a mixture of two normal distributions", Technometrics, vol. 12, no. 1, pp. 131-139, 1970.
[3] Andrew Brock, Jeff Donahue and Karen Simonyan, "Large scale GAN training for high fidelity natural image synthesis", arXiv preprint, 2018.
[4] Marthinus C. du Plessis, Gang Niu and Masashi Sugiyama, "Analysis of learning from positive and unlabeled data", Advances in Neural Information Processing Systems, pp. 703-711, 2014.
[5] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, et al., "Generative adversarial nets", Advances in Neural Information Processing Systems, pp. 2672-2680, 2014.
[6] Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin and Aaron C. Courville, "Improved training of Wasserstein GANs", Advances in Neural Information Processing Systems, pp. 5767-5777, 2017.
[7] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, Günter Klambauer and Sepp Hochreiter, "GANs trained by a two time-scale update rule converge to a Nash equilibrium", arXiv preprint, 2017.
[8] Ming Hou, Brahim Chaib-Draa, Chao Li and Qibin Zhao, "Generative adversarial positive-unlabelled learning", arXiv preprint, 2017.
[9] Sergey Ioffe and Christian Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift", arXiv preprint, 2015.
[10] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou and Alexei A. Efros, "Image-to-image translation with conditional adversarial networks", arXiv preprint, 2017.
[11] Alexia Jolicoeur-Martineau, "The relativistic discriminator: a key element missing from standard GAN", Proc. of International Conference on Learning Representations, 2019.
[12] Tero Karras, Timo Aila, Samuli Laine and Jaakko Lehtinen, "Progressive growing of GANs for improved quality, stability and variation", arXiv preprint, 2017.
[13] Diederik P. Kingma and Jimmy Ba, "Adam: A method for stochastic optimization", arXiv preprint, 2014.
[14] Diederik P. Kingma and Max Welling, "Auto-encoding variational Bayes", arXiv preprint, 2013.
[15] Ryuichi Kiryo, Gang Niu, Marthinus C. du Plessis and Masashi Sugiyama, "Positive-unlabeled learning with nonnegative risk estimator", Advances in Neural Information Processing Systems, 2017.
[16] Alex Krizhevsky and Geoffrey Hinton, "Learning multiple layers of features from tiny images", 2009.
[17] Yann LeCun, Léon Bottou, Yoshua Bengio and Patrick Haffner, "Gradient-based learning applied to document recognition", Proceedings of the IEEE, 1998.
[18] Bruce G. Lindsay, "Mixture models: theory, geometry and applications", NSF-CBMS Regional Conference Series in Probability and Statistics, pp. i-163, 1995.
[19] Xudong Mao, Qing Li, Haoran Xie, Raymond Y. K. Lau, Zhen Wang and Stephen Paul Smolley, "Least squares generative adversarial networks", Proc. of International Conference on Computer Vision, 2017.
[20] Xudong Mao, Qing Li, Haoran Xie, Raymond Y. K. Lau and Zhen Wang, "Multi-class generative adversarial networks with the L2 loss function", CoRR abs/1611.04076, 2016.
[21] Takeru Miyato, Toshiki Kataoka, Masanori Koyama and Yuichi Yoshida, "Spectral normalization for generative adversarial networks", arXiv preprint, 2018.
[22] Aaron van den Oord, Nal Kalchbrenner and Koray Kavukcuoglu, "Pixel recurrent neural networks", arXiv preprint, 2016.
[23] Alec Radford, Luke Metz and Soumith Chintala, "Unsupervised representation learning with deep convolutional generative adversarial networks", arXiv preprint, 2015.
[24] Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford and Xi Chen, "Improved techniques for training GANs", Advances in Neural Information Processing Systems, pp. 2234-2242, 2016.
[25] Aaron van den Oord, Nal Kalchbrenner, Lasse Espeholt, Oriol Vinyals, Alex Graves, et al., "Conditional image generation with PixelCNN decoders", Advances in Neural Information Processing Systems, 2016.
[26] Han Xiao, Kashif Rasul and Roland Vollgraf, "Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms", 2017.
[27] Yixing Xu, Chang Xu, Chao Xu and Dacheng Tao, "Multi-positive and unlabeled learning", IJCAI, 2017.
[28] Fisher Yu, Yinda Zhang, Shuran Song, Ari Seff and Jianxiong Xiao, "LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop", arXiv preprint, 2015.
[29] Weiwei Zhang, Jian Sun and Xiaoou Tang, "Cat head detection: how to effectively exploit shape and texture features", Proc. of European Conference on Computer Vision, pp. 802-816, 2008.
[30] Jun-Yan Zhu, Taesung Park, Phillip Isola and Alexei A. Efros, "Unpaired image-to-image translation using cycle-consistent adversarial networks", arXiv preprint, 2017.