Training Generative Adversarial Networks via Stochastic Nash Games | IEEE Journals & Magazine | IEEE Xplore

Abstract:

Generative adversarial networks (GANs) are a class of generative models with two antagonistic neural networks: a generator and a discriminator. These two neural networks compete against each other through an adversarial process that can be modeled as a stochastic Nash equilibrium problem. Since the associated training process is challenging, it is fundamental to design reliable algorithms that compute an equilibrium. In this article, we propose a stochastic relaxed forward-backward (SRFB) algorithm for GANs, and we show convergence to an exact solution as the number of available data samples increases. We also show convergence of an averaged variant of the SRFB algorithm to a neighborhood of the solution when only a few samples are available. In both cases, convergence is guaranteed when the pseudogradient mapping of the game is monotone. This assumption is among the weakest known in the literature. Moreover, we apply our algorithm to the image generation problem.
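To illustrate the flavor of a relaxed forward-backward update, the sketch below runs a deterministic simplification of it on a toy bilinear saddle-point game, min_x max_y xy, whose pseudogradient F(x, y) = (y, -x) is monotone. The game, step size, and relaxation parameter are illustrative assumptions, not the exact stochastic scheme analyzed in the article:

```python
# Illustrative sketch (NOT the authors' exact algorithm): a deterministic
# relaxed forward-backward iteration on the monotone bilinear game
# min_x max_y x*y, whose unique Nash equilibrium is (0, 0).

def pseudogradient(x, y):
    # F(x, y) = (d/dx of x*y, -d/dy of x*y) = (y, -x)
    return y, -x

def relaxed_forward_backward(x, y, lam=0.5, delta=0.5, iters=500):
    # `lam` (step size) and `delta` (relaxation weight) are illustrative choices.
    xb, yb = x, y  # relaxed (averaged) iterate
    for _ in range(iters):
        # Relaxation step: convex combination of the old average and the iterate.
        xb = (1 - delta) * xb + delta * x
        yb = (1 - delta) * yb + delta * y
        # Forward step from the relaxed point along the pseudogradient.
        gx, gy = pseudogradient(x, y)
        x = xb - lam * gx
        y = yb - lam * gy
    return x, y

x, y = relaxed_forward_backward(1.0, 1.0)
print(abs(x) + abs(y))  # shrinks toward 0, i.e., toward the equilibrium
```

On this game, plain simultaneous gradient descent-ascent spirals away from (0, 0); the relaxation term damps the rotation induced by the monotone (skew-symmetric) pseudogradient, and the iterates converge to the equilibrium.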
Page(s): 1319 - 1328
Date of Publication: 26 August 2021

PubMed ID: 34437077


I. Introduction

Generative adversarial networks (GANs) are an example of an unsupervised generative model. The basic idea is that, given samples drawn from a probability distribution, the neural network learns from a training set how to estimate that distribution. Most of the literature on GANs focuses on sample generation (especially image generation), but GANs can also be designed to explicitly estimate a probability distribution [1]–[4].
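The adversarial process can be made concrete through the standard GAN value function, a two-player minimax game between the discriminator D and the generator G; the notation below follows the common formulation and is not taken verbatim from this article:

```latex
\min_{G} \max_{D} \;
  \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_{z}}\!\left[\log\big(1 - D(G(z))\big)\right]
```

Here D(x) is the discriminator's estimated probability that x is a real sample, and G(z) maps noise z drawn from p_z to a synthetic sample; at an equilibrium of this game, the generator's distribution matches p_data.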
