
A Multi-Player Minimax Game for Generative Adversarial Networks



Abstract:

While multiple discriminators have recently been exploited to enhance the discriminability and diversity of Generative Adversarial Networks (GANs), these independent discriminators may not collaborate harmoniously to learn diverse and complementary decision boundaries. This paper extends the original two-player adversarial game of GANs by introducing a new multi-player objective, the Discriminator Discrepancy Loss (DDL), for diversifying the multiple discriminators. Besides the competition between the generator and each discriminator, there are also competitions among the discriminators: 1) When training the multiple discriminators, we simultaneously minimize the original GAN loss and maximize DDL, seeking a good trade-off between accuracy and diversity. This yields diversified discriminators that fit the generated data distribution to the real data distribution from more comprehensive perspectives. 2) When training the generator, we minimize DDL to encourage the generator to confuse all discriminators, which enhances the diversity of the generated data distribution. Further, we propose a layer-sharing network architecture for the multiple discriminators, which allows them to learn distinct perspectives on top of shared low-level features through better collaboration; it also makes our model more lightweight than existing multi-discriminator approaches. Our DDL-GAN remarkably outperforms other GANs on five standard datasets for image generation tasks.
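
The Python (PyTorch) snippet below is only a minimal sketch of how the two competing objectives described above could be combined. It assumes that DDL is measured as the mean pairwise disagreement between discriminator scores on generated samples, and it uses a hypothetical trade-off weight lam; the function names (ddl, discriminator_loss, generator_loss) are illustrative and the paper's exact loss definition and hyper-parameters may differ.

import torch
import torch.nn.functional as F

def ddl(d_fake_list):
    # Assumed Discriminator Discrepancy Loss: mean pairwise L1 disagreement
    # between discriminator logits on the same batch of generated samples.
    total, pairs = 0.0, 0
    k = len(d_fake_list)
    for i in range(k):
        for j in range(i + 1, k):
            total = total + (d_fake_list[i] - d_fake_list[j]).abs().mean()
            pairs += 1
    return total / max(pairs, 1)

def discriminator_loss(d_real_list, d_fake_list, lam=0.1):
    # Minimize the standard GAN loss of every discriminator while
    # maximizing DDL (hence the minus sign) to diversify their boundaries.
    gan = 0.0
    for d_real, d_fake in zip(d_real_list, d_fake_list):
        gan = gan + F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) \
                  + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    return gan - lam * ddl(d_fake_list)

def generator_loss(d_fake_list, lam=0.1):
    # The generator tries to fool every discriminator and additionally
    # minimizes DDL so that all diversified discriminators are confused.
    gan = 0.0
    for d_fake in d_fake_list:
        gan = gan + F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    return gan + lam * ddl(d_fake_list)

In practice the discriminator update would use detached generator outputs, and the layer-sharing architecture would compute the lists d_real_list and d_fake_list from discriminator heads attached to a shared low-level feature extractor.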
Date of Conference: 06-10 July 2020
Date Added to IEEE Xplore: 09 June 2020
Conference Location: London, UK

1. Introduction

The Generative Adversarial Network (GAN) [1] is one of the mainstream techniques for fitting generated data to complicated real data. When trained towards an adversarial equilibrium (if it exists) in a minimax game, the generator G attempts to fit the real data distribution Pdata, while a discriminator D attempts to distinguish Pdata from the generated data distribution PG. In this two-player game, as long as D manages to distinguish the real from the fake with nonzero probability, it provides feedback to G through back-propagation that improves the synthesized distribution. However, if D is too weak, as in the case of Fig. 1(a), the model will suffer from mode collapse and fail to generate realistic data. A variety of techniques, e.g., weight clipping [2], gradient penalty [3], spectral normalization [4], and self-attention [5], have been introduced to enhance the modeling capability of D. The multi-discriminator framework [6] is an alternative way to strengthen D, where different Ds may focus on different perspectives of Pdata. Ideally, an ensemble of Ds can identify the underlying subtle distinctions between PG and Pdata and improve G, as illustrated in Fig. 1(b) and Fig. 1(c). However, such an ideal situation may not arise in practice, as the diversity of the decision boundaries is not guaranteed explicitly. The multiple discriminators are constructed with homogeneous network architectures and trained for the same task on the same training data. Thus, some of them will learn similar decision boundaries, as shown in Fig. 1(d). In the worst case, they may even degenerate to a single discriminator.
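
For reference, the two-player minimax game discussed above is the standard GAN objective of [1] (the classical formulation, not the multi-player DDL extension), where P_z denotes the prior noise distribution fed to the generator:

\min_{G}\max_{D} V(D,G) = \mathbb{E}_{x \sim P_{\mathrm{data}}}\left[\log D(x)\right] + \mathbb{E}_{z \sim P_{z}}\left[\log\left(1 - D(G(z))\right)\right]

With multiple discriminators, this objective is applied per discriminator, and the DDL term introduced in this paper adds the extra competition among the discriminators themselves.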

