
JGAN: A Joint Formulation of GAN for Synthesizing Images and Labels




Abstract:

Image generation with an explicit condition or label generally works better than unconditional methods. In modern GAN frameworks, both the generator and the discriminator are formulated to model the conditional distribution of images given labels. In this article, we provide an alternative formulation of GAN that models the joint distribution of images and labels. This joint formulation has two advantages over conditional approaches. The first is that the joint formulation is more robust to label noise when properly modeled, which alleviates the burden of producing noise-free labels and allows the use of weakly supervised labels in image generation. The second is that any kind of weak label or image feature correlated with the original image data can be used to enhance unconditional image generation. We show the effectiveness of our joint formulation on the CIFAR10, CIFAR100, and STL datasets with a state-of-the-art GAN architecture.
A schematic view of using JGAN for enhancing unsupervised image generation. We propose a novel GAN formulation that models the joint distribution of images and labels.
Published in: IEEE Access (Volume: 8)
Page(s): 188883 - 188888
Date of Publication: 15 October 2020
Electronic ISSN: 2169-3536

SECTION I.

Introduction

Due to the success of Generative Adversarial Networks (GANs) at modeling distributions of real-world data, they have been widely used for image generation. Since their introduction by Goodfellow and colleagues [1], many researchers have improved their stability and accuracy by adopting new loss functions [2], [3], designing new network architectures [4], [5], improving the training process and regularization [5], [6], imposing conditions [7]–[13], and inventing progressive methods [14]. Among these, imposing explicit conditions is one of the easiest ways to improve the quality of image generation when well-defined labels exist. In modern GAN frameworks, both the generator and the discriminator are formulated to model the conditional distribution of images given labels.

In this article, we propose an alternative formulation of GAN that models the joint distribution of images and labels. We show that this joint formulation has two advantages over conditional approaches. The first advantage is that the joint formulation is more robust to label noise when properly modeled. Typical labels used in image synthesis are annotated by human workers or generated by other machine learning methods, and it is generally difficult to guarantee the completeness or correctness of labels for large-scale data. Since conditional image generation treats labels as a given constraint or strong hypothesis, label noise may degrade the quality of image generation. Our joint formulation instead treats labels as additional information for modeling the joint distribution; it can be more robust to label noise because the joint probability distribution assumes no strong conditional dependence between images and labels. We show that the joint formulation matches the image quality of conditional generation on defect-free labels and is more robust when labels are noisy. Second, and more importantly, we can use any kind of weak label or additional information correlated with the original image data to enhance unconditional image generation, since our joint GAN formulation does not require those labels when generating images but actually generates them along with the images. In a conventional conditional formulation, feeding this additional data into the generator is impossible, since we do not know what data should be supplied at generation time. Our experiments show that better image generation is possible without feeding labels or other additional information explicitly. Our contributions are summarized as follows:

  • We propose a novel GAN formulation that models the joint distribution of images and labels, and show that this joint formulation increases robustness to noisy or weak labels.

  • We demonstrate that this joint formulation can be used to increase the quality of unconditional image generation by incorporating weak labels or additional information correlated with the original image data into the training process. Since the labels or additional information are used only for training, and our GAN generates both images and labels, we do not need to feed labels when generating images.

SECTION II.

A Joint Formulation of GAN for Modeling $p(\mathbf{I}, \mathbf{L})$

The standard adversarial loss for the discriminator $D$ when modeling the conditional probability $p(\mathbf{I}|\mathbf{L})$, in which $\mathbf{I}$ and $\mathbf{L}$ are images and labels respectively, is given by:
$$l(D) = -E_{q(\mathbf{L})}[E_{q(\mathbf{I}|\mathbf{L})}[\log D(\mathbf{I}, \mathbf{L})]] - E_{p(\mathbf{L})}[E_{p(G_{\mathbf{I}}(z)|\mathbf{L})}[\log(1 - D(G_{\mathbf{I}}(z), \mathbf{L}))]],\tag{1}$$
where $z$ is the input noise, and $q$ and $p$ are the true distribution and the generator distribution, respectively. The generator loss is defined as:
$$l(G) = -E_{p(\mathbf{L})}[E_{p(G_{\mathbf{I}}(z)|\mathbf{L})}[\log D(G_{\mathbf{I}}(z), \mathbf{L})]].\tag{2}$$

In our joint formulation, we rewrite the discriminator and generator losses with a new generator $G_{\mathbf{I},\mathbf{L}}(z)$, which generates both $\mathbf{I}$ and $\mathbf{L}$ jointly, as follows:
$$l(D) = -E_{q(\mathbf{L})}[E_{q(\mathbf{I}|\mathbf{L})}[\log D(\mathbf{I}, \mathbf{L})]] - E_{p(G_{\mathbf{I},\mathbf{L}}(z))}[\log(1 - D(G_{\mathbf{I},\mathbf{L}}(z)))],\tag{3}$$
$$l(G) = -E_{p(G_{\mathbf{I},\mathbf{L}}(z))}[\log D(G_{\mathbf{I},\mathbf{L}}(z))].\tag{4}$$

Note that no modification is made to the discriminator, since it already has a joint form that takes $p(\mathbf{L})$ and $p(\mathbf{I}|\mathbf{L})$ (under the assumption of conditional dependence), while $G_{\mathbf{I},\mathbf{L}}$ generates $\mathbf{I}$ and $\mathbf{L}$ simultaneously. Figure 1 illustrates the basic difference between the conditional formulation and our joint formulation for exploiting labels.
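To make Eqs. (3) and (4) concrete, here is a minimal sketch of how the two losses might be computed with the standard log loss, assuming PyTorch; `G` (mapping noise to an image and a label vector) and `D` (scoring an image-label pair) are hypothetical modules standing in for the architectures described later in the Experiment section, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def discriminator_loss(D, G, real_images, real_labels, z):
    # Real term of Eq. (3): -E_q[log D(I, L)] over true image-label pairs.
    real_scores = D(real_images, real_labels)
    loss_real = F.binary_cross_entropy_with_logits(
        real_scores, torch.ones_like(real_scores))
    # Fake term: -E_p[log(1 - D(G_{I,L}(z)))]. The generator emits both the
    # image and its label, so no real label is consumed on this branch.
    fake_images, fake_labels = G(z)
    fake_scores = D(fake_images.detach(), fake_labels.detach())
    loss_fake = F.binary_cross_entropy_with_logits(
        fake_scores, torch.zeros_like(fake_scores))
    return loss_real + loss_fake

def generator_loss(D, G, z):
    # Eq. (4): the discriminator judges the generated image together with
    # the generated label, so gradients flow into both generator heads.
    fake_images, fake_labels = G(z)
    fake_scores = D(fake_images, fake_labels)
    return F.binary_cross_entropy_with_logits(
        fake_scores, torch.ones_like(fake_scores))
```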

FIGURE 1. Three different GAN formulations: (left) unsupervised GAN modeling $p(I|z)$; (middle) conditional GAN modeling $p(I|z, L)$; (right, ours) joint formulation of GAN modeling $p(I, L|z)$, generating images $I_{fake}$ and labels $L_{fake}$ simultaneously.

The benefits of the joint formulation over the conditional formulation are limited when well-defined labels exist, i.e., labels made carefully by human workers or external oracles. It is well known that modeling a joint distribution is generally more difficult than modeling a conditional distribution due to the increased dimensionality of the probability distribution; the discriminator therefore represents the joint distribution via the lower-dimensional distributions $p(\mathbf{L})$ and $p(\mathbf{I}|\mathbf{L})$. The only difference is how we incorporate the label in the generator. Common choices for imposing a condition on the generator are input or hidden concatenation [7]–[10] and conditional batch normalization [11], [12]. Our joint formulation does not require labels as a condition but actually generates labels from the given input noise along with the images. To do this, we add an additional function approximator as part of the generator (refer to the Experiment section for the choices of label function approximators, and see the sketch below). Since this joint formulation does not use labels as a prior for lowering the dimensionality of the data distribution, it can be more robust to label noise if we can properly model the joint distribution in its original dimensionality. A possible drawback of this joint approach relative to traditional conditional GANs is that we lose controllability of image generation. However, in some scenarios where several labels exist, our formulation and a conditional GAN can be used together to achieve controllability on clean labels and robustness on noisy labels, for example labels generated automatically by another system rather than by an oracle. We explain this in detail in the next section.
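As an illustration of attaching such a function approximator, the sketch below adds a label-generation branch after the last ReLU feature map of an image generator, in the spirit of the label generators described in the Experiment section (dense layers with dropout over a one-hot label space). The `ImageBackbone` stand-in and all layer sizes are our assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ImageBackbone(nn.Module):
    """Toy stand-in for the ResNet-style generator body (illustrative only)."""
    def __init__(self, z_dim, feat_ch, feat_hw):
        super().__init__()
        self.feat_ch, self.feat_hw = feat_ch, feat_hw
        self.fc = nn.Linear(z_dim, feat_ch * feat_hw * feat_hw)

    def forward(self, z):
        x = self.fc(z).view(-1, self.feat_ch, self.feat_hw, self.feat_hw)
        return torch.relu(x)  # plays the role of the last ReLU feature map

class JointGenerator(nn.Module):
    """Generator emitting an image and a label distribution from one noise z."""
    def __init__(self, z_dim=128, feat_ch=64, feat_hw=32, n_classes=10, hidden=128):
        super().__init__()
        self.backbone = ImageBackbone(z_dim, feat_ch, feat_hw)
        self.to_rgb = nn.Sequential(nn.Conv2d(feat_ch, 3, 3, padding=1), nn.Tanh())
        # Label head: pooled features -> dense layers with dropout 0.5,
        # ending in a distribution over one-hot labels.
        self.label_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(feat_ch, hidden), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(hidden, n_classes))

    def forward(self, z):
        feats = self.backbone(z)
        image = self.to_rgb(feats)
        label = torch.softmax(self.label_head(feats), dim=1)
        return image, label
```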

A. Boosting Unsupervised Image Generation

With our joint formulation we can add additional information that depends on the original data as a weak label for the generator. Figure 2 illustrates how output from another classification network $\varphi$ can be added to boost the quality of unsupervised image generation. Typical choices for $\varphi$ are class predictions from other tasks such as ImageNet classification and object detection; unsupervised learning algorithms such as $k$-means clustering or autoencoders [15] can also be used. This is a unique advantage of JGAN over conditional GANs, since the additional information is modeled simultaneously by the generator, and the discriminator uses this generated information as a condition for its decision. As Equation (3) shows, the discriminator actually models the joint distribution with a prior equal to the training label distribution. This additional information can boost the quality of synthesized images since it acts like a weak label for the discriminator. In conventional conditional GANs this is practically impossible, since it is hard to feed $\varphi(I_{fake})$ while generating images. A sketch of the resulting training step follows Figure 2.

FIGURE 2. Enhancing unsupervised image generation with an additional label predictor $\varphi$, which generates weak (or pseudo) labels $L_{real}$. The label generator part of JGAN captures the distribution of $\varphi$. $L_{real}$ and the generated labels $L_{fake}$ are fed into the discriminator in the conventional way during training.
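Following Figure 2, a single training step might then look like the sketch below, reusing the `discriminator_loss` and `generator_loss` helpers sketched in Section II. The frozen predictor `phi` supplies weak labels for real images, while fake labels come from the generator's own label head; all names are illustrative.

```python
import torch

def train_step(D, G, phi, real_images, opt_d, opt_g, z_dim=128):
    z = torch.randn(real_images.size(0), z_dim, device=real_images.device)
    with torch.no_grad():
        weak_labels = phi(real_images)  # pseudo labels L_real for real images
    # Discriminator update: (real image, weak label) vs. generated pairs.
    opt_d.zero_grad()
    d_loss = discriminator_loss(D, G, real_images, weak_labels, z)
    d_loss.backward()
    opt_d.step()
    # Generator update: fool D on the joint (image, label) output.
    opt_g.zero_grad()
    g_loss = generator_loss(D, G, z)
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```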

SECTION III.

Experiment

We used CIFAR10, CIFAR100, and STL for our experiments, resizing STL images to $48\times 48$ from their original $96\times 96$. For all experiments, we fixed the discriminator architecture to isolate the effect of our joint formulation, following the design used by Lucic et al. [6] as the baseline framework. Tables 3 and 4 show the architectures of our generator and discriminator, respectively. We removed batch normalization and applied spectral normalization to all layers of the discriminator. We used one discriminator update per generator update, and all results are evaluated at 100K generator updates, except for STL, where we used 200K generator updates for better convergence. We used the Adam optimizer with $\beta_{1}=0.5$ and $\beta_{2}=0.999$, with a learning rate of 0.0004 for the discriminator and 0.0001 for the generator; a minimal setup sketch is given below. We report the average inception score [4] of the last five epochs over several runs rather than the best score [16].
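The stated optimizer settings amount to the following (a sketch; `G` and `D` denote the generator and discriminator modules):

```python
import torch

opt_d = torch.optim.Adam(D.parameters(), lr=4e-4, betas=(0.5, 0.999))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4, betas=(0.5, 0.999))
# One discriminator update per generator update; 100K generator updates
# in total (200K for STL).
```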

TABLE 1. Inception scores on CIFAR10 and CIFAR100 with different label noise ratios. The joint formulation is more robust than the conditional one at high noise ratios: the conditional formulation has almost no benefit beyond 40% label noise, while the joint formulation still improves (7.86 vs. 8.06 (ours) and 7.86 vs. 7.93 (ours)); the improvement is more noticeable on CIFAR100 (7.74 vs. 8.27 (ours) and 7.74 vs. 8.06 (ours)). (a) CIFAR10. (b) CIFAR100.
TABLE 2. Inception scores and FIDs for unsupervised image generation on CIFAR10, CIFAR100, and STL. Weak labels from the inception pool3 layer are used in training JGAN. Note that the scores of the reference implementation differ slightly due to our reimplementation with architectural changes and different training hyperparameters.
TABLE 3. Generator. $D_{b}=4$ for CIFAR10 and CIFAR100, and $D_{b}=6$ for STL.
TABLE 4. Discriminator. $D_{f}=32$ for CIFAR10 and CIFAR100, and $D_{f}=48$ for STL. Spectral normalization is applied to all layers. For conditional image generation, we used the projection discriminator proposed by Miyato et al. [13].

We first show that our joint formulation is as good as the conditional formulation when modeling image generation with clean labels, and that it is more robust to label noise. For comparison, we used input concatenation [9], [10] and conditional batch normalization [11], [13] for the generator, and the projection discriminator with hinge loss proposed by Miyato and Koyama [13], which gives state-of-the-art results for conditional image generation. To generate labels, we added a function approximator composed of several neural network layers right after the last ReLU layer of the generator in Table 3. Table 5 describes the network architecture of the label generation part of the generator. Dropout [17] with rate 0.5 is applied to all dense layers of the label generator to avoid overfitting. We added label noise by randomly selecting a subset of the entire dataset and applying a random offset to each selected label, as sketched below. Table 1 and Figure 3 summarize how the inception score changes with the amount of label noise. Our joint formulation is competitive on clean labels and remains robust even at high label noise ratios.
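A minimal sketch of this corruption scheme is given below; the exact offset distribution is our assumption, since the text only states that a random offset is applied to each selected label.

```python
import numpy as np

def corrupt_labels(labels, noise_ratio, n_classes, seed=0):
    """Randomly select a fraction of labels and shift each by a random
    non-zero offset modulo the number of classes."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels).copy()
    idx = rng.choice(len(labels), size=int(noise_ratio * len(labels)),
                     replace=False)
    offsets = rng.integers(1, n_classes, size=len(idx))  # offsets in 1..n_classes-1
    labels[idx] = (labels[idx] + offsets) % n_classes
    return labels

# e.g. 40% label noise on CIFAR10:
# noisy_labels = corrupt_labels(train_labels, noise_ratio=0.4, n_classes=10)
```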

TABLE 5. Label generation part of the generator. $D_{r}=32$ for CIFAR and $D_{r}=48$ for STL; $C_{l}=128$ for CIFAR10 and STL, $C_{l}=256$ for CIFAR100; $D_{o}=10$ for CIFAR10 and STL, $D_{o}=100$ for CIFAR100. We used a one-hot vector representation for labels.
FIGURE 3. Inception scores on CIFAR10 and CIFAR100 with different label noise ratios; graphical illustration of Table 1a (left) and Table 1b (right). JGAN is robust to label noise in both cases.

FIGURE 4. Comparison of images generated by (top row) conditional GAN and (bottom row) joint GAN on CIFAR10 with noisy labels: (from left to right) generated images with clean labels (0% noise), 20% noise, and 40% noise. In each sub-figure, rows are class ids and columns are random samples for each class. JGAN shows a better inception score than the conditional GAN in each case (refer to Table 1a).

FIGURE 5. Comparison of (left) real unlabeled STL images, (middle) images generated by an unsupervised GAN, and (right) images generated by our joint GAN with weak labels from an ImageNet classification task, which achieves a better inception score than unsupervised synthesis (refer to Table 2).

Our next experiment focuses on improving unconditional image generation by incorporating additional information. We used the class probabilities of an inception network as the starting point for this additional information, using the same inception network version as [4]. Since it produces a probability distribution over 1000 classes, and it is difficult to find an optimal network architecture to capture such a high-dimensional distribution, we applied truncated singular value decomposition (SVD) to reduce its dimension to 64, then applied a softmax to the output of the truncated SVD to obtain a probability distribution in the lower-dimensional space (see the sketch below). Table 2 summarizes the comparison between unsupervised and joint image generation. We used the same network architecture for both settings except for the additional label function approximator; the label generator differs slightly from the one in Table 5, and Table 6 describes the network for weak label generation. JGAN consistently generates images with better inception scores and FIDs [19] than the unsupervised baselines. We achieved the best unsupervised image generation score on the STL dataset compared to [5] and [20], which reported 9.05 and 9.50, respectively. Note that our baseline implementation already achieves a better result due to a different network architecture and training process, but our joint formulation improves the inception score and FID further.
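A sketch of this weak-label construction is below, assuming scikit-learn's `TruncatedSVD` over the matrix of inception class probabilities; the preprocessing details are our reading of the text, not the authors' exact code.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD

def build_weak_labels(inception_probs, dim=64):
    """Reduce (N, 1000) inception class probabilities to (N, dim) and
    re-normalize each row into a probability distribution via softmax."""
    svd = TruncatedSVD(n_components=dim)
    reduced = svd.fit_transform(inception_probs)
    e = np.exp(reduced - reduced.max(axis=1, keepdims=True))  # stable softmax
    return e / e.sum(axis=1, keepdims=True)
```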

TABLE 6. Label generation part of the generator for weak labels. $D_{r}=32$ for CIFAR and $D_{r}=48$ for STL; $C_{l}=128$ and $D_{o}=64$ in all cases.

SECTION IV.

Conclusion

In this article, we proposed a novel GAN framework that models the joint probability distribution of images and labels. We showed that this joint formulation generates images of the same quality as conventional conditional image generation with clean labels, and remains robust when the labels are noisy. We also applied our method to improve the quality of unconditional image generation by incorporating additional information correlated with the original image data. We believe this joint formulation provides an easy way to feed many kinds of relevant information or weak labels into the GAN framework with a simple modification of the generator. Interesting future work includes finding optimal network architectures for the label generator and testing other methods for generating additional information to use with our joint formulation. Although we used images as our main target domain, we expect our formulation to work in other domains as well.

