I. Introduction
A generative model is capable of synthesizing (or generating) a variety of new samples in a high-dimensional space [1]–[5]. Recently, the generative adversarial network (GAN) [6], [7] has become one of the most popular and successful generative models, able to generate high-quality, realistic images and videos. In contrast to traditional deep neural network (DNN)-based generative models [8]–[11], such as pixel recurrent neural networks (Pixel-RNNs) [12] and the variational autoencoder (VAE) [13], GAN can generate high-quality virtual samples with low computational complexity and does not require a variational lower bound. Moreover, GANs are asymptotically consistent, whereas the VAE carries some bias [14]–[16]. Compared with deep Boltzmann machines (DBMs) [17], GAN needs neither a variational bound nor an intractable partition function. Finally, GAN generates virtual samples in a single forward pass rather than through an iterative procedure such as a Markov chain [18].
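For concreteness, the adversarial training behind these properties is the two-player minimax game introduced in [6], in which a generator $G$ and a discriminator $D$ optimize the value function below ($x$ denotes a real sample drawn from the data distribution $p_{\text{data}}$, and $z$ a latent noise vector drawn from a prior $p_z$; the notation is a standard restatement rather than a result of this paper):
\[
\min_{G} \max_{D} \; V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big].
\]
At the optimum of this game, the distribution of $G(z)$ matches $p_{\text{data}}$, which is why sampling reduces to a single forward pass through $G$.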