
Edge Guided GANs With Multi-Scale Contrastive Learning for Semantic Image Synthesis



Abstract:

We propose a novel edge guided generative adversarial network with contrastive learning (ECGAN) for the challenging semantic image synthesis task. Although the community has achieved considerable improvements recently, the quality of synthesized images is still far from satisfactory due to three largely unresolved challenges. 1) The semantic labels do not provide detailed structural information, making it challenging to synthesize local details and structures. 2) The widely adopted CNN operations, such as convolution, down-sampling, and normalization, usually cause spatial resolution loss and thus cannot fully preserve the original semantic information, leading to semantically inconsistent results (e.g., missing small objects). 3) Existing semantic image synthesis methods focus on modeling "local" semantic information from a single input semantic layout, but ignore "global" semantic information of multiple input semantic layouts, i.e., semantic cross-relations between pixels across different input layouts. To tackle 1), we propose to use the edge as an intermediate representation, which is further adopted to guide image generation via a proposed attention guided edge transfer module. Edge information is produced by a convolutional generator and introduces detailed structure information. To tackle 2), we design an effective module that selectively highlights class-dependent feature maps according to the original semantic layout, preserving the semantic information. To tackle 3), inspired by current methods in contrastive learning, we propose a novel contrastive learning method that enforces pixel embeddings belonging to the same semantic class to generate more similar image content than those from different classes. We further propose a novel multi-scale contrastive learning method that pushes same-class features from different scales closer together, capturing more semantic relations by explicitly exploring the structures...
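As a rough illustration of the multi-scale, class-aware contrastive idea described above, the sketch below is our own illustrative PyTorch code, not the authors' implementation; the function names, temperature value, and pixel subsampling are assumptions. It pulls together embeddings of pixels that share a semantic label, pushes apart embeddings from different classes, and sums the loss over feature maps at several scales.

# Illustrative sketch (not the paper's released code) of a multi-scale,
# class-aware pixel contrastive loss: pixels of the same semantic class
# are treated as positives, pixels of different classes as negatives.
import torch
import torch.nn.functional as F

def pixel_contrastive_loss(feats, labels, temperature=0.07, max_pixels=1024):
    # feats: (B, C, H, W) feature map; labels: (B, H, W) integer class map.
    B, C, H, W = feats.shape
    # Resize the label map to the feature resolution (nearest keeps class ids).
    labels = F.interpolate(labels[:, None].float(), size=(H, W), mode="nearest").long().view(-1)
    emb = F.normalize(feats.permute(0, 2, 3, 1).reshape(-1, C), dim=1)
    # Subsample pixels so the similarity matrix stays tractable.
    idx = torch.randperm(emb.size(0))[:max_pixels]
    emb, labels = emb[idx], labels[idx]
    sim = emb @ emb.t() / temperature                       # (N, N) similarities
    pos_mask = (labels[:, None] == labels[None, :]).float()
    pos_mask.fill_diagonal_(0)                              # exclude self-pairs
    logits = sim - sim.max(dim=1, keepdim=True).values.detach()
    exp_logits = torch.exp(logits)
    denom = exp_logits.sum(dim=1, keepdim=True) - torch.exp(torch.diagonal(logits)).unsqueeze(1)
    log_prob = logits - torch.log(denom + 1e-8)
    # Average log-probability over positive pairs, per anchor with >= 1 positive.
    pos_cnt = pos_mask.sum(dim=1)
    keep = pos_cnt > 0
    loss = -((pos_mask * log_prob).sum(dim=1)[keep] / pos_cnt[keep])
    return loss.mean()

def multi_scale_contrastive_loss(feature_pyramid, labels):
    # Sum the class-aware contrastive loss over feature maps of several scales.
    return sum(pixel_contrastive_loss(f, labels) for f in feature_pyramid)

In such a setup one would typically evaluate this loss on intermediate generator (or auxiliary encoder) features at several resolutions and add it, with a weighting factor, to the adversarial objective.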
Page(s): 14435 - 14452
Date of Publication: 25 July 2023


I. Introduction

Semantic image synthesis refers to generating photo-realistic images conditioned on pixel-level semantic labels. This task has a wide range of applications such as image editing and content generation [1], [2], [3], [4], [5]. Although existing methods have made interesting explorations, we still observe unsatisfactory results, mainly in the generated local structures and details, as well as in small-scale objects, which we believe is mainly due to three reasons: 1) Conventional methods [4], [6], [7] generally take the semantic label map as input directly. However, the input label map only provides structural information between different semantic-class regions and does not contain any structural information within each semantic-class region, making it difficult to synthesize rich local structures within each class. Taking label map S in Fig. 1 as an example, the generator does not have enough structural guidance to produce a realistic bed, window, and curtain from only the input label (S). 2) The classic deep network architectures are constructed by stacking convolutional, down-sampling, normalization, non-linearity, and up-sampling layers, which causes a loss of spatial resolution in the input semantic labels. 3) Existing methods for this task are typically based on global image-level generation. In other words, they accept a semantic layout containing several object classes and aim to generate the appearance of each class with the same network, so all classes are treated equally. However, because different semantic classes have distinct properties, learning a dedicated generation process for each class would intuitively ease the complex generation of multiple classes.
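To make challenge 2) concrete, the sketch below shows one way the original semantic layout could be used to selectively re-weight class-dependent feature channels so that information about small classes is not washed out by repeated convolution and down-sampling. This is our own minimal illustration, not the paper's exact module; the class name SemanticGate and its internals are assumptions.

# Minimal sketch: gate feature channels per pixel according to that pixel's
# semantic class, re-injecting layout information at a given feature scale.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticGate(nn.Module):
    def __init__(self, num_classes, channels):
        super().__init__()
        # One learned weight per (class, channel): which channels matter per class.
        self.class_to_channel = nn.Linear(num_classes, channels)

    def forward(self, feats, label_map):
        # feats: (B, C, H, W); label_map: (B, h, w) with integer class ids.
        B, C, H, W = feats.shape
        onehot = F.one_hot(label_map, self.class_to_channel.in_features).float()  # (B, h, w, K)
        onehot = F.interpolate(onehot.permute(0, 3, 1, 2), size=(H, W), mode="nearest")
        # Per-pixel channel gate derived from the pixel's semantic class.
        gate = torch.sigmoid(
            self.class_to_channel(onehot.permute(0, 2, 3, 1))  # (B, H, W, C)
        ).permute(0, 3, 1, 2)
        return feats + feats * gate  # residual form: highlight, never erase, features

In practice such a gate could be inserted after each down-sampling stage of a generator, so the label map repeatedly restores class information that pooling has blurred away.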

References
[1] Q. Chen and V. Koltun, "Photographic image synthesis with cascaded refinement networks", Proc. IEEE Int. Conf. Comput. Vis., pp. 1520-1529, 2017.
[2] P. Isola, J.-Y. Zhu, T. Zhou and A. A. Efros, "Image-to-image translation with conditional adversarial networks", Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pp. 5967-5976, 2017.
[3] S. Gu, J. Bao, H. Yang, D. Chen, F. Wen and L. Yuan, "Mask-guided portrait editing with conditional GANs", Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pp. 3436-3445, 2019.
[4] X. Liu et al., "Learning to predict layout-to-image conditional convolutions for semantic image synthesis", Proc. Int. Conf. Neural Inf. Process. Syst., pp. 568-578, 2019.
[5] X. Qi, Q. Chen, J. Jia and V. Koltun, "Semi-parametric image synthesis", Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pp. 8808-8816, 2018.
[6] T. Park, M.-Y. Liu, T.-C. Wang and J.-Y. Zhu, "Semantic image synthesis with spatially-adaptive normalization", Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pp. 2337-2346, 2019.
[7] T.-C. Wang, M.-Y. Liu, J.-Y. Zhu, A. Tao, J. Kautz and B. Catanzaro, "High-resolution image synthesis and semantic manipulation with conditional GANs", Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pp. 8798-8807, 2018.
[8] M. Cordts et al., "The Cityscapes dataset for semantic urban scene understanding", Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pp. 3213-3223, 2016.
[9] B. Zhou, H. Zhao, X. Puig, S. Fidler, A. Barriuso and A. Torralba, "Scene parsing through ADE20K dataset", Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pp. 5122-5130, 2017.
[10] H. Caesar, J. Uijlings and V. Ferrari, "COCO-Stuff: Thing and stuff classes in context", Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pp. 1209-1218, 2018.
[11] H. Tang et al., "Edge guided GANs with semantic preserving for semantic image synthesis", Proc. Int. Conf. Learn. Representations, 2023.
[12] I. Goodfellow et al., "Generative adversarial nets", Proc. Int. Conf. Neural Inf. Process. Syst., pp. 2672-2680, 2014.
[13] H. Tang and N. Sebe, "Total generate: Cycle in cycle generative adversarial networks for generating human faces, hands, bodies and natural scenes", IEEE Trans. Multimedia, vol. 24, pp. 2963-2974, 2022.
[14] H. Tang, H. Liu and N. Sebe, "Unified generative adversarial networks for controllable image-to-image translation", IEEE Trans. Image Process., vol. 29, pp. 8916-8929, 2020.
[15] M. Mirza and S. Osindero, "Conditional generative adversarial nets", arXiv:1411.1784, 2014.
[16] H. Tang et al., "Attribute-guided sketch generation", Proc. IEEE 14th Int. Conf. Autom. Face Gesture Recognit., pp. 1-7, 2019.
[17] H. Tang et al., "Expression conditional GAN for facial expression-to-expression translation", Proc. IEEE Int. Conf. Image Process., pp. 4449-4453, 2019.
[18] H. Tang, L. Shao, P. H. Torr and N. Sebe, "Bipartite graph reasoning GANs for person pose and facial image synthesis", Int. J. Comput. Vis., vol. 131, pp. 644-658, 2022.
[19] H. Tang and N. Sebe, "Facial expression translation using landmark guided GANs", IEEE Trans. Affect. Comput., vol. 13, no. 4, pp. 1986-1997, 2022.
[20] H. Tang, S. Bai, L. Zhang, P. H. Torr and N. Sebe, "XingGAN for person image generation", Proc. Eur. Conf. Comput. Vis., pp. 717-734, 2020.
[21] H. Tang, D. Xu, G. Liu, W. Wang, N. Sebe and Y. Yan, "Cycle in cycle generative adversarial networks for keypoint-guided image generation", Proc. ACM Int. Conf. Multimedia, pp. 2052-2060, 2019.
[22] H. Tang, W. Wang, D. Xu, Y. Yan and N. Sebe, "GestureGAN for hand gesture-to-gesture translation in the wild", Proc. ACM Int. Conf. Multimedia, pp. 774-782, 2018.
[23] Z. Xu et al., "Predict, prevent and evaluate: Disentangled text-driven image manipulation empowered by pre-trained vision-language model", Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pp. 18208-18217, 2022.
[24] M. Tao, H. Tang, F. Wu, X.-Y. Jing, B.-K. Bao and C. Xu, "DF-GAN: A simple and effective baseline for text-to-image synthesis", Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pp. 16494-16504, 2022.
[25] M. Tao, B.-K. Bao, H. Tang and C. Xu, "GALIP: Generative adversarial CLIPs for text-to-image synthesis", Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pp. 14214-14223, 2023.
[26] M. Tao, B.-K. Bao, H. Tang, F. Wu, L. Wei and Q. Tian, "DE-Net: Dynamic text-guided image editing adversarial networks", Proc. AAAI Conf. Artif. Intell., pp. 9971-9979, 2023.
[27] H. Tang et al., "Graph Transformer GANs for graph-constrained house generation", Proc. IEEE Conf. Comput. Vis. Pattern Recognit., pp. 2173-2182, 2023.
[28] S. Wu et al., "Cross-view panorama image synthesis with progressive attention GANs", Pattern Recognit., vol. 131, 2022.
[29] H. Tang, L. Shao, P. H. Torr and N. Sebe, "Local and global GANs with semantic-aware upsampling for image generation", IEEE Trans. Pattern Anal. Mach. Intell., vol. 45, no. 1, pp. 768-784, Jan. 2023.
[30] H. Tang, S. Bai and N. Sebe, "Dual attention GANs for semantic image synthesis", Proc. ACM Int. Conf. Multimedia, pp. 1994-2002, 2020.