
FISS GAN: A Generative Adversarial Network for Foggy Image Semantic Segmentation



Abstract:

Because pixel values of foggy images are irregularly higher than those of images captured in normal weather (clear images), it is difficult to extract and express their texture. No method has previously been developed to directly explore the relationship between foggy images and semantic segmentation images. We investigated this relationship and propose a generative adversarial network (GAN) for foggy image semantic segmentation (FISS GAN), which contains two parts: an edge GAN and a semantic segmentation GAN. The edge GAN is designed to generate edge information from foggy images to provide auxiliary information to the semantic segmentation GAN. The semantic segmentation GAN is designed to extract and express the texture of foggy images and generate semantic segmentation images. Experiments on the Foggy Cityscapes and Foggy Driving datasets indicated that FISS GAN achieved state-of-the-art performance.
Published in: IEEE/CAA Journal of Automatica Sinica ( Volume: 8, Issue: 8, August 2021)
Page(s): 1428 - 1439
Date of Publication: 17 June 2021


I. Introduction

Environmental perception plays a vital role in fields such as autonomous driving [1] and robotics [2], and this perception influences the subsequent decisions and control of such devices [3]–[5]. Fog is a common weather condition, and in fog, the pixel values of foggy images are irregularly higher than those of clear images. As a result, foggy images contain less texture than clear images. Many methods already exist for semantic segmentation of clear images; they can extract and express the features of clear images and achieve good semantic segmentation results. However, these methods perform poorly on foggy images, because they cannot efficiently extract and express foggy-image features. Moreover, foggy image data are not sparse, so current excellent work [6], [7] on sparse data cannot be applied. Therefore, to date, researchers have developed two ways to address this problem:
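The two-stage design described in the abstract (an edge GAN whose output is fed, together with the foggy image, to a semantic segmentation GAN) can be sketched as a simple data-flow pipeline. The function bodies below are illustrative placeholders, not the authors' networks; only the staging (foggy image to edge map, then image plus edges to a per-pixel class map) follows the paper.

```python
import numpy as np

def edge_generator(foggy_image: np.ndarray) -> np.ndarray:
    """Stand-in for the edge GAN generator: foggy RGB image -> edge map.

    Placeholder body: a gradient-magnitude proxy for an edge map; the real
    generator is a trained convolutional network.
    """
    gray = foggy_image.mean(axis=-1)          # H x W grayscale
    gy, gx = np.gradient(gray)                # per-axis finite differences
    return np.hypot(gx, gy)                   # H x W edge strength

def segmentation_generator(foggy_image: np.ndarray,
                           edge_map: np.ndarray) -> np.ndarray:
    """Stand-in for the semantic segmentation GAN generator.

    Takes the foggy image plus the auxiliary edge map and returns a
    per-pixel class map. Placeholder body: a dummy all-zeros class map.
    """
    h, w, _ = foggy_image.shape
    return np.zeros((h, w), dtype=np.int64)

def fiss_gan_inference(foggy_image: np.ndarray) -> np.ndarray:
    """Two-stage pipeline: edges first, then segmentation conditioned on them."""
    edges = edge_generator(foggy_image)
    return segmentation_generator(foggy_image, edges)

fake_foggy = np.random.rand(64, 128, 3)       # H x W x 3 image in [0, 1]
seg = fiss_gan_inference(fake_foggy)
print(seg.shape)                               # (64, 128)
```

The point of the staging is that edge structure, which survives fog better than fine texture, is recovered explicitly before segmentation rather than left for the segmentation network to infer implicitly.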

References
1.
L. Chen, W. J. Zhan, W. Tian, Y. H. He and Q. Zou, "Deep integration: A multi-label architecture for road scene recognition", IEEE Trans. Image Process., vol. 28, no. 10, pp. 4883-4898, Oct. 2019.
2.
K. Wada, K. Okada and M. Inaba, "Joint learning of instance and semantic segmentation for robotic pick-and-place with heavy occlusions in clutter", Proc. IEEE Int. Conf. Robotics and Autom., pp. 9558-9564, 2019.
3.
Y. C. Ouyang, L. Dong, L. Xue and C. Y. Sun, "Adaptive control based on neural networks for an uncertain 2-DOF helicopter system with input deadzone and output constraints", IEEE/CAA J. Autom. Sinica, vol. 6, no. 3, pp. 807-815, May 2019.
4.
Y. H. Luo, S. N. Zhao, D. S. Yang and H. W. Zhang, "A new robust adaptive neural network backstepping control for single machine infinite power system with TCSC", IEEE/CAA J. Autom. Sinica, vol. 7, no. 1, pp. 48-56, Jan. 2020.
5.
N. Zerari, M. Chemachema and N. Essounbouli, "Neural network based adaptive tracking control for a class of pure feedback nonlinear systems with input saturation", IEEE/CAA J. Autom. Sinica, vol. 6, no. 1, pp. 278-290, Jan. 2019.
6.
D. Wu and X. Luo, "Robust latent factor analysis for precise representation of high-dimensional and sparse data", IEEE/CAA J. Autom. Sinica, pp. 766-805, Dec. 2019.
7.
X. Luo, Y. Yuan, S. L. Chen, N. Y. Zeng and Z. D. Wang, "Position-transitional particle swarm optimization-incorporated latent factor analysis", IEEE Trans. Knowl. Data Eng., pp. 1-1, Oct. 2019.
8.
A. Cantor, "Optics of the atmosphere: Scattering by molecules and particles", IEEE J. Quantum Electron., vol. 14, no. 9, pp. 698-699, Sept. 1978.
9.
S. G. Narasimhan and S. K. Nayar, "Vision and the atmosphere", Int. J. Comput. Vision, vol. 48, no. 3, pp. 233-254, Jul. 2002.
10.
P. Luc, C. Couprie, S. Chintala and J. Verbeek, "Semantic segmentation using adversarial networks", Dec. 2016.
11.
A. Arnab, S. Jayasumana, S. Zheng and P. H. S. Torr, "Higher order conditional random fields in deep neural networks", Proc. European Conf. Computer Vision, pp. 524-540, 2016.
12.
P. Isola, J. Y. Zhu, T. H. Zhou and A. Efros, "Image-to-image translation with conditional adversarial networks", Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 1125-1134, 2017.
13.
J. Hoffman, E. Tzeng, T. Park, Y. J. Zhu, P. Isola, K. Saenko, et al., "Cycada: Cycle-consistent adversarial domain adaptation", Proc. 35th Int. Conf. Machine Learning, pp. 1989-1998, 2018.
14.
K. Nazeri, E. Ng, T. Joseph, F. Z. Qureshi and M. Ebrahimi, "Edgeconnect: Generative image inpainting with adversarial edge learning", Jan. 2019.
15.
O. Ronneberger, P. Fischer and T. Brox, "U-net: Convolutional networks for biomedical image segmentation", Proc. Int. Conf. Medical Image Computing and Computer-assisted Intervention, pp. 234-241, 2015.
16.
J. Y. Kim, L. S. Kim and S. H. Hwang, "An advanced contrast enhancement using partially overlapped sub-block histogram equalization", Proc. IEEE Int. Symposium on Circuits and Systems, pp. 537-540, 2000.
17.
A. Eriksson, G. Capi and K. Doya, "Evolution of meta-parameters in reinforcement learning algorithm", Proc. IEEE/RSJ Int. Conf. Intelligent Robots and Systems, pp. 412-417, 2003.
18.
M. J. Seow and V. K. Asari, "Ratio rule and homomorphic filter for enhancement of digital colour image", Neurocomputing, vol. 69, no. 7–9, pp. 954-958, Mar. 2006.
19.
S. Shwartz, E. Namer and Y. Y. Schechner, "Blind haze separation", Proc. IEEE Int. Conf. Computer Vision and Pattern Recognition, pp. 1984-1991, 2006.
20.
Y. Y. Schechner and Y. Averbuch, "Regularized image recovery in scattering media", IEEE Trans. Pattern Anal., vol. 29, no. 9, pp. 1655-1660, Sept. 2007.
21.
R. T. Tan, "Visibility in bad weather from a single image", Proc. IEEE Int. Conf. Computer Vision and Pattern Recognition, pp. 1-8, 2008.
22.
R. Fattal, "Single image dehazing", ACM Trans. Graph., vol. 27, no. 3, pp. 1-9, Aug. 2008.
23.
K. M. He, J. Sun and X. O. Tang, "Single image haze removal using dark channel prior", IEEE Trans. Pattern Anal., vol. 33, no. 12, pp. 2341-2353, Dec. 2011.
24.
K. B. Gibson and T. Q. Nguyen, "On the effectiveness of the dark channel prior for single image dehazing by approximating with minimum volume ellipsoids", Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing, pp. 1253-1256, 2011.
25.
D. F. Shi, B. Li, W. Ding and Q. M. Chen, "Haze removal and enhancement using transmittance-dark channel prior based on object spectral characteristic", Acta Autom. Sinica, vol. 39, no. 12, pp. 2064-2070, Dec. 2013.
26.
S. G. Narasimhan and S. K. Nayar, "Interactive (de) weathering of an image using physical models", Proc. IEEE Workshop on Color and Photometric Methods in Computer Vision, vol. 6, no. 4, Jan. 2003.
27.
S. G. Narasimhan and S. K. Nayar, "Chromatic framework for vision in bad weather", Proc. IEEE Int. Conf. Computer Vision and Pattern Recognition, pp. 598-605, 2000.
28.
J. Tarel and N. Hautière, "Fast visibility restoration from a single color or gray level image", Proc. 12th IEEE Int. Conf. Computer Vision, pp. 2201-2208, 2009.
29.
H. Zhang, V. Sindagi and V. M. Patel, "Joint transmission map estimation and dehazing using deep networks", IEEE Trans. Circuits Syst. Video Technol., vol. 30, no. 7, Jul. 2020.
30.
W. Q. Ren, S. Liu, H. Zhang, J. S. Pan, X. C. Cao and M. H. Yang, "Single image dehazing via multi-scale convolutional neural networks", Proc. European Conf. Computer Vision, pp. 154-169, 2016.
