Fully End-to-End Learning Based Conditional Boundary Equilibrium GAN with Receptive Field Sizes Enlarged for Single Ultra-High Resolution Image Dehazing


Abstract:

A receptive field is defined as the region in the input image space that an output image pixel is looking at. Thus, the receptive field size influences the learning of deep convolutional neural networks. In particular, for single image dehazing, larger receptive fields often yield more effective dehazing by considering the brightness and color of the entire input hazy image without additional information (e.g., a scene transmission map, depth map, or atmospheric light). A conventional generative adversarial network (GAN) with small receptive fields is not effective for hazy images of ultra-high resolution. Thus, we propose a fully end-to-end learning based conditional boundary equilibrium generative adversarial network (BEGAN) with the receptive field sizes enlarged for single image dehazing. In our conditional BEGAN, the discriminator is trained on ultra-high resolution images conditioned on downscaled input hazy images, so that the haze can be removed effectively while the original structures of the images are stably preserved. From this, we obtain high PSNR performance (Track 1 - Indoor: ranked 4th) and fast computation speeds. We also combine an L1 loss, a perceptual loss, and a GAN loss as the generator's loss of the proposed conditional BEGAN, which allows us to obtain stable dehazing results for various hazy images.
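To make the loss combination described in the abstract concrete, the following PyTorch-style sketch sums an L1 reconstruction term, a VGG-feature perceptual term, and a BEGAN-style adversarial term into a single generator loss. The module names (`vgg_features`, `discriminator`) and the weighting factors are illustrative assumptions, not the authors' implementation or hyperparameters.

```python
import torch.nn.functional as F

def generator_loss(dehazed, clean, vgg_features, discriminator,
                   lambda_l1=1.0, lambda_perc=0.1, lambda_adv=0.01):
    """Hypothetical combined generator loss: L1 + perceptual + adversarial.

    `vgg_features` is assumed to be a frozen feature extractor (e.g. a
    pretrained VGG truncated at some layer); `discriminator` is assumed to
    be a BEGAN-style autoencoder critic. Weights are illustrative only.
    """
    # Pixel-wise L1 loss between the dehazed output and the clean target.
    l1 = F.l1_loss(dehazed, clean)

    # Perceptual loss: L1 distance in a pretrained feature space.
    perc = F.l1_loss(vgg_features(dehazed), vgg_features(clean))

    # Adversarial term: in BEGAN the critic is an autoencoder, so the
    # generator is pushed to minimize the critic's reconstruction error
    # on generated samples.
    recon = discriminator(dehazed)
    adv = F.l1_loss(recon, dehazed)

    return lambda_l1 * l1 + lambda_perc * perc + lambda_adv * adv
```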
Date of Conference: 18-22 June 2018
Date Added to IEEE Xplore: 16 December 2018
Conference Location: Salt Lake City, UT, USA

1. Introduction

Images are often captured under bad weather conditions, which results in degraded images with many regions obscured by fog, mist, haze, etc. In particular, hazy images not only have lower aesthetic value but also cause significant performance degradation in object recognition. Thus, dehazing is an essential preprocessing step for both aesthetic photography and computer vision applications. In general, the formation of a hazy image can be modeled as\begin{equation*} I(x)=J(x)t(x)+A(1-t(x)) \tag{1} \end{equation*} where $I(x)$ and $J(x)$ are the input hazy image and the clean image, respectively, $A$ is the global atmospheric light, and $t(x)$ is the transmission ratio, i.e., the portion of light that reaches the camera sensor. As a result, haze removal using only a single degraded hazy image is a very challenging and ill-posed problem. Conventional haze removal methods estimate the global atmospheric light and the transmission ratio, and then remove the haze using the estimated parameters of (1) [1]–[4]. However, this approach does not directly optimize the perceptual quality of the generated dehazed images. Moreover, inaccuracies in the estimated parameters can lead to severe distortions or to poor haze removal performance. Instead, deep-learning-based convolutional neural networks can effectively remove image haze via fully end-to-end learning. For effective fully end-to-end learning, the network must be able to understand the characteristics of the entire hazy image. In particular, when the resolution of the hazy images is very high, training haze removal networks with small receptive field sizes becomes difficult, since such networks cannot consider the properties of the entire hazy image.
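For reference, a minimal sketch of the classical, parameter-estimation route implied by Eq. (1): once estimates of the atmospheric light A and the transmission map t(x) are available, the clean image is recovered by inverting the model, J(x) = (I(x) - A) / t(x) + A. The estimator calls in the usage comment are hypothetical placeholders; the paper's point is precisely that its end-to-end network avoids this estimation step.

```python
import numpy as np

def invert_haze_model(I, A, t, t_min=0.1):
    """Recover J from I = J*t + A*(1 - t) by inverting Eq. (1).

    I : hazy image, float array in [0, 1], shape (H, W, 3)
    A : estimated global atmospheric light, shape (3,)
    t : estimated transmission map, shape (H, W)
    t_min clips the transmission to avoid amplifying noise where t is near 0.
    """
    t = np.clip(t, t_min, 1.0)[..., None]   # broadcast over color channels
    J = (I - A) / t + A
    return np.clip(J, 0.0, 1.0)

# Illustrative usage with placeholder estimators (not from the paper):
# A = estimate_atmospheric_light(I)   # e.g. from the brightest dark-channel pixels
# t = estimate_transmission(I, A)     # e.g. via a dark channel prior
# J = invert_haze_model(I, A, t)
```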

Cites in Papers - IEEE (19)

1.
Tao Yan, Xiangjie Zhu, Xianglong Chen, Weijiang He, Chenglong Wang, Yang Yang, Yinghui Wang, Xiaojun Chang, "GLGFN: Global-Local Grafting Fusion Network for High-Resolution Image Deraining", IEEE Transactions on Circuits and Systems for Video Technology, vol.34, no.11, pp.10860-10873, 2024.
2.
Shijie Chen, Mohammad Mahdizadeh, Chong Yu, Jiayuan Fan, Tao Chen, "Through the Real World Haze Scenes: Navigating the Synthetic-to-Real Gap in Challenging Image Dehazing", 2024 IEEE International Conference on Robotics and Automation (ICRA), pp.7265-7272, 2024.
3.
Juan Wang, Guanhai Chen, Sheng Wang, Hao Yang, Ye Cao, Yonggang Ye, "Dehazing Algorithm for UAV Image Based on Smooth Dilated Convolution", 2023 8th International Conference on Communication, Image and Signal Processing (CCISP), pp.301-306, 2023.
4.
Bilel Benjdira, Anas M. Ali, Anis Koubaa, "Streamlined Global and Local Features Combinator (SGLC) for High Resolution Image Dehazing", 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp.1855-1864, 2023.
5.
Pierre Duthon, Nadav Edelstein, Efi Zelentzer, Frédéric Bernardin, "Quadsight® Vision System in Adverse Weather Maximizing the benefits of visible and thermal cameras", 2022 12th International Conference on Pattern Recognition Systems (ICPRS), pp.1-6, 2022.
6.
Kai Chen, Juping Liu, Chuheng Chen, Zhe Wang, Mingye Ju, "Contrast Restoration of Hazy Image in HSV Space", 2021 13th International Conference on Wireless Communications and Signal Processing (WCSP), pp.1-5, 2021.
7.
Hayat Ullah, Khan Muhammad, Muhammad Irfan, Saeed Anwar, Muhammad Sajjad, Ali Shariq Imran, Victor Hugo C. de Albuquerque, "Light-DehazeNet: A Novel Lightweight CNN Architecture for Single Image Dehazing", IEEE Transactions on Image Processing, vol.30, pp.8968-8982, 2021.
8.
Hendry, Daniel Herman Fredy Manongga, Yessica Nataliani, Theophilus Wellem, "Anti-Counterfeit Handwritten Signature via DCGAN with SGPD Network", 2021 7th International Conference on Applied System Innovation (ICASI), pp.79-84, 2021.
9.
Michal Uřičář, Ganesh Sistu, Lucie Yahiaoui, Senthil Yogamani, "Ensemble-Based Semi-Supervised Learning to Improve Noisy Soiling Annotations in Autonomous Driving", 2021 IEEE International Intelligent Transportation Systems Conference (ITSC), pp.2925-2930, 2021.
10.
Michal Uřičář, Ganesh Sistu, Hazem Rashed, Antonín Vobecký, Varun Ravi Kumar, Pavel Křížek, Fabian Bürger, Senthil Yogamani, "Let’s Get Dirty: GAN Based Data Augmentation for Camera Lens Soiling Detection in Autonomous Driving", 2021 IEEE Winter Conference on Applications of Computer Vision (WACV), pp.766-775, 2021.
11.
Dae-Kwan Ko, Dong-Han Lee, Soo-Chul Lim, "Continuous Image Generation From Low-Update-Rate Images and Physical Sensors Through a Conditional GAN for Robot Teleoperation", IEEE Transactions on Industrial Informatics, vol.17, no.3, pp.1978-1986, 2021.
12.
Nasir Baig, Muhammad Mohsin Riaz, Arjmand Fatima, Syed Sohaib Ali, Abdul Ghafoor, Adil Masood Siddiqui, "Image Dehazing using Dark and Bright Channel Priors and Multi-scale Filters", 2020 14th International Conference on Open Source Systems and Technologies (ICOSST), pp.1-5, 2020.
13.
Arindam Das, Pavel Křížek, Ganesh Sistu, Fabian Bürger, Sankaralingam Madasamy, Michal Uřičář, Varun Ravi Kumar, Senthil Yogamani, "TiledSoilingNet: Tile-level Soiling Detection on Automotive Surround-view Cameras Using Coverage Metric", 2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC), pp.1-6, 2020.
14.
Mario Bijelic, Tobias Gruber, Fahim Mannan, Florian Kraus, Werner Ritter, Klaus Dietmayer, Felix Heide, "Seeing Through Fog Without Seeing Fog: Deep Multimodal Sensor Fusion in Unseen Adverse Weather", 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp.11679-11689, 2020.
15.
Michal Uřičář, Pavel Křížek, Ganesh Sistu, Senthil Yogamani, "SoilingNet: Soiling Detection on Automotive Surround-View Cameras", 2019 IEEE Intelligent Transportation Systems Conference (ITSC), pp.67-72, 2019.
16.
Michal Uřićář, Jan Ulićný, Ganesh Sistu, Hazem Rashed, Pavel Křížek, David Hurych, Antonín Vobecký, Senthil Yogamani, "Desoiling Dataset: Restoring Soiled Areas on Automotive Fisheye Cameras", 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), pp.4273-4279, 2019.
17.
Codruta O. Ancuti, Cosmin Ancuti, Radu Timofte, Luc Van Gool, Lei Zhang, Ming-Hsuan Yang, Tiantong Guo, Xuelu Li, Venkateswararao Cherukuri, Vishal Monga, Hao Jiang, Siyuan Yang, Yan Liu, Xiaochao Qu, Pengfei Wan, Dongwon Park, Se Young Chun, Ming Hong, Jinying Huang, Yizi Chen, Shuxin Chen, Bomin Wang, Pablo Navarrete Michelini, Hanwen Liu, Dan Zhu, Jing Liu, Sanchayan Santra, Ranjan Mondal, Bhabatosh Chanda, Peter Morales, Tzofi Klinghoffer, Le Manh Quan, Yong-Guk Kim, Xiao Liang, Runde Li, Jinshan Pan, Jinhui Tang, Kuldeep Purohit, Maitreya Suin, A.N. Rajagopalan, Raimondo Schettini, Simone Bianco, Flavio Piccoli, C. Cusano, Luigi Celona, Sunhee Hwang, Yu Seung Ma, Hyeran Byun, Subrahmanyam Murala, Akshay Dudhane, Harsh Aulakh, Tianxiang Zheng, Tao Zhang, Weining Qin, Runnan Zhou, Shanhu Wang, Jean-Philippe Tarel, Chuansheng Wang, Jiawei Wu, "NTIRE 2019 Image Dehazing Challenge Report", 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp.2241-2253, 2019.
18.
Radu Timofte, Shuhang Gu, Jiqing Wu, Luc Van Gool, Lei Zhang, Ming-Hsuan Yang, Muhammad Haris, Greg Shakhnarovich, Norimichi Ukita, Shijia Hu, Yijie Bei, Zheng Hui, Xiao Jiang, Yanan Gu, Jie Liu, Yifan Wang, Federico Perazzi, Brian McWilliams, Alexander Sorkine-Hornung, Olga Sorkine-Hornung, Christopher Schroers, Jiahui Yu, Yuchen Fan, Jianchao Yang, Ning Xu, Zhaowen Wang, Xinchao Wang, Thomas S. Huang, Xintao Wang, Ke Yu, Tak-Wai Hui, Chao Dong, Liang Lin, Chen Change Loy, Dongwon Park, Kwanyoung Kim, Se Young Chun, Kai Zhang, Pengjv Liu, Wangmeng Zuo, Shi Guo, Jiye Liu, Jinchang Xu, Yijiao Liu, Fengye Xiong, Yuan Dong, Hongliang Bai, Alexandru Damian, Nikhil Ravi, Sachit Menon, Cynthia Rudin, Junghoon Seo, Taegyun Jeon, Jamyoung Koo, Seunghyun Jeon, Soo Ye Kim, Jae-Seok Choi, Sehwan Ki, Soomin Seo, Hyeonjun Sim, Saehun Kim, Munchurl Kim, Rong Chen, Kun Zeng, Jinkang Guo, Yanyun Qu, Cuihua Li, Namhyuk Ahn, Byungkon Kang, Kyung-Ah Sohn, Yuan Yuan, Jiawei Zhang, Jiahao Pang, Xiangyu Xu, Yan Zhao, Wei Deng, Sibt Ul Hussain, Muneeb Aadil, Rafia Rahim, Xiaowang Cai, Fang Huang, Yueshu Xu, Pablo Navarrete Michelini, Dan Zhu, Hanwen Liu, Jun-Hyuk Kim, Jong-Seok Lee, Yiwen Huang, Ming Qiu, Liting Jing, Jiehang Zeng, Ying Wang, Manoj Sharma, Rudrabha Mukhopadhyay, Avinash Upadhyay, Sriharsha Koundinya, Ankit Shukla, Santanu Chaudhury, Zhe Zhang, Yu Hen Hu, Lingzhi Fu, "NTIRE 2018 Challenge on Single Image Super-Resolution: Methods and Results", 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp.965-96511, 2018.
19.
Cosmin Ancuti, Codruta O. Ancuti, Radu Timofte, Luc Van Gool, Lei Zhang, Ming-Hsuan Yang, Vishal M. Patel, He Zhang, Vishwanath A. Sindagi, Ruhao Zhao, Xiaoping Ma, Yong Qin, Limin Jia, Klaus Friedel, Sehwan Ki, Hyeonjun Sim, Jae-Seok Choi, Sooye Kim, Soomin Seo, Saehun Kim, Munchurl Kim, Ranjan Mondal, Sanchayan Santra, Bhabatosh Chanda, Jinlin Liu, Kangfu Mei, Juncheng Li, Luyao, Faming Fang, Aiwen Jiang, Xiaochao Qu, Ting Liu, Pengfei Wang, Biao Sun, Jiangfan Deng, Yuhang Zhao, Ming Hong, Jingying Huang, Yizhi Chen, Erin Chen, Xiaoli Yu, Tingting Wu, Anil Genc, Deniz Engin, Hazim Kemal Ekenel, Wenzhe Liu, Tong Tong, Gen Li, Qinquan Gao, Zhan Li, Daofa Tang, Yuling Chen, Ziying Huo, Aitor Alvarez-Gila, Adrian Galdran, Alessandro Bria, Javier Vazquez-Corral, Marcelo Bertalmo, H. Seckin Demir, Omer Faruk Adil, Huynh Xuan Phung, Xin Jin, Jiale Chen, Chaowei Shan, Zhibo Chen, "NTIRE 2018 Challenge on Image Dehazing: Methods and Results", 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp.1004-100410, 2018.

Cites in Papers - Other Publishers (2)

1.
Bo Zhao, Han Wu, Zhiyang Ma, Huini Fu, Wenqi Ren, Guizhong Liu, "Nighttime Image Dehazing Based on Multi-Scale Gated Fusion Network", Electronics, vol.11, no.22, pp.3723, 2022.
2.
Etienne de Stoutz, Andrey Ignatov, Nikolay Kobyshev, Radu Timofte, Luc Van Gool, "Fast Perceptual Image Enhancement", Computer Vision – ECCV 2018 Workshops, vol.11133, pp.260, 2019.
