I. Introduction
During foggy weather conditions, the visibility of outdoor scenes is degraded by suspended particles such as smoke, fine dust, and water droplets. As a result, images captured in foggy weather have low quality, low contrast, and a loss of image information. Hence, single-image dehazing is a fundamental module in computer vision applications and surveillance systems. From the physics-based haze model [1], the hazy image can be defined as \begin{equation*} I(z)=J(z)\,t(z)+(1-t(z))A \tag {1}\end{equation*}
Here $J(z)$ is the haze-free image at pixel location z; A and $t(z)$ are the atmospheric light component and the transmission map, respectively. The transmission map attenuates the haze-free scene exponentially with depth and is described as $t(z)=e^{-\beta d(z)}$, where $\beta$ is the atmospheric scattering coefficient and $d(z)$ is the distance between the scene point and the camera. One way to recover the clear image from a foggy input is to rearrange Eq. (1) as
\begin{equation*} J(z)=\frac {I(z)-A}{t(z)}+A \tag {2}\end{equation*}
where $J(z)$ is the haze-free image, which can be recovered from the foggy image $I(z)$, the atmospheric light A, and the transmission map $t(z)$; z indicates the pixel location. Most image dehazing techniques [2], [3], [4] restore the original haze-free image using Eq. (2) by estimating the atmospheric light A and the transmission map $t(z)$. Dehazing is an ill-posed problem, since only $I(z)$ is available, while the ambient light A and the transmission map $t(z)$ are unknown. The main idea of this method is to combine the two parameters A and $t(z)$ into a single parameter $k(z)$ to enhance the image quality. Then Eq. (2) can be remodeled as
\begin{align*} J(z)& =k(z)\cdot I(z)+A \tag {3}\\ \text {where}~~k(z)& =\frac {1}{t(z)}\left ({1-\frac {A}{I(z)}}\right) \tag {4}\end{align*}
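To make Eqs. (1)-(4) concrete, the following minimal NumPy sketch synthesizes a hazy image from a clear scene and a depth map via Eq. (1), restores it by the direct inversion of Eq. (2), and verifies that the combined-coefficient form of Eqs. (3)-(4) gives the same result. The values chosen for $J$, $d$, $A$, and $\beta$ are illustrative assumptions, not values taken from this work.

```python
import numpy as np

# Illustrative inputs (assumptions): a small "clear" scene J, a depth map d,
# global atmospheric light A, and scattering coefficient beta.
rng = np.random.default_rng(0)
J = rng.uniform(0.2, 0.9, size=(4, 4))   # haze-free scene radiance
d = rng.uniform(1.0, 5.0, size=(4, 4))   # scene depth (arbitrary units)
A, beta = 0.85, 0.4                      # atmospheric light, scattering coefficient

# Eq. (1) with t(z) = exp(-beta * d(z)): synthesize the hazy observation I(z).
t = np.exp(-beta * d)
I = J * t + (1.0 - t) * A

# Eq. (2): direct inversion given A and t(z).
J_direct = (I - A) / t + A

# Eqs. (3)-(4): the same recovery through the combined coefficient k(z).
k = (1.0 / t) * (1.0 - A / I)
J_combined = k * I + A

print(np.allclose(J_direct, J))     # True: Eq. (2) exactly inverts Eq. (1)
print(np.allclose(J_combined, J))   # True: the k(z) form is algebraically equivalent
```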
We can also recover the dehazed image using Eq. (3), where $k(z)$ is a coefficient that integrates the two parameters A and $t(z)$. Using deep learning models, the dehazed image can be restored from feature maps extracted at various stages of the model. Hence, Eq. (3) can be represented as
\begin{align*} J(z)& =k_{1}(z)\cdot I_{1}(z)+k_{2}(z)\cdot I_{2}(z)+k_{3}(z)\cdot I_{3}(z) \\ & \quad +\ldots +k_{n}(z)\cdot I_{n}(z)+k_{c}\cdot A_{c} \tag {5}\\ J(z)& =\sum \limits _{n_{z}=1}^{n} k_{n_{z}}(z)\cdot I_{n_{z}}(z)+k_{c}\cdot A_{c} \tag {6}\end{align*}
where $I_{1}(z), I_{2}(z),\ldots, I_{n}(z)$ are hazy image feature maps, $A_{c}$ is the atmospheric light component of the dehazed image, $k_{1}(z), k_{2}(z),\ldots, k_{n}(z)$ are the coefficients of the corresponding hazy image feature maps, and $k_{c}$ is the enhancing atmospheric coefficient of the light distribution $A_{c}$. By integrating all the feature maps extracted from the hazy image with the atmospheric light component, we can reconstruct the original haze-free image (an illustrative sketch of this composition is given at the end of this section).

In the past decade, many prior-based dehazing techniques [5], [6], [7] have been introduced to remove haze from an image, or to produce a haze-free image, by extracting the ambient light A and the transmission map $t(z)$. Some prior-based techniques perform well in certain situations but cannot adapt to all circumstances. Hence, they artificially boost the contrast and cause unsightly artifacts such as halos and color deviations in the haze-free images. Moreover, unreliable results are obtained due to inaccurate estimation of the transmission map $t(z)$ and the ambient light A. Fig. 1 shows a real-world hazy image and the corresponding dehazing results of various state-of-the-art techniques in Fig. 1(a)-(d). The dehazed image of the dark channel prior method, shown in Fig. 1(b), introduces artifacts and excessive color saturation. To address the challenges of prior-based methods and to improve the quality of dehazing results, learning-based techniques [9], [10], [11], [12], [13] have been implemented. These methods estimate accurate transmission maps, extract feature maps from the hazy images, and recover the haze-free images. Even though learning-based techniques [14], [15] do not introduce artifacts and color deviations, they require more extensive training data and are still unable to remove haze completely to produce clear images, as shown in Fig. 1(c). To resolve the above issues, GAN-based dehazing methods [16], [17], [18], [19] have been implemented, which remove haze completely from hazy images without introducing artifacts or color saturation. DehazeGAN [20] was the first dehazing GAN technique, implemented using a CNN as both generator and discriminator. Next, prior-based GAN dehazing techniques [17], [21] were suggested. Later, CycleGAN dehazing techniques [18], [22] were implemented using a cycle-consistency loss on unpaired image data. All these approaches produce better results than the prior-based and deep learning-based dehazing techniques; however, they ultimately result in a blurred or distorted effect in heavy-haze regions. To tackle this issue, self-attention GAN [19], [23] based image dehazing algorithms have been proposed, which enable an attention mechanism for dehazing and give superior results, but the model becomes more complex due to the additional attention modules.
Fig. 1. Comparison of the visual quality of various state-of-the-art methods on a real-world hazy image.
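As referenced above, the following is a minimal, hypothetical PyTorch sketch of the composition in Eqs. (5)-(6): a few convolutional stages produce feature maps $I_{n}(z)$ and pixel-wise coefficient maps $k_{n}(z)$ from the hazy input, and the restored image is their weighted sum plus a learned atmospheric term $k_{c}\cdot A_{c}$. The module structure, layer widths, and the learned $A_{c}$ parameter are illustrative assumptions, not the architecture proposed in this paper.

```python
import torch
import torch.nn as nn

class FeatureMapDehazer(nn.Module):
    """Hypothetical sketch of Eq. (6): J(z) = sum_n k_n(z) * I_n(z) + k_c * A_c."""

    def __init__(self, stages: int = 3, width: int = 16):
        super().__init__()
        # Backbone convolutions: each stage refines the previous representation.
        self.feature_convs = nn.ModuleList(
            [nn.Conv2d(3 if i == 0 else width, width, 3, padding=1) for i in range(stages)]
        )
        # Per stage: project to a 3-channel feature map I_n and a coefficient map k_n.
        self.to_feature = nn.ModuleList([nn.Conv2d(width, 3, 1) for _ in range(stages)])
        self.to_coeff = nn.ModuleList([nn.Conv2d(width, 3, 1) for _ in range(stages)])
        # Learned atmospheric term A_c (per channel) and its coefficient k_c.
        self.A_c = nn.Parameter(torch.full((1, 3, 1, 1), 0.8))
        self.k_c = nn.Parameter(torch.ones(1))
        self.act = nn.ReLU(inplace=True)

    def forward(self, hazy: torch.Tensor) -> torch.Tensor:
        x = hazy
        restored = torch.zeros_like(hazy)
        for conv, to_f, to_k in zip(self.feature_convs, self.to_feature, self.to_coeff):
            x = self.act(conv(x))            # intermediate representation at this stage
            I_n = to_f(x)                    # hazy-image feature map I_n(z)
            k_n = to_k(x)                    # pixel-wise coefficient map k_n(z)
            restored = restored + k_n * I_n  # accumulate k_n(z) * I_n(z), as in Eq. (6)
        return restored + self.k_c * self.A_c  # add the atmospheric term k_c * A_c

# Quick shape check on a random "hazy" batch.
if __name__ == "__main__":
    model = FeatureMapDehazer()
    out = model(torch.rand(1, 3, 64, 64))
    print(out.shape)  # torch.Size([1, 3, 64, 64])
```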