1. Introduction
Single image dehazing is a classic yet still active research topic in low-level computer vision, which aims to restore clean images from their degraded hazy counterparts. Recently, many deep learning approaches [5], [10], [14], [22], [25], [26], [31], [35], [45], [49], [50] have been proposed to address this problem by training a neural network to approximate the mapping from hazy images to haze-free ground truths. As more and more dehazing datasets have been released, such as RESIDE [23], O-Haze [3] and NH-Haze [2], these methods have demonstrated an outstanding ability to handle different haze patterns. However, one important issue remains overlooked: handling different types of hazy images with a single network. Specifically, current methods are usually trained on the training split of a particular dataset and tested on the corresponding testing split. For example, the accuracy on the RESIDE indoor test set [23] is obtained by evaluating a dehazing model trained on the RESIDE indoor training set. Such an evaluation strategy allows the neural network to focus on a specific domain, but sidesteps the important problem of learning a general model across datasets. A seemingly simple remedy is to train a single dehazing model on all available datasets jointly. Intuitively, with more data, the network can benefit from observing more kinds of haze patterns, leading to boosted performance on every single dataset [1].
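To make the joint-training remedy above concrete, the following is a minimal sketch, assuming a PyTorch setup: one dehazing model is optimized over the concatenation of several dataset training splits. The HazePairs class and the small convolutional network are placeholders for illustration only (not any of the cited methods); in practice each dataset instance would load the actual hazy/clean image pairs of the corresponding benchmark from disk.

```python
import torch
from torch import nn
from torch.utils.data import Dataset, ConcatDataset, DataLoader

class HazePairs(Dataset):
    """Placeholder paired dataset: each item is a (hazy, clean) image tensor pair."""
    def __init__(self, num_pairs):
        self.num_pairs = num_pairs
    def __len__(self):
        return self.num_pairs
    def __getitem__(self, idx):
        hazy = torch.rand(3, 64, 64)    # stand-in for a hazy crop
        clean = torch.rand(3, 64, 64)   # stand-in for its haze-free ground truth
        return hazy, clean

# One stand-in dataset per benchmark (e.g., RESIDE indoor, O-Haze, NH-Haze);
# joint training simply draws mini-batches from their concatenation.
joint_set = ConcatDataset([HazePairs(1000), HazePairs(45), HazePairs(45)])
loader = DataLoader(joint_set, batch_size=8, shuffle=True)

# Toy dehazing network standing in for any of the cited architectures.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, 3, 3, padding=1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.L1Loss()

for hazy, clean in loader:
    optimizer.zero_grad()
    loss = criterion(model(hazy), clean)   # learn the hazy -> haze-free mapping
    loss.backward()
    optimizer.step()
```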
Figure: Average PSNR values of GDN [26], MSBDN [11], and DW-GAN [14] across four datasets. It can be observed that the dehazing methods perform better when training and validation are conducted on a single dataset.