
Referenceless Prediction of Perceptual Fog Density and Perceptual Image Defogging


Abstract:

We propose a referenceless perceptual fog density prediction model based on natural scene statistics (NSS) and fog aware statistical features. The proposed model, called Fog Aware Density Evaluator (FADE), predicts the visibility of a foggy scene from a single image without reference to a corresponding fog-free image, without dependence on salient objects in a scene, without side geographical camera information, without estimating a depth-dependent transmission map, and without training on human-rated judgments. FADE only makes use of measurable deviations from statistical regularities observed in natural foggy and fog-free images. Fog aware statistical features that define the perceptual fog density index derive from a space domain NSS model and the observed characteristics of foggy images. FADE not only predicts perceptual fog density for the entire image, but also provides a local fog density index for each patch. The fog density predicted by FADE correlates well with human judgments of fog density taken in a subjective study on a large foggy image database. As applications, FADE not only accurately assesses the performance of defogging algorithms designed to enhance the visibility of foggy images, but also is well suited for image defogging. A new FADE-based referenceless perceptual image defogging algorithm, dubbed DEnsity of Fog Assessment-based DEfogger (DEFADE), achieves better results on darker, denser foggy images as well as on standard foggy images than state-of-the-art defogging methods. A software release of FADE and DEFADE is available online for public use: http://live.ece.utexas.edu/research/fog/index.html.
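The abstract does not specify the space-domain NSS model in this excerpt; a common choice in space-domain NSS work is the mean-subtracted contrast-normalized (MSCN) coefficient transform, whose local statistics deviate measurably in foggy images. The sketch below is an illustrative implementation of that transform only, not FADE itself; the function name and parameter values are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn_coefficients(image, sigma=7/6, c=1.0):
    """Mean-subtracted contrast-normalized (MSCN) coefficients:
    (I - mu) / (sd + c), where mu and sd are Gaussian-weighted
    local mean and standard deviation maps."""
    image = np.asarray(image, dtype=np.float64)
    mu = gaussian_filter(image, sigma)                  # local mean
    var = gaussian_filter(image * image, sigma) - mu * mu
    sd = np.sqrt(np.maximum(var, 0.0))                  # clamp tiny negatives
    return (image - mu) / (sd + c)
```

Statistics of these coefficients (e.g., variance within each patch) can then serve as local features, consistent with the paper's patch-wise fog density index.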
Published in: IEEE Transactions on Image Processing ( Volume: 24, Issue: 11, November 2015)
Page(s): 3888 - 3901
Date of Publication: 15 July 2015

PubMed ID: 26186784

I. Introduction

The perception of outdoor natural scenes is important for understanding the natural environment and for successfully executing visual activities such as object detection, recognition, and navigation [1]. In bad weather, the absorption or scattering of light by atmospheric particles such as fog, haze, or mist can greatly reduce the visibility of scenes [2]. As a result, objects in images captured under bad weather conditions suffer from low contrast, faint color, and shifted luminance. Since the reduction of visibility can dramatically degrade operators’ judgments in vehicles guided by camera images and can induce erroneous sensing in remote surveillance systems, automatic methods for visibility prediction and enhancement of foggy images have been intensively studied.
