
ScatterNeRF: Seeing Through Fog with Physically-Based Inverse Neural Rendering


Abstract:

Vision in adverse weather conditions, whether it be snow, rain, or fog, is challenging. In these scenarios, scattering and attenuation severely degrade image quality. Handling such inclement weather conditions, however, is essential for operating autonomous vehicles, drones, and robots in the situations where human performance is impeded the most. A large body of work explores removing weather-induced image degradations with dehazing methods. Most methods rely on single images as input and struggle either to generalize from synthetic fully-supervised training or to produce high-fidelity results from unpaired real-world datasets. With data as the bottleneck, and with most of today’s training data captured in good weather and inclement weather treated as an outlier, we rely on an inverse rendering approach to reconstruct the scene content. We introduce ScatterNeRF, a neural rendering method that adequately renders foggy scenes and decomposes the fog-free background from the participating media, exploiting the multiple views of a short automotive sequence without the need for a large training data corpus. Instead, the rendering approach is optimized on the multi-view scene itself, which an autonomous vehicle, robot, or drone can typically capture during operation. Specifically, we propose a disentangled representation for the scattering volume and the scene objects, and learn the scene reconstruction with physics-inspired losses. We validate our method on multi-view in-the-wild data and controlled captures in a large-scale fog chamber. Our code and datasets are available at https://light.princeton.edu/scatternerf.
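To make the disentangled representation concrete, the following is a minimal sketch of NeRF-style volume rendering with two densities, one for the fog-free scene and one for the participating medium. It is only an illustration under these assumptions, not the authors' implementation; the function and variable names (render_ray, sigma_obj, sigma_fog, and so on) are hypothetical.

```python
# Hedged sketch: volume rendering a ray with separate scene and fog densities.
# Splitting each sample's radiance proportionally to its density is one plausible
# way to attribute contributions to scene vs. medium; the paper's model may differ.
import numpy as np

def render_ray(t_vals, sigma_obj, sigma_fog, color_obj, color_fog):
    """Composite one ray from object and fog contributions.

    t_vals:    (N,) sample distances along the ray
    sigma_obj: (N,) volume density of the fog-free scene
    sigma_fog: (N,) volume density of the scattering medium
    color_obj: (N, 3) scene radiance at each sample
    color_fog: (N, 3) in-scattered (airlight) radiance at each sample
    """
    deltas = np.diff(t_vals, append=t_vals[-1] + 1e10)           # sample spacing
    sigma_total = sigma_obj + sigma_fog                           # combined extinction
    alpha = 1.0 - np.exp(-sigma_total * deltas)                   # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha + 1e-10]))[:-1]  # transmittance
    weights = trans * alpha
    frac_obj = sigma_obj / np.maximum(sigma_total, 1e-10)         # density-based split
    rgb_foggy = (weights[:, None] * (frac_obj[:, None] * color_obj
                                     + (1 - frac_obj)[:, None] * color_fog)).sum(0)
    # Re-rendering with the fog density removed gives the dehazed estimate.
    alpha_clear = 1.0 - np.exp(-sigma_obj * deltas)
    trans_clear = np.cumprod(np.concatenate([[1.0], 1.0 - alpha_clear + 1e-10]))[:-1]
    rgb_clear = ((trans_clear * alpha_clear)[:, None] * color_obj).sum(0)
    return rgb_foggy, rgb_clear
```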
Date of Conference: 01-06 October 2023
Date Added to IEEE Xplore: 15 January 2024
Conference Location: Paris, France

1. Introduction

Imaging and scene understanding in the presence of scattering media, such as fog, smog, light rain and snow, is an open challenge for computer vision and photography. As rare out-of-distribution events that occur based on geography and region [8], these weather phenomena can drastically reduce the quality of the captured intensity images, reducing local contrast, color reproduction, and image resolution [8]. A large body of existing work has investigated methods for dehazing [57], [5], [49], [29], [73], [77], with the most successful methods employing learned feed-forward models [57], [5], [49], [29], [73]. Some methods [49], [5], [35] use synthetic data and full supervision, but struggle to overcome the domain gap between simulation and the real world. Acquiring paired data in real-world conditions is challenging, and existing methods either learn natural image priors from large unpaired datasets [74], [73], or they rely on cross-modal semi-supervision to learn to separate atmospheric effects from clear RGB intensity [57]. Unfortunately, as the semi-supervised training cues are weak compared to paired supervised data, these methods often fail to completely separate atmospheric scatter from clear image content, especially at long distances. The problem of predicting clear images in the presence of haze is an open challenge, and notably, harsh weather also results in severely impaired human vision, a major driver behind fatal automotive accidents [4].
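Most of the dehazing methods cited above invert the standard single-scattering image formation model, in which the observed intensity is a depth-dependent blend of scene radiance and atmospheric light. Below is a minimal sketch assuming the classic Koschmieder formulation with a homogeneous medium; the function name and default parameter values are illustrative only.

```python
# Hedged sketch: single-scattering haze formation, I = J * t + A * (1 - t),
# with transmittance t = exp(-beta * depth). Names and defaults are illustrative.
import numpy as np

def apply_haze(clear_rgb, depth, beta=0.05, airlight=(0.8, 0.8, 0.8)):
    """Simulate fog on a clear image.

    clear_rgb: (H, W, 3) scene radiance without scattering
    depth:     (H, W) per-pixel distance to the camera in meters
    beta:      extinction coefficient (denser fog -> larger beta)
    airlight:  global atmospheric light color
    """
    t = np.exp(-beta * depth)[..., None]        # transmittance falls off with distance
    A = np.asarray(airlight)
    return clear_rgb * t + A * (1.0 - t)        # observed = attenuated scene + airlight
```

Dehazing amounts to estimating t and A from the observed image and solving for the clear radiance J, which becomes increasingly ill-posed as t approaches zero at long range.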

