Revisiting Light Field Rendering With Deep Anti-Aliasing Neural Network



Abstract:

Light field (LF) reconstruction is mainly confronted with two challenges: large disparity and non-Lambertian effects. Typical approaches either address the large-disparity challenge through depth estimation followed by view synthesis, or eschew explicit depth information to enable non-Lambertian rendering, but rarely solve both challenges in a unified framework. In this paper, we revisit the classic LF rendering framework to address both challenges by combining it with advanced deep learning techniques. First, we show analytically that the essential issue behind both the large-disparity and non-Lambertian challenges is aliasing. Classic LF rendering approaches typically mitigate aliasing with a reconstruction filter in the Fourier domain, which is, however, intractable to implement within a deep learning pipeline. Instead, we introduce an alternative framework that performs anti-aliasing reconstruction in the image domain and show analytically that it is comparably effective against aliasing. To explore its full potential, we then embed the anti-aliasing framework into a deep neural network through the design of an integrated architecture and trainable parameters. The network is trained through end-to-end optimization on a purpose-built training set that includes both regular and unstructured LFs. The proposed deep learning pipeline shows substantial superiority over other state-of-the-art approaches in solving both the large-disparity and non-Lambertian challenges. Beyond view interpolation for an LF, we also show that the proposed pipeline benefits LF view extrapolation.
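The image-domain anti-aliasing described above is easiest to picture through its classical ancestor, synthetic-aperture (shear-and-average) rendering: each sampled view is sheared onto a chosen focal plane before averaging, so off-plane content is blurred rather than ghosted, much as a Fourier-domain reconstruction filter would band-limit it. The sketch below illustrates only that classical baseline, not the paper's network; the names (render_antialiased, disp_focal) are ours, and a nearest-pixel shift stands in for proper sub-pixel interpolation.

```python
import numpy as np

def render_antialiased(views, positions, target, disp_focal):
    """Shear-and-average (synthetic aperture) rendering sketch.

    views:      list of (H, W) grayscale views sampled along one axis
    positions:  1D camera coordinate of each sampled view
    target:     1D camera coordinate of the novel view
    disp_focal: disparity of the focal plane, in pixels per unit baseline
    """
    acc = np.zeros_like(views[0], dtype=np.float64)
    for img, s in zip(views, positions):
        # Shear: shift this view so the focal plane aligns in the target view.
        shift_px = int(round(disp_focal * (target - s)))
        acc += np.roll(img, shift_px, axis=1)
    # Averaging band-limits (blurs) content away from the focal plane,
    # suppressing the ghosting that plain view interpolation would produce.
    return acc / len(views)
```

Content on the chosen focal plane stays sharp; everything else trades ghosting for blur, which is exactly the anti-aliasing trade-off a reconstruction filter formalizes.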
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence (Volume: 44, Issue: 9, 01 September 2022)
Page(s): 5430 - 5444
Date of Publication: 16 April 2021

PubMed ID: 33861692

1 Introduction

As an alternative to traditional 3D scene representation using scene geometry (or depth) and texture (or reflectance), the light field (LF) achieves photorealistic view synthesis in real time via LF rendering [1], [2]. High-quality rendering requires the disparity between adjacent views to be less than one pixel, i.e., the so-called densely-sampled LF. Unfortunately, practical constraints, such as dynamic scenes [3] or limited acquisition time [4], lead to insufficient sampling in the angular dimension. The quality of the rendered novel views is then inevitably degraded by the large disparity (range) of the sampled LF. Moreover, non-Lambertian effects in the scene, e.g., on jewellery, fur, glass, and faces, further aggravate this degradation [5], [6].
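The sub-pixel disparity criterion can be made concrete with the standard pinhole-geometry bound: for adjacent cameras with baseline B, focal length f (in pixels), and scene depths spanning [z_min, z_max], the maximum disparity is d = f * B * (1/z_min - 1/z_max). The snippet below is our own back-of-the-envelope check with made-up numbers for a hypothetical rig, not a setup from the paper.

```python
def max_adjacent_disparity_px(baseline_m, focal_px, z_min_m, z_max_m):
    """Maximum disparity (pixels) between adjacent views of a pinhole rig
    observing a scene whose depth spans [z_min_m, z_max_m]:
        d = f * B * (1 / z_min - 1 / z_max)
    """
    return focal_px * baseline_m * (1.0 / z_min_m - 1.0 / z_max_m)

# Hypothetical rig: 5 mm baseline, 800 px focal length, scene from 0.8 m to 3 m.
d = max_adjacent_disparity_px(baseline_m=0.005, focal_px=800.0,
                              z_min_m=0.8, z_max_m=3.0)
print(f"max adjacent-view disparity: {d:.2f} px")           # ~3.67 px
print("densely sampled (alias-free rendering):", d <= 1.0)  # False
```

At roughly 3.7 pixels of disparity, this hypothetical rig is far from the one-pixel bound, so naive LF rendering would exhibit exactly the aliasing (ghosting) artifacts that motivate this work.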
