I. Introduction
A light field (LF) records 4D information that captures both the spatial and angular distribution of light rays, and has been studied for super-resolution (SR) [1]–[5], view synthesis [6]–[8], saliency detection [9], quality assessment [10], and display [11], [12]. Recently, deep learning has enabled LF super-resolution algorithms to achieve excellent performance. However, existing methods assume that the low-resolution (LR) LF is degraded only by bicubic downsampling, whereas a real-world LF may suffer additional distortions beyond downsampling throughout the processing pipeline of generation, transmission, storage, and display. These degradations, such as noise and blur, reduce the quality of the LF image. Conventional LF SR algorithms are typically designed for a single, specific downsampling model and therefore struggle to handle the degraded LFs of real scenes. Although degradation has been considered in some single-image SR works [13]–[16], it has not been explored for LFs, and directly applying single-image SR methods to an LF underuses the informative scene geometry.
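As a point of reference, the degradation model commonly assumed in the single-image SR literature is LR = (HR ∗ k)↓s + n, i.e., blur with a kernel k, downsample by a factor s, and add noise n. The sketch below (an illustration, not any cited method; all function names and parameter values are our own) applies such a degradation independently to every sub-aperture view of a 4D LF, which is the scenario the text argues real LF SR must cope with.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.2):
    # normalized 2D Gaussian blur kernel (illustrative parameters)
    ax = np.arange(size) - size // 2
    g = np.exp(-ax**2 / (2 * sigma**2))
    k2d = np.outer(g, g)
    return k2d / k2d.sum()

def degrade_view(img, kernel, scale=2, noise_sigma=0.01, rng=None):
    # classical SR degradation: blur -> decimate by `scale` -> add noise
    rng = np.random.default_rng(0) if rng is None else rng
    pad = kernel.shape[0] // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    blurred = np.zeros_like(img)
    # direct 2D convolution, fine for small kernels
    for i in range(kernel.shape[0]):
        for j in range(kernel.shape[1]):
            blurred += kernel[i, j] * padded[i:i + h, j:j + w]
    lr = blurred[::scale, ::scale]
    return lr + rng.normal(0.0, noise_sigma, lr.shape)

# apply the same degradation to every sub-aperture view of a 4D LF
# laid out as (U, V, H, W): angular coordinates first, spatial second
lf = np.random.rand(5, 5, 64, 64)
k = gaussian_kernel()
lr_lf = np.stack([[degrade_view(lf[u, v], k) for v in range(5)]
                  for u in range(5)])
print(lr_lf.shape)  # → (5, 5, 32, 32)
```

Note that degrading each view independently, as above, ignores the angular correlation between views; exploiting that shared scene geometry during restoration is precisely what distinguishes LF SR from per-view single-image SR.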