I. Introduction
Occlusion is among the most frequently observed features of object surfaces in 3D scenes [1]. It causes random, discontinuous changes in the scene signal, and these strong discontinuities introduce interference and noise during information acquisition, resulting in local information loss. Such loss can greatly degrade the performance of computational vision systems, for example by increasing the amount of information that must be captured or by reducing the rendering quality of novel views; computational vision algorithms may even fail under extensive occlusion. For instance, plenoptic sampling theory is derived under the assumption that scene surfaces are free of occlusion [2]–[7]; these sampling results are therefore not fully applicable in practice, as some aliasing is always present and degrades the rendered output images [8], [9]. The following question thus becomes central for computer vision algorithms: how can the occlusion phenomenon be quantified mathematically so as to achieve high-quality reconstruction and display of a 3D scene?
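To make the role of the non-occlusion assumption concrete, the classical plenoptic sampling bound can be sketched as follows (a minimal sketch in the spirit of [2], assuming a two-plane light-field parameterization with camera coordinate $t$, image coordinate $v$, and focal distance $f$; the symbol $B_v$ for the highest spatial frequency along $v$ is introduced here for illustration, and the exact constants depend on the frequency convention). For a Lambertian, non-occluded scene whose depths lie in $[z_{\min}, z_{\max}]$, the light-field spectrum is confined to a wedge between the lines $\Omega_t = (f/z_{\min})\,\Omega_v$ and $\Omega_t = (f/z_{\max})\,\Omega_v$, so cameras may be spaced as far apart as

\[
\Delta t_{\max} \;=\; \frac{1}{B_v \, f \left( \dfrac{1}{z_{\min}} - \dfrac{1}{z_{\max}} \right)}
\]

without the sampling replicas overlapping. An occlusion boundary, however, multiplies the scene signal by a step-like visibility function, which spreads spectral energy outside this wedge; this is precisely why some aliasing remains in rendered views even when the bound above is honored [8], [9].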