I. Introduction
Current Free Viewpoint TV (FTV) applications, such as Super Multiview (SMV) and Free Navigation (FN), use content captured by multiple cameras surrounding the scene. View synthesis, which uses the captured views to create new virtual views, serves either to expand the viewing coverage or to close the gaps between existing real camera views, depending on the type of FTV application, i.e., a Super Multiview or a 2D walk-around-the-scene (FN) immersive experience [1], [2]. In these setups, most of the cameras typically have dissimilar radiometric lens characteristics and point at the scene from different viewpoints, often yielding significant luminance and chrominance discrepancies among the captured views [3]. As a result, synthesized views may exhibit visual artifacts caused by incorrect estimation of missing texture in occluded areas and by brightness and color differences between the original real views.