1. Introduction
Completing a scene beyond the partial occlusion of its components is a highly desirable capability for many computer vision applications. In robotic manipulation, for instance, perceiving the full shape of a target object despite the presence of occluding elements can lead to more successful and precise grasping. In the context of autonomous driving, estimating the full profile and location of potential obstacles occluded by the vehicle ahead would increase the robustness of trajectory planning and safety control.