I. INTRODUCTION
Whether in the structured environments of assembly lines or in the unstructured environments of households, the repertoire of robotic tasks has expanded consistently over the last few decades. As the level of autonomy of robots increases, so does their reliance on sensors that provide feedback to robot controllers. Among sensing devices, cameras are one of the most popular choices in the robotics community. In particular, motion control based on visual feedback, also known as visual servoing [1], [2], has consistently been at the forefront of robotics research.

The bulk of the research in visual servoing has focused on a specific architecture known as image-based visual servoing (IBVS). Despite the advantages of IBVS, its reliance on velocity control makes it unsuitable for the majority of industrial robots, which typically operate through proprietary interfaces that accept only position commands in joint space or Cartesian space. A more suitable visual servoing architecture for such robots is position-based visual servoing (PBVS). In PBVS, a command is defined by the Cartesian parameters of a desired pose, and visual feedback is used to estimate the error between the current parameters and the desired ones.

In general, if visual feedback is used to control the motion of a robot, it is necessary to keep specific targets, markers, or features within the field of view. Whereas IBVS addresses this issue implicitly, PBVS does not. In fact, one of the most cited drawbacks of PBVS is its inability to guarantee that a target or scene will remain within the field of view [3]. This deficiency is often sufficient to cause the failure of a task, especially when visual tracking is required. For example, in [4], a Kalman filter is used to track the pose of a target with respect to the coordinate frame of the camera.
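To make these two notions concrete, the sketch below illustrates (i) a standard PBVS error signal, computed from the relative transform between the current and desired camera poses, and (ii) a simple field-of-view test that projects a target point through a pinhole camera model and checks the image bounds. This is a minimal sketch rather than the method of any cited work: the 4x4 homogeneous transforms, the intrinsic matrix K, and the function names are assumptions of this example.

import numpy as np

def pbvs_error(T_cur, T_des):
    # Pose error between current and desired camera poses.
    # T_cur, T_des: 4x4 homogeneous transforms of the camera expressed
    # in a common reference frame. Returns a 6-vector (translation
    # error, axis-angle rotation error), a common PBVS error signal.
    T_rel = np.linalg.inv(T_cur) @ T_des   # current-to-desired transform
    t_err = T_rel[:3, 3]

    # Axis-angle (log map) of the relative rotation; the axis
    # extraction below assumes the angle is away from pi.
    R = T_rel[:3, :3]
    angle = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(angle, 0.0):
        r_err = np.zeros(3)
    else:
        axis = np.array([R[2, 1] - R[1, 2],
                         R[0, 2] - R[2, 0],
                         R[1, 0] - R[0, 1]]) / (2.0 * np.sin(angle))
        r_err = angle * axis
    return np.concatenate([t_err, r_err])

def in_field_of_view(p_cam, K, width, height):
    # Check whether a 3-D point (camera frame) projects inside the image.
    # p_cam: 3-vector in the camera frame; K: 3x3 intrinsic matrix.
    if p_cam[2] <= 0.0:                    # point behind the camera
        return False
    u, v, w = K @ p_cam                    # pinhole projection
    u, v = u / w, v / w
    return 0.0 <= u < width and 0.0 <= v < height

In a PBVS loop, the 6-vector error would drive a Cartesian position command toward the desired pose, while a test such as in_field_of_view can flag configurations in which the tracked target is about to leave the image, the failure mode discussed above.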