Dynamic visibility checking for vision-based motion planning


Abstract:

An important problem in position-based visual servoing (PBVS) is to guarantee that a target will remain within the field of view for the duration of the task. In this paper, we propose a dynamic visibility checking algorithm that, given a parametrized trajectory of the camera, determines if an arbitrary 3D target will remain within the field of view. We reformulate this problem as the problem of determining if the 3D coordinates of the target collide with the frustum formed by the camera field of view during the camera trajectory. To solve this problem, our algorithm computes and compares the shortest distance between the target and the frustum with the length of the trajectory described by the target in the camera's coordinate frame. Furthermore, we demonstrate that our algorithm can be combined with path planning algorithms and, in particular, probabilistic roadmaps (PRM). Results suggest that our algorithm is computationally efficient even when the target moves in the vicinity of image borders. In simulations, we use our dynamic visibility checking algorithm in conjunction with a PRM to plan collision free paths while providing the guarantee that a specific target will not leave the field of view.
Date of Conference: 19-23 May 2008
Date Added to IEEE Xplore: 13 June 2008
Print ISSN: 1050-4729
Conference Location: Pasadena, CA, USA

I. INTRODUCTION

Whether in the structured environments of assembly lines or the unstructured environments of households, the repertoire of robotic tasks has consistently expanded over the last few decades. As the level of autonomy of robots increases, so does the reliance on sensors that provide feedback to robot controllers. Among sensing devices, cameras are one of the most popular in the robotics community. In particular, motion control based on visual feedback, also known as visual servoing [1], [2], has consistently been at the forefront of robotics research.

The bulk of the research in visual servoing has focused on a specific architecture known as image-based visual servoing (IBVS). Despite the advantages of IBVS, its velocity control aspect is not suitable for the majority of industrial robots. Typically, industrial robots operate through proprietary interfaces that only allow position commands in joint space or Cartesian space. A more suitable visual servoing architecture for such robots is position-based visual servoing (PBVS). In PBVS, a command is defined by the Cartesian parameters of a desired position, and visual feedback is used to assess the error between the current parameters and the desired ones.

In general, if visual feedback is used to control the motion of a robot, it is necessary to keep specific targets, markers, or features within the field of view. Whereas IBVS addresses this issue implicitly, PBVS does not. In fact, one of the most cited drawbacks of PBVS is the inability to guarantee that a target or scene will remain within the field of view [3]. This deficiency is often sufficient to cause the failure of a task, especially if vision tracking is required. For example, in [4], a Kalman filter is used to track the pose of a target with respect to the coordinate frame of the camera.

