
On eye-sensor based path planning for robots with non-trivial geometry/kinematics



Abstract:

We formally pose and explore some novel issues that arise in eye-sensor based motion planning for robots with non-trivial geometry/kinematics. The key issue is that while the sensor senses in physical space, the planning takes place in configuration space, and the two spaces are distinctly different for robots with non-trivial geometry/kinematics. This leads to some very interesting, fundamental yet novel issues. In particular, we introduce several novel notions: s-reachability, s-completeness (which characterizes completeness for sensor-based planning algorithms), explorability of configuration space, and observability of physical space. We give sufficient conditions for a (discrete) eye-sensor based planner to be s-complete.
Date of Conference: 21-26 May 2001
Date Added to IEEE Xplore: 18 April 2006
Print ISBN: 0-7803-6576-3
Print ISSN: 1050-4729
Conference Location: Seoul, Korea (South)

1 Introduction

Motion planning (MP) in robotics can be divided into two categories [7]: (i) model-based MP, where the environment is assumed to be completely known and the task for the robot is to reach a desired goal configuration [13], and (ii) sensor-based MP, where the environment is unknown and the task for the robot, equipped with a sensor, is to explore the environment and reach a given goal configuration. A variety of robot-sensor systems have been used in the latter category, ranging from mobile robots with vision/range ("eye" type) sensing to manipulator arms with "skin" type sensors [14]. In this paper, we formalize the problem of eye-sensor based motion planning for robots with non-trivial geometry/kinematics.

(Non-trivial geometry/kinematics implies that the physical space and the C-space are different. Idealized cases such as point or circle robots are considered to have trivial geometry/kinematics from this point of view.)
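To make this distinction concrete, the following is a minimal illustrative sketch (not from the paper) of the map from a configuration to the physical space it occupies, for a planar 2-link arm. The link lengths and the sampling density are assumed values chosen only for illustration.

import numpy as np

# Minimal sketch: map a configuration (theta1, theta2) of a planar 2-link arm
# to the set of physical-space points its links occupy. Link lengths and the
# sampling resolution are illustrative assumptions, not values from the paper.
L1, L2 = 1.0, 0.8  # assumed link lengths

def occupied_points(theta1, theta2, samples_per_link=20):
    """Return workspace (x, y) points covered by the two links at this configuration."""
    elbow = np.array([L1 * np.cos(theta1), L1 * np.sin(theta1)])
    tip = elbow + np.array([L2 * np.cos(theta1 + theta2),
                            L2 * np.sin(theta1 + theta2)])
    t = np.linspace(0.0, 1.0, samples_per_link)[:, None]
    link1 = t * elbow                  # points along link 1 (base at the origin)
    link2 = elbow + t * (tip - elbow)  # points along link 2
    return np.vstack([link1, link2])

# A single point (theta1, theta2) of the 2-D configuration space corresponds to a
# whole curve of occupied points in physical space: the two spaces are different.
pts = occupied_points(np.pi / 4, -np.pi / 3)
print(pts.shape)  # (40, 2)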

This class of robots is broad, ranging from simple polygonal mobile robots to complex articulated manipulators. The key issue is that while the sensor senses in physical space, the planning takes place in configuration space, and the two spaces are distinctly different for robots with non-trivial geometry/kinematics. Some recent work, including our own [2], [3], [4], [5], [6], has presented implemented sensor-based motion planners and view planning algorithms for exploration with eye-in-hand systems (a manipulator arm with a wrist-mounted range camera). However, some of the underlying fundamental issues have not yet been explored. This paper addresses them.

For instance, it may even be impossible to decide if a given goal configuration is reachable. An example that illustrates this is shown in Figure 1: a planar eye-in-hand system consisting of a 2-link robot arm equipped with an "eye" sensor at its end-effector that gives the distance (range) of objects from the sensor. The arm is required to plan and execute collision-free motions in an environment initially unknown to the robot. The sensor field of view is indicated by the triangular region; in addition, the sensor has an extra degree of freedom and can rotate. The white region around the initial robot configuration (shown in dark gray in the left image) is known to be free at the start, the light gray region is free space unknown to the robot, and the dark gray areas are obstacles unknown to the robot. The robot in the goal configuration is shown in black. It is not possible for a sensor-based planner to determine whether the given goal configuration is reachable, since the robot is not able to sense the unknown region (which is free, but the robot does not know it). The right image shows the physical space that the robot is able to see. At best, the planner could terminate with "goal not reachable with the given sensor and initial free region."

The key intuitive idea here is that in the sensor-based case, the robot must be able to sense the physical space before it can occupy it. We formalize such notions in this paper. Although our focus is on eye type sensors, most of the notions are applicable to sensor-based planning in general. In particular, we introduce several novel notions: s-reachability, s-completeness (which characterizes completeness for sensor-based planning algorithms), explorability of configuration space, and observability of physical space. We give sufficient conditions for a discrete eye-sensor based planner to be s-complete. Finally, we discuss some related open problems.
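As a concrete illustration of the "sense before occupy" idea, here is a minimal sketch of a discrete eye-sensor based planner over a gridded configuration space. It is our own illustrative reconstruction under simplifying assumptions, not the paper's algorithm; occupied_cells, sense, and neighbors are hypothetical user-supplied helpers, and returning None corresponds to terminating with "goal not reachable with the given sensor and initial free region."

from collections import deque

# Minimal sketch of a discrete eye-sensor based planner embodying the rule that
# the robot may only occupy physical space it has already sensed to be free.
# occupied_cells, sense, and neighbors are hypothetical placeholders supplied
# by the caller, not interfaces from the paper.
def sensor_based_plan(q_start, q_goal, occupied_cells, sense, neighbors):
    """
    q_start, q_goal   : discrete configurations (hashable).
    occupied_cells(q) -> set of physical-space cells the robot occupies at q.
    sense(q)          -> set of physical-space cells observed to be free from q.
    neighbors(q)      -> adjacent configurations in the C-space grid.
    Returns a path (list of configurations), or None if the goal cannot be
    reached with the given sensor and initial free region.
    """
    known_free = set(sense(q_start)) | set(occupied_cells(q_start))
    frontier = deque([q_start])
    parent = {q_start: None}
    while frontier:
        q = frontier.popleft()
        # Every configuration expanded here is connected to q_start through
        # already-admitted configurations, so the robot could physically travel
        # there through sensed-free space before taking this sensor reading.
        known_free |= set(sense(q))
        if q == q_goal:
            path = []
            while q is not None:
                path.append(q)
                q = parent[q]
            return path[::-1]
        for nq in neighbors(q):
            # Admit a neighbor only if every cell it would occupy has already
            # been sensed to be free: "sense before occupy".
            if nq not in parent and set(occupied_cells(nq)) <= known_free:
                parent[nq] = q
                frontier.append(nq)
    return None  # exhausted the explorable region without reaching the goal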

