I. Introduction
This paper presents a new approach for fusing visual and force information and its application to manipulation tasks in unstructured environments. With a classical image-based visual servoing system, a three-dimensional (3-D) trajectory between the initial and the desired configurations cannot be specified (especially in certain cases of large rotational differences [5]), and most of the applications in which it is used are point-to-point [6]: only the desired configuration is indicated, not the trajectory that should be followed to reach it.

For this reason, to control the position of the robot during the task, a new approach called movement flow-based visual servoing has been developed, which allows the tracking of a desired trajectory between the initial and the desired configurations. This system is employed to track trajectories previously generated in the image space while also behaving correctly in 3-D space. The approach uses what we call a movement flow to determine the desired configuration from the current one. Visual servoing applications for tracking trajectories in the image have appeared only recently [17]. In such approaches, however, the tracking is formulated as a timed trajectory in the image, so that the current configuration and the desired one are separated by a fixed time interval. Consequently, if an image-based control system is employed to track timed trajectories, the system risks abandoning the desired trajectory in order to maintain the time restrictions (see Section V-B). To resolve this problem, the so-called movement flow-based visual servoing is used, in which the task to be carried out by the robot is encoded in the image space (a minimal sketch contrasting the two tracking strategies is given at the end of this section).

We should also mention the use of virtual fixtures [9] to guide the robot. This method, however, is not used to track a given trajectory in the image, but rather to guide the robot toward a point, a line, or a surface by introducing vision-based motion constraints. The use of virtual fixtures generates a set of preferred directions for achieving a given configuration, avoiding the geometric constraints imposed by sensor data.
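To make the distinction concrete, the sketch below contrasts a timed image-trajectory tracker with a movement-flow-style one, in which the reference is selected from the current image configuration rather than from the clock. This is an illustrative sketch only, not the implementation described in this paper; all names (desired_traj, lookahead, the sampling values) are hypothetical.

    # Illustrative sketch (not the authors' implementation): timed tracking
    # versus movement-flow-style tracking of an image-space trajectory.
    import numpy as np

    # Hypothetical desired trajectory: 101 image-plane points along a line.
    desired_traj = np.array([[u, 100.0] for u in np.linspace(0.0, 200.0, 101)])

    def timed_reference(t, period=0.1):
        """Timed tracking: the reference depends only on elapsed time t, so a
        delayed robot is pulled toward a point far ahead on the path and may
        cut corners to keep up with the clock."""
        k = min(int(t / period), len(desired_traj) - 1)
        return desired_traj[k]

    def flow_reference(current, lookahead=5):
        """Movement-flow-style tracking: the reference is derived from the
        current image configuration (closest trajectory sample plus a small
        look-ahead), so the robot rejoins the path instead of chasing time."""
        k = int(np.argmin(np.linalg.norm(desired_traj - current, axis=1)))
        return desired_traj[min(k + lookahead, len(desired_traj) - 1)]

    # Example: the robot has fallen behind and is still near the path start.
    current = np.array([10.0, 103.0])
    print(timed_reference(t=5.0))   # reference dictated by the clock
    print(flow_reference(current))  # reference dictated by current position

The essential design difference is the argument of the reference selector: time in the first case and the current configuration in the second, which is what keeps the flow-based tracker on the desired path when the robot falls behind schedule.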