Introduction
Mobile robot manipulation and transportation are becoming increasingly important in many applications [1]–[5], [21], [22]. For example, grasping using robotic multifingered hands was developed in [1] and [2]. Cooperative manipulation of a rigid body with multiple attached unilateral thrusters was investigated in [3]. A grasper was implemented to perform pick-and-place tasks [4]. Hand exoskeletons were designed in [5]. Cooperative transportation using multiple nonholonomic robots with physical rigid-formation-motion constraints was investigated in [29].
As for positioning or localization [6]–[14], [32]–[35] and path planning [15], [16], [23]–[26], which are essential in transportation, different methods have been proposed. Localization techniques include the global positioning system, wheeled odometry [8], ultrasound localization [9], visual odometry [10]–[12], environment maps, and multisensor fusion [14]. For example, a visual odometry method with 3-D sensors was used in [11]. In [10], a fusion of monocular vision and odometry/AHRS (attitude and heading reference system, comprising three-axis gyroscopes, accelerometers, and magnetometers) sensors was proposed, with parallel processing on graphics processing units used to improve performance. Generally, visual odometry methods are computationally intensive, with high requirements on hardware resources. In [8], a local sensor fusion technique with a global position correction was proposed, using a zenithal camera for global sensing and the sensors (encoders, gyroscope, accelerometers, compass, etc.) on the robot for localization, which, as in [9]–[11] and [15], consumed considerable computational resources. An onboard robotic module was designed for determining relative positions among miniature robots [13]. For a system of unmanned air and ground vehicles, a cooperative path planning method was proposed, with vision occlusions for obstacle avoidance [16]. For path planning techniques, also refer to [20].
Mechatronic implementation of a system also requires the design of hardware and software architectures, as in systems for UAV tracking [17], UAVs-GAVs [18], wall-climbing robots [19], flapping-wing robots [31], etc. To the best of our knowledge, there is relatively little literature dealing with the implementation of a mechatronic system for grasping and transportation, particularly with low implementation complexity.
This paper considers the systematic design and implementation of cylinder-shaped object grasping and transportation to a destination. Microcontroller units (MCUs) with limited computational capacity are used; thus, we are particularly concerned with low manipulation and path-planning complexity. The platform is a three-wheeled nonholonomic mobile robot.
For grasping, we design a mechanical gripper together with an arm, equipped with a set of sensors mounted on the platform.
For path planning, one distinct characteristic of this paper is that we do not rely on the positioning or localization techniques mentioned earlier. Instead, we guide the robot with the rough direction (bearing) of the destination, provided by the specifically designed destination beacon and the sensor system on the robot. The necessary infrared and laser emitters and receivers are equipped on both the beacon and the robot. In our design, the robot first moves along rough directions of the destination provided by the infrared system; during this process, the robot has the opportunity to calibrate an accurate direction to the destination beacon via the particularly designed event-triggered bearing-alignment mechanism, at very low energy and computational cost. We then present an effective and computationally low-cost heuristic algorithm for transportation that is tightly integrated with the designed hardware communication configuration (HCC) in an event-triggered manner, which is robust and suitable for limited onboard computational resources. The very simple and effective nature of the algorithm is due to the designed HCC, which is also low cost (in the sense of low energy consumption); the transportation algorithm integrated with the HCC can also be viewed as a simple yet efficient form of sensor fusion, i.e., information inference using multiple cues or signals [27] (although [27] discusses the inference of human mind states, the idea is readily applicable to sensor applications). In comparison, the transportation methods mentioned earlier that use visual odometry (unavoidably influenced by lighting), visibility-graph construction, multisensor fusion, etc., have clearly higher computational costs.
For robot control, we design a modular, hierarchical software architecture that decomposes the task into a set of high-level and low-level behaviors suitable for the whole implementation.
The remainder of this paper is organized as follows. Section II presents the mechanical design. Section III designs the HCC, mainly for transportation. Section IV describes grasping and release. Section V addresses transportation. Section VI concludes the paper.
Mechanical Design
To facilitate grasping, there are two mechanical components designed: the mechanical gripper and the robot arm.
The MCU used on the robot is an ATmega328p (flash: 32 kB, RAM: 2 kB, clock: 20 MHz). The platform is a nonholonomic three-wheeled design (two wheels driven by two servo motors and a third caster wheel). The geometric parameters of the robot are given in Appendix B (see Table III).
A. Design of Gripper
Assume the size of the cylinder-shaped objects is known prior to grasping. To implement grasping, we design the mechanical gripper shown in Fig. 1; the claw of the gripper is driven by a servo motor with two gears for mechanical transmission.
The gripper here is designed for cylindrical objects; for objects of other shapes (known a priori), one may simply replace the claw (see Fig. 1), which is not the focus of this paper.
B. Design of Robot Arm
To facilitate grasping using the gripper, we design a 3-DOF arm (see Fig. 2); each joint is driven by a servo motor (CYS model S0090) with a 180° range of motion. The robot arm is made of aluminum alloy. The lengths of the three links of the arm are denoted by $l_1$, $l_2$, and $l_3$ (see Table III in Appendix B).
[Fig. 4. Sensory system of the robot. The laser emitter and ultrasonic sensor are mounted on the front of the robot.]
[Figure: (a) Illustration of the position and orientation of the robot. (b) A transportation scenario.]
[Fig. 11. Example of path planning with multiple encounters of obstacles and multiple event triggers. The arrows denote the moving directions of the robot; the solid round discs denote the obstacles. Event II (obstacle avoidance) is triggered at positions B and D; Event I (the bearing-alignment event mechanism) is triggered at positions C and E, making the robot head exactly in the direction of the beacon (denoted by the dotted lines).]
Hardware Communication Configuration
As one main emphasis of this paper, this section designs the HCC, which consists mainly of four parts and is low cost (in the sense of very low energy consumption).
In Section V, an effective heuristic transportation algorithm will be provided that is tightly integrated with the HCC in an event-triggered manner. The transportation algorithm becomes simple, with low computational cost, precisely because of the HCC designed here.
Additionally, another characteristic is that the rough direction of the destination is used to guide the robot; thus, the path-planning heuristics are robust (in the sense of tolerating inaccurate detection of the destination's direction) by the nature of the design. Other aspects of robustness, concerning multiple encounters of obstacles and multiple event triggers, are considered in Section V.
A. Design of Destination Beacon
As shown in Fig. 3, the beacon is designed to guide the robot to get to the destination area, which consists of:
an XBee module on the top of the beacon to communicate with the XBee module mounted on the mobile robot;
six infrared emitters transmitting infrared signals in all directions, with a carrier frequency of 38 kHz and a duty cycle of 75% (a timer sketch is given below);
six laser receivers for receiving the laser signal emitted from the laser emitter equipped on the robot;
an MCU (ATmega328p) to control the aforementioned modules.
We also print some black lines around the beacon representing the boundary of the destination area.
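As a concrete illustration of the emitter drive, the 38 kHz carrier with roughly 75% duty cycle can be generated by a hardware timer so that the beacon MCU spends no CPU cycles on modulation. The following is a minimal sketch for the ATmega328p (20 MHz clock, as in Section II), assuming the emitters are driven from pin OC2B (PD3); the register values follow from the timer formula and are an illustration, not the authors' exact firmware.

```cpp
#include <avr/io.h>

// Minimal sketch: generate a ~38 kHz, ~75% duty carrier on OC2B (PD3)
// using Timer2 in fast PWM mode with OCR2A as TOP (ATmega328p, 20 MHz).
// f = F_CPU / (N * (OCR2A + 1)); with N = 8 and OCR2A = 65:
// f = 20e6 / (8 * 66) ~= 37.9 kHz. Duty = (OCR2B + 1)/(OCR2A + 1) ~= 74%.
static void ir_carrier_init(void) {
    DDRD  |= _BV(PD3);                               // OC2B as output
    TCCR2A = _BV(COM2B1) | _BV(WGM21) | _BV(WGM20);  // fast PWM, TOP = OCR2A
    TCCR2B = _BV(WGM22) | _BV(CS21);                 // prescaler N = 8
    OCR2A  = 65;                                     // sets the carrier frequency
    OCR2B  = 48;                                     // sets the duty cycle
}

int main(void) {
    ir_carrier_init();
    for (;;) { /* carrier runs in hardware; MCU is free for other tasks */ }
}
```

Once configured, the carrier runs entirely in hardware, which is consistent with the low-energy, low-computation goal of the HCC.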
B. Design of the Sensor System of the Robot
As depicted in Fig. 4, the robot is equipped with:
an ultrasonic sensor on the front of the robot that is used for object detection and obstacle avoidance;
a sensor system for transportation guidance that consists of:
a laser emitter (its function is described in Section III-D and shown in Fig. 9);
four infrared receivers for getting the infrared information from the beacon;
an XBee module for communication with the XBee module mounted on the beacon;
four Parallax QTI sensors to detect the boundary of the destination area (printed as the black lines in Fig. 3);
a force sensing resistor to measure the grasping force.
C. Heuristics for Path Planning
Consider the robot moving toward the destination under the guidance of the beacon. There are four infrared receivers on the robot (the configuration is shown in Fig. 5; the most sensitive scope of each receiver is 90°) for receiving the infrared signal from the infrared emitters mounted on the destination beacon.
The kinematics of the wheeled robot can be described as
\begin{eqnarray}
\dot x(t) &=& v_0 \cos \theta (t)\\
\dot y(t) &=& v_0 \sin \theta (t)
\end{eqnarray}
where $(x(t), y(t))$ is the position of the robot, $\theta(t)$ its orientation, and $v_0$ its speed. Discretizing with step length $\Delta t$ and letting $p(k) = (x(k), y(k))^T$, we have
\begin{eqnarray}
p(k + 1) &=& p(k) + v_0 \Delta t\left(\begin{array}{c}
\cos \theta (k) \\
\sin \theta (k) \\
\end{array} \right) \nonumber\\
\theta (k + 1) &=& \theta (k) + u_\sigma (k)
\end{eqnarray}
where the turning control input $u_\sigma(k)$ is determined (via the reference table, Table I) by the state of the four infrared receivers
\begin{equation}
\sigma (k) = (r_1 (k),r_2 (k),r_3 (k),r_4 (k))
\end{equation}
where $r_i(k)$ denotes the reading of the $i$th infrared receiver at step $k$.
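To make the discrete model concrete, the following minimal C++ sketch propagates the model above for a fixed control sequence; the values of $v_0$, $\Delta t$, and the turning inputs are illustrative placeholders, not the parameters used on the actual robot.

```cpp
#include <cmath>
#include <cstdio>

// Minimal sketch: propagate the discrete kinematic model above for a
// fixed control sequence. v0, dt, and the turning inputs are placeholders.
int main() {
    double x = 0.0, y = 0.0, theta = 0.0;  // initial pose at the origin
    const double v0 = 10.0;                // speed [cm/s] (placeholder)
    const double dt = 0.1;                 // step length [s] (placeholder)
    for (int k = 0; k < 100; ++k) {
        const double u = (k < 20) ? 0.05 : 0.0;  // u_sigma(k): nonzero early steps
        x += v0 * dt * std::cos(theta);          // p(k+1) = p(k) + v0*dt*(cos, sin)
        y += v0 * dt * std::sin(theta);
        theta += u;                              // theta(k+1) = theta(k) + u_sigma(k)
        std::printf("k=%3d  x=%7.2f  y=%7.2f  theta=%6.3f\n", k, x, y, theta);
    }
    return 0;
}
```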
Fig. 7 shows the situations of invalid states of $\sigma(k)$.
D. Description of Bearing-Alignment Event Mechanism (Event I)
Event I is the bearing-alignment event mechanism, an important component of the HCC; it is particularly designed and plays an important role in Algorithm 1 of Section V, as described in the following and illustrated in Fig. 9.
During the turning process (refer to Section III-C and Line 3 of Algorithm 1 in Section V), the laser beam emitted from the laser emitter on the robot may have the opportunity to point directly at the beacon.
When the laser receiver on the beacon receives the laser signal, an interrupt is triggered and reported to the MCU of the beacon.
Then, the MCU of the beacon sends the message through the XBee module of the beacon to the XBee module of the robot to report the state of current alignment direction.
An Event I interrupt is then triggered and reported to the MCU of the robot. The robot stops its rotation and calibrates its heading to the alignment direction.
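The sequence above can be illustrated with a small beacon-side sketch under the Arduino core for the ATmega328p; the pin number, the one-byte message, and the baud rate are assumptions for illustration. The laser-receiver line raises an external interrupt, the ISR only sets a flag, and the main loop forwards an alignment message over the XBee attached to the UART.

```cpp
#include <Arduino.h>

// Beacon-side sketch of Event I (bearing alignment), assuming the laser
// receiver's digital output is wired to pin 2 (external interrupt INT0)
// and the XBee module is attached to the hardware UART.
const uint8_t LASER_RX_PIN = 2;      // assumed wiring
const uint8_t MSG_ALIGNED  = 0xA5;   // assumed one-byte alignment message

volatile bool laserDetected = false;

void onLaserEdge() { laserDetected = true; }   // keep the ISR minimal

void setup() {
    pinMode(LASER_RX_PIN, INPUT);
    Serial.begin(9600);              // UART to the XBee (assumed baud rate)
    attachInterrupt(digitalPinToInterrupt(LASER_RX_PIN), onLaserEdge, RISING);
}

void loop() {
    if (laserDetected) {
        laserDetected = false;
        Serial.write(MSG_ALIGNED);   // report alignment to the robot's XBee
    }
}
```

On the robot side, receipt of this byte during the turning process raises the Event I flag, upon which the rotation stops and the alignment direction is fixed.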
Remark 1:
The energy required by the HCC is very low. To further save energy, the laser emitter can be set to begin working only after the cylinder-shaped object has been grasped.
Grasp and Release
The software architecture is designed and provided in Appendix A; the task sequence is shown in Fig. A1. This section describes the task of grasping and release, which consists of three parts: 1) object detection; 2) control of the arm and gripper; and 3) detection of the destination area and release of the object.
A. Distance Measurement and Object Detection
To locate an object in the starting area, the robot first turns in place and records the data measured by the ultrasonic sensor; if an object is detected (refer to Fig. A2 in Appendix A), the robot then moves closer to the object until the distance equals a specified value.
B. Control of the Arm and Gripper for Grasping
The servo motors on the joints of the arm are controlled by pulse width modulation (PWM) inputs, and the angle of each motor has a linear response to the number of pulses. We need only calculate the angle of each motor required for grasping and then control the posture of the arm and claw to grasp the object accurately. We obtain the following equations:
\begin{eqnarray*}
&& d_1 = l_1 \sin \alpha + l_2 \sin \left({\alpha + \beta } \right) + l_3 \\
&& h_2 = h_1 + l_1 \cos \alpha + l_2 \cos \left({\alpha + \beta } \right) \\
&& \alpha + \beta + \gamma = \frac{{3\pi }}{2} \\
&& {\rm s.}{\rm t.}\;\left\{ \begin{array}{l}
0 \le \alpha \le \frac{\pi }{2} \\
0 \le \beta \le \pi \\
\end{array} \right.
\end{eqnarray*}
Some frequently used data are calculated in advance and stored in the flash memory of the robot's MCU to reduce the online computational load.
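The joint angles can also be obtained in closed form from the equations above by treating the first two links as a planar two-link chain. The sketch below does this, assuming $d_1$ denotes the required horizontal reach, $h_1$ the base height, and $h_2$ the grasp height; the link lengths and numeric values are placeholders, not the actual arm dimensions.

```cpp
#include <cmath>
#include <cstdio>

const double PI = 3.14159265358979323846;

// Minimal sketch: closed-form joint angles from the grasping equations in
// Section IV-B. The interpretation of d1 (horizontal reach), h1 (base
// height), h2 (grasp height), and all numeric values are assumptions.
bool armAngles(double d1, double h2, double l1, double l2, double l3,
               double h1, double& alpha, double& beta, double& gamma) {
    const double X = d1 - l3;      // horizontal target of the two-link part
    const double Y = h2 - h1;      // vertical target of the two-link part
    const double c = (X * X + Y * Y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2);
    if (c < -1.0 || c > 1.0) return false;   // target out of reach
    beta  = std::acos(c);                    // satisfies 0 <= beta <= pi
    alpha = std::atan2(X, Y)
          - std::atan2(l2 * std::sin(beta), l1 + l2 * std::cos(beta));
    gamma = 1.5 * PI - alpha - beta;         // from alpha + beta + gamma = 3*pi/2
    return (alpha >= 0.0 && alpha <= PI / 2.0);  // enforce 0 <= alpha <= pi/2
}

int main() {
    double a, b, g;
    if (armAngles(20.0, 12.0, 10.0, 10.0, 5.0, 8.0, a, b, g))
        std::printf("alpha=%.3f  beta=%.3f  gamma=%.3f rad\n", a, b, g);
    return 0;
}
```

The resulting angles are then converted to PWM pulse counts via the linear servo response noted above.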
Algorithm 1: Algorithm for path planning
1: Set $k = 0$.
2: Loop
3: The robot reads the states of the four infrared receivers to determine the state of $\sigma(k)$; it then refers to the reference table (see Table I) in its memory to determine the value of the control input $u_\sigma(k)$ and takes the corresponding action, i.e., turns in place by $u_\sigma(k)$.
4: If Event I (the bearing-alignment event; see the details of Event I in Section III-D) is triggered, then go to Line 9 (out of the loop).
5: The robot records the distance measured by the ultrasonic sensor at each step.
6: If Event II (the obstacle-avoidance event described in the following) is triggered (i.e., when an obstacle is detected in front), then the robot moves with obstacle avoidance (see Section V-B).
7: If neither Event I nor Event II is triggered, the robot moves ahead with velocity $v_0$.
8: End loop
9: The robot moves straight along the alignment direction (refer to Event I) until Event III (the reaching-destination-area event described in the following) is triggered. During this process, if Event II is triggered, then go to Line 6 and restart the loop (refer to the remarks as follows).
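The following is a compact sketch of how Algorithm 1 might map onto the robot firmware; the helper functions, the contents of Table I, and the pin-level details are hypothetical stand-ins for the actual HCC interfaces.

```cpp
#include <Arduino.h>

// Skeleton of Algorithm 1 on the robot MCU. The helper functions and the
// contents of Table I are hypothetical stand-ins for the actual HCC APIs.
extern uint8_t readIRState();              // sigma(k), packed receiver states
extern float   lookupTurn(uint8_t sigma);  // Table I lookup: u_sigma(k)
extern bool    eventI();                   // alignment message from the beacon
extern float   frontDistanceCm();          // ultrasonic distance reading
extern bool    onBlackLine();              // Event III via the QTI sensors
extern void    turnInPlace(float angle);
extern void    moveAhead();
extern void    avoidObstacle();            // action of Event II (Section V-B)

const float OBSTACLE_CM = 20.0f;           // threshold from Section V-B

void transport() {
    bool aligned = false;                  // set by Event I (Line 4)
    while (!onBlackLine()) {               // Event III ends the task
        if (!aligned) {                    // Lines 2-8: guidance loop
            turnInPlace(lookupTurn(readIRState()));      // Line 3
            if (eventI()) { aligned = true; continue; }  // Line 4 -> Line 9
            if (frontDistanceCm() < OBSTACLE_CM)         // Line 6: Event II
                avoidObstacle();
            else
                moveAhead();                             // Line 7
        } else {                           // Line 9: follow alignment direction
            if (frontDistanceCm() < OBSTACLE_CM) {
                avoidObstacle();
                aligned = false;           // Event II: restart the loop
            } else {
                moveAhead();
            }
        }
    }
}
```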
C. Release of Object in the Destination Area
When the robot reaches the destination area, it releases the object; in this case, the posture of the arm and the values of the joint angles are set accordingly.
Transportation
With the HCC designed in Section III, the software transportation algorithm of the robot designed in this section has three distinct characteristics:
no localization of the robot in the environment is required;
transportation uses only the rough direction of the destination, and thus has robustness in this sense;
the software algorithm is effective with very low computational cost, and is robust in event handling and obstacle avoidance.
A. Algorithm for Transportation
The algorithm is shown in Algorithm 1 and is tightly integrated with the HCC (see Section III) in an event-triggered manner. The very simple and effective nature of the algorithm is due to the designed mechanism of the HCC.
Remark 1:
As the robot moves along the alignment direction, the monitoring of Event I can be disabled by the MCU of the robot to further save energy and computational resources, until an additional Event II (if any) is triggered thereafter.
Remark 2:
In Algorithm 1, in Line 4 (if Event I is triggered), the execution goes out of the loop, whereas in Line 9, if Event II is triggered (whether the laser is on or off; refer to Remark 1, and note that if the monitoring of Event I was disabled, the MCU of the robot should re-enable it after Event II is triggered), the execution re-enters the loop. This enables the robot to handle multiple encounters of obstacles, and the algorithm is robust in this sense.
Remark 3:
Algorithm 1 (together with the HCC designed in Section III) has inherent robustness in 1) using rough directions of the destination beacon (refer to Section III-C) and 2) handling multiple encounters of obstacles (one example is shown in Fig. 11).
B. Action of Event II for Obstacle Avoidance
For obstacle avoidance, the threshold for the distance measured by the ultrasonic sensor is set as 20 cm (a value validated by experiments); when the measured distance falls below this threshold, Event II is triggered and the robot turns in place by a preset angle to avoid the obstacle.
One example of path planning is shown in Fig. 11, with multiple encounters of obstacles and multiple triggers of Event I (refer to Section III-D) and Event II.
C. Description of Event III
Event III is the reaching-destination-area event. When the QTI sensors on the robot detect the black line (see Fig. 3), which means the robot has arrived at the destination, the robot stops.
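A minimal sketch of the Event III check is given below, assuming the four QTI sensors are read as analog values on pins A0 to A3 and that readings above a calibrated threshold indicate the black line; the pin mapping and threshold are assumptions, not the actual wiring.

```cpp
#include <Arduino.h>

// Event III check: the robot is over the black boundary line when any of
// the four QTI sensors reports a dark reading. Pins and the threshold
// below are assumptions for illustration, not the calibrated values.
const uint8_t QTI_PINS[4] = {A0, A1, A2, A3};
const int     DARK_THRESHOLD = 600;   // assumed; calibrate on the actual surface

bool onBlackLine() {
    for (uint8_t i = 0; i < 4; ++i)
        if (analogRead(QTI_PINS[i]) > DARK_THRESHOLD) return true;
    return false;
}
```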
D. Integration of Hardware and Software Algorithm for Computational Efficiency
Finally, we emphasize the computational-efficiency merit of the integration [30] of software and hardware. Precisely because of the HCC designed in Section III and the tight integration of the heuristic software algorithm with the HCC, the transportation algorithm becomes simple, with low computational cost. The very simple nature of the transportation algorithm is due to this integration, which passes some computational functions to the HCC (e.g., the heuristics for path planning in Section III-C using the effective sensor system and the destination beacon in the HCC, and the bearing-alignment event mechanism that relies on the laser emitter/receiver and the destination beacon together with the communication mechanism), and thus decreases or even eliminates many computational costs on the MCU of the robot, such as conventional computations (e.g., positioning or localization, visual or wheeled odometry, visibility-graph construction, and multisensor fusion).
The simulations and experiment are provided in Appendix C.
Conclusion
In this paper, we have designed the hardware architecture and low-cost algorithms for object grasping and transportation; the transportation solution is, at least, suitable for an environment with sparse obstacles. The physical experiments show the effectiveness of the system design and algorithms.
How can this work be extended to environments full of static or dynamic obstacles? In this case, much more powerful microprocessors may be required. One possible way is to combine the architecture and algorithm of this paper with existing sophisticated algorithms (typically computationally intensive, with the necessary hardware) for such environments, and to give the highest priority to selecting the algorithm of this paper (when possible) under certain conditions to save resources; this will be presented in a future paper.
Future work may also include developing a more intelligent algorithm, a cooperative path-planning method using multiple mobile robots as in the scenarios of [16] and [35], and an implementation of a defender-intruder problem [28], with treatment of moving obstacles, etc.
Appendix
Software Architecture
The software architecture consists of three layers.
The lowest layer is the hardware abstraction layer. In this layer, the function interfaces of the various sensors and motors are provided, for example, the functions for robot motion, distance detection, arm movement, reading the state of the infrared receivers, and sending and receiving data through the XBee modules. This layer provides application programming interfaces (APIs) that encapsulate the hardware implementation of these functionalities.
The middle layer is the robotic action layer. Behavior functions are provided by integrating certain functions of the hardware abstraction layer, for example, obstacle avoidance, object detection, grasping and releasing, destination detection, and path planning.
The highest layer is the task layer. The functions provided in the robotic action layer are organized to form an integrated whole to accomplish the given transportation task.
The program is executed through an event-driven mechanism. The whole hierarchical task architecture and the task sequence are shown in Fig. A1.
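The three layers might be expressed as the following interface sketch; all names are hypothetical and only indicate how each layer wraps the one below it, not the actual firmware APIs.

```cpp
#include <cstdint>

// Hypothetical interface sketch of the three-layer software architecture.

// Lowest layer: hardware abstraction layer (HAL) wrapping sensors/motors.
namespace hal {
    float   readUltrasonicCm();                  // distance detection
    uint8_t readInfraredState();                 // state of the infrared receivers
    void    setWheelSpeeds(int left, int right); // robot motion
    void    setArmJoints(float alpha, float beta, float gamma); // arm movement
    void    xbeeSend(uint8_t b);                 // XBee send
    bool    xbeeReceive(uint8_t& b);             // XBee receive
}

// Middle layer: robotic action layer built from HAL calls.
namespace action {
    bool detectObject();
    void graspObject();
    void avoidObstacle();
    void pathPlanStep();        // one iteration of Algorithm 1
    bool destinationReached();  // Event III via the QTI sensors
    void releaseObject();
}

// Highest layer: task layer sequencing the actions.
namespace task {
    void runTransportation();   // detect -> grasp -> transport -> release
}
```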
Simulations and Experiment
Algorithm 1 is simulated in the following. In every simulation, the robot is initially located at the origin of the coordinates with a given initial orientation $\theta(0)$.
Fig. A3 shows the simulation of Algorithm 1 without obstacles: in (a), the trigger of Event I is disabled; for comparison, in (b), the trigger of Event I is enabled and is triggered when the robot is at the coordinates (10.61, 10.61).
Simulation result of transportation task of the robot without obstacles. The center of the destination area is the location of the beacon. (a) Without Event I triggered. (b) With Event I triggered.
In Fig. A4, one obstacle is located at the coordinates (40, 50) with radius 15 cm. Event II is triggered when the robot is at the coordinates (21.21, 36.21); the turning angle is set as 10°. Event I is triggered when the robot is at the coordinates (27.55, 64.81).
Simulation result of transportation task of the robot with obstacle. The center of the destination area is the location of the beacon. (a) Without Event I triggered. (b) With Event I triggered.
The whole experiment is implemented in a 3 m × 4 m rectangular area. The diameter and height of the obstacle are 8 and 20 cm, respectively. The radius of the black line around the beacon is 50 cm. As illustrated in Fig. A5, the robot moves to the cylinder-shaped object (a), grasps the object (b), lifts the object (c), moves toward the beacon with obstacle avoidance (d)–(h), Event I is triggered (i), the robot stops when it detects the black line (k), and finally the robot puts the object down (l). The experiment shows the validity of the system design and the effectiveness of the algorithm.