
A Human-Tracking Robot Using Ultra Wideband Technology




Abstract:

In this paper, a target-person tracking method based on ultra-wideband (UWB) technology is proposed for the implementation of human-tracking robots. For such robots, the detection and calculation of the position and orientation of the target person play an essential role in implementing the human-following function. A modified hyperbolic positioning algorithm is devised to overcome the challenge of measurement errors. In addition, a modified virtual spring model is implemented in the robot to track the target person. One important advantage of such a UWB-based method over computer-vision-based methods lies in its robustness against environmental conditions. In fact, computer-vision methods can be very sensitive to lighting conditions, which makes them suffer in unconstrained outdoor environments. The robust performance of our methods is illustrated by experimental results on a real-world robot.
Topic: Sequential Data Modeling and Its Emerging Applications
Published in: IEEE Access ( Volume: 6)
Page(s): 42541 - 42550
Date of Publication: 25 July 2018
Electronic ISSN: 2169-3536

SECTION I.

Introduction

Traditionally, robots were confined to industrial settings such as factory assembly lines. However, significant advances in robotic technology now make it possible for robots to operate in complex environments while sharing their workspaces with humans to perform tasks such as health care, home automation, and construction [1]. As a result, Human-Robot Interaction (HRI) has drawn worldwide attention in recent years. HRI involves human dynamics, behaviors, and the various sensors and algorithms required for high-quality, safe cooperation. While HRI systems have traditionally interacted with users through passive methods such as voice commands, gesture control, and other human-computer interfaces, recent work aims to make robots interact with users more actively [2].

As a specific aspect of HRI, automatic human-tracking is an essential and fundamental capability for robots that interact with humans. Automatic tracking enables a mobile robot to continuously maintain a certain distance from the target person, allowing the robot to work alongside humans safely and effectively.

A typical application of automatic human-tracking, presented in [3], is a robot that carries baggage and follows its owner automatically, freeing people from carrying baggage themselves, for example at the airport or while shopping. Another application allows a smart wheelchair to be led by a person toward a destination, as described in [4]. In that system, driver assistance keeps the wheelchair from falling down stairs or colliding with obstacles when the user drives it through complex environments with unknown obstacles, and human-following lets the wheelchair trail the user at a suitable distance when the user prefers to walk. In [5], a robot helps customers in a retail store carry the products they intend to buy and follows them to the counter. A mobile robot that keeps itself facing a human and acts as an assistant is presented in [6]. The friendly small-size robot assistant presented in [7] is equipped with a variety of sensors for human-tracking and has a storage box to help people deliver books or notebooks. In [8], a construction robot is presented that autonomously follows a human operator to the work position.

The detection and calculation of the position and orientation of the target person play an essential role in implementing the human-following function [6]. Various methods for detecting and tracking a target person in complex environments have been presented recently. In early work, LED lights carried by the person were detected and tracked by a camera [9]. A sensor network comprising several cameras was used to help the robot track a human in [10]. Laser range finders have also been used to acquire distance information and in turn improve the detection of the target person against the background [11]–​[13]. While a laser range finder can measure distance accurately, it has difficulty identifying the target person [7]. As a result, some research combined several sensors to improve the accuracy of human detection and tracking [2], [14], [15]. Hu et al. [16] proposed a sensing system based on a novel 3-D mean-shift algorithm for an RGB-D camera. They also derived an anticipatory human walking model, based on relevant research on human walking, with which the robot can predict the target person's movement and act in advance, thus enhancing tracking performance. Sound signals were used in [17] to locate and track humans.

Vision-based algorithms have dominated these approaches because of the low cost of cameras and recent improvements in machine-vision algorithms [1], [18], [19]. Nevertheless, vision-based human-tracking approaches have fundamental disadvantages due to effects such as camera-calibration errors, variation in illumination, field-of-view (FOV) constraints, and partial or full occlusion. Such algorithms are also confronted with challenges like camera motion [20], [21], in-plane pose variation and pose change due to out-of-plane rotation [22], and localization errors resulting from inaccurate robot odometry [23] and nonholonomic constraints [1]. Moreover, vision-based human-tracking demands more computing power and hence higher hardware cost, further limiting its commercial use.

In this paper, a method using UWB technology to detect and track the human location is proposed. UWB technology offers a variety of advantages, including low power consumption, resistance to multipath interference, high security, low system complexity, and especially high positioning precision. It has the potential to become a mainstream technology for indoor wireless localization. In this sense, this paper is an effort to explore the performance of such a UWB-based method for human-body tracking in an indoor environment. Moreover, unlike computer-vision-based methods, which are sensitive to lighting conditions, the UWB-based approach is expected to perform well outdoors. This is an important advantage over computer-vision-based approaches, whether or not depth information is used.

Besides the technology for target-person detection and localization, the human-tracking algorithm also plays an important role in enabling the robot to follow the target person. The basic human-tracking method is to move the robot toward the target person until the robot is close enough to the person and then cease moving. Afterward, the robot keeps tracking the target person and follows him/her whenever the person moves [16]. In this paper, a modified virtual spring model connecting the human and the robot is proposed to control the robot so that it tracks the person smoothly.

This paper is organized as follows: the system architecture, including the hardware and software components of the mobile robot, is described in Section II. The UWB data calibration process and the tag-position calculation using a modified hyperbolic positioning algorithm are presented in Section III; the moving-average filter is also described in that section. The dynamic model of the robot is derived in Section IV. The human-tracking algorithm is presented in Section V. The experimental results and discussion are presented in Section VI. Finally, the paper is concluded in Section VII.

SECTION II.

System Architecture

A. Robot Hardware Architecture

The robot presented in this paper was designed and constructed by the authors. It includes an embedded control board and is equipped with four ultrasound sensors and three UWB anchors. The hardware architecture and components are shown in Figure 1 and Figure 2. Figure 3 shows the control board of the system.

FIGURE 1. The hardware architecture of our robot. Specifically, the wheeled robot is driven by the embedded controller. Two motors are installed for moving and turning; accordingly there are two wheels on the two sides, with an additional wheel for directional movement. The sensors include the UWB anchors and the ultrasound modules. A control board connects all the sensors and computes the real-time motion signals that control the two wheels through the motors. The whole robot, including the hardware, software, and mechanics, was implemented by the authors.

FIGURE 2. The hardware components of the robot. Different sensor and communication modules are involved, including UWB, ultrasound, and Bluetooth. The components were designed and integrated by the authors.

FIGURE 3. The control board of the robot. The A33 SoC, the STM32F103 MCU, and other components are all included on the board. The control board was designed by the authors.

In this system, two high-performance controllers are used, namely the A33 SoC and the STM32F103 MCU. The A33 SoC features quad Cortex™-A7 CPU cores to balance high performance and power efficiency. It comes with a wide range of connectivity, which reduces system complexity and speeds up development significantly. UWB data reception, calibration, filtering, and the human-tracking algorithm are executed on the A33 SoC.

The Cortex-M3 core and a variety of peripherals such as UART, SPI, and I2C are included in the STM32F103 devices, with a maximum CPU speed of 72 MHz. In this system, the STM32F103 MCU is used as the lower-level controller, which is connected to the Bluetooth module for data communication; to the motor drives for movement; and to the ultrasound modules for distance estimation, which is further used for obstacle avoidance.

The A33 SoC is connected to the STM32F103 MCU through a UART. The sensor data from the three UWB anchors are first sent to the A33 SoC, which processes the raw data. Because of significant errors in the distance data, calibration is necessary before further calculation. After that, the tag position and orientation are calculated by a modified hyperbolic positioning algorithm, which overcomes the difficulty of computing the actual tag position under the inevitable measurement errors that defeat the typical hyperbolic positioning algorithm. The tag position is then filtered by a moving-average filter, which efficiently reduces the position drift caused by random measurement errors. After the moving-average filter, the position and orientation of the target person have been obtained with minimal error. These data are then used to execute the path-planning and human-tracking algorithm, in which a modified virtual spring is assumed to connect the target person and the robot. The desired velocities of the left and right wheels are obtained from the human-tracking algorithm and transmitted to the STM32F103 MCU through the UART.

The closed-loop control algorithm is implemented on the STM32F103 MCU to make the two wheels rotate at the desired velocities instructed by the A33 SoC. In this way, the robot moves toward the target person continuously until it is within the desired distance of the person. The motor drives are H-bridges, each composed of two half-bridges.

The Bluetooth module, connected to the STM32F103 MCU through a UART, receives instructions from the user via a smartphone, including the desired distance, start/stop commands, etc. Four ultrasound modules for distance estimation and obstacle avoidance are also connected to the STM32F103 MCU through another UART.

Other hardware components include a switching power module and several LDO linear regulators which convert the battery power into the desired voltages to power the A33 SOC, the STM32 MCU and other sensors and modules within the system. A debug port is also included for the data communication and monitoring during the debugging process. A JTAG port is used to download the firmware into the STM32F103 MCU, and some spare ports are reserved for further functions and modules such as GPS, GPRS, gyroscope module, etc. Besides, there is a type-A USB host port with its corresponding power management circuit used for powering the UWB anchors and receiving the distance data from them.

B. The Flow Chart of the System

In this paper, UWB technology is used to detect and track the target person. Because of the measurement errors and additional noise introduced by real-world environments, the UWB data must be calibrated and filtered before the position and orientation calculation.

Specifically, a modified hyperbolic positioning algorithm (described in detail in Section III-C) is used in this paper to obtain the position and orientation of the target person, overcoming the difficulty in position calculation encountered with the traditional hyperbolic positioning algorithm. Moreover, a moving-average filter is implemented for data filtering. If the target person is within ±45° of the robot's heading, the control algorithm tracks and follows the target person; otherwise, the robot stops moving forward and rotates until the target person is within ±45° in front of the robot again. In this way, the robot tracks and follows the target person continuously and smoothly. The flow chart is shown in Figure 4.
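The ±45° switching policy above can be sketched as a small decision function (an illustrative sketch; the function name, threshold constant, and return values are our own, not from the paper):

```python
import math

FOLLOW_CONE_DEG = 45.0  # the +/-45 degree cone described in the text


def choose_action(alpha_rad):
    """Decide between following and rotating, based on the angle alpha
    between the robot's heading and the direction to the target person.
    Returns "follow" when the target is inside the cone, else "rotate"."""
    if abs(math.degrees(alpha_rad)) <= FOLLOW_CONE_DEG:
        return "follow"
    return "rotate"
```

The robot's main loop would call this once per UWB sample, moving forward only in the "follow" state and rotating in place otherwise.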

FIGURE 4. The flow chart of the robot system. The robot continuously samples the UWB data, calibrating and filtering them to estimate the position and orientation of the target person. If the robot is headed closely toward the target, it follows the target by moving autonomously; otherwise, it rotates to search for the lost target until the target reappears.

SECTION III.

UWB Data Process

A. The Layout of the UWB Anchors

There are three UWB anchors in our system, namely A0, A1, and A2. As illustrated in Figure 5, the UWB anchors are mounted on top of the robot, and the UWB tag is carried by the target person. While the target person moves, the UWB anchors measure their distances to the tag, and the A33 chip calculates the position and orientation of the target person after data calibration and filtering. The path-planning and tracking algorithm is then executed to move the robot toward the target person and keep a constant distance from them.

FIGURE 5. The layout of the UWB anchors, which form a triangle. The three anchors are installed on the robot to localize the tag on the tracked person.

B. The Calibration of the UWB Data

The distances between the UWB anchors and the tag measured by the modules contain large errors due to the multipath effect and other causes. As a result, it is necessary to calibrate the distance data from the UWB anchors. To calibrate the data, the tag is put on a tripod and placed at fixed points from 0.3 m to 2 m away from anchor A0, with a gap of 10 cm between adjacent points, as shown in Figure 6.

FIGURE 6. The calibration setup: the tag is mounted on a tripod and placed at fixed points from 0.3 m to 2 m away from anchor A0, with a 10 cm gap between adjacent points.

The distance data between the anchors and the tag are then collected. The actual distances are also measured with a ruler, and the data from the UWB anchors are calibrated against these ground-truth distances, as shown in Figure 7 to Figure 9.

FIGURE 7. Calibration result of anchor A0. The calibrated data closely follow a linear relation to the ground truth measured by the ruler.

FIGURE 8. Calibration result of anchor A1. The calibrated data closely follow a linear relation to the ground truth measured by the ruler.

FIGURE 9. Calibration result of anchor A2. The calibrated data closely follow a linear relation to the ground truth measured by the ruler.

C. The Calculation of the Tag Position

Typically, under ideal error-free conditions, the tag position can be located by the hyperbolic positioning algorithm [24], [25], as illustrated in Figure 10. The anchors are regarded as the centers of three circles whose radii are the distances from each anchor to the tag. Theoretically, the intersection of the circles is the location of the mobile tag.

FIGURE 10. Typical hyperbolic positioning algorithm. After obtaining the distances between the tag and the three anchors A0/A1/A2, circles are drawn centered on the anchors, with radii equal to the measured distances. Ideally, with no measurement error, the three circles intersect at a single point, which is the location of the tag.

However, due to the inevitable measurement errors in the distances from the anchors to the tag, the three circles generally do not intersect at one point, which poses a great challenge to determining the correct location of the mobile tag. In this system, the typical algorithm is modified by removing the circle around A2 and searching for the two intersection points of the remaining two circles, as explained in Figure 11.

FIGURE 11. Modified hyperbolic positioning algorithm. Circles are drawn centered on anchors $A0$ and $A1$; the two circles intersect where the tag is located. To determine the exact tag location from the two candidates $T$ and $T'$, one calculates the distances from the candidate locations to anchor $A2$, namely $TA2$ and $T'A2$. Comparing $TA2$ and $T'A2$ with the measured distance between $A2$ and the tag, the closer one is selected and the tag location is the corresponding candidate.

Briefly speaking, because there are only two circles, centered at $A0$ and $A1$ respectively, there are exactly two intersection points, and the mobile tag must be located at one of them, namely $T$ or $T'$. To determine which point is the tag's actual location, the distances from $A2$ to $T$ and to $T'$ are calculated and compared with the distance measured by anchor $A2$; the candidate with the smaller discrepancy is the one sought. The details are explained as follows.

As illustrated in Figure 12, the $x$ axis is along the line through $A1$ and $A0$, and the $y$ axis is perpendicular to the $x$ axis and corresponds to the forward direction of the robot. The origin is at the midpoint of $A0$ and $A1$. While the length from $A1$ to $A0$ has been measured in advance, the distances from $T$ to $A0$ and $A1$ are obtained from the anchors in real time. As a result, the angle $\phi $ can be obtained using the law of cosines:\begin{equation*} \cos \phi = \frac {TA_{0}^{2}+A_{0} A_{1}^{2}-TA_{1}^{2}}{2\,TA_{0}\cdot A_{0} A_{1} }\tag{1}\end{equation*}

FIGURE 12. The calculation of the tag's position based on the three anchors $A0$, $A1$, $A2$. Two candidate locations $T$ and $T'$ are first found; the final location is then determined from these two candidates.

The equation above yields:\begin{equation*} \sin \phi =\pm \sqrt {1-\cos^{2}\phi }\tag{2}\end{equation*}

Thus, the coordinates of the two intersection points $T$ and $T'$ can be calculated by the following formulas:\begin{align*} [x_{T}\quad y_{T}]^{T}=&[x_{A0}-TA_{0}\cos\phi \quad TA_{0}\sin\phi]^{T} \\=&[x_{A0}-TA_{0}\cos\phi \quad TA_{0}\sqrt {1-\cos^{2}\phi }]^{T} \tag{3}\\ \,[x_{T'}\quad y_{T'}]^{T}=&[x_{A0}-TA_{0}\cos\phi \quad -TA_{0}\sin\phi]^{T} \\=&[x_{A0}-TA_{0}\cos\phi \quad -TA_{0}\sqrt {1-\cos^{2}\phi }]^{T}\tag{4}\end{align*}

Then the distances from these two points to anchor $A2$ are:\begin{align*} D_{T}=&\sqrt {(x_{A0}-TA_{0}\cos\phi -x_{A2})^{2} +(TA_{0}\sqrt {1-\cos^{2}\phi }-y_{A2})^{2}} \tag{5}\\ D_{T'}=&\sqrt {(x_{A0}-TA_{0}\cos\phi -x_{A2})^{2} +(-TA_{0}\sqrt {1-\cos^{2}\phi }-y_{A2})^{2}} \tag{6}\end{align*}

Then $D_{T}$ and $D_{T'}$ are compared with the distance from A2 to the tag measured by that anchor; the candidate with the smaller discrepancy is the correct location of the mobile tag.
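As a sketch, the candidate-selection procedure above might be implemented as follows. The function name and argument layout are our own assumptions; the coordinate frame follows Figure 12, with the origin at the midpoint of A0 and A1 and the x axis along A1→A0, so A0 sits at (baseline/2, 0):

```python
import math


def locate_tag(d0, d1, d2, baseline, a2):
    """Modified hyperbolic positioning (illustrative sketch).
    d0, d1, d2: calibrated distances from anchors A0, A1, A2 to the tag.
    baseline:   known distance between A0 and A1.
    a2:         (x, y) coordinates of anchor A2 in the robot frame.
    Returns the (x, y) candidate whose distance to A2 best matches d2."""
    x_a0 = baseline / 2.0
    # Law of cosines at A0, Eq. (1)
    cos_phi = (d0**2 + baseline**2 - d1**2) / (2.0 * d0 * baseline)
    cos_phi = max(-1.0, min(1.0, cos_phi))  # guard against measurement noise
    sin_phi = math.sqrt(1.0 - cos_phi**2)
    x = x_a0 - d0 * cos_phi
    # The two intersection candidates T and T', Eqs. (3)-(4)
    candidates = [(x, d0 * sin_phi), (x, -d0 * sin_phi)]
    # Pick the candidate whose distance to A2 best matches d2, Eqs. (5)-(6)
    return min(candidates,
               key=lambda p: abs(math.hypot(p[0] - a2[0], p[1] - a2[1]) - d2))
```

Clamping `cos_phi` to [−1, 1] is one simple way to keep the square root defined when noisy distances make the two circles fail to intersect exactly.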

D. The Moving-Average Filter of the UWB Data

When calculating the location of the mobile tag, it is noticeable that the computed coordinates fluctuate considerably even when the tag is stationary. When the tag is moved around the robot while the robot stays still, the calculated path of the moving person is as illustrated in Figure 13.

FIGURE 13. The moving path before filtering is performed. The path curves are cluttered with many fluctuations and much noise.

As shown in Figure 13, the path is not smooth and changes dramatically in some portions, which does not match the actual path. This is caused by the measurement errors in the distances obtained from the UWB anchors. To reduce these fluctuations, the moving-average filter shown in Figure 14 is adopted in this system. The data from the UWB modules are fed into this filter continuously, and the filtered data are output to the control algorithm.

FIGURE 14. The structure of the moving-average filter.

After the data have been filtered, the path becomes much smoother, without many dramatic changes. An example is shown in Figure 15.

FIGURE 15. The moving path after filtering by moving-average smoothing. The curves become more regular.

SECTION IV.

Dynamic Model of the Robot

To develop the tracking algorithm, it is necessary to derive the dynamic model of the system, which connects the system's behavior to its inputs. The construction of the robot is shown in Figure 16. This type of robot, namely a Hilare-type robot, has two wheels driven independently by two motors. As illustrated in Figure 16, the position of the robot in the coordinate system $x_{1}-x_{2}$ is globally represented by the coordinates of the point $(x_{1},x_{2})$ , while the orientation of the robot is defined by the angle $\theta $ between the positive $x_{1}$ axis and the line perpendicular to the axle connecting the two wheels. Meanwhile, the body coordinate system $x-y$ of the robot is assumed to be fixed at the midpoint of the axle of the two wheels. The center of gravity of the robot is at the point illustrated in Figure 16 [26]. The distance between the two wheels is $T$ .

FIGURE 16. The schematic figure of the Hilare-type robot. The robot is driven by two independent wheels.

Figure 17 shows the free-body diagrams of the two wheels. From the construction of the robot and the free-body diagrams shown in Figures 16 and 17, the dynamic equations of motion are derived as follows:\begin{align*} \left ({m_{r}+2m_{w}+\frac {2}{r^{2}}M_{w2}}\right)\ddot {y}-m_{r}c\dot {\theta }^{2}=&\frac {1}{r} \left(\tau _{r}+\tau _{l}\right) \tag{7}\\ \left ({M_{r}+2M_{w}+m_{w}\frac {T^{2}}{2}+M_{w2}\frac {T^{2}}{2r^{2}}+m_{r}c}\right)\ddot {\theta }=&\frac {T}{2r}\left(\tau _{r}-\tau _{l}\right) \tag{8}\end{align*}where:

  • $\tau _{r}$ and $\tau _{l}$ : the driving torques of the right and the left wheel.

  • $m_{r}$ : the mass of the body of the robot.

  • $m_{w}$ : the mass of the wheels.

  • $M_{r}$ : the moment of inertia of the rotating masses of the robot body.

  • $M_{w}$ : the moment of inertia of the rotating masses of the wheels.

FIGURE 17. The free-body diagrams of the left and right wheels.

The system's behavior is described by state-space variables. In this system, the following variables have been chosen [26]:\begin{equation*} q= \begin{bmatrix} q_{1}\\ q_{2}\\ q_{3}\\ q_{4}\\ q_{5} \end{bmatrix} = \begin{bmatrix} x_{1}\\ x_{2}\\ \theta \\ \dot {y}\\ \dot {\theta } \end{bmatrix}\tag{9}\end{equation*} From (7) and (8), the equations of motion of the robot can be written in first-order form as follows [26]:\begin{equation*} \dot {q}= \begin{bmatrix} \dot {q_{1}}\\ \dot {q_{2}}\\ \dot {q_{3}}\\ \dot {q_{4}}\\ \dot {q_{5}} \end{bmatrix} = \begin{bmatrix} q_{4}\cos {q_{3}}\\ q_{4}\sin {q_{3}}\\ q_{5}\\ \frac {1}{m}\left ({\frac {1}{r}(\tau _{r}+\tau _{l})+m_{r}cq^{2}_{5}}\right) \\ \frac {T}{2M}\frac {1}{r}(\tau _{r}-\tau _{l}) \end{bmatrix}\tag{10}\end{equation*} where \begin{align*} m=&m_{r}+2\left ({m_{w}+\frac {1}{r^{2}}M_{w2}}\right),\tag{11}\\ M=&M_{r}+2M_{w}+\left ({m_{w}+\frac {1}{r^{2}}M_{w2}}\right)\frac {T^{2}}{2}+m_{r}c.\tag{12}\end{align*}
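For illustration, the state derivative of Eq. (10) can be evaluated numerically. The function and parameter names below are our own; the explicit-Euler step is a simplification for demonstration, not an integrator described in the paper:

```python
import math


def robot_dynamics(q, tau_r, tau_l, m, M, m_r, c, r, T):
    """State derivative of Eq. (10); q = [x1, x2, theta, y_dot, theta_dot]."""
    x1, x2, theta, v, w = q
    return [v * math.cos(theta),                          # q1_dot
            v * math.sin(theta),                          # q2_dot
            w,                                            # q3_dot
            ((tau_r + tau_l) / r + m_r * c * w**2) / m,   # q4_dot
            (T / (2.0 * M)) * (tau_r - tau_l) / r]        # q5_dot


def euler_step(q, dq, dt):
    """One explicit-Euler integration step (illustrative only)."""
    return [qi + dqi * dt for qi, dqi in zip(q, dq)]
```

With zero wheel torques and a nonzero forward velocity, the model simply translates the robot along its current heading, which is a quick sanity check on the equations.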

SECTION V.

The Human Tracking Algorithm

After the moving-average filtering, the target person's position and orientation have been obtained with minimal error. These data are then used to perform the robot's path planning and the following and tracking function. In this paper, a modified virtual spring model is proposed as follows.

A virtual spring is assumed to be connected between the robot and the target person, as shown in Figure 18.

FIGURE 18. The virtual spring model. A virtual spring connects the target person and the robot. The control is derived from two factors. The first acts along the direction of the spring: the longer the distance between person and robot, the stronger the force pulling them together. The second is the angle between the robot's heading and the person, which drives the robot to close this angular gap using feedback control.

The inputs $(e,\alpha)$ of the virtual spring model are obtained from the moving-average filter, where $e$ is the error between the detected actual distance $d_{act}$ from the robot to the target person and the reference distance $d_{ref}$ , and $\alpha $ is the angle between the orientation of the robot and the virtual spring. $v_{l}$ and $\omega $ are the desired linear and rotational velocities, respectively, defined as follows:\begin{align*} v_{l}=&k^{l}_{p}e+k^{l}_{i}\int {e\,dt}+k^{l}_{d}e'\tag{13}\\ \omega=&k^{r}_{p}\alpha +k^{r}_{i}\int {\alpha\, dt}+k^{r}_{d}\alpha '\tag{14}\end{align*}$v_{l}$ and $\omega $ are then decoupled into the rotational velocities of the two wheels, $\omega _{l}$ and $\omega _{r}$ , as follows:\begin{equation*} \begin{bmatrix} \omega _{r}\\ \omega _{l} \end{bmatrix}= \begin{bmatrix} \frac {1}{r}&\quad \frac {T}{2r}\\ \frac {1}{r}&\quad -\frac {T}{2r} \end{bmatrix} \begin{bmatrix} v_{l} \\ \omega \end{bmatrix}\tag{15}\end{equation*} The whole diagram of the tracking algorithm is illustrated in Figure 19.

FIGURE 19. The diagram of the tracking algorithm.

When the modified virtual spring model is implemented, some problems specific to the differential-drive mobile robot must be taken into consideration. According to Brockett’s theorem, it is impossible to control a nonholonomic robot to move to one point of the state space asymptotically using smooth state feedback [27]. Besides, when the virtual spring is along the $x$ axis, the robot cannot move according to the control law above. As a result, in our implementation, when the absolute value of the angle $\alpha $ is between 45° and 90°, the control law is suspended: the robot stops moving ahead and rotates toward the target person until $\alpha $ is less than 45°. After that, the control law takes effect again and human tracking and following resumes.
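The mode-switching behavior described above can be sketched as a simple guard around the control law. The 45° threshold comes from the text; the in-place rotation speed `ROTATE_SPEED` is an assumed illustrative value.

```python
import math

ALPHA_THRESHOLD = math.radians(45)  # threshold from the paper
ROTATE_SPEED = 0.5                  # rad/s, in-place rotation speed (assumed)

def select_command(e, alpha, control_law):
    """Return (v_l, omega) given distance error e and heading angle alpha.

    control_law: callable (e, alpha) -> (v_l, omega) implementing the
    virtual spring PID law of Eqs. (13)-(14).
    """
    if abs(alpha) >= ALPHA_THRESHOLD:
        # Suspend the control law: stop moving ahead and rotate in place
        # toward the target person until the angle gap closes.
        return 0.0, math.copysign(ROTATE_SPEED, alpha)
    # Angle is small enough: the virtual spring control law takes effect.
    return control_law(e, alpha)
```

For example, with a heading error of 60° the function commands a pure rotation regardless of the distance error, and once the error drops below 45° it falls through to the PID law.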

SECTION VI.

Experimental Results and Discussion

Experimental data have been acquired and are illustrated in Figure 20, Figure 21 and Figure 22, which present three moving routes of the target person and the corresponding tracking paths of the robot.

FIGURE 20. The diagram of the tracking algorithm.

FIGURE 21. The diagram of the tracking algorithm.

FIGURE 22. The diagram of the tracking algorithm.

In Figure 20, the target person remains stationary; the robot obtains the coordinates of the person and moves toward him until it gets close enough, then stops, during which the distance between them keeps decreasing as shown in the figure. The initial coordinates of the person are (1289, 2955) mm, while the final coordinates are (−22, 273) mm.

In Figure 21, the target person keeps walking forward in front of the robot, and the robot follows the person continuously. The curve in Figure 21 shows how the coordinates of the target person change over the whole following process. In the beginning, the person is located at point A with coordinates (47, 1047) mm. When the person starts to walk, the robot starts to move as well and tries to keep itself within the desired distance from the target person. From point A to B, the target person accelerates and the distance to the person increases, so the robot begins to accelerate as well. From point B to C, the robot moves faster than the person, and as a result, the distance between the robot and the target person decreases continuously. Finally, the person stops at point C with coordinates (188, 1267) mm, and the robot stops moving, too.

In Figure 22, the target person walks along a square route. The person starts walking forward at point A with coordinates (−115, 818) mm, and the robot starts to follow the person immediately. At point B, the person turns left toward point C, and the robot starts to turn left as well until the person is located at point D. Then the person turns left again until he reaches point E, and the robot also turns left and continues to follow the person until he arrives at point F and stops there.

As shown in the figures, the robot starts moving when it detects the target person carrying the tag and moves toward the person until the distance decreases below the desired value, which validates our control law.

SECTION VII.

Conclusion and Further Discussion

In this paper, a method is proposed for real-time tracking and following of a human with UWB technology. As illustrated by the experimental results, the robot can obtain the distance to the target person accurately using the UWB anchors and the tag. The challenge of the measurement errors is overcome by the modified hyperbolic positioning algorithm and the moving-average filter, while the modified virtual spring model improves the control performance of human tracking, as is also shown by the experimental data.

In future work, there are two immediate directions for further research. The first is to improve the autonomous movement control algorithms. The robot’s physical tracking capability is limited by the underlying movement control model: in many cases the robot can localize the target’s position well at first, but due to the limitation of its movement ability it may lose track when the target moves out of the range of the sensors. We note a relevant control algorithm in [28], where movement along both straight lines and curves can be performed by automatic control. Second, vision-based techniques could be integrated into our current system to improve its robustness, especially deep-network-based models for motion estimation [29] and object detection [30]. Meanwhile, lighter and more efficient object-identity-agnostic models, e.g., saliency methods [31], will also be studied.
