Autonomous aerial vehicles (AAVs) are defined as Dynamic Remotely Operated Navigation Equipment (DRONEs) that are controlled autonomously, without onboard pilots [1]. The authors in [2] categorize navigation into four phases, namely “perception”, “localization”, “cognition” and “path planning”: perception is an information-collection process that utilizes the sensing apparatus onboard the AAV to capture its surrounding environment; localization is the process by which the AAV locates its position in the airspace with respect to a given coordinate system; and cognition is a term borrowed from neuroscience that refers to using the brain to build a mental representation of the airspace and using it to guide movement. AAVs can navigate through their flight path based on a predefined set of waypoints if there is continuous access to the GNSS service [3]. However, there are challenging scenarios in which AAVs are required to deploy self-awareness in GNSS-denied environments and find an optimal path to their destination through environmental sensing. The authors in [8] define self-awareness as a meta-capability that includes perception, localization, path planning, and motion control, achieved by taking advantage of the onboard sensing devices. Furthermore, the relative ease with which GNSS-based navigation can be jammed (through denial of reception by a competing signal) or spoofed (through deliberate introduction of a false signal) makes GNSS-based localization vulnerable to cyberattacks [4], [5], [6], [7]. The authors in [8] conclude that the vastly growing and increasingly complex AAV applications are pushing control requirements beyond human-in-the-loop capabilities, motivating research into fully autonomous navigation methodologies.
The authors in [8] further argue that localization techniques in autonomous navigation systems suffer from pose-estimation drift over long periods of time. The most common pose-estimation practice is dead reckoning, where drift over time is corrected through other methods, of which GNSS is a common choice; GNSS, however, is vulnerable to spoofing, jamming, and environmental effects [9], as well as signal obscurity in urban canyons, natural canyons, and forest understories [3], [10]. The conclusion is that such vulnerabilities call for GNSS-independent navigation. Autonomous navigation has been a complex area of research due to the many interrelated aspects of the problem, including localization, translational velocity estimation, approach and landing, motion planning, obstacle avoidance, attitude estimation, and mapping; 62% of the research work in this area has been expended on autonomous localization and positioning, showing that a fully GNSS-independent navigation solution is the primary quest [8].
This paper is an extension of a previously published paper [46] that outlines a cooperative localization scheme in partially GNSS-denied environments, where at least one agent in the network has access to GNSS, or in fully GNSS-denied environments, where at least one agent can exchange C-V2X messages with a stationary infrastructure with a known position, referred to as an anchor. Other recent articles in the literature discuss localization in GNSS-denied environments where vehicles take advantage of an environment embedded with geographic position marks. For example, the authors in [35] take advantage of stray radio signals emitted from land-based infrastructure with known positions, and [36], [40] take advantage of road semantics such as lane lines and poles, or RSUs with given positions. Alternatively, other research works facilitate localization in GNSS-denied environments through fusion of the onboard sensing apparatus [27], [28], [29], [30], [31], [32], [33], [34].
The main contribution of this paper is the formulation of a localization filtering algorithm that compensates for the positioning error of individual AAVs in large connected networks by integrating the available sensing resources in the network of AAVs, without reliance on land-based position marks or stray radio signals (which constitute the state of the art in GNSS-denied localization, to the best of the author's knowledge). Mitigation of the odometry positioning error is carried out through cooperation with other AAVs in the network and consensus among the AAVs over their true positions. This is especially advantageous in scenarios where flocks of AAVs are flying in unknown isolated airspace, or where access to GNSS signals is denied or spoofing cyberattacks are present. In an urban airspace where stationary anchors are present, the proposed algorithm improves the positioning error through a combination of the AAVs' sensing resources and distribution of the total network error equally among the AAVs, in accordance with the law of large numbers. The cooperative localization technique introduced in this paper is a multi-agent filter that surpasses the accuracy of existing statistical optimization methods.
Cooperative navigation is a relatively new subject in the autonomous navigation literature. Reference [11] proposed cooperative localization in a swarm of mobile sensors using the Covariance Intersection (CI) algorithm. The authors in [12] suggest using multiple AAVs in a cooperative fashion to circumvent the inadequacies of single-AAV three-dimensional occupancy-grid mapping. Reference [13] defines Cooperative Intelligent Transport Systems (C-ITS) and elaborates on fifth-generation (5G) wireless technology to support cooperative mobility. Reference [14] is a recent survey on 5G technology for AAVs. Reference [15] is another recent survey on sensor selection and placement for assisting cooperative mobility. Reference [16] is an excellent recent research work that compares cooperative localization methodologies in modern smart transportation. Cooperative localization in GPS-denied environments is the main focus of the present work, which presents the advantages of cooperative navigation from a different angle, for the purpose of autonomous localization. The authors in [17] focus on ground-based vehicle localization in GPS-denied areas, relying only on information gathered by onboard sensors. They argue that reliable long-term localization is a challenging problem that cannot be robustly solved using pure visual-odometry approaches. They therefore suggest an architecture focused mainly on the combination of visual odometry with additional sources of information, in order to keep the localization errors bounded.
References [18], [19], [20] show similar efforts in visual-inertial and visual-LiDAR sensor fusion to bound estimation errors. The authors in [21] have developed and integrated capabilities such as dynamic mission planning and re-planning in real time, reactive collision-avoidance navigation based on laser information, multi-sensor fusion for accurate pose estimation and altitude filtering, an object-recognizer component based on a Convolutional Neural Network (CNN) model, and an Image-Based Visual Servoing (IBVS) algorithm using deep Reinforcement Learning (RL) for object interaction and following, in a non-supervised machine learning framework. The authors in [22] have used a laser rangefinder, IMU, and vision to control drift errors. They present a synthesis of techniques enabling vision-based autonomous AAV systems, with computer-vision processing modules that exploit visual information and simultaneously perceive obstacles in a GPS-denied environment. Visual data is also used to extract information regarding the drone's attitude and position while exploring the environment in a Simultaneous Localization and Mapping (SLAM) fashion.
The authors in [23] have identified that mapping in dynamic environments can lead to serious errors in the resulting maps, such as spurious objects or misalignments due to localization errors. Nevertheless, several publications in the literature have tested prototype SLAM products in relevant environments. For instance, the authors in [24] have introduced a vision-IMU perception algorithm, in which a vision-aided Inertial Navigation System (INS) is tested in a simulated known environment (software in the loop). Reference [25] deploys laser rangefinders, an IMU, and vision-based sensing for fully GNSS-denied navigation, verified on a prototype in an unknown, cluttered, relevant environment. The authors in [26] have utilized vision, IMU, and sonar sensing for fully GNSS-denied navigation and tested the prototype in a relevant environment. In addition to the above, several other research works in the literature have addressed sensor-fusion AAV navigation in GNSS-denied environments [27], [28], [29], [30], [31], [32], [33], [34]. A more recent class of research work tries to take advantage of stray radio signals to elicit the 3-D location of a given point in space [35].
The conclusion derived from the above literature review is that localization solutions fusing multiple sensing, perception, and cognition techniques yield better performance than IMU sensors alone. For instance, the authors in [36] have recently integrated onboard GNSS and INS systems with the precise localization capabilities of road semantics, such as lane lines and poles, to achieve highly accurate positioning. Furthermore, the authors in [39] have used empirical mode decomposition threshold filtering (EMDTF) and a long short-term memory (LSTM) neural network to provide pseudo-GPS position information during GPS outages, and the authors in [3] combine localization algorithms such as SLAM with Partially Observable Markov Decision Process (POMDP) algorithms into a framework in which the navigation and exploration tasks are modelled as sequential decision problems under uncertainty. Visual-inertial odometry, visual odometry with optical flow, LiDAR, and Phased Array Radio System (PARS) localization methods are among the other complementary localization techniques deployed in GNSS-independent navigation.
With the advent of AAV-assisted Cellular Vehicle-to-Everything (C-V2X) communication technology, AAVs can periodically exchange Cooperative Awareness Messages (containing AAV-specific information such as vehicle speed, position, and predicted path, in accordance with European Telecommunications Standards Institute standards) and Basic Safety Messages (containing vehicle safety-related information, according to the SAE J2735 Intelligent Transportation Systems (ITS) standard) with surrounding network nodes such as other AAVs, ground-based vehicles, and infrastructure. Reference [37] outlines a state-of-the-art peer-to-peer backbone for aerial data sharing. Reference [38] utilizes AAVs as data collectors for exchanging data between IoT devices. The authors in [40] have recently formulated a Road Side Unit (RSU) based cooperative localization scheme for lane-level positioning accuracy of ground-based autonomous vehicles in GNSS-denied environments, using C-V2X technology. The present paper proposes an AAV-assisted C-V2X network of multi-agent AAVs that periodically exchange their perceived positions in space with neighbouring AAVs, in order to correct their estimated positions based on LiDAR-measured air distances, without the necessity of messaging with static infrastructure such as RSUs.
SECTION III.
Problem Statement
Let the undirected graph $\mathcal{G} = (\mathcal{N}, \mathcal{E}, \mathcal{A})$ represent a connected wireless network, where $\mathcal{N}$ is the set of nodes representing the $N$ vehicles (referred to as agents), $\mathcal{E} \subseteq \mathcal{N} \times \mathcal{N}$ is the set of edges, and $\mathcal{A} = [a_{ij}]_{N \times N}$, $a_{ij} \in \{0,1\}$, is the adjacency matrix, where $i \in \{1,\dots,N\}$ denotes the index of the agent. The elements of $\mathcal{A}$ are defined such that $a_{ij} = 1$ if $(i,j) \in \mathcal{E}$, and $a_{ij} = 0$ if $(i,j) \notin \mathcal{E}$. Additionally, let $\mathcal{N}_{i}$ denote the set of neighbours of agent $i$ (i.e., the agents $j$ for which $a_{ij} = 1$). Finally, the Laplacian matrix $L \triangleq [\ell_{ij}] \in \mathbb{R}^{N \times N}$ associated with $\mathcal{G}$ is defined by,
\begin{equation*} \ell_{ii} = \sum_{j=1}^{N} a_{ij} = \sum_{i=1}^{N} a_{ji}, \quad \text{and} \quad \ell_{ij} = -a_{ij}, \; \forall \; i \neq j. \tag{1}\end{equation*}
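As a concrete illustration of these definitions, the following minimal sketch (Python/NumPy, not part of the original formulation) builds the adjacency matrix $\mathcal{A}$ and the Laplacian $L$ of Equation (1) for a small assumed topology, and checks the standard property that the smallest Laplacian eigenvalue is zero, with the second-smallest positive when $\mathcal{G}$ is connected. The 4-agent ring is an illustrative assumption.

```python
import numpy as np

# Assumed 4-agent undirected ring: a_ij = 1 iff (i, j) is an edge.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

# Eq. (1): ell_ii = sum_j a_ij, and ell_ij = -a_ij for i != j.
L = np.diag(A.sum(axis=1)) - A

eigvals = np.sort(np.linalg.eigvalsh(L))   # L is symmetric, so eigvalsh applies
print(eigvals)  # first entry ~0; second entry (lambda_2) > 0 for a connected graph
```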
Additionally, let $\mathbf{x}^{(i)}(t)=[x^{(i)}(t),y^{(i)}(t),z^{(i)}(t)]^{T} \in \mathbb{R}^{3}$ denote the true three-dimensional position of vehicle $i$ at time $t$, with reference to a given coordinate system. It is further assumed that the motion model of AAV $i$ is given by,
\begin{align*} \dot{\mathbf{x}}^{(i)}(t) &= \mathbf{v}^{(i)}(t) \\ \dot{\mathbf{v}}^{(i)}(t) &= \frac{1}{m}\mathbf{u}^{(i)}(t) - \frac{1}{m}G + \frac{1}{m}\mathbf{w}^{(i)}(t) \\ \vartheta^{(i)}(t) &= M\left(\eta^{(i)}\right)\ddot{\eta}^{(i)}(t) + C\left(\eta^{(i)},\dot{\eta}^{(i)}\right)\dot{\eta}^{(i)}(t) \tag{2}\end{align*}
where $m$ is the AAV mass, $\mathbf{v}^{(i)}$ is the three-dimensional airspeed, $G$ is the force of gravity in the inertial reference frame, $\eta^{(i)}=[\theta,\phi,\psi]^{T}$ is the attitude vector of the AAV, and $\mathbf{w}^{(i)}$ is an unknown disturbance force with zero mean, i.e., $\mathbb{E}[\mathbf{w}^{(i)}(t)]=0, \; \forall \; t \in \mathbb{R}^{+}$, and
\begin{align*} \mathbf{u}^{(i)}(t)=f(t)\left[\begin{array}{c} \cos\psi\sin\theta\cos\phi + \sin\psi\sin\phi \\ \sin\psi\sin\theta\cos\phi - \cos\psi\sin\phi \\ \cos\phi\cos\theta \end{array}\right]\end{align*}
is the thrust vector, also in the inertial reference frame, and $\vartheta^{(i)}=[\vartheta_{\phi},\vartheta_{\theta},\vartheta_{\psi}]^{T}$ is the propeller torque vector such that,
\begin{align*} \left[\begin{array}{c} f \\ \vartheta_{\phi} \\ \vartheta_{\theta} \\ \vartheta_{\psi}\end{array}\right]= \left[\begin{array}{cccc} 1 & 1 & 1 & 1 \\ -d_{\phi} & -d_{\phi} & d_{\phi} & d_{\phi} \\ d_{\theta} & -d_{\theta} & d_{\theta} & -d_{\theta} \\ c_{\vartheta} & -c_{\vartheta} & -c_{\vartheta} & c_{\vartheta}\end{array}\right] \left[\begin{array}{c} f_{1_{k}} \\ f_{2_{k}} \\ f_{3_{k}} \\ f_{4_{k}}\end{array}\right] \tag{3}\end{align*}
where $d_{\phi}$, $d_{\theta}$, and $c_{\vartheta}$ are known constants denoting half of the roll motor-to-motor distance, half of the pitch motor-to-motor distance, and the ratio between the thrust force $f$ and its corresponding torque, respectively, and where $M(\eta^{(i)})$ and $C(\eta^{(i)},\dot{\eta}^{(i)})$ represent the diagonal moment-of-inertia tensor and the Coriolis matrix, respectively.
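The mixing relation in Equation (3) maps the four propeller forces to the collective thrust and body torques. The sketch below (Python/NumPy) evaluates that map for illustrative, assumed values of $d_{\phi}$, $d_{\theta}$, $c_{\vartheta}$, and the motor forces; it is a numerical reading of Equation (3), not the paper's implementation.

```python
import numpy as np

# Assumed geometry constants: half motor-to-motor distances [m] and
# thrust-to-torque ratio (illustrative values only).
d_phi, d_theta, c_t = 0.12, 0.12, 0.02

# The mixing matrix of Eq. (3).
mix = np.array([
    [ 1.0,      1.0,      1.0,      1.0     ],
    [-d_phi,   -d_phi,    d_phi,    d_phi   ],
    [ d_theta, -d_theta,  d_theta, -d_theta ],
    [ c_t,     -c_t,     -c_t,      c_t     ],
])

motor_forces = np.array([2.5, 2.4, 2.6, 2.5])   # f_1..f_4 [N], example inputs
f, t_phi, t_theta, t_psi = mix @ motor_forces   # collective thrust and torques
# Inverting `mix` would recover the per-motor forces a controller must command.
```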
Equation (2) can be used to obtain an odometry representation of the AAV's 3-D airspeed, which is subject to errors resulting from onboard-sensor and actuator imperfections. Therefore, one can assume that, given Equation (2) and the onboard IMU sensors, $\bar{\mathbf{v}}^{(i)}(t)=\mathbf{v}^{(i)}(t)+\Delta\mathbf{v}^{(i)}(t)$ is available, where $\Delta\mathbf{v}^{(i)}$ is an unknown zero-mean error. Then the approximate position of the AAV can be derived through an odometry model given by,
\begin{equation*} \bar{\mathbf{x}}^{(i)}(t) = \mathbf{x}^{(i)}\left(t_{0}\right)+\int_{\tau=t_{0}}^{t} \bar{\mathbf{v}}^{(i)}\left(\tau\right) d\tau \tag{4}\end{equation*}
where $\bar{\mathbf{x}}^{(i)}(t)$ denotes the 3-D odometry position of the AAV, and $\mathbf{x}^{(i)}(t_{0})$ denotes the true position of the AAV at some reference time $t_{0}$ at which $\mathbf{x}^{(i)}(t_{0})$ is available. Therefore, the integration of $\Delta\mathbf{v}^{(i)}(t)$ over time in Equation (4) can be regarded as the dominant source of localization error in odometry (also referred to as dead reckoning). The author further denotes $\mathbf{x}(t)=[\mathbf{x}^{(1)}(t),\mathbf{x}^{(2)}(t),\dots,\mathbf{x}^{(N)}(t)]^{T}$ as the true position of the network of AAVs, and $\bar{\mathbf{x}}(t)=[\bar{\mathbf{x}}^{(1)}(t),\bar{\mathbf{x}}^{(2)}(t),\dots,\bar{\mathbf{x}}^{(N)}(t)]^{T}$ as the odometry position estimate of the network.
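The drift mechanism described above is easy to reproduce numerically. The following sketch (Python/NumPy, with assumed noise level, rate, and trajectory) integrates a noisy velocity measurement as in Equation (4) and shows the random-walk growth of the dead-reckoning error.

```python
import numpy as np

rng = np.random.default_rng(0)
Ts, steps = 0.01, 10_000                    # 100 s at 100 Hz (assumed)

v_true = np.tile([1.0, 0.0, 0.0], (steps, 1))             # constant airspeed [m/s]
v_meas = v_true + rng.normal(0.0, 0.05, size=(steps, 3))  # v + Delta v (zero-mean)

x_true = np.cumsum(v_true * Ts, axis=0)     # true trajectory, x(t0) = 0
x_odom = np.cumsum(v_meas * Ts, axis=0)     # Eq. (4): integrated noisy velocity

drift = np.linalg.norm(x_odom - x_true, axis=1)
print(f"dead-reckoning drift after {steps * Ts:.0f} s: {drift[-1]:.3f} m")
```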
Given the above, let $t=kT_{s} \in \mathbb{R}^{+}$, where $T_{s} \in \mathbb{R}^{+}$ is an arbitrarily small sampling period and $k \in \mathbb{N}^{+}$, and assume that $kT_{s}$ represents the set of points at which the true position $\mathbf{x}^{(i)}(kT_{s})$ is required to be estimated from $\bar{\mathbf{x}}^{(i)}(kT_{s})$. Then the objective of this paper is to find $\hat{\mathbf{x}}(kT_{s}+\Delta t) \in \mathbb{R}^{3}$ such that $||\hat{\mathbf{x}}(kT_{s}+\Delta t) - \mathbf{x}(kT_{s})||^{2}$ is minimal, where $\Delta t \in [0,T_{s}]$ represents a small transient time. In order to accomplish this objective, the author considers an LTI filter with an impulse response $h(t)$, and finds an excitation $\hat{\mathbf{u}}(t)$ for the filter, such that,
\begin{equation*} \hat{\mathbf{x}}\left(kT_{s}+\Delta t\right) \triangleq \bar{\mathbf{x}}\left(kT_{s}\right) + \hat{\mathbf{u}}\left(kT_{s}+\Delta t\right) \ast h\left(kT_{s}+\Delta t\right), \tag{5}\end{equation*}
where $\hat{\mathbf{u}}(kT_{s}+\Delta t) \ast h(kT_{s}+\Delta t)$ is the error-compensation term, intended to compensate for the error in the odometry model $\bar{\mathbf{x}}(kT_{s})$, so that $\hat{\mathbf{x}}(kT_{s}+\Delta t) \to \mathbf{x}(kT_{s}), \; \forall \; k$, as $\Delta t \to T_{s}$.
It is intuitive to note the similarity of Equation (5) with the Kalman filter, where,
\begin{equation*} \hat{\mathbf{x}}\left(kT_{s}\right) \triangleq \bar{\mathbf{x}}\left(kT_{s}\right) + K\big(\mathbf{z}\left(kT_{s}\right) - \bar{\mathbf{x}}\left(kT_{s}\right)\big)\end{equation*}
and where $\mathbf{z}(kT_{s})$ is a position observation (if one is assumed to exist), and $K$ is the Kalman filter gain. In Equation (5), the term $\hat{\mathbf{u}}(kT_{s}+\Delta t) \ast h(kT_{s}+\Delta t)$ is used in place of $K(\mathbf{z}(kT_{s}) - \bar{\mathbf{x}}(kT_{s}))$.
However, $\bar{\mathbf{x}}(kT_{s})$ and $\mathbf{x}(kT_{s})$ are stochastic processes, since $\mathbf{x}(kT_{s})$ is affected by random disturbances and unknown actuator errors. Therefore, in a more realistic framework the objective would be,
\begin{equation*} \min_{h(t)} \; \lim_{\Delta t \to T_{s}} \; \mathbb{E}\left[||\hat{\mathbf{x}}\left(kT_{s}+\Delta t\right) - \mathbf{x}\left(kT_{s}\right)||^{2}\right], \; \forall \; i \in \mathcal{N}, \tag{6}\end{equation*}
implying a minimal steady-state error in the mean sense. Objective (6) can also be written along each of the three axes; along the x-axis,
\begin{equation*} \hat{x}\left(kT_{s}+\Delta t\right) \triangleq \hat{u}_{x}\left(kT_{s}+\Delta t\right) \ast h\left(kT_{s}+\Delta t\right) + \bar{x}\left(kT_{s}\right),\end{equation*}
where $\hat{u}_{x}$ is the filter excitation $\hat{\mathbf{u}}$ along the x-axis (to be determined), $\ast$ denotes the convolution operator, and the same applies to the y-axis and z-axis in a similar fashion. However, if $\Delta\mathbf{x}^{(i)}(kT_{s}+\Delta t)$ denotes the filter error, one can write,
\begin{equation*} \hat{\mathbf{x}}^{(i)}\left(kT_{s}+\Delta t\right) = \mathbf{x}^{(i)}\left(kT_{s}\right) + \Delta\mathbf{x}^{(i)}\left(kT_{s}+\Delta t\right), \tag{7}\end{equation*}
and the objective in (6) can be equivalently written as,
\begin{align*} \min_{h(t)} \; \lim_{\Delta t \to T_{s}} \; \mathbb{E}\left[||\Delta\mathbf{x}^{(i)}\left(kT_{s}+\Delta t\right)||^{2}\right], \; \forall \; i \in \mathcal{N}, \; \forall \; k \in \mathbb{N}^{+}, \tag{8}\end{align*}
where $||\Delta\mathbf{x}^{(i)}(kT_{s}+\Delta t)||^{2}=(\Delta x^{(i)}(kT_{s}+\Delta t))^{2} + (\Delta y^{(i)}(kT_{s}+\Delta t))^{2} + (\Delta z^{(i)}(kT_{s}+\Delta t))^{2}$. Clearly, the objective in (8) is satisfied if the steady-state error $\mathbb{E}[(\Delta x^{(i)}(kT_{s}+\Delta t))^{2}]$ is minimized (along the x-axis) by a correct choice of $h(t)$, since the same would then apply to $\Delta y^{(i)}(kT_{s})$ and $\Delta z^{(i)}(kT_{s})$. Therefore, it is sufficient to concentrate on the localization policy for the x-axis only, noting that the same applies to the other two axes in a similar fashion.
SECTION IV.
Assumptions and Definitions
From Equation (4) it follows that the odometry estimate $\bar{x}(kT_{s})$ is a random variable with a zero-mean error, since the velocity error $\Delta\mathbf{v}_{x}(kT_{s})$ is a zero-mean random variable. Consequently, $\hat{x}(kT_{s}+\Delta t)$ is a random variable as well, and $\Delta x(kT_{s}+\Delta t)$ is zero-mean. Furthermore, the AAVs are equipped with onboard rangefinders to determine the relative range to surrounding agents and static obstacles. However, rangefinders (such as LiDARs) are devices with stochastic uncertainties, as explained in [41], [42], [43]. Therefore, given the relative range $\omega^{(i,j)}(kT_{s}) = x^{(i)}(kT_{s}) - x^{(j)}(kT_{s}), \; j \in \mathcal{N}_{i}$, the rangefinder reads $\omega^{(i,j)}(kT_{s}) + \Delta\omega^{(i,j)}(kT_{s})$, where $\Delta\omega^{(i,j)}(kT_{s})$ is the random error that agent $i$ incurs when finding its range to agent $j$, along the x-axis.
Assumption 1:
Let $\Delta t \in [0,T_{s}]$ be sufficiently small so that $\omega^{(i,j)}(kT_{s}+\Delta t) \approx \omega^{(i,j)}(kT_{s})$. This implies that although the vehicles are in continuous motion, their displacement can be considered negligible over $\Delta t$.
Assumption 2:
It is assumed that the $\Delta x^{(i)}(kT_{s})$'s are zero-mean, independent and identically distributed (i.i.d.) random variables for all $i$ and $k$, that is,
\begin{align*} \mathbb{E}\left[\Delta x^{(i)}\left(kT_{s}\right)\,\Delta x^{(j)}\left(kT_{s}\right)\right] = \mathbb{E}\left[\Delta x^{(i)}\left(kT_{s}\right)\right]\,\mathbb{E}\left[\Delta x^{(j)}\left(kT_{s}\right)\right] = 0, \; \forall \; i,j \in \mathcal{N}, \; i \neq j,\end{align*}
and $\lim_{N=|\mathcal{N}| \to \infty} \frac{1}{N}\sum_{i \in \mathcal{N}} \Delta x^{(i)}(kT_{s})=0$.
Assumption 3:
It is further assumed that $\mathbb{E}[\Delta\omega^{(i,j)}(kT_{s})]=0, \; \forall \; i \in \mathcal{N}, \; j \in \mathcal{N}_{i}$, and,
\begin{align*} \mathbb{E}\left[\Delta\omega^{(i,j_{1})}\left(kT_{s}\right)\,\Delta\omega^{(i,j_{2})}\left(kT_{s}\right)\right] = \mathbb{E}\left[\Delta\omega^{(i,j_{1})}\left(kT_{s}\right)\right]\,\mathbb{E}\left[\Delta\omega^{(i,j_{2})}\left(kT_{s}\right)\right] = 0,\end{align*}
for all $i \in \mathcal{N}$ and $j_{1},j_{2} \in \mathcal{N}_{i}$, $j_{1} \neq j_{2}$, implying that the rangefinder uncertainties are mutually independent zero-mean random variables too.
Definition 1:
Given $\Delta x(kT_{s}+\Delta t)=[\Delta x^{(1)}(kT_{s}+\Delta t),\dots,\Delta x^{(N)}(kT_{s}+\Delta t)]^{T}$, the network error of the cooperative localization method along the x-axis, at time step $k$, is defined as,
\begin{align*} \lim_{\Delta t \to T_{s}} \mathbb{E}\left[||\Delta x\left(kT_{s}+\Delta t\right)||^{2}\right] &= \lim_{\Delta t \to T_{s}} \mathbb{E}\left[\Delta x(kT_{s}+\Delta t)^{T}\Delta x(kT_{s}+\Delta t)\right] \\ &\triangleq \sigma^{2}_{\Delta x(kT_{s})}(\text{Coop-Localization}),\end{align*}
where $\Delta x^{(i)}(kT_{s}+\Delta t)=\hat{x}^{(i)}(kT_{s}+\Delta t)-x^{(i)}(kT_{s}+\Delta t)$, and the network error of the odometry localization method along the x-axis, at time step $k$, as,
\begin{equation*} \mathbb{E}\big[||\bar{x}(kT_{s})-x(kT_{s})||^{2}\big] \triangleq \sigma^{2}_{\Delta x(kT_{s})}(\text{Odometry}).\end{equation*}
Additionally, the network sensor error is defined as,
\begin{equation*} \mathbb{E}\left[||\Delta\omega(kT_{s})||^{2}\right] = \sum_{i \in \mathcal{N}} \sum_{j \in \mathcal{N}_{i}} \mathbb{E}\left[(\Delta\omega^{(i,j)}(kT_{s}))^{2}\right] \triangleq \sigma^{2}_{\Delta\omega(kT_{s})},\end{equation*}
and the Cooperative Localization Error Gain (CLEG) along the x-axis is defined as,
\begin{equation*} \textit{CLEG} = \frac{\sigma^{2}_{\Delta x(kT_{s})}(\textit{Coop-Localization})}{\sigma^{2}_{\Delta\omega(kT_{s})}}.\end{equation*}
The cooperative localization error gain is a metric for evaluating the total positioning error in a network. It will be shown later in Section VI that the network error in a cooperative localization scheme depends on the network connectivity and topological diameter, as well as the network size.
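For concreteness, the sketch below (Python/NumPy) estimates the three quantities of Definition 1 by Monte Carlo from synthetic error samples; the network size, neighbourhood size, and error variances are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
N, trials, deg = 20, 5_000, 4               # agents, samples, neighbours (assumed)

dx = rng.normal(0.0, 0.3, size=(trials, N))       # per-agent filter errors [m]
dw = rng.normal(0.0, 0.1, size=(trials, N, deg))  # per-edge rangefinder errors [m]

sigma2_coop = np.mean(np.sum(dx**2, axis=1))        # E[ ||Delta x||^2 ]
sigma2_omega = np.mean(np.sum(dw**2, axis=(1, 2)))  # E[ ||Delta omega||^2 ]
print("empirical CLEG =", sigma2_coop / sigma2_omega)
```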
SECTION V.
Basics of Cooperative Localization in Multi-Agent Networks
For the time being, let the sensor error be ignored, and assume ideal ranging devices for the vehicles, where $\omega^{(i,j)}(kT_{s}) = x^{(i)}(kT_{s}) - x^{(j)}(kT_{s}), \; j \in \mathcal{N}_{i}$, and where the $\omega^{(i,j)}(kT_{s})$'s are measurable for all $i$ and each time $kT_{s}$. In Section VI, sensor errors are taken into account, the CLEG is analyzed for a realistic case, and the network error in a network of AAVs is compared analytically with and without cooperative localization. For now, consider the LTI filter discussed in Equation (5), arranged in accordance with the block diagram in Figure 1 and Theorem 1 below.
Theorem 1:
In a large connected and balanced network with graph $\mathcal{G}$, where $N \to \infty$, the LTI filter with the impulse response $h(kT_{s}+\Delta t) = \varrho(kT_{s})-\varrho(kT_{s}+\Delta t)$ provides accurate positioning in steady state (i.e., $\lim_{N \to \infty} \sigma^{2}_{\Delta x(kT_{s})}(\text{Coop-Localization})=0$), if $\hat{x}^{(i)}(kT_{s})=\bar{x}^{(i)}(kT_{s}), \; \forall \; k \in \mathbb{N}^{+}$, where $\varrho(\cdot)$ is the unit step function, and where the inputs to the filter are depicted in Figure 1.
Proof:
Let $\hat{\mathbf{u}}(kT_{s}+\Delta t)$ in Equation (5) along the x-axis be,
\begin{align*} \hat{u}^{(i)}_{x}(kT_{s}+\Delta t) = \gamma \sum_{j \in \mathcal{N}_{i}} \big(\hat{x}^{(j)}(kT_{s}+\Delta t)-\hat{x}^{(i)}(kT_{s}+\Delta t)+\omega^{(i,j)}(kT_{s})\big) + \bar{\mathbf{v}}^{(i)}_{x}(kT_{s}+\Delta t)\end{align*}
where $x^{(j)}(kT_{s}+\Delta t)-x^{(i)}(kT_{s}+\Delta t)+\omega^{(i,j)}(kT_{s})=0$ (with reference to the Assumptions). One can simply deduce that,
\begin{align*} \sum_{j \in \mathcal{N}_{i}} \big(\hat{x}^{(j)}(kT_{s}+\Delta t)-\hat{x}^{(i)}(kT_{s}+\Delta t)+\omega^{(i,j)}(kT_{s})\big) = \sum_{j \in \mathcal{N}_{i}} \big(\Delta x^{(j)}(kT_{s}+\Delta t)-\Delta x^{(i)}(kT_{s}+\Delta t)\big).\end{align*}
Furthermore, with due attention to the definition of the impulse response $h(t)$, it follows that,
\begin{align*} \hat{u}^{(i)}_{x}(kT_{s}+\Delta t) \ast h(kT_{s}+\Delta t) = \int_{\tau=-\infty}^{\infty} \hat{u}^{(i)}_{x}(\tau-kT_{s}-\Delta t)\,h(\tau)\,d\tau = \int_{kT_{s}}^{kT_{s}+\Delta t} \hat{u}^{(i)}_{x}(\tau)\,d\tau.\end{align*}
Following simple manipulations, the filter output can be written as,
\begin{align*} \hat{x}^{(i)}(kT_{s}+\Delta t) = \bar{x}^{(i)}(kT_{s})+\int_{kT_{s}}^{kT_{s}+\Delta t} \bar{\mathbf{v}}^{(i)}_{x}(\tau)\,d\tau +\int_{kT_{s}}^{kT_{s}+\Delta t} \gamma \sum_{j \in \mathcal{N}_{i}} \big(\Delta x^{(j)}(\tau)-\Delta x^{(i)}(\tau)\big)\,d\tau.\end{align*}
However, since the theorem assumes that $\hat{x}^{(i)}(kT_{s})=\bar{x}^{(i)}(kT_{s})$, one can use Equation (7) to write,
\begin{equation*} \bar{x}^{(i)}(kT_{s}) = x^{(i)}(kT_{s}) + \Delta x^{(i)}(kT_{s}), \tag{9}\end{equation*}
and, by differentiation and substitution for $\hat{x}^{(i)}$ and $\bar{x}^{(i)}$, it follows that,
\begin{align*} & \dot{x}^{(i)}(kT_{s}+\Delta t) + \Delta\dot{x}^{(i)}(kT_{s}+\Delta t) = \dot{x}^{(i)}(kT_{s}) + \Delta\dot{x}^{(i)}(kT_{s}) \\ & \;\quad + \bar{\mathbf{v}}^{(i)}_{x}(kT_{s}+\Delta t) - \bar{\mathbf{v}}^{(i)}_{x}(kT_{s}) \\ & \;\quad + \gamma \sum_{j \in \mathcal{N}_{i}} \big(\Delta x^{(j)}(kT_{s}+\Delta t)-\Delta x^{(i)}(kT_{s}+\Delta t)\big) \\ & \;\quad - \gamma \sum_{j \in \mathcal{N}_{i}} \big(\Delta x^{(j)}(kT_{s})-\Delta x^{(i)}(kT_{s})\big),\end{align*}
and noting that $\dot{x}^{(i)}(kT_{s}+\Delta t)-\bar{\mathbf{v}}^{(i)}_{x}(kT_{s}+\Delta t)=\dot{x}^{(i)}(kT_{s})-\bar{\mathbf{v}}^{(i)}_{x}(kT_{s})$ for a small $\Delta t$, one can write,
\begin{equation*} \Delta\dot{x}^{(i)}(kT_{s}) = \gamma \sum_{j \in \mathcal{N}_{i}} \big(\Delta x^{(j)}(kT_{s})-\Delta x^{(i)}(kT_{s})\big),\end{equation*}
and,
\begin{align*} \Delta\dot{x}^{(i)}(kT_{s}+\Delta t) = \gamma \sum_{j \in \mathcal{N}_{i}} \big(\Delta x^{(j)}(kT_{s}+\Delta t)-\Delta x^{(i)}(kT_{s}+\Delta t)\big),\end{align*}
which in vector form can be written as,
\begin{equation*} \Delta\dot{x}(kT_{s}+\Delta t)=-\gamma\,L\,\Delta x(kT_{s}+\Delta t). \tag{10}\end{equation*}
One can observe that Equation (10) is a consensus control protocol for the error vector $\Delta x(kT_{s}+\Delta t)$, with the solution,
\begin{equation*} \Delta x(kT_{s}+\Delta t)=e^{-\gamma\,L\,\Delta t}\,\Delta x(kT_{s}). \tag{11}\end{equation*}
Therefore, given a balanced graph $\mathcal{G}$, the steady-state output of the filter reduces to,
\begin{equation*} \lim_{\gamma\Delta t \to \infty} \Delta x(kT_{s}+\Delta t) = \frac{1}{N} \sum_{i=1}^{N} \Delta x^{(i)}(kT_{s})\,\mathbf{1}_{N},\end{equation*}
where $\gamma$ is assumed to be a very large positive scalar and $\Delta t \to T_{s}$, where,
\begin{equation*} \Delta x^{(i)}(kT_{s})=\hat{x}^{(i)}(kT_{s})-x^{(i)}(kT_{s})=\bar{x}^{(i)}(kT_{s})-x^{(i)}(kT_{s}),\end{equation*}
and where $\mathbf{1}_{N}$ is a column vector of ones with $N$ rows. Furthermore,
\begin{align*} & \lim_{\Delta t \to T_{s}} ||\Delta x(kT_{s}+\Delta t)||^{2} \\ & \;= \frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{N} \left[(\bar{x}^{(i)}(kT_{s})-x^{(i)}(kT_{s}))(\bar{x}^{(j)}(kT_{s})-x^{(j)}(kT_{s}))\right],\end{align*}
and considering that for $i \neq j$, $\mathbb{E}[(\bar{x}^{(i)}(kT_{s})-x^{(i)}(kT_{s}))(\bar{x}^{(j)}(kT_{s})-x^{(j)}(kT_{s}))]=0$, it follows that,
\begin{align*} \lim_{\Delta t \to T_{s}} \mathbb{E}\left[||\Delta x(kT_{s}+\Delta t)||^{2}\right] = \frac{1}{N} \sum_{i=1}^{N} \mathbb{E}\left[(\bar{x}^{(i)}(kT_{s})-x^{(i)}(kT_{s}))^{2}\right].\end{align*}
Therefore, in steady state,
\begin{equation*} \sigma^{2}_{\Delta x(kT_{s})}(\text{Coop-Localization}) = \frac{1}{N}\,\sigma^{2}_{\Delta x(kT_{s})}(\text{Odometry}) \tag{12}\end{equation*}
implying that $\lim_{N \to \infty} \sigma^{2}_{\Delta x(kT_{s})}(\text{Coop-Localization})=0$, which completes the proof of the theorem.■
Theorem 1 implies that the filter in Figure 1 takes the samples $\bar{x}^{(i)}(kT_{s})$ as initial values for $\hat{x}^{(i)}(kT_{s})$ (at the beginning of each time interval $[kT_{s},kT_{s}+\Delta t], \; k \in \mathbb{N}^{+}$), and as $\Delta t \to T_{s}$, $\hat{x}^{(i)}(kT_{s}+\Delta t)$ gets closer and closer to the true position $x^{(i)}(kT_{s})$. Recall that $\gamma\Delta t \to \infty$ is practically equivalent to $\Delta t \to T_{s}$, since a sufficiently large $\gamma \in \mathbb{R}^{+}$ has been selected. An optimal value for $\gamma\Delta t$ will be derived later in order to achieve a minimal network error when rangefinder sensor error is present.
Theorem 1 implies that cooperative localization reduces the mean-square error of the position estimates of a group of vehicles in a connected network by a factor of $N$, if the vehicles exchange their filtered positions $\hat{x}^{(i)}(kT_{s}+\Delta t), \; \Delta t \in [0,T_{s}]$ (in accordance with Figure 1), as part of the cooperative awareness messages, rather than the usual $\bar{x}^{(i)}(kT_{s})$ suggested by the SAE J2735 Intelligent Transportation Systems standards. When the number of agents $N$ is very large, one has $\lim_{\gamma\Delta t \to \infty} \mathbb{E}[(\Delta x^{(i)}(kT_{s}+\Delta t))^{2}] \rightarrow 0, \; \forall \; i \in \mathcal{N}$, implying that $\Delta x^{(i)}(kT_{s}+\Delta t) \rightarrow 0$ as $\Delta t \to T_{s}$, which means precise localization. The cooperative localization scheme for a network of three vehicles is illustrated in Figure 2.
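A minimal simulation sketch of this result (Python/NumPy, with an assumed ring-plus-chords topology and unit-variance odometry errors) integrates the consensus dynamics of Equation (10) with ideal ranging and shows the per-agent mean-square error collapsing toward the $1/N$ factor of Equation (12).

```python
import numpy as np

rng = np.random.default_rng(2)
N, gamma, dt, steps = 50, 50.0, 1e-3, 2_000   # assumed parameters

# Assumed connected topology: ring plus longer-range chords.
A = np.zeros((N, N))
for i in range(N):
    for j in (i + 1, i + 7):
        A[i, j % N] = A[j % N, i] = 1.0
Lap = np.diag(A.sum(axis=1)) - A

dx = rng.normal(0.0, 1.0, size=N)             # initial odometry errors, sigma = 1
mse0 = np.mean(dx**2)
for _ in range(steps):                        # forward-Euler integration of Eq. (10)
    dx = dx - gamma * dt * (Lap @ dx)

print("per-agent MSE before/after consensus:", mse0, np.mean(dx**2))
# Every entry converges to the average of the initial errors, whose variance
# scales as 1/N, consistent with Eq. (12).
```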
It is worth noting that linear statistical methods are also available for reducing the localization error: one can optimize the positioning error by means of a minimum mean-square-error (MMSE) filter, obtaining $\min_{H_{k}} \mathbb{E}[||x(kT_{s}) - H_{k}\bar{x}(kT_{s})||^{2}]$ directly from the $\bar{x}^{(i)}(kT_{s})$'s. In that case, a centralized control unit would be required to process the odometry-estimated positions of all vehicles, find the $H_{k}$ that provides a more accurate position, namely $z(kT_{s})=H_{k}\bar{x}(kT_{s})$, and transmit $z^{(i)}(kT_{s})$ back to each agent as an improved position estimate to use for navigation purposes instead of $\bar{x}^{(i)}(kT_{s})$. This is illustrated in Figure 3.
It is later shown that the proposed distributed cooperative localization in Figure 2 outperforms the centralized MMSE cooperative localization in Figure 3 by a factor of $N/2$.
SECTION VI.
Theoretical Results
Theorem 1 introduces the main philosophy behind cooperative localization. Within a limited time duration $\Delta t$, however, there is an error associated with the transient response of the LTI filter and the insufficient time for consensus; this is quantified in Theorem 2 below.
Theorem 2:
In any connected and balanced graph $\mathcal{G}$, where $N \rightarrow \infty$, and where $\Delta\omega^{(i,j)}(kT_{s}) \approx 0$, and given a set of initial odometry estimates $\bar{x}(kT_{s})=[\bar{x}^{(1)}(kT_{s}),\dots,\bar{x}^{(N)}(kT_{s})]^{T}$, cooperative localization guarantees an arbitrarily small mean-sense positioning error $\epsilon = e^{-\gamma\lambda_{2}\Delta t}\,\sigma_{\Delta x(kT_{s})}(\text{Odometry})$, where,
\begin{equation*} \sigma^{2}_{\Delta x(kT_{s})}(\textit{Coop-Localization}) < \epsilon^{2},\end{equation*}
and where $\lambda_{2}$ is the second-smallest eigenvalue of $L$.
Proof:
The proof begins with Equation (10). One can write,
\begin{align*} \Delta x(kT_{s}+\Delta t) = \frac{1}{N} \sum_{i=1}^{N} \Delta x^{(i)}(kT_{s})\,\mathbf{1}_{N} + \sum_{j=2}^{N} v_{j}\,e^{-\gamma\lambda_{j}\Delta t}\,v^{T}_{j}\,\Delta x(kT_{s}), \tag{13}\end{align*}
where $\lambda_{j}$ and $v_{j}$ are the $j^{th}$ eigenvalue of $L$ and its corresponding eigenvector, respectively, and $\mathbf{1}_{N} = [1,\dots,1]^{T}_{1 \times N}$. For a sufficiently large $N$ one has $\frac{1}{N}\sum_{i=1}^{N} \Delta x^{(i)}(kT_{s})\,\mathbf{1}_{N} \approx \mathbb{E}[\Delta x^{(i)}(kT_{s})]\,\mathbf{1}_{N} = \mathbf{0}_{N}$, and Equation (13) reduces to,
\begin{equation*} \Delta x(kT_{s}+\Delta t) = \sum_{i=2}^{N} v_{i}\,e^{-\gamma\lambda_{i}\Delta t}\,v^{T}_{i}\,\Delta x(kT_{s}), \tag{14}\end{equation*}
and,
\begin{align*} & \Delta x(kT_{s}+\Delta t)^{T}\,\Delta x(kT_{s}+\Delta t) \\ & \;=\Delta x(kT_{s})^{T} \left(\sum_{i=2}^{N}\sum_{j=2}^{N} v_{j}\,e^{-\gamma\lambda_{j}\Delta t}\,v^{T}_{j}\,v_{i}\,e^{-\gamma\lambda_{i}\Delta t}\,v^{T}_{i}\right) \Delta x(kT_{s}),\end{align*}
where $v^{T}_{j}v_{i} = 0, \; \forall \; i \neq j$, and $v^{T}_{i}v_{i} = 1, \; \forall \; i$. Therefore, one can write,
\begin{align*} & \Delta x(kT_{s}+\Delta t)^{T}\,\Delta x(kT_{s}+\Delta t) \\ & \;=\Delta x(kT_{s})^{T} \left(\sum_{i=2}^{N} v_{i}\,e^{-2\gamma\lambda_{i}\Delta t}\,v^{T}_{i}\right) \Delta x(kT_{s}).\end{align*}
However, $e^{-2\gamma\lambda_{i}\Delta t} \le e^{-2\gamma\lambda_{2}\Delta t}$ for all $i \ge 2$. Therefore,
\begin{align*} \Delta x(kT_{s}+\Delta t)^{T}\,\Delta x(kT_{s}+\Delta t) \le \Delta x(kT_{s})^{T}\,e^{-2\gamma\lambda_{2}\Delta t}\,\Delta x(kT_{s}).\end{align*}
Also, considering that $\mathbb{E}[\Delta x(kT_{s})^{T}\Delta x(kT_{s})] = \sigma^{2}_{\Delta x(kT_{s})}(\text{Odometry})$, it follows that,
\begin{align*} \mathbb{E}\left[\Delta x(kT_{s}+\Delta t)^{T}\,\Delta x(kT_{s}+\Delta t)\right] < e^{-2\gamma\lambda_{2}\Delta t}\,\sigma^{2}_{\Delta x(kT_{s})}(\text{Odometry}),\end{align*}
or,
\begin{align*} \sigma^{2}_{\Delta x(kT_{s})}(\text{Coop-Localization}) < e^{-2\gamma\lambda_{2}\Delta t}\,\sigma^{2}_{\Delta x(kT_{s})}(\text{Odometry}), \tag{15}\end{align*}
and finally, assuming,
\begin{equation*} \epsilon = e^{-\gamma\lambda_{2}\Delta t}\,\sigma_{\Delta x(kT_{s})}(\text{Odometry}), \tag{16}\end{equation*}
it follows that $\mathbb{E}[(\hat{x}^{(i)}(kT_{s}+\Delta t) - x^{(i)}(kT_{s}))^{2}] \le \epsilon^{2}$, which completes the proof of the theorem.■
For cases with a limited number of agents, integrating Theorems 1 and 2, it follows that,
\begin{align*} \sigma^{2}_{\Delta x(kT_{s})}(\text{Coop-Localization}) < \left(\frac{1}{N}+e^{-2\gamma\lambda_{2}\Delta t}\right)\sigma^{2}_{\Delta x(kT_{s})}(\text{Odometry}), \tag{17}\end{align*}
and one has to select $\gamma\Delta t$ such that $e^{-2\gamma\lambda_{2}\Delta t}$ is negligible for all possible network topologies. The significance of Theorem 2 can be appreciated by considering the example of an AAV's vertical landing in a network where GNSS service is denied: Equation (17) can then be used to evaluate the minimum admissible consensus interval $\Delta t$ for landing without violating the perimeter of the landing spot. Additionally, one can use Theorems 1 and 2 to evaluate the effect of spoofing cyberattacks on a given AAV, say $m$, in a connected network of AAVs performing cooperative localization. Lemma 1 explains how cooperative localization can mitigate spoofing cyberattacks on a fleet of vehicles.
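The bound in Equation (17) is easy to evaluate for a given topology. The sketch below (Python/NumPy, assumed ring topology) computes the algebraic connectivity $\lambda_{2}$ and shows how large $\gamma\Delta t$ must be before the transient term $e^{-2\gamma\lambda_{2}\Delta t}$ becomes negligible next to $1/N$.

```python
import numpy as np

N = 50
A = np.zeros((N, N))
for i in range(N):                           # assumed ring topology
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
Lap = np.diag(A.sum(axis=1)) - A

lam2 = np.sort(np.linalg.eigvalsh(Lap))[1]   # second-smallest eigenvalue of L
for g_dt in (1.0, 10.0, 100.0):
    bound = 1.0 / N + np.exp(-2.0 * g_dt * lam2)   # Eq. (17) error-variance ratio
    print(f"gamma*dt = {g_dt:6.1f}: bound = {bound:.4f}")
```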
Lemma 1:
Let $\Delta\tilde{x}^{(m)}(kT_{s})$ denote the positioning displacement induced on agent $m \in \mathcal{N}$ as a result of a spoofing cyberattack on $m$, where the agents are connected in a balanced graph $\mathcal{G}$ and perform cooperative localization. In this case, if the spoofed position of agent $m$ is characterized as $x^{(m)}(kT_{s})+\Delta\tilde{x}^{(m)}(kT_{s})$, then cooperative localization induces the smaller error $\frac{1}{N}\Delta\tilde{x}^{(m)}(kT_{s})$ on all the agents in $\mathcal{N}$, including $m$.
The proof of the lemma is omitted due to space constraints. However, this phenomenon entails a significant benefit in applications where resilience to spoofing cyberattacks is a concern. For instance, one may imagine a fleet of naval vessels, where a large number of miniature drones are deployed to navigate along with the vessels above a certain altitude and form a cyberattack shield, distributing and absorbing the attack as implied by Lemma 1.
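The $1/N$ attenuation claimed by Lemma 1 can be checked directly in simulation. In the sketch below (Python/NumPy, assumed ring topology and spoof magnitude), a displacement injected at one agent is averaged out by the consensus dynamics of Equation (10), leaving each agent with roughly $\frac{1}{N}$ of the injected error.

```python
import numpy as np

N, gamma, dt, steps, m = 25, 50.0, 1e-3, 4_000, 3   # assumed parameters

A = np.zeros((N, N))
for i in range(N):                           # assumed ring topology
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
Lap = np.diag(A.sum(axis=1)) - A

dx = np.zeros(N)
dx[m] = 10.0                                 # spoof displaces agent m by 10 m
for _ in range(steps):                       # consensus dynamics, Eq. (10)
    dx = dx - gamma * dt * (Lap @ dx)

print(dx.round(3))                           # every entry -> 10/N = 0.4 m
```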
In the next section, the effect of rangefinder errors is taken into account, and new bounds on the network error $\sigma^{2}_{\Delta x(kT_{s})}(\text{Coop-Localization})$ are introduced. Before that, Lemma 2 introduces an alternative scenario that yields accurate localization through cooperation in an ideal setting (again neglecting sensor errors), with the aid of an anchor node. An "anchor node" is a stationary or mobile agent with a known true position. A practical application arises in Advanced Air Mobility (AAM), where large numbers of AAVs share urban airspace while GPS signals are obscured for a portion of the traffic and available to the rest. It will be shown that under this condition every vehicle in the AAM could achieve accurate localization, as long as the network is connected and a cooperative localization control protocol is applied in the network.
Lemma 2:
Let $\mathcal{G}$ denote a connected and balanced graph of a multi-agent system, and let $\Delta x^{(m)}(kT_{s})=0, \; \forall \; k \in \mathbb{N}$, for an arbitrary agent $m$ (assuming $\Delta\omega^{(i,j)}(kT_{s}) \approx 0, \; \forall \; i,j \in \mathcal{N}$). In this case,
\begin{equation*} \lim_{\Delta t \to T_{s}} \; \hat{x}^{(i)}(kT_{s}+\Delta t) \rightarrow x^{(i)}(kT_{s}), \; \forall \; i \in \mathcal{N},\end{equation*}
if all the agents except agent $m$ take part in the cooperative localization process (i.e., if $\hat{x}^{(m)}(kT_{s}+\Delta t)=\hat{x}^{(m)}(kT_{s}), \; \forall \; \Delta t$).
The proof is provided in the author's earlier publication [46]. Agents $m$ with deterministic positions, i.e., $\Delta x^{(m)}(kT_{s})=0$, are referred to as anchor nodes, as explained before. Any cooperative localization setting with one or more anchor nodes emulates a network with an infinite number of agents.
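The effect of an anchor can also be reproduced numerically. In the sketch below (Python/NumPy, assumed ring topology and noise), agent $m$ holds $\Delta x^{(m)} = 0$ and never updates its estimate; consensus then drives every agent's error toward zero rather than toward the network average, as Lemma 2 states.

```python
import numpy as np

rng = np.random.default_rng(3)
N, gamma, dt, steps, m = 20, 50.0, 1e-3, 5_000, 0   # assumed parameters

A = np.zeros((N, N))
for i in range(N):                           # assumed ring topology
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
Lap = np.diag(A.sum(axis=1)) - A

dx = rng.normal(0.0, 1.0, size=N)
dx[m] = 0.0                                  # anchor: true position known
for _ in range(steps):
    upd = -gamma * dt * (Lap @ dx)
    upd[m] = 0.0                             # anchor never updates its estimate
    dx += upd

print("max |Delta x| with one anchor:", np.abs(dx).max())  # -> ~0 for all agents
```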
The ideas introduced in this section have neglected sensor errors in order to highlight the conceptual merits of cooperative localization. The next section examines a more practical aspect of cooperative localization by including the sensor error in the derivations, and derives new conditions to attain the optimal error gain and to minimize the network localization error.
One can simply modify Equation (10) to account for $\Delta\omega^{(i,j)}(kT_{s})$. In this case,
\begin{align*} \Delta\dot{x}(kT_{s}+\Delta t) = -\gamma\,L\,\Delta x(kT_{s}+\Delta t) + \gamma\,\Delta\omega(kT_{s}), \; \forall \; k \in \mathbb{N}. \tag{18}\end{align*}
It may be noted that Equation (18) is unstable due to the rank deficiency of $L$. Therefore, $\gamma\,\Delta t$ has to be chosen such that Equation (18) results in a minimal $\sigma^{2}_{\Delta x(kT_{s})}(\text{Coop-Localization})$. If $\Delta t$ is a sufficiently small time interval, so that one can assume negligible mobility for the vehicles during $\Delta t$, then $\omega(kT_{s}+\Delta t)$ can be assumed constant throughout $[kT_{s},kT_{s}+\Delta t], \; \forall \; k$. Therefore, it is sufficient to read $\omega(kT_{s})$ once in each time step $kT_{s}$ and assume a constant $\Delta\omega(kT_{s}+\Delta t)$. The remaining problem is then the optimization of $\gamma\Delta t$, so that the CLEG is minimized.
Theorem 3:
Let $\mathcal{G}$ denote a connected multi-agent system with $N$ agents. The network error gain is minimal if $\gamma\lambda_{2}\Delta t = N$.
Note that in this case Equation (16) simply reduces to $\epsilon = e^{-N}\,\sigma_{\Delta x(kT_{s})}(\text{Odometry})$.
Proof:
The solution of Equation (18) is,\begin{align*}& \Delta x(k T_{s}+\Delta t) = e^{- \gamma L \Delta t} \Delta x(k T_{s}) \\& \; \quad + \int _{\tau =kT_{s}}^{kT_{s}+\Delta t} e^{- \gamma L (kT_{s}+\Delta t-\tau )} \Delta \omega (k T_{s}) \; d \tau \\& \;=e^{- \gamma L \Delta t} \Delta x(k T_{s}) + \int _{\tau =0}^{\Delta t} e^{- \gamma \; L \tau } \Delta \omega (k T_{s}) \; d \tau . \tag {19}\end{align*}
View Source
\begin{align*}& \Delta x(k T_{s}+\Delta t) = e^{- \gamma L \Delta t} \Delta x(k T_{s}) \\& \; \quad + \int _{\tau =kT_{s}}^{kT_{s}+\Delta t} e^{- \gamma L (kT_{s}+\Delta t-\tau )} \Delta \omega (k T_{s}) \; d \tau \\& \;=e^{- \gamma L \Delta t} \Delta x(k T_{s}) + \int _{\tau =0}^{\Delta t} e^{- \gamma \; L \tau } \Delta \omega (k T_{s}) \; d \tau . \tag {19}\end{align*}
Let v_{1}={}\frac {1}{\sqrt {N}} \mathbf {1}, v_{2}, {\dots }, v_{N}
, denote the right set of eigenvectors of L, corresponding to the eigenvalues \lambda _{1}, \lambda _{2}, {\dots }, \lambda _{N}
, respectively. Then,\begin{align*} e^{- \gamma \; L \Delta t}=& \frac {1}{N} \mathbf {1} e^{- \gamma \; \lambda _{1} \Delta t} \mathbf {1}^{T} + v_{2} e^{- \gamma \; \lambda _{2} \Delta t} v^{T}_{2} \\& {}+ {\dots }+ v_{N} e^{- \gamma \; \lambda _{N} \Delta t} v^{T}_{N}, \tag {20}\end{align*}
View Source
\begin{align*} e^{- \gamma \; L \Delta t}=& \frac {1}{N} \mathbf {1} e^{- \gamma \; \lambda _{1} \Delta t} \mathbf {1}^{T} + v_{2} e^{- \gamma \; \lambda _{2} \Delta t} v^{T}_{2} \\& {}+ {\dots }+ v_{N} e^{- \gamma \; \lambda _{N} \Delta t} v^{T}_{N}, \tag {20}\end{align*}
where \lambda _{1}=0
. Integration from 0 to \Delta t
, results,\begin{equation*} \int _{\tau =0}^{\Delta t} e^{- \gamma \; L \tau } d \tau = \frac {\Delta t}{N} \mathbf {1} \mathbf {1}^{T}+ \sum _{i=2}^{N} v_{i} \frac {1 - e^{- \gamma \lambda _{i} \Delta t}}{ \lambda _{i}} v^{T}_{i},\end{equation*}
View Source
\begin{equation*} \int _{\tau =0}^{\Delta t} e^{- \gamma \; L \tau } d \tau = \frac {\Delta t}{N} \mathbf {1} \mathbf {1}^{T}+ \sum _{i=2}^{N} v_{i} \frac {1 - e^{- \gamma \lambda _{i} \Delta t}}{ \lambda _{i}} v^{T}_{i},\end{equation*}
and Equation (19) reduces to,\begin{align*} \Delta x(k T_{s}+\Delta t) =& \frac {1}{N} \mathbf {1} \mathbf {1}^{T} \Delta x(k T_{s}) + \frac {\gamma \Delta t}{N} \mathbf {1} \mathbf {1}^{T} \; \Delta \omega (k T_{s}) \\& {}+ \sum _{i=2}^{N} v_{i} e^{- \gamma \lambda _{i} \Delta t } v^{T}_{i} \Delta x(k T_{s}) \\& {}+ \sum _{i=2}^{N} v_{i} \frac {1 - e^{- \gamma \lambda _{i} \Delta t }}{\lambda _{i}} v^{T}_{i} \Delta \omega (k T_{s}). \tag {21}\end{align*}
View Source
\begin{align*} \Delta x(k T_{s}+\Delta t) =& \frac {1}{N} \mathbf {1} \mathbf {1}^{T} \Delta x(k T_{s}) + \frac {\gamma \Delta t}{N} \mathbf {1} \mathbf {1}^{T} \; \Delta \omega (k T_{s}) \\& {}+ \sum _{i=2}^{N} v_{i} e^{- \gamma \lambda _{i} \Delta t } v^{T}_{i} \Delta x(k T_{s}) \\& {}+ \sum _{i=2}^{N} v_{i} \frac {1 - e^{- \gamma \lambda _{i} \Delta t }}{\lambda _{i}} v^{T}_{i} \Delta \omega (k T_{s}). \tag {21}\end{align*}
For a sufficiently large \gamma \lambda _{2} \Delta t, where one can consider e^{- \gamma \lambda _{i} \Delta t }\le e^{- \gamma \lambda _{2} \Delta t } \approx 0, Equation (21) further reduces to,\begin{align*} \Delta x(k T_{s} +\Delta t)~\approx & \frac {1}{N} \mathbf {1} \mathbf {1}^{T} \Delta x(k T_{s}) + \frac { \gamma \Delta t}{N} \mathbf {1} \mathbf {1}^{T} \; \Delta \omega (k T_{s}) \\& {} + \sum _{i=2}^{N} v_{i} \frac {1}{\lambda _{i}} v^{T}_{i} \Delta \omega (k T_{s}). \tag {22}\end{align*}
The term \frac { \gamma \Delta t}{N} \mathbf {1} \mathbf {1}^{T} + \sum _{i=2}^{N} v_{i} \frac {1}{\lambda _{i}} v^{T}_{i} can be further simplified, considering that,\begin{align*} \frac {\gamma \Delta t}{N} \mathbf {1} \mathbf {1}^{T} = \left [{{\mathbf {1} \; v_{2} \; {\dots }\; v_{N}}}\right ] \left [{{ \begin{array}{cccc} \frac {\gamma \cdot \Delta t}{N} & 0 & {\dots }& 0 \\ 0 & 0 & {\dots }& 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & {\dots }& 0\end{array} }}\right ] \left [{{ \begin{array}{c} \mathbf {1}^{T} \\ v^{T}_{2} \\ \vdots \\ v^{T}_{N}\end{array} }}\right ] \end{align*}
and,\begin{align*} \sum _{i=2}^{N} v_{i} \frac {1}{\lambda _{i}} v^{T}_{i}= \left [{{\mathbf {1} \; v_{2} \; {\dots }\; v_{N}}}\right ] \left [{{ \begin{array}{cccc} 0 & 0 & {\dots }& 0 \\ 0 & \frac {1}{\lambda _{2}} & {\dots }& 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & {\dots }& \frac {1}{\lambda _{N}}\end{array} }}\right ] \left [{{ \begin{array}{c} \mathbf {1}^{T} \\ v^{T}_{2} \\ \vdots \\ v^{T}_{N}\end{array} }}\right ]. \end{align*}
However, L = V \Lambda V^{-1} can be written as,\begin{align*} L= \left [{{\mathbf {1} \; v_{2} \; {\dots }\; v_{N}}}\right ] \left [{{ \begin{array}{cccc} \lambda _{1}=0 & 0 & {\dots }& 0 \\ 0 & \lambda _{2} & {\dots }& 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & {\dots }& \lambda _{N}\end{array} }}\right ] \left [{{ \begin{array}{c} \mathbf {1}^{T} \\ v^{T}_{2} \\ \vdots \\ v^{T}_{N}\end{array} }}\right ], \end{align*}
where V is the matrix of right eigenvectors of L such that V^{-1}=V^{T} (in a balanced network), and \Lambda is the diagonal matrix of the eigenvalues of L associated with V. In a similar way one can denote,\begin{equation*} \frac {\gamma \Delta t}{N} \mathbf {1} \mathbf {1}^{T} + \sum _{i=2}^{N} v_{i} \frac {1}{\lambda _{i}} v^{T}_{i} \triangleq L^{-1}_{\gamma }\end{equation*}
where L_{\gamma } is a full-rank version of L, that is,\begin{align*} L_{\gamma } = \left [{{\mathbf {1} \; v_{2} \; {\dots }\; v_{N}}}\right ] \left [{{ \begin{array}{cccc} \frac {N}{\gamma \cdot \Delta t} & 0 & {\dots }& 0 \\ 0 & \lambda _{2} & {\dots }& 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & {\dots }& \lambda _{N}\end{array} }}\right ] \left [{{ \begin{array}{c} \mathbf {1}^{T} \\ v^{T}_{2} \\ \vdots \\ v^{T}_{N}\end{array} }}\right ].\end{align*}
In conclusion, Equation (22) can be written as,\begin{equation*} \Delta x(k T_{s}+\Delta t)~\approx \frac {1}{N} \mathbf {1} \mathbf {1}^{T} \Delta x(k T_{s}) + L^{-1}_{\gamma } \Delta \omega (k T_{s}), \tag {23}\end{equation*}
where L_{\gamma } depends on \gamma \Delta t and the eigenstructure of L. Equation (23) reveals that the total network error in a cooperative localization scenario is composed of two terms, namely: (1) the mean network error,\begin{equation*} \frac {1}{N} \mathbf {1} \mathbf {1}^{T} \Delta x(k T_{s})= \frac {\sum _{i=1}^{N} \Delta x^{(i)}(k T_{s})}{N} \mathbf {1}, \tag {24}\end{equation*}
and (2) the network sensor error,\begin{equation*} L^{-1}_{\gamma } \Delta \omega (k T_{s}) = L^{-1}_{\gamma } \sum _{j \in {\mathcal {N}}_{i}} \Delta \omega ^{(i,j)}(k T_{s}). \tag {25}\end{equation*}
Considering that \mathbb {E}[\Delta x^{T}(k T_{s}) \Delta \omega (k T_{s})]=0, and \mathbb {E}[\Delta x^{(i)}(k T_{s}) \Delta x^{(j)}(k T_{s})]=0, one can obtain the total network error \sigma ^{2}_{\Delta x(k T_{s})} (\text {Coop-Localization}), as,\begin{align*}& \sigma ^{2}_{\Delta x(k T_{s})} (\text {Coop-Localization}) \\& \;= \frac {\sigma ^{2}_{\Delta x(k T_{s})} (\text {Odometry})}{N} \\& \;\quad + \mathbb {E} \; \left [{{ \Delta \omega ^{T}(k T_{s}) L^{-2}_{\gamma } \Delta \omega (k T_{s}) }}\right ],\end{align*}
since (L^{-1}_{\gamma })^{T} L^{-1}_{\gamma } = L^{-2}_{\gamma }. Consequently, an upper bound for the total network error can be obtained as,\begin{align*} \sigma ^{2}_{\Delta x(k T_{s})} (\text {Coop-Localization})\le & \frac {\sigma ^{2}_{\Delta x(k T_{s})} (\text {Odometry})}{N} \\& {}+ \frac { \sigma ^{2}_{\Delta \omega (k T_{s})}}{ \lambda ^{2}_{min} (L_{\gamma })}. \tag {26}\end{align*}
For a sufficiently large network, the first term on the right hand side of Inequality (26) diminishes, and one can deduce an upper bound for the network error gain as,\begin{equation*} \frac {\sigma ^{2}_{\Delta x(k T_{s})} (\text {Coop-Localization})}{\sigma ^{2}_{\Delta \omega (k T_{s})}} \le \frac { 1}{ \lambda ^{2}_{min} (L_{\gamma })}, \tag {27}\end{equation*}
when N \to \infty, which provides a performance metric for a cooperative localization application. Optimal performance can be expected by maximizing \lambda _{min} (L_{\gamma }); for a given N and \lambda _{2}, this can be guaranteed if \frac {N}{\gamma \Delta t} \ge \lambda _{2}, or N \ge \gamma \lambda _{2} \Delta t. In the meantime, one may note from Theorem 2 and Equation (16) that \epsilon = e^{- \gamma \lambda _{2} \Delta t } \sigma _{\Delta x(k T_{s})} (\text {Odometry}), implying that \gamma \lambda _{2} \Delta t has to be maximized to reduce \epsilon. This constraint, together with N \ge \gamma \lambda _{2} \Delta t, yields the optimal performance criterion,\begin{equation*} \gamma \lambda _{2} \Delta t = N \tag {28}\end{equation*}
which completes the proof of the theorem.■
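To make Theorem 3 concrete, the following Python sketch builds L_{\gamma } from the eigendecomposition of a graph Laplacian and selects \gamma from Equation (28). It is an illustration only, not part of the original derivation; the ring topology, N, and \Delta t are assumed values.

```python
import numpy as np

def ring_laplacian(n):
    """Laplacian of an undirected ring of n agents (illustrative topology)."""
    L = 2.0 * np.eye(n)
    for i in range(n):
        L[i, (i - 1) % n] = L[i, (i + 1) % n] = -1.0
    return L

N, Dt = 8, 0.1                      # network size and consensus window (assumed)
L = ring_laplacian(N)

# For a symmetric (balanced) Laplacian, V^{-1} = V^T and the spectrum is real.
lam, V = np.linalg.eigh(L)          # ascending: lam[0] = 0, lam[1] = lambda_2
lam2 = lam[1]

gamma = N / (lam2 * Dt)             # optimal gain, Equation (28)

# L_gamma: replace the zero eigenvalue of L by N / (gamma * Dt) = lambda_2.
lam_g = lam.copy()
lam_g[0] = N / (gamma * Dt)
L_gamma = V @ np.diag(lam_g) @ V.T

# Equation (27): the network error gain is bounded by 1 / lambda_min(L_gamma)^2.
print(f"lambda_2 = {lam2:.4f}, gamma = {gamma:.2f}, "
      f"error-gain bound = {1.0 / np.min(lam_g) ** 2:.4f}")
```

With the optimal gain, \lambda _{min}(L_{\gamma })=\lambda _{2}, so the printed bound equals 1/\lambda _{2}^{2}, consistent with Equation (27).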
Given a multi-agent network with N agents, one can expect the largest \lambda _{2} for a fully connected network. In this case \lambda _{2}=N, and,\begin{equation*} \frac {\sigma ^{2}_{\Delta x(k T_{s})} (\text {Coop-Localization})}{\sigma ^{2}_{\Delta \omega (k T_{s})}} \le \frac { 1}{N^{2}},\end{equation*}
also implying that the accuracy of position estimation in a cooperative localization framework improves with network connectivity. Furthermore, for a sufficiently large and well connected network it would be possible to localize precisely, regardless of the network sensor error. Additionally, it can be observed that in a fully connected network, the optimal condition requires \gamma \Delta t = 1.
The derivations have so far provided the total network error in its general form in a cooperative localization framework, according to Equation (26). It is interesting to evaluate the performance of cooperative localization in comparison with existing statistical methods that optimally estimate unknown system states in the mean-square sense. For example, the Kalman filter can optimally estimate x(k T_{s}) provided that a measurement vector such as z(k T_{s}) exists, which unfortunately is not the case in a GNSS-denied environment. However, since the problem concerns a MAS of mobile vehicles, one might consider optimally estimating x(k T_{s}) through a linear transformation of \bar {x}(k T_{s}) by an unknown matrix H_{k}, with the constraint,\begin{equation*} \min _{H_{k}} \; \mathbb {E} \left [{{||x(k T_{s}) - H_{k} \cdot \bar {x}(k T_{s})||^{2}}}\right ].\end{equation*}
Theorem 4 below shows that the proposed cooperative localization method provides N/2 times better performance than a minimum-mean-square-sense localization technique such as H_{k} \cdot \bar {x}(k T_{s}). In order to make a fair comparison between the two cases, it is assumed that the network sensor error is negligible.
Theorem 4:
Let \mathcal {G} denote a connected balanced graph of a MAS with N agents, where \mathbb {E} [|| \Delta \omega (k T_{s}) ||^{2}]=0. The optimal linear transformation H_{k} that minimizes \mathbb {E} [||x(k T_{s}) - H_{k} \cdot \bar {x}(k T_{s})||^{2}] is N/2 times less accurate than cooperative localization, i.e.,\begin{align*} \min _{H_{k}}& \; \mathbb {E} \left [{{||x(k T_{s}) - H_{k} \bar {x}(k T_{s})||^{2}}}\right ] \\& {}\ge \frac {N}{2} \sigma ^{2}_{\Delta x(k T_{s})} (\text {Coop-Localization}).\end{align*}
Proof:
Let x(k T_{s}) = [x^{(1)}(k T_{s}), {\dots }, x^{(N)}(k T_{s})]^{T} denote the true positions corresponding to the given odometry estimates \bar {x}(k T_{s}) = [\bar {x}^{(1)}(k T_{s}), {\dots }, \bar {x}^{(N)}(k T_{s})]^{T}. Let there be a vector function z(k T_{s}) = [z^{(1)}(k T_{s}), {\dots }, z^{(N)}(k T_{s})]^{T}, where z^{(i)}(k T_{s}) = \sum _{j=1}^{N} h_{ij} \bar {x}^{(j)}(k T_{s}). The first objective is to find the h_{ij}’s such that \mathbb {E} [(x^{(i)}(k T_{s}) - z^{(i)}(k T_{s}))^{2}] is minimal for all i. In compact form, let H_{k} = [h_{ij}] and z(k T_{s}) = H_{k} \bar {x}(k T_{s}), that is,\begin{align*}& \left [{{ z^{(1)}(k T_{s}), {\dots }, z^{(N)}(k T_{s}) }}\right ]^{T} \\& {}=\left [{{ \begin{array}{cccc} h_{11} & h_{12} & {\dots }& h_{1N} \\ h_{21} & h_{22} & {\dots }& h_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ h_{N1} & h_{N2} & {\dots }& h_{NN}\end{array} }}\right ] \left [{{ \begin{array}{c} \bar {x}^{(1)}(k T_{s}) \\ \bar {x}^{(2)}(k T_{s}) \\ \vdots \\ \bar {x}^{(N)}(k T_{s}) \\ \end{array} }}\right ].\end{align*}
The objective is to find the h_{ij}’s such that \mathbb {E} [(x(k T_{s}) - z(k T_{s}))^{T} (x(k T_{s}) - z(k T_{s}))] is minimal. This implies that,\begin{equation*} \frac {\partial \; \mathbb {E} \left [{{ (x^{(i)}(k T_{s}) - z^{(i)}(k T_{s}) )^{2} }}\right ]}{ \partial \; \left [{{h_{i1},h_{i2}, {\dots },h_{iN} }}\right ] }=0, \; \forall \; i\end{equation*} where,\begin{equation*} z^{(i)}(k T_{s}) = \left [{{h_{i1}, {\dots },h_{iN} }}\right ] \left [{{\bar {x}^{(1)}(k T_{s}), {\dots }, \bar {x}^{(N)}(k T_{s}) }}\right ]^{T}.\end{equation*}
Through expansion, it follows that,\begin{equation*} \mathbb {E} \left [{{ (x^{(i)}(k T_{s}) - z^{(i)}(k T_{s}) ) \frac { \; \partial z^{(i)}(k T_{s})}{ \partial \; [h_{i1},h_{i2}, {\dots },h_{iN} ] } }}\right ]=0, \; \forall \; i,\end{equation*}
where,\begin{equation*} \frac { \partial z^{(i)}(k T_{s})}{ \partial \; \left [{{h_{i1}, {\dots },h_{iN} }}\right ] } = \left [{{\bar {x}^{(1)}(k T_{s}), {\dots }, \bar {x}^{(N)}(k T_{s}) }}\right ].\end{equation*}
In other words, minimizing \mathbb {E} [||x(k T_{s}) - z(k T_{s}) ||^{2}] is equivalent to having,\begin{align*} \left [{{ \begin{array}{c} \mathbb {E} \left [{{ (x^{(i)}(k T_{s}) - z^{(i)}(k T_{s}) ) \bar {x}^{(1)}(k T_{s}) }}\right ] \\ \mathbb {E} \left [{{ (x^{(i)}(k T_{s}) - z^{(i)}(k T_{s}) ) \bar {x}^{(2)}(k T_{s}) }}\right ] \\ \vdots \\ \mathbb {E} \left [{{ (x^{(i)}(k T_{s}) - z^{(i)}(k T_{s}) ) \bar {x}^{(N)}(k T_{s}) }}\right ] \\ \end{array} }}\right ]= \mathbf {0}_{N \times 1}, \; \forall \;i,\end{align*}
that is,\begin{equation*} \mathbb {E} \big [ (x^{(i)}(k T_{s}) - z^{(i)}(k T_{s}) ) \bar {x}^{(j)}(k T_{s}) \big ]=0, \; \forall \; i,j. \tag {29}\end{equation*}
Substituting for z^{(i)}(k T_{s}) in Equation (29) yields,\begin{equation*} \mathbb {E} \big [ x(k T_{s}) \bar {x}^{T}(k T_{s}) \big ] = \mathbb {E} \big [ ( H_{k} \; \bar {x}(k T_{s})) \bar {x}^{T}(k T_{s}) \big ]\end{equation*}
or,\begin{equation*} \mathbb {E} \big [ x(k T_{s}) \bar {x}^{T}(k T_{s}) \big ] = H_{k} \; \mathbb {E} \big [ \bar {x}(k T_{s}) \bar {x}^{T}(k T_{s}) \big ]\end{equation*}
and finally it follows that,\begin{equation*} H_{k} = \mathbb {E} \big [ x(k T_{s}) \bar {x}^{T}(k T_{s}) \big ] \Big (\mathbb {E} \big [ \bar {x}(k T_{s}) \bar {x}^{T}(k T_{s}) \big ] \Big )^{-1}. \tag {30}\end{equation*}
The term (\mathbb {E} [\bar {x}(k T_{s}) \bar {x}^{T}(k T_{s})])^{-1} in Equation (30) can be derived through Monte-Carlo methods, since one can take samples from the known distribution p(\bar {x}(k T_{s})) for all k. However, since the distribution p(x(k T_{s})) is not accessible, the term \mathbb {E} [x(k T_{s}) \bar {x}^{T}(k T_{s})] is not derivable.
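The point can be illustrated with a small Monte-Carlo sketch. The Gaussian ground-truth distribution below (covariances P and Q, and the sample count M) is a synthetic assumption made only so that Equation (30) can be evaluated; it is precisely what is unavailable in the field.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 4, 20000                   # agents and Monte-Carlo samples (assumed)

# Synthetic ground truth x ~ N(0, P); odometry xbar = x + dx with dx ~ N(0, Q).
P = 4.0 * np.eye(N)
Q = 1.0 * np.eye(N)
x = rng.multivariate_normal(np.zeros(N), P, size=M)
xbar = x + rng.multivariate_normal(np.zeros(N), Q, size=M)

# Sample estimates of the correlations appearing in Equation (30).
Exb = x.T @ xbar / M              # E[x xbar^T]: needs p(x), not derivable in the field
Ebb = xbar.T @ xbar / M           # E[xbar xbar^T]: derivable from odometry samples

H_k = Exb @ np.linalg.inv(Ebb)    # optimal linear transformation, Equation (30)

# Residual MMSE versus the closed form Trace(P (P + Q)^{-1}) * sigma^2, Equation (34).
z = xbar @ H_k.T
mmse = np.mean(np.sum((x - z) ** 2, axis=1))
print(f"empirical MMSE = {mmse:.3f}, "
      f"closed form = {np.trace(P @ np.linalg.inv(P + Q)) * 1.0:.3f}")
```

Here the per-agent odometry variance is 1.0 (the diagonal of Q), so both printed values should agree at about 3.2.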
Nevertheless, let p(x(k T_{s})) be assumed accessible, and proceed to determine the error function \mathbb {E} [(x(k T_{s}) - z(k T_{s}))^{T} (x(k T_{s}) - z(k T_{s}))] and compare it with the network error derived through cooperative localization. Equation (29) reveals that the optimal condition minimizing this network error is satisfied when \mathbb {E} [(x(k T_{s}) - z(k T_{s})) \bar {x}^{T}(k T_{s})] =0. Therefore, under optimal conditions,\begin{align*}& \mathbb {E} \left [{{ (x(k T_{s}) - z(k T_{s})) (x(k T_{s}) - z(k T_{s}))^{T}}}\right ] \\& \;=\mathbb {E} \left [{{ (x(k T_{s}) - z(k T_{s})) x^{T}(k T_{s})}}\right ] \\& \;=\mathbb {E} \left [{{ (x(k T_{s}) - H_{k} \bar {x}(k T_{s})) x^{T}(k T_{s})}}\right ] \\& \;=\mathbb {E} \left [{{x(k T_{s}) x^{T}(k T_{s})}}\right ] - H_{k} \mathbb {E} \left [{{\bar {x}(k T_{s}) x^{T}(k T_{s})}}\right ],\end{align*}
and substitution for H_{k} from Equation (30) results in,\begin{align*}& \mathbb {E} \left [{{ (x(k T_{s}) - z(k T_{s})) (x(k T_{s}) - z(k T_{s}))^{T}}}\right ] \\& \;=\mathbb {E} \left [{{x(k T_{s}) x^{T}(k T_{s})}}\right ] \\& \;\quad -\mathbb {E} \big [ x(k T_{s}) \bar {x}^{T}(k T_{s}) \big ] \big (\mathbb {E} \big [ \bar {x}(k T_{s}) \bar {x}^{T}(k T_{s}) \big ] \big )^{-1} \mathbb {E} \left [{{\bar {x}(k T_{s}) x^{T}(k T_{s})}}\right ]. \tag {31}\end{align*}
Since, for \Delta t=0, cooperative localization simply reduces to odometry, one may note that \bar {x}(k T_{s})=x(k T_{s})+\Delta x(k T_{s}), and Equation (31) can be further simplified, as,\begin{align*} \mathbb {E} \big [ x(k T_{s}) \bar {x}^{T}(k T_{s}) \big ]=& \mathbb {E} \big [ x(k T_{s}) (x^{T}(k T_{s}) + \Delta x^{T}(k T_{s})) \big ] \\=& \mathbb {E} \big [ x(k T_{s}) x^{T}(k T_{s}) \big ] + \mathbb {E} \big [ x(k T_{s}) \Delta x^{T}(k T_{s}) \big ],\end{align*}
where \mathbb {E} [x(k T_{s}) \Delta x^{T}(k T_{s})] = \mathbb {E} [x(k T_{s})] \; \mathbb {E} [\Delta x^{T}(k T_{s})] =0. Substituting \mathbb {E} [x(k T_{s}) x^{T}(k T_{s})] for both \mathbb {E} [x(k T_{s}) \bar {x}^{T}(k T_{s})] and \mathbb {E} [\bar {x}(k T_{s}) x^{T}(k T_{s})] in Equation (31), it follows that,\begin{align*}& \mathbb {E} \left [{{ (x(k T_{s}) - z(k T_{s})) (x(k T_{s}) - z(k T_{s}))^{T}}}\right ] \\& \; =\mathbb {E} \left [{{x(k T_{s}) x^{T}(k T_{s})}}\right ] \\& \;\quad -\,\mathbb {E} \big [ x(k T_{s}) x^{T}(k T_{s}) \big ] \big (\mathbb {E} \big [ \bar {x}(k T_{s}) \bar {x}^{T}(k T_{s}) \big ] \big )^{-1} \\& \;\quad \times \,\mathbb {E} \left [{{x(k T_{s}) x^{T}(k T_{s})}}\right ]. \tag {32}\end{align*}
Additionally,\begin{align*} \mathbb {E} \big [ \bar {x}(k T_{s}) \bar {x}^{T}(k T_{s}) \big ]=& \mathbb {E} \big [ (x(k T_{s}) + \Delta x(k T_{s})) \\& {}\quad (x^{T}(k T_{s}) + \Delta x^{T}(k T_{s})) \big ] \\=& \mathbb {E} \big [ x(k T_{s}) x^{T}(k T_{s}) \big ] \\& {}\quad + \mathbb {E} \big [ \Delta x(k T_{s}) \Delta x^{T}(k T_{s}) \big ]\end{align*}
and Equation (31) can be written as,\begin{align*}& \mathbb {E} \left [{{ (x(k T_{s}) - z(k T_{s})) (x(k T_{s}) - z(k T_{s}))^{T}}}\right ] \\& \;=\mathbb {E} \left [{{x(k T_{s}) x^{T}(k T_{s})}}\right ] - \mathbb {E} \big [ x(k T_{s}) x^{T}(k T_{s}) \big ] \\& \;\qquad \big ( \mathbb {E} \big [ x(k T_{s}) x^{T}(k T_{s}) ] + \mathbb {E} \big [ \Delta x(k T_{s}) \Delta x^{T}(k T_{s}) ] \big )^{-1} \\& \; \qquad \mathbb {E} \left [{{x(k T_{s}) x^{T}(k T_{s})}}\right ] \\& \;=\mathbb {E} \left [{{x(k T_{s}) \cdot x^{T}(k T_{s})}}\right ] \\& \;\qquad \big ( \mathbb {E} \big [ x(k T_{s}) x^{T}(k T_{s}) ] + \mathbb {E} \big [ \Delta x(k T_{s}) \Delta x^{T}(k T_{s}) ] \big )^{-1} \\& \;\qquad \big ( \mathbb {E} \big [ x(k T_{s}) x^{T}(k T_{s}) ] + \mathbb {E} \big [ \Delta x(k T_{s}) \Delta x^{T}(k T_{s}) ] \big ) \\& \;\quad -\mathbb {E} \big [ x(k T_{s}) x^{T}(k T_{s}) \big ] \\& \;\qquad \big ( \mathbb {E} \big [ x(k T_{s}) x^{T}(k T_{s}) ] + \mathbb {E} \big [ \Delta x(k T_{s}) \Delta x^{T}(k T_{s}) ] \big )^{-1} \\& \;\qquad \mathbb {E} \left [{{x(k T_{s}) x^{T}(k T_{s})}}\right ] \\& \;=\mathbb {E} \left [{{x(k T_{s}) x^{T}(k T_{s})}}\right ] \\& \;\qquad \big ( \mathbb {E} \big [ x(k T_{s}) x^{T}(k T_{s}) ] + \mathbb {E} \big [ \Delta x(k T_{s}) \Delta x^{T}(k T_{s}) ] \big )^{-1} \\& \;\qquad \big ( \mathbb {E} \big [ x(k T_{s}) x^{T}(k T_{s}) ] + \mathbb {E} \big [ \Delta x(k T_{s}) \Delta x^{T}(k T_{s})] \\& \;\quad - \mathbb {E} \big [ x(k T_{s}) x^{T}(k T_{s}) ] \big )\end{align*}
and finally,\begin{align*}& \mathbb {E} \left [{{ (x(k T_{s}) - z(k T_{s})) (x(k T_{s}) - z(k T_{s}))^{T}}}\right ] \\& \;= \mathbb {E} \left [{{x(k T_{s}) x^{T}(k T_{s})}}\right ] \big ( \mathbb {E} \big [ x(k T_{s}) x^{T}(k T_{s}) ] \\& \;\quad + \mathbb {E} \big [ \Delta x(k T_{s}) \Delta x^{T}(k T_{s})] \big )^{-1} \mathbb {E} \left [{{\Delta x(k T_{s}) \Delta x^{T}(k T_{s})}}\right ].\quad \tag {33}\end{align*}
Equation (33) provides a comparison between the cooperative localization error and a statistical minimum mean square error (MMSE) based optimization, where it was previously established that, given negligible sensor errors,\begin{equation*} \sigma ^{2}_{\Delta x(k T_{s})} (\text {Coop-Localization}) \le \frac {\sigma ^{2}_{\Delta x(k T_{s})} (\text {Odometry})}{N}.\end{equation*}
On the other hand, according to Equation (33), the localization estimation error in the optimal mean-square sense is\begin{align*} \min _{H_{k}}& \mathbb {E} \left [{{||x(k T_{s}) - H_{k} \bar {x}(k T_{s})||^{2}}}\right ] \triangleq \sigma ^{2}_{\Delta x(k T_{s})} (\text {MMSE}) \\& {}=\mathbf {Trace} \Bigg ( \mathbb {E} \left [{{x(k T_{s}) x^{T}(k T_{s})}}\right ] \Big ( \mathbb {E} \left [{{ x(k T_{s}) x^{T}(k T_{s}) }}\right ] \\& {}\quad +\mathbb {E} \left [{{ \Delta x(k T_{s}) \Delta x^{T}(k T_{s}) }}\right ] \Big )^{-1} \Bigg ) \sigma ^{2}_{\Delta x\left ({{k T_{s}}}\right )} \left ({{\text {Odometry}}}\right ), \tag {34}\end{align*}
which is in any case superior to the odometry error \sigma ^{2}_{\Delta x(k T_{s})} (\text {Odometry}), since \mathbb {E} [x(k T_{s}) x^{T}(k T_{s})] \lt \mathbb {E} [x(k T_{s}) x^{T}(k T_{s})] + \mathbb {E} [\Delta x(k T_{s}) \Delta x^{T}(k T_{s})]. However, to obtain a quantitative comparison between \sigma ^{2}_{\Delta x(k T_{s})} (\text {MMSE}) and \sigma ^{2}_{\Delta x(k T_{s})} (\text {Coop-Localization}), one may reasonably expect that,\begin{equation*} \mathbb {E} \left [{{ \Delta x\left ({{k T_{s}}}\right ) \Delta x^{T}\left ({{k T_{s}}}\right )}}\right ] \lt \mathbb {E} \left [{{x\left ({{k T_{s}}}\right ) x^{T}\left ({{k T_{s}}}\right )}}\right ],\end{equation*}
therefore,\begin{align*}& \mathbb {E} \left [{{ \Delta x\left ({{k T_{s}}}\right ) \Delta x^{T}\left ({{k T_{s}}}\right )}}\right ]+\mathbb {E} \left [{{x\left ({{k T_{s}}}\right ) x^{T}\left ({{k T_{s}}}\right )}}\right ] \\& \;\lt 2 \; \mathbb {E} \left [{{x\left ({{k T_{s}}}\right ) x^{T}\left ({{k T_{s}}}\right )}}\right ]\end{align*}
or,\begin{align*}& \mathbb {E} \left [{{x\left ({{k T_{s}}}\right ) x^{T}\left ({{k T_{s}}}\right )}}\right ] \\& \;\quad \Big ( \mathbb {E} \left [{{ \Delta x\left ({{k T_{s}}}\right ) \Delta x^{T}\left ({{k T_{s}}}\right )}}\right ]+\mathbb {E} \left [{{x\left ({{k T_{s}}}\right ) x^{T}\left ({{k T_{s}}}\right )}}\right ] \Big )^{-1} \gt \frac {1}{2} I\end{align*}
and substitution in Equation (34) yields,\begin{align*} \sigma ^{2}_{\Delta x\left ({{k T_{s}}}\right )} \left ({{\text {Odometry}}}\right )\gt & \sigma ^{2}_{\Delta x\left ({{k T_{s}}}\right )} \left ({{\text {MMSE}}}\right ) \\\gt & \frac {1}{2} \sigma ^{2}_{\Delta x\left ({{k T_{s}}}\right )} \left ({{\text {Odometry}}}\right ) \tag {35}\end{align*}
and in comparison with \sigma ^{2}_{\Delta x(k T_{s})} (\text {Coop-Localization}), one may deduce that,\begin{equation*} \sigma ^{2}_{\Delta x\left ({{k T_{s}}}\right )} \left ({{\text {Coop-Localization}}}\right ) \lt \frac {2}{N} \sigma ^{2}_{\Delta x\left ({{k T_{s}}}\right )} \left ({{\text {MMSE}}}\right ) \tag {36}\end{equation*}
which completes the proof of the theorem.■
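SECTION VII.
Simulations and Case Studies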
This section presents several examples of GNSS-independent localization in multi-agent systems, with and without cooperative localization. The motion model of a miniature quadcopter is used in the simulation to derive an initial odometry localization \bar {x}(k T_{s}) at a given time-step k T_{s}, plotted in “black”, where each AAV flies independently of the other AAVs in the flock, on a separate path in a 2-D plane, and where atmospheric disturbances are present. Although the AAVs in the flock are dynamically independent, they continuously exchange filtered positions \hat {x}(kT_{s}+\Delta t) with each other on a continuous-time basis, as in Figure 2. The AAVs therefore inject \bar {x}(k T_{s}) into their filters, and communicate the filter output \hat {x}(kT_{s}+\Delta t) to each other until a relative consensus is achieved in the network, where \Delta t \le T_{s} and \gamma =N/(\lambda _{2} \Delta t) \ge N/(\lambda _{2} T_{s}). Then \hat {x}(kT_{s}+\Delta t) is used by the flock as their accurate positions x(k T_{s}), and is plotted in “red”. Obviously, \Delta t is a relatively small transient settling time with respect to T_{s}, such that one can assume negligible mobility within \Delta t. The above scenario is repeated numerous times with random IMU sensor and LiDAR errors to show how the samples disperse; the more dispersed the samples, the less accurate the estimation model. In general, the “black” samples representing odometry models prove to be more dispersed (i.e., greater \mathbb {E}[||\bar {x}(k T_{s})-x(k T_{s})||^{2}]) than the “red” samples representing cooperative localization, which show a lower error variance (i.e., lower \mathbb {E}[||\hat {x}(kT_{s}+\Delta t)-x(k T_{s})||^{2}]).
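As a concrete 1-D sketch of one such consensus window, the following is a minimal simulation assuming a fully connected flock, Gaussian odometry drift, and noisy relative-range measurements; all names and numerical values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 9                                    # flock size, as in the Figure 6 scenario
steps, dt = 400, 0.001                   # Euler sub-steps spanning one window
Dt = steps * dt                          # consensus window, Delta_t <= T_s

x_true = rng.uniform(-50.0, 50.0, N)     # ground-truth 1-D positions
xbar = x_true + rng.normal(0.0, 2.0, N)  # odometry estimates (the "black" samples)

lam2 = N                                 # lambda_2 = N for a fully connected graph
gamma = N / (lam2 * Dt)                  # optimal gain from Equation (28)

# omega^(i,j): signed relative position plus range-finder noise (a sketch
# convention; a physical range-finder returns unsigned ranges).
omega = (x_true[:, None] - x_true[None, :]) + rng.normal(0.0, 0.05, (N, N))
np.fill_diagonal(omega, 0.0)

# Forward-Euler integration of the continuous-time consensus filter.
xhat = xbar.copy()
for _ in range(steps):
    coupling = (xhat[None, :] - xhat[:, None] + omega).sum(axis=1)
    xhat = xhat + gamma * dt * coupling

print("odometry RMSE   :", np.sqrt(np.mean((xbar - x_true) ** 2)))
print("cooperative RMSE:", np.sqrt(np.mean((xhat - x_true) ** 2)))
```

The two printed values mirror the dispersion of the “black” and “red” samples described above; with ideal range-finders the residual cooperative error would approach the mean of the initial odometry errors.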
The horizontal axis represents the longitudinal position of the vehicle with respect to an arbitrary origin, and the scales are set arbitrarily, since the comparison between the cases with and without cooperative localization has been the focus of the simulations. The vertical axis likewise represents an imaginary latitude with respect to the origin, with an arbitrary scale. The first scenario, depicted in Figure 4, represents the distribution of \bar {x}(k T_{s}) (odometry localization) and \hat {x}(kT_{s}+\Delta t) (cooperative localization) for a network where N=4.
It is evident that cooperative localization is advantageous over odometry estimation, as the “red” samples are less dispersed than the “black”. Ideally, for 100% accurate localization one would expect the “red” particles to converge to their center of mass, which represents the ground truth position in each case. The sensor error in this scenario has been set to a moderate level to illustrate the advantage of cooperative localization under realistic scenarios for small networks.
Figure 5 is a modified version of the localization error in a multi-agent system of N=4 with higher sensor errors than in Figure 4. It can be seen that there is little to no advantage in deploying cooperative localization over conventional (individually based) estimation when low-grade sensors are adopted. It is therefore expected that cooperative localization may not be an advantage in small networks, since in addition to the divided drift error (i.e., \sigma ^{2}_{\Delta x(k T_{s})}/N), the sensor error component (that is, \sigma ^{2}_{\Delta \omega (k T_{s})}/\lambda ^{2}_{min} (L_{\gamma }), as specified in Equation (26)) also plays a role. This can become a significant error in cases where the connectivity of the network is low (i.e., small \lambda _{min} (L_{\gamma })).
Figure 6 demonstrates a scenario where a network of N=9 agents is deployed. In every scenario, the simulation is set to assume a random position on the map. Denoting \sigma ^{2}_{\Delta x(k T_{s})} (\text {Odometry}) as the original estimation error of the network, the total network error achieved through cooperative localization can be derived from Equation (26) as,\begin{align*} \sigma ^{2}_{\Delta x\left ({{k T_{s}}}\right )} \left ({{\text {Coop-Localization}}}\right )\le & \frac {\sigma ^{2}_{\Delta x\left ({{k T_{s}}}\right )} \left ({{\text {Odometry}}}\right )}{N} \\& {}\quad + \frac { \sigma ^{2}_{\Delta \omega \left ({{k T_{s}}}\right )}}{ \lambda ^{2}_{min} \left ({{L_{\gamma }}}\right )},\end{align*}
where \sigma ^{2}_{\Delta x(k T_{s})} (\text {Odometry}) is represented by the “black” area in Figure 6, and \sigma ^{2}_{\Delta x(k T_{s})} (\text {Coop-Localization}) is represented by the “red” area.
Figure 7 illustrates a similar case study with N=7, with the exception that low-grade sensors are used for localization.
The advantage of cooperative localization in each scenario can be visualized by comparing the span of the “red” area to the span of the “black” area. Figure 8 is the case for N=20, indicating that cooperative localization performs better as N grows.
Figure 9 illustrates cooperative localization for a fully connected network with N=100 agents, demonstrating almost perfect localization. Comparison of the sizes of the “red” to “black” areas indicates a ratio of 6.6%, implying more than a 15-fold improvement.
Figure 10 illustrates a similar case study to that depicted in Figure 9 (where N=100), with lower network connectivity (i.e., smaller \lambda _{2}). Since \lambda _{min} (L_{\gamma })=\lambda _{2}, Equation (26) can be written as,\begin{align*} \sigma ^{2}_{\Delta x\left ({{k T_{s}}}\right )} \left ({{\text {Coop-Localization}}}\right )\le & \frac {\sigma ^{2}_{\Delta x\left ({{k T_{s}}}\right )} \left ({{\text {Odometry}}}\right )}{N} \\& {}\quad + \frac { \sigma ^{2}_{\Delta \omega \left ({{k T_{s}}}\right )}}{ \lambda ^{2}_{2}}\end{align*}
where it is evident that low network connectivity results in a large \frac {\sigma ^{2}_{\Delta \omega (k T_{s})}}{ \lambda ^{2}_{2}}. Multi-agent system theory provides lower and upper bounds for \lambda _{2}, as,\begin{equation*} \frac {4}{N \mathbf {diam}\left ({{\mathcal {G}}}\right )} \le \lambda _{2} \le \frac {N}{N-1} \ell _{min} \tag {37}\end{equation*}
where \mathbf {diam}(\mathcal {G}) is the diameter of the graph, defined as the largest number of vertices that must be traversed in order to travel from one vertex to another, and \ell _{min} is the minimum vertex degree. Connectivity and the effect of the network diameter are illustrated in Figure 11 and Figure 12 below. Figure 11 represents cooperative localization in a setting with one anchor node whose ground truth position is known, such as a landmark equipped with a radio transmitting device that periodically transmits its position to all neighbouring agents. The anchor node is shown in the figure with an asterisk and a zero variance (since the position of the anchor node is a deterministic parameter).
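Before turning to those figures, the bounds in Equation (37) can be checked numerically. The sketch below is illustrative only: the chain and complete graphs are assumed example topologies, and the diameter is counted in hops.

```python
import numpy as np

def laplacian_chain(n):
    """Laplacian of a path graph: large diameter, low connectivity."""
    A = np.zeros((n, n))
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = 1.0
    return np.diag(A.sum(axis=1)) - A

def laplacian_complete(n):
    """Laplacian of a complete graph: diameter 1, lambda_2 = n."""
    return n * np.eye(n) - np.ones((n, n))

for name, L, diam in [("chain", laplacian_chain(10), 9),
                      ("complete", laplacian_complete(10), 1)]:
    n = L.shape[0]
    lam2 = np.linalg.eigvalsh(L)[1]            # algebraic connectivity lambda_2
    deg_min = np.diag(L).min()                 # minimum vertex degree l_min
    lower = 4.0 / (n * diam)                   # lower bound in Equation (37)
    upper = n / (n - 1) * deg_min              # upper bound in Equation (37)
    print(f"{name:8s}: {lower:.3f} <= lambda_2 = {lam2:.3f} <= {upper:.3f}")
```

The chain exhibits a \lambda _{2} close to its lower bound, while the complete graph attains the upper bound, mirroring the contrast between Figure 12 and Figure 11.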
The network is fully connected, and every other node in the network is able to determine its own position through direct cooperation with the anchor. As a result, cooperative localization is most accurate when the network is infinitely large. Figure 12 is the same case study discussed in Figure 11, except that this time a network with a large diameter and low connectivity is simulated, where the agents are serially connected to the anchor node as in a chain. The ground truth position of the anchor node and every other node is the same as in Figure 11. The node that is immediately connected to the anchor has the highest accuracy, characterized by a relatively low variance (or spread) in comparison with the others. The agents that are farther away from the anchor have lower positioning accuracy, and as the radio distance from the anchor increases along the chain, the variance of the estimated position increases, showing the added error due to indirect cooperation with the anchor. The last case study corresponds to cases where \gamma \Delta t is not optimized.
The effect of an inappropriate selection of \gamma \Delta t is shown in Figure 13, which is a replication of the case in Figure 9, except that the value of \gamma has been selected slightly larger than the optimal value (i.e., the value satisfying \gamma \lambda _{2} \Delta t = N), and \Delta t is decided in accordance with the vehicle air-speed, ensuring that vehicle displacements are negligible within \Delta t. The error has become excessive and unacceptable, implying that one has to optimize \gamma \Delta t in order to tune the process for a minimal network error.
SECTION VIII.
Discussions and Clarifications
The purpose of this section is to relate the simulations and case studies of Section VII to the theoretical results derived in the previous sections. The following discussions address a handful of potentially contentious aspects.
A. Ground Truth Positions
Assumption 2 requires a zero-mean estimation error for every agent in the network. This implies that if the motion models of the vehicles in Equations (2) and (3) were accurate representations of the actual motion, and if a motion experiment could be infinitely repeated from an origin to a destination, then the AAVs’ mean-sense motion could be precisely derived from the motion equations and the IMUs, since the measurement errors of healthy instrumentation are assumed to be unbiased. This implies that the ground truth position for the case studies in Section VII is the center of mass of the “red” particles, since each particle represents one motion experiment. Specifically, Figure 11 is an emulation of a network with infinite agents and ideal range-finders. Since it is practically impossible to simulate infinite agents, the case study is an emulation through the aid of an anchor node, as in Lemma 2. The anchor node is represented by an asterisk in the figure, having zero variance, i.e., a deterministic position. All other nodes in the graph are directly or indirectly cooperating with the anchor node.
The proof is straightforward and omitted due to space constraints. However, the small variance in the position estimates of the non-anchor agents is due to intrinsic numerical error in the LTI filter, as well as the limited consensus time \Delta t in Theorem 2, rendering a negligible error. Recall that in this case the error of the range-finders has been neglected.
B. Performance Metrics
The quantitative performance of cooperative localization can be assessed by evaluating the variance ratio \sigma ^{2}_{\Delta x(k T_{s})} (\text {Coop-Localization})/\sigma ^{2}_{\Delta x(k T_{s})} (\text {Odometry}). This ratio can be numerically evaluated by particle filters; alternatively, a rough evaluation can be attained by inspecting the density of the “red” relative to the “black” areas in each case study. For example, Figures 6 and 8 are similar experiments in terms of network topology, with the exception that the number of agents has increased from N=9 to N=20 between the two case studies. As a result, the spread of the “red” areas relative to the “black” has reduced from 50% to 35%, and even to 6.6% for N=100 in Figure 9, which serves as a quantitative metric of localization performance as the network size increases. The performance comparison between the figures is summarized in Table 1.
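As one possible implementation of this metric, the following sketch estimates the variance ratio directly from scattered particle clouds; the synthetic clouds are tuned to reproduce the 6.6% figure quoted above and are otherwise arbitrary.

```python
import numpy as np

def variance_ratio(red, black):
    """Ratio of total sample variances of two particle clouds of shape (samples, 2)."""
    total_var = lambda p: np.sum(np.var(p, axis=0))   # variance about the centroid
    return total_var(red) / total_var(black)

rng = np.random.default_rng(2)
black = rng.normal(0.0, 1.0, (5000, 2))               # odometry particles
red = rng.normal(0.0, np.sqrt(0.066), (5000, 2))      # cooperative particles
print(f"red/black variance ratio: {variance_ratio(red, black):.3f}")
```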
C. Network Connectivity
Equation (26) explains the importance of graph connectivity (i.e., \lambda _{2}) for reducing the network error in a cooperative localization scheme. In the optimal sense, where \gamma \lambda _{2} \Delta t =N and \lambda _{min} (L_{\gamma })=\lambda _{2}, the network error can be mitigated by (1) increasing the size of the network N, or equivalently utilizing one or more anchor nodes (as in Remark 3), and (2) reducing the range-finder error. Commercial range-finders have an unavoidable intrinsic detection error; therefore, the only practical way to optimize the overall performance of a cooperative localization network is to increase the network connectivity \lambda _{2}, that is, to reduce the graph diameter, as in Equation (37). Figure 11 and Section VIII-A have shown the effect of a very large network in reducing the error of the network, in a simulation where the range-finders were ideal instruments. In the case of ideal range-finders, \sigma ^{2}_{\Delta \omega (k T_{s})}=0, and the network connectivity \lambda _{2} does not play a role in Equation (26). However, the situation changes in a practical scenario where \sigma ^{2}_{\Delta \omega (k T_{s})} \ne 0. Figure 12 is a simulation case identical to Figure 11 except that range-finder errors were taken into account, and the topology of the network was changed to a radial network, where the anchor agent was at one end of a line and the others were sequentially linked to one another and finally to the anchor (as in a chain), hence a very large diameter. It is evident in Figure 12 that the agent immediately connected to the anchor has the smallest estimation variance, and the variance of the position estimates grows as the agents get farther away from the anchor along the chain. It is worth noting from the figure that network neighbours are not necessarily physical neighbours. Obviously, if the topology of the graph changes to a fully connected network, then the size of each red spot becomes equal to the smallest red spot, and if ideal range-finders are used, Figure 12 transforms into Figure 11.
D. Initial Localization Error
Equation (11) shows that for a sufficiently large \gamma, the cooperative localization error,\begin{equation*} \lim _{\Delta t \to T_{s}} \Delta x^{(i)}\left ({{k T_{s}+\Delta t}}\right )=\frac {1}{N} \sum _{i=1}^{N} \Delta x^{(i)}\left ({{k T_{s}}}\right ), \; \forall \; i \in \mathcal {N},\end{equation*}
is independent of the initial error \Delta x^{(i)}(k T_{s}), as long as N is sufficiently large. The initial network error of a large network can be visualized by the “black” area in Figure 11, which reduces to the small “red” spots within a short time \Delta t \to T_{s} (assuming ideal range-finders); hence, the initial amplitude of the error has no effect on the final cooperative localization error in large networks. In the case of smaller networks, the localization error will be the mean of the network’s errors, which is not very advantageous in comparison with ordinary odometry localization. Hence, cooperative localization is particularly favourable in large networks, such as in future Urban Air Mobility (UAM).
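This insensitivity to the initial error amplitude can be verified numerically. The sketch below (a fully connected topology, ideal range-finders, and SciPy assumed available) propagates two initial error vectors of very different scales through e^{-\gamma L \Delta t}:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
N, Dt = 50, 0.5
L = N * np.eye(N) - np.ones((N, N))    # fully connected Laplacian, lambda_2 = N
gamma = N / (N * Dt)                   # Equation (28) with lambda_2 = N

for scale in (0.1, 10.0):              # small vs. large initial error amplitude
    dx0 = rng.normal(0.0, scale, N)    # initial localization errors Delta x(k*Ts)
    dx = expm(-gamma * L * Dt) @ dx0   # error after the consensus window
    print(f"scale {scale:5.1f}: spread {np.ptp(dx):.1e}, "
          f"common error {dx.mean():+.4f}, mean initial {dx0.mean():+.4f}")
```

Regardless of the scale, the spread collapses and every agent is left with the same common error, equal to the mean of the initial errors.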
E. Rural Vs. Urban Air Mobility
The conceptual foundation of cooperative localization is established upon three main pivots, namely,
Network size N (responsible for attenuating the odometry error): An urban air mobility system is generally associated with a large number of connected aero-vehicles randomly flying in an often dense airspace, where strict traffic control rules prevail. A dense airspace may also imply lower clearance requirements, and hence more stringent localization standards, to ensure precise flight paths for collision avoidance. This provides ideal grounds for cooperative localization, since range-finders often have short detection ranges, more suitable for urban areas than for a rural airspace where distances between AAVs are relatively larger.
Network connectivity \lambda _{2} (responsible for attenuating the range-finder error): In urban airspaces, stationary landmarks and poles can serve as anchor nodes if equipped with radio transmitters to disseminate their ground truth positions to the surrounding neighbourhood. Each anchor node technically functions like an infinitely large and fully connected network. The ultimate effect is equivalent to a smaller topological diameter, regardless of the physical span of the airspace. A smaller network diameter then attenuates the range-finder error, resulting in improved localization, as discussed in Section VIII-C. In conclusion, an urban airspace is not only advantageous for reducing the odometry error in the network (due to large N), but also for attenuating the range-finder error, resulting in overall favourable localization in comparison with rural environments (refer to Equation (26)).
LTI filter gain \gamma (responsible for attenuating the transient error in the LTI filter): Theorem 3 optimally requires \gamma \lambda _{2} \Delta t =N, implying that \gamma \Delta t has to be decided according to the network size N and the connectivity \lambda _{2}. This further implies that the consensus error component \epsilon = e^{- \gamma \lambda _{2} \Delta t} \sigma _{\Delta x(k T_{s})} (\text {Odometry}), which is due to insufficient consensus time (in Theorem 2), can be improved in large and well connected networks, where \gamma can be large. Therefore, urban environments are again more favourable candidates for cooperative localization, especially when high speed mobility is a concern.
In conclusion, one can deduce that cooperative localization is very efficient for urban air mobility, and less efficient in rural applications.
F. Centralized Vs. Distributed Localization
A centralized localization system requires airspace users to communicate with a central computing station that collects information from the network and transmits more accurate positioning estimates to the aero-vehicles, as depicted in Figure 3. This arrangement makes the overall system vulnerable to spoofing cyberattacks, since intrusion of a single GPS link could paralyze the positioning estimates of the victim and consequently threaten the security of the traffic system. The cooperative localization proposed in this article is a fully distributed localization system; consequently, a spoofing cyberattack on any communication link is equally distributed among the network population according to Lemma 1, mitigating the overall impact. Furthermore, the existence of at least one anchor node in the network could eliminate the spoofing threat entirely.
G. Cooperative Localization in Dense AAV Networks
The disadvantage of the proposed cooperative localization method is the time-delay of the LTI filter discussed in Theorem 1. According to the theorem, \lim _{\Delta t \to T_{s}} \Delta x^{(i)}(kT_{s}+\Delta t)=0, \; \forall \; i \in \mathcal {N}, when N \to \infty, implying that \lim _{\Delta t \to T_{s}} \hat {x}^{(i)}(kT_{s}+\Delta t)=x^{(i)}(kT_{s}). In other words, a given vehicle attains its true position x^{(i)}(kT_{s}) only after the lapse of a certain time-delay \Delta t \to T_{s}. This is an inevitable characteristic of dynamic LTI filters, which require a transient period to settle. In conclusion, in fast mobility applications cooperative localization is concomitant with a mobility error \bar {\mathbf {v}}^{(i)} \Delta t that must be taken into account as a safe clearance from neighbouring mobile agents and obstacles. In dense environments flight clearances are scarce. As a result, cooperative localization in dense airspaces could be problematic, especially in high speed air mobility.
H. Future ITS Standards and Practical Implementation
Cooperative localization can be a secure and accurate localization candidate for future urban air mobility, as it inflicts no extra burden on the information system infrastructure. Articles [44], [45] outline information exchange systems in UAM and modern Urban Air Traffic Control (UTC). Aero-vehicles in UAM transmit real-time momentary positions as part of their situational awareness messages to the traffic control system and surrounding aero-vehicles. These awareness messages are all that is required for cooperative localization to be practically implemented inside vehicle controllers through a simplified algorithm, provided a few additional assumptions can be made. Given that C-V2X and V2V are digital communication protocols, the simplified cooperative localization algorithm will be explained in a discrete-time framework. The emerging 5G and 6G technologies enable fast and secure exchange of information messages in a V2V wireless telecommunication infrastructure in modern ITS, where the transport of information messages can far outpace the physical mobility of the network, even over multiple time steps k_{m} T_{s}, such that if x^{(i)}_{k} \triangleq x^{(i)}(kT_{s}) denotes the true position of an AAV, then x^{(i)}_{k} \approx x^{(i)}_{k+1} \approx {\dots } \approx x^{(i)}_{k+k_{m}}, \; \forall \; i,k. Under this condition, it is possible to implement a simplified version of the cooperative localization LTI filter of Theorem 1, as below. Let \hat {x}^{(i)}_{k}=\bar {x}^{(i)}_{k} denote the initial position estimate of a given aero-vehicle i derived from an odometry model at a particular time-step k, and \omega ^{(i,j)}_{k} denote the range from i to j measured by a range-finder on vehicle i. Also, let all aero-vehicles in the network perform a series of simple computational iterations,\begin{align*} \hat {x}^{(i)}_{k+1}=& \hat {x}^{(i)}_{k} + \gamma \; T_{s} \sum _{\forall j \in {\mathcal {N}}_{i}} \left ({{\hat {x}^{(j)}_{k} - \hat {x}^{(i)}_{k} + \omega ^{(i,j)}_{k}}}\right ), \\ \hat {x}^{(i)}_{k+2}=& \hat {x}^{(i)}_{k+1} + \gamma \; T_{s} \sum _{\forall j \in {\mathcal {N}}_{i}} \left ({{\hat {x}^{(j)}_{k+1} - \hat {x}^{(i)}_{k+1} + \omega ^{(i,j)}_{k}}}\right ),\end{align*}
and so forth, until \hat {x}^{(i)}_{k+k_{m}} settles to a steady state. One can simply show that a steady state \hat {x}^{(i)}_{k+k_{m}} exists, where \hat {x}^{(i)}_{k+k_{m}} \approx x^{(i)}_{k} \approx x^{(i)}_{k+k_{m}}, hence real-time localization. Moreover, ITS standards nowadays impose more stringent criteria on bandwidth, security, and latency, enabling cooperative localization to be implemented in relatively high-speed applications and dense airspaces, with a negligible computational burden on the aero-vehicle controllers. Additionally, interoperability standards in modern ITS enable large connected networks with sufficient connectivity and low topological diameters, which are all very favourable characteristics for cooperative localization.
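A minimal sketch of the iteration above, under the stated quasi-static assumption; the fully connected topology, scalar 1-D positions, all numerical values, and the signed relative-position convention for \omega ^{(i,j)}_{k} are assumptions of this sketch rather than part of the proposed standard.

```python
import numpy as np

def cooperative_update(xhat, omega, gamma, Ts, neighbors):
    """One simplified discrete iteration:
    xhat^(i) <- xhat^(i) + gamma*Ts * sum_{j in N_i} (xhat^(j) - xhat^(i) + omega^(i,j))."""
    new = xhat.copy()
    for i, Ni in enumerate(neighbors):
        new[i] += gamma * Ts * sum(xhat[j] - xhat[i] + omega[i, j] for j in Ni)
    return new

rng = np.random.default_rng(3)
N, Ts, km = 6, 0.05, 30                      # agents, time-step, iteration budget
x_true = rng.uniform(0.0, 100.0, N)          # quasi-static over km steps (assumed)
xhat = x_true + rng.normal(0.0, 1.5, N)      # initial odometry estimates
neighbors = [[j for j in range(N) if j != i] for i in range(N)]  # fully connected

# omega^(i,j): signed 1-D relative position plus range-finder noise.
omega = x_true[:, None] - x_true[None, :] + rng.normal(0.0, 0.02, (N, N))
np.fill_diagonal(omega, 0.0)

gamma = 1.0 / (N * Ts)                       # keeps gamma*Ts*lambda_max <= 1: stable
for _ in range(km):
    xhat = cooperative_update(xhat, omega, gamma, Ts, neighbors)

print("absolute steady-state errors:", np.round(np.abs(xhat - x_true), 3))
```

SECTION IX.
Conclusion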
The present work has addressed GNSS-independent localization through a multi-agent system of cooperating agents, where the agents correct their perceived positions with reference to the perceptions of their neighbouring agents through consensus, and the relative ranges to their neighbours sensed by their onboard range-finders (i.e., LiDARs, vision-cameras, etc.). It has been shown that perfect localization is possible if the network is sufficiently large and well connected. Cooperative localization is also advantageous for resiliency against false data injection cyberattacks on certain sensors, through a formation referred to as a “cyberattack shield”, where miniature suicidal drones can be deployed to protect a fleet of ground or marine vehicles against cyberattacks by absorbing the attack intrusion vector. However, consensus in the information exchange system does not converge when sensor error is present, which causes instability in the localization process. In real-world scenarios, where sensor errors are inevitable, a “relative consensus” is considered instead, in which the exchange of information among the agents takes place within a certain time interval. Optimization of the duration and the consensus speed of the MAS yields important results for minimizing the network positioning error in the localization process, and has been a focus of the present work. In order to evaluate the performance of cooperative localization, a minimum-mean-square-error (MMSE) based optimization has been developed for the MAS, and it is shown that cooperative localization reduces the network error (at least) by a factor of N/2, where N is the number of agents.
Case studies and simulations have verified the theoretical results achieved in this work, and have shown that cooperative localization can offer remarkable advantages in reducing positioning error when the network is large and well connected. It has been found that the topology of the multi-agent network has a direct impact on the performance of cooperative localization, and consequently on accurate localization. Range-finder errors are also proven to have a remarkable impact on the performance of the process. Therefore, it is advisable that further work be directed at alternative consensus protocols, in an attempt to facilitate cooperative localization in radial networks, or in networks where connectivity is insufficient. It is also beneficial to investigate methodologies that may alleviate range-finder errors.