Introduction
Mobile Wireless Sensor Networks (MWSNs) currently play a very important role in communication networks [1], since they are used in a large number of applications such as tracking [2], [3], the Internet of Things (IoT) [4], real-time location [5], [6], natural resources research [7], power consumption systems [8], monitoring of physical environments [6], [9]–[12], traffic monitoring [13], industry and agriculture [7], healthcare [6], prevention of natural disasters [12], etc. A MWSN consists of a network of nodes spatially distributed over a monitoring area, where the nodes can be integrated into vehicles or robots that move in a given environment [14]. Nodes are small, low-cost devices with low processing capacity and low power consumption. Their tasks include the collection, processing and transmission of information, as well as cooperation with other nodes [1], [5].
Localization is one of the main problems in Wireless Sensor Networks (WSNs), since it provides useful information about the location of an event. Location information is useful for a large number of applications such as routing [3], [6], [15], health surveillance [1], [6], battlefield surveillance [6], [9]–[11], underwater environments [16], target tracking [1], [3], logistics, power consumption [8], spatial querying [3], load balancing [17], rescue operations [6], [18], [19], etc. In reconfigurable networks, the information collected by a node is transmitted through multiple nodes (through the use of multiple hops) until it reaches the access points [20], [21]. In ad-hoc networks, the nodes between the Node of Interest (NOI) and the Reference Nodes (RNs) help establish the communication between the NOI and its respective RNs to estimate the NOI’s location [20], [21]. The RNs are nodes whose position is known, either because they are equipped with a Global Positioning System (GPS) or because they are deployed in strategic places with a known location. GPS can estimate a node’s position, but its high cost, poor indoor performance and high power consumption make it a poor fit [22]. Likewise, cell phone and WiFi systems do not perform well in scenarios such as highlands, underground areas and disaster zones, where satellite signals or signals from the mobile infrastructure cannot be received [23].
At present, there is a great variety of localization algorithms that do not consider environments with mobile nodes. In localization with static nodes, the NOI is located only once. In contrast, in a MWSN the NOI is continuously localized due to its own mobility [24]. The node’s mobility in a MWSN implies greater energy consumption, a shorter lifetime of the node and an increased communication cost [24]. Some advantages of MWSNs over a WSN with static nodes are greater coverage in the network, a greater number of nodes neighboring the NOI, better network security and increased network connectivity [24].
Most localization algorithms in the literature use techniques based on Received Signal Strength (RSS) to estimate the distance between two nodes, because the required hardware is relatively straightforward to implement and deploy, and RSS-based estimation has low computational complexity. However, RSS-based localization exhibits low accuracy in locating the NOI, mainly due to signal propagation issues. Some applications demand greater localization accuracy, such as vehicular networks [25], underwater environments [16] and 3D WSNs [26], among many others. Thus, the contribution of this work focuses on increasing the localization accuracy through distance estimation based on Time of Arrival (ToA), and on integrating a correcting factor to decrease the error of the estimated distance used to determine the location of the NOI.
In MWSNs, there are three mobility scenarios [24], [27]: (1) static RNs and moving sensor nodes, (2) static sensor nodes and moving RNs, and (3) moving RNs and moving sensor nodes. This study uses the first mobility scenario, where we assume that the RNs are static and their positions are known. Additionally, it is assumed that the localization of the sensor nodes is done only once. The performance of the range-based algorithms is evaluated under this scenario, using techniques such as the Multilateration Algorithm (MA), Weighted Multilateration Algorithm (WMA), Maximum Likelihood Algorithm (MLA) and the MA with a Correcting Factor (CF), i.e., MA CF. The proposed WMA CF algorithm is also presented; it consists of the WMA algorithm together with the calculation of a correcting factor for the distance separating the NOI from its respective RNs. The correcting factor improves the accuracy of the NOI’s localization, which is why our proposed WMA CF algorithm outperforms the other algorithms analyzed in the different evaluation scenarios proposed. We use ToA to determine the distance separating the NOI from the RNs. Furthermore, we consider that the estimated distance separating the NOI from the RNs is affected by a random variable with beta distribution due to the NOI’s mobility; this distribution is obtained through several simulations of the NOI’s motion, varying its speed and direction at different points in time. The localization algorithms analyzed in this study are evaluated in single-hop and multi-hop scenarios with different RN distributions: a fixed distribution, where the RNs are placed according to a regular geometric pattern, and a random distribution. The algorithms are evaluated under the normalized Root Mean Squared Error (RMSE) performance metric.
The proposed algorithm, like the rest of the localization algorithms analyzed in this work, presents low performance in terms of normalized RMSE in scenarios where there is a low density of nodes in the network, low network coverage, a small number of reference nodes (less than 3), and an irregular geometrical distribution of the reference nodes. All the algorithms analyzed have been evaluated under the same conditions where 100% connectivity of the nodes is guaranteed, with at least 3 RNs.
The contributions of this paper are: 1) Performance assessment of the analyzed ToA-based localization algorithms in terms of normalized RMSE in single-hop and multi-hop scenarios. 2) Evaluation of the analyzed localization algorithms on a network with mobile nodes, considering that the RNs are static and the NOI is in motion. 3) Estimation of the probability density function (pdf) of the estimated distance between the mobile NOI and the respective RNs. 4) A proposed range-based localization algorithm using ToA together with a correcting factor to decrease the error of the estimated distance between the NOI and the respective RNs.
The rest of the article is organized as follows: Section II presents work related to mobility and the classification of localization algorithms in MWSNs; Section III describes the localization problem in a network with mobile nodes; Section IV presents the analysis of the MA, WMA and MLA localization algorithms; Section V presents the analysis of the correcting factor for the distance between two nodes in single-hop and multi-hop scenarios; Section VI presents the results of the analyzed localization algorithms; and finally we present the conclusions drawn from this study.
Related Work
Nowadays, MWSNs are considered for large-scale applications, which consist of a great number of sensor nodes and sinks wirelessly connected through an arbitrary topology [1]. Therefore, mobility plays an important role in MWSNs, and it can apply to all of a MWSN’s sensors depending on the application [1], [3]. Mobility in a MWSN is divided into three categories: random mobility, predictable mobility and controlled mobility [3]. In random mobility, mobile devices move freely and randomly over an area of interest with no constraint. In predictable mobility, the trajectory of the mobile device is known and cannot be altered. In controlled mobility, the mobile device moves to a known destination following a mobility pattern for a common aim, usually exploration and localization. Many current proposals consider mobility models that predict the motion of a sensor node [3]. In MWSNs, mobility models predict the trajectory of a moving sensor node [1], [28]. Mobility models describe the speed changes, acceleration and position of a sensor node with respect to time, and they are often used to investigate new communication and navigation techniques.
Mobility patterns are classified as trace models and syntactic models [1], [29]. Trace models are deterministic mobility patterns observed in real life. In WSNs, trace models cannot be used unless traces have already been collected. Therefore, in MWSNs it is necessary to use syntactic models to describe the sensor node’s mobility pattern. Syntactic models describe the sensor node’s realistic movement without relying on traces. Syntactic models are classified as entity models and group mobility models [1], [29]. According to their specific features, syntactic models can be classified as random models, time-dependent models, space-dependent models and models with geographic constraints. Some mobility patterns based on entity mobility are random waypoint, random walk, random Gauss-Markov, city section, random direction, boundless simulation area and the probabilistic version of random walk [1], [30]. The mobility patterns based on group mobility are exponential correlated random, column mobility model, nomadic community, pursue mobility model, Reference Point Group Mobility (RPGM), drift group and group force [1], [30].
In MWSNs, localization algorithms are classified into two broad groups: range-free and range-based [31], [32]. The range-based algorithms estimate the separation distance between the RNs and the NOI by means of a distance-estimation technique such as ToA, RSS, Time Difference of Arrival (TDoA) or Angle of Arrival (AoA) [21]. The range-free algorithms use the connectivity information between the nodes to estimate the separation distance between two nodes [21], [33]. The range-based algorithms achieve higher accuracy in the localization of the NOI than the range-free algorithms, but they require extra hardware in the NOI or the RNs for the estimation [21], [33]. In many studies RSS is used to estimate the distance between the RNs and the NOI, because RSS can easily be implemented in hardware, but at the cost of lower accuracy than that obtained with the ToA, TDoA and AoA techniques [21], [34]. ToA requires perfect synchronization, TDoA has limited coverage, and AoA involves computationally expensive hardware and also requires an antenna array [21].
Some range-free localization algorithms are the centroid and weighted centroid [21]; Distance Vector-Hop (DV-Hop), Improved DV-Hop (IDV-Hop) and Weighted DV-Hop (WDV-Hop) [33]; Approximate Point in Triangle (APIT) [35], [36]; and circular intersection, rectangular and hexagonal [37], among others. In MWSNs, most range-free localization algorithms use the Sequential Monte Carlo (SMC) method to estimate the NOI’s position [24], [38]. The Monte Carlo method uses the probability density function (pdf) to estimate the NOI’s position in three stages: initialization, prediction and filtering [38]. In [12], [39], the authors propose the Weighted Monte Carlo Localization (WMCL) algorithm, which is based on the SMC method [38]. This proposal improves the accuracy of the NOI localization compared to that of the DV-Hop [33] and SMC [38] methods. The WMCL method reduces the sampling area where the NOI is found by using the bounded-box method [38], and it improves the localization efficiency of the SMC method by using the position information of the RNs’ neighboring nodes. In [40], the improved Probabilistic Multilateration Algorithm (PMA) achieves better normalized RMSE than the other algorithms analyzed when the number of RNs and the proportion of noise vary, for different RN topologies in single-hop and multi-hop network scenarios. The improved PMA [40] performs better in terms of normalized RMSE because it computes the NOI’s localization based on a correlation matrix that accounts for the noisy environment. Moreover, this method uses a constant parameter called the damping factor, which improves the convergence of the estimation of the NOI’s position, providing the solution that minimizes the localization error. The hop-distance method uses the average distance per hop between two RNs to estimate the position of the NOI [38].
Three stages are carried out in this method to estimate the NOI’s position: broadcast, calculation of the distance matrix, and localization estimation [38]. The disadvantage of the hop distance method is that the RNs must be evenly distributed throughout the whole network to reach high accuracy in the estimation of the NOI’s position. In [38] the fingerprint technique is used to estimate the NOI’s position; the fingerprint technique performs the NOI localization in two stages: an offline stage and an online stage.
Within the literature on range-based algorithms we can mention DV-Distance [41], multilateration [42], Multidimensional Scaling (MDS) [43], the hyperbolic positioning algorithm [21], the weighted hyperbolic positioning algorithm [21], [33], the circular and weighted circular positioning algorithms [21], Weighted Least-Squares (WLS) multilateration [21], Least-Squares DV-Hop (LSDV-Hop) [44], vertex projection [20], vertex projection with correcting factor, and maximum likelihood [20]. In MWSNs, Bergamo and Mazzimi [45] propose a range-based algorithm that uses the positions of RNs placed on two corners of the same side of a rectangular space. The mobile NOI measures the RSS of the RNs and estimates its position through triangulation. The localization accuracy of this algorithm is affected by signal fading and the mobility of the NOI. Because the RNs remain static, the localization of the mobile NOI is limited, given that the RSS decreases as the NOI moves away from its respective RNs; therefore, the estimates of the distance between the mobile NOI and its respective RNs are imprecise [45]. In [24], the authors propose the dead reckoning algorithm, which estimates the NOI’s position at discrete time intervals called checkpoints. Dead reckoning carries out the estimation of the NOI localization in two stages: initialization and sequent. In the initialization stage, the NOI is localized by means of trilateration. In the sequent stage, only two RNs are used to localize the NOI, and two possible NOI localizations are obtained through Bézout’s theorem [46].
One of the problems with the DV-Hop algorithm is the growth of the cumulative error of the average distance per hop as the number of hops in the network increases. The APIT method imposes a high computational cost on the network [33], and the MDS algorithm also has a high computational cost because it is a centralized algorithm. The classic multilateration algorithm and the hyperbolic algorithm solve the localization problem with a Least Squares (LS) estimator but do not account for noise when estimating the position of the NOI. As a consequence, these algorithms can present significant errors in the estimated position of the NOI in two situations: first, in scenarios where the noise level is high, and second, in scenarios with a small number of RNs distributed with an irregular geometry. Additionally, in situations where the distances between the NOI and the RNs are not available, the multilateration algorithm suffers from uncertainty, inconsistency and ambiguity [47]. WLS multilateration solves the localization problem with a WLS estimator but involves a higher computational cost than the classic multilateration algorithm and the weighted hyperbolic positioning algorithm, the latter being an iterative algorithm that calculates the position of the NOI with the minimum localization error [21]. The Monte Carlo method requires many iterations and excessive computational time during the sample generation stage [24].
In [24], the authors propose two classes of localization algorithms for MWSNs: adaptive and predictive. The adaptive localization algorithms carry out the localization of the NOI at constant time intervals based on the NOI’s movement, where the estimation of the NOI’s current position is obtained from previous estimations. This method allows the NOI to increase its localization frequency when it moves rapidly or to reduce its localization frequency when its movement is sluggish. The predictive algorithms estimate the NOI’s movement pattern and predict its future movement. The main aim of this method is to consider the frequency of the NOI’s localization instead of the localization algorithm.
The authors in [48] propose the localization scheme called Vehicles joint UAVs Topology Discovery (VUTD) for IoT applications. This scheme finds the physical topology of a network with low cost and high accuracy. Experimental results show that the VUTD performs better than the VTD algorithm in terms of the average localization error and localization ratio. Compared to the UTD algorithm, the VUTD localization scheme reduces the cost of localization discovery by 77.7%. In [49], a classification of range-free and range-based location techniques in underwater environments (UWSN) is presented together with the main weaknesses and strengths of the location algorithms analyzed. In contrast, the authors in [50] propose the DEIDV-Hop algorithm, which decreases the error of the average distance per network hop. Experimental results show that this algorithm has lower average localization error with more stability and convergence speed than the DV-Hop, PSO, and GSODV-Hop algorithms for different network topologies. In [51] a particle filtering-based localization algorithm is proposed that achieves high target tracking accuracy and a favorable balance with respect to network accuracy and consumption compared to other algorithms analyzed in the study. Reference [52] proposes an improvement of the DV-Hop algorithm based on an online sequential position computation and the optimized calculation of the average distance per hop. Their results show that the online sequential DV-Hop method performs better in terms of localization error than DV-Hop, CC-DV-Hop, and the Parallel Efficient Projection Algorithm (PEPA) for various random WSN topologies. Reference [53] introduces the proposed weighting DV-Hop localization algorithm using modified artificial bee colony optimization, which has less node localization error than the DV-Hop AW, HW, and EW algorithms. 
Previous proposals [48], [50], [52], [53] use RSS and node connectivity information to estimate the distance between the NOI and RNs.
The localization algorithms evaluated in this article use ToA, resulting in greater NOI localization accuracy in terms of normalized RMSE than that of other localization proposals based on network connectivity information and RSS. However, estimating the ToA requires greater hardware complexity.
Reference [33] presents a comparison of the MA (hyperbolic positioning) and WMA (weighted hyperbolic positioning) localization algorithms, where WMA shows better performance than MA in terms of accuracy, measured by the MSE, and precision, based on the localization error distribution. Additionally, reference [20] presents a performance comparison, in terms of normalized RMSE, of the Vertex Projection Algorithm (VPA), Maximum Likelihood (ML) and the proposed VPA with correcting factor, where the proposed method performs better than the VPA and ML algorithms in single-hop and multi-hop scenarios.
Model Description
This section describes the localization scenario in MWSNs, where it is assumed that the RNs are static with known positions and the NOI is moving. In this scenario, localization is described with respect to a reference coordinate system defined by the RNs, while the sensors’ positions are unknown and are determined by applying a localization algorithm. The algorithms analyzed in this work are range-based and use ToA to increase the localization accuracy. In mobile scenarios, estimating the distance between the NOI and the RNs is crucial, because the localization algorithms operate on that estimate. Thus, the greater the error in the estimated distance between the NOI and the RNs, the greater the error in the NOI localization, and new approaches are needed to reduce the error of the estimated distance. This work presents an algorithm that calculates a correcting factor for the estimated distance between the NOI and the RNs.
Figure 1(a) shows the mobility scenario in a WSN, where the NOI, identified as node $\mathbf{Z}$, moves over the monitoring area while the RNs remain static.
The separation distance $D_{t}$ between RN $\mathbf{A}$, located at $(x_{A}, y_{A})$, and the NOI, located at $(x_{t}, y_{t})$ at time $t$, is \begin{equation*} D_{t}=\sqrt {\left [{ x_{A}-x_{t} }\right]^{2}+\left [{ y_{A}-y_{t} }\right]^{2}}.\tag{1}\end{equation*}
Replacing the movement equations of the NOI, $x_{t}=x_{t-1}+v_{t-1}\Delta T_{t-1}\cos(\theta_{t-1})$ and $y_{t}=y_{t-1}+v_{t-1}\Delta T_{t-1}\sin(\theta_{t-1})$, in equation (1), we obtain \begin{align*} D_{t}=\sqrt {\begin{array}{l} \left [{ x_{A}-x_{t-1}-{v_{t-1}\Delta T}_{t-1}\cos \left ({\theta _{t-1} }\right) }\right]^{2} \\ +\left [{ y_{A}-y_{t-1}-{v_{t-1}\Delta T}_{t-1}\sin \left ({\theta _{t-1} }\right) }\right]^{2} \\ \end{array}}.\tag{2}\end{align*}
Note that $D_{t}$ depends on the NOI’s previous position as well as on its speed $v_{t-1}$ and direction $\theta_{t-1}$ during the interval $\Delta T_{t-1}$.
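The motion model of equation (2) can be simulated with a minimal sketch (assuming, purely for illustration, uniformly distributed random speed and direction at each interval; the function name and parameters are hypothetical, not from this work):

```python
import math
import random

def simulate_distances(x_a, y_a, x0, y0, steps=1000, dt=1.0, v_max=2.0, seed=0):
    """Move the NOI with a random speed v_{t-1} and direction theta_{t-1}
    at each interval (equation (2)) and record its distance D_t to a
    static RN A located at (x_a, y_a)."""
    rng = random.Random(seed)
    x, y = x0, y0
    distances = []
    for _ in range(steps):
        v = rng.uniform(0.0, v_max)            # random speed v_{t-1}
        theta = rng.uniform(0.0, 2 * math.pi)  # random direction theta_{t-1}
        x += v * dt * math.cos(theta)          # movement equations of the NOI
        y += v * dt * math.sin(theta)
        distances.append(math.hypot(x_a - x, y_a - y))
    return distances
```

Repeating such runs while varying the speed and direction produces the empirical distribution of the estimated distance discussed below.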
Figure: Q-Q plots of the beta distribution fit.
Figure 2(a) shows the pdf of the parameter obtained from the motion simulations, which is well fitted by a beta distribution.
Hence, in a mobility environment we assume that the individual distance between the NOI and each RN is affected by a random variable $X$ with beta distribution, whose pdf is \begin{equation*} f_{X}\left ({x }\right)=\frac {1}{\mathrm {B}\left ({\alpha,\beta }\right)}x^{\alpha -1}{(1-x)}^{\beta -1},\quad 0< x< 1,\tag{3}\end{equation*}
where $\mathrm{B}(\alpha,\beta)$ is the beta function. The mean and variance of $X$ are \begin{align*} \mathrm {E}\left [{ X }\right]=&\frac {\alpha }{\alpha +\beta }, \tag{4}\\ \mathrm {Var}\left [{ X }\right]=&\frac {\alpha \beta }{\left ({\alpha +\beta }\right)^{2}\left ({\alpha +\beta +1 }\right)}.\tag{5}\end{align*}
Assuming the random variable $X$ is scaled to the interval $(0, x_{m})$, its pdf becomes \begin{align*} f_{X}\left ({x }\right)= &\frac {1}{x_{m}\mathrm {B}\left ({\alpha,\beta }\right)}\left ({\frac {x}{x_{m}} }\right)^{\alpha -1}\left ({1-\frac {x}{x_{m}} }\right)^{\beta -1}, \\& \qquad \qquad \qquad \qquad \qquad \qquad \qquad 0< x< x_{m}.\tag{6}\end{align*}
Therefore, the statistics of the scaled random variable $X$ are \begin{align*} \mathrm {E}\left [{ X }\right]=&\frac {\alpha x_{m}}{\alpha +\beta }, \tag{7}\\ \mathrm {Var}\left [{ X }\right]=&\frac {\alpha \beta x_{m}^{2}}{\left ({\alpha +\beta }\right)^{2}\left ({\alpha +\beta +1 }\right)}.\tag{8}\end{align*}
By means of equations (7)–(8), the parameters $\alpha$ and $\beta$ of the random variable $X$ can be estimated from the sample mean and variance obtained in the simulations.
We can observe that in the instant of time
Taking RN $\mathbf{A}$ as the reference, the estimated distance $\delta_{A}$ between RN $\mathbf{A}$ and NOI $\mathbf{Z}$ through $n_{A}$ hops is \begin{equation*} \delta _{A}=\sum \limits _{j=1}^{n_{A}-1} {D\left ({\mathbf {A}_{j-1},\mathbf {A}_{j} }\right)+D\left ({\mathbf {A}_{n_{A}-1},\mathbf {Z} }\right),}\tag{9}\end{equation*}
where $D(\cdot,\cdot)$ denotes the estimated per-hop distance. In terms of the true distances $d(\cdot,\cdot)$ and the accumulated ranging error $\varepsilon_{A}$, equation (9) can be written as \begin{equation*} \delta _{A}=\sum \limits _{j=1}^{n_{A}-1} {d\left ({\mathbf {A}_{j-1},\mathbf {A}_{j} }\right)+d\left ({\mathbf {A}_{n_{A}-1},\mathbf {Z} }\right)+\varepsilon _{A}}\tag{10}\end{equation*}
and, including the error $\varphi_{A}$ caused by the NOI’s mobility, \begin{equation*} \delta _{A}=\sum \limits _{j=1}^{n_{A}-1} {d\left ({\mathbf {A}_{j-1},\mathbf {A}_{j} }\right)+d\left ({\mathbf {A}_{n_{A}-1},\mathbf {Z} }\right)+\varepsilon _{A}+\varphi _{A}.}\tag{11}\end{equation*}
The beta random variable $\varphi_{A}$ models the additional error in the estimated distance caused by the NOI’s mobility.
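The multi-hop model of equations (9)–(11) can be sketched as follows (a simplified illustration: `ranging_error` stands for one realization of the combined error $\varepsilon_{A}+\varphi_{A}$ and is an assumed input, not part of the paper’s notation):

```python
import math

def multihop_estimate(path, ranging_error=0.0):
    """Estimated distance delta_A along the path
    RN A = path[0] -> A_1 -> ... -> A_{n_A - 1} -> NOI Z = path[-1]
    (equation (9)): the sum of the per-hop distances, perturbed by the
    accumulated error of equations (10)-(11)."""
    true_sum = sum(math.dist(path[j - 1], path[j]) for j in range(1, len(path)))
    return true_sum + ranging_error
```

With a zero error the estimate reduces to the sum of the true hop distances; the error terms make $\delta_{A}$ overestimate that sum, which motivates the correcting factor developed later.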
Localization Algorithms Analyzed
This section presents the analysis of the localization algorithms MA, MLA and WMA, considering the NOI’s mobility in the one-hop and multi-hop scenarios.
A. Multilateration Algorithm (MA)
Taking Figure 4 as our reference, we can estimate the position $(\tilde{x}, \tilde{y})$ of NOI $\mathbf{Z}$ from the estimated distances $\delta_{i}$ to the $N$ RNs located at $(x_{i}, y_{i})$: \begin{equation*} \delta _{i}^{2}=\left ({x_{i}-\tilde {x} }\right)^{2}+\left ({y_{i}-\tilde {y} }\right)^{2},\quad i=1,2,\ldots,N.\tag{12}\end{equation*}
Subtracting the equation for $i=1$ from each of the remaining equations in (12) eliminates the quadratic terms in $\tilde{x}$ and $\tilde{y}$: \begin{align*}& 2\tilde {x}x_{i}+2\tilde {y}y_{i}-2\tilde {x}x_{1}-2\tilde {y}y_{1} \\& \qquad \qquad \qquad \qquad \quad =x_{i}^{2}+y_{i}^{2}-x_{1}^{2}-y_{1}^{2}-\delta _{i}^{2}+\delta _{1}^{2}.\tag{13}\end{align*}
By stacking the equations in (13) for $i=2,\ldots,N$ in matrix form, we obtain \begin{align*}& \left [{ {\begin{array}{cc} x_{2}-x_{1} & y_{2}-y_{1}\\ \vdots & \vdots \\ x_{\mathrm {N}}-x_{1} & y_{\mathrm {N}}-y_{1}\\ \end{array}} }\right]\left [{ {\begin{array}{c} \tilde {x}\\ \tilde {y}\\ \end{array}} }\right] \\& \qquad \qquad \quad =\frac {1}{2}\left [{ {\begin{array}{c} x_{2}^{2}+y_{2}^{2}-x_{1}^{2}-y_{1}^{2}-\delta _{2}^{2}+\delta _{1}^{2}\\ \vdots \\ x_{\mathrm {N}}^{2}+y_{\mathrm {N}}^{2}-x_{1}^{2}-y_{1}^{2}-\delta _{\mathrm {N}}^{2}+\delta _{1}^{2}\\ \end{array}} }\right].\tag{14}\end{align*}
Then the linear problem can be formulated as \begin{equation*} \mathbf {H}\tilde {\mathbf {p}}=\mathbf {b},\tag{15}\end{equation*}
where $\mathbf{H}$ is the coefficient matrix in (14), $\tilde{\mathbf{p}}=[\tilde{x}, \tilde{y}]^{T}$ and \begin{align*} \mathbf {b}=\frac {1}{2}\left [{ {\begin{array}{c} x_{2}^{2}+y_{2}^{2}-x_{1}^{2}-y_{1}^{2}-\delta _{2}^{2}+\delta _{1}^{2}\\ \vdots \\ x_{\mathrm {N}}^{2}+y_{\mathrm {N}}^{2}-x_{1}^{2}-y_{1}^{2}-\delta _{\mathrm {N}}^{2}+\delta _{1}^{2}\\ \end{array}} }\right].\tag{16}\end{align*}
Finally, the position $\tilde{\mathbf{p}}$ of the NOI is obtained through the LS estimator \begin{equation*} \tilde {\mathbf {p}}=\left ({\mathbf {H}^{T}\mathbf {H} }\right)^{-1}\mathbf {H}^{T}\mathbf {b}.\tag{17}\end{equation*}
Equation (17) shows that the position of the NOI can be computed in closed form, provided that $\mathbf{H}^{T}\mathbf{H}$ is invertible, which requires at least three non-collinear RNs.
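A minimal numerical sketch of the estimator in equations (14)–(17) (NumPy-based; the function name is illustrative):

```python
import numpy as np

def multilateration(anchors, deltas):
    """Estimate the NOI position via the LS solution of equation (17).
    anchors: (N, 2) array of RN positions; deltas: N estimated distances."""
    anchors = np.asarray(anchors, dtype=float)
    deltas = np.asarray(deltas, dtype=float)
    x1, y1 = anchors[0]
    H = anchors[1:] - anchors[0]                       # rows (x_i - x_1, y_i - y_1)
    b = 0.5 * (np.sum(anchors[1:] ** 2, axis=1)
               - x1 ** 2 - y1 ** 2 - deltas[1:] ** 2 + deltas[0] ** 2)
    p, *_ = np.linalg.lstsq(H, b, rcond=None)          # LS solution of H p = b
    return p  # estimated (x~, y~)
```

With noiseless distances the estimator recovers the exact position; with noisy distances it returns the LS fit.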
B. Maximum Likelihood Algorithm (MLA)
Taking Figure 4 as our reference and using the same notation as in equation (11), we can obtain the Cumulative Distribution Function (CDF) of the estimated distance $\delta_{A}$: \begin{align*} F_{\delta _{A}}\left ({v }\right)=&Pr \left ({\varepsilon _{A}+\varphi _{A}\le v-d\left ({\mathbf {A}_{0},\mathbf {Z} }\right) }\right), \\=&F_{\varepsilon _{A}+\varphi _{A}}\left ({v-d\left ({\mathbf {A}_{0},\mathbf {Z} }\right) }\right).\tag{18}\end{align*}
The pdf of the estimated distance $\delta_{A}$ is therefore \begin{equation*} f_{\delta _{A}}\left ({v }\right)=f_{\varepsilon _{A}+\varphi _{A}}\left ({v-d\left ({\mathbf {A}_{0},\mathbf {Z} }\right) }\right).\tag{19}\end{equation*}
From equation (19) we can observe that the pdf of the estimated distance $\delta_{A}$ is the convolution of the pdfs of $\varepsilon_{A}$ and $\varphi_{A}$: \begin{equation*} f_{\delta _{A}}\left ({v }\right)=\int _{-\infty }^\infty {f_{\varphi _{A}}\left ({\tau }\right)f_{\varepsilon _{A}}\left ({v-d\left ({\mathbf {A}_{0},\mathbf {Z} }\right)-\tau }\right)d\tau.}\tag{20}\end{equation*}
where $\varphi_{A}$ follows the scaled beta distribution and $\varepsilon_{A}$ follows an Erlang distribution with $n_{A}$ stages: \begin{align*}& f_{\varphi _{A}}\left ({\tau }\right) \\ &\,\,=\frac {\lambda _{AZ}\left ({\tau \lambda _{AZ} }\right)^{\alpha -1}\left ({1-\tau \lambda _{AZ} }\right)^{\beta -1}}{\mathrm {B}\left ({\alpha,\beta }\right)}, \\&\quad 0< \tau < \mu _{AZ}, \tag{21}\\ & f_{\varepsilon _{A}}\left ({v-d\left ({\mathbf {A}_{0},\mathbf {Z} }\right)-\tau }\right) \\ &\,\, = \frac {\lambda _{A}^{n_{A}}e^{-\lambda _{A}\left ({v-d\left ({\mathbf {A}_{0},\mathbf {Z} }\right)-\tau }\right)}}{\left ({n_{A}-1 }\right)!}\cdot \\& \quad \left ({v-d\left ({\mathbf {A}_{0},\mathbf {Z} }\right)-\tau }\right)^{n_{A}-1},\quad v,~\lambda _{A}\ge 0,\tag{22}\end{align*}
The likelihood function of the estimated distance $\delta_{A}$ is \begin{align*}&\hspace {-1.2pc}f_{L}\left ({\delta _{A} }\right) \\[-1pt]=&f_{\varepsilon _{A}}\left ({v-d\left ({\mathbf {A}_{0},\mathbf {Z} }\right) }\right)f_{\varphi _{A}}\left ({v-d\left ({\mathbf {A}_{0},\mathbf {Z} }\right) }\right), \tag{23}\\[-1pt]&\hspace {-1.2pc}f_{L}\left ({\delta _{A} }\right) \\[-1pt]=&\frac {\lambda _{A}^{n_{A}}e^{-\lambda _{A}\left [{ \delta _{A}-d\left ({\mathbf {A}_{0},\mathbf {Z} }\right) }\right]}\left [{ \delta _{A}-d\left ({\mathbf {A}_{0},\mathbf {Z} }\right) }\right]^{n_{A}-1}}{\left ({n_{A}-1 }\right)!}\cdot \\[-1pt]&\frac {\lambda _{AZ}^{\alpha }\left ({\delta _{A}\!-\!d\left ({\mathbf {A}_{0},\mathbf {Z} }\right) }\right)^{\alpha -1}\left ({1\!-\!\lambda _{AZ}\left [{ \delta _{A}-d\left ({\mathbf {A}_{0},\mathbf {Z} }\right) }\right] }\right)^{\beta -1}}{\mathrm {B}\left ({\alpha,\beta }\right)}. \\ {}\tag{24}\end{align*}
By applying the log-likelihood in equation (24), we obtain \begin{align*} \mathrm {ln}~f_{L}\left ({\delta _{A} }\right)=&\mathrm {ln}\left ({\frac {\lambda _{A}^{n_{A}}\lambda _{AZ}^{\alpha }}{\mathrm {B}\left ({\alpha,\beta }\right)\left ({n_{A}-1 }\right)!} }\right)-\lambda _{A}\left [{ \delta _{A}-d\left ({\mathbf {A}_{0},\mathbf {Z} }\right) }\right] \\[-1pt]&+\left ({n_{A}-1 }\right)\mathrm {ln}\left [{ \delta _{A}-d\left ({\mathbf {A}_{0},\mathbf {Z} }\right) }\right]+\left ({\alpha -1 }\right) \\[-1pt]&\times \,\mathrm {ln}\left [{ \delta _{A}-d\left ({\mathbf {A}_{0},\mathbf {Z} }\right) }\right] \\[-1pt]&+\left ({\beta -1 }\right)\mathrm {ln}\left [{ 1-\lambda _{AZ}\left ({\delta _{A}-d\left ({\mathbf {A}_{0},\mathbf {Z} }\right) }\right) }\right].\tag{25}\end{align*}
By maximizing the log-likelihood function of equation (25) with respect to $\delta_{A}$, we obtain \begin{align*} \frac {\partial \mathrm {ln}\,f_{L}\left ({\delta _{A} }\right)}{\partial \delta _{A}}=&-\lambda _{A}+\frac {n_{A}-1}{\delta _{A}-d\left ({\mathbf {A}_{0},\mathbf {Z} }\right)}+\frac {\alpha -1}{\delta _{A}-d\left ({\mathbf {A}_{0},\mathbf {Z} }\right)} \\[-1pt]&-\, \frac {\lambda _{AZ}\left ({\beta -1 }\right)}{1-\lambda _{AZ}\left ({\delta _{A}-d\left ({\mathbf {A}_{0},\mathbf {Z} }\right) }\right)}=0, \tag{26}\\[-1pt]&\hspace {-2pc}\frac {\beta -1}{\delta _{A}-d\left ({\mathbf {A}_{0},\mathbf {Z} }\right)-{1/\lambda }_{AZ}}+\frac {n_{A}+\alpha -2}{\delta _{A}-d\left ({\mathbf {A}_{0},\mathbf {Z} }\right)}=\lambda _{A}. \\ {}\tag{27}\end{align*}
From equation (27) we obtain a second-order equation, whose solution is given by \begin{align*} \delta _{A}= &d\left ({\mathbf {A}_{0},\mathbf {Z} }\right) \\& \qquad +\frac {k_{A}\pm \sqrt {k_{A}^{2}-4\left ({\lambda _{A} \mathord {\left /{ {\vphantom {\lambda _{A} \lambda _{AZ}}} }\right. } \lambda _{AZ} }\right)\left ({n_{A}+\alpha -2 }\right)}}{2\lambda _{A}},\tag{28}\end{align*}
Selecting the negative root of equation (28) and expressing $d\left({\mathbf{A}_{0},\mathbf{Z}}\right)$ as the sum of the per-hop distances, we obtain \begin{align*} \delta _{A}= &\sum \limits _{j=1}^{n_{A}-1} {d\left ({\mathbf {A}_{j-1},\mathbf {A}_{j} }\right)+d\left ({\mathbf {A}_{n_{A}-1},\mathbf {Z} }\right)+\frac {k_{A}}{2\lambda _{A}}} \\& \qquad \qquad \quad -\frac {\sqrt {k_{A}^{2}-4\left ({\lambda _{A} /\lambda _{AZ} }\right)\left ({n_{A}+\alpha -2 }\right)}}{2\lambda _{A}}. \tag{29}\end{align*}
Solving equation (29) for the term $d\left({\mathbf{A}_{n_{A}-1},\mathbf{Z}}\right)$, we obtain \begin{align*} d\left ({\mathbf {A}_{n_{A}-1},\mathbf {Z} }\right)= &\delta _{A}-\sum \limits _{j=1}^{n_{A}-1} {d\left ({\mathbf {A}_{j-1},\mathbf {A}_{j} }\right)-\frac {k_{A}}{2\lambda _{A}}} \\ & \,+\frac {\sqrt {k_{A}^{2}-4\left ({\lambda _{A} /\lambda _{AZ} }\right)\left ({n_{A}+\alpha -2 }\right)}}{2\lambda _{A}}. \tag{30}\end{align*}
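The correction of equation (30) can be sketched as follows (a hedged illustration: `k_a` and the rate parameters are taken as given inputs, with $k_{A}$ as defined for equation (28) in the text):

```python
import math

def corrected_last_hop(delta_a, hop_dists, k_a, lam_a, lam_az, n_a, alpha):
    """Corrected distance d(A_{n_A-1}, Z) from equation (30).

    delta_a   : measured multi-hop distance between RN A and the NOI
    hop_dists : known distances of the first n_A - 1 hops
    k_a, lam_a, lam_az, n_a, alpha : model parameters from equations (21)-(28)
    """
    disc = k_a ** 2 - 4.0 * (lam_a / lam_az) * (n_a + alpha - 2)
    if disc < 0:
        raise ValueError("negative discriminant: check the model parameters")
    # Subtract the known hop distances, then apply the ML correction term.
    return delta_a - sum(hop_dists) - k_a / (2.0 * lam_a) \
        + math.sqrt(disc) / (2.0 * lam_a)
```

The corrected last-hop distances then feed the position estimation step that follows.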
Taking Figure 4 as our reference, we can estimate the position of NOI $\mathbf{Z}$ from the corrected distances between the NOI and the last-hop nodes of each path: \begin{align*} {d\left ({\mathbf {A}_{n_{A}-1},\mathbf {Z} }\right)}^{2}=&\left ({x_{\mathbf {A}_{n_{A}-1}}-\tilde {x} }\right)^{2} \\[-1pt]&+\left ({y_{\mathbf {A}_{n_{A}-1}}-\tilde {y} }\right)^{2}=T_{A}^{2}, \tag{31}\\[-1pt] {d\left ({\mathbf {B}_{n_{B}-1},\mathbf {Z} }\right)}^{2}=&\left ({x_{\mathbf {B}_{n_{B}-1}}-\tilde {x} }\right)^{2} \\[-1pt]&+\left ({y_{\mathbf {B}_{n_{B}-1}}-\tilde {y} }\right)^{2}=T_{B}^{2}, \tag{32}\\[-1pt] {d\left ({\mathbf {C}_{n_{C}-1},\mathbf {Z} }\right)}^{2}=&\left ({x_{\mathbf {C}_{n_{C}-1}}-\tilde {x} }\right)^{2} \\[-1pt]&+\left ({y_{\mathbf {C}_{n_{C}-1}}-\tilde {y} }\right)^{2}=T_{C}^{2}.\tag{33}\end{align*}
Equations (31)–(33) represent a nonlinear problem. By subtracting equation (31) from equations (32) and (33), we obtain \begin{align*}&\hspace {-1.2pc}2\tilde {x}x_{\mathbf {B}_{n_{B}-1}}-2\tilde {x}x_{\mathbf {A}_{n_{A}-1}}+2\tilde {y}y_{\mathbf {B}_{n_{B}-1}}-2\tilde {y}y_{\mathbf {A}_{n_{A}-1}} \\[-1pt]=&x_{\mathbf {B}_{n_{B}-1}}^{2}+y_{\mathbf {B}_{n_{B}-1}}^{2}-x_{\mathbf {A}_{n_{A}-1}}^{2}-y_{\mathbf {A}_{n_{A}-1}}^{2}-T_{B}^{2}+T_{A}^{2}, \tag{34}\\[-1pt]&\hspace {-1.2pc}2\tilde {x}x_{\mathbf {C}_{n_{C}-1}}-2\tilde {x}x_{\mathbf {A}_{n_{A}-1}}+2\tilde {y}y_{\mathbf {C}_{n_{C}-1}}-2\tilde {y}y_{\mathbf {A}_{n_{A}-1}} \\[-1pt]=&x_{\mathbf {C}_{n_{C}-1}}^{2}+y_{\mathbf {C}_{n_{C}-1}}^{2}-x_{\mathbf {A}_{n_{A}-1}}^{2}-y_{\mathbf {A}_{n_{A}-1}}^{2}-T_{C}^{2}+T_{A}^{2}.\tag{35}\end{align*}
By expressing equations (34)–(35) in a matrix form [20] we obtain \begin{align*}&\hspace {-1.2pc}\left [{ {\begin{array}{cc} x_{\mathbf {B}_{n_{B}-1}}-x_{\mathbf {A}_{n_{A}-1}} & y_{\mathbf {B}_{n_{B}-1}}-y_{\mathbf {A}_{n_{A}-1}}\\ x_{\mathbf {C}_{n_{C}-1}}-x_{\mathbf {A}_{n_{A}-1}} & y_{\mathbf {C}_{n_{C}-1}}-y_{\mathbf {A}_{n_{A}-1}}\\ \end{array}} }\right]\left [{ {\begin{array}{c} \tilde {x}\\ \tilde {y}\\ \end{array}} }\right] \\=&\frac {1}{2}\left [{ {\begin{array}{c} x_{\mathbf {B}_{n_{B}-1}}^{2}+y_{\mathbf {B}_{n_{B}-1}}^{2}-x_{\mathbf {A}_{n_{A}-1}}^{2}-y_{\mathbf {A}_{n_{A}-1}}^{2}-T_{B}^{2}+T_{A}^{2}\\ x_{\mathbf {C}_{n_{C}-1}}^{2}+y_{\mathbf {C}_{n_{C}-1}}^{2}-x_{\mathbf {A}_{n_{A}-1}}^{2}-y_{\mathbf {A}_{n_{A}-1}}^{2}-T_{C}^{2}+T_{A}^{2}\\ \end{array}} }\right]. \\ {}\tag{36}\end{align*}
Then the linear problem can be formulated by \begin{equation*} \tilde {\mathbf {H}}\tilde {\mathbf {p}}=\tilde {\mathbf {b}},\tag{37}\end{equation*}
where $\tilde {\mathbf {H}}$ is the coefficient matrix of equation (36), $\tilde {\mathbf {p}}=[\tilde {x}\;\;\tilde {y}]^{T}$ and \begin{align*} \tilde {\mathbf {b}}\!=\!\frac {1}{2}\left [{ {\begin{array}{c} x_{\mathbf {B}_{n_{B}-1}}^{2}+y_{\mathbf {B}_{n_{B}-1}}^{2}-x_{\mathbf {A}_{n_{A}-1}}^{2}-y_{\mathbf {A}_{n_{A}-1}}^{2}-T_{B}^{2}+T_{A}^{2}\\ x_{\mathbf {C}_{n_{C}-1}}^{2}+y_{\mathbf {C}_{n_{C}-1}}^{2}-x_{\mathbf {A}_{n_{A}-1}}^{2}-y_{\mathbf {A}_{n_{A}-1}}^{2}-T_{C}^{2}+T_{A}^{2}\\ \end{array}} }\right]. \\ {}\tag{38}\end{align*}
Finally, the position of the NOI is obtained through the LS estimator \begin{equation*} \tilde {\mathbf {p}}={(\tilde {\mathbf {H}}^{T}\tilde {\mathbf {H}})}^{-1}\tilde {\mathbf {H}}^{T}\tilde {\mathbf {b}}.\tag{39}\end{equation*}
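As a numerical sketch of the linearization in equations (34)–(39), the following Python fragment builds the matrix and vector of equation (36) and applies the LS estimator of equation (39). The anchor coordinates and the noise-free ranges are made-up illustration values, not taken from the paper:

```python
import numpy as np

# Hypothetical last-hop node positions and a hypothetical true NOI position.
A = np.array([0.0, 0.0])
B = np.array([100.0, 0.0])
C = np.array([0.0, 100.0])
p_true = np.array([30.0, 40.0])

# Noise-free ranges T_A, T_B, T_C; in practice these come from the
# estimated distances of equation (30).
TA, TB, TC = (np.linalg.norm(n - p_true) for n in (A, B, C))

# Linearization (34)-(36): subtract the first circle from the other two.
H = np.array([B - A, C - A])
b = 0.5 * np.array([B @ B - A @ A - TB**2 + TA**2,
                    C @ C - A @ A - TC**2 + TA**2])

# LS estimator of (39): p = (H^T H)^{-1} H^T b.
p_hat = np.linalg.solve(H.T @ H, H.T @ b)
print(p_hat)  # with noise-free ranges this recovers (30, 40)
```

With noisy ranges the same code applies; the solution is then the LS fit rather than the exact intersection of the circles.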
In a multi-hop scenario, the number of hops between the NOI and each RN determines how many exponential ToA delay terms accumulate in the estimated distance.
C. Weighted Multilateration Algorithm (WMA)
There are diverse range-based localization techniques to calculate the position of the NOI, for example, the hyperbolic positioning algorithm (multilateration) [21], the weighted hyperbolic algorithm (weighted multilateration) [21], the circular algorithm [21], the weighted circular algorithm [21], MDS [43], etc. The hyperbolic and weighted hyperbolic positioning algorithms solve the localization problem through multilateration [21], [42], obtaining a linear equation that can be easily solved by an LS estimator. The circular and weighted circular positioning algorithms calculate the NOI's position iteratively through the gradient descent method [54] until they find the position that minimizes the MSE. The MDS algorithm calculates the NOI's position through the spectral decomposition of the matrix of distances between the RNs; however, this method implies a high computational cost because it is a centralized algorithm, so a single node must perform all of the network's computation [43]. We therefore select the weighted multilateration algorithm: this variant only adds a covariance matrix to the classic multilateration algorithm, so its complexity order remains the same. The covariance matrix weights each estimated NOI–RN distance according to how accurately it reflects the real distance, which yields a higher accuracy in the localization of the NOI.
By taking Figure 4 as a reference, the NOI's position is estimated through the weighted LS estimator \begin{equation*} \tilde {\mathbf {p}}=\left ({\mathbf {H}^{T}\mathbf {S}^{-1}\mathbf {H} }\right)^{-1}\mathbf {H}^{T}\mathbf {S}^{-1}\mathbf {b},\tag{40}\end{equation*}
where the covariance matrix $\mathbf {S}$ is given by \begin{equation*} \mathbf {S}=\left [{ {\begin{array}{cccc} \mathrm {Var}\left ({\delta _{1}^{2} }\right)+\mathrm {Var}\left ({\delta _{2}^{2} }\right) & \mathrm {Var}\left ({\delta _{1}^{2} }\right) & \cdots & \mathrm {Var}\left ({\delta _{1}^{2} }\right)\\ \mathrm {Var}\left ({\delta _{1}^{2} }\right) & \mathrm {Var}\left ({\delta _{1}^{2} }\right)+\mathrm {Var}\left ({\delta _{3}^{2} }\right) & \cdots & \mathrm {Var}\left ({\delta _{1}^{2} }\right)\\ \vdots & \vdots & \ddots & \vdots \\ \mathrm {Var}\left ({\delta _{1}^{2} }\right) & \mathrm {Var}\left ({\delta _{1}^{2} }\right) & \cdots & \mathrm {Var}\left ({\delta _{1}^{2} }\right)+\mathrm {Var}\left ({\delta _{N}^{2} }\right)\\ \end{array}} }\right].\tag{41}\end{equation*}
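The weighted estimator of equations (40)–(41) can be sketched in Python as follows; the variances and the linearized system below are hypothetical placeholder values, not values from the paper:

```python
import numpy as np

# Hypothetical variances Var(delta_i^2) for N = 4 estimated distances;
# delta_1 corresponds to the distance subtracted in the linearization.
var_d2 = np.array([4.0, 5.0, 3.0, 6.0])
N = len(var_d2)

# Covariance matrix S of equation (41): Var(delta_1^2) in every entry,
# plus Var(delta_{i+1}^2) added on the diagonal.
S = np.full((N - 1, N - 1), var_d2[0]) + np.diag(var_d2[1:])

# Weighted LS estimator of equation (40) on a hypothetical linearized system.
H = np.array([[80.0, 10.0],
              [5.0, 90.0],
              [60.0, 70.0]])
b = np.array([2500.0, 3600.0, 4900.0])
W = np.linalg.inv(S)
p_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ b)
```

Rows of the linearized system with less reliable squared distances receive smaller weights through $\mathbf {S}^{-1}$, which is what improves the accuracy over the unweighted estimator of equation (39).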
As seen in equation (41), the elements of the covariance matrix depend on the variance of the squared estimated distances, \begin{equation*} \mathrm {Var}\left ({\delta _{i}^{2} }\right)=\mathrm {E}\left ({\delta _{i}^{4} }\right)-\left [{ \mathrm {E}\left ({\delta _{i}^{2} }\right) }\right]^{2}.\tag{42}\end{equation*}
Expanding each term of equation (42), and using the statistical independence of $\varphi _{i}$ and $\varepsilon _{i}$, we have \begin{align*} \delta _{i}^{2}=&\left ({d_{i}+\varphi _{i}+\varepsilon _{i} }\right)^{2}=d_{i}^{2}+2d_{i}\varphi _{i}+2d_{i}\varepsilon _{i}+2\varphi _{i}\varepsilon _{i}+\varphi _{i}^{2}+\varepsilon _{i}^{2}, \\ \delta _{i}^{4}=&\left ({d_{i}+\varphi _{i}+\varepsilon _{i} }\right)^{4}=d_{i}^{4}+4d_{i}^{3}\varphi _{i}+6d_{i}^{2}\varphi _{i}^{2}+4d_{i}\varphi _{i}^{3} \\&+\varphi _{i}^{4}+4d_{i}^{3}\varepsilon _{i}+12d_{i}^{2}\varphi _{i}\varepsilon _{i}+12d_{i}\varphi _{i}^{2}\varepsilon _{i}+4\varphi _{i}^{3}\varepsilon _{i} \\&+6d_{i}^{2}\varepsilon _{i}^{2}+12d_{i}\varphi _{i}\varepsilon _{i}^{2}+6\varphi _{i}^{2}\varepsilon _{i}^{2}+4d_{i}\varepsilon _{i}^{3}+4\varphi _{i}\varepsilon _{i}^{3}+\varepsilon _{i}^{4}, \\ \mathrm {E}\left ({\delta _{i}^{4} }\right)=&d_{i}^{4}+4d_{i}^{3}\mathrm {E}\left ({\varphi _{i} }\right)+6d_{i}^{2}\mathrm {E}\left ({\varphi _{i}^{2} }\right)+4d_{i}\mathrm {E}\left ({\varphi _{i}^{3} }\right) \\&+\mathrm {E}\left ({\varphi _{i}^{4} }\right)+4d_{i}^{3}\mathrm {E}\left ({\varepsilon _{i} }\right)+12d_{i}^{2}\mathrm {E}\left ({\varphi _{i} }\right)\mathrm {E}\left ({\varepsilon _{i} }\right) \\&+12d_{i}\mathrm {E}\left ({\varphi _{i}^{2} }\right)\mathrm {E}\left ({\varepsilon _{i} }\right)+4\mathrm {E}\left ({\varphi _{i}^{3} }\right)\mathrm {E}\left ({\varepsilon _{i} }\right)+6\mathrm {E}\left ({\varphi _{i}^{2} }\right)\mathrm {E}\left ({\varepsilon _{i}^{2} }\right) \\&+12d_{i}\mathrm {E}\left ({\varphi _{i} }\right)\mathrm {E}\left ({\varepsilon _{i}^{2} }\right)+6d_{i}^{2}\mathrm {E}\left ({\varepsilon _{i}^{2} }\right)+4d_{i}\mathrm {E}\left ({\varepsilon _{i}^{3} }\right) \\&+4\mathrm {E}\left ({\varphi _{i} }\right)\mathrm {E}\left ({\varepsilon _{i}^{3} }\right)+\mathrm {E}\left ({\varepsilon _{i}^{4} }\right), \\ \left [{ \mathrm {E}\left ({\delta _{i}^{2} }\right) }\right]^{2}=&\left [{ d_{i}^{2}+\mathrm {E}\left ({\varphi _{i}^{2} }\right)+\mathrm {E}\left ({\varepsilon _{i}^{2} }\right)+2d_{i}\mathrm {E}\left ({\varepsilon _{i} }\right) }\right. \\&\left.{ +2d_{i}\mathrm {E}\left ({\varphi _{i} }\right)+2\mathrm {E}\left ({\varepsilon _{i} }\right)\mathrm {E}\left ({\varphi _{i} }\right) }\right]^{2} \\=&\left [{ d_{i}^{2}+\mathrm {E}\left ({\varphi _{i}^{2} }\right)+\mathrm {E}\left ({\varepsilon _{i}^{2} }\right) }\right]^{2} \\&+4\left [{ d_{i}^{2}+\mathrm {E}\left ({\varphi _{i}^{2} }\right)+\mathrm {E}\left ({\varepsilon _{i}^{2} }\right) }\right]\cdot \\&\left [{ d_{i}\mathrm {E}\left ({\varepsilon _{i} }\right)+d_{i}\mathrm {E}\left ({\varphi _{i} }\right)+\mathrm {E}\left ({\varphi _{i} }\right)\mathrm {E}\left ({\varepsilon _{i} }\right) }\right] \\&+4{\left [{ d_{i}\mathrm {E}\left ({\varepsilon _{i} }\right)+d_{i}\mathrm {E}\left ({\varphi _{i} }\right)+\mathrm {E}\left ({\varphi _{i} }\right)\mathrm {E}\left ({\varepsilon _{i} }\right) }\right]}^{2}, \\ \left [{ \mathrm {E}\left ({\delta _{i}^{2} }\right) }\right]^{2}=&d_{i}^{4}+2d_{i}^{2}\mathrm {E}\left ({\varphi _{i}^{2} }\right)+2d_{i}^{2}\mathrm {E}\left ({\varepsilon _{i}^{2} }\right) \\&+2\mathrm {E}\left ({\varphi _{i}^{2} }\right)\mathrm {E}\left ({\varepsilon _{i}^{2} }\right)+\mathrm {E}^{2}\left ({\varphi _{i}^{2} }\right)+\mathrm {E}^{2}\left ({\varepsilon _{i}^{2} }\right)+4d_{i}^{3}\mathrm {E}\left ({\varepsilon _{i} }\right) \\&+4d_{i}^{3}\mathrm {E}\left ({\varphi _{i} }\right)+4d_{i}^{2}\mathrm {E}\left ({\varphi _{i} }\right)\mathrm {E}\left ({\varepsilon _{i} }\right)+4d_{i}\mathrm {E}\left ({\varphi _{i} }\right)\mathrm {E}\left ({\varphi _{i}^{2} }\right) \\&+4d_{i}\mathrm {E}\left ({\varphi _{i}^{2} }\right)\mathrm {E}\left ({\varepsilon _{i} }\right)+4\mathrm {E}\left ({\varphi _{i} }\right)\mathrm {E}\left ({\varphi _{i}^{2} }\right)\mathrm {E}\left ({\varepsilon _{i} }\right) \\&+4d_{i}\mathrm {E}\left ({\varphi _{i} }\right)\mathrm {E}\left ({\varepsilon _{i}^{2} }\right)+4d_{i}\mathrm {E}\left ({\varepsilon _{i} }\right)\mathrm {E}\left ({\varepsilon _{i}^{2} }\right) \\&+4\mathrm {E}\left ({\varphi _{i} }\right)\mathrm {E}\left ({\varepsilon _{i} }\right)\mathrm {E}\left ({\varepsilon _{i}^{2} }\right)+4d_{i}^{2}\mathrm {E}^{2}\left ({\varepsilon _{i} }\right) \\&+8d_{i}^{2}\mathrm {E}\left ({\varphi _{i} }\right)\mathrm {E}\left ({\varepsilon _{i} }\right)+4d_{i}^{2}\mathrm {E}^{2}\left ({\varphi _{i} }\right)+8d_{i}\mathrm {E}^{2}\left ({\varphi _{i} }\right)\mathrm {E}\left ({\varepsilon _{i} }\right) \\&+8d_{i}\mathrm {E}\left ({\varphi _{i} }\right)\mathrm {E}^{2}\left ({\varepsilon _{i} }\right)+4\mathrm {E}^{2}\left ({\varphi _{i} }\right)\mathrm {E}^{2}\left ({\varepsilon _{i} }\right).\end{align*}
Finally, the variance term $\mathrm {Var}\left ({\delta _{i}^{2} }\right)$ reduces to \begin{align*}&\hspace {-1.2pc}\mathrm {Var}\left ({\delta _{i}^{2} }\right) \\=&4d_{i}^{2}\mathrm {Var}\left ({\varphi _{i} }\right)+4d_{i}\mathrm {E}\left ({\varphi _{i}^{3} }\right)+\mathrm {Var}\left ({\varphi _{i}^{2} }\right) \\&+8d_{i}\mathrm {E}\left ({\varepsilon _{i} }\right)\mathrm {Var}\left ({\varphi _{i} }\right)+4\mathrm {E}\left ({\varphi _{i}^{3} }\right)\mathrm {E}\left ({\varepsilon _{i} }\right)+4\mathrm {Var}\left ({\varphi _{i}\varepsilon _{i} }\right) \\&+8d_{i}\mathrm {E}\left ({\varphi _{i} }\right)\mathrm {Var}\left ({\varepsilon _{i} }\right)+4d_{i}^{2}\mathrm {Var}\left ({\varepsilon _{i} }\right)+4d_{i}\mathrm {E}\left ({\varepsilon _{i}^{3} }\right) \\&+\mathrm {Var}\left ({\varepsilon _{i}^{2} }\right) \\&+4\mathrm {E}\left ({\varphi _{i} }\right)\mathrm {E}\left ({\varepsilon _{i}^{3} }\right)-4d_{i}\mathrm {E}\left ({\varphi _{i} }\right)\mathrm {E}\left ({\varphi _{i}^{2} }\right) \\&-4\mathrm {E}\left ({\varphi _{i} }\right)\mathrm {E}\left ({\varphi _{i}^{2} }\right)\mathrm {E}\left ({\varepsilon _{i} }\right) \\&-4d_{i}\mathrm {E}\left ({\varepsilon _{i} }\right)\mathrm {E}\left ({\varepsilon _{i}^{2} }\right)-4\mathrm {E}\left ({\varphi _{i} }\right)\mathrm {E}\left ({\varepsilon _{i} }\right)\mathrm {E}\left ({\varepsilon _{i}^{2} }\right).\tag{43}\end{align*}
Equation (43) shows that the variance of the squared estimated distance depends on the true distance $d_{i}$ and on the moments of the mobility noise $\varphi _{i}$ and the ToA noise $\varepsilon _{i}$.
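The moment identity of equation (42) can be checked numerically. In the sketch below, all distance and distribution parameters are arbitrary illustration values, with $\varepsilon _{i}$ exponential and $\varphi _{i}$ a scaled beta variable as in the noise model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary illustration values: true distance d, exponential ToA noise
# eps ~ Exp(lam), mobility noise phi ~ Beta(alpha, beta) scaled by 1/lam_az.
d, lam, lam_az, alpha, beta = 20.0, 0.5, 0.1, 2.0, 3.0
eps = rng.exponential(1.0 / lam, 200_000)
phi = rng.beta(alpha, beta, 200_000) / lam_az
delta2 = (d + phi + eps) ** 2

# Equation (42): Var(delta^2) = E(delta^4) - [E(delta^2)]^2.
var_from_moments = np.mean(delta2**2) - np.mean(delta2) ** 2
print(var_from_moments, np.var(delta2))  # both estimates coincide
```

In practice such sample-based variances could populate the diagonal and off-diagonal entries of the covariance matrix of equation (41) when the closed form of equation (43) is not evaluated directly.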
Single-Hop and Multi-Hop Correcting Factor
This section describes the proposed algorithm, the contribution of which is to increase the accuracy of the NOI localization by applying a correcting factor to the estimated distance between the NOI and the RNs. Because the ToA noise is exponentially distributed, the estimated NOI–RN distance overestimates the true distance; the correcting factor compensates for this bias by minimizing the MSE between the true and the estimated NOI–RN distance. Hence, by using the correcting factor in the localization algorithms, we obtain estimated distances that are closer to the true distances between the NOI and the respective RNs. This correcting factor is the reason the MA CF and WMA CF algorithms present a smaller error than the other algorithms when estimating the NOI's position.
By taking Figure 4 as a reference, the estimated distance between the NOI and each RN can be written as $\delta =d\left ({1+\gamma }\right)$, where $d$ is the true distance and $\gamma $ is the relative noise caused by the ToA estimation and the NOI's mobility.
A. Single-Hop Correcting Factor
Considering that the NOI communicates with its RNs through a single hop, the expected value of the total noise $\varepsilon _{A}+\varphi _{A}$ is \begin{equation*} \mathrm {E}\left ({\varepsilon _{A}+\varphi _{A} }\right)=\frac {1}{\lambda _{A}}+\frac {\alpha }{\lambda _{AZ}(\alpha +\beta)}.\tag{44}\end{equation*}
Since the noise terms overestimate the distance, the true distance can be recovered from the estimated distance $\delta _{A}$ through a correcting parameter $a$ as \begin{equation*} d_{A}=\frac {\delta _{A}}{1+a}.\tag{45}\end{equation*}
The MSE between the true distance and the corrected estimate is \begin{align*} \mathrm {MSE}=&\mathrm {E}\left \{{\left [{ d_{A}-\frac {\delta _{A}}{1+a} }\right]^{2} }\right \} \\=&\delta _{A}^{2}\mathrm {E}\left \{{\left [{ \frac {1}{1+\gamma _{A}}-\frac {1}{1+a} }\right]^{2} }\right \},\tag{46}\end{align*} where $\gamma _{A}=\left ({\varepsilon _{A}+\varphi _{A} }\right)/d_{A}$.
Minimizing the MSE given in equation (46) with respect to the parameter $a$, we have \begin{align*}& \frac {d}{da}\delta _{A}^{2}\mathrm {E}\left \{{\left [{ \frac {1}{1+\gamma _{A}}-\frac {1}{1+a} }\right]^{2} }\right \} \\& \qquad \quad =2\delta _{A}^{2}\mathrm {E}\left \{{\left [{ \frac {1}{1+\gamma _{A}}-\frac {1}{1+a} }\right]\frac {1}{\left ({1+a }\right)^{2}} }\right \}=0.\tag{47}\end{align*}
From equation (47), we obtain the parameter $a$ as \begin{equation*} a={\mathrm {E}\left \{{\left ({1+\gamma _{A} }\right)^{-1} }\right \}}^{-1}-1.\tag{48}\end{equation*}
In all scenarios, the random variables $\varepsilon _{A}$ and $\varphi _{A}$ are statistically independent; thus, the expectation in equation (48) is computed as \begin{equation*} \mathrm {E}\left \{{\left ({1+\gamma _{A} }\right)^{-1} }\right \}=\int _{0}^\infty {\frac {1}{1+x}f_{\varepsilon _{A}+\varphi _{A}}\left ({x }\right)} dx,\tag{49}\end{equation*} where the pdfs involved are
\begin{align*} f_{\varphi _{A}}\left ({z }\right)=&\frac {\lambda _{AZ}\left ({z\lambda _{AZ} }\right)^{\alpha -1}\left ({1-z\lambda _{AZ} }\right)^{\beta -1}}{\mathrm {B}\left ({\alpha,\beta }\right)}, \\&0< z< 1 /\lambda _{AZ}, \tag{50}\\ f_{\varepsilon _{A}}\left ({z }\right)=&\lambda _{A}e^{-\lambda _{A}z},\quad z,~\lambda _{A}\ge 0, \tag{51}\\ f_{\varepsilon _{A}+\varphi _{A}}\left ({z }\right)=&\frac {\lambda _{A}\lambda _{AZ}e^{-\lambda _{A}z}}{\mathrm {B}\left ({\alpha,\beta }\right)}\cdot \\&\int \limits _{0}^{1 /\lambda _{AZ}} {\left ({\tau \lambda _{AZ} }\right)^{\alpha -1}\left ({1-\tau \lambda _{AZ} }\right)^{\beta -1}e^{\lambda _{A}\tau }d\tau.} \\ {}\tag{52}\end{align*}
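Instead of integrating the pdf of equation (52) numerically, the expectation in equation (49) can be approximated by Monte Carlo sampling. All parameters below are made-up illustration values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up single-hop parameters: true distance d_A, ToA noise
# eps_A ~ Exp(lambda_A), mobility noise phi_A ~ Beta(alpha, beta)/lambda_AZ.
d_A, lam_A, lam_AZ, alpha, beta = 50.0, 0.2, 0.05, 2.0, 2.0
eps = rng.exponential(1.0 / lam_A, 100_000)
phi = rng.beta(alpha, beta, 100_000) / lam_AZ
gamma = (eps + phi) / d_A          # relative noise on the estimated distance

# Correcting factor of equation (48): a = E{(1+gamma)^{-1}}^{-1} - 1.
a = 1.0 / np.mean(1.0 / (1.0 + gamma)) - 1.0

# Correction of equation (45): the raw estimate overestimates d_A, and
# dividing by (1 + a) reduces the mean squared distance error.
delta = d_A + eps + phi
mse_raw = np.mean((delta - d_A) ** 2)
mse_corrected = np.mean((delta / (1.0 + a) - d_A) ** 2)
```

Because both noise terms are nonnegative, the raw estimate is biased upward and the correcting factor $a$ comes out positive, so the corrected distance has a noticeably smaller MSE than the raw one.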
B. Multi-Hop Correcting Factor
Assuming a multi-hop scenario, in which the ToA noise $\varepsilon $ accumulates over $r$ hops, the correcting factor is obtained in the same manner: \begin{equation*} a={\mathrm {E}\left \{{\left ({1+\gamma }\right)^{-1} }\right \}}^{-1}-1.\tag{53}\end{equation*}
In all scenarios, the random variables $\varepsilon $ and $\varphi $ are statistically independent; thus, \begin{equation*} \mathrm {E}\left \{{\left ({1+\gamma }\right)^{-1} }\right \}=\int _{0}^\infty {\frac {1}{1+x}f_{\varepsilon +\varphi }\left ({x }\right)} dx,\tag{54}\end{equation*} where the pdfs involved are
\begin{align*} f_{\varphi }\left ({v }\right)=&\frac {\lambda _{AZ}\left ({v\lambda _{AZ} }\right)^{\alpha -1}\left ({1-v\lambda _{AZ} }\right)^{\beta -1}}{\mathrm {B}\left ({\alpha,\beta }\right)}, \\&0< v< 1 /\lambda _{AZ}, \tag{55}\\ f_{\varepsilon }\left ({v }\right)=&\frac {\lambda ^{r}e^{-\lambda v}}{\left ({r-1 }\right)!}v^{r-1},\quad v,~\lambda \ge 0, \tag{56}\\ f_{\varepsilon +\varphi }\left ({v }\right)=&\frac {\lambda ^{r}\lambda _{AZ}e^{-\lambda v}}{\mathrm {B}\left ({\alpha,\beta }\right)\left ({r-1 }\right)!}\cdot \\&\int \limits _{0}^{1/\lambda _{AZ}} {\left ({\tau \lambda _{AZ} }\right)^{\alpha -1}\left ({1\!-\!\tau \lambda _{AZ} }\right)^{\beta -1}\left ({v\!-\!\tau }\right)^{r-1}e^{\lambda \tau }d\tau.} \\ {}\tag{57}\end{align*}
Equations (55) and (56) are the pdfs of the beta and n-Erlang random variables, respectively. Equation (57) gives the pdf that results from the convolution of these two random variables.
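The multi-hop case differs from the single-hop one only in that the accumulated ToA noise is n-Erlang, i.e., a sum of $r$ i.i.d. exponential hop delays as in equation (56). The sketch below mirrors the single-hop sketch with made-up parameters:

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up multi-hop parameters: r hops, each adding Exp(lam) ToA noise,
# so the accumulated noise eps is Erlang(r, lam) as in equation (56).
r, lam, lam_AZ, alpha, beta = 4, 0.2, 0.05, 2.0, 2.0
d = 80.0                            # assumed true NOI-RN distance
eps = rng.exponential(1.0 / lam, (100_000, r)).sum(axis=1)
phi = rng.beta(alpha, beta, 100_000) / lam_AZ
gamma = (eps + phi) / d

# Multi-hop correcting factor of equation (53), estimated by Monte Carlo
# instead of integrating the convolution pdf of equation (57).
a = 1.0 / np.mean(1.0 / (1.0 + gamma)) - 1.0
delta = d + eps + phi
mse_raw = np.mean((delta - d) ** 2)
mse_corrected = np.mean((delta / (1.0 + a) - d) ** 2)
```

Since the per-hop delays are nonnegative, the overestimation grows with the number of hops, and the correcting factor compensates for the accumulated bias accordingly.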
Results
This section presents the performance results obtained with the MA, MA CF, WMA, WMA CF and MLA algorithms. The simulation parameters, including the NOI's mobility parameters, are listed in Table 1. The localization accuracy is evaluated through the RMSE, \begin{equation*} \mathrm {RMSE}=\sqrt {\frac {1}{K}\sum \limits _{k=1}^{K} {\left [{ \left ({x-\tilde {x}_{k} }\right)^{2}+\left ({y-\tilde {y}_{k} }\right)^{2} }\right]}},\tag{58}\end{equation*} where $(x,y)$ is the true position of the NOI and $(\tilde {x}_{k},\tilde {y}_{k})$ is its $k$-th estimate over $K$ iterations.
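Equation (58) translates directly into Python; the positions below are made-up example values, and the normalization constant used to obtain the normalized RMSE reported in the figures is not reproduced here:

```python
import numpy as np

def rmse(p_true, p_estimates):
    """RMSE of equation (58): p_true = (x, y); p_estimates is a K x 2
    array of position estimates (x_k, y_k) over K iterations."""
    err = np.asarray(p_estimates, dtype=float) - np.asarray(p_true, dtype=float)
    return float(np.sqrt(np.mean(np.sum(err**2, axis=1))))

# Example with made-up estimates scattered around a true position.
p = (30.0, 40.0)
est = [(31.0, 40.0), (30.0, 39.0), (29.0, 41.0)]
print(rmse(p, est))  # sqrt(4/3), about 1.1547
```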
The localization algorithms' performance is obtained from two evaluation scenarios, a single-hop scenario and a multi-hop scenario, where we consider the number and distribution of the RNs. In every simulation scenario we consider a network where we vary the number of RNs from 3 to 7 nodes and the NOI is randomly chosen within the sensing area. The network coverage is defined through the communication radius of the nodes.
A. Mobility Analysis
Figures 7, 8 and 9 show the normalized RMSE of the localization algorithms MA, MA CF, WMA, WMA CF and MLA for each of the 100 positions of the NOI trajectory in a sensing area of 100m x 100m. Each value of the normalized RMSE was obtained through 5000 iterations, where we observe there is very little variation of the normalized RMSE for each of the 100 positions of the NOI trajectory. The 100 positions of the NOI trajectory represent the NOI's positions at different instants of time.
Normalized RMSE of the trajectory of the NOI of the algorithm WMA CF varying the number of RNs from 3 to 5 nodes.
Normalized RMSE of the trajectory of the NOI of the algorithm WMA CF for different distributions of the RNs.
Table 1 shows the simulation parameters used in the numerical experiments conducted for the localization algorithms when calculating the normalized RMSE presented in Figures 7, 8 and 9. As observed in Figure 13, the performance of the algorithms is described in terms of the normalized RMSE, a dimensionless quantity.
Distribution of the RNs with well-defined geometry and extra nodes randomly distributed.
Normalized RMSE vs proportion of noise considering (a) 3 RNs and (b) 4 RNs for the first case in a single-hop network.
Figure 7 shows the normalized RMSE considering a network with 5 random RNs, where 3 RNs are distributed with a well-defined triangular geometry. The results obtained show that the MA and WMA algorithms using a correcting factor improve the normalized RMSE compared to those that do not use one. The WMA CF algorithm shows the best performance of all the algorithms: it presents a normalized RMSE of 0.08, while the WMA algorithm reaches 0.12, because the proposed WMA CF algorithm applies the correcting factor to the estimated distance between the NOI and the respective RNs, thus decreasing the error.
Figure 8 shows that for a network of 3 defined RNs with random positions that always remain fixed, we get a very inaccurate localization of the NOI. However, by increasing the number of RNs, the normalized RMSE of the WMA CF algorithm decreases.
Figure 9 presents the normalized RMSE of the algorithm WMA CF for a network with 5 RNs. The results show that with a rectangular geometry (4 RNs distributed in a rectangle-like shape) we obtain a more accurate localization than with a triangular geometry (3 RNs distributed in a well-defined triangular shape). However, a network with 5 RNs arranged in random positions does not guarantee a good localization of the NOI, since there are situations where the 5 RNs are very close to each other, which in turn does not provide sufficient coverage of the NOI’s area.
B. Single-Hop Scenario
Table 2 presents the test cases in order to evaluate the localization algorithms’ normalized RMSE performance.
Figures 10, 11 and 12 show examples of the RNs’ distribution for each one of the cases described in Table 2, respectively. The RNs are represented by red triangles, the green square represents the NOI and the red circles are the nodes in the network with an unknown position.
These test cases were designed to obtain different behaviors of the normalized RMSE for different geometric distributions of the RNs such as the triangular, the square and the heptagonal, while also augmenting the number of RNs in the network, starting from a triangular geometry, and finally varying the number of RNs with totally random geometric distributions. The advantage of performing the test cases is to determine the ideal geometry and the necessary number of RNs in the network to obtain the best normalized RMSE performance of the localization algorithms.
Table 3 shows the simulation parameters that determine the localization algorithm’s normalized RMSE performance for the test cases that appear on Table 2 in the single-hop and multi-hop scenarios.
1) Case 1
Figure 13 presents the normalized RMSE of the aforementioned localization algorithms with variations in the proportion of noise, a parameter common to all the results obtained. The proportion of noise is the factor that scales the noise parameters in the simulations.
Normalized RMSE vs proportion of noise considering (a) 5 RNs and (b) 7 RNs for the first case in a single-hop network.
Figure 14 shows that the WMA and WMA CF algorithms improve their normalized RMSE performance as the number of RNs increases to 5 and 7 nodes. However, a rise in the number of RNs does not improve the normalized RMSE performance of the MA, MA CF and MLA algorithms.
The results we obtained in Figure 15 show that the MA and MLA algorithms retain their normalized RMSE performance as the number of RNs rises from 3 to 7. Therefore, this result indicates that 3 RNs are enough to obtain an estimation of the NOI position when the MA or MLA localization algorithms are used. According to Figure 16, the WMA and WMA CF algorithms improve their normalized RMSE performance as the number of RNs rises. This improvement occurs because a greater number of RNs reduces the variance of the estimated distances between the NOI and the RNs, which in turn decreases the localization error.
Normalized RMSE vs proportion of noise starting from 3 and up to 7 RNs for (a) MA and (b) MLA for the first case in a single-hop network.
Normalized RMSE vs proportion of noise starting from 3 and up to 7 RNs for (a) WMA and (b) WMA CF for the first case in a single-hop network.
2) Case 2
This case considers (a) 3 RNs arranged in a well-defined triangular geometry and (b) 5 RNs distributed in a pentagonal geometry. According to the results shown in Figure 17(b), there is a slight reduction of the normalized RMSE of the localization techniques presented with respect to the results shown in Figure 14(a) where we consider 5 RNs, since we obtain a greater coverage area of the NOI with a solid pentagonal geometry than with a solid triangular geometry.
Normalized RMSE vs proportion of noise considering (a) 3 RNs and (b) 5 RNs for the second case in a single-hop network.
Figure 18 shows that the normalized RMSE performance improves for the algorithms WMA and WMA CF as the number of RNs rises. This improvement is seen starting at 4 RNs. The WMA CF algorithm presents less normalized RMSE than the WMA algorithm due to the correcting factor, which decreases the separation distance error between the NOI and the RNs.
Normalized RMSE vs proportion of noise starting from 3 and up to 7 RNs for (a) WMA and (b) WMA CF for the second case in a single-hop network.
3) Case 3
Figure 19(a) reports the normalized RMSE of the localization techniques considering 5 randomly arranged RNs. A comparison of these results to those shown in Figure 17(b) for 5 RNs arranged with a solid pentagonal geometry shows that a higher normalized RMSE is obtained with randomly arranged RNs. The analyzed localization techniques present a decrease of the normalized RMSE as the number of RNs rises to 7 nodes, according to Figure 19(b).
Normalized RMSE vs proportion of noise considering (a) 5 RNs and (b) 7 RNs for the third case in a single-hop network.
The normalized RMSE of the localization techniques presents very high error values for 4 randomly arranged RNs; starting at 5 RNs, we obtain a more robust normalized RMSE, as shown in Figure 20. When considering a network with 3 randomly arranged RNs, there is no guarantee of a good NOI localization, because the area covered by 3 RNs may in some cases be very small, which can make the localization of the NOI highly inaccurate.
Normalized RMSE vs proportion of noise starting from 3 and up to 7 RNs for (a) WMA and (b) WMA CF for the third case in a single-hop network.
C. Multi-Hop Scenario
1) Case 1
The MLA algorithm presents a normalized RMSE performance with many variations considering 3 RNs, because this algorithm considers the 3 nodes that are closest to the NOI to be the routes that best approximate the real distance between the NOI and the RNs (Figure 21(a)); therefore, by selecting these nodes we obtain an irregular geometry to estimate the NOI’s position. On the other hand, when there are 5 RNs, the algorithm presents a better normalized RMSE performance (Figure 21(b)).
Normalized RMSE vs proportion of noise considering (a) 3 RNs and (b) 5 RNs for the first case in a multi-hop network.
The WMA CF algorithm improves its normalized RMSE performance as the number of RNs rises, as shown in Figure 22. Figure 22(b) shows a similar behavior for the MLA algorithm, although for 3 RNs the MLA algorithm shows an uneven increase of the normalized RMSE as the proportion of noise varies.
Normalized RMSE vs proportion of noise starting from 3 and up to 7 RNs for (a) WMA CF and (b) MLA for the first case in a multi-hop network.
2) Case 2
Figure 23 shows the normalized RMSE of the aforementioned localization algorithms as the proportion of noise varies, using 3 fixed RNs arranged in a well-defined triangular geometry and 5 fixed RNs arranged in a pentagonal geometry. By augmenting to 5 RNs with a well-defined pentagonal geometry, we can see an improvement in the normalized RMSE performance with respect to the results shown in Figure 23(a).
Normalized RMSE vs proportion of noise considering (a) 3 RNs and (b) 5 RNs for the second case in a multi-hop network.
The WMA CF algorithm presents an important normalized RMSE performance improvement as the number of RNs rises and with solid geometries of regular polygons (Figure 24(a)). The results shown in Figure 24(b) show that the MLA algorithm improves its normalized RMSE performance as the number of RNs rises, presenting a better performance starting at 4 RNs.
Normalized RMSE vs proportion of noise starting at 3 and up to 7 RNs for (a) WMA CF and (b) MLA for the second case in a multi-hop network.
Figure 24 shows that the WMA CF algorithm (Figure 24(a)) has a smaller normalized RMSE value than the MLA algorithm does (Figure 24(b)) when the proportion of noise and the number of RNs are varied. For example, for a network with 3 RNs, the WMA CF algorithm has a maximum normalized RMSE value of 0.6, while the MLA algorithm reaches normalized RMSE values above one. This is because the MLA algorithm does not consider a fixed distribution of RNs, but rather the nodes closest to the NOI to estimate its position, which implies that the coverage area of those nodes is very small and the normalized RMSE values obtained could be inconsistent.
3) Case 3
Figure 25(a) shows the normalized RMSE of the localization techniques considering 5 randomly arranged RNs. A comparison of these results to those shown in Figure 23(b) for 5 RNs arranged with a solid pentagonal geometry shows that we obtain a lower normalized RMSE with the solid geometry than with the randomly arranged RNs. Figure 25(b) shows that there is a decrease of the normalized RMSE with the localization techniques presented as the number of RNs rises to 7 nodes.
Normalized RMSE vs proportion of noise considering (a) 5 RNs and (b) 7 RNs for the third case in a multi-hop network.
Figure 25 clearly shows that for a network with 5 RNs and another with 7 RNs, the WMA CF algorithm performs better in terms of normalized RMSE than the other algorithms analyzed, reaching maximum normalized RMSE values of approximately 0.5 for 5 RNs and 0.3 for 7 RNs. This is because the proposed WMA CF algorithm uses the correcting factor to calculate the separation distance between the NOI and the respective RNs, and it also uses the WMA algorithm, which has a lower normalized RMSE than the MA algorithm.
Figure 26 shows that the normalized RMSE of the localization techniques presents very high error values for 3 randomly arranged RNs; starting at 4 RNs, we obtain a more robust normalized RMSE. With a network of 3 randomly arranged RNs, there is no guarantee of a good NOI localization, because the area covered by 3 RNs can be very small in some cases, making the NOI localization highly inaccurate. The MLA algorithm does not present a good normalized RMSE performance when the RNs are distributed randomly. We can observe that the correcting factor decreases the localization error of the MA and WMA localization algorithms; in addition, Figure 26 shows that the WMA CF algorithm becomes more robust once the correcting factor is added, which is observable starting at 4 RNs in the network (Figure 26(a)).
Normalized RMSE vs proportion of noise starting from 3 and up to 7 RNs for (a) WMA CF and (b) MLA for the third case in a multi-hop network.
In the three test cases previously described, the WMA CF algorithm presents a better normalized RMSE performance than the other analyzed algorithms. In case 2, where the RNs are distributed with well-defined geometries, the analyzed localization algorithms present less normalized RMSE than in cases 1 and 3. In case 3, where the RNs are randomly arranged in the sensing area, the localization algorithms present the worst normalized RMSE performance, since this case implies that the RNs are not necessarily distributed in such a manner that the NOI is within the coverage area formed by the geometry of the RNs. Case 1 shows that as the number of RNs rises, assuming a well-defined triangular geometry of the RN ensemble, the MA algorithm maintains the same normalized RMSE whilst the normalized RMSE of the WMA algorithm decreases as the number of RNs increases regardless of the geometry of the RNs; additionally, the correcting factor introduced in the MA and WMA algorithms decreases their normalized RMSE. Finally, for 5 or more RNs in the network, the WMA CF localization algorithm presents greater robustness than the other analyzed algorithms.
Conclusion
This paper determines the normalized RMSE performance of range-based localization algorithms as the proportion of noise and the number of RNs vary. By analyzing the results obtained, we learn that the MA and MLA algorithms in a single-hop scenario present a similar performance considering at least 3 RNs with a well-defined triangular geometry. However, in a multi-hop scenario, the MLA algorithm does not present a robust normalized RMSE performance because it considers the nodes closest to the NOI to obtain its position, and the geometry shaped by those nodes is totally random, with very low coverage of the sensing area. This study shows that the algorithm we propose, WMA CF, yields a better performance than the other analyzed algorithms in both single-hop and multi-hop scenarios. The proposed algorithm incorporates the analysis of the ToA noise environment and the NOI's mobility into the computation of the WMA algorithm's covariance matrix, and it also adds the correcting factor, which decreases the NOI localization error. The correcting factor corrects the estimated distance between the NOI and the respective RNs, which improves the accuracy of the NOI localization. According to the results we obtained, our proposed WMA CF algorithm presents a greater robustness than the other analyzed algorithms considering at least 5 RNs, either with a random distribution or with a well-defined geometry. Future work includes the performance evaluation of the localization algorithms in 3D scenarios, as well as in different mobility scenarios such as vehicular networks. These localization algorithms can also be used in diverse IoT applications, since such applications need to collect and fuse data from low-cost sensors deployed in networks, and most of them rely on collected data that depend strongly on ubiquitous location information.