Introduction
Recently, an increasing number of location-based services (LBS) have been deployed to enhance daily life. People spend approximately 80%–90% of their time indoors, which makes indoor environments a significant setting for LBS applications [1], [2]. Highly precise localization systems are therefore expected to benefit indoor LBS applications. Because wireless local area networks and sensor networks have been installed in many buildings, numerous attempts have been made to use these networks for localization [3], [4]. The primary goal is to describe the distance between the receiver and a reference grid, i.e., a number of known reference points, as a function of the received signal strength (RSS), and then estimate the distance from the receiver to the reference grid by comparing the measured field strength with the reference field strength. The major challenge of these radio-based RSS localization methods is the variation in RSS caused by the fluctuating and unpredictable nature of radio signals, which can lead to significant location estimation errors in indoor environments. Furthermore, one downside of radio-based RSS localization is its high implementation cost, which results from the need for a wide deployment of network infrastructure.
Lighting plays an essential role in daily life and is considered a fundamental function of indoor environments. Solid-state lighting has been attracting considerable attention because of its versatility and advantages over conventional lighting sources in applications such as monitor backlighting, traffic indicators, and general illumination. Owing to their low energy consumption, long lifetime, small size, and easy production, light emitting diodes (LEDs) are replacing incandescent lamps as the dominant next-generation lighting source. Indoor localization using visible light communication (VLC) based on LED lighting has therefore been considered. In contrast to the significant scattering and diffusion of radio signals in indoor environments, optical signals from LED lighting are relatively stable because lighting sources are often fixed to ceilings, which facilitates line-of-sight (LOS) transmission. Localization systems must set a reference grid and then obtain the receiver position according to the distances between the receiver and various reference nodes. Several ranging methods have been developed to obtain the distance information required by the positioning algorithm, including time of arrival (TOA), time difference of arrival (TDOA) [5], [6], and RSS [7], [8]. For propagation-time-based ranging techniques, including TOA and TDOA, inaccurate time synchronization often leads to ranging errors. In the context of LED lighting, a photodetector (PD) can easily obtain the RSS without any additional hardware and calculate the position of the target by using distance measurements from at least three reference nodes. However, the average RSS at every position in the region of interest (ROI) must be pre-established, which raises implementation costs and requires numerous inconvenient RSS measurements across the ROI in advance.
Currently, smart devices are typically equipped with camera modules that have been used to implement
image-sensor-based VLC (IS-VLC) signal reception, specifically leading to the development of indoor localization
technology based on the image sensors of camera modules [9]–[11]. In [9], an indoor positioning
system based on two LED luminaries was implemented under the assumption that the LED coordinates were provided. However,
its localization accuracy was degraded by misalignment between the ceiling and floor coordinates.
Image-sensor-based trilateration was developed and implemented in [10],
The focal length of the camera module was assumed to be known in [9], [10]; however, the focal length is a
module-specific parameter that can differ from one smartphone to another. In this paper,
we propose and demonstrate indoor localization using $K$-pairwise LED IS-VLP.
The Proposed Localization System
2.1 System Model
The system model of the proposed
2.2 The Principle of ID Assignment With Orientation
The block diagram of the proposed localization system, which consists of IS-VLC and
2.3 Indoor Localization With K-Pairwise LED IS-VLP
Based on the principle of ID assignment with orientation, the smartphone can identify the mapping between the
received LED IDs and their images on the image sensor and determine the system coordinates of those LED luminaries.
The proposed
Acquiring system coordinates through IS-VLC: The system coordinates $(sx_i, sy_i)$ of the LED luminaries are acquired through the IS-VLC subsystem. Currently, most IS-VLC studies consider the LOS scenario. However, LOS transmission can cause a strong blooming effect on the image sensor when detecting modulated optical signals [12]. Moreover, the smartphone camera must always be aligned with the LED luminaries in the LOS scenario, which might limit the reception of the system coordinates. To enable users to easily receive the LED IDs, we implemented non-line-of-sight (NLOS) IS-VLC, in which the smartphone camera receives the modulated optical signal from surrounding reflective surfaces. The smartphone uses the rear camera to capture a photo with $N_r$ rows and $N_c$ columns, and the modulated optical signal appears as bright and dark fringes on the image when the exposure time and ISO level of the camera are set properly. For further image processing to extract the LED IDs, the captured image was converted into grayscale format. Notably, we used only the central width $w$ of the captured image to detect the logic values on the image. This not only reduces the computational load of image processing but also prevents shadowing caused by objects surrounding the smartphone. As illustrated in Fig. 3(a), because of the NLOS transmission, the image was not subject to the blooming effect. As portrayed in Fig. 3(b), for the central width $w$ of the image, the grayscale values were averaged for each row, resulting in an $N_r \times 1$ column vector ${\bf G} = [\bar{g}_1, \bar{g}_2, \ldots, \bar{g}_{N_r}]$, in which $\bar{g}_i$ denotes the averaged grayscale value of the central $w$ pixels on the $i$-th row. The data were demodulated by applying second-order polynomial fitting to the averaged row pixel values in ${\bf G}$. As represented in Fig. 3(c), these grayscale values were regressed to build a second-order polynomial decision threshold. Thus, the averaged row grayscale values could be demodulated into logic values, and the IDs of the LED luminaries could be retrieved according to the threshold.
Constructing a geometric relationship between system coordination and image sensor coordination: After acquiring the LED IDs, the smartphone identifies the system coordinates of each LED luminary and begins to construct a geometric relationship between the system coordinate system and the image-sensor coordinate system. The smartphone extracts the image sensor coordinates $(ix_j, iy_j)$ of the LED luminaries $L_j$ through its front camera. Using any two LED luminaries, for example, $L_i$, $i = 1, 2$, the geometric relationship among the system coordinates $(sx_i, sy_i)$ of the LED luminaries $L_i$, the system coordinates $(x_S, y_S)$ of the smartphone $S$, the image sensor coordinates $(ix_S, iy_S)$ of the center $I_S$ of the image sensor, and the image sensor coordinates $(ix_i, iy_i)$ of the images $I_i$ of the LED luminaries can be determined, as shown in Fig. 4, from the top-down view along the $z$-axis. Notably, this geometric interpretation can be viewed as an orthogonal projection of the localization system onto the xy-plane. Thus, the smartphone position $S$ and the image center $I_S$ are viewed as the same point in Fig. 4, and the focal length between the image sensor and the camera lens is not included in this geometric interpretation.
Fig. 3. (a) One image showing the bright and dark fringes in the NLOS scenario. (b) The central $N_r \times w$ matrix for data demodulation. (c) Grayscale values in ${\bf G}$ (blue line) and the threshold (red line).
Fig. 4. Relationship between the system coordinate system and the image-sensor coordinate system.
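The row-averaging and polynomial-threshold demodulation described above can be sketched in a few lines of Python with NumPy. The function name and the synthetic input are illustrative, not from the paper.

```python
import numpy as np

def demodulate_fringes(gray_image, w):
    """Sketch of the NLOS fringe demodulation described in the text.

    gray_image: 2-D array of grayscale pixel values (N_r rows x N_c columns).
    w: width in pixels of the central strip used for demodulation.
    Returns the length-N_r vector G of averaged row values and the logic bits.
    """
    n_rows, n_cols = gray_image.shape
    # Keep only the central strip of width w, which limits computation and
    # avoids shadowing from objects surrounding the smartphone.
    start = (n_cols - w) // 2
    strip = gray_image[:, start:start + w]
    # Average the grayscale values of the central w pixels on each row,
    # giving the N_r x 1 vector G = [g_1, ..., g_{N_r}].
    g = strip.mean(axis=1)
    # Fit a second-order polynomial to the averaged row values to build a
    # row-dependent decision threshold.
    rows = np.arange(n_rows)
    coeffs = np.polyfit(rows, g, deg=2)
    threshold = np.polyval(coeffs, rows)
    # Rows brighter than the threshold are demodulated as logic 1.
    bits = (g > threshold).astype(int)
    return g, bits
```

Because the threshold follows the slow brightness gradient of the reflecting surface while the fringes alternate quickly, the polynomial fit separates the two components without any fixed global threshold.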
By using typical image processing, including grayscale conversion, Gaussian filtering, image thresholding, and Canny contour detection, the image sensor coordinates $(ix_i, iy_i)$, $i = 1, 2$, of the LED luminaries can be calculated. Based on the similarity between the triangles formed by the points $\{L_1, M, S\}$ and $\{I_1, I_M, I_S\}$, or by the points $\{M, L_2, S\}$ and $\{I_M, I_2, I_S\}$, displayed in Fig. 4, the smartphone system coordinates were estimated as follows:
\begin{equation} \left\{ {\begin{array}{l} {{x_S} = s{x_1} + (i{x_1} - i{x_S}) \times \frac{{s{w_1}}}{{i{w_2}}}}\\ {{y_S} = s{y_1} + (i{y_1} - i{y_S}) \times \frac{{s{w_1}}}{{i{w_2}}}} \end{array}} \right.\quad {\rm{or}}\quad \left\{ {\begin{array}{l} {{x_S} = s{x_2} + (i{x_2} - i{x_S}) \times \frac{{s{w_1}}}{{i{w_2}}}}\\ {{y_S} = s{y_2} + (i{y_2} - i{y_S}) \times \frac{{s{w_1}}}{{i{w_2}}}} \end{array}} \right.\tag{1} \end{equation}
Indoor localization using $K$-pairwise LED IS-VLP: In a practical indoor space, the ceiling is often not parallel to the floor, which results in misalignment between the LED luminary and floor coordinates [9]. To compensate for the localization error caused by this misalignment, we propose the adoption of $K$-pairwise LED IS-VLP, which can reduce the localization error incurred by using only one pairwise LED IS-VLP. When the smartphone registers $N_{\rm LED}$ LED luminaries on its image sensor, where $N_{\rm LED} \geq 2$, we can use $K$ pairwise LEDs to execute the positioning method in (1), in which $K = \binom{N_{\rm LED}}{2}$. For example, three LED pairs are available in the scenario presented in Fig. 1: the $L_1$ and $L_2$, $L_2$ and $L_3$, and $L_1$ and $L_3$ pairs. According to (1), we can acquire the estimated coordinates of the smartphone ${\bf p}^{(i,j)} = (x_S^{(i,j)}, y_S^{(i,j)})$ by using the $L_i$ and $L_j$ pair. The final localization of the smartphone, ${\bf p}^{(f)} = (x_S^{(f)}, y_S^{(f)})$, can simply be calculated as the arithmetic mean of all ${\bf p}^{(i,j)}$ as follows:
\begin{align} x_S^{\left(f \right)} & = \frac{1}{K}\sum_{\left({i,j} \right) \in Q} x_S^{\left({i,j} \right)},\nonumber\\ y_S^{\left(f \right)} & = \frac{1}{K}\sum_{\left({i,j} \right) \in Q} y_S^{\left({i,j} \right)},\tag{2} \end{align}
where $Q$ denotes the set of all possible LED luminary pairs.
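The averaging in (2) over the set $Q$ of all $K = \binom{N_{\rm LED}}{2}$ pairs can be sketched as follows; the function names are illustrative, and `pair_estimate` stands in for the single-pair method of (1).

```python
from itertools import combinations

def k_pairwise_position(luminaries, pair_estimate):
    """Sketch of the K-pairwise averaging in (2).

    luminaries:    list of luminary identifiers (N_LED >= 2)
    pair_estimate: function mapping a pair (L_i, L_j) to an estimate (x, y),
                   e.g., the single-pair method of (1)
    """
    pairs = list(combinations(luminaries, 2))   # the set Q, with |Q| = K
    k = len(pairs)
    estimates = [pair_estimate(li, lj) for li, lj in pairs]
    # Arithmetic mean of all pairwise estimates p^(i,j).
    x_f = sum(x for x, _ in estimates) / k
    y_f = sum(y for _, y in estimates) / k
    return x_f, y_f
```

Averaging over every pair means a tilt-induced bias affecting one luminary pair is diluted by the other pairs, rather than propagating directly into the final position.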
Experimental Setup and Results
Two indoor lighting and positioning environments were used to verify our proposed method. First, three LED
luminaries,
To evaluate the proposed indoor VLP system, the smartphone was placed at the predefined testing
points (TPs) at a height of 85 cm above the floor. A total of
Fig. 5 presents the experimental results of the single pairwise LED IS-VLP
method with four different
Localization results of the
Fig. 6 presents the localization results obtained by using the proposed
Localization results of the
We also implemented the image-sensor-based trilateration (IS-based trilateration) presented in
[10] and [11].
Table 1 presents a comparison of the maximum localization error
In the second experimental condition, three LED luminaries,
Conclusion
This study implements an indoor localization system by integrating NLOS IS-VLC and the proposed