
LIVER: A Tightly Coupled LiDAR-Inertial-Visual State Estimator With High Robustness for Underground Environments



Abstract:

In this letter, we propose a tightly coupled LiDAR-inertial-visual (LIV) state estimator, termed LIVER, which achieves robust and accurate localization and mapping in underground environments. LIVER starts with an effective strategy for LIV synchronization, followed by a robust initialization process that integrates LiDAR, vision, and IMU. A tightly coupled, nonlinear optimization-based method achieves highly accurate LiDAR-inertial-visual odometry (LIVO) by fusing LiDAR, visual, and IMU information. We consider scenarios in underground environments that are unfriendly to LiDAR and cameras: a visual-IMU-assisted method evaluates and handles LiDAR degeneracy, and a deep neural network eliminates the impact of poor lighting conditions on images. We verify the performance of the proposed method against state-of-the-art methods on public datasets and in real-world experiments, including underground mines. In the underground mine tests, tightly coupled methods without degeneracy handling fail due to self-similar areas (affecting LiDAR) and poor lighting conditions (affecting vision); under these conditions, our degeneracy handling approach successfully eliminates the impact of such disturbances on the system.
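The abstract states that a deep neural network removes the effect of poor lighting from the camera images before they are used for visual tracking; the reference list includes Li et al.'s zero-reference deep curve estimation (Zero-DCE) [25], which learns per-pixel parameters of an iterated quadratic light-enhancement curve. The Python sketch below applies such a curve with a single hand-picked global alpha as a stand-in for the learned parameters; it only illustrates the kind of preprocessing involved and is not the authors' implementation.

    import numpy as np

    def enhance_low_light(img, alpha=0.6, iterations=4):
        """Iterated quadratic enhancement curve in the spirit of Zero-DCE [25].

        Zero-DCE learns per-pixel, per-iteration curve parameters with a deep
        network; the single global 'alpha' used here is a hypothetical stand-in.
        'img' is a float image scaled to [0, 1] (H x W or H x W x 3).
        """
        x = np.clip(np.asarray(img, dtype=np.float64), 0.0, 1.0)
        for _ in range(iterations):
            # LE(x) = x + alpha * x * (1 - x): brightens dark pixels more than
            # bright ones while keeping the output inside [0, 1] for alpha in [0, 1].
            x = x + alpha * x * (1.0 - x)
        return np.clip(x, 0.0, 1.0)

    # Example: brighten a synthetic dark frame before feature extraction and tracking.
    dark = np.random.default_rng(0).uniform(0.0, 0.2, size=(480, 640))
    bright = enhance_low_light(dark)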
Published in: IEEE Robotics and Automation Letters (Volume: 9, Issue: 3, March 2024)
Page(s): 2399 - 2406
Date of Publication: 18 January 2024


I. Introduction

SLAM, as one of the most fundamental modules, remains at the center of robotics research. After more than thirty years of development, SLAM has become a relatively mature research field with a wide range of applications. However, existing results have focused mostly on urban and indoor office scenes; related research remains very challenging under extreme conditions such as underground environments [1], [2]. Underground environments have several characteristics that are unfriendly to SLAM. First, the lighting conditions underground are poor, which poses significant challenges to visual SLAM. Second, underground environments contain self-similar areas in which LiDAR SLAM typically degenerates. Fortunately, despite these challenges, there has been progress in recent years. The recent DARPA Subterranean (SubT) Challenge has promoted the development of underground SLAM [3]. A series of loosely coupled multi-robot SLAM algorithms has been developed based on LiDAR and IMU, supplemented by visual and thermal cameras, indicating that multi-sensor fusion is a feasible solution for exploring underground spaces. However, most of these works are loosely coupled; in contrast, tightly coupled methods achieve higher robustness because they fuse more aspects of the sensor information [4].
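This page does not detail how LIVER evaluates LiDAR degeneracy, but the cited work of Zhang et al. [16] characterizes degeneracy through the eigenvalues of the Gauss-Newton Hessian of the scan-registration cost: pose directions with small eigenvalues are poorly constrained by the geometry (e.g., translation along a featureless tunnel). The following Python sketch illustrates that eigenvalue test under assumed matrix shapes and a hypothetical threshold; it is not the authors' visual-IMU-assisted method.

    import numpy as np

    def degenerate_directions(J, eig_threshold=100.0):
        """Eigenvalue-based degeneracy check in the spirit of Zhang et al. [16].

        J: (m x 6) stacked Jacobian of the registration residuals with respect to
           the 6-DoF pose increment. 'eig_threshold' is a hypothetical value that
           would have to be tuned to the sensor and the residual scaling.
        Returns all eigenvalues and the eigenvectors spanning the weakly
        constrained (degenerate) subspace of the pose.
        """
        H = J.T @ J                           # 6x6 Gauss-Newton Hessian approximation
        eigvals, eigvecs = np.linalg.eigh(H)  # eigenvalues in ascending order
        weak = eigvals < eig_threshold        # directions the scan barely constrains
        return eigvals, eigvecs[:, weak]

    # Example: a synthetic Jacobian whose last column (say, translation along the
    # tunnel axis) is nearly zero, mimicking a self-similar corridor.
    rng = np.random.default_rng(0)
    J = rng.standard_normal((500, 6))
    J[:, 5] *= 1e-3
    eigvals, weak_dirs = degenerate_directions(J)
    # Pose updates along 'weak_dirs' could then be drawn from the visual-IMU
    # estimate rather than from the (degenerate) LiDAR registration.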

References
[1] K. Ebadi et al., "Present and future of SLAM in extreme environments: The DARPA SubT Challenge", IEEE Trans. Robot., vol. 40, pp. 936-959, 2024.
[2] Z. Song, X. Zhang, T. Li, S. Zhang, Y. Wang and J. Yuan, "IR-VIO: Illumination-robust visual-inertial odometry based on adaptive weighting algorithm with two-layer confidence maximization", IEEE/ASME Trans. Mechatron., vol. 28, no. 4, pp. 1920-1929, Aug. 2023.
[3] "DARPA subterranean challenge", Mar. 2022, [online] Available: https://www.subtchallenge.com.
[4] D. Wisth, M. Camurri, S. Das and M. Fallon, "Unified multi-modal landmark tracking for tightly coupled lidar-visual-inertial odometry", IEEE Robot. Automat. Lett., vol. 6, no. 2, pp. 1004-1011, Apr. 2021.
[5] X. Zuo, P. Geneva, W. Lee, Y. Liu and G. Huang, "LIC-Fusion: LiDAR-inertial-camera odometry", Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., pp. 5848-5854, 2019.
[6] X. Zuo et al., "LIC-Fusion 2.0: LiDAR-inertial-camera odometry with sliding-window plane-feature tracking", Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., pp. 5112-5119, 2020.
[7] C. Zheng, Q. Zhu, W. Xu, X. Liu, Q. Guo and F. Zhang, "FAST-LIVO: Fast and tightly-coupled sparse-direct LiDAR-inertial-visual odometry", Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., pp. 4003-4009, 2022.
[8] J. Lin and F. Zhang, "R³LIVE: A robust, real-time, RGB-colored, LiDAR-inertial-visual tightly-coupled state estimation and mapping package", Proc. IEEE Int. Conf. Robot. Automat., pp. 10672-10678, 2022.
[9] T. Shan, B. Englot, C. Ratti and D. Rus, "LVI-SAM: Tightly-coupled lidar-visual-inertial odometry via smoothing and mapping", Proc. IEEE Int. Conf. Robot. Automat., pp. 5692-5698, 2021.
[10] X. Lang et al., "Coco-LIC: Continuous-time tightly-coupled LiDAR-inertial-camera odometry using non-uniform B-spline", IEEE Robot. Automat. Lett., vol. 8, no. 11, pp. 7074-7081, Nov. 2023.
[11] K. Ebadi et al., "LAMP: Large-scale autonomous mapping and positioning for exploration of perceptually-degraded subterranean environments", Proc. IEEE Int. Conf. Robot. Automat., pp. 80-86, 2020.
[12] S. Khattak, H. Nguyen, F. Mascarich, T. Dang and K. Alexis, "Complementary multi-modal sensor fusion for resilient robot pose estimation in subterranean environments", Proc. Int. Conf. Unmanned Aircr. Syst., pp. 1024-1029, 2020.
[13] Y. Chang et al., "LAMP 2.0: A robust multi-robot SLAM system for operation in challenging large-scale underground environments", IEEE Robot. Automat. Lett., vol. 7, no. 4, pp. 9175-9182, Oct. 2022.
[14] J. Zhang and S. Singh, "Laser-visual-inertial odometry and mapping with high robustness and low drift", J. Field Robot., vol. 35, no. 8, pp. 1242-1264, 2018.
[15] A. I. Mourikis and S. I. Roumeliotis, "A multi-state constraint Kalman filter for vision-aided inertial navigation", Proc. IEEE Int. Conf. Robot. Automat., pp. 3565-3572, 2007.
[16] J. Zhang, M. Kaess and S. Singh, "On degeneracy of optimization-based state estimation problems", Proc. IEEE Int. Conf. Robot. Automat., pp. 809-816, 2016.
[17] T. Tuna et al., "X-ICP: Localizability-aware LiDAR registration for robust localization in extreme environments", IEEE Trans. Robot., vol. 40, pp. 452-471, 2024.
[18] J. Nubert, E. Walther, S. Khattak and M. Hutter, "Learning-based localizability estimation for robust LiDAR localization", Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., pp. 17-24, 2022.
[19] F. Han et al., "DAMS-LIO: A degeneration-aware and modular sensor-fusion LiDAR-inertial odometry", Proc. IEEE Int. Conf. Robot. Automat., pp. 2745-2751, 2023.
[20] J. Zhang and S. Singh, "LOAM: Lidar odometry and mapping in real-time", Proc. Robot.: Sci. Syst. Conf., pp. 1-9, 2014.
[21] D. Galvez-López and J. D. Tardos, "Bags of binary words for fast place recognition in image sequences", IEEE Trans. Robot., vol. 28, no. 5, pp. 1188-1197, Oct. 2012.
[22] G. Kim, S. Choi and A. Kim, "Scan context++: Structural place recognition robust to rotation and lateral variations in urban environments", IEEE Trans. Robot., vol. 38, no. 3, pp. 1856-1874, Jun. 2022.
[23] M. Kaess et al., "iSAM2: Incremental smoothing and mapping using the Bayes tree", Int. J. Robot. Res., vol. 31, no. 2, pp. 216-235, 2012.
[24] T. Qin, P. Li and S. Shen, "VINS-Mono: A robust and versatile monocular visual-inertial state estimator", IEEE Trans. Robot., vol. 34, no. 4, pp. 1004-1020, Aug. 2018.
[25] C. Li, C. Guo and C. C. Loy, "Learning to enhance low-light image via zero-reference deep curve estimation", IEEE Trans. Pattern Anal. Mach. Intell., vol. 44, no. 8, pp. 4225-4238, Aug. 2022.
[26] J. Yin, A. Li, T. Li, W. Yu and D. Zou, "M2DGR: A multi-sensor and multi-scenario SLAM dataset for ground robots", IEEE Robot. Automat. Lett., vol. 7, no. 2, pp. 2266-2273, Apr. 2022.
[27] T. Shan, B. Englot, D. Meyers, W. Wang, C. Ratti and D. Rus, "LIO-SAM: Tightly-coupled lidar inertial odometry via smoothing and mapping", Proc. IEEE/RSJ Int. Conf. Intell. Robots Syst., pp. 5135-5142, 2020.
[28] W. Xu, Y. Cai, D. He, J. Lin and F. Zhang, "FAST-LIO2: Fast direct LiDAR-inertial odometry", IEEE Trans. Robot., vol. 38, no. 4, pp. 2053-2073, Aug. 2022.
