
LIVER: A Tightly Coupled LiDAR-Inertial-Visual State Estimator With High Robustness for Underground Environments


Abstract:

In this letter, we propose a tightly coupled LiDAR-inertial-visual (LIV) state estimator, termed LIVER, which achieves robust and accurate localization and mapping in underground environments. LIVER starts with an effective strategy for LIV synchronization, followed by a robust initialization process that integrates LiDAR, vision, and IMU. A tightly coupled, nonlinear optimization-based method achieves highly accurate LiDAR-inertial-visual odometry (LIVO) by fusing LiDAR, visual, and IMU information. We consider scenarios in underground environments that are unfriendly to LiDAR and cameras: a visual-IMU-assisted method enables the evaluation and handling of LiDAR degeneracy, and a deep neural network is introduced to eliminate the impact of poor lighting conditions on images. We verify the performance of the proposed method by comparing it with state-of-the-art methods on public datasets and in real-world experiments, including underground mines. In the underground mine tests, tightly coupled methods without degeneracy handling fail due to self-similar areas (affecting LiDAR) and poor lighting conditions (affecting vision). Under these conditions, our degeneracy handling approach successfully eliminates the impact of such disturbances on the system.
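As a rough illustration of what evaluating LiDAR degeneracy can look like, the sketch below applies a common eigenvalue-based check to the translational part of the point-to-plane scan-matching Hessian: directions with small eigenvalues are poorly constrained, as happens along the axis of a self-similar tunnel. The function name, threshold, and purely geometric test are illustrative assumptions only; the letter's actual scheme additionally uses visual and IMU information to evaluate and handle the degeneracy.

```python
# Minimal sketch (not the authors' visual-IMU-assisted method): detect weakly
# constrained directions from the eigenvalues of the translational information
# matrix built from point-to-plane correspondences.
import numpy as np

def degenerate_directions(plane_normals, eigenvalue_threshold=100.0):
    """Return (eigenvalues, eigenvectors, mask) for the translational part of the
    scan-matching Hessian. `plane_normals` is an (N, 3) array of unit normals of
    the planes each LiDAR point was matched to; the threshold is a hypothetical
    tuning value."""
    N = np.asarray(plane_normals, dtype=float)
    H = N.T @ N                                   # 3x3 information matrix
    eigvals, eigvecs = np.linalg.eigh(H)          # ascending eigenvalues
    degenerate = eigvals < eigenvalue_threshold   # weakly constrained directions
    return eigvals, eigvecs, degenerate

# Example: a long corridor whose walls and floor constrain y and z, but not x.
if __name__ == "__main__":
    normals = np.vstack([np.tile([0.0, 1.0, 0.0], (200, 1)),
                         np.tile([0.0, 0.0, 1.0], (200, 1))])
    vals, vecs, mask = degenerate_directions(normals)
    print("eigenvalues:", vals)           # one near-zero eigenvalue -> degeneracy
    print("degenerate axes:", vecs[:, mask].T)
```

In such a degenerate case, a tightly coupled system can suppress state updates along the flagged directions and rely on the remaining sensors instead, which is the role the visual-IMU assistance plays in the proposed estimator.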
Published in: IEEE Robotics and Automation Letters (Volume: 9, Issue: 3, March 2024)
Page(s): 2399 - 2406
Date of Publication: 18 January 2024


I. Introduction

SLAM, as one of the most fundamental modules, undoubtedly remains at the center of robotics research. After more than thirty years of development, SLAM has become a relatively mature research field with a wide range of applications. However, existing results have focused mostly on urban and indoor office scenes, and related research remains very challenging in extreme conditions such as underground environments [1], [2]. Underground environments have several characteristics that are unfriendly to SLAM. First, lighting conditions underground are poor, which poses significant challenges for visual SLAM. Second, underground environments contain self-similar areas in which LiDAR SLAM typically degenerates. Fortunately, despite these challenges, there has been some progress in recent years. The recent DARPA Subterranean (SubT) Challenge has promoted the development of underground SLAM [3]. A series of loosely coupled multi-robot SLAM algorithms has been developed based on LiDAR and IMU, supplemented by visual and thermal cameras, indicating that multi-sensor fusion is a feasible solution for underground exploration. However, most of these works are loosely coupled methods. In contrast, tightly coupled methods offer higher robustness because they fuse more aspects of the sensor information [4].
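To make the distinction concrete, a tightly coupled LIV estimator solves a single joint nonlinear least-squares problem over the state set, in which IMU, LiDAR, and camera measurements each contribute residual terms weighted by their covariances, whereas a loosely coupled system fuses the outputs of separate per-sensor estimators. The cost below is a generic formulation under this standard setup, not necessarily the exact objective optimized by LIVER:

```latex
\min_{\mathcal{X}} \;
\underbrace{\sum_{k} \big\| r_{\mathcal{I}}\big(z^{\mathrm{imu}}_{k}, \mathcal{X}\big) \big\|^{2}_{\Sigma_{\mathcal{I}_k}}}_{\text{IMU preintegration residuals}}
\;+\;
\underbrace{\sum_{i} \big\| r_{\mathcal{L}}\big(z^{\mathrm{lidar}}_{i}, \mathcal{X}\big) \big\|^{2}_{\Sigma_{\mathcal{L}_i}}}_{\text{LiDAR point-to-plane residuals}}
\;+\;
\underbrace{\sum_{j} \big\| r_{\mathcal{C}}\big(z^{\mathrm{cam}}_{j}, \mathcal{X}\big) \big\|^{2}_{\Sigma_{\mathcal{C}_j}}}_{\text{visual reprojection residuals}}
```

Because all three residual types constrain the same states simultaneously, a failure of one modality (e.g., LiDAR degeneracy in a self-similar tunnel) can be compensated by the others within the same optimization.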
