I. Introduction
With the capability of estimating ego-motion in six degrees of freedom (DOF) and simultaneously building dense, high-precision maps of the surrounding environment, LiDAR-based SLAM has been widely applied to autonomous driving vehicles [1], drones [2], [3], and beyond. With the development of LiDAR technologies, the emergence of low-cost LiDARs (e.g., Livox LiDAR [4]) has made LiDAR sensors more accessible, and a number of related works [5]–[9] have drawn the community's attention to this field of research. However, the accuracy of LiDAR-based SLAM methods degrades significantly, or the methods fail outright, in scenarios with few available geometric features; this problem is even more critical for LiDARs with a small field of view (FoV) [10]. In such scenarios, adding visual features can increase the system's robustness and accuracy. In this work, we propose a LiDAR-inertial-visual fusion framework that achieves state estimation with higher robustness and accuracy. The main contributions of our work are:
1) We develop a tightly-coupled LiDAR-inertial-visual system for real-time state estimation and mapping. Building on several key techniques from current state-of-the-art LiDAR-inertial and visual-inertial navigation systems, the system consists of a high-rate filter-based odometry and a low-rate factor graph optimization. The filter-based odometry fuses the measurements of LiDAR, inertial, and camera sensors within an error-state iterated Kalman filter to achieve real-time performance (a minimal sketch of this update pattern follows the list), while the factor graph optimization refines a local map of keyframe poses and visual landmark positions.
2) We conduct various experiments showing that the developed system runs in challenging scenarios with aggressive motion, sensor failure, and even narrow tunnel-like environments with a large number of moving objects and a small LiDAR FoV. It achieves more accurate and robust results than existing baselines and is accurate enough to reconstruct large-scale, indoor-outdoor dense 3D maps of building structures (see Fig. 1).
3) We open-source the system at https://github.com/hku-mars/r2live (an accompanying video is available at https://youtu.be/9lqRHmlN_MA), which could benefit the whole robotic community and serve as a baseline for comparison in this field of research.
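To make the fusion machinery in contribution 1) concrete, the following is a minimal, self-contained sketch of one error-state iterated Kalman filter cycle: IMU samples propagate the state and covariance, and a measurement (here a toy position fix standing in for a LiDAR or visual residual) is fused through an iterated update that relinearizes about the refreshed estimate. The state layout, noise values, and helper names (propagate, iterated_update) are illustrative assumptions for exposition, not the system's actual implementation.

# Sketch of an error-state iterated Kalman filter (ESIKF) cycle.
# All dimensions, models, and names are illustrative assumptions;
# a real system would also carry rotation and IMU-bias states.
import numpy as np

DIM = 6  # toy state: [position (3), velocity (3)]

def propagate(x, P, accel, dt, Q):
    """Forward-propagate the nominal state and error covariance with one IMU sample."""
    x = x.copy()
    x[0:3] += x[3:6] * dt + 0.5 * accel * dt**2   # position
    x[3:6] += accel * dt                          # velocity
    F = np.eye(DIM)
    F[0:3, 3:6] = np.eye(3) * dt                  # error-state transition Jacobian
    P = F @ P @ F.T + Q
    return x, P

def iterated_update(x, P, z, h, H_of, R, iters=5, tol=1e-6):
    """Iterated EKF update: relinearize h() about the current estimate each pass."""
    x_prior = x.copy()
    for _ in range(iters):
        H = H_of(x)                               # measurement Jacobian at current estimate
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        # Iterated-EKF correction, keeping the prior term (x_prior - x)
        dx = K @ (z - h(x)) + (np.eye(DIM) - K @ H) @ (x_prior - x)
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    P = (np.eye(DIM) - K @ H) @ P
    return x, P

# Toy usage: one IMU propagation, then fusion of a 3-D position measurement.
# Since h is linear here, the iteration converges in a single pass; with the
# nonlinear LiDAR/visual residuals of a real system, several passes are needed.
x, P = np.zeros(DIM), np.eye(DIM)
Q, R = np.eye(DIM) * 1e-3, np.eye(3) * 1e-2
x, P = propagate(x, P, accel=np.array([0.1, 0.0, 0.0]), dt=0.01, Q=Q)
H = np.hstack([np.eye(3), np.zeros((3, 3))])
x, P = iterated_update(x, P, z=np.array([0.001, 0.0, 0.0]),
                       h=lambda s: s[0:3], H_of=lambda s: H, R=R)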