I. Introduction
Light detection and ranging (LiDAR) can directly capture detailed structural information about the surrounding environment through accurate distance measurements, which makes it widely used in simultaneous localization and mapping (SLAM) [1]. Earlier 2-D LiDAR frameworks include FastSLAM [2], GMapping [3], LagoSLAM [4], Cartographer [5], and so on. These algorithms can perform basic navigation tasks but cannot competently handle 3-D or more complex environments. To address this issue, the well-known LOAM framework was proposed to achieve real-time state estimation and mapping with low drift using 3-D LiDAR, in which point-to-line and point-to-plane residuals are minimized for pose optimization [6]. Building on LOAM, Shan and Englot [7] applied point cloud segmentation to eliminate outliers, yielding more reliable point cloud registration and a lighter computational burden. Despite these successes, LiDAR itself has several inherent limitations. On the one hand, its low vertical resolution restricts measurements in environments with sparse vertical structures. On the other hand, LiDARs mounted on moving platforms are susceptible to motion distortion, which directly degrades the accuracy of point cloud registration.
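To make the LOAM-style residuals mentioned above concrete, the following is a minimal sketch (not the reference implementation of [6]) of the two geometric distances that such frameworks minimize: the distance from a scan point to an edge line defined by two feature points, and the distance from a scan point to a plane defined by three feature points. The function names and the use of explicit feature points are illustrative assumptions; in practice the lines and planes come from neighboring points in the previous scan or map.

```python
import numpy as np

def point_to_line_residual(p, a, b):
    # Distance from point p to the line through edge feature points a and b:
    # area of the parallelogram spanned by (p-a) and (p-b), divided by |b-a|.
    return np.linalg.norm(np.cross(p - a, p - b)) / np.linalg.norm(b - a)

def point_to_plane_residual(p, a, b, c):
    # Signed distance from point p to the plane through planar feature
    # points a, b, c, using the unit normal of the plane.
    n = np.cross(b - a, c - a)
    n = n / np.linalg.norm(n)
    return float(np.dot(p - a, n))
```

In a full pipeline, these residuals are evaluated for each edge/planar feature of the current scan (after transforming it by the current pose estimate) and summed into a nonlinear least-squares problem over the 6-DoF pose.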