I. Introduction
A fundamental requirement for the autonomous operation of mobile robots is the estimation of their location, which is also necessary for map construction and essential for navigation control. To this end, exteroceptive sensors such as LiDARs (Light Detection And Ranging) and cameras are commonly used to capture the characteristics of the surroundings, build a model of the environment, and estimate the robot's location using SLAM (Simultaneous Localization And Mapping) techniques. In recent years, numerous SLAM algorithms dedicated to specific sensors have emerged, including the popular LiDAR SLAM [1]–[3] and Visual SLAM [4]–[6] solutions.