I. Introduction & Related Works
In recent decades, advances in robot perception have increased the potential for robot navigation in extreme environments. Among the widely used sensors, light detection and ranging (LiDAR) and cameras are often selected for robot navigation despite their perceptual limitations in extreme environments. Recently, significant progress has been made in visual and LiDAR odometry, which estimate a robot's 3D trajectory and motion from sequential measurements of cameras and LiDARs [1]–[4]. However, in environments with insufficient illumination, a camera cannot guarantee sufficient visibility. Likewise, LiDAR, whose short operating wavelength penetrates airborne particulates poorly, does not provide enough information for motion estimation at disaster sites containing dense smoke [5].
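To make the odometry pipeline concrete, the following is a minimal two-frame visual-odometry sketch, not the method of any cited work: it recovers the relative camera motion from matched ORB features via the essential matrix using OpenCV. The image file names and the intrinsic matrix K are hypothetical placeholders.

```python
# Minimal two-frame visual odometry sketch (illustrative only).
# Assumptions: grayscale frames "frame0.png"/"frame1.png" exist and
# K holds the (hypothetical) pinhole camera intrinsics.
import cv2
import numpy as np

K = np.array([[718.856, 0.0, 607.193],
              [0.0, 718.856, 185.216],
              [0.0, 0.0, 1.0]])

img0 = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

# Detect and describe ORB features in both frames.
orb = cv2.ORB_create(2000)
kp0, des0 = orb.detectAndCompute(img0, None)
kp1, des1 = orb.detectAndCompute(img1, None)

# Brute-force Hamming matching with cross-checking for reliability.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des0, des1), key=lambda m: m.distance)

pts0 = np.float32([kp0[m.queryIdx].pt for m in matches])
pts1 = np.float32([kp1[m.trainIdx].pt for m in matches])

# RANSAC essential-matrix estimation rejects outlier matches, then the
# matrix is decomposed into rotation R and unit-scale translation t.
E, mask = cv2.findEssentialMat(pts0, pts1, K, method=cv2.RANSAC,
                               prob=0.999, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, pts0, pts1, K, mask=mask)
print("relative rotation:\n", R, "\nunit translation:\n", t.ravel())
```

Note that such a pipeline depends entirely on detecting enough well-distributed feature correspondences; in darkness or dense smoke, too few reliable matches survive, which is precisely the failure mode motivating this work.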