I. Introduction
When multiple robots participate in the SLAM process, each robot builds a local map in its own coordinate frame. These local maps provide information for localization, obstacle avoidance, navigation, and path planning. They can later be shared and merged into a global map that represents the environment more completely, enabling tasks to be completed efficiently and accurately. In practice, the complexity of map merging depends on several factors, including whether the relative positions and orientations of the robots are known and whether the sensor data from each robot can be fused correctly. The sensors may differ in accuracy, inherent noise, and range, and such differences across robotic platforms (aerial or ground) can lead to significant discrepancies between the generated maps, making direct merging more challenging [1]. A feasible solution is to use the overlapping regions between the local maps generated by the robots to align and merge them [2]. The most challenging and critical step in this process is finding the transformation between the overlapping regions, i.e., point cloud registration.
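Formally, given point correspondences drawn from the overlapping region, rigid registration can be stated as a least-squares problem over a rotation and translation. The formulation below is a standard illustrative statement of this problem under a rigid-body assumption; the notation is ours and is not taken from the cited works:

$$
(\mathbf{R}^{*}, \mathbf{t}^{*}) = \operatorname*{arg\,min}_{\mathbf{R}\in SO(3),\ \mathbf{t}\in\mathbb{R}^{3}} \sum_{(\mathbf{p}_i,\mathbf{q}_i)\in\mathcal{C}} \left\lVert \mathbf{R}\mathbf{p}_i + \mathbf{t} - \mathbf{q}_i \right\rVert^{2},
$$

where $\mathcal{C}$ denotes the set of corresponding point pairs in the overlapping region, $\mathbf{p}_i$ is a point in one robot's local map, and $\mathbf{q}_i$ is its counterpart in the other.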