I. Introduction
The field of autonomous robotics is developing at a rapid pace. Its market was valued at 1.61 billion USD in 2021 and is expected to grow thirteenfold by 2030 [1]. Over the last decade, mobile robots have been successfully deployed in many areas, including goods delivery [2], warehouse logistics [3]–[6], autonomous transport, disinfection [7], [8], and agriculture [9]. Autonomous robots are beginning to work alongside humans, executing tasks of increasing complexity. Such a wide range of applications requires robotic systems, both hardware and software, to be robust, safe, and efficient in challenging environments, e.g., in both day and night conditions [10]. Within the software architecture, autonomous robots typically include a perception subsystem whose central task is Simultaneous Localization and Mapping (SLAM): building a map of the environment while simultaneously estimating the robot's position within it. SLAM is therefore a crucial problem that must be solved accurately and efficiently for the robot to perform its primary operations. Currently, many research groups, both academic and industrial, are developing new SLAM approaches, designing task-driven methods, and improving existing pipelines to increase their robustness.
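To make the problem statement concrete, the full SLAM problem is commonly formulated in the probabilistic SLAM literature (this is standard background, not specific to the present work) as estimating the joint posterior over the robot trajectory and the map given all measurements and controls:

\[
  p\left(x_{1:t},\, m \mid z_{1:t},\, u_{1:t}\right),
\]

where $x_{1:t}$ denotes the robot trajectory up to time $t$, $m$ the map, $z_{1:t}$ the sensor observations, and $u_{1:t}$ the control inputs. The two subproblems are coupled: accurate pose estimation requires a map, and consistent mapping requires accurate poses, which is why they must be solved simultaneously.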
[Figure: Distribution of regions assigned to a single neural network for rendering. Top: left region in green, right region in blue, with the region intersection between them. Middle: rendering of the corresponding regions. Bottom: the final merged global map without visible stitching.]