I. Introduction
3D sensors (e.g., depth sensors, time-of-flight sensors, and LiDAR) can perceive fine 3D geometric information of a scene but, unlike image sensors, cannot capture appearance details (e.g., color and texture) of the surroundings. Many robotic applications rely on 3D sensors alone, without any color information, which makes 3D data visualization challenging. It is therefore desirable to visualize 3D data with vivid color: colorized 3D data is perceptually more meaningful and credible and often conveys rich semantic cues, providing not only better scene understanding for humans but also significant improvements in visual recognition [1], [2] for modern AR/VR and robotic applications. As shown in Fig. 1, compared with the original point cloud containing coordinates only, the colorized point cloud makes the scene easier to understand visually and greatly improves the recognizability of objects. Point cloud colorization is thus an emerging topic for better 3D data visualization and visual perception.