Abstract:
Accurate three-dimensional (3D) color models play a crucial role in emerging applications such as geographic surveying and mapping, smart cities, and digital twin cities. Recent advances in machine learning and LiDAR-based Simultaneous Localization and Mapping (SLAM) have made high-fidelity representations of large-scale scenes attainable. Leveraging these technologies, this paper presents a sensor-fusion framework that integrates a solid-state LiDAR, an inertial measurement unit (IMU), and a monocular camera. In addition, a point cloud upsampling technique is introduced into the LiDAR SLAM pipeline to increase point cloud density. Within this framework, LiDAR-IMU odometry accurately estimates the positions and poses of the collected point clouds, while synchronized camera images provide texture information for the point clouds. The proposed framework generates highly detailed, dense 3D color models of large-scale outdoor scenes within a limited on-site scanning time. Extensive experimental results validate the effectiveness and efficiency of the proposed approach.
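The colorization step described in the abstract can be illustrated with a short sketch. The Python snippet below is a minimal, hypothetical illustration, not the authors' implementation: posed LiDAR points are transformed into the camera frame with an assumed camera-from-LiDAR extrinsic T_cam_lidar, projected through an assumed pinhole intrinsic matrix K, and colored by the pixels they land on. The function name colorize_points and both calibration matrices are illustrative assumptions.

import numpy as np

def colorize_points(points_lidar, image, K, T_cam_lidar):
    """Assign an RGB color to each LiDAR point visible in the image.

    points_lidar : (N, 3) points in the LiDAR frame
    image        : (H, W, 3) synchronized RGB image
    K            : (3, 3) pinhole intrinsic matrix (assumed calibrated)
    T_cam_lidar  : (4, 4) rigid transform, camera frame <- LiDAR frame
    Returns (M, 3) camera-frame points and their (M, 3) RGB colors.
    """
    # Transform points into the camera frame (homogeneous coordinates).
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    # Keep only points in front of the camera.
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]

    # Pinhole projection to pixel coordinates.
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]

    # Keep projections that land inside the image bounds.
    h, w = image.shape[:2]
    u, v = uv[:, 0], uv[:, 1]
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)

    # Sample the image at the projected pixel locations.
    colors = image[v[inside].astype(int), u[inside].astype(int)]
    return pts_cam[inside], colors

In a full pipeline this projection would run per synchronized image, using the odometry-estimated pose to bring map points into the LiDAR frame first, and with occlusion handling and the upsampled point cloud as input; the sketch omits those details.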
Published in: 2023 Asia Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC)
Date of Conference: 31 October 2023 - 03 November 2023
Date Added to IEEE Xplore: 20 November 2023