
Deep Learning for Image and Point Cloud Fusion in Autonomous Driving: A Review



Abstract:

Autonomous vehicles have been experiencing rapid development in the past few years. However, achieving full autonomy is not a trivial task, due to the complex and dynamic nature of the driving environment. Autonomous vehicles are therefore equipped with a suite of different sensors to ensure robust and accurate environmental perception. In particular, camera-LiDAR fusion is becoming an emerging research theme. However, there has so far been no critical review focusing on deep-learning-based camera-LiDAR fusion methods. To bridge this gap and motivate future research, this article reviews recent deep-learning-based data fusion approaches that leverage both images and point clouds. This review gives a brief overview of deep learning on image and point cloud data processing, followed by in-depth reviews of camera-LiDAR fusion methods for depth completion, object detection, semantic segmentation, tracking, and online cross-sensor calibration, organized by their respective fusion levels. Furthermore, we compare these methods on publicly available datasets. Finally, we identify gaps and overlooked challenges between current academic research and real-world applications. Based on these observations, we provide our insights and point out promising research directions.
Published in: IEEE Transactions on Intelligent Transportation Systems (Volume: 23, Issue: 2, February 2022)
Page(s): 722 - 739
Date of Publication: 17 March 2021



I. Introduction

Recent breakthroughs in deep learning and sensor technologies have motivated the rapid development of autonomous driving technology, which could potentially improve road safety, traffic efficiency, and personal mobility [1]–[3]. However, technical challenges and the cost of exteroceptive sensors have constrained current deployments of autonomous driving systems to small-scale operations in confined and controlled environments. One critical challenge is obtaining an adequately accurate understanding of the vehicle's 3D surrounding environment in real time. To this end, sensor fusion, which leverages multiple types of sensors with complementary characteristics to enhance perception and reduce cost, has become an emerging research theme.
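To make the camera-LiDAR fusion setting concrete, the sketch below shows the geometric alignment step that most fusion pipelines share: projecting 3D LiDAR points into the camera image plane via a rigid-body extrinsic transform and a pinhole intrinsic matrix. This is not a method from the surveyed literature, only a minimal illustration; the calibration inputs T_cam_lidar and K are assumed to be given (e.g., from KITTI-style calibration files), and all names are illustrative.

```python
import numpy as np

def project_lidar_to_image(points, T_cam_lidar, K):
    """Project LiDAR points into pixel coordinates.

    points       -- (N, 3) array of XYZ points in the LiDAR frame
    T_cam_lidar  -- (4, 4) rigid-body transform from LiDAR to camera frame
    K            -- (3, 3) camera intrinsic matrix
    Returns (M, 2) pixel coordinates and (M,) depths for points in front
    of the camera (M <= N).
    """
    # Homogeneous coordinates: (N, 4)
    pts_h = np.hstack([points, np.ones((points.shape[0], 1))])
    # Transform into the camera frame, keep points with positive depth
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    pts_cam = pts_cam[pts_cam[:, 2] > 0]
    # Pinhole projection: apply intrinsics, then divide by depth
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    return uv, pts_cam[:, 2]

# Toy usage with placeholder calibration (identity extrinsics, simple K)
if __name__ == "__main__":
    K = np.array([[700.0, 0.0, 320.0],
                  [0.0, 700.0, 240.0],
                  [0.0, 0.0, 1.0]])
    T = np.eye(4)
    pts = np.array([[1.0, 0.5, 10.0], [-2.0, 0.0, 5.0]])
    uv, depth = project_lidar_to_image(pts, T, K)
    print(uv, depth)
```

Once every point has a pixel coordinate, image features can be gathered per point (or depth scattered per pixel), which is the starting point for the point-level, feature-level, and result-level fusion schemes reviewed later.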

