Quantification of Uncertainty and Its Applications to Complex Domain for Autonomous Vehicles Perception System | IEEE Journals & Magazine | IEEE Xplore



Abstract:

Over the last decade, deep neural networks (DNNs) have penetrated all fields of science and the real world. Owing to the lack of quantified data and model uncertainty, deep learning is frequently brittle, opaque, and difficult to furnish with trustworthy assurances for autonomous vehicles' (AVs) perception; uncertainty quantification is the approach proposed to fill this gap. Nevertheless, most previous studies have focused on methodology, and research on applications to AVs remains scarce. To the best of our knowledge, this survey is the first to review the application of uncertainty in the field of AV perception and localization. First, this survey analyzes the sources of uncertainty in autonomous perception, including uncertainty caused by factors internal and external to the sensors as well as sensor distortion caused by complex scenes. Second, we propose an evaluation criterion and use it to carry out a quantitative analysis of perception applications for AVs, and we discuss the mainstream datasets. Third, we put forward a number of open issues and raise future research directions that can guide readers who are entering this field. We believe that epistemic uncertainty is currently the dominant research direction and that the study of aleatoric uncertainty still has a long way to go. This survey is devoted to promoting the development of uncertainty research on AV perception.
Article Sequence Number: 5010217
Date of Publication: 13 March 2023


I. Introduction

In the past decade, deep neural networks (DNNs) [1], [2], [3] have penetrated all fields of science and the real world, such as medical image analysis [4], intelligent connected vehicles (ICVs) [5], measurement [6], and robotics [7]. In the ICV field, deep learning solves the feature-extraction problem faced by classical machine learning [8], [9] in training autonomous driving vehicles, thus making the detection and decision-making of neural networks more accurate. It has therefore been widely applied at all levels of the ICV field, including planning and decision-making [10], [11], [12], [13], perception [14], [15], [16], [17], [18], [19], mapping and positioning [20], [21], [22], [23], and so on.
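To make the notion of epistemic (model) uncertainty concrete, the sketch below uses Monte Carlo dropout in the spirit of [29]: dropout is kept active at inference time, and the spread of repeated stochastic forward passes is read as model uncertainty. This is an illustrative toy (a made-up single-layer regressor with random weights), not an implementation from the surveyed work.

```python
import numpy as np

# Illustrative sketch (not from the paper): Monte Carlo dropout [29]
# approximates epistemic uncertainty by keeping dropout active at
# test time and measuring the spread of repeated predictions.

rng = np.random.default_rng(0)

# Toy "trained" single-layer regressor; weights are invented for illustration.
W = rng.normal(size=(16, 1))

def predict_with_dropout(x, p_drop=0.5):
    """One stochastic forward pass: randomly drop hidden features."""
    h = np.tanh(x)                        # hidden features, shape (16,)
    mask = rng.random(h.shape) >= p_drop  # Bernoulli dropout mask
    h = h * mask / (1.0 - p_drop)         # inverted-dropout scaling
    return (h @ W).item()

x = rng.normal(size=16)  # one input sample
samples = np.array([predict_with_dropout(x) for _ in range(100)])

mean = samples.mean()       # predictive mean
epistemic = samples.std()   # spread across passes ~ epistemic uncertainty
print(f"prediction = {mean:.3f} +/- {epistemic:.3f}")
```

Aleatoric (data) uncertainty, by contrast, would be modeled by having the network predict an input-dependent noise variance alongside the mean, so the two kinds of uncertainty require different machinery even in this simple setting.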

References

[1] C. Szegedy, A. Toshev and D. Erhan, "Deep neural networks for object detection", Proc. Adv. Neural Inf. Process. Syst., pp. 1-9, 2013.
[2] K. Wang, S. Zhang, J. Chen, F. Ren and L. Xiao, "A feature-supervised generative adversarial network for environmental monitoring during hazy days", Sci. Total Environ., vol. 748, Dec. 2020.
[3] K. Wang et al., "The adaptability and challenges of autonomous vehicles to pedestrians in urban China", Accident Anal. Prevention, vol. 145, Sep. 2020.
[4] D. Shen, G. Wu and H. Suk, "Deep learning in medical image analysis", Annu. Rev. Biomed. Eng., vol. 19, pp. 221-248, Jun. 2017.
[5] B. Kisacanin, "Deep learning for autonomous vehicles", Proc. IEEE 47th Int. Symp. Multiple-Valued Log. (ISMVL), p. 142, May 2017.
[6] E. Perez and E. Zappa, "Video motion magnification to improve the accuracy of vision-based vibration measurements", IEEE Trans. Instrum. Meas., vol. 71, pp. 1-12, 2022.
[7] N. Sünderhauf et al., "The limits and potentials of deep learning for robotics", Int. J. Robot. Res., vol. 37, no. 4, pp. 405-420, 2018.
[8] M. I. Jordan and T. M. Mitchell, "Machine learning: Trends, perspectives, and prospects", Science, vol. 349, no. 6245, pp. 255-260, 2015.
[9] K. Wang, S. Ma, F. Ren and J. Lu, "SBAS: Salient bundle adjustment for visual SLAM", IEEE Trans. Instrum. Meas., vol. 70, pp. 1-9, 2021.
[10] W. Schwarting, J. Alonso-Mora and D. Rus, "Planning and decision-making for autonomous vehicles", Annu. Rev. Control Robot. Auton. Syst., vol. 1, no. 1, 2018.
[11] L. Caltagirone, M. Bellone, L. Svensson and M. Wahde, "LIDAR-based driving path generation using fully convolutional neural networks", Proc. IEEE 20th Int. Conf. Intell. Transp. Syst. (ITSC), pp. 1-6, Oct. 2017.
[12] S. Dixit et al., "Trajectory planning and tracking for autonomous overtaking: State-of-the-art and future prospects", Annu. Rev. Control, vol. 45, pp. 76-86, Jan. 2018.
[13] X. Hu, X. Chen, G. T. Parks and W. Yao, "Review of improved Monte Carlo methods in uncertainty-based design optimization for aerospace vehicles", Prog. Aerosp. Sci., vol. 86, pp. 20-27, Oct. 2016.
[14] H. Zhu, K.-V. Yuen, L. Mihaylova and H. Leung, "Overview of environment perception for intelligent vehicles", IEEE Trans. Intell. Transp. Syst., vol. 18, no. 10, pp. 2584-2601, Oct. 2017.
[15] J. Van Brummelen, M. O'Brien, D. Gruyer and H. Najjaran, "Autonomous vehicle perception: The technology of today and tomorrow", Transp. Res. C Emerg. Technol., vol. 89, pp. 384-406, Apr. 2018.
[16] K. Wang, T. Zhou, X. Li and F. Ren, "Performance and challenges of 3D object detection methods in complex scenes for autonomous driving", IEEE Trans. Intell. Vehicles, Oct. 2022.
[17] J. Janai, F. Güney, A. Behl and A. Geiger, "Computer vision for autonomous vehicles: Problems, datasets and state-of-the-art", Found. Trends Comput. Graph. Vis., vol. 12, no. 1, 2017.
[18] D. Weik, R. Nauber, C. Kupsch, L. Buttner and J. Czarske, "Uncertainty quantification of ultrasound image velocimetry for liquid metal flow mapping", IEEE Trans. Instrum. Meas., vol. 70, pp. 1-11, 2021.
[19] K. Wang, L. Pu, J. Zhang and J. Lu, "Gated adversarial network based environmental enhancement method for driving safety under adverse weather conditions", IEEE Trans. Intell. Vehicles, Jan. 2022.
[20] S. Lowry et al., "Visual place recognition: A survey", IEEE Trans. Robot., vol. 32, no. 1, pp. 1-19, Feb. 2016.
[21] S. Kuutti, S. Fallah, K. Katsaros, M. Dianati, F. Mccullough and A. Mouzakitis, "A survey of the state-of-the-art localization techniques and their potentials for autonomous vehicle applications", IEEE Internet Things J., vol. 5, no. 2, pp. 829-846, Apr. 2018.
[22] K. Wang, S. Luo, T. Chen and J. Lu, "Salient-VPR: Salient weighted global descriptor for visual place recognition", IEEE Trans. Instrum. Meas., vol. 71, pp. 1-8, 2022.
[23] K. Wang, T. Zhou, Z. Zhang, T. Chen and J. Chen, "PVF-DectNet: Multi-modal 3D detection network based on perspective-voxel fusion", Eng. Appl. Artif. Intell., vol. 120, Apr. 2023, [online] Available: https://www.sciencedirect.com/science/article/pii/S0952197623001355.
[24] J. Choi, D. Chun, H. Kim and H.-J. Lee, "Gaussian YOLOv3: An accurate and fast object detector using localization uncertainty for autonomous driving", Proc. IEEE/CVF Int. Conf. Comput. Vis. (ICCV), Oct. 2019.
[25] D. Hall, F. Dayoub, J. Skinner, H. Zhang and N. Sunderhauf, "Probabilistic object detection: Definition and evaluation", Proc. IEEE/CVF Winter Conf. Appl. Comput. Vis. (WACV), pp. 1031-1040, Mar. 2020.
[26] E. Stenborg, C. Toft and L. Hammarstrand, "Long-term visual localization using semantically segmented images", Proc. IEEE Int. Conf. Robot. Autom. (ICRA), pp. 6484-6490, May 2018.
[27] M. Valdenegro-Toro, "Deep sub-ensembles for fast uncertainty estimation in image classification", arXiv:1910.08168, 2019.
[28] R. Ghanem, H. Owhadi and D. Higdon, Handbook of Uncertainty Quantification, Berlin, Germany: Springer, 2017.
[29] Y. Gal and Z. Ghahramani, "Dropout as a Bayesian approximation: Representing model uncertainty in deep learning", Proc. 33rd Int. Conf. Mach. Learn. (ICML), vol. 48, pp. 1050-1059, Jun. 2016.
[30] F. K. Gustafsson, M. Danelljan and T. B. Schon, "Evaluating scalable Bayesian deep learning methods for robust computer vision", Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. Workshops (CVPRW), pp. 318-319, Jun. 2020.
