
Showing 1-7 of 7 results

Results

Various artificial intelligence (AI) algorithms have been developed for autonomous vehicles (AVs) to support environmental perception, decision making and automated driving in real-world scenarios. Existing AI methods, such as deep learning and deep reinforcement learning, have been criticized due to their black box nature. Explainable AI technologies are important for assisting users in understanding…
The demand for explainability in complex machine learning (ML) models is ever more pressing, particularly within safety-critical domains like autonomous driving. The Integrated Gradient (IG), a prominent attribution-based explainable artificial intelligence method, offers an effective solution for explaining and diagnosing ML models. However, IG is primarily designed for deep neural network models…
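The Integrated Gradient method referred to above accumulates gradients along a straight path from a baseline to the input. As a minimal numerical sketch (not the paper's implementation), the toy quadratic function `f`, its hand-written gradient, and the zero baseline below are all assumptions chosen for illustration:

```python
import numpy as np

def integrated_gradients(f, grad_f, x, baseline, steps=100):
    """Approximate IG attributions with a midpoint Riemann sum over the
    straight-line path from `baseline` to `x`."""
    alphas = (np.arange(steps) + 0.5) / steps  # midpoints in (0, 1)
    total = np.zeros_like(x, dtype=float)
    for a in alphas:
        total += grad_f(baseline + a * (x - baseline))
    avg_grad = total / steps
    return (x - baseline) * avg_grad

# Toy differentiable model: f(x) = sum(x_i^2), so grad f = 2x.
f = lambda x: float(np.sum(x ** 2))
grad_f = lambda x: 2.0 * x

x = np.array([1.0, 2.0, 3.0])
baseline = np.zeros_like(x)
attr = integrated_gradients(f, grad_f, x, baseline)
```

By the completeness axiom, the attributions sum to `f(x) - f(baseline)`; for this quadratic toy model each attribution works out to `x_i**2`.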
The explainability of complex machine-learning models is becoming increasingly significant in safety-critical domains such as autonomous driving. In this context, counterfactual explanation (CE), as an effective explainability method in explainable artificial intelligence, plays an important role. It aims to identify minimal alterations to input that can change the model’s output, thereby revealing…
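The "minimal alteration" idea behind CE has a closed form for a plain linear classifier, where the nearest counterfactual is the orthogonal projection onto the decision boundary. This is an illustrative sketch under that assumption, not the abstract's method; the weights and input are hypothetical:

```python
import numpy as np

def counterfactual_linear(w, b, x, margin=1e-6):
    """Minimal L2 perturbation that moves x across the linear decision
    boundary w.x + b = 0 (plus a tiny margin so the label actually flips)."""
    score = np.dot(w, x) + b
    delta = -(score + np.sign(score) * margin) / np.dot(w, w) * w
    return x + delta

# Hypothetical classifier and input.
w = np.array([1.0, 1.0])
b = -1.0
x = np.array([2.0, 1.0])          # classified positive: w.x + b = 2
x_cf = counterfactual_linear(w, b, x)
```

The perturbation length equals `|w.x + b| / ||w||`, the distance from `x` to the boundary, so no smaller L2 change could flip the prediction.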
Autonomous driving (AD) augments or replaces the human driver, but it also poses a major challenge to automotive testing and evaluation (T&E) technologies. As AD takes over the driver’s perception, decision-making and control, the T&E procedure for autonomous vehicles (AVs) shifts from independent T&E of the human and the vehicle to integrated T&E of the human-vehicle system. Complex weather, tr…
Artificial intelligence (AI) techniques have been widely implemented in the domain of autonomous vehicles (AVs). However, existing AI techniques, such as deep learning and ensemble learning, have been criticized for their black-box nature. Explainable AI is an effective methodology to understand the black box and build public trust in AVs. In this article, a maximum entropy-based Shapley Additive …
Decision-making for autonomous vehicles is critical to achieving safe and efficient autonomous driving. In recent years, deep reinforcement learning (DRL) techniques have emerged as the most promising way to enable intelligent decision-making. However, the ‘black box’ nature of DRL means it is not widely understood by humans, hindering its social acceptance. In this paper, we combine SHapley Additive …
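The SHapley Additive framework named in the last two abstracts rests on classical Shapley values from cooperative game theory. A brute-force sketch over a small, hypothetical coalition value function (not the papers' DRL models, where the exact enumeration would be intractable) shows the computation:

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n):
    """Exact Shapley values by enumerating every coalition of the other
    players; feasible only for small n."""
    phi = [0.0] * n
    for i in range(n):
        others = [p for p in range(n) if p != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += weight * (value_fn(set(S) | {i}) - value_fn(set(S)))
    return phi

# Hypothetical 3-feature value function: additive contributions of 2 and 1,
# plus an interaction of 1 between features 0 and 1.
def v(S):
    score = 0.0
    if 0 in S:
        score += 2.0
    if 1 in S:
        score += 1.0
    if 0 in S and 1 in S:
        score += 1.0
    return score

phi = shapley_values(v, 3)
```

By symmetry the interaction term splits evenly between features 0 and 1, giving attributions 2.5, 1.5 and 0.0, and by efficiency they sum to `v({0,1,2}) - v(set())`.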
When designing, testing and validating an intelligent agent, assessing its intelligence is essential. Although autonomous vehicles (AVs) have been deployed to some degree, it remains hard to assess their intelligence because it depends heavily on the tested scenarios, yet real-world test scenarios are limited and far from edge cases. Therefore, this paper proposes an intelligence assessment approach…