
Adaptive Trust Model for Multi-Agent Teaming Based on Reinforcement-Learning-Based Fusion



Abstract:

The performance of agents is highly influenced by multiple factors, including ability, decisions, and state. Trust modeling is widely used to boost the performance of multiagent teaming (MAT). However, most existing trust models rely on statistical methods or preset parameters to assess trust values in MAT scenarios. In this article, an adaptive trust model is proposed to evaluate comprehensive trust values based on multiple pieces of evidence from various sources. The proposed trust model leverages information fusion and reinforcement learning (RL) to properly fuse the evidence and generate a trust value for every agent in the MAT. The trust value is then used in an interaction protocol within the MAT to increase the efficiency of cooperation. To verify the performance of the proposed trust model, a ball-collection experiment is designed in which the MAT works cooperatively in simulation environments. Two different scenario settings are used to demonstrate the adaptability and robustness of the proposed trust model, and the results are compared with human-designed fusion methods. The comparison shows that the proposed trust model represents agent performance better, in terms of convergence speed, than the human-designed methods across the different scenario settings.
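To make the fusion idea concrete, the following is a minimal sketch (not the authors' implementation) of how per-agent evidence could be fused into a scalar trust value with learnable weights. The softmax-weighted fusion rule, the reward-driven weight update, and the evidence names are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

class AdaptiveTrustFusion:
    """Toy RL-style fusion of evidence into a trust value (illustrative only)."""

    def __init__(self, n_evidence, lr=0.05):
        self.w = np.zeros(n_evidence)  # learnable fusion logits
        self.lr = lr

    def trust(self, evidence):
        """Fuse evidence scores in [0, 1] via a softmax-weighted average (assumed rule)."""
        alpha = np.exp(self.w) / np.exp(self.w).sum()
        return float(alpha @ evidence)

    def update(self, evidence, reward, baseline=0.0):
        """Shift weight toward evidence sources that co-occur with high team reward
        (a reward-weighted gradient step, loosely in the spirit of policy-gradient RL)."""
        alpha = np.exp(self.w) / np.exp(self.w).sum()
        grad = alpha * (evidence - alpha @ evidence)  # gradient of the fused value w.r.t. the logits
        self.w += self.lr * (reward - baseline) * grad

# Hypothetical usage: three evidence sources per agent, e.g., ability, decision, state.
fusion = AdaptiveTrustFusion(n_evidence=3)
evidence = np.array([0.9, 0.6, 0.8])
trust_value = fusion.trust(evidence)   # trust value fed to the interaction protocol
fusion.update(evidence, reward=1.0)    # adapt the fusion after observing team reward
```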
Page(s): 229 - 239
Date of Publication: 04 October 2023
Electronic ISSN: 2471-285X


I. Introduction

A trust model, as used in multiagent teaming (MAT), is an intelligent reasoning approach that enables humans and autonomous agents to team up, compensate for one another's inadequacies, perform complex tasks, and improve their behavior over time [1], [2]. More specifically, a trust model represents capability, state, and condition based on observation, communication, and control in order to support interagent interaction in multiagent teams. In agent teaming, the core issue is to accurately assess the agents' performance and to effectively coordinate their work. To accomplish tasks in MAT, agents must trust each other so as to protect the interests and welfare of every other individual on the team [3], [4]. Trust is important in these contexts because it directly affects the willingness of people to accept robot-produced information, follow robot suggestions, and thus benefit from the advantages inherent in agent teams [5]; it also provides a possible basis for teaming agents [6], [7], [8]. This importance is recognized not only in the field of robot control [9], [10], [11], but also in cloud computing [12] and cybersecurity [13], [14]. The significance of trust is likewise noted in [4], which connects trust with data selection, cost-effectiveness, resiliency, and computational ability; although that work focuses on fog computing rather than agent teaming, the role of trust in information sharing is the same.

To define trust in the human-computer interaction environment, Hou [9] proposed a model that decomposes the trust of an AI agent into six components: intention, measurability, predictability, agility, communication, and transparency. In similar work [10], the authors add security as a seventh component. Another recent study [11] models agent trust with a three-tier architecture (HST3), which includes a Trust Service that filters permissible information in the bidirectional communication between a swarm of agents and humans. However, these approaches require a foundational understanding of the agents and tasks and do not provide a flexible trust-modeling framework that can adapt to changing circumstances.

This article proposes an adaptive trust model that incorporates reinforcement learning (RL) to learn a flexible architecture capable of adapting to different circumstances. We verify the proposed model in a robot simulation environment with various settings, demonstrating its ability to adapt and to remain robust across diverse scenarios.
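As a concrete illustration of the component-based views in [9] and [10], the short sketch below lists the six components of Hou's model plus the security component from [10] as a simple data structure; the equal-weight aggregation into a single score is our own assumption for illustration and is not prescribed by either work.

```python
from dataclasses import dataclass, fields

@dataclass
class TrustComponents:
    """Trust components from Hou [9], plus security added in [10]."""
    intention: float = 0.0
    measurability: float = 0.0
    predictability: float = 0.0
    agility: float = 0.0
    communication: float = 0.0
    transparency: float = 0.0
    security: float = 0.0  # seventh component, from [10]

    def aggregate(self) -> float:
        """Collapse the components into one score with equal weights (assumed)."""
        values = [getattr(self, f.name) for f in fields(self)]
        return sum(values) / len(values)

# Hypothetical usage: components not yet observed default to 0.0.
print(TrustComponents(intention=0.8, communication=0.9, transparency=0.7).aggregate())
```

In contrast to such fixed, hand-specified decompositions, the model proposed in this article learns how the evidence should be combined.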

