I. Introduction
Trust models have been used in multiagent teaming (MAT) as an intelligent reasoning approach that enables human and autonomous agents to team up, compensate for one another's shortcomings, perform complex tasks, and improve their behavior over time [1], [2]. More specifically, a trust model represents an agent's capability, state, and condition based on observation, communication, and control, thereby addressing interagent interaction in multiagent teams. In agent teaming, the core issue is to accurately assess the agents' performance and effectively coordinate their work. To accomplish tasks in MAT, agents must trust each other in order to protect the interests and welfare of every other individual on the team [3], [4]. Trust is important in these contexts because it directly affects people's willingness to accept robot-produced information, follow robot suggestions, and thus benefit from the advantages inherent in agent teams [5]. Trust also provides a principled basis for forming agent teams [6], [7], [8]. This importance is recognized not only in robot control [9], [10], [11], but also in cloud computing [12] and cybersecurity [13], [14]. The significance of trust is also discussed in [4], which connects trust with data selection, cost-effectiveness, resiliency, and computational capability. Although that work focuses on fog computing rather than agent teaming, trust plays the same role during information sharing.

To define trust in the human-computer interaction environment, Hou [9] proposed a model that decomposes the trust of an AI agent into six components: intention, measurability, predictability, agility, communication, and transparency. In similar work presented in [10], the authors add security as a seventh component. Another recent study [11] models agent trust with a three-tier architecture (HST3). HST3 includes a Trust Service that filters the information permitted to pass in bidirectional communication between a swarm of agents and humans. However, the approaches mentioned above require a priori understanding of the agents and tasks and do not provide a flexible trust-modeling framework that can adapt to changing circumstances.

This article proposes an adaptive trust model that incorporates reinforcement learning (RL) to learn a flexible architecture capable of adapting to different circumstances. We verify the proposed model in a robot simulation environment with various settings, demonstrating its ability to adapt and remain robust across diverse scenarios.
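To give a concrete sense of how learning-based trust updating can work in general, the following is a purely illustrative sketch, not the model developed in this article: it assumes that trust in each teammate is kept as a single scalar in [0, 1] and nudged toward observed task outcomes with a temporal-difference-style rule. The class name, parameters, and outcome signal are hypothetical.

```python
# Illustrative sketch only: a minimal, hypothetical learning-based trust update.
# The class name, parameters, and reward signal are assumptions for exposition
# and are NOT the adaptive trust model proposed in this article.

class TrustEstimator:
    """Maintains a scalar trust value per teammate, updated online."""

    def __init__(self, learning_rate=0.1, initial_trust=0.5):
        self.alpha = learning_rate          # step size of the update
        self.initial_trust = initial_trust  # prior before any interaction
        self.trust = {}                     # teammate id -> trust in [0, 1]

    def update(self, teammate_id, outcome):
        """Move trust toward the observed outcome (TD(0)-style update).

        `outcome` is assumed to be 1.0 for a successfully completed
        delegated task and 0.0 for a failure.
        """
        t = self.trust.get(teammate_id, self.initial_trust)
        t += self.alpha * (outcome - t)     # shift trust toward the outcome
        self.trust[teammate_id] = min(max(t, 0.0), 1.0)
        return self.trust[teammate_id]


# Usage: trust rises after successes and decays after failures.
estimator = TrustEstimator()
for outcome in [1.0, 1.0, 0.0, 1.0]:
    print(estimator.update("robot_1", outcome))
```

Under these assumptions, trust adapts continuously to each teammate's recent behavior rather than being fixed by a predefined decomposition, which is the kind of flexibility the proposed RL-based model aims to provide.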