
Graph Attention Network-Based Multi-Agent Reinforcement Learning for Traffic Signal Priority Control


Abstract:

Reinforcement learning (RL) is widely applied to the problem of adaptive traffic signal control (ATSC). This paper presents an efficient RL algorithm to address ATSC issues. We adopt the decentralized multi-agent advantage actor-critic (A2C) algorithm, where distributed control introduces a new challenge: the environment becomes partially observable for each local agent. To tackle this issue, we incorporate spatial information from neighboring agents using graph attention networks (GAT) to learn collaborative control strategies. Additionally, to better adapt to real-world road environments, we propose a hybrid reward function model. This model preserves traffic efficiency for social vehicles while emphasizing the priority of emergency vehicles (EMVs), thereby reducing their travel time. We numerically evaluate the proposed method in a simulated environment containing 25 intersections. Experimental results demonstrate that the proposed approach offers significant advantages over similar multi-agent ATSC algorithms.
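
As a rough illustration of the GAT-based aggregation described in the abstract, the following sketch (written in PyTorch; the class, variable names, and dimensions are our own assumptions rather than the paper's actual architecture) computes attention coefficients over a local agent's neighbors and returns a weighted spatial context that could augment the agent's partial observation before it is fed to the A2C actor and critic.

# Hedged sketch: GAT-style attention over neighboring intersections' observations.
# Shapes, names, and the single attention head are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeighborAttention(nn.Module):
    def __init__(self, obs_dim: int, hidden_dim: int):
        super().__init__()
        self.proj = nn.Linear(obs_dim, hidden_dim, bias=False)  # shared projection W
        self.attn = nn.Linear(2 * hidden_dim, 1, bias=False)    # scoring vector a

    def forward(self, own_obs: torch.Tensor, neighbor_obs: torch.Tensor) -> torch.Tensor:
        # own_obs: (obs_dim,); neighbor_obs: (num_neighbors, obs_dim)
        h_i = self.proj(own_obs)                                 # (hidden_dim,)
        h_j = self.proj(neighbor_obs)                            # (num_neighbors, hidden_dim)
        pairs = torch.cat([h_i.expand_as(h_j), h_j], dim=-1)     # [W h_i || W h_j]
        scores = F.leaky_relu(self.attn(pairs).squeeze(-1), negative_slope=0.2)
        alpha = F.softmax(scores, dim=0)                         # attention over neighbors
        # Weighted sum of neighbor embeddings: the spatial context for this agent.
        return torch.einsum("n,nd->d", alpha, h_j)

An agent could then, for example, concatenate this context with its own embedded observation before the actor and critic heads, so that its locally partial view is enriched with neighborhood information.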
Date of Conference: 01-03 November 2024
Date Added to IEEE Xplore: 13 February 2025
Conference Location: Qingdao, China

I. Introduction

In recent decades, with the continuous acceleration of urbanization and the sharp increase in the number of vehicles, traffic congestion has become an increasingly prominent issue and a common challenge faced by cities worldwide. Controlling traffic signals is an effective way to improve road efficiency, alleviate traffic congestion, and reduce environmental pollution. Early ATSC systems such as SCOOT [1] and SCATS [2] relied on manually designed timing plans. However, in the face of increasingly complex and dynamic traffic conditions, these methods, which rest on empirical observation, have fallen out of favor. In recent years, the rapid advancement of artificial intelligence has significantly propelled the development of ATSC, which has incorporated various interdisciplinary techniques such as fuzzy logic algorithms [3], genetic algorithms [4], and, most widely used, RL [5]. RL is a trial-and-error learning approach that models the ATSC problem as a Markov decision process, selecting actions based on state information collected from the environment to maximize rewards.
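
To make this Markov decision process framing concrete, below is a minimal, hypothetical single-agent interaction loop. The environment object, its methods, and the reward terms are illustrative placeholders rather than an API from the paper or from a specific simulator; the reward simply mirrors the hybrid idea from the abstract by penalizing queued social vehicles and weighting emergency-vehicle delay more heavily.

# Hedged sketch of the agent-environment loop for one intersection.
# env, its methods, and the weights below are illustrative assumptions.

def hybrid_reward(queue_lengths, emv_waiting_time, emv_weight=5.0):
    # Penalize queued social (ordinary) vehicles; weight EMV delay more heavily.
    return -sum(queue_lengths) - emv_weight * emv_waiting_time

def run_episode(env, policy, num_steps=3600):
    state = env.reset()                      # e.g. queue lengths, current phase, EMV presence
    episode_return = 0.0
    for _ in range(num_steps):
        action = policy(state)               # pick the next signal phase for this intersection
        next_state, info = env.step(action)  # advance the simulation by one control interval
        reward = hybrid_reward(info["queue_lengths"], info["emv_waiting_time"])
        episode_return += reward
        state = next_state
    return episode_return

# In the paper's setting, `policy` would be the GAT-augmented A2C actor; here it
# can be any callable that maps a state to a phase index.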
