
Progression Cognition Reinforcement Learning With Prioritized Experience for Multi-Vehicle Pursuit



Abstract:

Multi-vehicle pursuit (MVP) such as autonomous police vehicles pursuing suspects is important but very challenging due to its mission and safety-critical nature. While multi-agent reinforcement learning (MARL) algorithms have been proposed for MVP in structured grid-pattern roads, the existing algorithms use random training samples in centralized learning, which leads to homogeneous agents showing low collaboration performance. For the more challenging problem of pursuing multiple evaders, these algorithms typically select a fixed target evader for pursuers without considering dynamic traffic situation, which significantly reduces pursuing success rate. To address the above problems, this paper proposes a Progression Cognition Reinforcement Learning with Prioritized Experience for MVP (PEPCRL-MVP) in urban multi-intersection dynamic traffic scenes. PEPCRL-MVP uses a prioritization network to assess the transitions in the global experience replay buffer according to each MARL agent’s parameters. With the personalized and prioritized experience set selected via the prioritization network, diversity is introduced to the MARL learning process, which can improve collaboration and task-related performance. Furthermore, PEPCRL-MVP employs an attention module to extract critical features from dynamic urban traffic environments. These features are used to develop a progression cognition method to adaptively group pursuing vehicles. Each group efficiently targets one evading vehicle. Extensive experiments conducted with a simulator over unstructured roads of an urban area show that PEPCRL-MVP is superior to other state-of-the-art methods. Specifically, PEPCRL-MVP improves pursuing efficiency by 3.95% over Twin Delayed Deep Deterministic policy gradient-Decentralized Multi-Agent Pursuit and its success rate is 34.78% higher than that of Multi-Agent Deep Deterministic Policy Gradient. Codes are open-sourced.
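The abstract describes selecting personalized, prioritized experience sets from a global replay buffer instead of sampling uniformly at random. The paper's prioritization network scores transitions using each agent's parameters; as a hedged illustration only, the sketch below replaces that learned scoring with caller-supplied scalar priorities and samples transitions with probability proportional to priority. The class name and interface are hypothetical, not from the paper.

```python
import random

class PrioritizedReplayBuffer:
    """Minimal prioritized experience replay sketch.

    Stand-in for the paper's prioritization network: priorities are
    supplied by the caller rather than predicted per agent.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.transitions = []   # stored (state, action, reward, next_state) tuples
        self.priorities = []    # one scalar priority per transition

    def add(self, transition, priority):
        # Evict the oldest transition once the buffer is full.
        if len(self.transitions) >= self.capacity:
            self.transitions.pop(0)
            self.priorities.pop(0)
        self.transitions.append(transition)
        self.priorities.append(priority)

    def sample(self, batch_size):
        # Replay higher-priority experiences more often by weighting
        # the sampling distribution with the stored priorities.
        return random.choices(self.transitions,
                              weights=self.priorities,
                              k=batch_size)
```

In the paper's setting, each MARL agent would receive its own prioritized set, introducing the diversity across agents that uniform random sampling lacks.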
Published in: IEEE Transactions on Intelligent Transportation Systems ( Volume: 25, Issue: 8, August 2024)
Page(s): 10035 - 10048
Date of Publication: 26 January 2024

