
Multi-Objective Aerial Collaborative Secure Communication Optimization via Generative Diffusion Model-Enabled Deep Reinforcement Learning



Abstract:

Owing to their flexibility and low cost, unmanned aerial vehicles (UAVs) are increasingly crucial for enhancing the coverage and functionality of wireless networks. However, incorporating UAVs into next-generation wireless communication systems poses significant challenges, particularly in sustaining high-rate and long-range secure communications against eavesdropping attacks. In this work, we consider a UAV swarm-enabled secure surveillance network system, where a UAV swarm forms a virtual antenna array to transmit sensitive surveillance data to a remote base station (RBS) via collaborative beamforming (CB), thereby resisting mobile eavesdroppers. Specifically, we formulate an aerial secure communication and energy efficiency multi-objective optimization problem (ASCEE-MOP) to maximize the secrecy rate of the system and minimize the flight energy consumption of the UAV swarm. To address the non-convex, NP-hard, and dynamic ASCEE-MOP, we propose a generative diffusion model-enabled twin delayed deep deterministic policy gradient (GDMTD3) method. Specifically, GDMTD3 leverages an innovative application of diffusion models to determine the optimal excitation current weights and position decisions of the UAVs. The diffusion models better capture the complex dynamics and trade-offs of the ASCEE-MOP, thereby yielding promising solutions. Simulation results highlight the superior performance of the proposed approach compared with traditional deployment strategies and several deep reinforcement learning (DRL) benchmarks. Moreover, performance analysis under various parameter settings of GDMTD3 and different numbers of UAVs verifies the robustness of the proposed approach.
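The two quantities at the heart of the ASCEE-MOP objective follow standard formulations: the far-field array factor of the virtual antenna array formed by the UAV swarm, and the wiretap secrecy rate, i.e., the positive part of the capacity gap between the legitimate link and the eavesdropper's link. A minimal sketch of both (function names and the example values are illustrative, not the paper's implementation):

```python
import numpy as np

def array_factor(positions, weights, wavelength, direction):
    """Far-field array factor magnitude of a virtual antenna array.

    positions: (N, 3) UAV coordinates in metres
    weights:   (N,) complex excitation current weights
    direction: (3,) unit vector pointing toward the receiver
    """
    k = 2.0 * np.pi / wavelength           # wavenumber
    phases = k * (positions @ direction)   # per-element path phase
    return np.abs(np.sum(weights * np.exp(1j * phases)))

def secrecy_rate(snr_legit, snr_eave):
    """Achievable secrecy rate in bit/s/Hz: capacity of the legitimate
    link minus capacity of the eavesdropping link, floored at zero."""
    return max(0.0, np.log2(1.0 + snr_legit) - np.log2(1.0 + snr_eave))

# Illustrative numbers: co-phased elements add coherently, so four
# unit-weight elements yield an array factor of 4 (a 12 dB power gain).
positions = np.zeros((4, 3))
weights = np.ones(4, dtype=complex)
af = array_factor(positions, weights, wavelength=0.1,
                  direction=np.array([0.0, 0.0, 1.0]))
rs = secrecy_rate(snr_legit=15.0, snr_eave=3.0)  # log2(16) - log2(4) = 2
```

The optimizer in the paper searches jointly over the `weights` and `positions` arguments: the weights shape the beam toward the RBS (raising `snr_legit`) and away from eavesdroppers (lowering `snr_eave`), while the positions additionally determine flight energy consumption, which is the competing objective.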
Published in: IEEE Transactions on Mobile Computing ( Volume: 24, Issue: 4, April 2025)
Page(s): 3041 - 3058
Date of Publication: 20 November 2024


I. Introduction

Unmanned aerial vehicles (UAVs), noted for their flexibility and low cost, have become increasingly pivotal in various sectors, including military surveillance [1], environmental monitoring [2], and emergency response [3]. With the deployment of sixth-generation (6G) wireless networks, UAVs are foreseen to play a crucial role in wireless networks and to serve as key enablers of innovative wireless applications [4]. For instance, UAVs can serve as mobile aerial base stations [5] to provide temporary, on-demand network coverage, which is especially valuable when ground infrastructure is disrupted or network capacity is insufficient to meet demand. Moreover, UAVs can function as aerial relays [6], connecting ground users to distant base stations and extending coverage, particularly in rural and remote areas. Furthermore, UAVs can also access the wireless network as mobile users [7], enabling them to obtain real-time data and support applications such as precision agriculture, aerial goods delivery, and environmental monitoring.
