
Multi-Agent DRL for User Association and Power Control in Terrestrial-Satellite Network



Abstract:

In the past few years, satellite communications have greatly affected our daily lives. Because the resources of the terrestrial-satellite network are limited, allocating them effectively has become a major challenge. We propose a framework for energy-efficiency optimization of the terrestrial-satellite network based on non-orthogonal multiple access (NOMA). In our framework, we adopt a multi-agent deep deterministic policy gradient (MADDPG) method to maximize energy efficiency through joint user association and power control. Simulation results show that the proposed method outperforms the traditional single-agent deep reinforcement learning algorithm and can efficiently solve the user association and power control problems in the integrated terrestrial-satellite network.
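To make the MADDPG structure named in the abstract concrete, the following is a minimal sketch of the centralized-critic / decentralized-actor update, assuming a PyTorch implementation with placeholder observation and action dimensions, synthetic transitions, and a per-agent reward standing in for energy efficiency; it is not the authors' implementation.

```python
# Hedged, illustrative MADDPG-style update for joint user association / power control.
# All dimensions, transitions, and rewards are placeholder assumptions, not the paper's.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM, N_AGENTS = 8, 2, 3   # assumed toy dimensions

class Actor(nn.Module):
    """Maps a local observation to a continuous action (e.g. transmit power in [0, 1])."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, ACT_DIM), nn.Sigmoid())
    def forward(self, obs):
        return self.net(obs)

class CentralCritic(nn.Module):
    """Centralized critic: scores the joint observations and joint actions of all agents."""
    def __init__(self):
        super().__init__()
        joint = N_AGENTS * (OBS_DIM + ACT_DIM)
        self.net = nn.Sequential(nn.Linear(joint, 128), nn.ReLU(),
                                 nn.Linear(128, 1))
    def forward(self, joint_obs, joint_act):
        return self.net(torch.cat([joint_obs, joint_act], dim=-1))

actors = [Actor() for _ in range(N_AGENTS)]
critics = [CentralCritic() for _ in range(N_AGENTS)]
actor_opt = [torch.optim.Adam(a.parameters(), lr=1e-3) for a in actors]
critic_opt = [torch.optim.Adam(c.parameters(), lr=1e-3) for c in critics]

# One illustrative update on a synthetic batch; a real run would sample a replay
# buffer and use slowly updated target networks for the TD target.
B, gamma = 32, 0.95
obs      = torch.randn(B, N_AGENTS, OBS_DIM)
acts     = torch.rand(B, N_AGENTS, ACT_DIM)
rewards  = torch.randn(B, N_AGENTS)          # stand-in for per-agent energy efficiency
next_obs = torch.randn(B, N_AGENTS, OBS_DIM)

joint_obs      = obs.reshape(B, -1)
joint_acts     = acts.reshape(B, -1)
joint_next_obs = next_obs.reshape(B, -1)
with torch.no_grad():
    next_acts = torch.cat([a(next_obs[:, i]) for i, a in enumerate(actors)], dim=-1)

for i in range(N_AGENTS):
    # Critic step: regress Q towards a one-step TD target.
    with torch.no_grad():
        target = rewards[:, i:i + 1] + gamma * critics[i](joint_next_obs, next_acts)
    q = critics[i](joint_obs, joint_acts)
    critic_loss = nn.functional.mse_loss(q, target)
    critic_opt[i].zero_grad(); critic_loss.backward(); critic_opt[i].step()

    # Actor step: ascend the centralized critic w.r.t. this agent's own action only.
    new_acts = joint_acts.clone()
    new_acts[:, i * ACT_DIM:(i + 1) * ACT_DIM] = actors[i](obs[:, i])
    actor_loss = -critics[i](joint_obs, new_acts).mean()
    actor_opt[i].zero_grad(); actor_loss.backward(); actor_opt[i].step()

print(f"last critic loss {critic_loss.item():.3f}, last actor loss {actor_loss.item():.3f}")
```

The key design point, per the MADDPG formulation in [13], is that each critic sees all agents' observations and actions during training, while each actor acts on its local observation only at execution time.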
Date of Conference: 07-11 December 2021
Date Added to IEEE Xplore: 02 February 2022
Conference Location: Madrid, Spain


I. Introduction

In recent years, NOMA schemes have often been applied to integrated terrestrial-satellite networks, which consist of base stations (BSs) on the ground and satellites, and are considered a promising scenario [1]–[3]. As a multiple-access technology, NOMA can improve the total energy efficiency of the system [4]–[6]. In a terrestrial-satellite network [7], BSs provide low-cost communication services, while satellites cover and serve users in underdeveloped areas. Such a system achieves a wider coverage area and better service quality. With the continuous growth of data traffic, one of the main challenges is to allocate resources effectively and improve the system's energy efficiency.
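As a quick illustration of the NOMA rate and energy-efficiency quantities discussed above, below is a minimal sketch of a two-user downlink NOMA pair with successive interference cancellation (SIC); all parameter values (bandwidth, noise density, channel gains, circuit power) are illustrative assumptions, not values from the paper.

```python
# Illustrative two-user downlink NOMA rate and energy-efficiency calculation.
# All numbers below are placeholder assumptions for the sketch.
import math

B = 1e6                         # bandwidth per carrier [Hz] (assumed)
N0 = 1e-17                      # noise power spectral density [W/Hz] (assumed)
h_far, h_near = 1e-11, 1e-10    # channel power gains of far and near user (assumed)
p_far, p_near = 0.8, 0.2        # transmit powers [W]; NOMA gives the weaker channel more power
P_circuit = 0.5                 # fixed circuit power [W] (assumed)

noise = N0 * B
# Far user decodes its own signal, treating the near user's signal as interference.
r_far = B * math.log2(1 + p_far * h_far / (p_near * h_far + noise))
# Near user cancels the far user's signal via SIC, then decodes interference-free.
r_near = B * math.log2(1 + p_near * h_near / noise)

# Energy efficiency: sum rate over total consumed power [bit/J].
energy_efficiency = (r_far + r_near) / (p_far + p_near + P_circuit)
print(f"sum rate = {(r_far + r_near) / 1e6:.2f} Mbit/s, EE = {energy_efficiency / 1e6:.2f} Mbit/J")
```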

References
[1] W. Lu, K. An, T. Liang and X. Yan, "Robust beamforming in multibeam satellite systems with non-orthogonal multiple access", IEEE Wireless Commun. Lett., [online] Available: .
[2] X. Zhu, C. Jiang, L. Kuang, N. Ge and J. Lu, "Non-orthogonal multiple access based integrated terrestrial-satellite networks", IEEE J. Sel. Areas Commun., vol. 35, no. 10, pp. 2253-2267, Oct. 2017.
[3] M. Jia, Q. Gao, Q. Guo, X. Gu and X. Shen, "Power multiplexing NOMA and bandwidth compression for satellite-terrestrial networks", IEEE Trans. Veh. Technol., vol. 68, no. 11, pp. 11107-11117, Nov. 2019.
[4] H. Zhang et al., "Energy efficient dynamic resource optimization in NOMA system", IEEE Trans. Wireless Commun., vol. 17, no. 9, pp. 5671-5683, Sep. 2018.
[5] A. A. Nasir, H. D. Tuan, T. Q. Duong and M. Debbah, "NOMA throughput and energy efficiency in energy harvesting enabled networks", IEEE Trans. Wireless Commun., vol. 67, no. 9, pp. 6499-6511, Sep. 2019.
[6] H. Zhang, H. Zhang, W. Liu, K. Long, J. Dong and V. C. M. Leung, "Energy efficient user clustering, hybrid precoding and power optimization in terahertz MIMO-NOMA systems", IEEE J. Sel. Areas Commun., vol. 38, no. 9, pp. 2074-2085, Sep. 2020.
[7] S. Fu, J. Gao and L. Zhao, "Integrated resource management for terrestrial-satellite systems", IEEE Trans. Veh. Technol., vol. 69, no. 3, pp. 3256-3266, Mar. 2020.
[8] H. Zhang, N. Yang, W. Huangfu, K. Long and V. C. M. Leung, "Power control based on deep reinforcement learning for spectrum sharing", IEEE Trans. Wireless Commun., vol. 19, no. 6, pp. 4209-4219, Jun. 2020.
[9] P. V. R. Ferreira et al., "Reinforcement learning for satellite communications: from LEO to deep space operations", IEEE Commun. Mag., vol. 57, no. 5, pp. 70-75, May 2019.
[10] X. Hu, S. Liu, R. Chen, W. Wang and C. Wang, "A deep reinforcement learning-based framework for dynamic resource allocation in multibeam satellite systems", IEEE Commun. Lett., vol. 22, no. 8, pp. 1612-1615, Aug. 2018.
[11] P. V. R. Ferreira et al., "Multiobjective reinforcement learning for cognitive satellite communications using deep neural network ensembles", IEEE J. Sel. Areas Commun., vol. 36, no. 5, pp. 1030-1041, May 2018.
[12] C. Zhou et al., "Deep reinforcement learning for delay-oriented IoT task scheduling in space-air-ground integrated network", IEEE Trans. Wireless Commun., [online] Available: .
[13] R. Lowe, Y. Wu, A. Tamar, J. Harb, O. P. Abbeel and I. Mordatch, "Multi-agent actor-critic for mixed cooperative-competitive environments", Proc. Adv. Neural Inf. Process. Syst., pp. 6379-6390, 2017.
