Deep Reinforcement Learning Based Admission Control for Throughput Maximization in Mobile Edge Computing


Abstract:

With the development of wireless network technologies such as LTE/5G, Mobile Cloud Computing (MCC) has been proposed as a solution for mobile devices that need to carry out high-complexity computation with limited resources. With MCC, high-complexity computation tasks are offloaded from mobile devices to cloud servers. However, MCC does not work well for time-sensitive mobile applications because of the relatively long latency between mobile devices and cloud servers. Mobile Edge Computing (MEC) is expected to solve this problem: edge servers, instead of cloud servers, are deployed at the edge of the network to provide offloading services to mobile devices. Since edge servers are much closer to mobile devices, the resulting latency is significantly lower. Despite the advantages of MEC over MCC, edge servers are not as resource-abundant as cloud servers. Consequently, when many offloaded tasks arrive at an edge server, admission control is needed to achieve the best performance. In this paper, we propose a Deep Reinforcement Learning (DRL) based admission control scheme, DAC, to maximize the system throughput of an edge server. Our experimental results indicate that DAC outperforms existing admission control schemes for MEC in terms of system throughput.
Date of Conference: 27-30 September 2021
Date Added to IEEE Xplore: 10 December 2021
Conference Location: Norman, OK, USA


I. Introduction

With the wide deployment of mobile devices, e.g., smartphones, the number and variety of mobile applications have grown rapidly in recent years. Nowadays, many mobile applications, such as online games and virtual reality, involve high-complexity computation that requires a large amount of hardware resources, often exceeding the processing capacity of mobile devices [1]–[4]. Even if mobile devices have sufficient resources to execute these applications, their high-complexity nature typically leads to lengthy execution times, which is not energy-friendly to battery-powered mobile devices. To meet the computation requirements of these applications and to reduce the energy consumption of mobile devices, Mobile Cloud Computing (MCC) was proposed. With MCC, high-complexity computation tasks are offloaded from mobile devices to cloud servers; once the cloud servers complete the offloaded tasks, the computation results are returned to the mobile devices.

References

[1] A. Al-Shuwaili and O. Simeone, "Energy-efficient resource allocation for mobile edge computing-based augmented reality applications", IEEE Wireless Communications Letters, vol. 6, no. 3, pp. 398-401, 2017.
[2] L. Liu, Y. Zhou, J. Yuan, W. Zhuang and Y. Wang, "Economically optimal MS association for multimedia content delivery in cache-enabled heterogeneous cloud radio access networks", IEEE Journal on Selected Areas in Communications, vol. 37, no. 7, pp. 1584-1593, 2019.
[3] Y. Zhou, L. Tian, L. Liu and Y. Qi, "Fog computing enabled future mobile communication networks: A convergence of communication and computing", IEEE Communications Magazine, vol. 57, no. 5, pp. 20-27, 2019.
[4] Q. Zhang, M. Lin, L. T. Yang, Z. Chen, S. U. Khan and P. Li, "A double deep Q-learning model for energy-efficient edge scheduling", IEEE Transactions on Services Computing, vol. 12, no. 5, pp. 739-749, 2018.
[5] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, et al., "Continuous control with deep reinforcement learning", arXiv preprint, 2015.
[6] Q. Xia, W. Liang and W. Xu, "Throughput maximization for online request admissions in mobile cloudlets", 38th Annual IEEE Conference on Local Computer Networks, pp. 589-596, 2013.
[7] J. Xu, L. Chen and S. Ren, "Online learning for offloading and autoscaling in energy harvesting mobile edge computing", IEEE Transactions on Cognitive Communications and Networking, vol. 3, no. 3, pp. 361-373, 2017.
[8] M. Hu, L. Zhuang, D. Wu, Y. Zhou, X. Chen and L. Xiao, "Learning driven computation offloading for asymmetrically informed edge computing", IEEE Transactions on Parallel and Distributed Systems, vol. 30, no. 8, pp. 1802-1815, 2019.
[9] X. Deng, J. Li, L. Shi, Z. Wei, X. Zhou and J. Yuan, "Wireless powered mobile edge computing: Dynamic resource allocation and throughput maximization", IEEE Transactions on Mobile Computing, 2020.
[10] Y. Li and S. Wang, "An energy-aware edge server placement algorithm in mobile edge computing", 2018 IEEE International Conference on Edge Computing (EDGE), pp. 66-73, 2018.
[11] X. Wang, Y. Han, V. C. Leung, D. Niyato, X. Yan and X. Chen, "Convergence of edge computing and deep learning: A comprehensive survey", IEEE Communications Surveys & Tutorials, vol. 22, no. 2, pp. 869-904, 2020.
[12] A. Zappone, M. Di Renzo and M. Debbah, "Wireless networks design in the era of deep learning: Model-based, AI-based, or both?", IEEE Transactions on Communications, vol. 67, no. 10, pp. 7331-7376, 2019.
[13] S. Sesia, I. Toufik and M. Baker, LTE - The UMTS Long Term Evolution: From Theory to Practice, John Wiley & Sons, 2011.
