
An Energy Consumption Optimization Strategy for Mobile Edge Networks


Abstract:

Edge computing shifts computational and storage capabilities from centralized cloud computing centers to the edge of the network, allowing edge servers deployed there to respond to user requests directly. This eliminates the need for a user request to traverse the uplink from the edge network to the cloud computing center, and for the computed results to return over the corresponding downlink, significantly reducing transmission time. However, as latency requirements become stricter, meeting them inevitably requires deploying a large number of edge servers, and the resulting energy consumption has become a pressing issue for service providers. Studies have shown that user service requests are unevenly distributed over time, with alternating periods of high and low demand. To cope with this uncertainty and respond quickly, many edge servers remain idle or run at low utilization, consuming significant energy even when doing little work. This paper proposes a centralized, server-controlled edge-device hibernation strategy based on upper and lower load thresholds, aiming to minimize the overall energy consumption of all edge servers in the system while meeting user latency requirements. By dividing time into slots monitored against the two thresholds, the strategy analyzes the load of each edge server in every slot and has the central server assign computing-task migrations, reducing the number of idle and under-utilized edge servers and ultimately minimizing overall system energy consumption.
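The threshold-controlled hibernation described in the abstract can be sketched in a few lines. The following is an illustrative approximation only, not the paper's algorithm: the function name `rebalance`, the greedy drain-to-headroom migration rule, and the load representation are all assumptions. Per time slot, servers below the lower threshold are drained onto servers with headroom below the upper threshold; a server is hibernated only if its load can be fully migrated.

```python
def rebalance(loads, lower, upper):
    """One time slot of threshold-based consolidation (illustrative sketch).

    loads:  dict server_id -> normalized load in [0, 1]; mutated in place.
    lower:  loads below this mark a server as an under-utilized donor.
    upper:  no receiving server is loaded beyond this threshold.
    Returns (migrations, hibernated): a list of (src, dst, amount) moves
    and the list of servers that were fully drained and hibernated.
    """
    migrations, hibernated = [], []
    donors = [s for s, l in loads.items() if l < lower]
    # Drain the least-loaded donors first: they are cheapest to empty.
    for src in sorted(donors, key=lambda s: loads[s]):
        remaining = loads[src]
        plan = []
        # Prefer the most-loaded receivers so load concentrates on few servers.
        for dst in sorted(loads, key=lambda s: -loads[s]):
            if remaining <= 1e-9:
                break
            if dst == src or dst in donors or dst in hibernated:
                continue
            spare = upper - loads[dst]
            if spare <= 1e-9:
                continue
            amount = min(remaining, spare)
            plan.append((dst, amount))
            remaining -= amount
        if remaining <= 1e-9:  # fully drained: commit moves, hibernate src
            for dst, amount in plan:
                loads[dst] += amount
                migrations.append((src, dst, amount))
            loads[src] = 0.0
            hibernated.append(src)
        # otherwise the plan is discarded and src stays awake
    return migrations, hibernated
```

For example, with `loads = {"a": 0.1, "b": 0.5, "c": 0.6}`, `lower=0.2`, and `upper=0.9`, server `a`'s load fits entirely on `c`, so `a` is hibernated; if no receiver has enough headroom, the donor keeps its load and stays on, which preserves the latency constraint the paper imposes.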
Date of Conference: 11-13 August 2023
Date Added to IEEE Xplore: 19 August 2024
Conference Location: Shanghai, China


I. INTRODUCTION

In traditional cloud computing, user-initiated requests travel over an uplink from the edge network through the core network to the cloud computing center, and the processed results are returned to the user over the corresponding downlink. However, with the development of 5G and 6G and rising expectations for quality of service, there is a growing need for low-latency, high-bandwidth application scenarios such as autonomous driving, smart cities, and healthcare [1]. Edge computing has emerged to meet these low-latency demands. However, edge servers have far more limited computing and storage capabilities than cloud computing centers. As services have expanded over time, latency requirements have tightened further, driving large-scale deployment of edge servers to respond quickly to user requests; consequently, the energy consumption problem has become increasingly prominent. The existing research on edge computing includes the following:

