Adaptive Federated Deep Reinforcement Learning for Proactive Content Caching in Edge Computing


Abstract:

With the aggravation of data explosion and backhaul loads on 5G edge networks, it is difficult for the traditional centralized cloud to meet the low latency requirements for content access. Federated learning (FL)-based proactive content caching (FPC) can alleviate this problem by placing content in local caches to achieve fast and repetitive data access while protecting users' privacy. However, due to the non-independent and identically distributed (Non-IID) data across clients and limited edge resources, it is unrealistic for FL to aggregate all participating devices in parallel for model updates and to adopt a fixed iteration frequency in the local training process. To address this issue, we propose a distributed resource-efficient FPC policy to improve content caching efficiency and reduce resource consumption. Through theoretical analysis, we first formulate the FPC problem as a stacked autoencoder (SAE) model loss minimization problem subject to resource constraints. We then propose an adaptive FPC (AFPC) algorithm combined with deep reinforcement learning (DRL), consisting of two mechanisms: client selection and local iteration number decision. Next, we show that when training data are Non-IID, aggregating the model parameters of all participating devices may not be an optimal strategy for improving FL-based content caching efficiency, and it is more meaningful to adopt an adaptive local iteration frequency when resources are limited. Finally, experimental results on three real datasets demonstrate that AFPC can improve cache efficiency by up to 38.4% and 6.84%, and save resources by up to 47.4% and 35.6%, respectively, compared with traditional multi-armed bandit (MAB)-based and FL-based algorithms.
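The abstract's core mechanism, per-round client selection combined with an adaptive local iteration count, can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the function names (`local_train`, `select_clients_and_iters`, `federated_round`) are hypothetical, the model is a single scalar weight rather than the paper's SAE, and the DRL agent is replaced by a simple epsilon-greedy stand-in over per-client value estimates.

```python
import random

def local_train(weights, data, num_iters, lr=0.1):
    # Simplified local training on a scalar "model": each client pulls
    # the weight toward its local data mean for num_iters gradient steps.
    # (Stand-in for the paper's SAE local update; names are hypothetical.)
    w = weights
    for _ in range(num_iters):
        grad = w - sum(data) / len(data)
        w -= lr * grad
    return w

def select_clients_and_iters(q_values, num_select, epsilon=0.1):
    # Epsilon-greedy stand-in for the DRL agent: decide which clients
    # participate this round and how many local iterations each runs.
    clients = list(q_values.keys())
    if random.random() < epsilon:
        chosen = random.sample(clients, num_select)
    else:
        chosen = sorted(clients, key=lambda c: q_values[c], reverse=True)[:num_select]
    # Assumed policy: fewer local iterations for lower-valued
    # (e.g. more resource-constrained or more Non-IID) clients.
    return {c: 2 if q_values[c] < 0.5 else 5 for c in chosen}

def federated_round(global_w, client_data, q_values, num_select):
    # One FL round: plan participation, run local training on the
    # selected subset only, then average the returned models.
    plan = select_clients_and_iters(q_values, num_select)
    updates = [local_train(global_w, client_data[c], iters)
               for c, iters in plan.items()]
    return sum(updates) / len(updates)
```

The point of the sketch is structural: unlike vanilla FedAvg, only a chosen subset of clients trains each round, and the iteration budget per client is itself a decision variable, which is where the paper's resource savings come from.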
Published in: IEEE Transactions on Parallel and Distributed Systems ( Volume: 33, Issue: 12, 01 December 2022)
Page(s): 4767 - 4782
Date of Publication: 26 August 2022


1 Introduction

With the explosive growth of mobile and Internet of Things (IoT) devices on 5G edge networks, the data generated at the edge have increased rapidly, causing a sharp rise in mobile communication traffic and placing a heavy burden on the backhaul link between the local base station (BS) and the Internet [1]. This makes it difficult for traditional cloud computing to meet users' low latency requirements for content access. Accordingly, edge computing (EC) [2], an emerging computation paradigm, is considered a promising solution for pushing computing tasks from the core network to the edge network. Thus, EC-based content caching is regarded as an effective approach to alleviating the backhaul link burden by storing popular files at a local cache entity [3], [4], [5].
