
Split Federated Learning and Reinforcement based Codec Switching in Edge Platform



Abstract:

Recently, Split Federated Learning (SFL) has been proposed in the domain of Artificial Intelligence (AI) & Machine Learning (ML), where AI/ML models are partitioned into two or more sub-networks distributed among clients and servers (i.e., edge/cloud). However, in the SFL approach it is not clear on what basis the AI/ML model should be partitioned between the client and the edge. To mitigate this issue in the partial-offload scenario, we propose an Optimal Split Federated Learning (O-SFL) mechanism that finds an optimal split of a DNN model based on network bandwidth, where no media data is transferred and only the partial output of the model is shared between clients and the edge device. In the full-offload scenario, however, where media data is transferred from clients to the edge, the codec currently used for encoding frames may become unsuitable under network bandwidth fluctuations. To solve this issue, we propose a Reinforcement Learning based Codec Switching (RLCS) mechanism that provides a priori selection of a suitable codec based on current network bandwidth conditions. Our simulations show that O-SFL provides significant improvements over SFL (considering various split points) in total training time, tested over Wi-Fi and LTE networks (as a 5G network is not currently available). We also show the performance of the RLCS mechanism against a traditional fixed video codec mechanism.
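
The abstract does not state the split criterion beyond "based on network bandwidth", so the following is only a minimal sketch of how such a bandwidth-aware split search could look. It assumes hypothetical per-layer client/edge compute times and activation sizes, and picks the split index that minimizes estimated per-batch latency (client compute + upload of the intermediate activation + edge compute). Function and parameter names (pick_split, act_mb, etc.) are illustrative and not taken from the paper.

    def pick_split(input_mb, act_mb, client_ms, edge_ms, bw_mbps):
        """Pick the split index s that minimizes estimated per-batch latency.

        Layers [0, s) run on the client, the rest on the edge; s == 0 is full
        offload (the raw input is uploaded instead of an activation).
        input_mb   -- size of one input batch (MB)
        act_mb[i]  -- size of the activation produced by layer i (MB)
        client_ms / edge_ms -- per-layer compute times (ms) on each side
        bw_mbps    -- current uplink bandwidth estimate (Mbps)
        """
        n = len(act_mb)
        best_split, best_ms = None, float("inf")
        for s in range(n + 1):
            upload_mb = input_mb if s == 0 else act_mb[s - 1]
            latency = (sum(client_ms[:s])                     # client compute
                       + upload_mb * 8.0 / bw_mbps * 1000.0   # transfer time (ms)
                       + sum(edge_ms[s:]))                    # edge compute
            if latency < best_ms:
                best_split, best_ms = s, latency
        return best_split, best_ms

Similarly, RLCS is only described as reinforcement-learning-driven codec selection under bandwidth fluctuations; the toy sketch below uses a tabular Q-learning agent over a discretized bandwidth state, with an assumed codec action set and a reward that the caller would derive from, e.g., delivered quality minus a stall/latency penalty. None of these specific choices are taken from the paper.

    import random

    CODECS = ["H.264", "H.265", "AV1"]      # hypothetical action set
    BW_BINS = [2, 5, 10, 20, 50]            # Mbps thresholds -> discrete state

    def bw_state(bw_mbps):
        # Map a bandwidth measurement to a discrete state index.
        return sum(bw_mbps > t for t in BW_BINS)

    class CodecAgent:
        def __init__(self, alpha=0.1, gamma=0.9, eps=0.1):
            self.q = [[0.0] * len(CODECS) for _ in range(len(BW_BINS) + 1)]
            self.alpha, self.gamma, self.eps = alpha, gamma, eps

        def choose(self, state):
            if random.random() < self.eps:          # explore
                return random.randrange(len(CODECS))
            row = self.q[state]
            return row.index(max(row))              # exploit best-known codec

        def update(self, s, a, reward, s_next):
            # Standard one-step Q-learning update.
            target = reward + self.gamma * max(self.q[s_next])
            self.q[s][a] += self.alpha * (target - self.q[s][a])
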
Date of Conference: 06-08 January 2023
Date Added to IEEE Xplore: 17 February 2023
Conference Location: Las Vegas, NV, USA

I. Introduction

In the era of 5G and beyond, in today's centralized Machine Learning (ML) and Artificial Intelligence (AI) frameworks, training and testing of complex models on large datasets are performed at powerful servers (edge/cloud) [1]. The centralized AI/ML framework relies on high computational capability to update the model parameters, with the dataset transmitted from client devices (such as IoT devices, smartphones, etc.) to the server/edge for training/testing. However, transmitting data or a large dataset from a client device to the edge/cloud server for training/testing is costly [2] in terms of bandwidth and latency, and can pose privacy issues when private or confidential datasets are used.

