
Fairness-Aware Multi-Server Federated Learning Task Delegation Over Wireless Networks



Abstract:

In the rapidly advancing field of federated learning (FL), ensuring efficient FL task delegation while incentivizing FL client participation poses significant challenges, especially in wireless networks where FL participants' coverage is limited. Existing Contract Theory-based methods are designed under the assumption that there is only one FL server in the system (i.e., the monopoly market assumption), which is unrealistic in practice. To address this limitation, we propose the Fairness-Aware Multi-Server FL task delegation approach (FAMuS), a novel framework based on Contract Theory and Lyapunov optimization to jointly address these intricate issues facing wireless multi-server FL networks (WMSFLN). Within a given WMSFLN, a task requester produces multiple FL tasks and delegates them to FL servers, which coordinate the training processes. To ensure fair treatment of FL servers, FAMuS establishes virtual queues to track their previous access to FL tasks, updating them in relation to the resulting FL model performance. The objective is to minimize the time-averaged cost in a WMSFLN, while ensuring all queues remain stable. This is particularly challenging given the incomplete information regarding FL clients' participation cost and the unpredictable nature of the WMSFLN state, which depends on the locations of the mobile clients. Extensive experiments comparing FAMuS against five state-of-the-art approaches based on two real-world datasets demonstrate that it achieves 6.91% higher test accuracy, 27.34% lower cost, and 0.63% higher fairness on average than the best-performing baseline.
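The fairness mechanism described above follows the standard virtual-queue pattern from Lyapunov optimization: a server's queue accrues a fairness credit each round and drains when the server is granted a task, scaled by the realized model performance. The abstract does not give the exact FAMuS update rule, so the following is a hypothetical sketch with made-up names (`target_rate`, `perf_gain`), not the paper's formulation.

```python
def update_virtual_queue(q, task_assigned, perf_gain, target_rate=0.5):
    """One illustrative virtual-queue update in the Lyapunov style.

    q             -- current queue backlog for one FL server
    task_assigned -- whether this server received an FL task this round
    perf_gain     -- performance of the resulting FL model (service term)
    target_rate   -- fairness credit accrued every round (arrival term)

    Keeping the queue stable (bounded over time) forces every server to
    be granted tasks often enough, which is how queue stability encodes
    fair access to FL tasks.
    """
    service = perf_gain if task_assigned else 0.0
    # Classic queueing recursion: Q(t+1) = max(Q(t) - service, 0) + arrival
    return max(q - service, 0.0) + target_rate
```

A server that is repeatedly passed over sees its queue grow linearly, raising the drift penalty of continuing to ignore it; a served server's queue drains back toward the arrival rate.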
Published in: IEEE Transactions on Network Science and Engineering ( Volume: 12, Issue: 2, March-April 2025)
Page(s): 684 - 697
Date of Publication: 28 November 2024



I. Introduction

In the evolving landscape of machine learning (ML), centralized learning approaches have traditionally taken center stage [1]. As data generation scales up and concerns about data privacy become more widespread, such approaches face inherent challenges [2], [3]. A promising way out of this gridlock is federated learning (FL) [4], a decentralized learning paradigm in which devices, from smartphones to industrial IoT sensors, perform localized model training and collaboratively build global ML models. Rather than transmitting raw data, only model updates are uploaded to an FL server for coordination and aggregation, thereby achieving enhanced privacy preservation and the ability to utilize diverse, real-world data sources [5]. This salient feature of FL is further empowered by the development of wireless networks [6], [7], [8]. Today, exploring FL in the context of wireless networks has emerged as an important field of research in areas such as connectivity, scalability, real-time collaboration, and energy efficiency.
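The server-side aggregation step described above is commonly implemented as federated averaging: the server combines the clients' locally trained model parameters, weighted by local dataset size, without ever seeing raw data. The paper does not commit to a specific aggregation rule at this point, so this is a generic sketch of the canonical scheme; the function name and parameters are illustrative.

```python
import numpy as np

def fedavg_aggregate(client_weights, client_sizes):
    """Sketch of federated averaging (FedAvg-style aggregation).

    client_weights -- list of model parameter arrays, one per client,
                      produced by localized training on private data
    client_sizes   -- number of local training samples per client

    Returns the global model as the dataset-size-weighted average of the
    client updates; only these parameter arrays, never raw data, travel
    over the wireless network to the FL server.
    """
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))
```

In practice each round repeats: the server broadcasts the global model, clients train locally for a few epochs, and the server aggregates the returned updates with a rule like the one above.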
