I. Introduction
Split Federated Learning (SFL) is a promising distributed learning technology that combines the strengths of the federated and split learning paradigms. In SFL, each client trains only a part of the full model, called the client-side model, which reduces its computational load. These client-side models are then synchronized to improve convergence speed. Consequently, this approach has gained significant attention, particularly in wireless networks, where mobile devices (MDs) have limited battery and computational resources. However, despite these advantages, SFL introduces additional communication overhead between clients and servers and raises privacy concerns due to the frequent exchange of client-side model outputs and model updates, which are correlated with the raw data. The choice of the cut layer in SFL, which divides the model into client- and server-side models, therefore greatly affects the overall latency and privacy of SFL. Optimizing SFL management to address these challenges remains a complex problem [1], [2].