
User Association and Resource Allocation in Large Language Model Based Mobile Edge Computing System over 6G Wireless Communications



Abstract:

In the rapidly evolving landscape of large language models (LLMs) and mobile edge computing for 6G, efficient service delivery to mobile users with constrained computational resources has become paramount. Addressing this, our paper investigates a collaborative framework for model training in which user data and model adapters are shared with servers to optimize performance. Within this framework, users first update the initial layers of the adapters, keeping the remaining layers frozen, using their local datasets. Once this step is complete, the partially trained parameters are transmitted to the servers. The servers, equipped with more robust computational capabilities, then update the subsequent layers and send the enhanced parameters back to the users. This collaborative training approach ensures that mobile users with limited computational capacity can still benefit from advanced LLM services without bearing the full computational burden. Central to our methodology is the DASHF algorithm, which combines the Dinkelbach algorithm, alternating optimization, semidefinite relaxation (SDR), the Hungarian method, and a fractional programming technique from a recent IEEE JSAC paper [1]. The crux of DASHF is its ability to reformulate the optimization problem as a Quadratically Constrained Quadratic Program (QCQP) via carefully crafted transformations, making it solvable by SDR and the Hungarian algorithm. Through extensive simulations, we demonstrate the effectiveness of the DASHF algorithm, offering significant insights for the advancement of collaborative LLM service deployments.
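The collaborative training loop described above (users update the first adapter layers locally with the rest frozen, then the server updates the subsequent layers and returns them) can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the "adapter" is modeled as a vector of scalar layer weights, the model and loss are hypothetical stand-ins, and `partial_update` simply runs gradient descent on a chosen subset of indices while freezing the rest.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss_and_grads(weights, x, y):
    # Toy "model": the prediction is x scaled by the product of all
    # scalar layer weights; loss is mean squared error.
    pred = x * np.prod(weights)
    loss = 0.5 * np.mean((pred - y) ** 2)
    grads = np.array([
        np.mean((pred - y) * x * np.prod(np.delete(weights, i)))
        for i in range(len(weights))
    ])
    return loss, grads

def partial_update(weights, trainable, x, y, lr=0.05, steps=50):
    # Gradient descent on the trainable layer indices only;
    # all other layers stay frozen, as in the split-training scheme.
    w = weights.copy()
    for _ in range(steps):
        _, g = loss_and_grads(w, x, y)
        w[trainable] -= lr * g[trainable]
    return w

# Synthetic local dataset with ground truth y = 2*x; a 4-layer
# "adapter" split after layer 2 (indices are illustrative).
x = rng.normal(size=64)
y = 2.0 * x
w = np.ones(4) * 1.1
loss_before, _ = loss_and_grads(w, x, y)

# 1) The user trains the first two layers locally (others frozen) ...
w = partial_update(w, [0, 1], x, y)
# 2) ... uploads them; the server trains the remaining layers ...
w = partial_update(w, [2, 3], x, y)
# 3) ... and returns the refined adapter to the user.
loss_after, _ = loss_and_grads(w, x, y)
print(loss_after < loss_before)  # the collaborative round reduces the loss
```

The key point the sketch captures is that each party only computes gradients for its own slice of the adapter, so the device-side cost scales with the number of locally trained layers rather than the full model.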
Date of Conference: 24-27 June 2024
Date Added to IEEE Xplore: 25 September 2024
Conference Location: Singapore, Singapore

I. Introduction

The proliferation of large language models (LLMs) marks a monumental leap in artificial intelligence and natural language processing. These models, with their deep structures and vast parameter counts, offer capabilities that redefine the benchmarks of machine-human interaction for 6G [2]. However, their sheer size and complexity make them difficult to deploy, especially in constrained environments such as mobile devices [3].

