
User Association and Resource Allocation in Large Language Model Based Mobile Edge Computing System over 6G Wireless Communications


Abstract:

In the rapidly evolving landscape of large language models (LLMs) and mobile edge computing for 6G, the need for efficient service delivery to mobile users with constrained computational resources has become paramount. Addressing this, our paper presents a collaborative framework for model training in which user data and model adapters are shared with servers to optimize performance. Within this framework, users initially update the first several layers of the adapters, freezing the remaining layers, using their local datasets. Once this step is complete, the partially trained parameters are transmitted to the servers. The servers, equipped with more robust computational capabilities, then update the subsequent layers and, after this training, send the enhanced parameters back to the users. This collaborative training approach ensures that mobile users with limited computational capacity can still benefit from advanced LLM services without being burdened by exhaustive computation. Central to our methodology is the DASHF algorithm, which encapsulates the Dinkelbach algorithm, alternating optimization, semidefinite relaxation (SDR), the Hungarian method, and a pioneering fractional programming technique from a recent IEEE JSAC paper [1]. The crux of DASHF is its ability to reformulate the optimization problem as a quadratically constrained quadratic program (QCQP) via carefully crafted transformations, making it solvable by SDR and the Hungarian algorithm. Through extensive simulations, we demonstrate the effectiveness of DASHF and offer significant insights for the advancement of collaborative LLM service deployments.
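To make the division of labor concrete, the sketch below illustrates the split adapter-training protocol described above. It is a minimal toy example, not the paper's implementation: the "adapter" is a plain stack of linear layers, the local dataset is synthetic, and the user-to-server exchange is modeled as a state-dict copy.

```python
# Minimal sketch of the split adapter-training protocol (illustrative only).
# The user trains the first K layers with the rest frozen, the server then
# trains the remaining layers, and the result is returned to the user.
import torch
import torch.nn as nn

def make_adapter(num_layers: int = 4, dim: int = 16) -> nn.Sequential:
    # Toy stand-in for an LLM adapter module.
    return nn.Sequential(*[nn.Linear(dim, dim) for _ in range(num_layers)])

def train_split(adapter: nn.Sequential, data, trainable: range, steps: int = 10):
    # Freeze every layer outside `trainable`, then run a few SGD steps.
    for i, layer in enumerate(adapter):
        for p in layer.parameters():
            p.requires_grad = i in trainable
    params = [p for p in adapter.parameters() if p.requires_grad]
    opt = torch.optim.SGD(params, lr=1e-2)
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        for x, y in data:
            opt.zero_grad()
            loss_fn(adapter(x), y).backward()
            opt.step()

# Synthetic local dataset standing in for a user's private data.
data = [(torch.randn(8, 16), torch.randn(8, 16)) for _ in range(4)]
adapter = make_adapter(num_layers=4)

# Step 1: the user updates the first two layers, freezing the rest.
train_split(adapter, data, trainable=range(0, 2))
# Step 2: the partially trained adapter is "sent" to the server, which
# updates the subsequent layers.
server_adapter = make_adapter(num_layers=4)
server_adapter.load_state_dict(adapter.state_dict())
train_split(server_adapter, data, trainable=range(2, 4))
# Step 3: the enhanced parameters are sent back to the user.
adapter.load_state_dict(server_adapter.state_dict())
```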
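The user-association step inside DASHF is handled by the Hungarian method once the continuous variables are fixed. The sketch below shows one plausible instantiation using SciPy's linear_sum_assignment; the cost matrix here is randomly generated for illustration (in the paper it would come from the solved resource-allocation subproblem), and the column-replication trick that lets one server host several users is an assumption, not necessarily the paper's exact construction.

```python
# Illustrative user-to-server association via the Hungarian algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
num_users, num_servers = 6, 3

# cost[i, j]: hypothetical cost (e.g., delay plus weighted energy) of
# associating user i with server j.
cost = rng.uniform(1.0, 5.0, size=(num_users, num_servers))

# Replicate each server column into ceil(users/servers) "slots" so one
# server can serve multiple users in a one-to-one matching.
capacity = -(-num_users // num_servers)          # ceil division
expanded = np.repeat(cost, capacity, axis=1)     # one column per slot

rows, cols = linear_sum_assignment(expanded)     # optimal matching
association = cols // capacity                   # map slots back to servers
for user, server in zip(rows, association):
    print(f"user {user} -> server {server} (cost {cost[user, server]:.2f})")
```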
Date of Conference: 24-27 June 2024
Date Added to IEEE Xplore: 25 September 2024

Conference Location: Singapore, Singapore

I. Introduction

The proliferation of large language models (LLMs) marks a monumental leap in the realms of artificial intelligence and natural language processing. These models, with their deep structures and vast parameter sizes, offer capabilities that redefine the benchmarks of machine-human interactions for 6G [2]. However, the very nature of their size and intricacy means they cannot be effortlessly deployed, especially in constrained environments like mobile devices [3].

References

[1] J. Zhao, L. Qian, and W. Yu, "Human-centric resource allocation in the Metaverse over wireless communications," IEEE Journal on Selected Areas in Communications, vol. 42, no. 3, pp. 514-537, 2024.
[2] P. Gao, J. Han, R. Zhang, Z. Lin, S. Geng, A. Zhou, W. Zhang, P. Lu, C. He, X. Yue et al., "LLaMA-Adapter V2: Parameter-efficient visual instruction model," arXiv preprint, 2023.
[3] R. Zhang, J. Han, A. Zhou, X. Hu, S. Yan, P. Lu et al., "LLaMA-Adapter: Efficient fine-tuning of language models with zero-init attention," arXiv preprint, 2023.
[4] L. Dong, F. Jiang, Y. Peng, K. Wang, K. Yang, C. Pan et al., "LAMBO: Large language model empowered edge intelligence," arXiv preprint, 2023.
[5] Y. Shen, J. Shao, X. Zhang, Z. Lin, H. Pan, D. Li et al., "Large language models empowered autonomous edge AI for connected intelligence," arXiv preprint, 2023.
[6] Z. Hong, X. Qiu, J. Lin, W. Chen, Y. Yu, H. Wang et al., "Intelligence-endogenous management platform for computing and network convergence," IEEE Network, 2023.
[7] T. Guo, S. Guo, J. Wang, X. Tang, and W. Xu, "PromptFL: Let federated participants cooperatively learn prompts instead of models - federated learning in age of foundation model," IEEE Transactions on Mobile Computing, 2023.
[8] B. Lai, J. Wen, J. Kang, H. Du, J. Nie, C. Yi et al., "Resource-efficient generative mobile edge networks in 6G era: Fundamentals, framework and case study," arXiv preprint, 2023.
[9] Q. Zeng, Y. Du, K. Huang, and K. K. Leung, "Energy-efficient resource management for federated edge learning with CPU-GPU heterogeneous computing," IEEE Transactions on Wireless Communications, vol. 20, no. 12, pp. 7947-7962, 2021.
[10] D. Yang, G. Xue, X. Fang, and J. Tang, "Incentive mechanisms for crowdsensing: Crowdsourcing with smartphones," IEEE/ACM Transactions on Networking, vol. 24, no. 3, pp. 1732-1744, 2015.
[11] W. Dinkelbach, "On nonlinear fractional programming," Management Science, vol. 13, no. 7, pp. 492-498, 1967.
[12] L. Qian and J. Zhao, "User association and resource allocation in large language model based mobile edge computing system over wireless communications," arXiv preprint, 2023.
[13] Y. Dai, D. Xu, S. Maharjan, and Y. Zhang, "Joint computation offloading and user association in multi-task mobile edge computing," IEEE Transactions on Vehicular Technology, vol. 67, no. 12, pp. 12313-12325, 2018.
