I. Introduction
The concepts of meta-learning and federated learning are not new, but recent advances in gradient-based optimization have brought them back into the spotlight as promising solutions for fast learning. The authors of [1] investigated a task offloading mechanism based on federated reinforcement learning in mobile edge computing. The algorithm's effectiveness was assessed by evaluating model loss at different learning rates to determine the change in learning loss. However, these algorithms require training from scratch. To avoid training from scratch and obtain customized models for heterogeneous datasets, meta-learning is adopted to overcome the statistical heterogeneity problem encountered in federated learning. In particular, Finn et al. [2] proposed a gradient-based algorithm called Model-Agnostic Meta-Learning (MAML), which directly optimizes learning performance with respect to the model initialization: it learns an initialization from which a few gradient steps on a new task already yield good performance. Building on this idea, the authors of [3] proposed a Meta-Learning-based task offloading algorithm (MELO) that trains a general Deep Neural Network (DNN) across different MEC task scenarios and can quickly adapt to a new task. This algorithm addresses the statistical-heterogeneity limitation of federated learning, but it requires transferring all the data, which leads to the disclosure of personal data.
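To make the MAML idea concrete, the following is a minimal first-order sketch (FOMAML, i.e., dropping the second-order term of full MAML) on toy scalar regression tasks. It is an illustration of the inner-loop adaptation / outer-loop meta-update structure described above, not the implementation from [2] or [3]; the task distribution, function names (`fomaml_step`, `sample_task`), and step sizes `alpha`, `beta` are all assumptions chosen for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def task_loss_grad(w, x, y):
    """Squared-error loss and its gradient for the scalar model y_hat = w * x."""
    err = w * x - y
    return np.mean(err ** 2), 2.0 * np.mean(x * err)

def fomaml_step(w, tasks, alpha=0.05, beta=0.1):
    """One first-order MAML meta-update of the initialization w over a task batch."""
    meta_grad = 0.0
    for x, y in tasks:
        _, g = task_loss_grad(w, x, y)                   # inner gradient at the initialization
        w_adapted = w - alpha * g                        # one inner-loop adaptation step
        _, g_adapted = task_loss_grad(w_adapted, x, y)   # gradient after adaptation
        meta_grad += g_adapted                           # first-order approximation of the meta-gradient
    return w - beta * meta_grad / len(tasks)             # outer-loop update of the initialization

def sample_task():
    """Toy task: regress y = a * x for a slope a drawn around 1.0."""
    a = rng.normal(1.0, 0.3)
    x = rng.uniform(-1.0, 1.0, 20)
    return x, a * x

# Meta-train the initialization over batches of tasks.
w = 0.0
for _ in range(200):
    w = fomaml_step(w, [sample_task() for _ in range(4)])
```

After meta-training, `w` sits near the center of the task distribution, so a single inner gradient step adapts it to any sampled task; this is the "learn to adapt quickly" behavior that MELO exploits for new MEC task scenarios.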