I. Introduction
The rapid development of Deep Neural Networks (DNNs), coupled with the abundance of Open-Source Software (OSS) data, has accelerated the advancement of DNN-based code intelligence models [1]. For example, Feng et al. [2] proposed CodeBERT, a pre-trained code model with an encoder-only Transformer architecture. The model is pre-trained on a large-scale code corpus to learn the semantics of code snippets, providing powerful representations for source-code-related tasks. To better support code generation tasks, Wang et al. [3] proposed CodeT5, a sequence-to-sequence pre-trained model. Meanwhile, a variety of code intelligence tasks have emerged, such as code defect detection [4] and code summarization [5]. Fine-tuning pre-trained models on downstream datasets to improve their performance on specific tasks has become a common paradigm [6].