I. Introduction
Serverless is an emerging cloud paradigm that enables developers to run code in the cloud without managing the underlying resources [1], [2]. Serverless provides a pay-as-you-go pricing model, ensuring that cloud users are charged solely for the execution time and resources actually consumed by their functions, rather than for idle capacity. However, cloud providers must estimate the resource demands of serverless functions and schedule them to support dynamic request loads. Because resource configuration and function scheduling incur delays, resource prediction is necessary to reserve resources in advance when functions are updated [3]. Resource prediction also supports anomaly detection, by comparing monitored resource consumption against predicted values. To improve hardware utilization, multiple function instances are deployed on the same server [4]. However, function instances co-located on the same physical server compete for shared resources (e.g., memory bandwidth and CPU caches) [5]–[8], which leads to interference. According to our experiments in § II, interference can decrease performance (i.e., requests per second) by more than 58%. Such interference adds further complexity to resource prediction.