I. Introduction
Many deep learning systems achieve state-of-the-art recognition performance when the training and testing data are identically distributed. However, neural networks can produce high-confidence predictions even for inputs that are completely unrecognizable or far outside the training distribution [49], which can cause a significant decline in prediction performance or even complete failure. Therefore, detecting out-of-distribution test samples is of great significance for the safe deployment of deep learning in real-world applications. The detection task is to determine whether a given input is In-Distribution (ID) or Out-of-Distribution (OOD). OOD detection has been widely utilized in various domains, including medical diagnosis [45], video self-supervised learning [53], and autonomous driving [6].
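To make the task concrete, OOD detection is commonly cast as thresholding a scalar scoring function; the symbols $S$, $\tau$, and $G_\tau$ below are generic notation introduced here for illustration, not a specific method:
$$
G_\tau(x) =
\begin{cases}
\text{ID}, & S(x) \ge \tau,\\
\text{OOD}, & S(x) < \tau,
\end{cases}
$$
where $S(x)$ may be, for example, the maximum softmax probability of a classifier, and the threshold $\tau$ is typically chosen so that a high fraction (commonly 95\%) of ID samples are correctly retained.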