1. Introduction
Deep neural networks have shown remarkable success in a variety of applications, but their ability to generalize robustly remains a challenging issue in deep learning. While highly complex, well-trained deep neural networks can perform exceptionally well on test data drawn from the same distribution as the training data (in-distribution, ID), their ability to make accurate predictions on inputs that fall outside the training distribution is limited. This poses a significant hurdle to the generalization capability of deep neural network models. In safety-critical applications, it is preferable to detect out-of-distribution (OOD) inputs beforehand rather than relying on the model to make potentially unreliable predictions.