
Enhancing Out-of-Distribution Detection with Multitesting-based Layer-wise Feature Fusion



Abstract:

Deploying machine learning in open environments presents the challenge of encountering diverse test inputs that differ significantly from the training data. These out-of-distribution samples may exhibit shifts in local or global features relative to the training distribution. The machine learning (ML) community has responded with a number of methods aimed at distinguishing anomalous inputs from the original training data. However, the majority of previous studies focus primarily on the output layer or penultimate layer of pre-trained deep neural networks. In this paper, we propose a novel framework, Multitesting-based Layer-wise Out-of-Distribution (OOD) Detection (MLOD), which identifies distributional shifts in test samples at different levels of features through a rigorous multiple-testing procedure. Our approach distinguishes itself from existing methods in that it requires neither modifying the structure of the pre-trained classifier nor fine-tuning it. Through extensive experiments, we demonstrate that the proposed framework integrates seamlessly with any existing distance-based inspection method while efficiently utilizing feature extractors of varying depths. Our scheme effectively enhances out-of-distribution detection performance compared to baseline methods. In particular, MLOD-Fisher achieves the best overall performance: when a KNN detector is trained on CIFAR-10, MLOD-Fisher lowers the average false positive rate (FPR) from 24.09% to 7.47% compared with using only the features of the last layer.
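To make the layer-wise combination concrete, the sketch below illustrates the general idea the abstract describes: compute a distance-based (KNN) score at each feature layer, convert it to an empirical p-value against held-out in-distribution data, and merge the per-layer p-values with Fisher's method. This is a minimal illustration of the technique, not the paper's exact algorithm; the helper names, the choice k=50, and the calibration split are assumptions for the sake of the example.

```python
import numpy as np
from scipy.stats import chi2

def knn_score(feats, id_bank, k=50):
    """Negative distance to the k-th nearest ID feature
    (higher = more in-distribution). Dense pairwise distances;
    fine for a sketch, use an ANN index at scale."""
    d = np.linalg.norm(feats[:, None, :] - id_bank[None, :, :], axis=-1)
    return -np.sort(d, axis=1)[:, k - 1]

def empirical_pvalues(test_scores, calib_scores):
    """p-value = smoothed fraction of held-out ID scores that are
    at or below the test score; small p suggests an atypical input."""
    calib = np.sort(calib_scores)
    ranks = np.searchsorted(calib, test_scores, side="right")
    return (ranks + 1) / (len(calib) + 1)

def fisher_combine(pvals_per_layer):
    """Fisher's method: T = -2 * sum_l log p_l ~ chi2(2L) under the
    ID null; returns the combined p-value per test sample."""
    pvals = np.clip(np.stack(pvals_per_layer, axis=0), 1e-12, 1.0)
    stat = -2.0 * np.log(pvals).sum(axis=0)
    return chi2.sf(stat, df=2 * len(pvals_per_layer))
```

Under this setup, a test input is flagged as OOD when the combined p-value falls below a chosen level, e.g. `fisher_combine([p1, ..., pL]) < 0.05`, where each `p_l` comes from scoring the layer-l features of the test batch and of a held-out ID calibration set.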
Date of Conference: 25-27 June 2024
Date Added to IEEE Xplore: 30 July 2024
Conference Location: Singapore, Singapore

I. Introduction

Many deep learning systems achieve state-of-the-art recognition performance when the training and testing data are identically distributed. However, neural networks can make high-confidence predictions even for inputs that are completely unrecognizable and lie outside the training distribution [49], leading to a significant decline in prediction performance or even complete failure. The detection of out-of-distribution testing samples is therefore of great significance for the safe deployment of deep learning in real-world applications. This detection process determines whether an input is In-Distribution (ID) or Out-of-Distribution (OOD). OOD detection has been widely utilized in various domains, including medical diagnosis [45], video self-supervised learning [53], and autonomous driving [6].
