
Explainable Face Verification via Feature-Guided Gradient Backpropagation


Abstract:

Recent years have witnessed significant advances in face recognition (FR) techniques, with applications impacting people's lives, including in security-sensitive areas. There is a growing need for reliable interpretation of the decisions of such systems. Existing studies relying on various mechanisms have investigated the use of saliency maps as an explanation approach, but each suffers from different limitations. This paper first explores the spatial relationship between a face image and its deep representation via gradient backpropagation. A new explanation approach called Feature-Guided Gradient Backpropagation (FGGB) is then conceived, which provides precise and insightful similarity and dissimilarity saliency maps to explain the “Accept” and “Reject” decisions of an FR system. Extensive visual presentation and quantitative measurement show that FGGB achieves comparable results in similarity maps and superior performance in dissimilarity maps when compared to current state-of-the-art explainable face verification approaches.
Date of Conference: 27-31 May 2024
Date Added to IEEE Xplore: 11 July 2024
Conference Location: Istanbul, Turkiye

I. Introduction

Over the past decades, the accuracy of Face Recognition (FR) systems has been boosted by advanced techniques based on deep convolutional neural networks (DCNNs) [9], [11], [25], [27] and large-scale face datasets [3], [8], [34]. FR technology has become increasingly important, widely used in daily life and even in security-critical applications such as identity checks and access control. However, DCNN-based FR systems often involve complicated and unintuitive decision-making processes, making them difficult to interpret or further improve. To address this problem, significant efforts have been devoted to enhancing the transparency and interpretability of learning-based face recognition systems.
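
To make the gradient-backpropagation idea behind such explanations concrete, the following is a minimal sketch of a vanilla gradient saliency map for face verification: the cosine similarity between the two deep representations is backpropagated onto the probe image's pixels. It assumes a generic PyTorch embedding model (`model` is a placeholder, not the paper's network) and illustrates only the plain gradient baseline, not FGGB's feature-guided weighting.

import torch
import torch.nn.functional as F

def gradient_saliency(model, probe, gallery):
    # Vanilla gradient-backpropagation saliency for face verification.
    # Generic baseline sketch, not the paper's exact FGGB algorithm:
    # it backpropagates the cosine similarity between the two deep
    # representations onto the probe image's pixels.
    #   model   -- any network mapping a face tensor to an embedding (assumed)
    #   probe   -- probe face image, tensor of shape (1, 3, H, W)
    #   gallery -- gallery face image, tensor of shape (1, 3, H, W)
    probe = probe.clone().requires_grad_(True)
    emb_probe = model(probe)                 # deep representation of the probe
    with torch.no_grad():
        emb_gallery = model(gallery)         # reference representation
    score = F.cosine_similarity(emb_probe, emb_gallery)  # verification score
    score.backward()                         # gradients w.r.t. probe pixels
    # Aggregate per-channel gradient magnitudes into a 2-D saliency map.
    saliency = probe.grad.abs().sum(dim=1).squeeze(0)
    return saliency / (saliency.max() + 1e-8)  # normalize to [0, 1]

In practice, the resulting map can be overlaid on the probe face to highlight the regions driving the “Accept” or “Reject” decision; FGGB refines this plain-gradient baseline by guiding the backpropagated gradients with the deep feature representation, as described in the paper.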
