I. Introduction
Face recognition has become a fundamental technology in multiple real-world applications, using pattern recognition-based tools to offer an increased level of security for many use cases. Moreover, face recognition solutions rely on increasingly sophisticated artificial intelligence (AI) techniques, which have shown significantly improved performance in recent years. However, these performance gains often come with higher complexity and with systems that are harder to understand and explain; consequently, modern AI-based face recognition systems are sometimes referred to as ‘black-box’ systems. This raises the risk that AI-based facial recognition technology will not be trusted if its results cannot be explained, at least minimally, especially since this technology also carries privacy risks, a very sensitive societal issue. Thus, facial recognition explainability has become a pivotal step toward understanding the behavior of AI-based facial recognition systems and increasing trust in this type of technology. Explainability can be the decisive factor enabling the use of face recognition systems that comply with the European initiative entitled "Artificial Intelligence Act" [1], in which most biometric recognition systems are considered "high-risk AI" and the requirements for their authorization will include transparency and the provision of information to the user.