1 Introduction
Recently, with the emergence of deep convolutional neural networks (CNNs) [1], [2], [3], [4], [5], the research focus of face recognition (FR) has shifted to deep-learning-based approaches [6], [7], [8], and accuracy has been dramatically boosted to above 99.80% on the Labeled Faces in the Wild (LFW) dataset [9]. However, recognition accuracy is not the only aspect to attend to when designing learning algorithms. As a growing number of FR-based applications have been integrated into our lives, the potential for unfairness is raising alarm. For example, Amazon’s Rekognition tool incorrectly matched the photos of 28 U.S. members of Congress with the faces of criminals, with an error rate of up to 39% for Black faces. According to these reports [10], [11], FR systems appear to discriminate based on attributes such as race, exhibiting significantly different accuracy when applied to different demographic groups. Such bias can result in the mistreatment of certain groups, either by exposing them to a higher risk of fraud or by making access to services more difficult. Consequently, there is an increasing need to guarantee fairness in automatic systems and to prevent discriminatory decisions.