I. Introduction
Automated face recognition has attracted increasing attention and is now widely used in multiple application areas, ranging from access control and surveillance to commerce and entertainment [1]–[3]. However, the widespread use of face recognition applications raises new security concerns [4], [5], making robustness against presentation attacks a very active field of research [6]–[8].

The security of a biometric recognition system can be compromised at different architectural points, from the presentation of the biometric trait to the final recognition decision [7], [9]. Attacks on a face recognition system can be broadly divided into indirect and direct attacks [10]. Indirect attacks are performed inside the recognition system, e.g., to bypass the feature extractor or the comparator, or to tamper with the template database. Direct attacks, also referred to as presentation or spoofing attacks, are performed outside the biometric system by presenting falsified data, or artefacts [7], to the acquisition sensors using Presentation Attack Instruments (PAIs), for instance printed photos or electronic devices displaying a face.

While the recognition system's robustness against indirect attacks can be increased using conventional protection mechanisms, such as data encryption and intrusion prevention and detection [6], it is also critical to incorporate into recognition systems efficient image analysis solutions that detect presentation attacks. In this context, this paper addresses the problem of presentation attacks. The most common types of attacks involve: i) a face printed on paper, possibly wrapped to simulate the curvature of a human face; ii) a face image or video displayed on the screen of a portable device such as a laptop, tablet, or mobile phone; and iii) 3D masks of various types, notably rigid masks and flexible silicone masks.