1. Introduction
Face recognition has been widely deployed in various application scenarios, such as access control, phone unlocking, and mobile payment, owing to its convenience and outstanding performance [15, 2, 3]. However, face recognition is susceptible to Presentation Attacks (PAs), such as high-resolution photos and videos of an authorized user [56, 10, 1, 20, 24]. Therefore, face Presentation Attack Detection (PAD) technology [32], which identifies whether a face presented to the system is bona fide (live) or a PA, plays an important role in securing face recognition from PAs [28]. These PA detectors are often built using authentic biometric data [22], raising ethical and legal challenges. Such challenges have recently been discussed in face recognition [7, 47], face morphing attack detection [13, 51], and face PAD [21]. A series of competitions on face PAD based on authentic data [46] was previously held, as well as a competition targeting face morphing attack detection based on privacy-friendly synthetic training data [34]. However, this is the first competition targeting PAs on face recognition while limiting its development data to synthetic data.
Given legal privacy regulations, the collection, use, sharing, and maintenance of face data for biometric processing is extremely challenging [11]. For example, several large-scale face recognition datasets [9, 27, 38] were withdrawn by their creators, with privacy and proper subject consent issues being the main reason. One of the main solutions to this issue is the use of synthetic data [11]. This approach has very recently and successfully been proposed for the training of face recognition [47, 5, 6] and morphing attack detection [13, 34, 18, 12], among other processes such as model quantization [4, 40].
Furthermore, a recent work followed this motivation and took advantage of synthetic data to develop PADs in a privacy-friendly manner [21], proving the usability of synthetic data for the development of face PADs. The underlying assumption is that learning to distinguish bona fide from attack samples of synthetic origin transfers to distinguishing authentic bona fide samples from attacks, and thus a PAD can be trained without authentic private data.