I. Introduction
Deep Neural Networks (DNNs) are becoming ubiquitous in security-critical applications that deliver automated decisions, such as face recognition, self-driving cars, and malware detection [47], [48], [50]. Consequently, several security concerns have emerged regarding potential vulnerabilities of the DNN algorithms themselves [2]. In particular, adversaries can deliberately craft special inputs, named adversarial examples (AEs), that lead models to produce outputs serving the adversary's malicious intent, such as misclassification.