I. Introduction
In recent years, Deep Neural Networks (DNNs) have achieved great success in various domains, such as image classification [1], [2], face recognition [3], [4], object detection [5], [6], [7], and autonomous driving [8], [9]. However, DNNs are known to be vulnerable to adversarial examples [10], [11], [12], [13], [14], which are crafted by adding imperceptible perturbations to clean images. Adversarial examples pose severe threats to black-box security-sensitive applications, such as face recognition systems [15] and autonomous vehicles [16], due to their transferability, i.e., adversarial examples generated on a surrogate model can directly mislead unknown target models [17], [18], [19], [20], [21].
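To make the notions of imperceptible perturbation and transferability concrete, the following is a minimal sketch (not the method developed in this work) of a transfer-based attack: a one-step FGSM perturbation is crafted on a white-box surrogate model and then evaluated on a separate, unqueried target model. The choice of torchvision ResNet-18 as surrogate, ResNet-50 as target, the perturbation budget eps = 8/255, and the placeholder input are illustrative assumptions.

```python
# Illustrative FGSM transfer sketch; models, eps, and input are assumptions.
import torch
import torch.nn.functional as F
from torchvision import models

surrogate = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()  # white-box surrogate
target = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()     # unseen black-box target

def fgsm_transfer(x, y, eps=8 / 255):
    # Craft the perturbation using gradients of the surrogate only.
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(surrogate(x), y)
    loss.backward()
    # One-step sign update, clipped to the valid image range [0, 1].
    x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()
    # Transferability check: the target model is never queried during crafting.
    fooled = target(x_adv).argmax(dim=1) != y
    return x_adv, fooled

# Placeholder input; a real evaluation would use a correctly preprocessed image.
x = torch.rand(1, 3, 224, 224)
y = torch.tensor([207])  # hypothetical ground-truth class index
_, fooled = fgsm_transfer(x, y)
print("target model fooled:", bool(fooled.item()))
```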