I. Introduction
Deep learning has achieved remarkable performance in a wide spectrum of challenging vision applications, such as autonomous driving [1], [2], face recognition [3], [4], person re-identification [5], [6], and computer-aided diagnosis [7], [8]. However, recent works [9], [10] have shown that deep models lack robustness and are highly vulnerable to adversarial examples. For example, given an input image, an adversarial attack on a target model crafts small perturbations of this image to fool the model, and the resulting adversarial example is misclassified with very high confidence. Adversarial examples therefore reveal serious risks in deploying deep learning models in many real-world applications. Recently, extensive efforts have been devoted to the tasks of adversarial attack and defense, in order to better assess and improve the robustness of deep models.
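As a minimal sketch of the attack setting described above (the notation here is a common convention, not taken from this paper), crafting an adversarial perturbation for a classifier can be posed as a constrained optimization problem:

```latex
% x: clean input image, y: its ground-truth label, f: target model
% \delta: adversarial perturbation, \epsilon: perturbation budget (assumed notation)
\begin{equation*}
  \max_{\delta} \; \mathcal{L}\bigl(f(x + \delta),\, y\bigr)
  \quad \text{s.t.} \quad \|\delta\|_{p} \le \epsilon ,
\end{equation*}
```

where $\mathcal{L}$ is the classification loss and the $\ell_{p}$-norm constraint keeps the perturbation small, often imperceptible to humans, while still causing the model to misclassify $x + \delta$.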