I. Introduction
Deep Neural Network (DNN) based classification methods have been extensively developed and are now widely deployed in Computer Vision (CV) systems across many real-world applications [1]. However, the robustness and security of these DNN-based classification models are challenged by the existence of Adversarial Examples (AEs) [2]. AEs are typically generated by adding carefully designed perturbations to clean image samples; the perturbations are usually imperceptible to human eyes, yet they can mislead DNN classifiers, driving their accuracy to almost zero [3].
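To make the idea of a "carefully designed perturbation" concrete, the sketch below illustrates the fast gradient sign method (FGSM), one standard way such perturbations are crafted. This is an illustrative assumption, not the specific attack studied here: a toy binary linear classifier stands in for a DNN, and the step size `eps` is chosen for demonstration only.

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """One FGSM step against a binary linear classifier sign(w @ x + b).

    The true label y is in {-1, +1}; the attack moves x in the direction
    that increases the loss, pushing the input toward misclassification.
    (Toy linear model used for illustration only.)
    """
    # For the loss L = -y * (w @ x + b), the gradient w.r.t. x is -y * w.
    grad = -y * w
    # FGSM perturbs each coordinate by eps times the sign of the gradient.
    return x + eps * np.sign(grad)

# Toy example: a clean point correctly classified as +1.
w = np.array([1.0, 2.0])
b = 0.0
x = np.array([0.5, 0.5])            # w @ x + b = 1.5 > 0, predicted +1
x_adv = fgsm_perturb(x, w, b, y=+1, eps=1.0)
# w @ x_adv + b = -1.5 < 0, so the perturbed input is now misclassified as -1
```

For a high-dimensional image, the same per-pixel step of size `eps` can flip the prediction while remaining visually negligible, which is why such perturbations are typically imperceptible to human observers.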