1. Introduction
In the past few years, we have witnessed the great success of deep neural networks (DNNs) in a variety of computer vision tasks, such as image classification, object detection, and scene segmentation. However, recent studies have revealed that DNN-based models are vulnerable to adversarial examples, even when the magnitude of the added perturbations is small. In safety-sensitive scenarios, such as autonomous driving [18] and medical diagnosis [19], adversarial inputs can force a machine learning system to produce erroneous decisions, leading to unexpected and potentially dangerous situations.