I. Introduction and Related Work
Goodfellow et al. [1] and Szegedy et al. [2] first identified the risk of adversarial attacks: small, carefully crafted perturbations (often imperceptible to humans) that are added to the input of state-of-the-art (SOTA) deep neural networks (DNNs). Without specific DNN training or mitigation measures, these attacks cause SOTA DNNs and convolutional neural networks (CNNs) to produce wrong outputs with high confidence. This inherent vulnerability of DNNs poses an especially high risk when they are applied in autonomous driving, facial recognition, or medical domains.
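To make this threat concrete, consider the fast gradient sign method (FGSM, cf. [1]) as an illustrative sketch; the symbols $x$, $y$, $\theta$, $J$, and $\epsilon$ are introduced here only for this example. FGSM perturbs a clean input $x$ with ground-truth label $y$ as
$$x_{\mathrm{adv}} = x + \epsilon \cdot \mathrm{sign}\big(\nabla_{x} J(\theta, x, y)\big),$$
where $\theta$ denotes the network parameters, $J$ the training loss, and $\epsilon$ a small bound on the perturbation magnitude, so that $x_{\mathrm{adv}}$ remains visually close to $x$ while often changing the network's prediction with high confidence.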