I. Introduction
Deep learning (DL), at the heart of the current rise of artificial intelligence, has been widely adopted across many application domains to solve a broad range of real-life problems. With rapid progress in developing and deploying DL models, DL is fast reaching the maturity required for safety-critical and security-sensitive applications such as autonomous driving [1], surveillance [2], malware detection [3], robotics [4], and speech recognition [5]. However, Szegedy et al. [6] discovered, in the context of image classification, that machine learning (ML) models are vulnerable to adversarial attacks. Such attacks are typically instantiated through adversarial examples: carefully crafted inputs, formed by adding perturbations imperceptible to humans, that can easily mislead a trained classifier into making incorrect predictions, potentially with catastrophic security and safety consequences.
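For concreteness, this threat admits a standard formalization; the notation below is a generic sketch for illustration and is not drawn from the cited works. Given a classifier $f$ and a correctly classified input $x$ with $f(x) = y$, the adversary seeks a perturbation $\delta$ such that
\[
f(x + \delta) \neq y \quad \text{subject to} \quad \|\delta\|_p \leq \epsilon,
\]
where the $\ell_p$-norm budget $\epsilon$ is chosen small enough that the adversarial example $x + \delta$ remains perceptually indistinguishable from $x$.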