I. Introduction
In recent years, Deep Neural Networks (DNNs) have become increasingly popular and successful in a variety of signal and image processing applications. Although these state-of-the-art DNNs are widely used for tasks such as detection, segmentation, and classification, the presence of specially crafted adversarial perturbations poses a major challenge to their robust and accurate operation. Such perturbed inputs, known as adversarial examples, have attracted considerable research interest aimed at reducing the vulnerability of DNNs and thereby improving overall system performance.