I. Introduction
Deep neural networks (DNNs) are a crucial component of the artificial intelligence (AI) landscape due to their ability to perform complex tasks such as modulation recognition [1], [2], wireless signal classification [3], [4], network intrusion detection and defense [5], [6], [7], object detection [8], [9], object tracking [10], [11], image classification [12], [13], [14], language translation [15], [16], and many more [17], [18], [19]. The availability of advanced hardware, such as GPUs, TPUs, and NPUs, has facilitated the training of DNNs and made them a popular research direction in AI [20], [21]. However, despite their strong learning ability, DNNs are susceptible to adversarial attacks, such as the classical Projected Gradient Descent (PGD) attack [22], the Square attack [23], and the Carlini & Wagner (C&W) attack [24]. These attacks exploit a model's sensitivity to small, carefully crafted perturbations of the input data, causing the DNN to produce incorrect predictions. Adversarial attacks therefore pose a serious challenge to the robustness of DNNs and demand proactive attention and mitigation.
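To make the threat concrete, the sketch below outlines an L-infinity PGD attack in the spirit of [22]: starting from a randomly perturbed copy of the input, it repeatedly steps along the sign of the loss gradient and projects the result back onto the epsilon-ball around the original input. This is a minimal illustration only; the use of PyTorch, the function name pgd_attack, and the hyperparameter values (eps, alpha, steps) are assumptions for exposition rather than the setup evaluated in this paper.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # Illustrative L-infinity PGD sketch; hyperparameters are assumed values.
    x_adv = x.detach() + torch.empty_like(x).uniform_(-eps, eps)  # random start inside the eps-ball
    x_adv = x_adv.clamp(0.0, 1.0)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)               # loss the attacker tries to increase
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()          # signed gradient ascent step
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps) # project back onto the eps-ball around x
        x_adv = x_adv.clamp(0.0, 1.0)                         # keep inputs in the valid pixel range
    return x_adv.detach()

The small perturbation budget eps keeps the adversarial example visually close to the original input while still driving the model toward an incorrect prediction.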