
Adversarial Attacks and Defenses in Machine Learning-Empowered Communication Systems and Networks: A Contemporary Survey


Abstract:

Adversarial attacks and defenses in machine learning and deep neural networks (DNNs) have been gaining significant attention due to the rapidly growing applications of deep learning in communication networks. This survey provides a comprehensive overview of recent advancements in adversarial attack and defense techniques, with a focus on DNN-based classification models for communication applications. Specifically, we conduct a comprehensive classification of recent adversarial attack methods and state-of-the-art adversarial defense techniques based on attack principles, and summarize them in tables and tree diagrams. This classification rests on a rigorous evaluation of the existing works, including an analysis of their strengths and limitations. We also categorize the defense methods into counter-attack detection and robustness enhancement, with a specific focus on regularization-based methods for enhancing robustness. New avenues of attack are also explored, including search-based, decision-based, drop-based, and physical-world attacks, and a hierarchical classification of the latest defense methods is provided, highlighting the challenges of balancing training costs with performance, maintaining clean accuracy, overcoming the effect of gradient masking, and ensuring method transferability. Finally, lessons learned and open challenges are summarized, and future research opportunities are recommended.
Published in: IEEE Communications Surveys & Tutorials (Volume: 25, Issue: 4, Fourth Quarter 2023)
Page(s): 2245 - 2298
Date of Publication: 26 September 2023


I. Introduction

Deep neural networks (DNNs) are a crucial component of the artificial intelligence (AI) landscape due to their ability to perform complex tasks, such as modulation recognition [1], [2], wireless signal classification [3], [4], network intrusion detection and defense [5], [6], [7], object detection [8], [9], object tracking [10], [11], image classification [12], [13], [14], language translation [15], [16], and many more [17], [18], [19]. The availability of advanced hardware, such as GPUs, TPUs, and NPUs, has facilitated the training of DNNs and made them a popular research direction in AI [20], [21]. However, despite their strong learning ability, DNNs are susceptible to adversarial attacks, such as the classical Projected Gradient Descent (PGD) [22], Square [23], and C&W [24] attacks. These attacks exploit the model’s sensitivity to small, carefully crafted perturbations in the input data, causing the DNN to produce incorrect predictions. Adversarial attacks represent a serious challenge to the robustness of DNNs and require proactive attention and action to mitigate the risks they pose.
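To make the attack idea concrete, the following is a minimal, illustrative sketch of PGD (not taken from the survey): the input is perturbed iteratively in the sign direction of the loss gradient, then projected back into an L-infinity ball of radius eps around the clean input. The logistic-regression victim model and all parameter values (eps, alpha, step count) are assumptions chosen so the example is self-contained with an analytic gradient; attacks on real DNNs compute the input gradient via backpropagation instead.

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.3, alpha=0.02, steps=20):
    """PGD sketch: maximize cross-entropy loss of a logistic-regression
    model within an L-infinity ball of radius `eps` around `x`."""
    x_adv = x.copy()
    for _ in range(steps):
        # Forward pass: sigmoid probability of class 1.
        z = x_adv @ w + b
        p = 1.0 / (1.0 + np.exp(-z))
        # Gradient of the cross-entropy loss w.r.t. the input: (p - y) * w.
        grad = (p - y) * w
        # Gradient-ascent step in the sign direction...
        x_adv = x_adv + alpha * np.sign(grad)
        # ...then project back into the eps-ball around the clean input.
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

# Toy example: a confidently classified point (class 1) is perturbed.
w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([0.5, -0.5]), 1.0
x_adv = pgd_attack(x, y, w, b)
print("clean logit:", x @ w + b, "adversarial logit:", x_adv @ w + b)
```

Running this, the adversarial logit is strictly lower than the clean logit (the perturbation pushes the input toward misclassification) while every coordinate of `x_adv` stays within eps of `x`, which is exactly the constrained loss-maximization that PGD performs.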
