
HAT: Hybrid Adversarial Training to Make Robust Deep Learning Classifiers



Abstract:

Deep learning has become state of the art in many real-world applications. However, recent studies show that deep learning models are susceptible to adversarial attacks: carefully crafted perturbed inputs that fool the models. Because an adversarial attack can easily mislead a classifier, it poses a threat to deep learning models deployed in real-world applications. Our work explores various adversarial attacks and defenses available in the literature. We find that existing defense strategies show good results on greyscale images such as MNIST and FMNIST, but their robustness degrades sharply on RGB images such as the CIFAR10 dataset. Moreover, a model's robustness depends strongly on the type of adversarial examples on which it is trained. We devise a defense technique based on adversarial training, called Hybrid Adversarial Training (HAT). During training, HAT augments the training data with adversarial examples crafted by combining the DeepFool and FGSM attacks, increasing the robustness of deep learning models against a variety of attacks within a stipulated amount of time. We empirically evaluate HAT against cutting-edge adversarial attacks on several benchmark datasets. Our model outperforms existing defense models in both robustness and training time, and can withstand strong adversarial attacks on CIFAR10, a benchmark RGB image dataset. HAT shows 15% higher robustness than existing defenses while also maintaining the natural accuracy of the classifiers.
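To make the attack side of the abstract concrete, the following is a minimal sketch of an FGSM-style perturbation, one of the two attacks HAT combines. It uses a toy logistic-regression classifier rather than a deep network, and the weights, inputs, and epsilon below are illustrative assumptions, not values from the paper; the paper's method additionally mixes in DeepFool examples, which are not shown here.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """FGSM step: x_adv = x + eps * sign(grad_x L(f(x), y)),
    for f(x) = sigmoid(w.x + b) with binary cross-entropy loss L."""
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w  # d(BCE)/dx for this linear model
    return x + eps * np.sign(grad_x)

# Toy example (assumed values): a point correctly classified as class 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])   # w.x + b = 1.5, so p > 0.5
y = 1.0

x_adv = fgsm_perturb(x, y, w, b, eps=0.9)
# The perturbation moves x in the direction that increases the loss,
# lowering the model's confidence in the true class; in adversarial
# training, (x_adv, y) would be fed back into the training set.
```

With a large enough epsilon, as here, the perturbed input crosses the decision boundary and is misclassified, which is exactly the kind of example an adversarially trained model must learn to resist.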
Date of Conference: 23-25 March 2022
Date Added to IEEE Xplore: 02 May 2022
ISBN Information:
Conference Location: New Delhi, India

I. Introduction

Deep learning is the state of the art in modern artificial intelligence. As a result, it has become a strong candidate for solving complex learning problems across a broad spectrum of domains, especially image classification [1]. Deep learning models are extensively used in day-to-day tasks: in health care systems, deep learning is used to predict diseases [2] and suggest medicines, while in the stock market, deep learning models perform price prediction. In computer vision, deep learning is applied to self-driving cars and security surveillance with outstanding performance. However, recent studies show that deep learning models are susceptible to adversarial attacks. The robustness of deep learning models against adversarial attacks is a critical issue and an active area of research.

