Review on Image Processing Based Adversarial Example Defenses in Computer Vision


Abstract:

Recent research has shown that deep neural networks (DNNs) are vulnerable to adversarial examples, which are typically crafted by adding deliberate, imperceptible perturbations to clean examples. Several state-of-the-art defense methods have been proposed based on existing image processing techniques such as image compression and image denoising. However, such approaches are not a final, optimal solution for defending DNN models against adversarial perturbations. In this paper, we review two main approaches to deploying image processing methods as a defense. By analyzing and discussing the remaining issues, we present two open questions for future research: the definition of adversarial perturbations versus ordinary noise, and the design of a defense-aware threat model. A further research direction is suggested by rethinking the impact of adversarial perturbations across all frequency bands.
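The compression-based family of defenses mentioned above can be summarized in a few lines. The sketch below (in Python with Pillow, neither of which the paper names) round-trips an input through lossy JPEG encoding before classification, on the assumption that the lossy step discards small, high-frequency adversarial perturbations; the function name and quality setting are illustrative choices, not values from the paper.

```python
# Hypothetical sketch of a JPEG-compression preprocessing defense.
# Assumption: the classifier consumes the returned PIL image; quality=75
# is an illustrative setting, not one taken from the reviewed paper.
import io

from PIL import Image

def jpeg_defense(image: Image.Image, quality: int = 75) -> Image.Image:
    """Re-encode the image as lossy JPEG to squash small perturbations."""
    buf = io.BytesIO()
    image.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    # Decoding the compressed bytes yields the "cleaned" input that is
    # then fed to the classifier in place of the raw input.
    return Image.open(buf).convert("RGB")
```

A denoising-based defense follows the same preprocess-then-classify pattern, with the JPEG round-trip replaced by a denoising filter.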
Date of Conference: 25-27 May 2020
Date Added to IEEE Xplore: 23 June 2020
Conference Location: Baltimore, MD, USA

I. Introduction

Nowadays, Deep Neural Network (DNN) based classification methods have been well developed and widely deployed in Computer Vision (CV) systems across many real-world applications [1]. However, the robustness and security of DNN-based classification models are challenged by the existence of Adversarial Examples (AEs) [2]. AEs are typically generated by adding carefully designed perturbations that are imperceptible to human eyes compared with clean image samples, yet can mislead DNN classifiers, reducing their accuracy to almost zero [3].
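To make the mechanism concrete, the sketch below implements the one-step fast gradient sign method (FGSM) of Goodfellow et al., one common way such perturbations are crafted; the paper does not prescribe this attack or PyTorch, and `model`, `x`, `y`, and `epsilon` are hypothetical placeholders.

```python
# Hypothetical FGSM sketch: perturb each pixel by +/- epsilon in the
# direction that increases the classification loss.
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 8 / 255) -> torch.Tensor:
    """Craft an adversarial example with a single signed-gradient step."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x_adv), y)
    loss.backward()
    # Step along the sign of the input gradient, then clamp back to the
    # valid [0, 1] image range so the change stays small.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

With epsilon as small as 8/255, the perturbed image is typically indistinguishable from the original to a human observer, yet it can flip the classifier's prediction.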
