
SAGE: Steering the Adversarial Generation of Examples With Accelerations


Abstract:

To generate image adversarial examples, state-of-the-art black-box attacks usually require thousands of queries. However, massive queries introduce additional costs and exposure risks in the real world. To improve attack efficiency, we design an acceleration framework, SAGE, for existing black-box methods, composed of sLocator (initial-point optimization) and sRudder (search-process optimization). The core ideas of SAGE are that 1) a saliency map can guide perturbations toward the most adversarial direction, and 2) a bounding box (bbox) can capture those salient pixels during a black-box attack. We also provide a series of observations and experiments demonstrating that the bbox exhibits model invariance and process invariance. We extensively evaluate SAGE on four state-of-the-art black-box attacks across three popular datasets (MNIST, CIFAR10, and ImageNet). The results show that SAGE delivers fundamental improvements even against robust models trained with adversarial training. Specifically, SAGE reduces queries by more than 20% and improves attack success rates to 95%-100%. Compared with other acceleration frameworks, SAGE achieves a more significant effect in a flexible, stable, and low-overhead manner. Moreover, a practical evaluation against the Google Cloud Vision API shows that SAGE can be applied to real-world scenarios.
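As a rough illustration of the bbox-guided idea described above, the following NumPy sketch (not the authors' implementation; `salient_bbox`, `bbox_masked_perturbation`, `keep_ratio`, and `eps` are hypothetical names and values) assumes a saliency map is already available, derives a box around the most salient pixels, and confines a random query perturbation of a black-box attack to that region.

```python
import numpy as np

def salient_bbox(saliency, keep_ratio=0.1):
    """Return (y0, y1, x0, x1) of the smallest box covering the
    top `keep_ratio` fraction of saliency values."""
    h, w = saliency.shape
    k = max(1, int(keep_ratio * h * w))
    thresh = np.partition(saliency.ravel(), -k)[-k]
    ys, xs = np.where(saliency >= thresh)
    return ys.min(), ys.max() + 1, xs.min(), xs.max() + 1

def bbox_masked_perturbation(shape, bbox, eps, rng):
    """Random sign perturbation confined to the bounding box."""
    y0, y1, x0, x1 = bbox
    delta = np.zeros(shape, dtype=np.float32)
    delta[y0:y1, x0:x1] = eps * rng.choice([-1.0, 1.0], size=(y1 - y0, x1 - x0))
    return delta

# Toy usage: a fake 32x32 grayscale image and a stand-in saliency map.
rng = np.random.default_rng(0)
image = rng.random((32, 32)).astype(np.float32)
saliency = rng.random((32, 32))  # in practice, a real saliency map for `image`
bbox = salient_bbox(saliency, keep_ratio=0.05)
candidate = np.clip(image + bbox_masked_perturbation(image.shape, bbox, eps=0.03, rng=rng), 0.0, 1.0)
# `candidate` would then be submitted as the next query of the black-box attack.
```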
Page(s): 789 - 803
Date of Publication: 02 February 2023


I. Introduction

Deep Neural Networks (DNNs) are becoming ubiquitous in security-critical applications, delivering automated decisions for face recognition, self-driving cars, malware detection, and more [47], [48], [50]. Consequently, several security concerns have emerged regarding potential vulnerabilities of the DNN algorithms themselves [2]. In particular, adversaries can deliberately craft special inputs, named adversarial examples (AEs), that lead models to produce outputs serving the adversaries' malicious intentions, such as misclassification.
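For concreteness, the sketch below crafts an AE with the classic white-box Fast Gradient Sign Method (FGSM) against a toy stand-in classifier. It only illustrates the AE concept; it is not the black-box setting or the acceleration method studied in this paper, and the model, input, and `eps` value are placeholders.

```python
import torch
import torch.nn as nn

# A tiny stand-in classifier; in practice this would be the target DNN.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model.eval()

def fgsm_example(x, label, eps=0.1):
    """Craft a white-box adversarial example with the Fast Gradient Sign Method."""
    x = x.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    # Step in the direction that most increases the loss, then clamp to valid pixels.
    return torch.clamp(x + eps * x.grad.sign(), 0.0, 1.0).detach()

x = torch.rand(1, 1, 28, 28)       # placeholder input image
label = torch.tensor([3])          # its (assumed) true label
x_adv = fgsm_example(x, label)
print((x_adv - x).abs().max())     # perturbation is bounded by eps
```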

