
Digital-to-Physical Visual Consistency Optimization for Adversarial Patch Generation in Remote Sensing Scenes



Abstract:

In contrast to digital image adversarial attacks, adversarial patch attacks involve physical operations that project crafted perturbations into real-world scenarios. During the digital-to-physical transition, adversarial patches inevitably undergo information distortion. Existing approaches focus on data augmentation and printer color gamut regularization to improve the generalization of adversarial patches to the physical world. However, these efforts overlook a critical issue within the adversarial patch crafting pipeline—namely, the significant disparity between the appearance of adversarial patches during the digital optimization phase and their manifestation in the physical world. This unexplored concern, termed “digital-to-physical visual inconsistency,” introduces inconsistent objectives between the digital and physical realms, potentially skewing optimization directions for adversarial patches. To tackle this challenge, we propose a novel harmonization-based adversarial patch attack. Our approach involves the design of a self-supervised harmonization method, seamlessly integrated into the adversarial patch generation pipeline. This integration aligns the appearance of adversarial patches overlaid on digital images with the imaging environment of the background, ensuring a consistent optimization direction with the primary physical attack goal. We validate our method through extensive testing on the aerial object detection task. To enhance the controllability of environmental factors for method evaluation, we construct a dataset of 3-D simulated scenarios using a graphics rendering engine. Extensive experiments on these scenarios demonstrate the efficacy of our approach. Our code and dataset are publicly accessible at https://github.com/WindVChen/VCO-AP.
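As a hedged illustration of the pipeline the abstract describes (a minimal sketch under assumed interfaces, not the authors' released implementation at the repository above), the harmonization step can sit between pasting the patch onto the digital image and querying the detector, so that the patch is optimized under an appearance consistent with the background's imaging conditions. The detector, the harmonize module, the fixed paste location, and all hyperparameters below are illustrative placeholders.

import torch

def optimize_patch(detector, harmonize, images, steps=500, lr=0.01, size=64):
    """Hedged sketch of harmonization-aware adversarial patch optimization.

    detector  - differentiable detector assumed to return per-image objectness scores
    harmonize - module adapting the pasted patch to the background's imaging
                conditions (stand-in for the paper's self-supervised harmonization)
    images    - background batch of shape (B, 3, H, W) with values in [0, 1]
    All names and hyperparameters are illustrative assumptions.
    """
    patch = torch.rand(1, 3, size, size, requires_grad=True)
    optimizer = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        # Paste the current patch into a fixed region of every image
        # (a real pipeline would place it on the targeted objects).
        patched = images.clone()
        patched[:, :, :size, :size] = patch.clamp(0, 1)
        # Harmonize so the pasted patch matches the background's appearance,
        # keeping the digital optimization consistent with the physical goal.
        patched = harmonize(patched, images)
        # Attack objective: suppress detection confidence on the scene.
        loss = detector(patched).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return patch.detach().clamp(0, 1)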
Article Sequence Number: 5623017
Date of Publication: 07 May 2024


I. Introduction

Deep learning models have exhibited remarkable performance in various domains, such as medical image analysis [1], [2], remote sensing image recognition [3], [4], denoising [5], and captioning [6], [7], [8]. However, Szegedy et al. [9] discovered that these high-performing models can be vulnerable to carefully crafted perturbations, leading to erroneous predictions. This vulnerability has sparked concerns about the robustness of artificial intelligence systems.
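To make the notion of a carefully crafted perturbation concrete, the following PyTorch-style sketch implements the one-step fast gradient sign method (FGSM), a standard digital attack not specific to this article; the model, inputs, labels, and epsilon value are illustrative placeholders.

import torch

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """One-step FGSM: perturb x in the direction that increases the loss.

    model, x (input batch), y (true labels), and epsilon are illustrative
    placeholders, not values taken from this article.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Move each pixel by +/- epsilon along the sign of the gradient,
    # then clamp back to the valid image range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()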
