
Call White Black: Enhanced Image-Scaling Attack in Industrial Artificial Intelligence Systems



Abstract:

The increasing prevalence of deep neural networks (DNNs) in industrial artificial intelligence systems (IAISs) promotes the development of industrial automation. However, the growing deployment of DNNs also exposes them to various attacks. Recent studies have shown that the data preprocessing stage of DNNs is vulnerable to image-scaling attacks. Such attacks craft an attack image that looks like a given source image but becomes a different target image after being scaled to the target size. The attack images generated by existing image-scaling attacks are easily perceptible to the human visual system, which significantly degrades the attack's stealthiness. In this paper, we investigate the image-scaling attack from the perspective of signal processing. We find that the root cause of the weak deceiving effect of existing attack images lies in the additional high-frequency signals introduced during their construction. We therefore propose an enhanced image-scaling attack (EIS), which employs adversarial images crafted from the source (“clean”) images as the target images. These adversarial images preserve the “clean” pixel information of the source images, thereby significantly mitigating the emergence of additional high-frequency signals in the attack images. Specifically, we consider three realistic threat models covering the training and inference phases of deep models, and correspondingly design three strategies tailored to generate adversarial images with vicious patterns. These patterns are then integrated into the attack images, which can mislead a model with the target input size after the necessary scaling operation. Extensive experiments validate the superior performance of the proposed image-scaling attack compared to the original one.
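To make the attack mechanism described above concrete, the following is a minimal sketch of the basic image-scaling attack in the spirit of prior work, not the EIS method proposed here: it overwrites only the source pixels that a toy nearest-neighbour downscaler samples, so the full-resolution image still looks like the source while its downscaled version reproduces the target. The resizer, image sizes, and function names are illustrative assumptions; real scaling libraries use different sampling rules.

```python
# Minimal sketch of the classic image-scaling attack idea (NOT the paper's EIS).
# Assumption: the victim pipeline downscales with a plain nearest-neighbour rule
# that reads input pixel floor(i * H / h) for output row i.
import numpy as np

def nearest_downscale(img, h, w):
    """Toy nearest-neighbour resize of an HxWxC image to hxwxC."""
    H, W = img.shape[:2]
    rows = (np.arange(h) * H) // h
    cols = (np.arange(w) * W) // w
    return img[np.ix_(rows, cols)]

def craft_attack_image(source, target):
    """Overwrite only the pixels the downscaler samples, so the attack image
    still resembles `source` while downscaling exactly to `target`."""
    H, W = source.shape[:2]
    h, w = target.shape[:2]
    rows = (np.arange(h) * H) // h
    cols = (np.arange(w) * W) // w
    attack = source.copy()
    attack[np.ix_(rows, cols)] = target
    return attack

# Usage: an 896x896 "clean-looking" image that becomes a 224x224 target image.
source = np.random.randint(0, 256, (896, 896, 3), dtype=np.uint8)
target = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)
attack = craft_attack_image(source, target)
assert np.array_equal(nearest_downscale(attack, 224, 224), target)
```

The sparsely overwritten pixels in such an attack image are precisely the additional high-frequency signal the paper identifies as the cause of visible artifacts, which motivates EIS's choice of target images that stay close to the source's own pixel values.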
Published in: IEEE Transactions on Industrial Informatics (Volume: 20, Issue: 4, April 2024)
Page(s): 6222 - 6233
Date of Publication: 01 January 2024


I. Introduction

Industrial artificial intelligence systems (IAISs), including various smart devices with deployed AI techniques, can make decisions in many industrial scenarios without human intervention [1], [2], [3]. The most popular AI technique is deep learning, and deep neural networks (DNNs), as its powerful framework, are becoming the mainstream choice for diverse industrial tasks. For example, DNN-based recognition models can be deployed for the detection of conveyor belt idling, safety rope wearing, personnel intrusion, etc. Despite their remarkable processing capabilities, prior studies have revealed that DNNs are vulnerable to various adversarial attacks, which can mislead models' predictions by manipulating input images with crafted vicious patterns. Adversarial attacks, which generally include adversarial examples, backdoor attacks, and poisoning attacks, have raised great concerns in DNN-based applications. Especially in security-sensitive industrial scenarios, potential adversarial attacks may lead to severe accidents or economic losses [4].
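As a concrete illustration of how such vicious patterns can be crafted (a generic sketch, not the scheme proposed in this paper), the fast gradient sign method (FGSM) perturbs an input image within a small L-infinity budget so that a classifier's prediction changes; the model, input tensor shapes, and budget below are placeholder assumptions.

```python
# Hedged FGSM sketch: one gradient-sign step within an L-infinity budget `eps`.
# The model and epsilon value are placeholders, not the paper's configuration.
import torch
import torch.nn.functional as F

def fgsm_example(model, image, label, eps=8 / 255):
    """image: 1xCxHxW float tensor in [0, 1]; label: 1-element long tensor."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, then clip to valid pixels.
    adv = image + eps * image.grad.sign()
    return adv.clamp(0, 1).detach()
```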

