Abstract:
The increasing prevalence of deep neural networks (DNNs) in industrial artificial intelligence systems (IAISs) promotes the development of industrial automation. However, the growing deployment of DNNs also exposes them to various attacks. Recent studies have shown that the data preprocessing pipeline of DNNs is vulnerable to image-scaling attacks. Such attacks craft an attack image that looks like a given source image but becomes a different target image after being scaled to the target size. However, the attack images generated by existing image-scaling attacks are easily perceptible to the human visual system, which significantly degrades the attack's stealthiness. In this paper, we investigate image-scaling attacks from a signal-processing perspective. We find that the root cause of the weak deceiving effect of existing attack images lies in the additional high-frequency signals introduced during their construction. We therefore propose an enhanced image-scaling attack (EIS), which employs adversarial images crafted from the source (“clean”) images as the target images. These adversarial images preserve the “clean” pixel information of the source images, thereby significantly mitigating the emergence of additional high-frequency signals in the attack images. Specifically, we consider three realistic threat models covering the training and inference phases of deep models. Correspondingly, we design three strategies tailored to generate adversarial images with malicious patterns. These patterns are then embedded into the attack images, which can mislead a model with the target input size after the necessary scaling operation. Extensive experiments validate the superior performance of the proposed attack compared to the original image-scaling attack.
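To make the attack class concrete, the following minimal sketch illustrates the basic image-scaling attack described above, targeting a simple nearest-neighbor downscaler: because such a scaler reads only a sparse subset of source pixels, overwriting exactly those pixels yields an image that still looks like the source at full resolution but becomes the target after scaling. This is an illustrative reconstruction of the general attack principle, not the paper's EIS method; the helper names (`nn_scale`, `craft_attack_image`), array shapes, and the choice of nearest-neighbor scaling are assumptions for the demo.

```python
# Illustrative sketch of a basic image-scaling attack against
# nearest-neighbor downscaling. NOT the paper's EIS method; the
# scaling rule, shapes, and function names are assumptions.
import numpy as np

def nn_scale(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Nearest-neighbor downscaling: each output pixel copies one source pixel."""
    in_h, in_w = img.shape[:2]
    rows = np.arange(out_h) * in_h // out_h
    cols = np.arange(out_w) * in_w // out_w
    return img[rows[:, None], cols[None, :]]

def craft_attack_image(source: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Overwrite only the pixels the scaler samples, so the result looks
    like `source` at full resolution but scales exactly to `target`."""
    out_h, out_w = target.shape[:2]
    in_h, in_w = source.shape[:2]
    rows = np.arange(out_h) * in_h // out_h
    cols = np.arange(out_w) * in_w // out_w
    attack = source.copy()
    attack[rows[:, None], cols[None, :]] = target
    return attack

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    source = rng.integers(0, 256, (512, 512, 3), dtype=np.uint8)  # "clean" image
    target = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)    # malicious content
    attack = craft_attack_image(source, target)
    # After downscaling, the attack image is exactly the target...
    assert np.array_equal(nn_scale(attack, 64, 64), target)
    # ...while only ~1/64 of the full-resolution pixel positions changed.
    print(f"fraction of entries modified: {np.mean(attack != source):.4f}")
```

Note that the overwritten pixels are precisely the abrupt, localized changes that introduce the additional high-frequency signals the paper identifies as the root cause of poor stealthiness; EIS mitigates this by using adversarial images derived from the source image itself as the targets.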
Published in: IEEE Transactions on Industrial Informatics (Volume 20, Issue 4, April 2024)
Index Terms: Artificial Intelligence Industry, Deep Neural Network, Deep Models, Training Phase, Target Image, Clear Image, Input Size, Source Images, Target Size, Human Visual System, Process Perspective, Threat Model, Inference Phase, Point Values, Objective Evaluation, Effective Imaging, Output Image, Distance Metrics, Target Model, Adversarial Examples, Attack Success Rate, Adversarial Attacks, Fast Gradient Sign Method, Attack Strategy, Scaling Algorithm, Attack Performance, Spectrum Mapping, Ratio Scale, Attack Methods