I. Introduction
Industrial artificial intelligence systems (IAISs), including various smart devices with deployed AI techniques, can make decisions in many industrial scenarios without human intervention [1], [2], [3]. Deep learning is currently the most popular AI technique, and deep neural networks (DNNs), as its most powerful framework, are becoming the mainstream choice for diverse industrial tasks. For example, DNN-based recognition models can be deployed to detect conveyor belt idling, safety rope wearing, personnel intrusion, etc. Despite their remarkable processing capabilities, prior studies have revealed that DNNs are vulnerable to various adversarial attacks, which can mislead a model's predictions by manipulating input images with carefully crafted malicious patterns. Adversarial attacks, which generally include adversarial examples, backdoor attacks, and poisoning attacks, have raised great concerns in DNN-based applications. Especially in security-sensitive industrial scenarios, potential adversarial attacks may lead to severe accidents or economic losses [4].
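To make the attack mechanism concrete, the following is a minimal toy sketch (not the setup or threat model of this paper) of an adversarial-example attack in the style of the fast gradient sign method: each input feature is nudged by a small epsilon in the direction of the loss gradient's sign, which can flip the prediction of a simple hypothetical linear classifier while keeping the perturbation small.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical toy linear "model": p(y=1 | x) = sigmoid(w . x + b).
random.seed(0)
w = [random.gauss(0, 1) for _ in range(16)]
b = 0.0

def predict(x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# A clean input that the model confidently assigns to class 1.
x = [0.2 * (1 if wi > 0 else -1) for wi in w]

# For cross-entropy loss with true label y = 1, the gradient of the loss
# w.r.t. the input is (p - 1) * w; the attack steps in its sign direction.
p = predict(x)
grad = [(p - 1.0) * wi for wi in w]
eps = 0.25
x_adv = [xi + eps * (1 if gi > 0 else -1) for xi, gi in zip(x, grad)]

# The per-feature perturbation is bounded by eps, yet the prediction flips.
print(predict(x) > 0.5, predict(x_adv) < 0.5)  # True True
```

The same sign-of-gradient step, applied per pixel to an image, is what makes such perturbations visually inconspicuous while still misleading the model.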