IEEE Xplore Search Results

Showing 1-25 of 26,632 results

Results

In the field of text recognition models, existing adversarial attack algorithms mainly target white-box scenarios and are usually limited to single-color backgrounds and short text. However, because ID card images have complex color textures and long text, these methods perform poorly when attacking ID card text data. To overcome these challenges, this...
The robustness of Deep Neural Networks (DNNs) against adversarial attacks is an important topic in the area of deep learning. To fully investigate the robustness of DNNs, this study examines four frequently used white-box adversarial attack techniques, namely the Fast Gradient Sign Method (FGSM), Projected Gradient Descent (PGD), the Basic Iterative Method (BIM), and DeepFool, and their effects on DNN mo...
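For reference, the four white-box attacks named in this abstract follow well-known update rules. The sketch below shows FGSM and PGD (BIM is the same loop without the random start); `model`, `x`, and `y` are placeholder names for a classifier, an input batch in [0, 1], and integer labels, not code from the paper:

```python
# Minimal PyTorch sketch of FGSM and PGD; illustrative only.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    # One-step attack: x_adv = x + eps * sign(grad_x loss(model(x), y)).
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def pgd(model, x, y, eps, alpha, steps=10):
    # Iterated FGSM steps projected back into the L-inf eps-ball around x.
    # BIM is this loop without the random start (with alpha = eps / steps);
    # DeepFool instead steps toward the nearest linearized decision boundary.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        grad, = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)
        x_adv = x_adv + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()
```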
Adversarial attacks in Time Series Forecasting (TSF) have become a topic of growing interest in recent years. However, most previously proposed black-box attack methods against TSF required a vast number of queries to the target model to ensure effective attack performance. In our approach, we aim to learn from adversarial examples and predict the sensitive locations within the data. Specifically,...
Deep neural networks (DNNs) play key roles in various artificial intelligence applications such as image classification and object recognition. However, a growing number of studies have shown that there exist adversarial examples in DNNs, which are almost imperceptibly different from the original samples but can greatly change the output of DNNs. Recently, many white-box attack algorithms have bee...
This paper presents a novel black- and white-box method for diagnosing and reducing the abnormal noise of hub permanent-magnet synchronous motors (HPMSMs). The method is divided into three steps. In the first step, a black-box method is used to identify and diagnose all of the abnormal noise sources and corresponding working conditions, including rotational speeds and loading ...
Research in facial privacy has so far focused on preventing race, age, and gender, which are classifiable and compliant biometric attributes, from being gleaned from a human's facial image. Noticeable distortions, morphing, and face-swapping are some of the techniques that have been researched to restore consumers' privacy. By fooling face recognition models, these techniques cater superficially to the ...
The application of Artificial Intelligence (AI) and Machine Learning (ML) to cybersecurity challenges has gained traction in industry and academia, partially as a result of widespread malware attacks on critical systems such as cloud infrastructures and government institutions. Intrusion Detection Systems (IDS), using some forms of AI, have received widespread adoption due to their ability to hand...
Vision Transformers (ViTs) have demonstrated remarkable performance in computer vision. However, they are still susceptible to adversarial examples. In this paper, we propose a novel adversarial attack method tailored for ViTs, leveraging the inherent permutation invariance of ViTs to generate highly transferable adversarial examples. Specifically, we split the image into patches of different sc...
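The abstract is truncated before the method details, but the underlying idea of exploiting patch-permutation invariance can be illustrated as below. This is a single-scale simplification of my own, with placeholder `model`, `x`, `y`, not the paper's implementation:

```python
# Sketch: average input gradients over randomly permuted patch layouts,
# so the resulting perturbation is less tied to one spatial arrangement.
import torch
import torch.nn.functional as F

def permute_patches(x, patch=16):
    # Randomly shuffle non-overlapping patch x patch tiles of an image batch.
    b, c, h, w = x.shape
    gh, gw = h // patch, w // patch
    tiles = x.unfold(2, patch, patch).unfold(3, patch, patch)  # b,c,gh,gw,p,p
    tiles = tiles.reshape(b, c, gh * gw, patch, patch)
    tiles = tiles[:, :, torch.randperm(gh * gw, device=x.device)]
    tiles = tiles.reshape(b, c, gh, gw, patch, patch)
    return tiles.permute(0, 1, 2, 4, 3, 5).reshape(b, c, h, w)

def permuted_grad(model, x, y, n_perms=5):
    # Gradients that survive permutation tend to transfer better.
    x = x.clone().detach().requires_grad_(True)
    loss = sum(F.cross_entropy(model(permute_patches(x)), y)
               for _ in range(n_perms))
    loss.backward()
    return x.grad.sign()
```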
The design space exploration in black-box systems, which lack complete mechanistic models, has wide applications in scientific and industrial research. Bayesian optimization (BO) can obtain high-quality solutions with limited evaluations, which is particularly suitable for expensive black-box optimization problems. However, existing research indicates that BO methods face challenges in high-dimens...
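As background on the loop this abstract refers to, a minimal BO sketch (generic Gaussian-process surrogate with expected improvement on the unit cube, not the paper's high-dimensional method) might look like this:

```python
# Minimal Bayesian optimization: fit a GP to observed evaluations, pick the
# next point by maximizing expected improvement over a random candidate pool.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def bayes_opt(f, d=2, n_init=5, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.random((n_init, d))                  # initial design in [0, 1]^d
    y = np.array([f(x) for x in X])
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    for _ in range(n_iter):
        gp.fit(X, y)
        cand = rng.random((2048, d))             # cheap candidate pool
        mu, sd = gp.predict(cand, return_std=True)
        z = (y.min() - mu) / np.maximum(sd, 1e-9)
        ei = (y.min() - mu) * norm.cdf(z) + sd * norm.pdf(z)  # expected improvement
        x_next = cand[ei.argmax()]
        X = np.vstack([X, x_next])
        y = np.append(y, f(x_next))
    return X[y.argmin()], y.min()

# e.g. bayes_opt(lambda x: np.sum((x - 0.3) ** 2))
```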
Black-box adversarial attacks can be categorized into transfer-based and query-based attacks. The former usually has poor transfer performance due to the mismatch between model architectures, while the latter requires massive queries and high-dimensional optimization variables. To solve these problems, we propose a novel attack framework integrating the advantages ...
The vulnerability of deep neural networks to adversarial examples has raised serious concerns about the security of these algorithms. Black-box adversarial attacks have received much attention as an influential method for evaluating model robustness. While various sophisticated adversarial attack methods have been proposed, the success rate in the black-box scenario still needs to be improved. To...
To generate image adversarial examples, state-of-the-art black-box attacks usually require thousands of queries. However, massive queries introduce additional costs and exposure risks in the real world. To improve attack efficiency, we carefully design an acceleration framework, SAGE, for existing black-box methods, which is composed of sLocator (initial point optimization) and sRudd...
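The abstract is cut off before it details SAGE's components, but the kind of query-based loop such frameworks accelerate can be sketched in the style of SimBA; `prob_fn` is a hypothetical one-query-per-call interface to the victim model, not part of SAGE:

```python
# Generic query-based black-box attack: probe one random coordinate at a
# time (up to two queries per step) and keep steps that lower the
# probability of the true label.
import numpy as np

def simba(prob_fn, x, label, eps=0.05, max_steps=1000, seed=0):
    rng = np.random.default_rng(seed)
    x_adv = x.copy()
    p = prob_fn(x_adv)[label]
    for _ in range(max_steps):
        i = rng.integers(x.size)                 # random coordinate to perturb
        for sign in (+1.0, -1.0):
            cand = x_adv.copy()
            cand.flat[i] = np.clip(cand.flat[i] + sign * eps, 0.0, 1.0)
            p_new = prob_fn(cand)[label]
            if p_new < p:                        # keep the step if it helps
                x_adv, p = cand, p_new
                break
    return x_adv
```

Acceleration frameworks of the kind described typically cut the number of loop iterations by choosing a better starting point and better probe directions than uniform random choice.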
With the increasing deployment of machine learning models across various domains, ensuring AI security has become a critical concern. Model evasion, a specific area of concern, involves attackers manipulating a model's predictions by perturbing the input data. The Fast Gradient Sign Method (FGSM) is a well-known technique for model evasion, typically used in white-box settings where the attacker h...
Specific emitter identification (SEI) plays an integral role in network security. In recent years, deep neural networks (DNNs) have demonstrated significant success in various application scenarios. The robust feature extraction capabilities of DNNs have led to advancements in SEI. However, it has been shown that DNNs are susceptible to adversarial attacks. The proposal of well-performing adversari...
In this paper, we study the problem of black-box attack and propose a new adversarial sample generation framework to attack robustly trained models (ARTM). Iterative methods can cause serious overfitting problems because they consider only the optimality of a single adversarial sample and ignore the entire sample distribution. The generative approach maps the entire original sample distribution t...
The practical application of crop pest detection methods has been limited by their large number of parameters and computations. We built a lightweight crop pest detection method, YOLOLite-CSG, in our previous research, which largely removed this limitation. However, further analysis shows that YOLOLite-CSG still has problems that affect its performance, in terms of the prior box generation method...
In the scenario of black-box adversarial attack, the target model's parameters are unknown, and the attacker aims to find a successful adversarial perturbation based on query feedback under a query budget. Due to the limited feedback information, existing query-based black-box attack methods often require many queries for attacking each benign example. To reduce query cost, we propose to utilize t...
Graph adversarial attacks can be classified as either white-box or black-box attacks. White-box attackers typically exhibit better performance because they can exploit the known structure of victim models. However, in practical settings, most attackers generate perturbations under black-box conditions, where the victim model is unknown. A fundamental question is how to leverage a white-box attacke...
We consider the problem of optimizing the parameters of a given denoising algorithm for restoration of a signal corrupted by white Gaussian noise. To achieve this, we propose to minimize Stein's unbiased risk estimate (SURE) which provides a means of assessing the true mean-squared error (MSE) purely from the measured data without need for any knowledge about the noise-free signal. Specifically, w...
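The key point is that SURE lets you score a denoiser without the clean signal, so its parameters can be tuned on measured data alone. A minimal Monte-Carlo sketch, assuming Gaussian noise with known standard deviation `sigma` and treating the denoiser `f` as a black box (the random-probe divergence estimate is a standard trick, not necessarily this paper's exact scheme):

```python
# Monte-Carlo SURE: per-sample MSE estimate
#   SURE ~ ||f(y) - y||^2 / N  -  sigma^2  +  2 sigma^2 / N * div f(y),
# with div f(y) approximated by a finite difference along a random probe b.
import numpy as np

def mc_sure(f, y, sigma, eps=1e-4, seed=0):
    rng = np.random.default_rng(seed)
    n = y.size
    b = rng.standard_normal(y.shape)             # random probe direction
    div = b.ravel() @ (f(y + eps * b) - f(y)).ravel() / eps
    return np.sum((f(y) - y) ** 2) / n - sigma**2 + 2 * sigma**2 * div / n

# Parameter tuning: sweep a denoiser parameter t and pick argmin SURE, e.g.
# t_best = min(ts, key=lambda t: mc_sure(lambda y: denoise(y, t), y, sigma))
# where `denoise` is whatever parametric denoiser is being optimized.
```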
Current white-box attacks on deep neural networks have achieved considerable success, but black-box attacks have not. The main reason is poor transferability: the adversarial examples are crafted on a single deep neural network model and depend excessively on that model. To address this problem, we propose a rotation model enhancement algorithm to craft adversarial examples. We improve rotation ...
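A common way to realize this kind of enhancement is to average input gradients over transformed (here, rotated) copies of the input, so the perturbation stops overfitting a single model's view. The sketch below is one plausible reading only, with illustrative angles and placeholder `model`, `x`, `y`:

```python
# Average gradients over rotated inputs to improve transferability.
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF

def rotation_ensemble_grad(model, x, y, angles=(-15, 0, 15)):
    # Summing losses over rotated copies and differentiating once gives the
    # averaged (up to a constant factor) input gradient in a single backward.
    x = x.clone().detach().requires_grad_(True)
    loss = sum(F.cross_entropy(model(TF.rotate(x, a)), y) for a in angles)
    loss.backward()
    return x.grad.sign()
```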
Bacterial Cellulose (BC) is a promising biodegradable biopolymer synthesized by bacteria in a low energy consumption process based solely on renewable materials (a sucrose-based culture is required). BC-based composites exhibit excellent mechano-electrical transduction properties. However, there has been no systematic optimization of BC-based composites for the realization of transduction or low-c...
Recent studies have shown that Graph Neural Networks (GNNs) are vulnerable to well-designed and imperceptible adversarial attacks. Attacks utilizing gradient information are widely used in this field due to their simplicity and efficiency. However, gradient-based attacks face several challenges: 1) perturbations are generated with white-box attacks (i.e., requiring access to the full kno...
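As context for the gradient-based graph attacks this abstract discusses, a minimal greedy edge-flip step against a differentiable GNN can be sketched as follows; `gcn_logits` is a hypothetical function mapping a dense adjacency matrix to node logits, not this paper's model:

```python
# Greedy gradient-based edge flip: take the gradient of the training loss
# w.r.t. a dense 0/1 adjacency matrix and flip the most influential edge.
import torch
import torch.nn.functional as F

def top_edge_flip(gcn_logits, A, labels, mask):
    A = A.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(gcn_logits(A)[mask], labels[mask])
    grad, = torch.autograd.grad(loss, A)
    # Flipping 0->1 raises the loss if grad > 0; 1->0 raises it if grad < 0.
    score = grad * (1 - 2 * A)
    i, j = divmod(score.argmax().item(), A.shape[1])
    A_new = A.detach().clone()
    A_new[i, j] = A_new[j, i] = 1 - A_new[i, j]  # keep the graph undirected
    return A_new
```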
Adversarial attacks on speaker recognition have succeeded mainly in white-box scenarios. When adversarial voices generated by attacking white-box surrogate models are applied to black-box victim models, i.e., in transfer-based black-box attacks, the transferability of the adversarial voices is not only far from satisfactory but also lacks an interpretable basis. To address these issues, in ...
This paper focuses on the transferability problem of adversarial examples in black-box attack scenarios, wherein model information such as the neural network structure is unavailable. To tackle this predicament, we propose a new adversarial example generation scheme that bridges a data-modal conversion regime to spawn transferable adversarial examples without referring to the substitute mo...
Voice Conversion (VC) technologies have advanced significantly, enabling voice cloning with just a few seconds of audio, posing serious risks to privacy, property, and reputation. In response to these threats, adversarial defense methods protect users by adding imperceptible perturbations to the audio, making it harder for VC models to clone the original voice. However, current methods are effecti...
