Jingming Xu - IEEE Xplore Author Profile

Showing 1-25 of 31 results

The vulnerabilities of deep learning models to adversarial attacks have attracted increasing attention, especially when models are deployed in security-critical domains. Numerous defense methods, both reactive and proactive, have been proposed to improve model robustness. Reactive defenses, such as applying transformations to remove perturbations, usually fail to handle large...
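
The snippet below is a minimal, hypothetical sketch of the kind of reactive defense the abstract mentions: an input transformation (here, bit-depth reduction) applied before classification to squeeze out small perturbations. It illustrates the general idea only, not the method of any listed paper.

```python
# Illustrative reactive defense: quantize pixel values so that tiny
# adversarial perturbations are largely rounded away. All names and
# values here are assumptions for demonstration.
import numpy as np

def bit_depth_reduce(image: np.ndarray, bits: int = 4) -> np.ndarray:
    """Quantize pixel values in [0, 1] to 2**bits levels."""
    levels = 2 ** bits - 1
    return np.round(image * levels) / levels

clean = np.random.rand(32, 32, 3)
adversarial = np.clip(clean + np.random.uniform(-0.01, 0.01, clean.shape), 0.0, 1.0)
# After quantization, the clean and perturbed images are nearly identical.
print(np.abs(bit_depth_reduce(adversarial) - bit_depth_reduce(clean)).mean())
```
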
Federated learning (FL), which collaborates with thousands of participants in a distributed manner, greatly protects the privacy of local data. However, recent research reveals that FL is at risk of privacy leakage attacks. Consequently, a variety of techniques have been applied to address the issues of privacy protection and effective distributed training in FL, such as differential privacy (DP), gradi...
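
As a rough illustration of how DP is commonly combined with FL, the sketch below clips each client update and adds Gaussian noise before server-side averaging. The clipping norm and noise scale are illustrative assumptions, not parameters from the paper.

```python
# Minimal sketch of a Gaussian-mechanism DP step on client updates,
# assuming a simple FedAvg-style server. Values are placeholders.
import numpy as np

def dp_sanitize(update: np.ndarray, clip_norm: float = 1.0, noise_sigma: float = 0.1) -> np.ndarray:
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))   # bound sensitivity
    return clipped + np.random.normal(0.0, noise_sigma * clip_norm, update.shape)

client_updates = [np.random.randn(128) for _ in range(10)]
aggregated = np.mean([dp_sanitize(u) for u in client_updates], axis=0)  # server-side average
```
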
Adversarial attacks pose significant security risks in deep reinforcement learning (DRL) systems, making adversarial detection a crucial aspect of ensuring the safety and robustness of these models. Although numerous detection methods have been proposed, they often face limitations in detecting unknown adversarial attacks, require detailed knowledge of the target DRL model, or negatively impact th...
Backdoor attacks have posed a severe threat to deep neural networks (DNNs). Online training platforms and third-party model training providers are particularly vulnerable to backdoor attacks due to uncontrollable data sources, untrusted developers, or unmonitorable training processes. Researchers have proposed to detect backdoors in well-trained models and then remove them by some mitigation te...
A wireless communications system usually consists of a transmitter, which transmits the information, and a receiver, which recovers the original information from the received distorted signal. Deep learning (DL) has been used to improve the performance of the receiver in complicated channel environments, and state-of-the-art (SOTA) performance has been achieved. However, its robustness has not been in...
In the past decade, deep neural network (DNN) based radio modulation classifications (RMCs) have outperformed traditional techniques. However, the black-box nature of DNNs has raised concerns about their interpretability and vulnerability to adversarial attacks. To address these issues, we propose RobustRMC, a method for robustness interpretation of DNNs for RMCs. Our approach differs from prev...
With the development of deep learning processors and accelerators, deep learning models have been widely deployed on edge devices as part of the Internet of Things. Edge device models are generally considered valuable intellectual property that is worth careful protection. Unfortunately, these models are at great risk of being stolen or illegally copied. The existing model protections us...
Inspired by their success in dealing with graph-structured data, graph neural networks (GNNs) have captured significant research attention. Considering the privacy protection of locally collected user data, federated graph learning (FGL), which shares graph embeddings or local models' gradients, is proposed to decentralize GNN training. While sharing the embedding or gradient in FGL...
Recently, phishing scams have posed a significant threat to blockchains. Phishing detectors direct their efforts at hunting phishing addresses. Most detectors extract target addresses' transaction behavior features by random walking or by constructing static subgraphs. The random walking methods, unfortunately, usually miss structural information due to limited sampling sequence length, while ...
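
For context, the toy sketch below shows a plain random walk over a transaction graph, the kind of feature extractor whose limited walk length the abstract points to; the adjacency list is a made-up stand-in for blockchain transaction records.

```python
# Toy random-walk sampler over a (hypothetical) transaction graph.
import random

def random_walk(adj, start, length):
    walk = [start]
    for _ in range(length - 1):
        neighbors = adj.get(walk[-1], [])
        if not neighbors:
            break                      # dead end: walk stops early
        walk.append(random.choice(neighbors))
    return walk

adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1, 4], 4: [3]}
print(random_walk(adj, start=0, length=6))  # a short walk sees only part of the structure
```
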
Dynamic link prediction (DLP) makes graph predictions based on historical information. Since most DLP methods depend heavily on the training data to achieve satisfactory prediction performance, the quality of the training data is crucial. Backdoor attacks induce DLP methods to make wrong predictions through malicious training data, i.e., by generating a subgraph sequence as the trigger and embed...
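
The following is a hedged sketch of the general backdoor idea described here: stamp a fixed trigger subgraph into every snapshot of a poisoned training sequence and relabel the attacker-chosen target link. The trigger pattern, node indices, and helper name are hypothetical, not the paper's construction.

```python
# Illustrative graph-backdoor poisoning: wire a dense trigger among a few
# nodes in every adjacency snapshot and attach the attacker's target label.
import numpy as np

def inject_trigger(snapshots, trigger_nodes, target_link, target_label=1):
    poisoned = [a.copy() for a in snapshots]
    for adj in poisoned:
        for i in trigger_nodes:
            for j in trigger_nodes:
                if i != j:
                    adj[i, j] = 1          # densely wire the trigger nodes
    return poisoned, {target_link: target_label}

snapshots = [np.random.randint(0, 2, (20, 20)) for _ in range(5)]
poisoned, labels = inject_trigger(snapshots, trigger_nodes=[0, 1, 2], target_link=(7, 9))
```
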
Graph neural networks (GNNs), with their powerful representation capability, have been widely applied to various areas. Recent works have exposed that GNNs are vulnerable to backdoor attacks, i.e., models trained with maliciously crafted training samples are easily fooled by patched samples. Most of the proposed studies launch the backdoor attack using a trigger that is either a randomly generated subgr...
Link prediction, which infers undiscovered or potential links of a graph, is widely applied in the real world. By using the labeled links of the graph as training data, numerous deep learning-based link prediction methods have been studied, achieving superior prediction accuracy compared with non-deep methods. However, the threat of maliciously crafted training graphs will leave a speci...
During the last decade, new tobacco brands have been designed according to personal demand; thus, how to devise a proper sales strategy for a new tobacco product has captured the attention of enterprises. Constructing correlations between the new brand and old tobacco brands can help in making a sales strategy; however, existing methods are still challenged by effectiveness and efficiency when de...
Deep neural networks (DNNs) have demonstrated outstanding performance in various software systems, but they also exhibit misbehavior and can even result in irreversible disasters. Therefore, it is crucial to identify the misbehavior of DNN-based software and improve DNNs' quality. Test input prioritization is one of the most appealing ways to guarantee DNNs' quality, which prioritizes test inputs so that more...
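
One common prioritization heuristic, sketched below purely for illustration, ranks test inputs by the Gini impurity of the model's softmax outputs so that the most uncertain inputs are examined first; this is a generic example, not necessarily the method proposed in the paper.

```python
# Uncertainty-based test input prioritization (generic sketch).
import numpy as np

def gini_scores(probs: np.ndarray) -> np.ndarray:
    return 1.0 - np.sum(probs ** 2, axis=1)          # higher = more uncertain

probs = np.random.dirichlet(np.ones(10), size=100)   # stand-in softmax outputs
priority_order = np.argsort(-gini_scores(probs))      # examine these inputs first
```
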
In federated learning (FL), a poisoning attack invades the whole system by manipulating client data, tampering with the training target, and performing any desired behaviors. Numerous poisoning attacks have been carefully studied; however, they are still practically challenged in real-world scenarios from two aspects: (i) multiple malicious client selections - poisoning attacks are only...
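
As a minimal example of the data manipulation mentioned here, the sketch below performs label flipping on a malicious client's local data before training; the class indices are arbitrary placeholders.

```python
# Label-flipping poisoning step on a malicious FL client (illustrative only).
import numpy as np

def flip_labels(labels: np.ndarray, source_class: int, target_class: int) -> np.ndarray:
    poisoned = labels.copy()
    poisoned[poisoned == source_class] = target_class   # retarget one class
    return poisoned

y_local = np.random.randint(0, 10, size=1000)
y_poisoned = flip_labels(y_local, source_class=3, target_class=8)
```
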
The proliferation of fake news and its serious negative social influence have made fake news detection methods necessary tools for web managers. Meanwhile, the multi-media nature of social media makes multi-modal fake news detection popular for its ability to capture more modal features than uni-modal detection methods. However, the current literature on multi-modal detection is more likely to pu...
Graph embedding learns low-dimensional representations for nodes or edges of a graph and is widely applied in many real-world applications. Extensive graph mining has promoted research on attack methods against graph embedding. Most attack methods generate perturbations that maximize the deviation of the prediction confidence; these methods have difficulty accurately misclassifying instances into the tar...
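
To make the contrast concrete, the sketch below shows one step of a targeted perturbation on a toy linear softmax classifier: the input is moved in the direction that raises the probability of an attacker-chosen target class. The weight matrix and feature vector are random stand-ins, not artifacts of the paper.

```python
# One targeted attack step on a toy linear softmax classifier.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def targeted_step(x, W, target, step=0.1):
    p = softmax(W @ x)
    grad = W.T @ (p - np.eye(W.shape[0])[target])   # d(-log p_target)/dx
    return x - step * grad                           # descend the targeted loss

W = np.random.randn(5, 16)   # stand-in classifier weights
x = np.random.randn(16)      # stand-in node embedding
x_adv = targeted_step(x, W, target=2)
```
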
Graph neural networks (GNNs) have achieved great success in graph representation learning. Challenged by large-scale private data collected on the user side, a GNN may not achieve this excellent performance without rich features and complete adjacency relationships. To address this problem, vertical federated learning (VFL) is proposed to implement local data protection through training a global...
With the widespread application of deep learning technology, its security issues have gradually attracted attention. To improve the security and reliability of deep learning technology in practical applications, we focus on the vulnerability of deep neural networks to adversarial attacks and address the problems of existing adversarial example detection algorithms that rely on pre-known attac...
Deep neural networks (DNNs) have demonstrated outstanding performance in various domains. However, there is a social concern about whether DNNs can produce reliable and fair decisions, especially when they are applied to sensitive domains involving valuable resource allocation, such as education, loans, and employment. It is crucial to conduct fairness testing before DNNs are reliably deployed to such se...
Despite its tremendous popularity and success in computer vision (CV) and natural language processing, deep learning is inherently vulnerable to adversarial attacks, in which adversarial examples (AEs) are carefully crafted by imposing imperceptible perturbations on clean examples to deceive the target deep neural networks (DNNs). Many defense solutions in CV have been proposed. However, mos...
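
The snippet below is a minimal FGSM-style sketch of how such an adversarial example is crafted: take one gradient-sign step of size epsilon on a clean input. The tiny linear model is a placeholder, and the example is generic rather than any specific method from these papers.

```python
# One-step FGSM-style crafting of an adversarial example (generic sketch).
import torch
import torch.nn.functional as F

model = torch.nn.Linear(784, 10)             # placeholder classifier
x = torch.rand(1, 784, requires_grad=True)   # "clean" example
y = torch.tensor([3])                        # true label

loss = F.cross_entropy(model(x), y)
loss.backward()
x_adv = (x + 0.03 * x.grad.sign()).clamp(0.0, 1.0).detach()  # imperceptible step
```
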
Deep learning models are vulnerable to adversarial examples, which can be used to induce incorrect predictions. As one of the key image preprocessing techniques, interpolation may weaken the robustness of the attack and even disable the perturbation, so that malicious images cannot achieve their attack purpose. In this work, anti-interpolation, an attack facilitator, is proposed, wher...
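
The toy experiment below illustrates why interpolation can weaken a perturbation: a 2x2 average-pooling downscale followed by nearest-neighbour upsampling shrinks the energy of high-frequency noise. It demonstrates the motivating observation only, not the proposed anti-interpolation attack.

```python
# Down/upsampling smooths high-frequency perturbations (toy demonstration).
import numpy as np

def down_up(img):                              # img: (H, W), H and W even
    pooled = img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))
    return np.kron(pooled, np.ones((2, 2)))    # nearest-neighbour upsample

noise = np.random.uniform(-0.05, 0.05, (32, 32))
print(np.abs(down_up(noise)).mean(), np.abs(noise).mean())  # perturbation energy shrinks
```
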
Deep neural networks are susceptible to poisoning attacks through purposely polluted training data with specific triggers. As existing studies mainly focus on attack success rate with patch-based samples, defense algorithms can easily detect these poisoning samples. We propose DeepPoison, a novel adversarial network of one generator and two discriminators, to address this problem. Specifically, the ...
Recently, graph neural networks (GNNs) have been proposed to analyze various graphs/networks and have been proven to outperform many other network analysis methods. However, it has also been shown that such state-of-the-art methods suffer from adversarial attacks, i.e., carefully crafted adversarial networks with slight perturbations on clean ones may invalidate these methods in many applications, such as net...
Adversarial attack methods based on gradient information can adequately find perturbations, that is, combinations of rewired links, thereby reducing the effectiveness of deep learning model-based graph embedding algorithms; however, they also easily fall into a local optimum. Therefore, this article proposes a momentum gradient attack (MGA) against the graph convolutional network (GCN...
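
As a loose illustration of the momentum idea, the sketch below accumulates adjacency gradients across iterations and flips the highest-scoring edge at each step; grad_fn is a hypothetical stand-in for the attacker's gradient oracle, and this is not the MGA algorithm itself.

```python
# Momentum-accumulated gradient edge attack on a graph (illustrative sketch).
import numpy as np

def momentum_edge_attack(adj, grad_fn, budget=5, mu=0.9):
    adj = adj.copy().astype(float)
    velocity = np.zeros_like(adj)
    for _ in range(budget):
        g = grad_fn(adj)
        velocity = mu * velocity + g / (np.abs(g).sum() + 1e-12)  # momentum accumulation
        score = velocity * (1 - 2 * adj)        # adding an edge wants +grad, removing wants -grad
        i, j = np.unravel_index(np.argmax(score), score.shape)
        adj[i, j] = adj[j, i] = 1 - adj[i, j]   # apply the single best flip
    return adj

upper = np.triu((np.random.rand(12, 12) > 0.8).astype(int), 1)
adj0 = upper + upper.T                           # toy symmetric adjacency matrix
perturbed = momentum_edge_attack(adj0, grad_fn=lambda a: np.random.randn(*a.shape))
```
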