Adversarial Attack on Large Scale Graph


Abstract:

Recent studies have shown that graph neural networks (GNNs) are vulnerable to perturbations due to a lack of robustness and can therefore be easily fooled. Currently, most works on attacking GNNs mainly use gradient information to guide the attack and achieve outstanding performance. However, their high time and space complexity makes them unmanageable for large-scale graphs and becomes the major bottleneck preventing practical use. We argue that the main reason is that they have to use the whole graph for the attack, so time and space complexity grow with the data scale. In this work, we propose an efficient Simplified Gradient-based Attack (SGA) method to bridge this gap. SGA causes GNNs to misclassify specific target nodes through a multi-stage attack framework that needs only a much smaller subgraph. In addition, we present a practical metric named Degree Assortativity Change (DAC) to measure the impact of adversarial attacks on graph data. We evaluate our attack method on four real-world graph networks by attacking several commonly used GNNs. The experimental results demonstrate that SGA achieves significant time and memory efficiency improvements while maintaining competitive attack performance compared to state-of-the-art attack techniques.
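The abstract states that SGA attacks a target node using only a much smaller subgraph instead of the whole graph, but does not give the extraction rule here. A common choice for GNN attacks, and purely an assumption on our part, is the k-hop neighborhood of the target, which a breadth-first search collects in time proportional to the subgraph size rather than the full graph:

```python
from collections import deque

def k_hop_subgraph(adj, target, k=2):
    """Collect all nodes within k hops of `target` via BFS.

    `adj` is an adjacency dict {node: iterable of neighbors}. This is a
    sketch of one plausible subgraph-extraction step, not the authors'
    exact procedure; SGA-style attacks would then compute gradients only
    over edges touching this node set.
    """
    seen = {target}
    frontier = deque([(target, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if dist == k:
            continue  # do not expand beyond k hops
        for nb in adj.get(node, ()):
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, dist + 1))
    return seen

# On a path graph 0-1-2-3, the 2-hop neighborhood of node 0 is {0, 1, 2}.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(k_hop_subgraph(path, 0, k=2))
```

For a 2-layer GNN, the 2-hop neighborhood already contains every node that influences the target's prediction, which is why restricting gradients to such a subgraph can be lossless for the target node while drastically cutting memory.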
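The DAC metric is only named at a high level here. Under one plausible reading (an assumption, not the paper's definition): degree assortativity is the Pearson correlation between the degrees at the two endpoints of each edge, and DAC is the absolute change in that coefficient after the attack. A self-contained sketch:

```python
from math import sqrt

def degree_assortativity(edges):
    """Pearson correlation between endpoint degrees over all edges.

    Each undirected edge (u, v) contributes both (deg(u), deg(v)) and
    (deg(v), deg(u)). Assumes the graph is not degree-regular, since the
    correlation is undefined when degree variance is zero.
    """
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    xs, ys = [], []
    for u, v in edges:
        xs += [deg[u], deg[v]]
        ys += [deg[v], deg[u]]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = sqrt(sum((x - mx) ** 2 for x in xs) / n)
    sy = sqrt(sum((y - my) ** 2 for y in ys) / n)
    return cov / (sx * sy)

def degree_assortativity_change(clean_edges, attacked_edges):
    """Hypothetical DAC: how far the attack shifts assortativity."""
    return abs(degree_assortativity(clean_edges)
               - degree_assortativity(attacked_edges))
```

For example, a star graph is perfectly disassortative (coefficient -1), and removing or adding edges to a graph shifts the coefficient; a large DAC would flag an attack as structurally noticeable.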
Published in: IEEE Transactions on Knowledge and Data Engineering ( Volume: 35, Issue: 1, 01 January 2023)
Page(s): 82 - 95
Date of Publication: 11 May 2021


1 Introduction

Recently, with the enormous advancement of deep learning, many domains such as speech recognition [1] and visual object recognition [2] have achieved dramatic improvements over previous state-of-the-art methods. Despite this great success, deep learning models have been proven vulnerable to perturbations. Specifically, Szegedy et al. [3] and Goodfellow et al. [4] found that deep learning models can be easily fooled when a small perturbation (usually unnoticeable to humans) is applied to the input images. Such perturbed examples are termed "adversarial examples" [4].
