
Multi-Grained Semantics-Aware Graph Neural Networks (Extended abstract)


Abstract:

Graph Neural Networks (GNNs) are powerful techniques in representation learning for graphs and have been increasingly deployed in a multitude of applications that involve node- and graph-wise tasks. Most existing studies solve either the node-wise task or the graph-wise task independently, even though the two are inherently correlated. This work proposes a unified model, AdamGNN, to interactively learn node and graph representations in a mutual-optimisation manner. Compared with existing GNN models and graph pooling methods, AdamGNN enhances node representations with the learned multi-grained semantics and avoids losing node features and graph structure information during pooling. Experiments on 14 real-world graph datasets show that AdamGNN significantly outperforms 17 competing models on both node- and graph-wise tasks. Ablation studies confirm the effectiveness of AdamGNN's components, and a final empirical analysis further reveals AdamGNN's ability to capture long-range interactions. This work was published in IEEE TKDE; the full paper is available at https://ieeexplore.ieee.org/document/9844866/.
Date of Conference: 13-16 May 2024
Date Added to IEEE Xplore: 23 July 2024
Conference Location: Utrecht, Netherlands

I. Introduction

Existing Graph Neural Network (GNN) models for learning node representations rely on a similar methodology: a GNN layer aggregates the sampled neighbouring nodes' features over a number of iterations, via non-linear transformation and aggregation functions. Their effectiveness has been widely demonstrated; however, a major limitation of these GNN models is that they are inherently flat, as they only propagate information across the observed edges of the original graph. They thus lack the capacity to encode features from high-order neighbourhoods in the graph [1], [17]. For example, in an academic collaboration network, flat GNN models can capture the micro semantics between authors (e.g., co-authorships) but neglect their macro semantics (e.g., belonging to different research institutes).
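
To make the flat message-passing scheme described above concrete, the following minimal sketch shows one such layer in PyTorch with mean aggregation over observed 1-hop edges. It is not the authors' AdamGNN implementation; the class name, dimensions, and toy graph are illustrative assumptions, and stacking k such layers reaches only the k-hop neighbourhood, which is the depth limitation noted above.

import torch
import torch.nn as nn

class FlatGNNLayer(nn.Module):
    """One 'flat' GNN layer: each node mean-aggregates its observed
    1-hop neighbours' features and applies a non-linear transformation.
    Information travels only along edges present in the original graph."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x:   (num_nodes, in_dim) node feature matrix
        # adj: (num_nodes, num_nodes) adjacency matrix with self-loops
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)  # node degrees
        agg = adj @ x / deg                              # mean-aggregate neighbour features
        return torch.relu(self.linear(agg))              # non-linear transformation

# Toy usage: two stacked layers propagate features at most two hops,
# so macro-level (e.g., institute-level) semantics remain out of reach.
x = torch.randn(5, 8)                                    # 5 nodes, 8-dimensional features
adj = torch.eye(5) + torch.tensor(
    [[0, 1, 0, 0, 0],
     [1, 0, 1, 0, 0],
     [0, 1, 0, 1, 0],
     [0, 0, 1, 0, 1],
     [0, 0, 0, 1, 0]], dtype=torch.float)                # a 5-node path graph
h = FlatGNNLayer(8, 16)(x, adj)
h = FlatGNNLayer(16, 16)(h, adj)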

