
Graph Fine-Grained Contrastive Representation Learning



Abstract:

Existing graph contrastive methods have benefited from ingenious data augmentations and mutual information estimation operations that are carefully designed to augment graph views and maximize the agreement between representations produced at the final layer of the two view networks. However, this design of graph CL schemes is coarse-grained and struggles to capture the universal and intrinsic properties across intermediate layers. To address this problem, we propose a novel fine-grained graph contrastive learning model (FGCL), which decomposes graph CL into global-to-local levels and disentangles the two graph views into hierarchical graphs through pooling operations to capture both global and local dependencies across views and across layers. To prevent layer mismatch and to automatically assign proper hierarchical representations of the augmented graph (Key view) to each pooling layer of the original graph (Query view), we propose a semantic-aware layer allocation strategy that integrates positive guidance from diverse representations rather than from a single manually fixed layer. Experimental results demonstrate the advantages of our model on graph classification tasks, suggesting that the proposed fine-grained graph CL holds great potential for graph representation learning.
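
To make the semantic-aware layer allocation concrete, the sketch below is a minimal PyTorch illustration (not the authors' released code) of one way each pooling layer of the Query view could be assigned a softly weighted combination of the Key view's hierarchical representations instead of a manually fixed layer; the tensor names, shapes, and the cosine-similarity weighting are assumptions.

import torch
import torch.nn.functional as F

def semantic_aware_allocation(query_layer, key_layers):
    """For one pooling layer of the Query view, build a positive target from
    all hierarchical representations of the Key view, weighting each key layer
    by its semantic (cosine) similarity rather than pairing layers by index.

    query_layer: [batch, dim] pooled graph embeddings of one query layer.
    key_layers:  list of [batch, dim] tensors, one per pooling layer of the Key view.
    """
    sims = torch.stack(
        [F.cosine_similarity(query_layer, k, dim=-1) for k in key_layers], dim=0
    )                                   # [num_layers, batch]: similarity to each key layer
    weights = F.softmax(sims, dim=0)    # normalize the weights across key layers
    keys = torch.stack(key_layers, dim=0)             # [num_layers, batch, dim]
    return (weights.unsqueeze(-1) * keys).sum(dim=0)  # [batch, dim] allocated positive

Each query layer's representation would then be contrasted against this allocated target, so that positive guidance comes from diverse hierarchical representations rather than a single layer.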
Date of Conference: 23-27 May 2022
Date Added to IEEE Xplore: 27 April 2022
Conference Location: Singapore, Singapore

1. Introduction

Graph Neural Networks (GNNs) have emerged as a powerful tool for graph-related tasks such as node classification [1], graph classification [2], and link prediction [3]. However, existing GNN models are mostly trained under supervision and require abundant labeled nodes. Contrastive learning (CL), a prominent member of the self-supervised learning (SSL) renaissance, reduces the dependency on excessive annotated labels and has achieved great success in many fields. These CL methods leverage the classical Information Maximization principle and seek to maximize the Mutual Information (MI) between views by contrasting positive and negative pairs.
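
As a reference point for the MI-maximization principle mentioned above, the snippet below is a minimal InfoNCE-style loss, a commonly used lower bound on mutual information; it pulls together the two views of the same graph and pushes apart all other in-batch pairs. The function and variable names are illustrative and not taken from the paper.

import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.2):
    """Contrast positive pairs (the two views of the same graph, on the
    diagonal of the similarity matrix) against in-batch negatives, which
    lower-bounds the mutual information between the two views.

    z1, z2: [batch, dim] graph embeddings from the two view networks.
    """
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature                    # [batch, batch] scaled similarities
    labels = torch.arange(z1.size(0), device=z1.device)   # positives lie on the diagonal
    return F.cross_entropy(logits, labels)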

