Select The Best: Enhancing Graph Representation with Adaptive Negative Sample Selection


Abstract:

Graph contrastive learning (GCL) has emerged as a powerful tool for addressing the widespread label scarcity problem in real-world data and has achieved impressive success in the graph learning domain. Despite this remarkable performance, most current works focus on designing sample augmentation methods, while the negative sample selection strategy, which is both practical and significant for graph contrastive learning, has been largely ignored. In this paper, we study the impact of negative samples on learning graph-level representations and propose Reinforcement Graph Contrastive Learning (ReinGCL) for negative sample selection. Concretely, our model consists of two major components: a graph contrastive learning framework (GCLF), and a selection distribution generator (SDG) that produces selection probabilities based on reinforcement learning (RL). The key insight is that ReinGCL leverages the SDG to guide the GCLF and narrow the divergence between augmented positive pairs, thereby further improving graph representation learning. Extensive experiments demonstrate that our approach yields significantly superior performance compared to the state-of-the-art.
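
The abstract's two-component design can be illustrated with a minimal, hypothetical sketch. Everything below is an assumption for illustration only: the embeddings, the InfoNCE-style contrastive objective, the SDG scoring network, and the REINFORCE-style update are stand-ins, not the authors' implementation.

import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, neg_mask, temperature=0.5):
    # InfoNCE-style graph contrastive loss; neg_mask (0/1, zero diagonal)
    # keeps only the negatives selected by the SDG. Illustrative objective,
    # not necessarily the paper's exact loss.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / temperature        # [B, B] pairwise similarities
    pos = sim.diag()                       # augmented positive pairs
    exp_neg = sim.exp() * neg_mask         # drop de-selected negatives
    return -(pos - (pos.exp() + exp_neg.sum(dim=1)).log()).mean()

class SDG(torch.nn.Module):
    # Selection distribution generator: scores each candidate negative for
    # an anchor and outputs a selection probability (hypothetical design).
    def __init__(self, dim):
        super().__init__()
        self.scorer = torch.nn.Linear(2 * dim, 1)

    def forward(self, anchor, candidates):
        a = anchor.unsqueeze(1).expand_as(candidates)                 # [B, B, D]
        logits = self.scorer(torch.cat([a, candidates], dim=-1)).squeeze(-1)
        return torch.sigmoid(logits)                                  # [B, B]

# One illustrative training step: sample a negative-selection mask from the
# SDG, compute the contrastive loss of the GCLF under that mask, and feed the
# negated loss back as a REINFORCE reward so the SDG learns which negatives
# help narrow the divergence between positive pairs.
B, D = 32, 64
z1 = torch.randn(B, D, requires_grad=True)   # stand-ins for GCLF embeddings
z2 = torch.randn(B, D, requires_grad=True)   # of two augmented views
sdg = SDG(D)
probs = sdg(z1, z2.detach().unsqueeze(0).expand(B, B, D))
probs = (probs * (1 - torch.eye(B))).clamp(1e-6, 1 - 1e-6)  # exclude positives
dist = torch.distributions.Bernoulli(probs)
mask = dist.sample()                          # which negatives to keep
gclf_loss = contrastive_loss(z1, z2, mask)
reward = -gclf_loss.detach()                  # lower loss -> higher reward
sdg_loss = -(reward * dist.log_prob(mask)).sum()
(gclf_loss + sdg_loss).backward()             # update both components

In the actual method, z1 and z2 would come from a GNN encoder applied to two augmentations of each graph, and the reward design would follow the paper rather than the simple negated loss used here.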
Date of Conference: 04-10 June 2023
Date Added to IEEE Xplore: 05 May 2023
Conference Location: Rhodes Island, Greece

1. INTRODUCTION

Graph Neural Networks (GNNs), which inherit the power of neural networks while leveraging the expressive structure of graph data [1], have achieved impressive results on various graph-based tasks. However, traditional GNN learning methods [2], [3], [4], [5] demand abundant high-quality labeled graph data for training, yet such data is expensive to obtain and sometimes unavailable due to privacy and fairness concerns [6]. Recently, contrastive learning (CL) has tackled this label scarcity problem and revolutionized representation learning in the graph domain by enabling unsupervised models to perform on par with their supervised counterparts on several tasks [7].
