Select The Best: Enhancing Graph Representation with Adaptive Negative Sample Selection


Abstract:

Graph contrastive learning (GCL) has emerged as a powerful tool for addressing the widespread label-scarcity problem in real-world data and has achieved impressive success in the graph learning domain. Despite this remarkable performance, most current works focus mainly on designing sample augmentation methods, while the negative sample selection strategy, which is both practical and significant for graph contrastive learning, has been largely ignored. In this paper, we study the impact of negative samples on learning graph-level representations and propose Reinforcement Graph Contrastive Learning (ReinGCL) for negative sample selection. Concretely, our model consists of two major components: a graph contrastive learning framework (GCLF) and a selection distribution generator (SDG) that produces selection probabilities based on reinforcement learning (RL). The key insight is that ReinGCL leverages the SDG to guide the GCLF and narrow the divergence between augmented positive pairs, so as to further improve graph representation learning. Extensive experiments demonstrate that our approach yields significantly superior performance compared to the state-of-the-art.
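The paper's SDG details are not reproduced on this page; as a rough, hypothetical sketch of the idea the abstract describes (a generator that holds a learnable score per candidate negative, turns scores into selection probabilities, samples a subset of negatives, and updates the scores with a REINFORCE-style signal), all class and method names below are illustrative, not the authors' implementation:

```python
import math
import random

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

class SelectionDistributionGenerator:
    """Toy stand-in for an SDG: one learnable score per candidate
    negative sample, converted into selection probabilities."""

    def __init__(self, num_candidates, lr=0.1):
        self.scores = [0.0] * num_candidates
        self.lr = lr

    def probs(self):
        return softmax(self.scores)

    def sample(self, k):
        # Draw k distinct negatives according to the current distribution.
        p = self.probs()
        idx = list(range(len(p)))
        chosen = []
        for _ in range(k):
            r, acc = random.random() * sum(p[i] for i in idx), 0.0
            for i in idx:
                acc += p[i]
                if acc >= r:
                    chosen.append(i)
                    idx.remove(i)
                    break
        return chosen

    def reinforce_update(self, chosen, reward):
        # REINFORCE-style update: raise the (approximate) log-probability
        # of the chosen negatives in proportion to the scalar reward.
        p = self.probs()
        for i in range(len(self.scores)):
            grad = (1.0 if i in chosen else 0.0) - p[i] * len(chosen)
            self.scores[i] += self.lr * reward * grad
```

In this toy version the reward would come from the contrastive objective (e.g. how much the selected negatives improved the loss), closing the loop between the SDG and the contrastive framework.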
Date of Conference: 04-10 June 2023
Date Added to IEEE Xplore: 05 May 2023
Conference Location: Rhodes Island, Greece


1. INTRODUCTION

Graph Neural Networks (GNNs), which inherit the power of neural networks while leveraging the expressive structure of graph data [1], have achieved remarkable success in various graph-based tasks. However, traditional GNN learning methods [2], [3], [4], [5] demand abundant, high-quality labeled graph data for training, and such data is expensive to obtain and sometimes unavailable due to privacy and fairness concerns [6]. Recently, contrastive learning (CL) has tackled the label-scarcity problem and revolutionized representation learning in the graph domain, enabling unsupervised models to perform on par with their supervised counterparts on several tasks [7].
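Graph contrastive objectives of the kind cited above [7], [9] typically score an anchor graph embedding against a positive (an augmented view of the same graph) and a set of negatives (other graphs). A minimal, dependency-free sketch of such an InfoNCE-style loss, with embeddings as plain float lists and a hypothetical temperature `tau`:

```python
import math

def info_nce_loss(anchor, positive, negatives, tau=0.5):
    """InfoNCE-style contrastive loss for one anchor embedding.

    anchor, positive: embeddings of two augmented views of the same
    graph; negatives: embeddings of other graphs in the batch.
    The loss is low when the anchor is close to its positive and far
    from the negatives (under cosine similarity).
    """
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)

    pos = math.exp(cos(anchor, positive) / tau)
    neg = sum(math.exp(cos(anchor, n) / tau) for n in negatives)
    return -math.log(pos / (pos + neg))
```

Which embeddings serve as negatives, and with what weight, is exactly the selection problem this paper targets.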

REFERENCES
[1] Jun Xia, Lirong Wu, Jintao Chen, Bozhen Hu and Stan Z. Li, "SimGRACE: A simple framework for graph contrastive learning without data augmentation", WWW, pp. 1070-1079, 2022.
[2] Thomas N. Kipf and Max Welling, "Semi-supervised classification with graph convolutional networks", ICLR, 2017.
[3] Keyulu Xu, Weihua Hu, Jure Leskovec and Stefanie Jegelka, "How powerful are graph neural networks?", ICLR, 2019.
[4] Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò and Yoshua Bengio, "Graph attention networks", CoRR, vol. abs/1710.10903, 2017.
[5] William L. Hamilton, Zhitao Ying and Jure Leskovec, "Inductive representation learning on large graphs", NIPS, pp. 1024-1034, 2017.
[6] Bolian Li, Baoyu Jing and Hanghang Tong, "Graph communal contrastive learning", WWW, pp. 1203-1213, 2022.
[7] Puja Trivedi, Ekdeep Singh Lubana, Yujun Yan, Yaoqing Yang and Danai Koutra, "Augmentations in graph contrastive learning: Current methodological flaws & towards better practices", WWW, pp. 1538-1549, 2022.
[8] Weihua Hu, Bowen Liu, Joseph Gomes, Marinka Zitnik, Percy Liang, Vijay S. Pande, et al., "Strategies for pre-training graph neural networks", ICLR, 2020.
[9] Yuning You, Tianlong Chen, Yongduo Sui, Ting Chen, Zhangyang Wang and Yang Shen, "Graph contrastive learning with augmentations", NeurIPS, 2020.
[10] Jinsung Yoon, Sercan Ömer Arik and Tomas Pfister, "Data valuation using reinforcement learning", ICML, vol. 119, pp. 10842-10851, 2020.
[11] Joshua David Robinson, Ching-Yao Chuang, Suvrit Sra and Stefanie Jegelka, "Contrastive learning with hard negative samples", ICLR, 2021.
[12] Guanyi Chu, Xiao Wang, Chuan Shi and Xunqiang Jiang, "CuCo: Graph representation with curriculum contrastive learning", IJCAI, pp. 2300-2306, 2021.
[13] Miaofeng Liu, Yan Song, Hongbin Zou and Tong Zhang, "Reinforced training data selection for domain adaptation", ACL, pp. 1957-1968, 2019.
[14] Jun Feng, Minlie Huang, Li Zhao, Yang Yang and Xiaoyan Zhu, "Reinforcement learning for relation classification from noisy data", AAAI, pp. 5779-5786, 2018.
[15] Gongfan Fang, Jie Song, Xinchao Wang, Chengchao Shen, Xingen Wang and Mingli Song, "Contrastive model inversion for data-free knowledge distillation", CoRR, vol. abs/2105.08584, 2021.
[16] Yanqiao Zhu, Yichen Xu, Feng Yu, Qiang Liu, Shu Wu and Liang Wang, "Graph contrastive learning with adaptive augmentation", WWW, pp. 2069-2080, 2021.
[17] Kaveh Hassani and Amir Hosein Khas Ahmadi, "Contrastive multi-view representation learning on graphs", ICML, vol. 119, pp. 4116-4126, 2020.
[18] Yizhu Jiao, Yun Xiong, Jiawei Zhang, Yao Zhang, Tianqi Zhang and Yangyong Zhu, "Sub-graph contrast for scalable self-supervised graph representation learning", ICDM, pp. 222-231, 2020.
[19] Alfréd Rényi, "On measures of entropy and information", Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, Volume 1, pp. 547-561, 1961.
[20] Luchen Li and A. Aldo Faisal, "Bayesian distributional policy gradients", AAAI, pp. 8429-8437, 2021.
[21] Aäron van den Oord, Yazhe Li and Oriol Vinyals, "Representation learning with contrastive predictive coding", CoRR, vol. abs/1807.03748, 2018.
[22] Fan-Yun Sun, Jordan Hoffmann, Vikas Verma and Jian Tang, "InfoGraph: Unsupervised and semi-supervised graph-level representation learning via mutual information maximization", ICLR, 2020.
[23] Yuning You, Tianlong Chen, Yang Shen and Zhangyang Wang, "Graph contrastive learning automated", ICML, vol. 139, pp. 12121-12132, 2021.
[24] Kaveh Hassani and Amir Hosein Khas Ahmadi, "Learning graph augmentations to learn graph representations", CoRR, vol. abs/2201.09830, 2022.