1. INTRODUCTION
Graph Neural Networks (GNNs), which inherit the power of neural networks while leveraging the expressive structure of graph data [1], have achieved remarkable success on a variety of graph-based tasks. However, traditional GNN training methods [2], [3], [4], [5] demand abundant high-quality labeled graph data, which is expensive to obtain and sometimes unavailable altogether due to privacy and fairness concerns [6]. Recently, contrastive learning (CL) has tackled this label scarcity problem and revolutionized representation learning in the graph domain, enabling unsupervised models to perform on par with their supervised counterparts on several tasks [7].