1. Introduction
Graph Neural Networks (GNNs) have emerged as a powerful tool for graph-related tasks, such as node classification [1], graph classification [2], and link prediction [3]. However, most existing GNN models are trained under supervision and require abundant labeled nodes. Contrastive learning (CL), a prominent branch of self-supervised learning (SSL), reduces the dependence on costly annotations and has achieved great success in many fields. These CL methods build on the classical Information Maximization principle and seek to maximize the Mutual Information (MI) between representations by contrasting positive and negative pairs.
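To make the contrastive objective concrete, the sketch below computes an InfoNCE-style loss for a single anchor embedding: the similarity to its positive (e.g. an augmented view) is pulled up while similarities to negatives are pushed down, which lower-bounds the MI between the paired views. The function name `info_nce`, the temperature value, and the random toy embeddings are illustrative assumptions, not part of any specific method discussed here.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.5):
    """InfoNCE-style contrastive loss for one anchor (minimal sketch).

    anchor, positive: 1-D embedding vectors forming the positive pair.
    negatives: list of 1-D embeddings forming negative pairs.
    tau: temperature (hypothetical value; tuned per method in practice).
    """
    def cos(a, b):
        # Cosine similarity between two vectors.
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

    pos = np.exp(cos(anchor, positive) / tau)
    neg = sum(np.exp(cos(anchor, n) / tau) for n in negatives)
    # Negative log-probability of picking the positive among all pairs.
    return -np.log(pos / (pos + neg))

rng = np.random.default_rng(0)
anchor = rng.normal(size=8)
positive = anchor + 0.1 * rng.normal(size=8)   # perturbed "view" of anchor
negatives = [rng.normal(size=8) for _ in range(5)]
loss = info_nce(anchor, positive, negatives)
```

Minimizing this loss over many anchors simultaneously maximizes agreement within positive pairs and disagreement with negatives, which is the MI-maximization behavior the text refers to.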