Abstract:
Contrastive learning, a powerful self-supervised learning paradigm, has shown its efficacy in learning embeddings from independent and identically distributed (IID) as well as non-IID data without relying on label information. Since high-quality discriminative embeddings form a rich embedding space, which benefits model performance on downstream tasks, it is necessary to study how to improve the quality of contrastive node embeddings in graph contrastive learning. However, there has been limited research in this area. In this paper, we investigate how to generate high-quality contrastive node embeddings based on an in-depth analysis of graph contrastive losses. Firstly, we propose a novel and effective method, GLATE, for estimating the temperatures in three mainstream graph contrastive losses during the training phase. Secondly, we derive GLATE, and the derivation reveals the specific relationship between the quality of contrastive node embeddings and temperatures. Finally, extensive experiments on 16 benchmark datasets demonstrate that GLATE consistently outperforms state-of-the-art graph contrastive learning models in terms of both model performance and training efficiency.
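To make the role of the temperature concrete, the sketch below implements a generic temperature-scaled InfoNCE loss, the most common form of contrastive loss. This is only an illustration of where the temperature τ enters such losses; GLATE's actual temperature estimator and the three specific graph contrastive losses are not described in this abstract, and all function names and values here are illustrative assumptions.

```python
import numpy as np

def info_nce_loss(z_anchor, z_pos, z_negs, tau=0.5):
    """Temperature-scaled InfoNCE loss for one anchor embedding.

    tau is the temperature: smaller values sharpen the softmax over
    similarities, so hard negatives are penalized more heavily.
    (This is a generic sketch, not GLATE's estimated temperature.)
    """
    def cos(a, b):
        # Cosine similarity between two embedding vectors.
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    pos = np.exp(cos(z_anchor, z_pos) / tau)
    negs = sum(np.exp(cos(z_anchor, z_n) / tau) for z_n in z_negs)
    # Negative log-probability of picking the positive among all candidates.
    return -np.log(pos / (pos + negs))

rng = np.random.default_rng(0)
anchor = rng.normal(size=8)
positive = anchor + 0.1 * rng.normal(size=8)   # nearby view of the same node
negatives = [rng.normal(size=8) for _ in range(5)]

# Varying tau changes how strongly similarity gaps affect the loss.
print(info_nce_loss(anchor, positive, negatives, tau=0.2))
print(info_nce_loss(anchor, positive, negatives, tau=1.0))
```

The paper's claim is that choosing τ well (per loss, during training) controls how discriminative the resulting node embeddings are, which is what this softmax sharpening intuition captures.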
Date of Conference: 13-16 May 2024
Date Added to IEEE Xplore: 23 July 2024