I. Introduction
Graph Neural Networks (GNNs) have advanced rapidly in recent years, enabling effective modeling of graph-structured data. Owing to this potential, GNNs have been widely adopted in high-stakes applications such as financial analysis [1], recommender systems [2], and drug discovery [3]. However, recent studies have uncovered significant challenges that undermine the trustworthiness of GNN models, including sensitivity to noisy data, the perpetuation of societal biases, and a lack of interpretability, all of which risk causing unintended harm to users and society [4]. For example, GNNs trained on social network data have been shown to embed discriminatory decision-making, amplifying undesirable biases [5]. Consequently, developing trustworthy GNNs has become essential to mitigating these harms and enhancing user trust. The overarching goal of my research is to tackle these issues and foster trustworthy graph learning from the perspectives of reliability, explainability, and fairness, as illustrated in Figure 1. Specifically, my research addresses the following challenges.

(C1) Trust Building. Stakeholders are more likely to trust graph models if they can understand the underlying decision process, especially in applications where human intervention is essential. Reduced trust can lower user satisfaction and make users reluctant to rely on the system for decision-making.

(C2) Uncertainty. Uncertainty in graph learning stems from noisy, incomplete, or ambiguous data as well as inherent model limitations. It can degrade the accuracy and reliability of predictions, undermining user trust and the overall effectiveness of graph-based systems in real-world applications.

(C3) Rarity. Rarity in graph learning concerns accurately representing and modeling infrequent or sensitive groups that are often underrepresented in data. This imbalance can lead to biased outcomes and reduced fairness in model predictions.

(C4) Bias. Bias in graph learning arises from skewed or unrepresentative data, leading to unfair or discriminatory outcomes in model predictions. Such bias can perpetuate existing inequalities and erode user trust in the system's fairness and reliability.

To date, my research has addressed these challenges along three pillars. Reliability aims to ensure that GNN models consistently produce accurate predictions, even under noisy or uncertain conditions. Explainability focuses on making the decision-making processes of models transparent and interpretable to users and stakeholders; clear and effective explanations enable well-informed decisions, leading to better engagement, increased trust, and more effective decision-making. Fairness ensures that GNN models treat all demographic groups equitably, preventing biases that could lead to discriminatory outcomes; by promoting unbiased predictions, it upholds equality, enhances user trust, and avoids reinforcing societal inequalities.