Xiangnan He - IEEE Xplore Author Profile

Showing 1-25 of 62 results

Data hallucination or augmentation is a straightforward solution for few-shot learning (FSL), which aims to classify a novel object from limited training samples. Common hallucination strategies use visual or textual knowledge to simulate the distribution of a given novel category and generate more samples for training. However, the diversity and capacity of generated samples through t…
Text-to-image (T2I) generative models have recently emerged as a powerful tool, enabling the creation of photo-realistic images and giving rise to a multitude of applications. However, the effective integration of T2I models into fundamental image classification tasks remains an open question. A prevalent strategy to bolster image classification performance is through augmenting the training set w…
Knowledge Graphs (KGs) are becoming increasingly essential infrastructures in many applications while suffering from incompleteness issues. The KG Completion (KGC) task automatically predicts missing facts based on an incomplete KG. However, existing methods perform unsatisfactorily in real-world scenarios. On the one hand, their performance will dramatically degrade along with the increasing spar…
Few-shot learning (FSL) aims at recognizing a novel object under limited training samples. A robust feature extractor (backbone) can significantly improve the recognition performance of the FSL model. However, training an effective backbone is a challenging issue since 1) designing and validating structures of backbones are time-consuming and expensive processes, and 2) a backbone trained on the k…
Graph anomaly detection (GAD) under the semi-supervised setting poses a significant challenge due to the distinct structural distributions of anomalous and normal nodes. Specifically, anomalous nodes constitute a minority and exhibit high heterophily and low homophily compared to normal nodes. The neighbor distributions of the two types of nodes are close, making them difficult to distinguish du…
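The snippet above contrasts anomalous and normal nodes by their homophily, i.e., the fraction of a node's neighbors that share its label. A minimal illustration of that measure (the toy graph, labels, and function name are made up for this sketch, not taken from the paper):

```python
# Toy undirected graph as an adjacency list; label 1 marks an anomalous node.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
labels = {0: 0, 1: 0, 2: 0, 3: 1}

def homophily(node):
    """Fraction of a node's neighbors that carry the same label.

    Anomalous nodes (a minority) tend to score low here, since most of
    their neighbors are normal -- the 'high heterophily' the abstract
    refers to.
    """
    nbrs = adj[node]
    return sum(labels[n] == labels[node] for n in nbrs) / len(nbrs)
```

For instance, the anomalous node 3 has no same-labeled neighbor (`homophily(3)` is 0.0), while the normal node 0 is fully homophilous (`homophily(0)` is 1.0).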
Recommender systems suffer from confounding biases when there exist confounders affecting both item features and user feedback (e.g., like or not). Existing causal recommendation methods typically assume confounders are fully observed and measured, forgoing the possible existence of hidden confounders in real applications. For instance, product quality is a confounder since it affects both item pr…
The prevailing machine learning schema typically uses one-pass model inference (e.g., forward propagation) to make predictions in the testing phase. This is inherently different from human students, who double-check their answers during examinations, especially when confidence is low. To bridge this gap, we propose a learning to double-check (L2D) framework, which formulates double-checking as a learnabl…
Real-world recommender systems need to be regularly retrained to keep up with new data. In this work, we consider how to efficiently retrain graph convolution network (GCN)-based recommender models, which are state-of-the-art techniques for collaborative recommendation. To pursue high efficiency, we set the target as using only new data for model updating while not sacrificing the reco…
Zero-shot learning (ZSL) suffers intensely from the domain shift issue, i.e., the mismatch (or misalignment) between the true and learned data distributions for classes without training data (unseen classes). By learning additionally from unlabelled data collected for the unseen classes, transductive ZSL (TZSL) could reduce the shift but only to a certain extent. To improve TZSL, we propose a nove…
Historical interactions are the default choice for recommender model training, which typically exhibit high sparsity, i.e., most user-item pairs are unobserved missing data. A standard choice is treating the missing data as negative training samples and estimating interaction likelihood between user-item pairs along with the observed interactions. In this way, some potential interactions are inevi…
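The snippet above describes the standard implicit-feedback setup, in which unobserved user-item pairs are sampled as negatives during training. A minimal sketch of that sampling scheme under made-up toy data (the function name and interaction dictionary are hypothetical, not from the paper):

```python
import random

# Toy observed interactions: user id -> set of interacted item ids.
observed = {0: {1, 3}, 1: {0, 2}}
num_items = 5

def sample_negative(user, observed, num_items, rng=random):
    """Uniformly sample an item the user has NOT interacted with.

    Treating such unobserved pairs as negatives is the standard but
    noisy choice: as the abstract notes, some of them are actually
    potential (unlabelled) positive interactions.
    """
    while True:
        item = rng.randrange(num_items)
        if item not in observed[user]:
            return item

neg = sample_negative(0, observed, num_items)
assert neg not in observed[0]
```

A pairwise loss (e.g., BPR-style) would then score each observed pair against such a sampled negative.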
Machine learning models are increasingly applied to loan default prediction to reduce the labor cost of financial institutions and the waiting time of lenders. We find that existing loan default prediction models still lack minimax fairness, i.e., they encounter significant performance drops on underrepresented subpopulations. The main cause of this trustworthiness issue is pursuing Empirical Risk Mini…
Recommender systems aim at helping users to discover interesting items and assisting business owners to obtain more profits. Nonetheless, traditional recommendations fail to explore the varying importance of product characteristics for different product domains. In light of this, we propose a novel probabilistic model for recommendation, which could learn products’ characteristics in a fine-graine…
Personalized recommendation is becoming increasingly important in online information systems in the current era of information explosion. In real-world scenarios, when a user considers which items to consume, the decision choice may be affected by her friends. For example, she may ask her friends for suggestions or be attracted by products purchased by one friend. As such, to provide satisfactory …
Sequential recommendation has become a hot research topic, which seeks to predict the next interesting item for each user based on their action sequence. While previous methods have made many efforts to capture the dynamics of sequential patterns, we contend that they still suffer from two inherent limitations: 1) they fail to model item transition patterns in an efficient and time-sensitive manne…
Recommender systems usually suffer from severe popularity bias: the collected interaction data usually exhibits a quite imbalanced or even long-tailed distribution over items. Such a skewed distribution may result from users’ conformity to the group, which deviates from users’ true preferences. Existing efforts for tackling this issue mainly focus on completely eliminating popularity bi…
Explainability is crucial for probing graph neural networks (GNNs), answering questions like “Why does the GNN model make a certain prediction?”. Feature attribution is a prevalent technique that highlights the explanatory subgraph in the input graph, which plausibly leads the GNN model to make its prediction. Various attribution methods have been proposed to exploit gradient-like or attention scores …
Next-item recommendation has been a hot research topic, which aims at predicting the next action by modeling users’ behavior sequences. While previous efforts toward this task have captured complex item transition patterns, we argue that they still suffer from three limitations: 1) they have difficulty explicitly capturing the impact of the inherent order of item transition patterns; 2) onl…
Graph convolution networks (GCNs), with their ability to efficiently capture high-order connectivity in graphs, have been widely applied in recommender systems. Stacking multiple neighbor-aggregation layers is the major operation in GCNs. It implicitly captures popularity features because the number of neighbor nodes reflects the popularity of a node. However, existing GCN-based methods ignore a universal …
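The snippet above notes that GCN-style neighbor aggregation implicitly encodes popularity, since a node's degree is its number of neighbors. A toy illustration with a symmetrically normalized aggregation step, as used in GCN-based recommenders (the matrices and embeddings below are made up, not from the paper):

```python
import numpy as np

# Tiny user-item interaction matrix R (3 users x 2 items).
# Item 0 is "popular" (all users interact with it), item 1 is not.
R = np.array([[1., 1.],
              [1., 0.],
              [1., 0.]])

user_emb = np.ones((3, 4))   # toy user embeddings, all identical
d_user = R.sum(axis=1)       # user degrees
d_item = R.sum(axis=0)       # item degrees = item popularity

# One symmetrically normalized aggregation step:
# e_i = sum_u R[u, i] * e_u / sqrt(d_u * d_i)
norm = R / np.sqrt(np.outer(d_user, d_item))
item_emb = norm.T @ user_emb
```

Even though every user embedding is identical here, the popular item 0 (degree 3) ends up with a larger aggregated embedding norm than item 1 (degree 1), showing how degree leaks into the representation.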
Generating recommendations based on user-item interactions and user-user social relations is a common use case in web-based systems. These connections can be naturally represented as graph-structured data, and thus utilizing graph neural networks (GNNs) for social recommendation has become a promising research direction. However, existing graph-based methods fail to consider the bias offsets of us…
Influenced by the great success of deep learning in computer vision and language understanding, research in recommendation has shifted to inventing new recommender models based on neural networks. In recent years, we have witnessed significant progress in developing neural recommender models, which generalize and surpass traditional recommender models owing to the strong representation power of ne…
Recent studies on Graph Convolutional Networks (GCNs) reveal that the initial node representations (i.e., the node representations before the first-time graph convolution) largely affect the final model performance. However, when learning the initial representation for a node, most existing work linearly combines the embeddings of node features, without considering the interactions among the featu…
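The snippet above contrasts linearly combining feature embeddings with modeling interactions among them. One common way to add such interactions is a factorization-machine-style second-order term; the sketch below is a generic illustration of that idea (the toy embeddings and variable names are assumptions, not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(0)
# A node with 3 features, each mapped to a 4-d embedding.
feat_emb = rng.normal(size=(3, 4))

# Linear combination: what the snippet says most existing work does.
init_linear = feat_emb.sum(axis=0)

# FM-style pairwise interaction term, computed via the identity
# sum_{i<j} e_i * e_j = 0.5 * ((sum_i e_i)^2 - sum_i e_i^2)
# (all operations elementwise), which avoids the explicit double loop.
pairwise = 0.5 * (feat_emb.sum(axis=0) ** 2 - (feat_emb ** 2).sum(axis=0))

# An interaction-aware initial representation.
init_interacted = init_linear + pairwise
```

The identity keeps the interaction term linear in the number of features rather than quadratic, which is why it is popular in FM-style models.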
Making accurate recommendations for cold-start users has been a longstanding and critical challenge for recommender systems (RS). Cross-domain recommendations (CDR) offer a solution to such a cold-start problem when there is insufficient data for users who have rarely used the system. An effective approach in CDR is to leverage the knowledge (e.g., user representations) learned from a …
Bundle recommendation aims to recommend a bundle of items for a user to consume as a whole. Related work can be divided into two categories: 1) recommending the platform's prebuilt bundles to users; 2) generating personalized bundles for users. In this work, we propose two graph neural network models, a BGCN model (short for Bundle Graph Convolutional Network) for prebuilt bundle recommendation, and…
In recent years, much research effort on recommendation has been devoted to mining user behaviors, i.e., collaborative filtering, along with the general information which describes users or items, e.g., textual attributes, categorical demographics, product images, and so on. Price, an important factor in marketing, which determines whether a user will make the final purchase decision on an item, …
Recent studies have shown that graph neural networks (GNNs) are vulnerable to perturbations due to a lack of robustness and can therefore be easily fooled. Currently, most works on attacking GNNs mainly use gradient information to guide the attack and achieve outstanding performance. However, their high time and space complexity makes them unmanageable for large-scale graphs and becomes …