I. Introduction
In recent years, the proliferation of recommender systems has raised concerns that learned recommendation models may be discriminatory with respect to sensitive attributes such as gender, occupation, and age, a problem known as group unfairness in recommendation [1]. Group unfairness arises when a recommender system delivers different levels of recommendation quality to user groups defined by these sensitive attributes, leading to potential biases and inequalities. Although deep learning methods for recommender systems can encode abstract representations into embeddings for accurate prediction [2], [3], [4], the user embeddings learned by deep neural networks often encode or correlate with sensitive attributes, ultimately compromising group fairness when predicting users' feedback.