FairSort: Learning to Fair Rank for Personalized Recommendations in Two-Sided Platforms


Abstract:

Traditional recommendation systems focus on maximizing user satisfaction by suggesting their favorite items. This user-centric approach may lead to an unfair distribution of exposure among providers. Conversely, a provider-centric design might be unfair to users. This paper therefore proposes a re-ranking model, FairSort, to find a trade-off among user-side fairness, provider-side fairness, and personalized recommendation utility. Previous works typically treat this issue as a knapsack problem, incorporating fairness on both sides as constraints. In this paper, we adopt a novel perspective, treating each recommendation list as a runway rather than a knapsack: each item on the runway gains a velocity and runs for a specific time, achieving re-ranking for two-sided fairness. Meanwhile, we ensure a Minimum Utility Guarantee for personalized recommendations by designing a binary search approach, which provides more reliable recommendations than the conventional greedy strategy based on the knapsack formulation. We further broaden the applicability of FairSort by designing two versions, for online and offline recommendation scenarios. Theoretical analysis and extensive experiments on real-world datasets indicate that FairSort ensures more reliable personalized recommendations while accounting for fairness on both the provider and user sides.
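
To make the binary-search idea in the abstract concrete, the following Python snippet is a minimal sketch of how a Minimum Utility Guarantee could be enforced: search for the strongest fairness adjustment whose re-ranked list still retains at least a fraction alpha of the base ranking's utility. This is our illustration, not the paper's actual FairSort procedure; the names rerank_with_weight, utility, and alpha are hypothetical placeholders.

# Hypothetical sketch of a binary search for a Minimum Utility Guarantee.
# Not the authors' implementation; rerank_with_weight and utility are
# assumed callables supplied by the caller.

def binary_search_fair_weight(scores, rerank_with_weight, utility,
                              alpha=0.9, iters=30):
    """Return the largest fairness weight w in [0, 1] whose re-ranked list
    keeps at least alpha of the base (purely relevance-ranked) utility."""
    # Utility of the ranking with no fairness adjustment (w = 0).
    base = utility(rerank_with_weight(scores, 0.0))
    lo, hi, best = 0.0, 1.0, 0.0
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if utility(rerank_with_weight(scores, mid)) >= alpha * base:
            best, lo = mid, mid  # guarantee still holds: try stronger fairness
        else:
            hi = mid             # guarantee violated: weaken fairness
    return best

Assuming utility is non-increasing in the fairness weight, the search converges to (approximately) the strongest fairness adjustment that still satisfies the guarantee, which is the reliability property the abstract contrasts with a greedy knapsack-style strategy.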
Published in: IEEE Transactions on Knowledge and Data Engineering ( Volume: 37, Issue: 2, February 2025)
Page(s): 641 - 654
Date of Publication: 02 December 2024


I. Introduction

Although recommender systems bring great convenience to users by mining their interests to provide personalized recommendations, they also introduce fairness problems. This is because recommender systems typically carry inherent biases, such as data and algorithmic bias [1], [2], [3], [4], [5], resulting in unfairness in both the recommendation process and its outcomes. During the process, models may encode biased information (e.g., sensitive attributes such as race and gender), leading to discriminatory practices like offering more technical job opportunities to men than to women [6], [7]. Regarding outcomes, models may generate biased recommendations across user groups or allocate exposure opportunities unfairly among providers [8], [9].

References
[1] A. Castelnovo, R. Crupi, G. Greco, D. Regoli, I. G. Penco, and A. C. Cosentini, "A clarification of the nuances in the fairness metrics landscape," Sci. Rep., vol. 12, no. 1, 2022.
[2] N. Mehrabi, F. Morstatter, N. Saxena, K. Lerman, and A. Galstyan, "A survey on bias and fairness in machine learning," ACM Comput. Surv., vol. 54, no. 6, pp. 1-35, 2021.
[3] T. Schnabel, A. Swaminathan, A. Singh, N. Chandak, and T. Joachims, "Recommendations as treatments: Debiasing learning and evaluation," Proc. Int. Conf. Mach. Learn., pp. 1670-1679, 2016.
[4] J. Ding, Y. Quan, X. He, Y. Li, and D. Jin, "Reinforced negative sampling for recommendation with exposure data," Proc. Int. Joint Conf. Artif. Intell., pp. 2230-2236, 2019.
[5] A. Vardasbi, M. de Rijke, and I. Markov, "Cascade model-based propensity estimation for counterfactual learning to rank," Proc. 43rd Int. ACM SIGIR Conf. Res. Develop. Inf. Retrieval, pp. 2089-2092, 2020.
[6] C. Zhao, L. Wu, P. Shao, K. Zhang, R. Hong, and M. Wang, "Fair representation learning for recommendation: A mutual information perspective," Proc. AAAI Conf. Artif. Intell., pp. 4911-4919, 2023.
[7] Z. Zeng, R. Islam, K. N. Keya, J. Foulds, Y. Song, and S. Pan, "Fair representation learning for heterogeneous information networks," Proc. Int. AAAI Conf. Web Social Media, pp. 877-887, 2021.
[8] Y. Ge et al., "Towards long-term fairness in recommendation," Proc. 14th ACM Int. Conf. Web Search Data Mining, pp. 445-453, 2021.
[9] Y. Li, H. Chen, Z. Fu, Y. Ge, and Y. Zhang, "User-oriented fairness in recommendation," Proc. Web Conf., pp. 624-632, 2021.
[10] Z. Fu et al., "Fairness-aware explainable recommendation over knowledge graphs," Proc. 43rd Int. ACM SIGIR Conf. Res. Develop. Inf. Retrieval, pp. 69-78, 2020.
[11] Z. Zhu, J. Kim, T. Nguyen, A. Fenton, and J. Caverlee, "Fairness among new items in cold start recommender systems," Proc. 44th Int. ACM SIGIR Conf. Res. Develop. Inf. Retrieval, pp. 767-776, 2021.
[12] A. Beutel et al., "Fairness in recommendation ranking through pairwise comparisons," Proc. 25th ACM SIGKDD Int. Conf. Knowl. Discov. Data Mining, pp. 2212-2220, 2019.
[13] C. Xu et al., "P-MMF: Provider max-min fairness re-ranking in recommender system," Proc. ACM Web Conf., pp. 3701-3711, 2023.
[14] G. K. Patro, A. Biswas, N. Ganguly, K. P. Gummadi, and A. Chakraborty, "FairRec: Two-sided fairness for personalized recommendations in two-sided platforms," Proc. Web Conf., pp. 1194-1204, 2020.
[15] Y. Wu, J. Cao, G. Xu, and Y. Tan, "TFROM: A two-sided fairness-aware recommendation model for both customers and providers," Proc. 44th Int. ACM SIGIR Conf. Res. Develop. Inf. Retrieval, pp. 1013-1022, 2021.
[16] M. Naghiaei, H. A. Rahmani, and Y. Deldjoo, "CPFair: Personalized consumer and producer fairness re-ranking for recommender systems," Proc. 45th Int. ACM SIGIR Conf. Res. Develop. Inf. Retrieval, pp. 770-779, 2022.
[17] R. M. Karp, Reducibility Among Combinatorial Problems, Boston, MA, USA: Springer, pp. 85-103, 1972.
[18] J. Li, Y. Ren, and K. Deng, "FairGAN: GANs-based fairness-aware learning for recommendations with implicit feedback," Proc. ACM Web Conf., pp. 297-307, 2022.
[19] L. Wu, L. Chen, P. Shao, R. Hong, X. Wang, and M. Wang, "Learning fair representations for recommendation: A graph-based perspective," Proc. Web Conf., pp. 2198-2208, 2021.
[20] Y. Wu et al., "Selective fairness in recommendation via prompts," Proc. 45th Int. ACM SIGIR Conf. Res. Develop. Inf. Retrieval, pp. 2657-2662, 2022.
[21] J. Wang et al., "Make fairness more fair: Fair item utility estimation and exposure re-distribution," Proc. 28th ACM SIGKDD Conf. Knowl. Discov. Data Mining, pp. 1868-1877, 2022.
[22] M. Heuss, F. Sarvi, and M. de Rijke, "Fairness of exposure in light of incomplete exposure estimation," Proc. 45th Int. ACM SIGIR Conf. Res. Develop. Inf. Retrieval, pp. 759-769, 2022.
[23] J. Liu, "Toward a two-sided fairness framework in search and recommendation," Proc. 2023 Conf. Hum. Inf. Interact. Retrieval, pp. 236-246, 2023.
[24] Y. Wang et al., "Intersectional two-sided fairness in recommendation," Proc. ACM Web Conf., pp. 3609-3620, 2024.
[25] C. Wang et al., "Two-sided calibration for quality-aware responsible recommendation," Proc. 17th ACM Conf. Recommender Syst., pp. 223-233, 2023.
[26] E. Budish, "The combinatorial assignment problem: Approximate competitive equilibrium from equal incomes," J. Political Economy, vol. 119, no. 6, pp. 1061-1103, 2011.
[27] A. J. Biega, K. P. Gummadi, and G. Weikum, "Equity of attention: Amortizing individual fairness in rankings," Proc. 41st Int. ACM SIGIR Conf. Res. Develop. Inf. Retrieval, pp. 405-414, 2018.
[28] L. Wang and T. Joachims, "User fairness, item fairness, and diversity for rankings in two-sided markets," Proc. 2021 ACM SIGIR Int. Conf. Theory Inf. Retrieval, pp. 23-41, 2021.
[29] M. B. Zafar, I. Valera, M. Gomez-Rodriguez, and K. P. Gummadi, "Fairness constraints: A flexible approach for fair classification," J. Mach. Learn. Res., vol. 20, no. 1, pp. 2737-2778, 2019.
[30] G. Goh, A. Cotter, M. Gupta, and M. P. Friedlander, "Satisfying real-world goals with dataset constraints," Proc. Adv. Neural Inf. Process. Syst., pp. 2423-2431, 2016.