I. Introduction
Music is an artistic medium that brings joy to people and expresses human thoughts and feelings through sound. With the spread of the Internet and the growth of the digital music market, there is an increasing need for music search and recommendation systems [1] that provide easy and fast access to diverse music in large music datasets. Conventional music search and recommendation systems automatically classify music based on genre [2], [3], associated emotion [4], [5], artist [6], lyrics [7], album [8], emotion displayed in videos [9], user profiling [10]–[12], content-based features [13]–[17], and users’ social media interactions [18], and provide appropriate search and recommendation results to users. Recent music recommendation models typically employ hybrid systems [19], [20] that combine collaborative filtering [21], content-based filtering [22], context-based filtering [23]–[26], and metadata-based models [27], along with several other parameters. However, most music search and recommendation systems have been developed from a system-centric rather than a user-centric perspective; moreover, although emotion is an important factor in music selection, research on the emotions and expressions of music listeners remains insufficient.