1. Introduction
With the rapid development of multimedia and mobile camera technologies, millions of User Generated Content (UGC) videos have emerged, raising a crucial and challenging question: how to measure the subjective quality of UGC videos accurately and reliably. Unlike videos in other datasets, UGC videos are usually captured and uploaded by amateur photographers. Consequently, UGC videos are susceptible to extremely diverse and complicated degradations, i.e., hybrid distortions, including underexposure, overexposure, jitter, noise, color shift, etc. In addition, because there are few constraints on where and what amateurs shoot, the content of UGC videos is highly diverse, spanning natural scenes, animations, games, screen content, etc. These two aspects (complicated distortions and diverse contents) severely hinder the application of existing video quality assessment (VQA) methods to UGC videos. It is therefore urgent to develop an effective UGC VQA method that overcomes these challenges and achieves human-like quality assessment for UGC videos.