I. Introduction
Content moderation is essential in television and new media to protect viewers from harmful and illegal content. If malicious content appears on a media platform, the platform's reputation is severely damaged and it may also face penalties from government regulators. In the traditional mode, media content is moderated by professional teams within media providers to ensure content quality. With the rapid growth of content, especially online video such as user-generated content (UGC), media providers are struggling to cope with the sheer volume of video. As a result, third-party moderation has recently become an important and widely used way for media providers to reduce their workload. However, moderation quality varies from institution to institution, depending on technical capabilities, moderator skills, and sense of responsibility. To keep illicit content off their platforms, media providers therefore have to double-check third-party moderation results before content goes online, which re-creates a heavy workload and leaves the underlying problem unsolved. The root cause is the lack of a trustworthy quality management mechanism for third-party moderation, one that can fairly assess moderation quality and apply appropriate rewards or penalties to moderation institutions.