I. Introduction
We consider a distributed multiuser system in which individual entities hold observations or perceptions of one another, while the truth about each entity is known only to that entity, who may have an interest in withholding or distorting it. We ask whether the system as a whole can arrive at correct perceptions or assessments of all users, referred to as their reputations, by encouraging or incentivizing the users to participate in a collective effort without violating their privacy or self-interest. In this paper, we investigate this problem using a mechanism-design-theoretic approach [9], [17]. We construct a sequence of mechanisms and examine, under each, whether a user has an incentive to participate; if so, what input they would provide; and whether their participation ultimately benefits the system's (global) assessment of all individuals.
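To make the setting concrete, the following toy sketch (our own illustration, not a mechanism from this paper) shows why naive aggregation of peer reports is vulnerable to self-interest: each user's true quality is private, peers hold noisy observations, and a naive mechanism that simply averages reports can be manipulated at no cost by a strategic user who badmouths everyone else. The quantities `true_quality`, the noise model, and the averaging rule are all hypothetical assumptions for illustration.

```python
import random

random.seed(0)

N = 5
# Each user's true quality is private information, known only to that user.
true_quality = [random.uniform(0, 1) for _ in range(N)]

def observe(i, j, noise=0.1):
    """User j's noisy perception of user i's true quality."""
    return true_quality[i] + random.uniform(-noise, noise)

def naive_reputation(reports):
    """Naive mechanism: average all peer reports about each user.

    reports[j][i] = user j's report about user i (None when j == i).
    Nothing ties a report to the truth, so misreporting is costless.
    """
    scores = []
    for i in range(N):
        vals = [reports[j][i] for j in range(N) if j != i]
        scores.append(sum(vals) / len(vals))
    return scores

# Truthful play: every user reports their honest noisy observation.
truthful = [[None if j == i else observe(i, j) for i in range(N)]
            for j in range(N)]

# Strategic user 0 badmouths all rivals to improve their relative standing.
strategic = [row[:] for row in truthful]
for i in range(1, N):
    strategic[0][i] = 0.0

honest_scores = naive_reputation(truthful)
distorted_scores = naive_reputation(strategic)
# User 0's own score is untouched, while every rival's score drops:
# the naive mechanism is manipulable, which motivates designing
# mechanisms under which truthful participation is in each user's interest.
```

The manipulation succeeds precisely because the naive rule imposes no cost on dishonest reports; the mechanisms studied in this paper address that gap by aligning participation and truthful reporting with self-interest.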