I. Introduction
In peer-to-peer (P2P) systems, resources are highly distributed and stored by individual peers. In such environments, where each peer may represent a different self-interested entity, designing algorithms for sharing resources becomes a difficult task. Traditional distributed algorithm design considers only two types of behavior: obedient (i.e., following the algorithm) or malicious (i.e., “playing” against the others). In contrast, users in P2P systems may behave strategically, being neither obedient nor malicious but simply rational. It was established in [1], for example, that in the absence of incentives a large fraction of peers have a priori no motivation to share, while they are naturally inclined to consume system resources. The effects of “free-riding” (consuming resources without contributing) have been extensively studied in recent years [1], [13]. This behavior can severely degrade system performance or even lead to system collapse [14]. As emphasized by Shneidman et al. [14], different approaches may be adopted to cope with rationality: i) ignore it and expect the system to do its best despite self-interested peers, ii) limit the effect a rational peer can have on the system by using trusted mechanisms, or iii) adopt fault-tolerance techniques. However, none of these approaches benefits from the resources that rational, self-interested peers could potentially offer. The system must therefore motivate each peer to direct its rational behavior toward system efficiency. Solutions come from economics, and more precisely from mechanism design theory, which has long called for algorithms that work correctly in the presence of predictable selfish behavior.