Reputation systems, by supporting users’ decision making in selecting “reliable strangers,” are examples of intelligent decision systems. We are often required to interact with, or even trust, strangers in online applications. Examples include electronic commerce (e.g., eBay and Amazon), car pooling (e.g., Uber), swapping and bartering websites (e.g., Swapsity and SwapStyle), and accommodation for travelers (e.g., Airbnb and CouchSurfing). To minimize the risk of our interactions in such applications, we must identify reliable partners (e.g., sellers, swappers, drivers, or renters). Thus, computational models of trust and reputation, which learn how trustworthy a user is given her previous transactions, are instrumental in our decision making for selecting reliable partners. Their impact on marketplaces is significant, as mistrust and uncertainty can cause market failure.
Focusing on the possibility that fraudulent users might exploit a trust model itself, in order to be perceived as trustworthy and reliable by others, our research argues that exploitation resistance, the property of being impervious to adversaries who aim to abuse the model, is a crucial feature of trust models. We develop a game-theoretic testbed for modelling attacks and evaluating trust model performance. We demonstrate how prominent trust models can be exploited by the con-man attack: a simple cyclical behaviour in which a period of good transactions is followed by a single bad transaction (e.g., a fraudulent seller sells $1$ faulty product after every $9$ working products and still keeps his reputation at or above $90\%$). We
propose new intelligent algorithms to prevent such exploitation, and demonstrate their utility.
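The mechanics of the con-man attack can be illustrated with a small simulation. The sketch below is not the paper’s testbed or any particular published trust model; it assumes a naive averaging trust score (the fraction of good transactions) and shows that a seller who follows the $9$-good, $1$-bad cycle keeps that score at or above $0.9$ indefinitely:

```python
# Minimal sketch of the con-man attack against a naive averaging trust model.
# Assumption: trust is the fraction of good transactions (1 = good, 0 = bad);
# real trust models (and the paper's testbed) are more sophisticated.

def average_trust(outcomes):
    """Naive trust score: fraction of good outcomes in the history."""
    return sum(outcomes) / len(outcomes)

def con_man_outcomes(cycles, good_per_cycle=9):
    """Cyclical con-man behaviour: `good_per_cycle` good deals, then one bad."""
    history = []
    for _ in range(cycles):
        history += [1] * good_per_cycle + [0]
    return history

history = con_man_outcomes(cycles=5)      # 5 cycles: 45 good, 5 bad deals
print(average_trust(history))             # -> 0.9, reputation stays high

# The score never falls below 0.9 at any point in the history:
prefix_scores = [average_trust(history[:i + 1]) for i in range(len(history))]
print(min(prefix_scores))                 # -> 0.9
```

Because the single bad transaction is diluted by the nine good ones that precede it, a model that weights all transactions equally never flags the attacker, which is precisely the exploitation the proposed algorithms are designed to resist.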