
1 Eliciting Honest Feedback
1. Eliciting Honest Feedback: The Peer-Prediction Model (Miller, Resnick, Zeckhauser)
2. Minimum Payments that Reward Honest Feedback (Jurca, Faltings)
Nikhil Srivastava, Hao-Yuh Su

3 Eliciting Feedback
Fundamental purpose of reputation systems.
Review: the general setup
– Information about the value of some item is distributed among individuals: external goods (Netflix, Amazon, Epinions, admissions) or the individuals themselves (eBay, PageRank).
– The aggregated information is valuable for individual or group decisions.
– How do we gather and disseminate that information?

5 Challenges
Underprovision
– the inconvenience cost of contributing
Dishonesty
– niceness, fear of retaliation
– conflicting motivations
Reward systems
– motivate participation and honest feedback
– monetary or non-monetary (prestige, privilege, pure competition)

7 Overcoming Dishonesty
Need to distinguish good reports from bad
– explicit reward systems require an objective outcome and public knowledge
– e.g. stock markets, weather forecasting
But what if the outcome is…
– subjective? (product quality, taste)
– private? (breakdown frequency, seller reputability)
Naive solution: reward peer agreement
– invites information cascades and herding

8 Peer Prediction
Basic idea
– reports determine a probability distribution over other raters' reports
– reward each rater based on the predictive power of his report for a reference rater's report
– by taking advantage of proper scoring rules, honest reporting becomes a Nash equilibrium

9 Outline
Peer Prediction method
– model and assumptions
– result: honest reporting is a strict Nash equilibrium
– underlying intuition through an example
Extensions/applications
– primary practical concerns
– (weak) assumptions: sequential reporting, continuous signals, risk aversion
(Strong) assumptions
Other limitations

10–11 Information Flow – Model
[Diagram: a PRODUCT of type t sends a signal S to each rater; the rater sends an announcement a to the CENTER; the center pays the rater a transfer τ(a).]

12 Information Flow – Example
[Diagram: a PLUMBER of type H or L; signal = {h (high), l (low)}. Three raters observe signals h, h, l and announce to the center, which here pays $1 when announcements agree and $0 otherwise.]

13 Assumptions – Model
[Diagram: a PRODUCT of type t ∈ {1, …, T} emitting signals with distribution f(s | t).]
– fixed type
– finite T
– common prior: distribution p(t)
– common knowledge: distribution f(s | t)
– linear utility
– stochastic relevance

15 Stochastic Relevance
Informally
– the raters observe the same product, so their signals are dependent
– a rater's observation (realization) should change the posterior over the type, p(t), and thus the distribution of the other rater's signal
– Rolex v. Faux-lex
– generically satisfied if different types yield different signal distributions
Formally
– S_i is stochastically relevant for S_j iff the distribution of (S_j | S_i) differs across realizations of S_i: for any distinct realizations s and s′ of S_i, there is some s_j such that Pr(S_j = s_j | S_i = s) ≠ Pr(S_j = s_j | S_i = s′)

16 Assumptions – Example
– finite T: the plumber is either H or L
– fixed type: the plumber's quality does not change
– common prior p(t): p(H) = p(L) = .5
– stochastic relevance: we need the good plumber's signal distribution to differ from the bad plumber's
– common knowledge of f(s | t): p(h | H) = .85, p(h | L) = .45
– note these give the marginals p(h) = .65 and p(l) = .35 (see the sketch below)
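
A minimal sketch of the marginal computation (the variable names are ours, not the paper's):

```python
# Marginal signal probabilities for the plumber example.
p_type = {"H": 0.5, "L": 0.5}        # common prior p(t)
f_h = {"H": 0.85, "L": 0.45}         # f(h | t); f(l | t) = 1 - f(h | t)

# p(h) = sum over types of p(t) * f(h | t)
p_h = sum(p_type[t] * f_h[t] for t in p_type)
print(round(p_h, 2), round(1 - p_h, 2))   # 0.65 0.35
```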

17 Definitions – Model
T types, M signals, N raters
– signals: S = (S_1, …, S_N), where each S_i takes a value in {s_1, …, s_M}
– announcements: a = (a_1, …, a_N), where each a_i ∈ {s_1, …, s_M}
– transfers: τ(a) = (τ_1(a), …, τ_N(a))
– announcement strategy for player i: a_i = (a_i^1, …, a_i^M), one announcement per possible signal

18 Definitions – Example
2 types, 2 signals, 3 raters
– signals: S = (h, h, l)
– announcements: a = (h, h, l)
– transfers: τ(a) = (τ_1(a), τ_2(a), τ_3(a))
– announcement strategy for player 2: a_2 = (a_2^h, a_2^l)
– total set of strategies: (h, h), (h, l), (l, h), (l, l)

19 Best Responses – Model
Each player decides an announcement strategy a_i.
a_i is a best response to the other strategies a_-i if, conditional on S_i = s_m,
E[τ_i(a_i^m, a_-i) | S_i = s_m] ≥ E[τ_i(a, a_-i) | S_i = s_m] for every announcement a.
A best-response strategy maximizes the rater's expected transfer, taken over the other raters' signals, conditional on S_i = s_m.
It is a Nash equilibrium if this holds for all i.

20 Best Responses – Example
Player 1 receives signal h; player 2's strategy is to report a_2.
Player 1 reporting h is a best response if
E[τ_1(h, a_2) | S_1 = h] ≥ E[τ_1(l, a_2) | S_1 = h].

21 Peer Prediction
Find a reward mechanism that induces honest reporting
– i.e. one where a_i = S_i for all i is a Nash equilibrium
We will need proper scoring rules.

22 Proper Scoring Rules
Definition
– for two variables S_i and S_j, a scoring rule assigns to each announcement a_i of S_i a score for each realization of S_j: R(s_j | a_i)
– the rule is proper if the expected score is maximized by announcing the true realization of S_i, and strictly proper if that maximizer is unique (see the check below)
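
To make "proper" concrete, here is a small check with the logarithmic rule; the belief numbers are illustrative, not from the slides. Under a true belief q about S_j, the expected log score of an announced distribution p is maximized exactly at p = q:

```python
import math

def log_score(p, realized):
    """Logarithmic scoring rule: score ln p(s_j) when S_j realizes as s_j."""
    return math.log(p[realized])

q = {"h": 0.7, "l": 0.3}   # true belief about S_j (illustrative)
for p in ({"h": 0.7, "l": 0.3}, {"h": 0.9, "l": 0.1}, {"h": 0.5, "l": 0.5}):
    expected = sum(q[s] * log_score(p, s) for s in q)
    print(p, round(expected, 4))
# The honest announcement (p == q) attains the highest expected score.
```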

24 Applying Scoring Rules
Before: prediction markets (Hanson)
– S_i = S_j = reality, a_i = agent's report
– R(reality | report)
– proper scoring rules ensure honest reports: a_i = S_i
– stochastic relevance between private information and the public signal is automatically satisfied
What if there's no public signal? Use other raters' reports.
Now: predictive peers
– S_i = my signal, S_j = your signal, a_i = my report
– R(your report | my report)

25 How it Works
For each rater i, choose a different reference rater r(i).
Rater i is rewarded for predicting rater r(i)'s announcement:
– τ*_i(a_i, a_r(i)) = R(a_r(i) | a_i)
– based on the updated beliefs about r(i)'s announcement given i's announcement (sketched below)
Proposition: for any strictly proper scoring rule R, the reward system with transfers τ*_i makes truthful reporting a strict Nash equilibrium.
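
A sketch of the center's side of this mechanism, reusing the plumber parameters from earlier slides (the function names are ours): the center takes i's announcement at face value, forms a posterior over the reference rater's signal, and pays the log score of what r(i) actually announces.

```python
import math

p_type = {"H": 0.5, "L": 0.5}                            # common prior p(t)
f = {"H": {"h": 0.85, "l": 0.15},                        # f(s | t)
     "L": {"h": 0.45, "l": 0.55}}

def signal_posterior(a_i):
    """p(S_r(i) = s | S_i = a_i), treating the announcement as the signal."""
    joint = {t: p_type[t] * f[t][a_i] for t in p_type}   # p(t) f(a_i | t)
    total = sum(joint.values())                          # p(a_i)
    post_t = {t: v / total for t, v in joint.items()}    # p(t | a_i)
    return {s: sum(post_t[t] * f[t][s] for t in post_t) for s in ("h", "l")}

def transfer(a_i, a_ref):
    """tau*_i(a_i, a_r(i)) = R(a_r(i) | a_i), with R the logarithmic rule."""
    return math.log(signal_posterior(a_i)[a_ref])

print(round(transfer("h", "h"), 3))   # e.g. both announce h: about -0.34
```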

26 Proof of Proposition
If player i receives signal s*, he seeks to maximize his expected transfer E[R(a_r(i) | a_i) | S_i = s*].
– S_i is stochastically relevant for S_r(i), and since r(i) reports honestly (a_r(i) = s_r(i)), S_i is stochastically relevant for a_r(i)
– since R is strictly proper, the expected score is uniquely maximized by the announcement of the true realization of S_i
– hence it is uniquely maximized at a_i = s*

27 Peer Prediction Example
Setup: p(H) = p(L) = .5; p(h | H) = .85, p(h | L) = .45. Player 1 observes low (S_1 = l) and must decide a_1 ∈ {h, l}; player 2 reports honestly (a_2 = s_2).
Assume logarithmic scoring
– τ_1(a_1, a_2) = R(a_2 | a_1) = ln p(a_2 | a_1)
Which announcement maximizes the expected payoff?
– Note that rewarding peer agreement would incentivize dishonesty here: even after observing l, the peer's report is more likely to be h.

28 Peer Prediction Example
With the same setup, and expectations taken under player 1's true beliefs p(s_2 | S_1 = l):
– a_1 = l (honest) yields expected transfer ≈ -.69
– a_1 = h (false) yields expected transfer ≈ -.76
Honest reporting is the strict best response (reproduced in the sketch below).
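
The numbers above can be reproduced in a few lines (a sketch under the slide's parameters; the rounding is ours):

```python
import math

p_type = {"H": 0.5, "L": 0.5}
f = {"H": {"h": 0.85, "l": 0.15}, "L": {"h": 0.45, "l": 0.55}}

def p_s2_given(s1):
    """Posterior over player 2's signal given player 1's (announced) signal."""
    joint = {t: p_type[t] * f[t][s1] for t in p_type}
    post_t = {t: v / sum(joint.values()) for t, v in joint.items()}
    return {s: sum(post_t[t] * f[t][s] for t in post_t) for s in ("h", "l")}

belief = p_s2_given("l")       # player 1's true beliefs after observing l
for a1 in ("l", "h"):
    scored = p_s2_given(a1)    # distribution the center scores against
    ev = sum(belief[s2] * math.log(scored[s2]) for s2 in belief)
    print(a1, round(ev, 2))    # l -0.69, h -0.76
```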

29 Things to Note
Players don't have to perform complicated Bayesian reasoning if they
– trust the center to accurately compute posteriors
– believe other players will report honestly
Honest reporting is not the unique equilibrium
– collusion is possible

31 Primary Practical Concerns
Examples
– inducing effort: a fixed cost c > 0 of reporting
– better information: users seek multiple samples
– participation constraints
– budget balancing
Basic idea (see the sketch below)
– an affine rescaling of the transfers (a·τ + b) overcomes the obstacle
– it preserves the honesty incentive
– it increases the budget … see the second paper
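
A minimal sketch of why the rescaling is safe, reusing slide 28's expected transfers (the constants a and b are illustrative): a positive affine map preserves which announcement maximizes the expected transfer, while b can lift payments enough to satisfy participation or cover a reporting cost.

```python
honest, false = -0.69, -0.76   # expected transfers from slide 28
a, b = 10.0, 8.0               # illustrative rescaling constants (a > 0)

rescaled = {"honest": a * honest + b, "false": a * false + b}
assert rescaled["honest"] > rescaled["false"]   # honesty still preferred
assert rescaled["honest"] > 0                   # positive expected payment
print(rescaled)                # roughly {'honest': 1.1, 'false': 0.4}
```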

33 Extensions to Model
Sequential reporting
– allows immediate use of reports
– let rater i predict the report of rater i + 1
– the scoring rule must reflect the changed beliefs about the product's type due to reports 1, …, i - 1
Continuous signals
– analytic comparison of three common scoring rules
Eliciting coarse reports from exact information
– with two types, raters will choose the closest bin (the general case is complicated)

34 Common Prior Assumption
Practical concern: how do we know p(t)?
– needed by the center to compute payments
– needed by raters to compute posteriors
One answer: define types with respect to past products and signals (sketched below)
– e.g. types t ∈ {1, …, 9} where f(h | t) = t/10
– for a new product, base p(t) on past products' signals
– update beliefs as new signals arrive
– note that f(s | t) is then given automatically
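
A sketch of this construction; the uniform starting prior and the sample signal history are our illustrative choices:

```python
prior = {t: 1 / 9 for t in range(1, 10)}   # p(t) over types t = 1, ..., 9

def update(belief, signal):
    """Bayes update of p(t), with f(h | t) = t/10 and f(l | t) = 1 - t/10."""
    like = {t: t / 10 if signal == "h" else 1 - t / 10 for t in belief}
    unnorm = {t: belief[t] * like[t] for t in belief}
    total = sum(unnorm.values())
    return {t: v / total for t, v in unnorm.items()}

belief = prior
for s in ("h", "h", "l"):                  # illustrative signal history
    belief = update(belief, s)

# f(s | t) comes for free, so the predictive probability of h is immediate:
p_next_h = sum(belief[t] * t / 10 for t in belief)
print(round(p_next_h, 3))
```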

35 Common Prior Assumption
Practical concern: how do we know p(t)?
Theoretical concern: are p(t) and f(s | t) really public?
– raters must trust the center to compute the appropriate posterior distributions for the reference rater's signal
– a rater with private information has no guarantee that the center's posteriors match his true posterior beliefs
– such a rater might skew his report so that the center's posteriors reflect what he actually believes
Possible fix: have raters report both their private information and their announcement
– two scoring mechanisms: one for the distribution implied by the private priors, another for the distribution implied by the announcement

36 Limitations
Collusion
– could a subset of raters gain higher transfers? higher balanced transfers?
– can such strategies overcome random pairings and avoid suspicious patterns?
Multidimensional signals
– e.g. an economist with knowledge of computer science
Understanding of and trust in the system
– complicated Bayesian reasoning and payoff rules
– must rely on experts to ensure public confidence

37 Discussion
Is the common priors assumption reasonable?
– How might we relax it and keep some positive results?
What are the most serious challenges to implementation?
– Can you envision an online system that rewards feedback?
– How would the dynamics differ from a reward-less system?
Is this paper necessary at all?
– Honesty may already predominate from a fairness incentive.
– Recap of concerns: cost of reporting, coarse reports, common priors, collusion, multidimensional signals, trust in the system.

