
Computational Social Choice WINE-13 Tutorial Dec 11, 2013 Lirong Xia

2011 UK Referendum: the second nationwide referendum in the UK (the first was in 1975). Question: should Member of Parliament elections use the alternative vote rule instead of the plurality rule? Result: 68% No vs. 32% Yes.

Ordinal Preference Aggregation: Social Choice. [Figure: a profile, i.e., the rankings over alternatives A, B, C submitted by Alice, Bob, and Carol, is fed into a social choice mechanism.]

Ranking pictures [PGM+ AAAI-12]. [Figure: Turkers 1 through n each submit a ranking over pictures A, B, C; the rankings are aggregated into a single ranking.]

Social choice. [Figure: a profile of rankings R1, …, Rn is mapped by a social choice mechanism to an outcome.] Ri, Ri*: full rankings over a set A of alternatives.

Social Choice and Computer Science in the 21st Century. Social choice (PLATO et al., 4th century B.C. through the 20th century: PLATO 4th c. B.C., LULL 13th c., BORDA 18th c., CONDORCET 18th c., ARROW 20th c.): strategic thinking + methods/principles of aggregation. CS (TURING et al., 20th century): computational thinking + optimization algorithms. Computational social choice combines the two.

Applications: real world. People/agents often have conflicting preferences, yet they have to make a joint decision.

Applications: academic world. Multi-agent systems [Ephrati and Rosenschein 91]; recommendation systems [Ghosh et al. 99]; meta-search engines [Dwork et al. 01]; belief merging [Everaere et al. 07]; human computation (crowdsourcing) [Mao et al. AAAI-13]; etc.

A burgeoning area. Recently it has been drawing a lot of attention:
–IJCAI-11: 15 papers, best paper
–AAAI-11: 6 papers, best paper
–AAMAS-11: 10 full papers, best paper runner-up
–AAMAS-12: 9 full papers, best student paper
–EC-12: 3 papers
Workshops: COMSOC Workshop 06, 08, 10, 12, 14
Courses:
–Technical University of Munich (Felix Brandt)
–Harvard (Yiling Chen)
–U. of Amsterdam (Ulle Endriss)
–RPI (Lirong Xia)
Book in progress: Handbook of Computational Social Choice

How to design a good social choice mechanism? And what does being “good” mean?

Two goals for social choice mechanisms: GOAL 1: democracy; GOAL 2: truth. 1. Classical social choice; 2. Computational aspects; 3. Statistical approaches.

Outline
1. Classical Social Choice (45 min)
2.1 Computational aspects, Part 1 (55 min)
2.2 Computational aspects, Part 2 (30 min)
3. Statistical approaches (75 min)

Common voting rules (what has been done in the past two centuries)
Mathematically, a social choice mechanism (voting rule) is a mapping from {all profiles} to {outcomes}
–an outcome is usually a winner, a set of winners, or a ranking
–m: number of alternatives (candidates); n: number of agents (voters); D = (P1, …, Pn): a profile
Positional scoring rules: defined by a score vector (s1, …, sm)
–for each vote V, the alternative ranked in the i-th position gets si points
–the alternative with the most total points is the winner
–special cases: Borda, with score vector (m-1, m-2, …, 0); plurality, with score vector (1, 0, …, 0) [used in the US]
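A positional scoring rule fits in a few lines of Python (an illustrative sketch, not code from the tutorial; alphabetical tie-breaking is an assumption):

```python
from collections import defaultdict

def positional_scoring_winner(profile, score_vector):
    """Winner under a positional scoring rule; ties broken alphabetically (assumed)."""
    totals = defaultdict(int)
    for ranking in profile:
        for position, alternative in enumerate(ranking):
            totals[alternative] += score_vector[position]
    # scanning candidates in sorted order means ties go to the alphabetically first
    return max(sorted(totals), key=lambda a: totals[a])

def borda_vector(m):
    return list(range(m - 1, -1, -1))   # (m-1, m-2, ..., 0)

def plurality_vector(m):
    return [1] + [0] * (m - 1)          # (1, 0, ..., 0)
```

On the worked example with three alternatives and score vector (2, 1, 0), the Borda winner is c1.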

An example. Three alternatives {c1, c2, c3}; score vector (2, 1, 0) (= Borda); 3 votes: c1 ≻ c2 ≻ c3, c2 ≻ c1 ≻ c3, c3 ≻ c1 ≻ c2. Then c1 gets 2+1+1 = 4, c2 gets 1+2+0 = 3, c3 gets 0+0+2 = 2, and the winner is c1.

Plurality with runoff. The election has two rounds:
–in the first round, all alternatives except the two with the highest plurality scores drop out
–in the second round, the alternative that is preferred by more voters wins
[used in Iran, France, and the state of North Carolina]
Example: with votes a ≻ b ≻ c ≻ d, d ≻ a ≻ b ≻ c, c ≻ d ≻ a ≻ b, b ≻ c ≻ d ≻ a, suppose a and d survive the first round; in the runoff, three of the four votes prefer d to a, so d wins.
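A sketch of the two-round rule (the tie-breaking policy is an assumption the slide leaves open):

```python
from collections import Counter

def plurality_with_runoff(profile):
    """Two-round rule: the top two plurality scorers advance, then pairwise
    majority decides. Ties are broken alphabetically (an assumption)."""
    firsts = Counter(ranking[0] for ranking in profile)
    finalists = sorted(firsts, key=lambda a: (-firsts[a], a))[:2]
    if len(finalists) < 2:
        return finalists[0]
    x, y = finalists
    prefer_x = sum(1 for r in profile if r.index(x) < r.index(y))
    return x if prefer_x > len(profile) - prefer_x else y
```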

Single transferable vote (STV), also called instant-runoff voting or the alternative vote. The election has m-1 rounds; in each round:
–the alternative with the lowest plurality score drops out and is removed from all votes
–the last remaining alternative is the winner
[used in Australia and Ireland]
Example: with votes a ≻ b ≻ c ≻ d, d ≻ a ≻ b ≻ c, c ≻ d ≻ a ≻ b, b ≻ c ≻ d ≻ a, one alternative is eliminated per round until a single alternative remains.
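The elimination loop can be sketched directly (alphabetical tie-breaking among lowest scorers is an assumption):

```python
from collections import Counter

def stv_winner(profile):
    """STV: m-1 elimination rounds; the lowest plurality scorer drops out
    each round (ties broken alphabetically, an assumption)."""
    profile = [list(r) for r in profile]
    remaining = set(profile[0])
    while len(remaining) > 1:
        firsts = Counter(r[0] for r in profile)
        # alternatives with no first-place votes score 0
        loser = min(sorted(remaining), key=lambda a: firsts.get(a, 0))
        remaining.discard(loser)
        profile = [[a for a in r if a != loser] for r in profile]
    return remaining.pop()
```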

The Kemeny rule. Kendall tau distance: K(V, W) = #{pairwise comparisons on which V and W differ}; for example, K(b ≻ c ≻ a, a ≻ b ≻ c) = 2. Kemeny(D) = argmin_W K(D, W) = argmin_W Σ_{P∈D} K(P, W). For a single winner, choose the top-ranked alternative in Kemeny(D). [Has a statistical interpretation.]
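Both the distance and, for tiny m, the Kemeny ranking are easy to compute by brute force over all m! rankings (a sketch):

```python
from itertools import combinations, permutations

def kendall_tau(v, w):
    """Number of pairwise comparisons on which rankings v and w differ."""
    pos_v = {a: i for i, a in enumerate(v)}
    pos_w = {a: i for i, a in enumerate(w)}
    return sum((pos_v[a] < pos_v[b]) != (pos_w[a] < pos_w[b])
               for a, b in combinations(sorted(v), 2))

def kemeny_ranking(profile):
    """Brute-force Kemeny: argmin over all m! rankings (tiny m only)."""
    alternatives = sorted(profile[0])
    return min(permutations(alternatives),
               key=lambda w: sum(kendall_tau(p, w) for p in profile))
```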

…and many others: Approval, Baldwin, Black, Bucklin, Coombs, Copeland, Dodgson, maximin, Nanson, range voting, Schulze, Slater, ranked pairs, etc.

Q: How to evaluate rules in terms of achieving democracy? A: The axiomatic approach.

Axiomatic approach (what has been done in the past 50 years)
Anonymity: names of the voters do not matter –fairness for the voters
Non-dictatorship: there is no dictator whose top-ranked alternative is always the winner –fairness for the voters
Neutrality: names of the alternatives do not matter –fairness for the alternatives
Consistency: if r(D1) ∩ r(D2) ≠ ∅, then r(D1 ∪ D2) = r(D1) ∩ r(D2)
Condorcet consistency: if there exists a Condorcet winner, then it must win –a Condorcet winner beats all other alternatives in pairwise elections
Easy to compute: winner determination is in P –computational efficiency of preference aggregation
Hard to manipulate: computing a beneficial false vote is hard

Which axiom is more important? Some of these axiomatic properties are not compatible with others. Food for thought: how to evaluate partial satisfaction of axioms?

                          Condorcet consistency   Consistency   Easy to compute
Positional scoring rules            N                  Y               Y
Kemeny                              Y                  N               N
Ranked pairs                        Y                  N               Y

An easy fact. Theorem. For voting rules that select a single winner, anonymity is not compatible with neutrality.
Proof sketch: w.l.o.g. take two voters and two alternatives, with Alice voting a ≻ b and Bob voting b ≻ a. Exchanging the two votes only permutes the voters, so by anonymity the winner is unchanged; but the exchanged profile is also what renaming a and b produces, so by neutrality the winner must change. Contradiction.

Another easy fact [Fishburn APSR-74]. Theorem. No positional scoring rule is Condorcet consistent.
Proof sketch: suppose s1 > s2 > s3 and consider a 7-voter profile over {a, b, c} in which a is the Condorcet winner, a's score is 3s1 + 2s2 + 2s3, and b's score is 3s1 + 3s2 + 1s3. Since s2 > s3, b's score is strictly larger, so a does not win. One such profile: 3 votes a ≻ b ≻ c, 2 votes b ≻ c ≻ a, 1 vote b ≻ a ≻ c, 1 vote c ≻ a ≻ b (a beats each of b and c 4 to 3 in pairwise elections).
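The counterexample can be checked mechanically. The concrete 7-voter profile below is a reconstruction consistent with the slide's score counts, checked here with the Borda vector (2, 1, 0):

```python
from collections import defaultdict

def borda_winner(profile):
    m = len(profile[0])
    totals = defaultdict(int)
    for r in profile:
        for i, a in enumerate(r):
            totals[a] += m - 1 - i
    return max(sorted(totals), key=lambda a: totals[a])

def condorcet_winner(profile):
    """The alternative beating every other in pairwise majority, if one exists."""
    alternatives = sorted(profile[0])
    n = len(profile)
    for a in alternatives:
        if all(sum(r.index(a) < r.index(b) for r in profile) > n / 2
               for b in alternatives if b != a):
            return a
    return None

profile = ([['a', 'b', 'c']] * 3 + [['b', 'c', 'a']] * 2
           + [['b', 'a', 'c'], ['c', 'a', 'b']])
```

Here a is the Condorcet winner, yet Borda elects b (scores: a = 8, b = 9).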

Not-so-easy facts
Arrow’s impossibility theorem –Google it!
Gibbard–Satterthwaite theorem –next section
Axiomatic characterization
–Template: a voting rule satisfies axioms A1, A2, A3 if and only if it is rule X
–If you believe A1, A2, A3 are the most desirable properties, then X is optimal
–(anonymity + neutrality + consistency + continuity) ⇔ positional scoring rules [Young SIAMAM-75]
–(neutrality + consistency + Condorcet consistency) ⇔ Kemeny [Young&Levenglick SIAMAM-78]

Food for thought. Can we quantify a voting rule’s satisfaction of these axiomatic properties?
–tradeoffs between the satisfaction of different axioms
–use computational techniques to design new voting rules
–use AI techniques to automatically prove or discover new impossibility theorems [Tang&Lin AIJ-09]

Outline
1. Classical Social Choice (45 min)
2.1 Computational aspects, Part 1 (55 min)
2.2 Computational aspects, Part 2 (30 min)
3. Statistical approaches (75 min)

Easy to compute: –the winner can be computed in polynomial time Hard to manipulate: –computing a beneficial false vote is hard 26 Computational axioms


Which rules are easy to compute? Almost all common voting rules, except:
–Kemeny: NP-hard [Bartholdi et al. 89], Θ2p-complete [Hemaspaandra et al. TCS-05]
–Young: Θ2p-complete [Rothe et al. TCS-03]
–Dodgson: Θ2p-complete [Hemaspaandra et al. JACM-97]
–Slater: NP-complete [Hudry EJOR-10]
Practical algorithms for Kemeny (also for others):
–ILP [Conitzer, Davenport, & Kalagnanam AAAI-06]
–Approximation [Ailon, Charikar, & Newman STOC-05]
–PTAS [Kenyon-Mathieu and Schudy STOC-07]
–Fixed-parameter analysis [Betzler et al. TCS-09]

Really easy to compute? The easy-to-compute axiom says computing the winner takes time polynomial in the input size (input size: nm log m). What if m is extremely large?

Combinatorial domains (multi-issue domains). The set of alternatives can be uniquely characterized by multiple issues. Let I = {x1, …, xp} be the set of p issues, and let Di be the set of values that the i-th issue can take; then A = D1 × … × Dp. Example: Issues = {main course, wine}; Alternatives = {main-course options} × {wine options}.
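With itertools, the combinatorial alternative space is just a Cartesian product (the issue values below are hypothetical; the slide shows them only as pictures):

```python
from itertools import product

# Hypothetical values for the two issues from the example
main_courses = ['beef', 'fish', 'vegetarian']
wines = ['red', 'white']

# A = D1 x D2: every (main course, wine) combination is one alternative
alternatives = list(product(main_courses, wines))
```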

Multiple referenda. In California, voters voted on 11 binary issues (yes/no): 2^11 = 2048 combinations in total; 5 of the 11 were about budget and taxes. Prop. 30: increase sales tax and some income taxes for education. Prop. 38: increase income tax on almost everyone for education.

32 Overview Combinatorial voting Preference representation New voting rule Evaluation

Preference representation: CP-nets [Boutilier et al. JAIR-04]. Variables: x, y, z, connected in a directed graph with conditional preference tables (CPTs). [Figure: an example CP-net over x, y, z and the partial order over assignments that it encodes.]

Sequential voting rules [Lang IJCAI-07]. Issues: main course, wine; order: main course > wine; the local rules are majority rules. Each voter (V1, V2, V3) reports a preference over main courses and a preference over wines conditioned on each main course. Step 1: apply majority voting to choose the main course. Step 2: given the chosen main course, apply majority voting to the conditional wine preferences to choose the wine. The winner is the resulting (main course, wine) pair.
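The two-issue sequential rule can be sketched as follows (the vote encoding is a hypothetical simplification of CP-net preferences: a favorite main course plus a preferred wine for each possible main course):

```python
from collections import Counter

def sequential_majority(votes):
    """Sequential voting over two issues with local majority rules.
    Each vote is a pair: (favorite main course,
                          {main course -> preferred wine given that course})."""
    # Step 1: majority on the first issue
    main = Counter(v[0] for v in votes).most_common(1)[0][0]
    # Step 2: majority on the second issue, conditioned on the step-1 outcome
    wine = Counter(v[1][main] for v in votes).most_common(1)[0][0]
    return main, wine

votes = [
    ('fish', {'fish': 'white', 'beef': 'red'}),
    ('fish', {'fish': 'white', 'beef': 'white'}),
    ('beef', {'fish': 'red',  'beef': 'red'}),
]
```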

Research topics. How can we say that sequential voting is good?
–computationally efficient
–satisfies good axioms [Lang and Xia MSS-09]
–but we need to worry about manipulation in the worst case [Xia, Conitzer &Lang EC-11]
Other compact languages: GAI networks [Gonzales et al. AIJ-11]; TCP-nets [Li et al. AAMAS-11]; soft constraints [Pozza et al. IJCAI-11]

Other combinatorial domains. Belief merging [Gabbay et al. JLC-09]: knowledge bases K1, …, Kn are combined by a merging operator. Judgment aggregation [List and Pettit EP-02]:

           Action P   Action Q   Liable? (P ∧ Q)
Judge 1       Y          Y            Y
Judge 2       Y          N            N
Judge 3       N          Y            N
Majority      Y          Y            N

Easy to compute: –the winner can be computed in polynomial time Hard to manipulate: –computing a beneficial false vote is hard 37 Computational axioms

Strategic behavior (of the agents). Manipulation: an agent (the manipulator) casts a vote that does not represent her true preferences, to make herself better off. A voting rule is strategy-proof if there is never a (beneficial) manipulation under this rule. How important is strategy-proofness as a desired axiomatic property, compared to the other axiomatic properties?

Manipulation under the plurality rule (with a fixed tie-breaking order). [Figure: Alice, Bob, and Carol vote under plurality; by misreporting her ranking, one voter changes the winner to an alternative she prefers.]

Any strategy-proof voting rule? No reasonable voting rule is strategy-proof. Gibbard–Satterthwaite theorem [Gibbard Econometrica-73, Satterthwaite JET-75]: when there are at least three alternatives, no voting rule except dictatorships satisfies
–non-imposition: every alternative wins for some profile
–unrestricted domain: voters can use any linear order as their votes
–strategy-proofness
An axiomatic characterization of dictatorships!

A few ways out
Relax non-dictatorship: use a dictatorship
Restrict the number of alternatives to 2
Relax unrestricted domain: mainly pursued by economists
–single-peaked preferences
–approval voting: a voter submits 0 or 1 for each alternative

Computational thinking. Use a voting rule so complicated that nobody can easily predict the winner
–Dodgson
–Kemeny
–the randomized voting rule used in the Republic of Venice for more than 500 years [Walsh&Xia AAMAS-12]
We want a voting rule where
–winner determination is easy
–manipulation is hard

Overview. Manipulation is inevitable (Gibbard–Satterthwaite theorem). Why prevent manipulation? It may lead to very undesirable outcomes. How often does it happen? Seemingly not very often. Can we use computational complexity as a barrier? Yes. Is it a strong barrier? No. Other barriers? Limited information and limited communication.

Manipulation: a computational complexity perspective. If it is computationally too hard for a manipulator to compute a manipulation, she is best off voting truthfully
–similar in spirit to cryptography
For which common voting rules is manipulation computationally hard?

Initiated by [Bartholdi, Tovey, &Trick SCW-89b] Votes are weighted or unweighted Bounded number of alternatives [Conitzer, Sandholm, &Lang JACM-07] –Unweighted manipulation: easy for most common rules –Weighted manipulation: depends on the number of manipulators Unbounded number of alternatives (next few slides) Assuming the manipulators have complete information! 45 Computing a manipulation

Unweighted coalitional manipulation (UCM) problem Given –The voting rule r –The non-manipulators’ profile P NM –The number of manipulators n’ –The alternative c preferred by the manipulators We are asked whether or not there exists a profile P M (of the manipulators) such that c is the winner of P NM ∪ P M under r 46
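For tiny instances, the UCM question can be answered by brute force over all joint manipulator votes (an illustrative sketch; plurality with alphabetical tie-breaking is an assumed stand-in for the rule r):

```python
from collections import Counter
from itertools import permutations, product

def plurality(profile):
    """Plurality winner; ties broken alphabetically (an assumption)."""
    firsts = Counter(r[0] for r in profile)
    return max(sorted(firsts), key=lambda a: firsts[a])

def ucm_exists(rule, profile_nm, n_manipulators, c):
    """Brute-force UCM: does some manipulator profile P_M make c win
    P_NM + P_M under `rule`? Exponential; tiny cases only."""
    alternatives = sorted(profile_nm[0])
    for joint in product(permutations(alternatives), repeat=n_manipulators):
        if rule(profile_nm + [list(v) for v in joint]) == c:
            return True
    return False
```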

The stunningly big table for UCM

Rule                    One manipulator         At least two manipulators
Copeland                P [BTT SCW-89b]         NPC [FHS AAMAS-08,10]
STV                     NPC [BO SCW-91]         NPC [BO SCW-91]
Veto                    P [ZPR AIJ-09]          P
Plurality with runoff   P [ZPR AIJ-09]          P
Cup                     P [CSL JACM-07]         P
Borda                   P [BTT SCW-89b]         NPC [DKN+ AAAI-11, BNW IJCAI-11]
Maximin                 P [BTT SCW-89b]         NPC [XZP+ IJCAI-09]
Ranked pairs            NPC [XZP+ IJCAI-09]     NPC [XZP+ IJCAI-09]
Bucklin                 P [XZP+ IJCAI-09]       P
Nanson’s rule           NPC [NWX AAAI-11]       NPC [NWX AAAI-11]
Baldwin’s rule          NPC [NWX AAAI-11]       NPC [NWX AAAI-11]

For some common voting rules, computational complexity provides some protection against manipulation Is computational complexity a strong barrier? –NP-hardness is a worst-case concept 48 What can we conclude?

49 Probably NOT a strong barrier 1. Frequency of manipulability 2. Easiness of Approximation 3. Quantitative G-S AAMAS-14 workshop Computational Social Choice: Beyond the Worst Case

Non-manipulators’ votes are drawn i.i.d. –E.g. i.i.d. uniformly over all linear orders (the impartial culture assumption) How often can the manipulators make c win? –Specific voting rules [Peleg T&D-79, Baharad&Neeman RED-02, Slinko T&D-02, Slinko MSS-04, Procaccia and Rosenschein AAMAS-07] 50 A first angle: frequency of manipulability

A general result [Xia&Conitzer EC-08a]. Theorem. For any generalized scoring rule (a class that includes many common voting rules), computational complexity is not a strong barrier against manipulation: UCM as a decision problem is easy to compute in most cases. [Figure: as the number of manipulators grows, they go from having no power to being all-powerful, with the transition around Θ(√n); the Θ(√n) case was studied experimentally in [Walsh IJCAI-09].]

A second angle: approximation. Unweighted coalitional optimization (UCO): compute the smallest number of manipulators that can make c win. A greedy algorithm has additive error no more than 1 for Borda [Zuckerman, Procaccia, &Rosenschein AIJ-09].
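A greedy heuristic in the spirit of that algorithm can be sketched as follows (a simplified reconstruction, not the published algorithm: tie-breaking in favor of c is an assumption, and no error bound is claimed for this sketch):

```python
def greedy_borda_manipulators(profile_nm, c):
    """Greedy UCO heuristic for Borda: add manipulators one at a time; each
    ranks c first, then fills the remaining positions so that the currently
    highest-scoring alternative receives the fewest points.
    Assumes ties are broken in favor of c (a simplifying assumption)."""
    m = len(profile_nm[0])
    alternatives = sorted(profile_nm[0])
    scores = {a: 0 for a in alternatives}
    for r in profile_nm:
        for i, a in enumerate(r):
            scores[a] += m - 1 - i
    k = 0
    while any(scores[a] > scores[c] for a in alternatives if a != c):
        scores[c] += m - 1                               # c always ranked first
        others = sorted((a for a in alternatives if a != c),
                        key=lambda a: (scores[a], a))    # ascending score
        for i, a in enumerate(others):
            scores[a] += m - 2 - i                       # low score -> more points
        k += 1
    return k
```

Each manipulator widens c's lead by at least one point against every rival, so the loop terminates.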

A polynomial-time approximation algorithm that works for all positional scoring rules –Additive error is no more than m-2 –Based on a new connection between UCO for positional scoring rules and a class of scheduling problems Computational complexity is not a strong barrier against manipulation –The cost of successful manipulation can be easily approximated (for positional scoring rules) 53 An approximation algorithm for positional scoring rules [Xia,Conitzer,& Procaccia EC-10]

The scheduling problem Q | pmtn | Cmax
–m* parallel uniform machines M1, …, Mm*; machine i’s speed is si (the amount of work done in unit time)
–n* jobs J1, …, Jn*
–preemption: jobs are allowed to be interrupted (and resumed later, possibly on another machine)
We are asked to compute the minimum makespan: the minimum time needed to complete all jobs.

Thinking about UCO_pos. Let p, p1, …, pm-1 be the total points that c, c1, …, cm-1 obtain in the non-manipulators’ profile P_NM, where every manipulator vote V1 ranks c first. [Figure: the reduction to scheduling, in which the remaining score deficits pi − p play the role of jobs Ji and the score differences s1 − s2, s1 − s3, s1 − s4 play the role of machine speeds.]

The approximation algorithm: translate the original UCO instance into a scheduling problem, solve the scheduling problem [Gonzalez&Sahni JACM-78], and round the solution back into a UCO solution. The result uses no more than OPT + m − 2 manipulators.

Manipulation of positional scoring rules = scheduling (preemptions at integer time points) –Borda manipulation corresponds to scheduling where the machines speeds are m-1, m-2, …, 0 NP-hard [Yu, Hoogeveen, & Lenstra J.Scheduling 2004] –UCM for Borda is NP-C for two manipulators [Davies et al. AAAI-11 best paper] [Betzler, Niedermeier, & Woeginger IJCAI-11 best paper] 57 Complexity of UCM for Borda

G-S theorem: for any reasonable voting rule there exists a manipulation Quantitative G-S: for any voting rule that is “far away” from dictatorships, the number of manipulable situations is non-negligible –First work: 3 alternatives, neutral rule [Friedgut, Kalai, &Nisan FOCS-08] –Extensions: [Dobzinski&Procaccia WINE-08, Xia&Conitzer EC-08b, Isaksson,Kindler,&Mossel FOCS-10] –Finally proved: [Mossel&Racz STOC-12] 58 A third angle: quantitative G-S

Next steps. The first attempt seems to fail. Can we obtain positive results for a restricted setting? So far we assumed that
–the manipulators have complete information about the non-manipulators’ votes
–the manipulators can coordinate their strategies perfectly

Limiting the manipulator’s information can make dominating manipulation computationally harder, or even impossible [Conitzer,Walsh,&Xia AAAI-11] Bayesian information [Lu et al. UAI-12] 60 Limited information

Limited communication among manipulators. The leader-follower model: the leader broadcasts a vote W, and each potential follower decides whether to cast W or not; the leader and followers have the same preferences.
–Safe manipulation [Slinko&White COMSOC-08]: a vote W such that, no matter how many followers there are, the leader and potential followers are never worse off, and sometimes they are better off
–Complexity: [Hazon&Elkind SAGT-10, Ianovski et al. IJCAI-11]

Overview. Manipulation is inevitable (Gibbard–Satterthwaite theorem). Why prevent manipulation? It may lead to very undesirable outcomes. How often? Seemingly not very often. Can we use computational complexity as a barrier? Yes. Is it a strong barrier? No. Other barriers? Limited information and limited communication.

Research questions. How to predict the outcome? Game theory. How to evaluate the outcome? The price of anarchy [Koutsoupias&Papadimitriou STACS-99] = (worst welfare when agents are fully strategic) / (optimal welfare when agents are truthful) is not very applicable in the social choice setting:
–the equilibrium selection problem
–social welfare is not well defined
One fix: use a best-response game to select an equilibrium and use scores as social welfare [Brânzei et al. AAAI-13].

64 Simultaneous-move voting games Players: Voters 1,…,n Strategies / reports: Linear orders over alternatives Preferences: Linear orders over alternatives Rule: r ( P ’), where P ’ is the reported profile

Equilibrium selection problem. [Figure: under the plurality rule, the simultaneous-move game among Alice, Bob, and Carol has multiple Nash equilibria with different winners.]

Stackelberg voting games [Xia&Conitzer AAAI-10]. Voters vote sequentially and strategically: voter 1 → voter 2 → … → voter n; any terminal state is associated with the winner under rule r. At any stage, the current voter knows
–the order of the voters
–previous voters’ votes
–the true preferences of the later voters (complete information)
–the rule r used in the end to select the winner
This is called a Stackelberg voting game: the winner in SPNE is unique (though the SPNE itself need not be unique). A similar setting appears in [Desmedt&Elkind EC-10].

General paradoxes (ordinal PoA). Theorem. For any voting rule r that satisfies majority consistency and any n, there exists an n-profile P such that:
–(many voters are miserable) SG_r(P) is ranked somewhere in the bottom two positions in the true preferences of n−2 voters
–(almost a Condorcet loser) SG_r(P) loses to all but one alternative in pairwise elections
Strategic behavior of the voters is extremely harmful in the worst case.

Simulation results. Simulations for the plurality rule (25,000 profiles generated uniformly at random); x-axis: number of voters, y-axis: percentage of voters.
–(a) the percentage of voters who prefer the SPNE winner to the truthful winner, minus the percentage who prefer the truthful winner to the SPNE winner
–(b) the percentage of profiles where the SPNE winner is the truthful winner
The SPNE winner is preferred to the truthful winner by more voters than vice versa.

Other types of strategic behavior (of the chairperson). Procedure control by
–{adding, deleting} × {voters, alternatives}
–partitioning voters/alternatives
–introducing clones of alternatives
–changing the agenda of voting
–[Bartholdi, Tovey, &Trick MCM-92, Tideman SCW-07, Conitzer,Lang,&Xia IJCAI-09]
Bribery [Faliszewski, Hemaspaandra, &Hemaspaandra JAIR-09]
See [Faliszewski, Hemaspaandra, &Hemaspaandra CACM-10] for a survey of their computational complexity, and [Xia arXiv-12] for a framework for studying many of these problems for generalized scoring rules.

70 Food for thought The problem is still very open! –Shown to be connected to integer factorization [Hemaspaandra, Hemaspaandra, & Menton STACS-13] What is the role of computational complexity in analyzing human/self-interested agents’ behavior? –Explore information/communication assumptions –In general, why do we want to prevent strategic behavior? Practical ways to protect elections

Outline
1. Classical Social Choice (45 min)
2.1 Computational aspects, Part 1 (55 min)
2.2 Computational aspects, Part 2 (30 min)
3. Statistical approaches (75 min)

Ranking pictures [PGM+ AAAI-12]. [Figure: Turkers 1 through n each submit a ranking over pictures A, B, C; the rankings are aggregated into a single ranking.]

Two goals for social choice mechanisms: GOAL 1: democracy; GOAL 2: truth. 1. Classical social choice; 2. Computational aspects; 3. Statistical approaches.

Outline: statistical approaches 74 Condorcet’s MLE model (history) A General framework Why MLE? Why Condorcet’s model? Random Utility Models Model selection

The Condorcet Jury theorem [Condorcet 1785]. Given two alternatives {a, b} and 0.5 < p < 1, suppose each agent’s preference is generated i.i.d.: with probability p it agrees with the ground truth, and with probability 1−p it differs from the ground truth. Then, as n → ∞, the majority of the agents’ preferences converges in probability to the ground truth.
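A quick simulation illustrates the convergence (a sketch, not from the tutorial):

```python
import random

def majority_correct_rate(p, n, trials=2000, seed=0):
    """Estimate the probability that a majority of n i.i.d. agents,
    each correct with probability p, matches the ground truth."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        correct = sum(rng.random() < p for _ in range(n))
        hits += correct > n / 2
    return hits / trials
```

With p = 0.6, the majority's accuracy climbs toward 1 as n grows.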

Condorcet’s MLE approach. A parametric ranking model M_r: given a “ground truth” parameter Θ, each vote P (a ranking) is drawn i.i.d. conditioned on Θ, according to Pr(P | Θ). For any profile D = (P1, …, Pn):
–the likelihood of Θ is L(Θ | D) = Pr(D | Θ) = ∏_{P∈D} Pr(P | Θ)
–the MLE mechanism is MLE(D) = argmax_Θ L(Θ | D), with ties broken randomly
What if the decision space ≠ the parameter space?

Condorcet’s model. Condorcet was not very clear about how the Condorcet Jury theorem extends to m > 2; Young later gave an interpretation [Young APSR-1988]. Parameter space: all combinations of opinions, where an opinion is a pairwise comparison between candidates (and the combination can be cyclic), together with p < 1. Sample space: all combinations of opinions. Given ground-truth opinions W and p < 1, each opinion in V is generated i.i.d.: if c ≻ d in W, then c ≻ d in V with probability p and d ≻ c in V with probability 1−p.

Mallows model [Mallows 1957]. Parameter space: all rankings over the candidates, together with a dispersion parameter ϕ < 1. Sample space: all rankings over the candidates. Given a ground-truth ranking W and ϕ < 1, generate a ranking V with probability Pr(V | W) ∝ ϕ^Kendall(V, W). The MLE ranking is given by the Kemeny rule.
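For small m one can verify in code both that the Mallows probabilities normalize and that the MLE ranking coincides with the Kemeny (minimum total Kendall tau) ranking, since maximizing ∏ ϕ^K is the same as minimizing Σ K when ϕ < 1 (a brute-force sketch):

```python
from itertools import combinations, permutations
from math import isclose

def kendall(v, w):
    pv = {a: i for i, a in enumerate(v)}
    pw = {a: i for i, a in enumerate(w)}
    return sum((pv[a] < pv[b]) != (pw[a] < pw[b])
               for a, b in combinations(sorted(v), 2))

def mallows_pr(v, w, phi):
    """Pr(V = v | ground truth w) under Mallows with dispersion phi."""
    z = sum(phi ** kendall(u, w) for u in permutations(w))
    return phi ** kendall(v, w) / z

def mallows_mle(profile):
    """MLE of the ground-truth ranking; independent of phi < 1, it equals
    the Kemeny ranking (minimum total Kendall tau distance)."""
    alternatives = sorted(profile[0])
    return min(permutations(alternatives),
               key=lambda w: sum(kendall(p, w) for p in profile))
```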

Learning [Lu and Boutilier ICML-11] Approximation by common voting rules [Caragiannis, Procaccia & Shah EC-13] 79 Recent studies on Condorcet/Mallows model

Outline: statistical approaches 80 Condorcet/Mallows model (history) Why MLE? Why Condorcet’s model? A General framework

Statistical decision framework [Azari, Parkes, and Xia WINE poster]. The model M_r generates the data D = (P1, …, Pn) from a ground-truth parameter Θ. Step 1 (statistical inference): given M_r, extract information about the ground truth from D. Step 2 (decision making): map that information to a decision (winner, ranking, etc.).

Example: Kemeny. M_r = the Mallows model. Step 1 (MLE): compute the most probable ranking from the data D. Step 2 (decision): output its top-1 alternative. Decision space: a unique winner.

Likelihood reasoning vs. Bayesian, in general (credit: Panos Ipeirotis & Roy Radner). You have a biased coin that comes up heads with probability p; you observe 10 heads and 4 tails. Do you think the next two tosses will be two heads in a row?
–Likelihood reasoning: there is an unknown but fixed ground truth; p = 10/14 = 0.714, and Pr(2 heads | p = 0.714) = 0.714^2 = 0.51 > 0.5. Yes!
–Bayesian: the ground truth is captured by a belief distribution; compute Pr(p | data) assuming a uniform prior, and then Pr(2 heads | data) = 0.485 < 0.5. No!
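The two numbers are easy to verify (a sketch; with a uniform prior, 10 heads and 4 tails give a Beta(11, 5) posterior):

```python
# Likelihood reasoning: plug in the MLE of p
mle_p = 10 / 14
likelihood_answer = mle_p ** 2      # Pr(2 heads | p = 0.714), about 0.51

# Bayesian: posterior predictive under the Beta(11, 5) posterior,
# Pr(2 heads | data) = E[p^2] = a(a+1) / ((a+b)(a+b+1))
a, b = 11, 5
bayes_answer = a * (a + 1) / ((a + b) * (a + b + 1))   # about 0.485
```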

Kemeny = the likelihood approach. With M_r = the Mallows model, Step 1 (MLE) computes the most probable ranking from the data D, and Step 2 outputs its top-1 alternative. This is the Kemeny rule (for a single winner)!

Kemeny = the likelihood approach (2). With M_r = Condorcet’s model: Step 1 computes the likelihood of all parameters (opinions); Step 2 chooses the top-1 alternative of the most probable ranking.

Example: Bayesian [Young APSR-88]. M_r = Condorcet’s model. Step 1 (Bayesian update): compute the posterior over rankings. Step 2: output the most likely top-1 alternative.

Likelihood vs. Bayesian [Azari, Parkes, and Xia WINE poster]. Decision space: single winners; assume a uniform prior in the Bayesian approach; principle: statistical decision theory.

                       Anonymity, neutrality, monotonicity   Consistency   Condorcet   Easy to compute
Likelihood (Mallows)                  Y                           N             Y             N
Bayesian (Condorcet)                  Y                           Y             N             Y

Outline: statistical approaches 88 Condorcet’s MLE model (history) Why MLE? Why Condorcet’s model? A General framework

Classical voting rules as MLEs [Conitzer&Sandholm UAI-05]. When the outcomes are winning alternatives:
–MLE rules must satisfy consistency: if r(D1) ∩ r(D2) ≠ ∅, then r(D1 ∪ D2) = r(D1) ∩ r(D2)
–all classical voting rules except positional scoring rules are NOT MLEs; positional scoring rules are MLEs
This is NOT a coincidence!
–all MLE rules that output winners satisfy anonymity and consistency
–positional scoring rules are the only voting rules that satisfy anonymity, neutrality, and consistency [Young SIAMAM-75]

Classical voting rules as MLEs [Conitzer&Sandholm UAI-05], continued. When the outcomes are winning rankings:
–MLE rules must satisfy reinforcement (the counterpart of consistency for rankings)
–all classical voting rules except positional scoring rules and Kemeny are NOT MLEs
This is not (completely) a coincidence: Kemeny is the only preference function (outputting rankings) that satisfies neutrality, reinforcement, and Condorcet consistency [Young&Levenglick SIAMAM-78].

Condorcet’s model –not very natural –computationally hard Other classic voting rules –Most are not MLEs –Models are not very natural either 91 Are we happy?

New mechanisms via the statistical decision framework (inference over the data D about the ground truth, followed by decision making). Model selection: how can we evaluate fitness? Likelihood or Bayesian? (Here we focus on MLE.) Computation: how can we compute the MLE efficiently?

Why not just a problem of machine learning or statistics? Closely related, but
–we need economic insight to build the model
–we care about satisfying traditional social choice criteria, and we also want to reach a compromise (achieve democracy)

Outline: statistical approaches 94 Condorcet’s MLE model (history) A General framework Why MLE? Why Condorcet’s model? Random Utility Models

Random utility model (RUM) [Thurstone 27]. Continuous parameters Θ = (θ1, …, θm), where m is the number of alternatives; each alternative ci is modeled by a utility distribution μi, and θi is a vector that parameterizes μi. An agent’s perceived utility Ui for alternative ci is generated independently according to μi(Ui), and agents rank the alternatives according to their perceived utilities: Pr(c2 ≻ c1 ≻ c3 | θ1, θ2, θ3) = Pr_{Ui∼μi}(U2 > U1 > U3).
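Sampling from a RUM is straightforward; a sketch with normal utility distributions and hypothetical means (the parameter values are illustrative, not from the tutorial):

```python
import random

def sample_ranking(means, sd=1.0, rng=random):
    """One ranking from a RUM with independent normal utility distributions."""
    utilities = {i: rng.gauss(mu, sd) for i, mu in enumerate(means)}
    # rank alternatives by perceived utility, best first
    return sorted(utilities, key=lambda i: utilities[i], reverse=True)

rng = random.Random(1)
profile = [sample_ranking([2.0, 1.0, 0.0], rng=rng) for _ in range(500)]
# the alternative with the highest mean should top most sampled rankings
share_top = sum(r[0] == 0 for r in profile) / 500
```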

Generating a preference profile. Agents 1 through n generate their rankings independently from the parameters, e.g., P1 = c2 ≻ c1 ≻ c3, …, Pn = c1 ≻ c2 ≻ c3, so Pr(Data | θ1, θ2, θ3) = ∏_{R∈Data} Pr(R | θ1, θ2, θ3).

RUMs with Gumbel distributions. When the μi’s are Gumbel distributions, the model is the Plackett–Luce (P-L) model [BM 60, Yellott 77]: equivalently, there exist positive numbers λ1, …, λm such that the probability of a ranking factors into steps in which c1 is the top choice in {c1, …, cm}, then c2 is the top choice in {c2, …, cm}, and so on down to cm-1 being preferred to cm. Pros: computationally tractable, with an analytical solution to the likelihood function (the only RUM that was known to be tractable); widely applied in economics [McFadden 74], learning to rank [Liu 11], and analyzing elections [GM 06,07,08,09]. Cons: does not seem to fit election data very well.
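The P-L ranking probability is easy to compute directly from this factorization (a sketch; the weights λ below are hypothetical):

```python
from itertools import permutations
from math import isclose

def plackett_luce_pr(ranking, weights):
    """Pr of a ranking under Plackett-Luce: at each step, the top remaining
    alternative is chosen with probability proportional to its weight."""
    pr = 1.0
    remaining = list(ranking)
    while len(remaining) > 1:
        top = remaining.pop(0)
        pr *= weights[top] / (weights[top] + sum(weights[a] for a in remaining))
    return pr
```

With weights {a: 2, b: 1, c: 1}, Pr(a ≻ b ≻ c) = (2/4)(1/2) = 0.25, and the probabilities over all 3! rankings sum to 1.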

RUM with normal distributions (Thurstone’s Case V [Thurstone 27]). Pros: intuitive, flexible. Cons: believed to be computationally intractable; no analytical solution for the likelihood function Pr(P | Θ) is known, since it is a nested integral: Um from −∞ to ∞, Um−1 from Um to ∞, …, U1 from U2 to ∞.

Unimodality of the likelihood [APX NIPS-12]. Location family: RUMs where each μi is parameterized by its mean θi (e.g., normal distributions with fixed variance, and P-L). Theorem. For any RUM in the location family, if the PDF of each μi is log-concave, then for any preference profile D the likelihood function Pr(D | Θ) is log-concave; hence local optimality implies global optimality, and the set of global maxima is convex.

MC-EM algorithm for RUMs [APX NIPS-12]. The utility distributions μi belong to the exponential family (EF), which includes the normal, Gamma, exponential, Binomial, and Gumbel distributions, among others. In each iteration t:
–E-step: for any set of parameters Θ, compute the expected log-likelihood ELL(Θ | Data, Θt) = f(Θ, g(Data, Θt)), approximated by Gibbs sampling
–M-step: choose Θt+1 = argmax_Θ ELL(Θ | Data, Θt)
Repeat until |Pr(D | Θt) − Pr(D | Θt+1)| < ε.

Outline: statistical approaches 101 Condorcet’s MLE model (history) A General framework Why MLE? Why Condorcet’s model? Random Utility Models Model selection

Model selection. Compare RUMs with normal distributions against P-L on log-likelihood (LL), predictive log-likelihood (Pred. LL), Akaike information criterion (AIC), and Bayesian information criterion (BIC), tested on an election dataset (9 alternatives, 50 randomly chosen voters). Entries are Value(Normal) − Value(PL); in the original slide, red entries are statistically significant with 95% confidence.

LL            Pred. LL       AIC            BIC
44.8 (15.8)   87.4 (30.5)    −79.6 (31.6)   −50.5 (31.6)

Generalized RUM [APX UAI-13] –Learn the relationship between agent features and alternative features Preference elicitation based on experimental design [APX UAI-13] –c.f. active learning Faster algorithms [ACPX, NIPS-13] –Generalized Method of Moments (GMM) 103 Recent progress

Wrap-up. Social choice contributes strategic thinking and methods/principles of aggregation; CS contributes computational thinking and optimization algorithms.
–2. Computational aspects: the easy-to-compute axiom, the hard-to-manipulate axiom, and computational thinking + game-theoretic analysis
–3. Statistical approaches: a framework based on statistical decision theory, and model selection (Condorcet/Mallows vs. RUM)
Thank you!

Pairwise scoring rules. A pairwise scoring function is a function s: C × C × O → R such that, given o ∈ O, s scores each pairwise comparison in a partial order independently, denoted s(d ≻ d′, o), and s(P_PO, o) = Σ_{V_PO ∈ P_PO} Σ_{(d ≻ d′) ∈ V_PO} s(d ≻ d′, o). A pairwise scoring rule r_s selects the outcome that maximizes s(P_PO, o).

A pairwise scoring function s is weakly neutral, if for any pair of outcomes o and o′, there exists a permutation M over C such that for any pair of alternatives ( d, d′ ) s ( d ≻ d′, o ) = s ( M(d) ≻ M(d′), o′ ) 106 Weakly neutral pairwise scoring functions

Examples. Kemeny: s(d ≻ d′, W) = 1 if d ≻ d′ in W and 0 if d′ ≻ d in W. Borda: s(d ≻ d′, c) = q+ if d = c, q− if d′ = c, and 0 otherwise, with q+ > q−.

Characterizing MLE rules. Theorem [Xia&Conitzer IJCAI-11]. A voting rule is a pairwise scoring rule with a weakly neutral pairwise scoring function if and only if it is the MLE of a weakly neutral pairwise-independent model.

A closer look. The parameters Θ = (θ1, …, θm) of the utility distributions generate the perceived utilities (unobserved latent variables), which in turn generate the profile D (the data). E-step: using Bayes’ rule, independence among the agents, and independence among the μi’s, plug in the EF PDF and estimate the expected sufficient statistics S_i^{j,t+1} by Gibbs sampling. M-step: for each i, compute the θi that maximizes η(θi)·Σ_j S_i^{j,t+1} − n·A(θi).

Gibbs sampling. To generate the perceived utilities U_i^j given Θt and P_j: randomly choose (or round-robin over) an alternative i, then generate U_i^j given its neighbors U_{i−1}^j and U_{i+1}^j in the ranking, using a truncated EF sampler.

Condorcet vs. RUMs
–Ground truth: Condorcet: a ranking; RUMs: utility distributions
–Likelihood function: Condorcet: simple; RUMs: usually do not have a closed-form formula
–Hardness of computation: Condorcet: m! ground-truth rankings

Mallows model (details). Parameterized by a ranking: given a ground-truth ranking W and p > 1/2, generate each pairwise comparison in V independently; if c ≻ d in W, then c ≻ d in V with probability p and d ≻ c in V with probability 1−p. For example, Pr(b ≻ c ≻ a | a ≻ b ≻ c) = p(1−p)^2, a constant < 1. The MLE ranking is given by the Kemeny rule:
–Pr(P | W) = p^{nm(m−1)/2 − K(P,W)} · (1−p)^{K(P,W)}
–the winning rankings are insensitive to the choice of p (> 1/2)
