# Learning Voting Trees

Ariel D. Procaccia, Aviv Zohar, Yoni Peleg, Jeffrey S. Rosenschein


## Lecture Outline

- Voting and Voting Trees
- PAC Learning
- Results
- Limitations

## Voting

Election: a set of voters N = {1,...,n} and a set of candidates C = {a, b, c, ...}. Voters have linear preferences over the candidates. The winner of the election is determined according to a voting rule F.

- Plurality: each voter gives 1 point to the candidate it ranks first; the candidate with the most points wins.
- Copeland: x1 beats x2 in a pairwise election if a majority of voters prefer x1 to x2. A candidate's score is the number of other candidates it beats in pairwise elections, and the candidate with the highest score wins.

## Voting: Example

Example: N = {1,2,3,4,5}, C = {a,b,c,d}

- Voter 1: a > b > c > d
- Voter 2: a > b > c > d
- Voter 3: b > c > d > a
- Voter 4: c > b > d > a
- Voter 5: d > b > c > a

Plurality: a wins; Copeland: b wins. Note that Copeland only depends on the outcomes of the pairwise elections.
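The two rules can be checked on this profile with a short script; this is an illustrative sketch, and the function names `plurality`, `beats`, and `copeland` are ours, not the paper's:

```python
from collections import Counter

def plurality(profile):
    # Count how often each candidate is ranked first; most firsts wins.
    firsts = Counter(ranking[0] for ranking in profile)
    return max(firsts, key=firsts.get)

def beats(x, y, profile):
    # x beats y in a pairwise election if a strict majority ranks x above y.
    return sum(r.index(x) < r.index(y) for r in profile) > len(profile) / 2

def copeland(profile, candidates):
    # A candidate's score is the number of others it beats pairwise.
    score = {c: sum(beats(c, d, profile) for d in candidates if d != c)
             for c in candidates}
    return max(score, key=score.get)

profile = [list("abcd"), list("abcd"), list("bcda"), list("cbda"), list("dbca")]
print(plurality(profile))         # a
print(copeland(profile, "abcd"))  # b
```

As the slide notes, the two rules disagree here: a has the most first-place votes, but b beats every other candidate pairwise except none (it beats a, c, and d), giving it the top Copeland score.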

## Tournaments

A tournament over C is a complete, irreflexive, asymmetric binary relation over C; it summarizes the results of the pairwise elections. Example: N = {1,2,3}, C = {a,b,c}

- Voter 1: c > b > a
- Voter 2: b > a > c
- Voter 3: a > c > b

Overall: a < b, b < c, c < a (the Condorcet paradox). A (pairwise) voting rule is a function from tournaments to candidates.
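Extracting the majority tournament from a preference profile can be sketched as follows; the pair-set encoding of the tournament is an assumption for illustration:

```python
from itertools import combinations

def majority_tournament(profile, candidates):
    # Returns a set of ordered pairs (x, y) meaning "x beats y" by majority.
    wins = set()
    for x, y in combinations(candidates, 2):
        x_over_y = sum(r.index(x) < r.index(y) for r in profile)
        wins.add((x, y) if x_over_y > len(profile) / 2 else (y, x))
    return wins

profile = [list("cba"), list("bac"), list("acb")]
print(sorted(majority_tournament(profile, "abc")))
# [('a', 'c'), ('b', 'a'), ('c', 'b')]  -- the Condorcet cycle
```

Even though every individual preference is a linear order, the majority relation here is cyclic, which is exactly the Condorcet paradox from the slide.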

## Voting Trees

[Figure: an example voting tree over the tournament a < b, b < c, c < a. Each leaf is labeled with a candidate; each internal node is won by the pairwise winner of its two children's winners, and the winner at the root is the outcome.]
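The evaluation rule a voting tree implements can be sketched as follows; the nested-tuple tree encoding and the function name `tree_winner` are assumptions for illustration:

```python
def tree_winner(tree, beats):
    # tree: either a candidate name (leaf) or a pair (left, right) of subtrees.
    # beats: set of ordered pairs (x, y) meaning "x beats y" in the tournament.
    if isinstance(tree, str):
        return tree
    left = tree_winner(tree[0], beats)
    right = tree_winner(tree[1], beats)
    return left if (left, right) in beats else right

# The cyclic tournament a < b, b < c, c < a from the previous slide:
beats = {("b", "a"), ("c", "b"), ("a", "c")}
print(tree_winner((("a", "b"), "c"), beats))  # c
```

In this cyclic tournament, the winner depends entirely on the tree's structure: a first matches b and loses, and the surviving b then loses to c at the root.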

## Voting Trees

Voting trees are everywhere! They give a concise representation of (pairwise) voting rules:

- In general there are doubly exponentially many pairwise voting rules, so a direct representation is exponential.
- Voting trees capture many rules, such as Copeland.

Given some (pairwise) voting rule, we want to find a concise voting-tree representation that is as accurate as possible. Idea: use learning. The designer labels tournaments with winners, and a learning algorithm outputs a "good" voting tree.

## PAC Learning

We want to learn a voting rule f (not necessarily a tree). The training set consists of example pairs (T_j, f(T_j)), where the tournaments T_j are drawn from a fixed distribution D. Define err(h) = Pr_D[h(T) ≠ f(T)], and let f* minimize err(h) over all voting trees. Goal: given ε > 0, find a voting tree g such that err(g) ≤ err(f*) + ε. Question: how many examples are needed to guarantee that the goal is achieved with probability at least 1 − δ?
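A standard agnostic-PAC bound for a finite hypothesis class shows how the class size enters the sample complexity. This is a generic textbook bound, not necessarily the exact constants in the paper, and it plugs in the tree count |H| ≤ m^k · k! established later in the talk:

```python
import math

def sample_bound(num_candidates, k, eps, delta):
    # Generic agnostic-PAC bound for a finite class:
    #   samples >= (2 / eps^2) * (ln|H| + ln(2 / delta)).
    # Uses |H| <= m^k * k! (trees with at most k leaves over m candidates),
    # so ln|H| <= k * ln(m) + ln(k!).
    log_H = k * math.log(num_candidates) + math.lgamma(k + 1)
    return math.ceil((2 / eps**2) * (log_H + math.log(2 / delta)))

print(sample_bound(num_candidates=10, k=20, eps=0.1, delta=0.05))
```

Because ln|H| is only polynomial in m and k, the required training set is polynomial as well, which is the point of restricting to small trees.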

## Formulation of Theorems

Theorem: An exponential training set is needed to learn general voting trees.

So we restrict attention to the class of voting trees of polynomial size (at most k leaves). Lemma: if the size of this class is (only) exponential, the following algorithm achieves the goal with a polynomial training set: return the tree in the class that minimizes the number of mistakes on the training set.

Theorem: |{voting trees with ≤ k leaves}| is at most exponential in m and k.

Proof sketch: size ≤ (# possible structures) × (# assignments of candidates to leaves) ≤ (# possible structures) × m^k.

## Number of Tree Structures

[Figure: illustration of counting the possible tree structures; omitted from the transcript.]

## Proof Continued

Theorem: |{trees with ≤ k leaves}| ≤ m^k × k!

Proof:

- size ≤ k × (# assignments to leaves) × (# possible structures)
- Lemma: any tree structure with k leaves can be obtained by k − 1 leaf splits.
- Hence # structures ≤ (k − 1)!
- size ≤ k × (k − 1)! × m^k = k! × m^k ∎
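The gap between this bound and the full space of rules can be made concrete by comparing logarithms: there are 2^C(m,2) tournaments, so there are m^(2^C(m,2)) pairwise voting rules, a doubly exponential number. A small arithmetic sketch (the function names are ours):

```python
import math

def ln_trees(m, k):
    # ln of the slide's upper bound m^k * k! on trees with <= k leaves:
    # polynomial in m and k.
    return k * math.log(m) + math.lgamma(k + 1)

def ln_num_rules(m):
    # ln of m ** (2 ** C(m, 2)): still exponential in m, so the count
    # itself is doubly exponential.
    pairs = m * (m - 1) // 2
    return 2 ** pairs * math.log(m)

print(ln_trees(5, 10))    # roughly 31
print(ln_num_rules(5))    # roughly 1648, already vastly larger at m = 5
```

Even at five candidates the log of the number of rules dwarfs the log of the number of small trees, which is why small trees cannot represent most rules exactly and approximation becomes the relevant question.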

## Approximation by Voting Trees

A voting rule g is a c-approximation of f iff f and g agree on a c-fraction of the tournaments. Theorem: most voting rules cannot be approximated by small voting trees to a factor better than 1/2. This result is not as negative as it sounds.

## Closing Remarks

Computational learning theory serves as a method to concisely represent voting rules. Another concisely representable family: scoring rules.

- Defined by a vector ⟨α_1, ..., α_m⟩.
- Efficiently PAC learnable.

Open questions: which voting rules can be approximated? Under which underlying distributions?

## Encore: Computational Complexity

So far we were interested in sample complexity. Recall: if the size of the class is (only) exponential, the following algorithm achieves the goal with a polynomial training set: return the tree that minimizes the number of mistakes on the training set. Theorem: finding such a tree is NP-hard. In practice, the complexity depends on the structure of the tree.

## A Graph!

[Figure: plot omitted from the transcript.]
