Felix Fischer, Ariel D. Procaccia and Alex Samorodnitsky.

Similar presentations
ST3236: Stochastic Process Tutorial 3 TA: Mar Choong Hock Exercises: 4.

Property testing of Tree Regular Languages Frédéric Magniez, LRI, CNRS Michel de Rougemont, LRI, University Paris II.
Learning Voting Trees Ariel D. Procaccia, Aviv Zohar, Yoni Peleg, Jeffrey S. Rosenschein.
Complexity of manipulating elections with few candidates Vincent Conitzer and Tuomas Sandholm Carnegie Mellon University Computer Science Department.
Study Group Randomized Algorithms 21 st June 03. Topics Covered Game Tree Evaluation –its expected run time is better than the worst- case complexity.
1 The Monte Carlo method. 2 (0,0) (1,1) (-1,-1) (-1,1) (1,-1) 1 Z= 1 If  X 2 +Y 2  1 0 o/w (X,Y) is a point chosen uniformly at random in a 2  2 square.
Approximation Algorithms for Unique Games Luca Trevisan Slides by Avi Eyal.
Approximating Average Parameters of Graphs Oded Goldreich, Weizmann Institute Dana Ron, Tel Aviv University.
Interchanging distance and capacity in probabilistic mappings Uriel Feige Weizmann Institute.
The number of edge-disjoint transitive triples in a tournament.
11 - Markov Chains Jim Vallandingham.
BAYESIAN INFERENCE Sampling techniques
Analysis of Network Diffusion and Distributed Network Algorithms Rajmohan Rajaraman Northeastern University, Boston May 2012 Chennai Network Optimization.
CPSC 668Set 7: Mutual Exclusion with Read/Write Variables1 CPSC 668 Distributed Algorithms and Systems Fall 2009 Prof. Jennifer Welch.
Data Structures, Spring 2004 © L. Joskowicz 1 Data Structures – LECTURE 14 Strongly connected components Definition and motivation Algorithm Chapter 22.5.
Approximability and Inapproximability of Dodgson and Young Elections Ariel D. Procaccia, Michal Feldman and Jeffrey S. Rosenschein.
Graph Clustering. Why graph clustering is useful? Distance matrices are graphs  as useful as any other clustering Identification of communities in social.
Reshef Meir, Ariel D. Procaccia, and Jeffrey S. Rosenschein.
Ariel D. Procaccia (Microsoft)  Best advisor award goes to...  Thesis is about computational social choice Approximation Learning Manipulation BEST.
1 Huffman Codes. 2 Introduction Huffman codes are a very effective technique for compressing data; savings of 20% to 90% are typical, depending on the.
Tirgul 10 Rehearsal about Universal Hashing Solving two problems from theoretical exercises: –T2 q. 1 –T3 q. 2.
CPSC 689: Discrete Algorithms for Mobile and Wireless Systems Spring 2009 Prof. Jennifer Welch.
Data Broadcast in Asymmetric Wireless Environments Nitin H. Vaidya Sohail Hameed.
From PET to SPLIT Yuri Kifer. PET: Polynomial Ergodic Theorem (Bergelson) preserving and weakly mixing is bounded measurable functions polynomials,integer.
Derandomizing LOGSPACE Based on a paper by Russell Impagliazo, Noam Nissan and Avi Wigderson Presented by Amir Rosenfeld.
Rooted Trees. More definitions parent of d child of c sibling of d ancestor of d descendants of g leaf internal vertex subtree root.
Automated Design of Voting Rules by Learning From Examples Ariel D. Procaccia, Aviv Zohar, Jeffrey S. Rosenschein.
1 02/09/05CS267 Lecture 7 CS 267 Tricks with Trees James Demmel
Preference Analysis Joachim Giesen and Eva Schuberth May 24, 2006.
Expanders Eliyahu Kiperwasser. What is it? Expanders are graphs with no small cuts. The later gives several unique traits to such graph, such as: – High.
1 On the Computation of the Permanent Dana Moshkovitz.
MATH 310, FALL 2003 (Combinatorial Problem Solving) Lecture 11, Wednesday, September 24.
Sampling and Approximate Counting for Weighted Matchings Roy Cagan.
Data Structures, Spring 2006 © L. Joskowicz 1 Data Structures – LECTURE 14 Strongly connected components Definition and motivation Algorithm Chapter 22.5.
Approximating The Permanent Amit Kagan Seminar in Complexity 04/06/2001.
Ramanujan Graphs of Every Degree Adam Marcus (Crisply, Yale) Daniel Spielman (Yale) Nikhil Srivastava (MSR India)
1 Biased card shuffling and the asymmetric exclusion process Elchanan Mossel, Microsoft Research Joint work with Itai Benjamini, Microsoft Research Noam.
Mixing Times of Markov Chains for Self-Organizing Lists and Biased Permutations Prateek Bhakta, Sarah Miracle, Dana Randall and Amanda Streib.
Mixing Times of Self-Organizing Lists and Biased Permutations Sarah Miracle Georgia Institute of Technology.
Ch. 8 & 9 – Linear Sorting and Order Statistics What do you trade for speed?
Efficient and Robust Query Processing in Dynamic Environments Using Random Walk Techniques Chen Avin Carlos Brito.
CS548 Advanced Information Security Presented by Gowun Jeong Mar. 9, 2010.
An Algorithmic Proof of the Lopsided Lovasz Local Lemma Nick Harvey University of British Columbia Jan Vondrak IBM Almaden TexPoint fonts used in EMF.
Chapter 1 Fundamental Concepts II Pao-Lien Lai 1.
Tree A connected graph that contains no simple circuits is called a tree. Because a tree cannot have a simple circuit, a tree cannot contain multiple.
Analysis of Algorithms CS 477/677
Markov Chains and Random Walks. Def: A stochastic process X={X(t),t ∈ T} is a collection of random variables. If T is a countable set, say T={0,1,2, …
Approximate Inference: Decomposition Methods with Applications to Computer Vision Kyomin Jung ( KAIST ) Joint work with Pushmeet Kohli (Microsoft Research)
Hierarchical Well-Separated Trees (HST) Edges’ distances are uniform across a level of the tree Stretch  = factor by which distances decrease from root.
The bin packing problem. For n objects with sizes s 1, …, s n where 0 < s i ≤1, find the smallest number of bins with capacity one, such that n objects.
CS774. Markov Random Field : Theory and Application Lecture 15 Kyomin Jung KAIST Oct
The Poincaré Constant of a Random Walk in High- Dimensional Convex Bodies Ivona Bezáková Thesis Advisor: Prof. Eric Vigoda.
Sampling algorithms and Markov chains László Lovász Microsoft Research One Microsoft Way, Redmond, WA 98052
Markov Ciphers and Differential Cryptanalysis Jung Daejin Lee Sangho.
Daphne Koller Sampling Methods Metropolis- Hastings Algorithm Probabilistic Graphical Models Inference.
5.6 Prefix codes and optimal tree Definition 31: Codes with this property which the bit string for a letter never occurs as the first part of the bit string.
Krishnendu ChatterjeeFormal Methods Class1 MARKOV CHAINS.
CSCE 668 DISTRIBUTED ALGORITHMS AND SYSTEMS
Markov Chains and Mixing Times
Sequential Algorithms for Generating Random Graphs
Markov Chains Mixing Times Lecture 5
Path Coupling And Approximate Counting
CS200: Algorithms Analysis
Seminar on Markov Chains and Mixing Times Elad Katz
Ilan Ben-Bassat Omri Weinstein
Switching Lemmas and Proof Complexity
Bin Packing Michael T. Goodrich Some slides adapted from slides from
Presentation transcript:

Felix Fischer, Ariel D. Procaccia and Alex Samorodnitsky

- A = {1, ..., m}: the set of alternatives
- A tournament is a complete and asymmetric relation T on A; T(A) denotes the set of tournaments on A
- The Copeland score of i in T is its outdegree
- Copeland winner: an alternative with maximum Copeland score in T
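As a concrete illustration not taken from the slides, a tournament on m alternatives can be stored as an m x m 0/1 matrix, and the Copeland score is simply a row sum; a minimal Python sketch:

```python
# Minimal sketch (not from the slides): Copeland scores and a Copeland winner
# for a tournament given as an m x m 0/1 matrix, where beats[i][j] == 1 means
# alternative i beats alternative j (exactly one direction holds per pair).

def copeland_scores(beats):
    """Copeland score of each alternative = its outdegree in the tournament."""
    return [sum(row) for row in beats]

def copeland_winner(beats):
    """An alternative with maximum Copeland score (ties broken by index)."""
    scores = copeland_scores(beats)
    return max(range(len(scores)), key=lambda i: scores[i])

# Example: the 3-cycle 0 -> 1 -> 2 -> 0; every alternative has score 1.
cycle = [[0, 1, 0],
         [0, 0, 1],
         [1, 0, 0]]
print(copeland_scores(cycle), copeland_winner(cycle))   # [1, 1, 1] 0
```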


- An alternative can appear multiple times among the leaves of the tree, or not appear at all (the labeling need not be surjective!)
- A voting tree Γ implements f: T(A) → A if f(T) = Γ(T) for all T
- Which functions f: T(A) → A can be implemented by voting trees?
- [Moulin 86] Copeland cannot be implemented when m ≥ 8
- [Srivastava and Trick 96] ... it can be when m ≤ 7
- Can Copeland be approximated by trees?
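To make evaluation concrete, here is a hedged sketch (the tuple encoding is my own, not the paper's): a leaf returns its label, and each internal node returns the winner of the pairwise match between the values of its two children.

```python
# Sketch of voting-tree evaluation (illustrative encoding, not from the paper).
# A tree is either an int (a leaf labeled by an alternative) or a pair
# (left_subtree, right_subtree); beats[i][j] == 1 means i beats j.

def evaluate(tree, beats):
    """Alternative selected by the voting tree on the tournament `beats`."""
    if isinstance(tree, int):
        return tree
    a = evaluate(tree[0], beats)
    b = evaluate(tree[1], beats)
    if a == b:
        return a                      # both children produced the same alternative
    return a if beats[a][b] else b    # the winner of the pairwise match advances

# On the 3-cycle 0 -> 1 -> 2 -> 0, the tree ((0, 1), 2) first matches 0 against 1
# (0 wins), then 0 against 2 (2 wins), so it selects 2.
cycle = [[0, 1, 0],
         [0, 0, 1],
         [1, 0, 0]]
print(evaluate(((0, 1), 2), cycle))   # 2
```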

- S_i(T) = Copeland score of i in T
- Deterministic model: a voting tree Γ has an α-approximation ratio if for all T, S_{Γ(T)}(T) / max_i S_i(T) ≥ α
- Randomized model: randomizations over voting trees; a distribution Δ over trees has an α-approximation ratio if for all T, E_{Γ∼Δ}[S_{Γ(T)}(T)] / max_i S_i(T) ≥ α
- A randomization is admissible if its support contains only surjective trees
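The deterministic definition can be checked by brute force on tiny instances. The sketch below is my own illustration (it assumes the copeland_scores and evaluate helpers from the earlier sketches are in scope) and enumerates all tournaments on m alternatives to compute the worst-case ratio of a fixed tree.

```python
from itertools import combinations, product

def all_tournaments(m):
    """Yield every tournament on {0, ..., m-1} as an m x m 0/1 matrix."""
    pairs = list(combinations(range(m), 2))
    for orientation in product([0, 1], repeat=len(pairs)):
        beats = [[0] * m for _ in range(m)]
        for (i, j), bit in zip(pairs, orientation):
            beats[i][j] = bit
            beats[j][i] = 1 - bit
        yield beats

def approx_ratio(tree, m):
    """Worst case of S_{Gamma(T)}(T) / max_i S_i(T) over all tournaments on m alternatives."""
    worst = 1.0
    for beats in all_tournaments(m):
        scores = copeland_scores(beats)
        worst = min(worst, scores[evaluate(tree, beats)] / max(scores))
    return worst

# Example: a single pairwise match between 0 and 1 that ignores alternative 2.
# The worst case is 1/2 here, since alternative 2 may beat both 0 and 1.
print(approx_ratio((0, 1), 3))   # 0.5
```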

 C  A is a component of T if  i,j  C, k  C, iTk  jTk  Lemma [Moulin 86]: T and T’ differ only inside a component C,  a voting tree, then  (T)  A\C  (T)=  (T’)

- m = 3k
- T is a 3-cycle of regular components of size k
- For every i, S_i(T) ≤ k + k/2
- Fix a voting tree Γ and consider its winner Γ(T)
- In T′, one of the components not containing Γ(T) is made transitive
- So there exists an i with S_i(T′) = k + (k−1), while by the lemma the winner does not change
- The approximation ratio therefore tends to ¾
[Figure: T′ with k = 5]
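Spelling out the arithmetic behind the ¾ bound (my reconstruction of the slide's claim, assuming k is odd so that regular components of size k exist):

```latex
% In T, every alternative beats the k members of the component it dominates
% plus (k-1)/2 members of its own regular component, so
%   S_i(T) = k + (k-1)/2 for all i.
% In T', the top element of the transitive component has score k + (k-1),
% while the tree's winner keeps score k + (k-1)/2 and, by Moulin's lemma,
% remains the winner. Hence
\[
  \frac{S_{\Gamma(T')}(T')}{\max_i S_i(T')}
  = \frac{k + \frac{k-1}{2}}{k + (k-1)}
  = \frac{3k-1}{4k-2}
  \;\longrightarrow\; \frac{3}{4} \quad (k \to \infty).
\]
```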

- Can we do very well in the randomized model?
- Theorem: no randomization over trees can achieve an approximation ratio better than 5/6 + O(1/m)
- Proof: similar ideas, plus Yao's minimax principle

- Main theorem: there exists an admissible randomization over voting trees of polynomial size with an approximation ratio of ½ − O(1/m)
- Keeping the trees small is important from a computer science point of view

- A 1-caterpillar is a singleton tree
- A k-caterpillar is a binary tree whose root has a (k−1)-caterpillar as its left child and a leaf as its right child
- A voting k-caterpillar is a k-caterpillar whose leaves are labeled by elements of A
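A sketch of the caterpillar construction in the same tuple encoding as the earlier evaluate helper (the encoding and function name are my own):

```python
# Sketch (illustrative tuple encoding): a k-caterpillar is a left-deep binary
# tree -- the left child of the root is a (k-1)-caterpillar, the right child a leaf.

def voting_caterpillar(labels):
    """Voting k-caterpillar whose k leaves carry `labels`, from the deepest
    leaf (labels[0]) up to the leaf hanging off the root (labels[-1])."""
    tree = labels[0]              # a 1-caterpillar is a singleton tree
    for label in labels[1:]:
        tree = (tree, label)      # each step adds one more leaf beside the root
    return tree

print(voting_caterpillar([0, 1, 2]))   # ((0, 1), 2)
```

Evaluating such a tree amounts to a sequential knockout: the current winner is matched against each remaining leaf label in turn.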

- k-RSC: the uniform distribution over surjective voting k-caterpillars
- Main theorem, reformulated: k-RSC with k = poly(m) has an approximation ratio of ½ − O(1/m)
- "Sketchiest proof ever":
  - k-RSC is close to k-RC (random, not necessarily surjective, caterpillars)
  - k-RC is identical to k steps of a Markov chain
  - k = poly(m) steps of the chain are close to its stationary distribution (rapid mixing, via spectral gap + conductance)
  - the stationary distribution of the chain gives a ½-approximation of Copeland
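The Markov-chain view can be simulated directly: evaluating a random (not necessarily surjective) k-caterpillar is the same as starting from a uniformly random alternative and, k−1 times, challenging the current winner with a fresh uniformly random alternative. The sketch below is my own Monte Carlo illustration of the k-RC step of the argument; the function names and the example tournament are assumptions, not from the paper.

```python
import random

def random_caterpillar_winner(beats, k, rng=random):
    """Simulate evaluating a uniformly random voting k-caterpillar on `beats`:
    k steps of the chain in which the current winner is challenged by a
    uniformly random alternative and replaced whenever the challenger beats it."""
    m = len(beats)
    current = rng.randrange(m)            # the deepest leaf of the caterpillar
    for _ in range(k - 1):
        challenger = rng.randrange(m)     # the next leaf up the caterpillar
        if beats[challenger][current]:
            current = challenger
    return current

def estimated_ratio(beats, k, trials=10_000, rng=random):
    """Monte Carlo estimate of E[S_{Gamma(T)}(T)] / max_i S_i(T) under k-RC."""
    scores = [sum(row) for row in beats]
    total = sum(scores[random_caterpillar_winner(beats, k, rng)] for _ in range(trials))
    return (total / trials) / max(scores)

# Example tournament: alternative 0 beats everyone else, the rest form a cycle.
# The chain is quickly absorbed at 0, so the estimate should be close to 1,
# comfortably above the 1/2 - O(1/m) guarantee.
beats = [[0, 1, 1, 1],
         [0, 0, 1, 0],
         [0, 0, 0, 1],
         [0, 1, 0, 0]]
print(estimated_ratio(beats, k=100))
```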

- Permutation trees give an Ω(log(m)/m)-approximation
- Huge randomized balanced trees intuitively do very well
- Theorem: arbitrarily large random balanced voting trees give an approximation ratio of at most O(1/m)

- The paper contains many additional results
- Randomized model: a gap between the lower bound of ½ (admissible, small trees) and the upper bound of 5/6 (which holds even for inadmissible, large trees)
- Deterministic model: an enigmatic gap between the lower bound of Ω(log m / m) and the upper bound of ¾