Majority and Minority games

Let G be a graph with all degrees odd. Each vertex is initially randomly assigned a colour (black or white), and at each time step every vertex updates to the majority colour among its neighbours. This process always reaches a fixed point or a 2-cycle (Goles and Olivos; Poljak and Sůra), and the number of steps required to do so is less than the number of edges (Poljak and Turzík).
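As a concrete illustration (a minimal Python sketch with my own choice of graph and function names, not code from the authors), the synchronous majority update and the detection of the eventual fixed point or 2-cycle can be simulated directly; the 3-cube Q3 is used as an example of a graph with all degrees odd.

import random

def majority_step(adj, colours):
    # One synchronous update: each vertex adopts the majority colour
    # among its neighbours (all degrees are odd, so there are no ties).
    return {v: int(2 * sum(colours[u] for u in adj[v]) > len(adj[v]))
            for v in adj}

def run_until_periodic(adj, colours):
    # Iterate until a state repeats; by the results quoted above the
    # eventual period is 1 or 2, and it is reached in fewer steps than
    # the number of edges.
    seen, history, t = {}, [], 0
    state = tuple(colours[v] for v in sorted(adj))
    while state not in seen:
        seen[state] = t
        history.append(dict(colours))
        colours = majority_step(adj, colours)
        state = tuple(colours[v] for v in sorted(adj))
        t += 1
    return history, seen[state], t - seen[state]   # trajectory, cycle start, period

# Example: the 3-cube Q3 (3-regular); vertices 0..7, neighbours differ in one bit.
adj = {v: [v ^ 1, v ^ 2, v ^ 4] for v in range(8)}
colours = {v: random.randint(0, 1) for v in adj}
history, start, period = run_until_periodic(adj, colours)
print("reached a cycle of period", period, "after", start, "steps")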

Suppose each vertex is aiming to be the same colour as most of its neighbours. Say a vertex is happy at time t if its colour agrees with the majority of its neighbours at time t. The process must reach a fixed point or 2-cycle, so for t sufficiently large each vertex is either always happy or always unhappy. What proportion of vertices are ultimately happy on average? Call this h(G). If G is bipartite, h(G) = ½. This is the worst case; in fact for any G every vertex has probability at least ½ of being ultimately happy. We have necessary and sufficient conditions for equality.
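Continuing the sketch above (and reusing majority_step, run_until_periodic and the example adjacency adj from it), h(G) can be estimated by Monte Carlo; this is purely illustrative.

def estimate_h(adj, trials=5000):
    # Monte Carlo estimate of h(G): the expected proportion of vertices
    # that are ultimately happy, over a uniformly random initial colouring.
    n, total = len(adj), 0.0
    for _ in range(trials):
        colours = {v: random.randint(0, 1) for v in adj}
        history, start, _ = run_until_periodic(adj, colours)
        now = history[start]              # first state of the eventual cycle
        nxt = majority_step(adj, now)     # each vertex's neighbourhood majority
        total += sum(now[v] == nxt[v] for v in adj) / n
    return total / trials

# Q3 is bipartite, so this should come out close to 1/2.
print(estimate_h(adj))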

If G is 3-regular and contains a 3- or 5-cycle then h(G)>½. This is not true for longer odd cycles or for higher regularity.

What if some vertices use the opposite (minority) rule? If all vertices do this then the process will evolve in exactly the same way, except the colours are swapped when t is odd. In particular, again we always reach a fixed point or 2-cycle. If G is bipartite with one part playing the majority rule and the other part playing the minority rule, we always reach a 4-cycle.

Suppose that there is some set of special vertices X with no two vertices at distance less than 4, such that no triangle of G meets X. If every vertex not in X plays the majority rule and the vertices in X do anything that depends only on the colours of their neighbours, then we must eventually reach a 1-, 2- or 4-cycle. The same result applies if G is bipartite, and every vertex not in X plays the majority rule if it is in the first part and the minority rule if it is in the second. (We have the same conditions on X, but now the triangle condition is automatically satisfied.)

In particular, if all of the vertices play the majority rule apart from one vertex, v, which plays the minority rule, and v is not in a triangle, then we always reach a 1-, 2- or 4-cycle. The requirement that no triangle contains v is necessary.

What if vertices start off playing the majority rule but swap if that isn't working well? One possible scheme is for every vertex which is unhappy at time t to swap rules. If the swap happens before the rule is applied to obtain the colour at time t+1, we simply run through the same states more slowly, so we should swap afterwards. This gives a map from the colours and rules at time t to the colours and rules at time t+1. We can show this map is a bijection, so the process eventually returns to its starting state. This makes it easy to compute the long-run average happiness for a given graph. The return time can be surprisingly long (up to 2916 steps for one 10-vertex cubic graph).
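A sketch of this scheme (again with my own function names, reusing the adjacency dictionary adj and the random import from the first example): one step updates every colour using the current rules, then swaps the rule of every vertex that was unhappy at time t. Since the step map is a bijection, the return time below is finite.

def swap_if_unhappy_step(adj, colours, rules):
    # rules[v] = 0 means v plays the majority rule, 1 the minority rule.
    # Compute every new colour, then swap the rule of each vertex that was
    # unhappy at time t (i.e. disagreed with its neighbourhood majority).
    new_colours, new_rules = {}, {}
    for v in adj:
        majority = int(2 * sum(colours[u] for u in adj[v]) > len(adj[v]))
        new_colours[v] = majority if rules[v] == 0 else 1 - majority
        unhappy = colours[v] != majority
        new_rules[v] = 1 - rules[v] if unhappy else rules[v]
    return new_colours, new_rules

def return_time(adj, colours, rules):
    # Steps until the (colours, rules) state first returns to the start;
    # finite because the step map is a bijection.
    start = (tuple(colours[v] for v in sorted(adj)),
             tuple(rules[v] for v in sorted(adj)))
    t = 0
    while True:
        colours, rules = swap_if_unhappy_step(adj, colours, rules)
        t += 1
        if (tuple(colours[v] for v in sorted(adj)),
                tuple(rules[v] for v in sorted(adj))) == start:
            return t

colours = {v: random.randint(0, 1) for v in adj}
rules = {v: 0 for v in adj}          # everyone starts with the majority rule
print(return_time(adj, colours, rules))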

This scheme seems generally to do worse than the simple majority rule, but better than 50% for non-bipartite graphs, and it does sometimes beat simple majority. If G is bipartite we still get exactly 50%; does the scheme always do better when G is not bipartite? Can we prove any good upper bound? An alternative is to do the same thing with the restriction that no vertex swaps rules twice in a row. This seems to do much better, even for bipartite graphs: we can show that for K_{r,r} there will be 3r/2 − O(√r) happy vertices on average. Again, can we prove that it always does better?
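A sketch of the restricted variant (purely illustrative, following the conventions of the previous snippet): each vertex additionally remembers whether it swapped at the last step and is then barred from swapping again immediately.

def no_double_swap_step(adj, colours, rules, swapped_last):
    # Same as swap_if_unhappy_step, except a vertex that swapped its rule
    # at the previous step is not allowed to swap again this step.
    new_colours, new_rules, new_swapped = {}, {}, {}
    for v in adj:
        majority = int(2 * sum(colours[u] for u in adj[v]) > len(adj[v]))
        new_colours[v] = majority if rules[v] == 0 else 1 - majority
        wants_swap = (colours[v] != majority) and not swapped_last[v]
        new_rules[v] = 1 - rules[v] if wants_swap else rules[v]
        new_swapped[v] = wants_swap
    return new_colours, new_rules, new_swapped

# Initially no vertex has just swapped.
swapped_last = {v: False for v in adj}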

We also consider what happens with random errors. There are two obvious settings: vertices may make observation errors or play errors, independently with some probability p < ½. Both fit into a general scheme in which a vertex v with k black neighbours plays black with probability π(v,k), where π(v,k) is increasing in k for each v. Write h(t) for the expected proportion of happy vertices at time t. We can show that in any such scheme h(t) ≥ ½ for every t. These probabilities give a Markov chain on the possible states of the graph, and h(t) tends to a limit given by the expected happiness under the limit distribution, h_∞(p).
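For concreteness (my own illustrative functions, not taken from the paper), the two error models give the following π(v,k) for a vertex of odd degree deg: with play errors the vertex computes the true majority and then plays the wrong colour with probability p; with observation errors each neighbour's colour is misread independently with probability p and the vertex plays the majority of what it observes.

from math import comb

def pi_play_error(deg, k, p):
    # Play the correct majority colour with probability 1 - p.
    return 1 - p if 2 * k > deg else p

def pi_observation_error(deg, k, p):
    # Observed black count = (black neighbours seen correctly)
    # + (white neighbours misread as black); play black if it exceeds deg/2.
    prob = 0.0
    for i in range(k + 1):                 # black neighbours seen as black
        for j in range(deg - k + 1):       # white neighbours misread as black
            if 2 * (i + j) > deg:
                prob += (comb(k, i) * (1 - p) ** i * p ** (k - i)
                         * comb(deg - k, j) * p ** j * (1 - p) ** (deg - k - j))
    return prob

# Both are increasing in k when p < 1/2, as the general scheme requires.
print([round(pi_observation_error(3, k, 0.1), 3) for k in range(4)])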

Results for the Wagner graph generated using PRISM.

Perhaps surprisingly, h(t) does not have to be increasing.

For play errors it is not true that h_∞(p) → 1 as p → 0; the graph shown is a counterexample. The majority game on this graph has six essentially different configurations which arise as fixed points or 2-cycles.

[Diagram: transitions between these configurations, labelled with probabilities of order 2p², 2p, 6p², 7p², 8p², 3p², 4p and 6p.]

For the same graph with observation errors, h_∞(p) → 1 as p → 0. Is this always the case? Is it always the case for cubic graphs? Balister, Bollobás, Johnson and Walters studied "Random Majority Percolation": play errors on the (4-regular) discrete torus, with the current colour of a vertex used to break ties, which gives strong links to Bootstrap Percolation. (In Bootstrap Percolation on a lattice each vertex is on or off; vertices switch on if enough of their neighbours are already on, and never switch off.) They showed that for very small p the torus spends almost all of its time in a state with all vertices the same colour.

What if we limit the number of observation errors a vertex makes at any single time step? If a vertex of degree 2r+1 may make up to r errors, then it has a positive probability of playing either colour unless its neighbours are unanimous. If G is not bipartite, this guarantees that the process eventually reaches a state with all vertices the same colour and then stays there. For cubic G there is just one parameter: the probability p of playing incorrectly when the neighbours are split 2:1.
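A purely illustrative sketch of the cubic case (graph and names chosen by me): if a vertex's three neighbours are unanimous it copies them, and otherwise it plays the minority colour with probability p. On a non-bipartite cubic graph such as K4 the process is eventually absorbed in a monochromatic state.

import random

def limited_error_step(adj, colours, p):
    # Cubic case: copy unanimous neighbours; if they are split 2:1,
    # play the wrong (minority) colour with probability p.
    new = {}
    for v in adj:
        black = sum(colours[u] for u in adj[v])
        majority = int(black >= 2)
        if black in (0, 3) or random.random() >= p:
            new[v] = majority
        else:
            new[v] = 1 - majority
    return new

def time_to_monochromatic(adj, p):
    colours = {v: random.randint(0, 1) for v in adj}
    t = 0
    while len(set(colours.values())) > 1:
        colours = limited_error_step(adj, colours, p)
        t += 1
    return t

# K4: the complete graph on 4 vertices (cubic and not bipartite).
k4 = {v: [u for u in range(4) if u != v] for v in range(4)}
print(time_to_monochromatic(k4, p=0.1))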

This can take a long time to reach a monochromatic state if p is close to 0: it is of order p^{-k} for some k, but k is unbounded.

If vertices of degree 2r+1 may make at most r−1 errors, then we may get stuck in a 2-cycle in which some vertices are unhappy.

If we only make errors when the neighbours are split 3:2, we seem to be better off with more errors.