# The Complexity of Agreement: A 100% Quantum-Free Talk (Scott Aaronson, MIT)


People disagree. Why? Because some people are idiots? Because people care about things other than truth (winning debates, not admitting error, etc.)? Because even given the same information, people would reach different honest conclusions? Can rational agents ever agree to disagree solely because of differing information?

Aumann 1976: No. Suppose Alice and Bob are Bayesians with a common prior but different knowledge. Let E_A be Alice's estimate of (say) the chance of rain tomorrow, conditioned on all her knowledge, and let E_B be Bob's estimate, conditioned on all his knowledge. Suppose E_A and E_B are common knowledge (Alice and Bob both know their values, both know that they both know them, etc.). Theorem: E_A = E_B.

Standard Protocol: Alice announces her initial expectation E_A,0; Bob announces his new expectation E_B,1; Alice announces her new expectation E_A,2; Bob announces E_B,3; and so on. Geanakoplos & Polemarchakis 1982: provided the state space is finite, Alice and Bob will agree after a finite number of rounds.

What's Going On? Suppose the state space consists of two uniform bits {00, 01, 10, 11}: Alice knows the first bit, and Bob knows their sum. We can represent an agent's knowledge by a partition of the state space. If Alice and Bob don't agree with certainty, then one of them can learn something from the other's message, meaning that agent's partition gets refined. But the partitions can be refined only finitely many times.
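The refinement dynamic above can be simulated directly. The sketch below runs the standard protocol on the two-bit example; since the slide does not specify which quantity the agents estimate, we take f to be the AND of the two bits as an illustrative assumption. Each public announcement rules out the states in which the announcer would have said something else, refining both agents' effective partitions until the expectations coincide.

```python
from fractions import Fraction
from itertools import product

# State space: two uniform bits. Alice sees the first bit; Bob sees their
# sum mod 2. The estimated quantity f is not fixed by the slide; as an
# illustrative assumption we take f = AND of the two bits.
states = list(product([0, 1], repeat=2))
f = {s: Fraction(s[0] * s[1]) for s in states}

alice_key = lambda s: s[0]               # Alice's private signal
bob_key = lambda s: (s[0] + s[1]) % 2    # Bob's private signal

def cell(key, s, live):
    # States an agent considers possible: consistent with their own signal
    # and with every public message so far (encoded in `live`).
    return [t for t in live if key(t) == key(s)]

def expect(ss):
    return sum(f[t] for t in ss) / len(ss)

true_state = (1, 1)
live = list(states)                      # states not yet publicly ruled out
for _ in range(4):
    e_a = expect(cell(alice_key, true_state, live))
    # Alice announces e_a; states where she would announce otherwise die.
    live = [t for t in live if expect(cell(alice_key, t, live)) == e_a]
    e_b = expect(cell(bob_key, true_state, live))
    live = [t for t in live if expect(cell(bob_key, t, live)) == e_b]
    if e_a == e_b:
        break
print(e_a, e_b)  # expectations coincide once the partitions stop refining
```

With this choice of f, Alice's first announcement of 1/2 already rules out the states where the first bit is 0, and agreement is reached after two rounds of refinement.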

Problems: a huge number of messages might be needed before Alice and Bob agree, and the messages are real numbers. Conjecture: for some function f of Alice's input x ∈ {0,1}^n and Bob's input y ∈ {0,1}^n, and some distribution over (x,y), Alice and Bob will need to exchange Ω(n) bits even to probably approximately agree about EX[f(x,y)]. The intuition comes from communication complexity.

Main Result: Conjecture Is False. Say Alice and Bob (ε,δ)-agree if they agree within ε with probability at least 1−δ over their prior. For all f: {0,1}^n × {0,1}^n → [0,1], and all prior distributions over (x,y) pairs, Alice and Bob can (ε,δ)-agree about the expectation of f by exchanging only O(1/(δε²)) bits, independent of n. Moral: agreeing to disagree is problematic even for agents subject to realistic communication constraints.

Intuition: in the standard protocol, as long as Alice and Bob disagree by ε, their expectations E_A and E_B follow an unbiased random walk with step size ε. But since E_A, E_B ∈ [0,1], such a walk hits an absorbing barrier after ~1/ε² steps. To prove this, use E_A² and E_B² as progress measures.
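The ~1/ε² barrier-hitting time is easy to see empirically. The toy simulation below (an illustration, not part of the talk) runs an unbiased walk on multiples of ε starting at 1/2, absorbed at 0 and 1, and averages the hitting time over many seeded trials.

```python
import random

def steps_to_absorb(eps, rng):
    # Unbiased walk on multiples of eps, started at 1/2, absorbed at 0 and 1.
    n = round(1 / eps)        # number of eps-sized levels spanning [0, 1]
    k, t = n // 2, 0
    while 0 < k < n:
        k += 1 if rng.random() < 0.5 else -1
        t += 1
    return t

for eps in (0.1, 0.05):
    mean = sum(steps_to_absorb(eps, random.Random(i)) for i in range(300)) / 300
    print(eps, mean)  # gambler's-ruin mean from the middle is 0.25 / eps**2
```

Halving the step size quadruples the average number of steps, matching the 1/ε² intuition.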

A bit more formally: let Ω = {0,1}^n × {0,1}^n be the state space, and let D be the shared prior distribution. Let E_A,t(ω) and E_B,t(ω) be Alice's and Bob's expectations at time t, assuming the true state of the world is ω. For any function F: Ω → [0,1], let ||F|| = EX_{ω~D}[F(ω)²].

Lemma: Suppose Alice sends the t-th message. Then ||E_B,t|| − ||E_A,t−1|| = ||E_B,t − E_A,t−1||. Proof: The crucial observation is that if Alice has just sent Bob a message, then her expectation of Bob's expectation equals her expectation, so EX[E_A,t−1·E_B,t] = EX[(E_A,t−1)²]. Hence ||E_B,t − E_A,t−1|| = EX[(E_B,t)²] − 2·EX[E_A,t−1·E_B,t] + EX[(E_A,t−1)²] = ||E_B,t|| − ||E_A,t−1||.
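The lemma can be checked in exact arithmetic on a toy instance. In the sketch below (an illustration; the prior, f, and partitions are arbitrary choices, and Alice's "message" is modeled as revealing her partition cell), E_A is Alice's pre-message expectation and E_B is Bob's expectation after refining by her cell.

```python
from fractions import Fraction

# Illustrative prior D over four states and an arbitrary [0,1]-valued f.
prior = [Fraction(1, 10), Fraction(2, 10), Fraction(3, 10), Fraction(4, 10)]
f     = [Fraction(0), Fraction(1, 3), Fraction(1, 2), Fraction(1)]

alice_cells = [{0, 1}, {2, 3}]      # Alice's partition (her private signal)
bob_cells   = [{0, 2}, {1, 3}]      # Bob's partition

def cond_exp(cell):
    mass = sum(prior[w] for w in cell)
    return sum(prior[w] * f[w] for w in cell) / mass

def find(cells, w):
    return next(c for c in cells if w in c)

# E_{A,t-1}: Alice's expectation just before her message.
E_A = [cond_exp(find(alice_cells, w)) for w in range(4)]
# E_{B,t}: Bob's expectation after learning which cell Alice is in.
E_B = [cond_exp(find(bob_cells, w) & find(alice_cells, w)) for w in range(4)]

def norm(F):  # ||F|| = EX_D[F(ω)²]
    return sum(prior[w] * F[w] ** 2 for w in range(4))

lhs = norm(E_B) - norm(E_A)
rhs = norm([E_B[w] - E_A[w] for w in range(4)])
print(lhs == rhs)  # the lemma holds exactly
```

Because Bob's post-message information refines Alice's cell, the tower property gives EX[E_A·E_B] = EX[E_A²], which is exactly why the cross term cancels.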

Theorem: The standard protocol causes Alice and Bob to (ε,δ)-agree after 1/(δε²) messages. Proof: Suppose Alice sends the t-th message, and suppose Alice and Bob still disagree by at least ε with probability more than δ. Then ||E_B,t − E_A,t−1|| > δε², so by the lemma ||E_B,t|| > ||E_A,t−1|| + δε². Likewise, after Bob sends Alice the (t+1)-st message, ||E_A,t+1|| > ||E_B,t|| + δε². But max{||E_A,t||, ||E_B,t||} is at least 0 initially and can never exceed 1.

Weren't we cheating, since messages in the standard protocol are unbounded-precision real numbers? Discretized standard protocol: let Charlie (C) be a monkey in the middle who sees Alice and Bob's messages but doesn't know their inputs. At the t-th step, Alice tells Bob whether E_A,t > E_C,t + ε/4, E_A,t < E_C,t − ε/4, or neither. Bob does likewise. Theorem: The discretized standard protocol causes Alice and Bob to (ε,δ)-agree after at most 3072/(δε²) messages.
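The message rule itself is just a three-way comparison. The sketch below (function name and encoding are illustrative, not from the talk) shows how an agent's real-valued expectation is compressed to one of three symbols relative to Charlie's expectation.

```python
def discretized_message(e_agent, e_charlie, eps):
    # Three-valued message of the discretized standard protocol:
    # "high"   if the agent's expectation exceeds Charlie's by more than eps/4,
    # "low"    if it falls below Charlie's by more than eps/4,
    # "medium" otherwise.
    if e_agent > e_charlie + eps / 4:
        return "high"
    if e_agent < e_charlie - eps / 4:
        return "low"
    return "medium"

print(discretized_message(0.60, 0.5, 0.2))  # 0.60 > 0.5 + 0.05 → "high"
print(discretized_message(0.52, 0.5, 0.2))  # within ±0.05 of 0.5 → "medium"
```

Each message now costs only log₂ 3 bits, which is what makes the O(1/(δε²))-bit bound possible.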

Two interesting questions: 1. Does the standard protocol ever need ~1/ε² messages? 2. Is there ever a protocol that does better? An example answering both questions: let Alice's input x = x_1…x_n and Bob's input y = y_1…y_n be uniform over {−1,1}^n, with f a suitable function of x and y.

Theorem: The standard protocol requires Ω(1/ε²) messages before Alice and Bob's expectations of f agree within ε with constant probability. On the other hand, there's an attenuated protocol that causes them to agree within ε with constant probability after only O(1) messages.

Three or More Agents: Could a weak-willed Charlie fail to bring Alice and Bob into agreement with each other? Theorem: On a strongly connected graph with N agents and diameter d, a number of messages depending only on N, d, ε, and δ suffices for every pair of agents to (ε,δ)-agree.

Computational Complexity: Fine, few messages suffice for agreement, but what if it took Alice and Bob billions of years to calculate their messages? Result: provided the agents can sample from their initial partitions, they can simulate Bayesian rationality using a number of samples independent of n (but alas, exponential in 1/ε⁶). Meaning: by examining their messages, you couldn't distinguish these Bayesian wannabes from true Bayesians with non-negligible bias.

Computational Complexity Problem: An agent's expectation could lie on a knife-edge between two messages. Solution: have the agents smooth their messages by adding random noise to them.
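A minimal sketch of the smoothing idea (the function and parameters are illustrative assumptions, not the talk's exact scheme): perturb the expectation with small uniform noise before thresholding, so that an expectation sitting exactly on the knife-edge emits each message about half the time instead of flipping unstably under tiny estimation errors.

```python
import random

def smoothed_message(expectation, threshold, noise, rng):
    # Illustrative smoothing: add uniform noise in [-noise, noise] before
    # comparing to the threshold, so a knife-edge expectation yields each
    # message with probability ~1/2 rather than deterministically.
    return int(expectation + rng.uniform(-noise, noise) > threshold)

rng = random.Random(0)
frac = sum(smoothed_message(0.5, 0.5, 0.05, rng) for _ in range(2000)) / 2000
print(frac)  # close to 0.5 for an expectation exactly on the threshold
```

The point is that the message distribution now varies continuously with the expectation, so small sampling errors change it only slightly.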

Open Problems: Do Alice and Bob need to exchange Ω(1/ε²) bits to agree within ε with high probability? The best lower bound I can show is the trivial Ω(log 1/ε). Is there a scenario where Alice and Bob must exchange Ω(n) bits to (ε,0)-agree? Can Bayesian wannabes agree within ε with high probability using poly(1/ε) samples, rather than 2^poly(1/ε)?

Open Problems (cont'd): Suppose Alice and Bob (ε,δ)-agree about the expectation of f: Ω → [0,1]. Then do they also have approximate common knowledge? (Alice is pretty sure of Bob's expectation, Alice is pretty sure Bob's pretty sure of her expectation, Alice is pretty sure Bob's pretty sure she's pretty sure of his expectation, etc.)
