Algorithmic Game Theory, Complexity and Learning




1 Algorithmic Game Theory, Complexity and Learning
Constantinos (a.k.a. “Costis”) Daskalakis EECS, MIT

2 Algorithmic Game Theory, Complexity and Learning
- Complexity of Equilibria (part 1) - Mechanism Design (part 2) “reverse game theory” Game Theory: How to analyze economic institutions that are already in place? MD: How to design economic institutions?

3 Algorithmic Game Theory, Complexity and Learning
- Complexity of Equilibria (part 1) - Mechanism Design (part 2) “reverse game theory”

4 Economics equilibrium

5 ∃ prices ⇒ supply=demand
Timeline: Smith 1776, Cournot 1838, Walras 1874 (equilibrium: ∃ prices ⇒ supply = demand); Brouwer 1911, von Neumann 1928, Nash 1950, Arrow 1954, Debreu 1954, McKenzie 1954.

6 [Myerson’99]: “Nash's theory of non-cooperative games should now be recognized as one of the outstanding intellectual advances of the twentieth century. The formulation of Nash equilibrium has had a fundamental and pervasive impact in Economics and the Social Sciences which is comparable to that of the discovery of the DNA double helix in the biological sciences.”

7 Economics equilibrium computation

8 How long to equilibrium?

9 [Irving Fisher 1891]: Hydraulic apparatus for calculating the equilibrium of a 3-person, 3-commodity exchange economy.

10 How long to equilibrium?
Universality requires tractability: "If your laptop can't find the equilibrium, how can the market?" [Kamal Jain]

11 want to study the computational features of these theorems
von Neumann 1928 Nash 1950 Brouwer 1911 want to study the computational features of these theorems

12 Menu Equilibria Existence proofs Complexity of Equilibria
Minimax Nash Brouwer Complexity of Equilibria Total Search Problems in NP PPAD The World Beyond Complexity Online Learning

13 Menu Equilibria Existence proofs Complexity of Equilibria
Minimax Nash Brouwer Complexity of Equilibria Total Search Problems in NP PPAD The World Beyond Complexity Online Learning

14 Games and Equilibria (the Penalty Shot Game)

               Dive Left    Dive Right
  Kick Left     1 , -1       -1 , 1
  Kick Right   -1 , 1         1 , -1

Equilibrium: A pair (x, y) of randomized strategies so that no player has incentive to deviate if the other does not:
x^T R y ≥ x'^T R y, ∀ x'    and    x^T C y ≥ x^T C y', ∀ y'
(the column player's expected payoff is Σ_{i,j} C_ij · x_i · y_j). In the Penalty Shot Game, both players mixing (1/2, 1/2) is the equilibrium.
[von Neumann '28]: An equilibrium exists in every two-player zero-sum game (R + C = 0)
+[Dantzig '40s]: …in fact, this follows from strong LP duality
+[Khachiyan '79]: …and is computable in polynomial time
+[Blackwell '56++]: …and distributed dynamics converge.

15 Games and Equilibria (a non-zero-sum variant of the Penalty Shot Game)

               Dive Left    Dive Right
  Kick Left     2 , -1       -1 , 1
  Kick Right   -1 , 1         1 , -1

Equilibrium: A pair (x, y) of randomized strategies so that no player has incentive to deviate if the other does not:
x^T R y ≥ x'^T R y, ∀ x'    and    x^T C y ≥ x^T C y', ∀ y'
Here the row player mixes (1/2, 1/2) and the column player mixes (2/5, 3/5) at equilibrium.
[Nash '50/'51]: An equilibrium exists in every finite game. The proof used Kakutani's/Brouwer's fixed point theorem, and no constructive proof has been found in 70+ years; the same is true for economic equilibria.

16 Menu Equilibria Existence proofs Complexity of Equilibria
Minimax Nash Brouwer Complexity of Equilibria Total Search Problems in NP PPAD The World Beyond Complexity Online Learning

17 Menu Equilibria Existence proofs Complexity of Equilibria
Minimax Nash Brouwer Complexity of Equilibria Total Search Problems in NP PPAD The World Beyond Complexity Online Learning

18 von Neumann

19 The Minimax Theorem [von Neumann '28]: Suppose X and Y are compact convex sets, and f : X × Y → ℝ is a continuous function that is convex-concave, i.e. f(⋅, y) is convex for every fixed y, and f(x, ⋅) is concave for every fixed x. Then:
min_{x∈X} max_{y∈Y} f(x, y) = max_{y∈Y} min_{x∈X} f(x, y)
In a zero-sum game, take f(x, y) ≡ x^T C y (how much row pays column). Then (x*, y*) is an equilibrium, where x* ∈ argmin_{x∈X} max_{y∈Y} f(x, y) and y* ∈ argmax_{y∈Y} min_{x∈X} f(x, y).
Proof that (x*, y*) is an equilibrium:
x*^T C y* ≥ min_x x^T C y* = max_y min_x x^T C y = min_x max_y x^T C y = max_y x*^T C y ≥ x*^T C y*,
so all terms are equal; in particular y* is a best response to x*, and x* (who pays x^T C y) is a best response to y*.

20 Presidential Elections
             Morality   Tax Cuts
  Economy     +3, -3     -1, +1
  Society     -2, +2     +1, -1

Suppose Clinton announces strategy (1/2, 1/2). What would Trump do? A: focus on Tax Cuts with probability 1. Indeed, against the (1/2, 1/2) strategy, "Morality" gives Trump expected payoff -1/2 while "Tax Cuts" gives 0.

21 Presidential Elections
             Morality   Tax Cuts
  Economy     +3, -3     -1, +1
  Society     -2, +2     +1, -1

More generally, suppose Clinton commits to strategy (x1, x2). N.B.: Committing to a strategy in advance may not be optimal for Clinton, since Trump may, in principle, exploit it. How? Trump's expected payoffs are E["Morality"] = -3x1 + 2x2 and E["Tax Cuts"] = x1 - x2. So Trump's payoff after best responding to (x1, x2) would be max(-3x1 + 2x2, x1 - x2), resulting in the following payoff for Clinton: -max(-3x1 + 2x2, x1 - x2) = min(3x1 - 2x2, -x1 + x2). So the best strategy for Clinton to commit to is: (x1, x2) ∈ argmax min(3x1 - 2x2, -x1 + x2).

22 Presidential Elections
             Morality   Tax Cuts
  Economy     +3, -3     -1, +1
  Society     -2, +2     +1, -1

So the best strategy for Clinton to commit to is: (x1, x2) ∈ argmax min(3x1 - 2x2, -x1 + x2). To compute it, Clinton writes the following Linear Program: maximize z subject to z ≤ 3x1 - 2x2, z ≤ -x1 + x2, x1 + x2 = 1, x1, x2 ≥ 0. Solution: z = 1/7, (x1, x2) = (3/7, 4/7). No matter what Trump does, Clinton can guarantee 1/7 to herself by playing (3/7, 4/7).
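As a quick numerical sanity check of the numbers on this slide, here is a minimal sketch (not part of the slides) that feeds Clinton's LP to scipy's linprog; the variable ordering and the use of scipy are my own choices.

```python
# Solve Clinton's LP: maximize z s.t. z <= 3x1 - 2x2, z <= -x1 + x2, x1 + x2 = 1.
from scipy.optimize import linprog

# Variables: [x1, x2, z].  linprog minimizes, so we minimize -z.
c = [0.0, 0.0, -1.0]
A_ub = [[-3.0,  2.0, 1.0],     # z - 3*x1 + 2*x2 <= 0
        [ 1.0, -1.0, 1.0]]     # z + x1 - x2     <= 0
b_ub = [0.0, 0.0]
A_eq = [[1.0, 1.0, 0.0]]       # x1 + x2 = 1 (a probability distribution)
b_eq = [1.0]
bounds = [(0, 1), (0, 1), (None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x1, x2, z = res.x
print(f"Clinton's maximin strategy: ({x1:.4f}, {x2:.4f}), value z = {z:.4f}")
# Expected output: (0.4286, 0.5714), z = 0.1429, i.e. (3/7, 4/7) and 1/7.
```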

23 Presidential Elections
             Morality   Tax Cuts
  Economy     +3, -3     -1, +1
  Society     -2, +2     +1, -1

Conversely, if Trump were forced to commit to a strategy (y1, y2), he would solve: maximize w subject to w ≤ -3y1 + y2, w ≤ 2y1 - y2, y1 + y2 = 1, y1, y2 ≥ 0. Solution: w = -1/7, (y1, y2) = (2/7, 5/7). No matter what Clinton does, Trump can guarantee -1/7 to himself by playing (2/7, 5/7).

24 Presidential Elections “Miracle”
No matter what Trump does, Clinton can guarantee 1/7 to herself by playing (3/7, 4/7). No matter what Clinton does, Trump can guarantee -1/7 to himself by playing (2/7, 5/7). If Clinton plays (3/7, 4/7) and Trump plays (2/7, 5/7), then neither of them can improve their payoff by changing their strategy: their guaranteed payoffs sum to 0 and the game is zero-sum, so each receives exactly their guarantee. I.e. (3/7, 4/7) is a best response to (2/7, 5/7) and vice versa. Hence they jointly comprise a Nash equilibrium! Why is it a "Miracle"? Because (3/7, 4/7) was computed a priori for Clinton and (2/7, 5/7) was computed a priori for Trump. Nevertheless these strategies magically comprise a Nash equilibrium!

25 De-mystifying the “Miracle”
Clinton’s LP Trump’s LP Why is it that the value of the left LP is equal to minus the value of the right LP?

26 De-mystifying the “Miracle”
Clinton’s LP Trump’s LP Why is it that the value of the left LP is equal to minus the value of the right LP?

27 De-mystifying the “Miracle”
Clinton’s LP Trump’s LP Why is it that the value of the left LP is equal to the value of the right LP?

28 De-mystifying the “Miracle”
Clinton's LP   Trump's LP
min_x max_y x^T C y = max_y min_x x^T C y
Why is it that the value of the left LP is equal to the value of the right LP? Linear Programming Duality ⇒ the left LP is DUAL to the right LP, hence they have equal values!

29 Moral of the Story

             Morality   Tax Cuts
  Economy     +3, -3     -1, +1
  Society     -2, +2     +1, -1

Existence of a Nash equilibrium in the Presidential Election game follows from strong Linear Programming duality. This proof technique generalizes to any 2-player zero-sum game. It allows us to efficiently (i.e. in polynomial time) compute Nash equilibria in these games. Moreover, a wide class of distributed, online learning dynamics (namely no-regret learning) converges to equilibrium payoffs (tomorrow).
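To make the "efficiently computable" claim concrete, here is a small sketch (my own helper, not from the slides) that computes a maximin strategy of any two-player zero-sum game with a single LP; running it for both players of the Presidential Election game reproduces the equilibrium above.

```python
# Maximin strategy of the row player of a zero-sum game via one LP.
# R[i][j] is the row player's payoff; the column player receives -R[i][j].
import numpy as np
from scipy.optimize import linprog

def maximin_strategy(R):
    R = np.asarray(R, dtype=float)
    m, n = R.shape
    # Variables: [x_1, ..., x_m, v]; maximize v  <=>  minimize -v.
    c = np.zeros(m + 1); c[-1] = -1.0
    # For every column j:  v - sum_i x_i * R[i][j] <= 0.
    A_ub = np.hstack([-R.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    A_eq = np.ones((1, m + 1)); A_eq[0, -1] = 0.0   # sum_i x_i = 1
    b_eq = [1.0]
    bounds = [(0, 1)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]        # strategy, guaranteed value

# The Presidential Election game from Clinton's point of view:
x, v = maximin_strategy([[3, -1], [-2, 1]])
print(x, v)   # ~ [3/7, 4/7], 1/7
# Running it on -R^T gives Trump's maximin strategy; by LP duality the two
# guaranteed values sum to zero, which is exactly the "miracle" above.
```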

30

31 Menu Equilibria Existence proofs Complexity of Equilibria
Minimax Nash Brouwer Complexity of Equilibria Total Search Problems in NP PPAD The World Beyond Complexity Online Learning

32 Menu Equilibria Existence proofs Complexity of Equilibria
Minimax Nash Brouwer Complexity of Equilibria Total Search Problems in NP PPAD The World Beyond Complexity Online Learning

33 Menu Equilibria Existence proofs Complexity of Equilibria
Minimax Nash Brouwer Complexity of Equilibria Total Search Problems in NP PPAD The World Beyond Complexity Online Learning

34 Brouwer

35 Brouwer’s Fixed Point Theorem
[Brouwer 1910]: Let f : D → D be a continuous function from a convex and compact (i.e. closed and bounded) subset D of the Euclidean space to itself. Then there exists an x s.t. x = f(x). Below we show a few examples, when D is the 2-dimensional disk. N.B. All conditions in the statement of the theorem are necessary.

36 Brouwer’s Fixed Point Theorem

37 Brouwer’s Fixed Point Theorem

38 Brouwer’s Fixed Point Theorem

39 Brouwer ⇒ Nash

40 : [0,1]2[0,1]2, continuous such that fixed points  Nash eq.
Visualizing Nash’s Proof Kick Dive Left Right 1 , -1 -1 , 1 1, -1 : [0,1]2[0,1]2, continuous such that fixed points  Nash eq. Penalty Shot Game

41 Visualizing Nash’s Proof
(Figure: the unit square with axes Pr[Right] for the kicker and Pr[Right] for the goalie, for the Penalty Shot Game.)

42 Visualizing Nash’s Proof
(Figure: the unit square with axes Pr[Right] for the kicker and Pr[Right] for the goalie, for the Penalty Shot Game.)

43 Visualizing Nash’s Proof
(Figure: the unit square with axes Pr[Right] for the kicker and Pr[Right] for the goalie, for the Penalty Shot Game.)

44 : [0,1]2[0,1]2, cont. such that fixed point  Nash eq.
Visualizing Nash’s Proof Pr[Right] 1 Kick Dive Left Right 1 , -1 -1 , 1 1, -1 : [0,1]2[0,1]2, cont. such that fixed point  Nash eq. Pr[Right] Penalty Shot Game 1 fixed point Real proof: on the board

45 Menu Equilibria Existence proofs Complexity of Equilibria
Minimax Nash Brouwer, Brouwer ⇒ Nash Complexity of Equilibria Total Search Problems in NP PPAD The World Beyond Complexity Online Learning

46 Menu Equilibria Existence proofs Complexity of Equilibria
Minimax Nash Brouwer, Brouwer ⇒ Nash Complexity of Equilibria Total Search Problems in NP PPAD The World Beyond Complexity Online Learning

47 Menu Equilibria Existence proofs Complexity of Equilibria
Minimax Nash Sperner ⇒ Brouwer, Brouwer ⇒ Nash Complexity of Equilibria Total Search Problems in NP PPAD The World Beyond Complexity Online Learning

48 Sperner

49 Sperner’s Lemma (2-d)

50 Sperner's Lemma (2-d): legal boundary coloring (each side of the square avoids one color: no yellow on one side, no blue on another, no red on another)
[Sperner 1928]: Color the boundary using three colors in a legal way.

51 Sperner's Lemma (2-d) (boundary coloring: no yellow on one side, no blue on another, no red on another)
[Sperner 1928]: Color the boundary using three colors in a legal way. No matter how the internal nodes are colored, there exists a tri-chromatic triangle. In fact an odd number of those.

52 Sperner’s Lemma (2-d) [Sperner 1928]: Color the boundary using three colors in a legal way. No matter how the internal nodes are colored, there exists a tri-chromatic triangle. In fact an odd number of those.
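As an illustration (my own sketch, not from the slides), the following brute-force check verifies the lemma's parity statement on the classic triangulated-triangle version; the slides use a square grid, but the statement is the same: a legal boundary coloring forces an odd number of trichromatic cells no matter how the interior is colored.

```python
import random

def random_sperner_coloring(n):
    """Color the vertices (i, j), i + j <= n, legally on the boundary."""
    color = {}
    for i in range(n + 1):
        for j in range(n + 1 - i):
            k = n - i - j
            if i == n:   color[(i, j)] = 1                          # corner 1
            elif j == n: color[(i, j)] = 2                          # corner 2
            elif k == n: color[(i, j)] = 3                          # corner 3
            elif k == 0: color[(i, j)] = random.choice([1, 2])      # edge between corners 1, 2
            elif j == 0: color[(i, j)] = random.choice([1, 3])      # edge between corners 1, 3
            elif i == 0: color[(i, j)] = random.choice([2, 3])      # edge between corners 2, 3
            else:        color[(i, j)] = random.choice([1, 2, 3])   # interior: arbitrary
    return color

def count_trichromatic(n, color):
    count = 0
    for i in range(n):
        for j in range(n - i):
            up = {color[(i, j)], color[(i + 1, j)], color[(i, j + 1)]}
            if up == {1, 2, 3}:
                count += 1
            if i + j <= n - 2:
                down = {color[(i + 1, j)], color[(i, j + 1)], color[(i + 1, j + 1)]}
                if down == {1, 2, 3}:
                    count += 1
    return count

for trial in range(1000):
    n = random.randint(2, 12)
    c = count_trichromatic(n, random_sperner_coloring(n))
    assert c >= 1 and c % 2 == 1, (n, c)   # existence, and odd parity
print("Sperner's lemma held in every random trial.")
```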

53 Sperner ⇒ Brouwer

54 Sperner ⇒ Brouwer (High-Level)
Given f : [0,1]² → [0,1]²: 1. For all ε, the existence of an approximate fixed point with |f(x) - x| < ε can be shown via Sperner's lemma. 2. Then use compactness. For 1: Triangulate [0,1]²;

55 Sperner ⇒ Brouwer (High-Level)
Given f : [0,1]² → [0,1]²: 1. For all ε, the existence of an approximate fixed point with |f(x) - x| < ε can be shown via Sperner's lemma. 2. Then use compactness. For 1: Triangulate [0,1]²; then color points according to the direction of f(x) - x;

56 Sperner ⇒ Brouwer (High-Level)
Given f : [0,1]² → [0,1]²: 1. For all ε, the existence of an approximate fixed point with |f(x) - x| < ε can be shown via Sperner's lemma. 2. Then use compactness. For 1: Triangulate [0,1]²; then color points according to the direction of f(x) - x;

57 Sperner ⇒ Brouwer (High-Level)
Given f : [0,1]² → [0,1]²: 1. For all ε, the existence of an approximate fixed point with |f(x) - x| < ε can be shown via Sperner's lemma. 2. Then use compactness. For 1: Triangulate [0,1]²; then color points according to the direction of f(x) - x; then apply Sperner.

58 2D-Brouwer on the Square
Suppose : [0,1]2[0,1]2, continuous must be uniformly continuous (by the Heine-Cantor theorem) 1

59 2D-Brouwer on the Square
Suppose : [0,1]2[0,1]2, continuous must be uniformly continuous (by the Heine-Cantor theorem) 1 choose some and triangulate so that the diameter of cells is

60 2D-Brouwer on the Square
Suppose : [0,1]2[0,1]2, continuous must be uniformly continuous (by the Heine-Cantor theorem) 1 color the nodes of the triangulation according to the direction of choose some and triangulate so that the diameter of cells is

61 2D-Brouwer on the Square
Suppose : [0,1]2[0,1]2, continuous must be uniformly continuous (by the Heine-Cantor theorem) 1 color the nodes of the triangulation according to the direction of choose some and triangulate so that the diameter of cells is (tie-break at the boundary angles, so that the resulting coloring respects the boundary conditions required by Sperner’s lemma) find a trichromatic triangle, guaranteed by Sperner

62 2D-Brouwer on the Square
Suppose : [0,1]2[0,1]2, continuous must be uniformly continuous (by the Heine-Cantor theorem) 1 Claim: If zY is the yellow corner of a trichromatic triangle, then

63 Proof of Claim. Claim: If z_Y is the yellow corner of a trichromatic triangle, then |f(z_Y) - z_Y| is small. Proof: Let z_Y, z_R, z_B be the yellow/red/blue corners of a trichromatic triangle. The three corners are within distance δ of each other, so by uniform continuity the three displacement vectors f(z_Y) - z_Y, f(z_R) - z_R, f(z_B) - z_B are within O(ε) of one another. By the definition of the coloring, however, these three displacements point into three different sectors of directions. Three nearly equal vectors cannot point into three well-separated sectors unless each of them is small. Hence |f(z_Y) - z_Y| is small, and similarly for z_R and z_B.

64 2D-Brouwer on the Square
Suppose : [0,1]2[0,1]2, continuous must be uniformly continuous (by the Heine-Cantor theorem) 1 Claim: If zY is the yellow corner of a trichromatic triangle, then Choosing

65 2D-Brouwer on the Square
Finishing the proof of Brouwer's Theorem (compactness):
- pick a sequence of accuracies ε_k → 0;
- define a corresponding sequence of triangulations whose cell diameters go to 0;
- pick a trichromatic triangle in each triangulation, and call its yellow corner z_k;
- by compactness, the sequence (z_k) has a converging subsequence with limit point z*.
Claim: f(z*) = z*.
Proof: Define the function g(z) = |f(z) - z|. Clearly, g is continuous since f is continuous and so is |⋅|. It follows from continuity that g(z_k) → g(z*) along the subsequence. But g(z_k) → 0 by the previous claim. Hence, g(z*) = 0. Therefore, f(z*) = z*.

66 Summary Sperner ⇒ Brouwer ⇒ Nash Easier Harder

67 SPERNER
INPUT: (i) n, specifying the size of a 2^n × 2^n grid; (ii) the colors of the internal vertices, given by a circuit C whose input is the coordinates of a point (n bits each) and whose output is a color; the boundary is assumed to have the standard (legal) coloring.
OUTPUT: A tri-chromatic triangle.

68 BROUWER
INPUT: a. an algorithm A that evaluates a function f : [0,1]^n → [0,1]^n; b. an approximation requirement ε; c. a Lipschitz constant c that the function is claimed to satisfy. More formally: A comes with a polynomial function p(⋅) purported to upper bound the running time of A, and whenever A does not return after p(|x|) steps it is assumed that f(x) = (0, 0, …, 0).
OUTPUT: a point x such that |f(x) - x| ≤ ε; OR a pair of points x, y violating the Lipschitz constraint, i.e. |f(x) - f(y)| > c·|x - y|; OR a point that is mapped outside of [0,1]^n.
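For intuition about this search problem, here is a naive sketch (mine; it is not the Sperner-based algorithm of the previous slides) that finds an ε-approximate fixed point in 2D by exhaustive grid search, using the Lipschitz constant to choose the grid spacing. Its running time is exponential in the dimension, consistent with the message of this section that no polynomial-time algorithm is known.

```python
import math
import numpy as np

def approx_fixed_point(f, c, eps):
    # If |x - y| <= delta then |f(x) - x| and |f(y) - y| differ by at most
    # (c + 1) * delta, so a grid of spacing ~ eps / (c + 1) suffices.
    delta = eps / (2 * (c + 1))
    steps = int(math.ceil(1.0 / delta)) + 1
    grid = np.linspace(0.0, 1.0, steps)
    best, best_gap = None, float("inf")
    for x1 in grid:                      # exhaustive search over the grid --
        for x2 in grid:                  # exponential in the dimension
            x = np.array([x1, x2])
            gap = np.linalg.norm(f(x) - x)
            if gap < best_gap:
                best, best_gap = x, gap
            if gap <= eps:
                return x, gap
    return best, best_gap                # Brouwer guarantees gap <= eps here,
                                         # provided f really is c-Lipschitz

# Example (mine): a continuous map of the unit square with fixed point (0.7, 0.3).
f = lambda x: np.array([0.5 * x[0] + 0.35, 0.5 * x[1] + 0.15])
print(approx_fixed_point(f, c=0.5, eps=1e-2))
```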

69 NASH
INPUT: (i) A game defined by: the number of players n; an enumeration of the strategy set S_p of every player p = 1, …, n; the utility function of every player. (ii) An approximation requirement ε.
OUTPUT: An ε-Nash equilibrium of the game, i.e. mixed strategies such that the expected payoff of every player is within additive ε of the optimal expected payoff given the others' strategies.
Intense effort for equilibrium algorithms followed Nash's work: e.g. Kuhn '61, Mangasarian '64, Lemke-Howson '64, Rosenmüller '71, Wilson '71, Scarf '67, Eaves '72, Laan-Talman '79, and others… Lemke-Howson: simplex-like, works with an LCP formulation. All these algorithms require worst-case exponential time. No efficient algorithm is known after 60+ years of research.
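Verifying a proposed solution, by contrast, is easy. The helper below (my own sketch, for the two-player normal-form case) checks the OUTPUT condition above: is a given pair of mixed strategies an ε-Nash equilibrium?

```python
import numpy as np

def is_eps_nash(R, C, x, y, eps):
    """R, C: payoff matrices of the row / column player; x, y: mixed strategies."""
    R, C, x, y = map(np.asarray, (R, C, x, y))
    row_payoff = x @ R @ y
    col_payoff = x @ C @ y
    best_row = (R @ y).max()      # best pure deviation of the row player
    best_col = (x @ C).max()      # best pure deviation of the column player
    return row_payoff >= best_row - eps and col_payoff >= best_col - eps

# Penalty Shot Game: both players mixing 50/50 is an exact equilibrium.
R = np.array([[ 1, -1], [-1,  1]])
C = -R
print(is_eps_nash(R, C, [0.5, 0.5], [0.5, 0.5], eps=1e-9))   # True
```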

70 The Pavlovian reaction
Is it NP-complete to find a Nash equilibrium? And why should you care? NP-completeness is the standard complexity-theoretic approach to proving that a problem is computationally intractable [Cook'71, Karp'72], established by showing that the problem is computationally equivalent to the Boolean satisfiability problem: given a Boolean formula with AND, OR and NOT operations, can you set the variables to satisfy it, i.e. get 1 in the output? E.g. (¬x1 ∨ x2) ∧ ¬x2 ∧ ¬x1 can be satisfied by setting x1 = x2 = 0, but (¬x1 ∨ x2) ∧ ¬x2 ∧ x1 cannot be satisfied. If NASH is NP-complete, then we cannot compute Nash equilibria efficiently, so we're unable to predict player behavior in all games. Worse still, universality breaks down: if the best algorithmic machinery is unable to find Nash equilibria, how could players possibly discover them?
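For completeness, a tiny brute-force check (mine) of the two example formulas:

```python
from itertools import product

f1 = lambda x1, x2: (not x1 or x2) and (not x2) and (not x1)   # satisfiable
f2 = lambda x1, x2: (not x1 or x2) and (not x2) and x1         # unsatisfiable
for name, f in [("first", f1), ("second", f2)]:
    sols = [v for v in product([False, True], repeat=2) if f(*v)]
    print(name, "formula satisfying assignments:", sols)
# first formula satisfying assignments: [(False, False)]
# second formula satisfying assignments: []
```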

71 So: “Is it NP-complete to find a Nash equilibrium?”
the Pavlovian reaction (cont.) So: “Is it NP-complete to find a Nash equilibrium?” two answers: 1. probably not, since a solution is guaranteed to exist… 2. it is NP-complete to find a “tiny” bit more info than “just” a Nash equilibrium; e.g., the following are NP-complete: - find two Nash equilibria, if more than one exist - find a Nash equilibrium whose third bit is one, if any [Gilboa, Zemel ’89; Conitzer, Sandholm ’03] But let us look into NP-completeness more formally

72 Menu Equilibria Existence proofs Complexity of Equilibria
Minimax Nash Brouwer, Brouwer ⇒ Nash Sperner, Sperner ⇒ Brouwer Complexity of Equilibria Total Search Problems in NP PPAD The World Beyond Complexity Online Learning

73 Function NP (FNP)
A search problem L is defined by a relation R_L ⊆ {0,1}* × {0,1}* such that (x, y) ∈ R_L iff y is a solution to x. A search problem is called total iff ∀ x ∃ y such that (x, y) ∈ R_L. A search problem L ∈ FNP iff there exists a poly-time algorithm A_L(⋅, ⋅) and a polynomial function p_L(⋅) such that
(i) ∀ x, y: A_L(x, y) = 1 ⇔ (x, y) ∈ R_L   (poly-time verifiable certificates)
(ii) ∀ x: [∃ y s.t. (x, y) ∈ R_L] ⇒ [∃ z with |z| ≤ p_L(|x|) s.t. (x, z) ∈ R_L]   (existence of poly-length certificates)
TFNP = {L ∈ FNP | L is total}. SPERNER, NASH, BROUWER ∈ FNP (and, being total, in TFNP).

74 FNP-completeness
A search problem L ∈ FNP, associated with A_L and p_L, is poly-time (Karp) reducible to another problem L' ∈ FNP, associated with A_L' and p_L', iff there exist efficiently computable functions f, g such that
(i) f : {0,1}* → {0,1}* maps inputs x to L into inputs f(x) to L';
(ii) ∀ x, y: A_L'(f(x), y) = 1 ⇒ A_L(x, g(y)) = 1, and ∀ x: [A_L'(f(x), y) = 0 ∀ y] ⇒ [A_L(x, y) = 0 ∀ y].
A search problem L is FNP-complete (e.g. SAT) iff L ∈ FNP and L' is poly-time reducible to L for all L' ∈ FNP.
N.B. One can't Karp-reduce SAT to SPERNER, NASH or BROUWER. How about a Turing reduction from SAT to SPERNER? It would imply NP = co-NP.

75 A Complexity Theory of Total Search Problems ?
??

76 A Complexity Theory of Total Search Problems ?
100-foot overview of our methodology: 1. identify the combinatorial argument of existence responsible for making these problems total; 2. define a complexity class inspired by the argument of existence; 3. prove completeness results. Litmus test: was the complexity of the underlying problem captured tightly?

77 Menu Equilibria Existence proofs Complexity of Equilibria
Minimax Nash Brouwer, Brouwer ⇒ Nash Sperner, Sperner ⇒ Brouwer Complexity of Equilibria Total Search Problems in NP Proof of Sperner’s Lemma PPAD The World Beyond Complexity Online Learning

78 Proof of Sperner’s Lemma
(boundary coloring: no yellow on one side, no blue on another, no red on another) [Sperner 1928]: Color the boundary using three colors in a legal way. No matter how the internal nodes are colored, there exists a tri-chromatic triangle. In fact an odd number of those.

79 Proof of Sperner’s Lemma
For convenience we introduce an outer boundary, that does not create new tri-chromatic triangles. We also introduce an artificial tri-chromatic triangle. Next we define a directed walk starting from the artificial tri-chromatic triangle. [Sperner 1928]: Color the boundary using three colors in a legal way. No matter how the internal nodes are colored, there exists a tri-chromatic triangle. In fact an odd number of those.

80 Proof of Sperner’s Lemma
Space of Triangles. Transition Rule: If ∃ a red-yellow door, cross it with red on your left hand. [Sperner 1928]: Color the boundary using three colors in a legal way. No matter how the internal nodes are colored, there exists a tri-chromatic triangle. In fact an odd number of those.

81 Proof of Sperner’s Lemma
For convenience we introduce an outer boundary that does not create new tri-chromatic triangles. We also introduce an artificial tri-chromatic triangle, and define a directed walk starting from it; starting from other triangles we do the same, going forward or backward. Claim: The walk cannot exit the square, nor can it loop into itself. Hence, it must stop somewhere inside. This can only happen at a tri-chromatic triangle… [Sperner 1928]: Color the boundary using three colors in a legal way. No matter how the internal nodes are colored, there exists a tri-chromatic triangle. In fact an odd number of those.

82 Structure of Proof: A directed parity argument
Vertices of the graph ↔ triangles; all vertices have in-degree and out-degree ≤ 1.
degree-1 vertices: trichromatic triangles (including the artificial one)
degree-2 vertices: non-trichromatic triangles containing both red and yellow (no blue)
degree-0 vertices: all other triangles
Proof: ∃ at least one trichromatic triangle (the artificial one) ⇒ ∃ another trichromatic triangle.

83 The Non-Constructive Step
An easy parity lemma: A directed graph with an unbalanced node (a node with in-degree ≠ out-degree) must have another. But, wait, why is this non-constructive? Given a directed graph and an unbalanced node, isn't it trivial to find another unbalanced node? The graph can be exponentially large, but has a succinct description…

84 The PPAD Class [Papadimitriou ’94]
Suppose that an exponentially large graph with vertex set {0,1}^n is defined by two circuits, P (node id → possible previous node id) and N (node id → possible next node id); there is a directed edge from u to v iff N(u) = v and P(v) = u.
END OF THE LINE: Given P and N: If 0^n is an unbalanced node, find another unbalanced node. Otherwise output 0^n.
PPAD = { Search problems in FNP reducible to END OF THE LINE }
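The obvious algorithm for END OF THE LINE simply follows the line from 0^n, but it may take exponentially many steps. A minimal sketch (mine), with the circuits P and N modeled as Python functions on n-bit strings and assuming 0^n is a source, as in the picture on the next slide:

```python
def end_of_the_line(P, N, n):
    """Follow successor pointers from 0^n until the line ends."""
    u = "0" * n
    while True:
        v = N(u)
        if v == u or P(v) != u:      # no outgoing edge: u is the other end of the line
            return u
        u = v                        # up to 2^n - 1 iterations in the worst case

# Toy instance (mine): the single line 0 -> 1 -> ... -> 2^n - 1, written in binary.
n = 10
enc = lambda k: format(k, f"0{n}b")
N = lambda u: enc(int(u, 2) + 1) if int(u, 2) < 2 ** n - 1 else u
P = lambda u: enc(int(u, 2) - 1) if int(u, 2) > 0 else u
print(end_of_the_line(P, N, n))      # prints '1111111111', the other unbalanced node
```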

85 END OF THE LINE (Figure: the graph on vertex set {0,1}^n; following the line that starts at 0^n, its other endpoint is a solution.)

86 Problems in PPAD
SPERNER ∈ PPAD [previous slides]
BROUWER ∈ PPAD [by reduction to SPERNER, Scarf '67]
NASH ∈ PPAD [by Nash's proof, reducing to BROUWER]
Litmus Test: Completeness.
SPERNER is PPAD-complete [Papadimitriou '94; Chen-Deng '05 for 2d]
BROUWER is PPAD-complete [Papadimitriou '94]
i.e. these problems are solvable via the directed parity argument and are at least as hard as any other problem in PPAD.

87 Problems in PPAD (Figure: diagram of the classes P, PPAD, NP and the NP-complete problems; NASH, SPERNER and BROUWER are placed inside PPAD.)
SPERNER, BROUWER are both PPAD-complete (i.e. as hard as any problem in PPAD).

88 The Complexity of Nash Equilibrium
[Daskalakis, Goldberg, Papadimitriou '06]: Finding a Nash equilibrium is PPAD-complete. [Chen, Deng'06]: …even in two-player games. I.e. finding a Nash equilibrium is computationally intractable, exactly as intractable as the class PPAD, SPERNER, BROUWER. [Codenotti et al'06, …, Chen et al'13]: Arrow-Debreu equilibria (in markets w/ complements) are also PPAD-hard. [Mehta'14]: Almost zero-sum games are PPAD-complete. [Chen et al'15]: Anonymous games are PPAD-complete.

89 Reductions following from existence proofs:
[Daskalakis-Goldberg-Papadimitriou’06]:

90 Menu Equilibria Existence proofs Complexity of Equilibria Minimax Nash
Brouwer, Brouwer ⇒ Nash Sperner, Sperner ⇒ Brouwer Complexity of Equilibria Total Search Problems in NP Proof of Sperner’s Lemma PPAD The World Beyond Complexity Online Learning

91 PPAD-Completeness of NASH
[Daskalakis, Goldberg, Papadimitriou'06] Reduction chain: Generic PPAD (END OF THE LINE on {0,1}^n, starting at 0^n) → 3D-SPERNER → 3D-BROUWER → ArithmCircuitSAT → NASH. (Embed the PPAD graph in [0,1]^3; arithmetic gates: :=, +, -, ×a, >.)

92 PPAD-Completeness of NASH
[Daskalakis, Goldberg, Papadimitriou'06] Reduction chain: Generic PPAD → 2D-SPERNER → 2D-BROUWER → ArithmCircuitSAT → NASH. (Arithmetic gates: :=, +, -, ×a, >.)

93 PPAD-Completeness of NASH
[Daskalakis, Goldberg, Papadimitriou'06] Reduction chain: Generic PPAD → 2D-SPERNER → 2D-BROUWER → ArithmCircuitSAT → NASH. (Arithmetic gates: :=, +, -, ×a, >.)

94 PPAD-Completeness of NASH
[Daskalakis, Goldberg, Papadimitriou'06] Reduction chain: Generic PPAD → 2D-SPERNER → 2D-BROUWER → ArithmCircuitSAT → NASH. (Arithmetic gates: :=, +, -, ×a, >.)

95 ArithmCircuitSAT [Daskalakis, Goldberg, Papadimitriou'06]
INPUT: A circuit comprising: variable nodes v1, …, vn; gate nodes g1, …, gm of types := (assignment), + (addition), - (subtraction), set equal to a constant, ×a (multiplication by a constant), > (comparison); directed edges connecting variables to gates and gates to variables (loops are allowed); variable nodes have in-degree 1; gates have 0, 1, or 2 inputs depending on their type; gates & nodes have arbitrary fan-out.
OUTPUT: Values v1, …, vn ∈ [0,1] satisfying the gate constraints, where arithmetic is truncated to [0,1]: assignment: z = x; addition: z = min(x + y, 1); subtraction: z = max(x - y, 0); set equal to a constant a: z = a; multiply by a constant a: z = min(a·x, 1); comparison: z = 1 if x > y and z = 0 if x < y (see next slide).

96 Comparator Gate Constraints
z = 1 if x > y; z = 0 if x < y; if x = y, any value is allowed.

97 ArithmCircuitSAT (example)
(Figure: a small circuit with comparison and assignment gates on variables a, b, c.) Satisfying assignment? a = b = c = ½. Proof on board.

98 ArithmCircuitSAT (cont.)
[Daskalakis, Goldberg, Papadimitriou'06] INPUT: a circuit as on the previous slides. OUTPUT: An assignment of values v1, …, vn ∈ [0,1] satisfying the gate constraints. [DGP'06]: A satisfying assignment always exists! [DGP'06]: …but it is PPAD-complete to find one.

99 PPAD-Completeness of NASH
[Daskalakis, Goldberg, Papadimitriou'06] Reduction chain: Generic PPAD → 2D-SPERNER → 2D-BROUWER → ArithmCircuitSAT → NASH. (Arithmetic gates: :=, +, -, ×a, >.)

100 PPAD-Completeness of NASH
[Daskalakis, Goldberg, Papadimitriou'06] Remaining steps of the reduction: ArithmCircuitSAT → PolymatrixNash → NASH.

101 Graphical Games [Kearns-Littman-Singh'01]
Defined to capture sparse player interactions, such as those arising under geographical, communication or other constraints. Players are nodes in a directed graph. A player's payoff depends only on her own strategy and the strategies of her in-neighbors in the graph (the nodes pointing to her).

102 Polymatrix Games Polymatrix Games [Janovskaya’68]: Graphical games with edge-wise separable utility functions.

103 PPAD-Completeness of NASH
[Daskalakis, Goldberg, Papadimitriou'06] Remaining steps of the reduction: ArithmCircuitSAT → PolymatrixNash → NASH.

104 Game Gadgets: Polymatrix games performing real arithmetic
Game Gadgets: Polymatrix games performing real arithmetic at their Nash equilibrium.

105 Addition Gadget
Suppose two strategies per player: {0,1}; then a mixed strategy ⇔ a number in [0,1] (the probability of playing 1). E.g. the addition game on players x, y, z, w:
w is paid an expected $(Pr[x : 1] + Pr[y : 1]) for playing 0, and an expected $Pr[z : 1] for playing 1;
z is paid to play the "opposite" of w.
Claim: In any Nash equilibrium of a game containing the above gadget, Pr[z : 1] = min(Pr[x : 1] + Pr[y : 1], 1).
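A numerical sanity check of this claim (my own sketch; it uses the dollar payoffs of w and z as described above, together with the formula min(Pr[x:1] + Pr[y:1], 1)): for fixed Pr[x:1] and Pr[y:1], brute-force the two-player subgame between w and z over a grid and keep the approximate equilibria; the surviving values of Pr[z:1] sit at the claimed value.

```python
import numpy as np

def gadget_equilibria(px, py, tol=1e-3, grid=201):
    eqs = []
    for pw in np.linspace(0, 1, grid):          # Pr[w : 1]
        for pz in np.linspace(0, 1, grid):      # Pr[z : 1]
            # w's expected payoffs: (px + py) for action 0, pz for action 1.
            w_mix = (1 - pw) * (px + py) + pw * pz
            w_ok = w_mix >= max(px + py, pz) - tol
            # z is paid 1 for playing the opposite of w.
            z_mix = pz * (1 - pw) + (1 - pz) * pw
            z_ok = z_mix >= max(pw, 1 - pw) - tol
            if w_ok and z_ok:
                eqs.append(pz)
    return eqs

for px, py in [(0.2, 0.3), (0.6, 0.7), (0.0, 1.0)]:
    zs = gadget_equilibria(px, py)
    print(px, py, "Pr[z:1] in", round(min(zs), 3), "...", round(max(zs), 3),
          "   claim:", min(px + py, 1.0))
```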

106 Subtraction Gadget
Suppose two strategies per player: {0,1}; then a mixed strategy ⇔ a number in [0,1] (the probability of playing 1). E.g. the subtraction game on players x, y, z, w:
w is paid an expected $(Pr[x : 1] - Pr[y : 1]) for playing 0, and an expected $Pr[z : 1] for playing 1;
z is paid to play the "opposite" of w.
Claim: In any Nash equilibrium of a game containing the above gadget, Pr[z : 1] = max(Pr[x : 1] - Pr[y : 1], 0).

107 Notational convention: Use the name of the player and the probability of that player playing 1 interchangeably.

108 List of Game Gadgets
Writing z for the "output player" of a gadget and x, y for its "input players" (and using the notational convention above): copy: z = x; addition: z = min(x + y, 1); subtraction: z = max(x - y, 0); set equal to a constant a: z = a; multiply by a constant a: z = min(a·x, 1); comparison: z = 1 if x > y, z = 0 if x < y (any value if x = y).
If any of these gadgets is contained in a bigger polymatrix game, these conditions hold at any Nash equilibrium of that bigger game. The bigger game can only have edges into the "input players" and out of the "output players." Gadgets may have additional players; their graph can be made bipartite.

109 PPAD-Completeness of PolymatrixNash
[Daskalakis, Goldberg, Papadimitriou'06] Reduction: ArithmCircuitSAT → PolymatrixNash. Given an arbitrary instance of ArithmCircuitSAT, we can create a polymatrix game by appropriately composing the game gadgets corresponding to each of the gates. At any Nash equilibrium of the resulting polymatrix game, the gate conditions are satisfied.

110 PPAD-Completeness of NASH
(DGP = Daskalakis-Goldberg-Papadimitriou) Reduction chain: ArithmCircuitSAT → PolymatrixNash [DGP'06] → 4-player Nash [DGP'06] → 3-player Nash → 2-player Nash [Chen-Deng'06].

111 PPAD-Completeness of NASH
[Daskalakis, Goldberg, Papadimitriou'06] Reduction chain: Generic PPAD (END OF THE LINE on {0,1}^n, starting at 0^n) → 3D-SPERNER → 3D-BROUWER → ArithmCircuitSAT → NASH. (Embed the PPAD graph in [0,1]^3; arithmetic gates: :=, +, -, ×a, >.)

112 Classical Inclusions:
[Daskalakis-Goldberg-Papadimitriou’06]:

113 The Complexity of Nash Equilibrium
[Daskalakis, Goldberg, Papadimitriou '06]: Finding a Nash equilibrium is PPAD-complete. [Chen, Deng'06]: …even in two-player games. I.e. finding a Nash equilibrium is computationally intractable, exactly as intractable as the class PPAD, SPERNER, BROUWER. [Codenotti et al'06, …, Chen et al'13]: Arrow-Debreu equilibria (in markets w/ complements) are also PPAD-hard. [Mehta'14]: Almost zero-sum games are PPAD-complete. [Chen et al'15]: Anonymous games are PPAD-complete.

114 Markets Traffic Evolution Social networks
In other words, it is not plausible that large economic systems always converge to a Nash/market equilibrium.

115 Robert Aumann, 1987: ''Two-player zero-sum games are one of the few areas in game theory, and indeed in the social sciences, where a fairly sharp, unique prediction is made.'' Indeed, equilibria of zero-sum games are efficiently computable, comprise a convex set, and can be reached efficiently via dynamics, while outside of zero-sum games equilibria are PPAD-complete, disconnected, and not reachable via dynamics.

116 ? absolutely NOT!

117 Menu Equilibria Existence proofs Complexity of Equilibria
Minimax Nash Brouwer, Brouwer ⇒ Nash Sperner, Sperner ⇒ Brouwer Complexity of Equilibria Total Search Problems in NP Proof of Sperner’s Lemma PPAD The World Beyond Complexity Online Learning

118 Menu Equilibria Existence proofs Complexity of Equilibria
Minimax Nash Brouwer, Brouwer ⇒ Nash Sperner, Sperner ⇒ Brouwer Complexity of Equilibria Total Search Problems in NP Proof of Sperner’s Lemma PPAD The World Beyond Complexity 1: Beyond Nash Intractability Online Learning

119 Escape 1: Approximation
Maybe exact Nash equilibria are hard to compute, but approximate equilibria, where no player has more than some small ε incentive to deviate, are tractable. Absolute vs relative approximation?
Relative (standard in CS): [Daskalakis'11, Rubinstein'15]: For some ε > 0, in 2-player games, computing a pair of mixed strategies such that no player can improve his current payoff by more than an ε-fraction is PPAD-complete.
Absolute error (standard for fixed points, [Scarf'67])? We know that this problem is unlikely to be PPAD-hard: [Lipton-Markakis-Mehta'03, Barman'15, Harrow et al'16]: an ε-Nash equilibrium of a 2-player, n-strategy game can be found in n^(log n / ε²) time. A polynomial-time algorithm is missing despite a long line of research [Kontogiannis et al '06], [Daskalakis et al '06, '07], [Bosse et al '07], [Tsaknakis, Spirakis '08], … [Rubinstein'16] shows that the LMM'03 running time cannot be substantially improved, assuming an exponential-time hypothesis for PPAD.

120 Escape 2: Games w/ Special Structure
Arbitrary normal-form games are hard, but 2-player zero-sum games aren't. Identify even broader families of games that are tractable. Upshot: Whenever the game of interest has special structure, we can be more confident in equilibrium predictions & can actually find them. When designing a new game, design it so that it has this good special structure. Two examples: Multi-player Zero-Sum Games; Anonymous Games.

121 Escape 2: Games w/ Special Structure
Arbitrary normal-form games are hard, but 2-player zero-sum games aren't. Identify even broader families of games that are tractable. Upshot: Whenever the game of interest has special structure, we can be more confident in equilibrium predictions & can actually find them. When designing a new game, design it so that it has this good special structure. Two examples: Multi-player Zero-Sum Games; Anonymous Games.

122 Multiplayer Zero-Sum…what?
Take an arbitrary two-player game, between Alice and Bob. Add a third player, Sam, who does not affect Alice's or Bob's payoffs, but receives payoff -(P_Alice(σ) + P_Bob(σ)) for every outcome σ. The game is zero-sum, but solving it is PPAD-complete: intractability holds even for 3 players, if three-way interactions are allowed. What if only pairwise interactions are allowed?

123 Zero-Sum Polymatrix Games
players are nodes of a graph G; edges are 2-player games; each player's payoff is the sum of payoffs from all adjacent edges, e.g. Σ_{i=1..3} x_u^T A^(u,v_i) x_{v_i} for a node u with neighbors v_1, v_2, v_3. N.B. If not zero-sum, then finding a Nash equilibrium is PPAD-complete [Daskalakis, Goldberg, Papadimitriou '06], even for constant approximation [Rubinstein'15]. But what if the game is zero-sum, i.e. the sum of all players' payoffs is 0?

124 Zero-Sum Polymatrix Games (cont.)
[Daskalakis, Papadimitriou '09; Cai, Daskalakis'10; Cai, Candogan, Daskalakis, Papadimitriou'15]: In zero-sum polymatrix games: a Nash equilibrium can be found efficiently with linear programming; the Nash equilibria comprise a convex set; if every node uses a no-regret learning algorithm (to be defined soon), the players' empirical (time-averaged) strategies approach a Nash equilibrium. I.e. several good properties of two-player zero-sum games are inherited.

125 Escape 2: Games w/ Special Structure
Arbitrary normal-form games are hard, but 2-player zero-sum games aren't. Identify even broader families of games that are tractable. Upshot: Whenever the game of interest has special structure, we can be more confident in equilibrium predictions & can actually find them. When designing a new game, design it so that it has this good special structure. Two examples: Multi-player Zero-Sum Games; Anonymous Games.

126 Anonymous Games. Anonymous Game: Every player might have a different payoff function, which depends only symmetrically on the other players' actions. E.g. auctions, traffic, social phenomena; see e.g. "The women of Cairo: Equilibria in Large Anonymous Games" by Blonski, GEB'99. [Daskalakis-Papadimitriou'07-'09, Daskalakis-Kamath-Tzamos'15]: Arbitrarily good approximations are tractable if #strategies does not scale to infinity. (Recall exact equilibria are intractable.) Interesting relation to limit theorems in probability. E.g.: "∀ ε, n, the sum X_1 + … + X_n of arbitrary independent Bernoulli 0/1 random variables is ε-close in ℓ1 distance to either the sum of i.i.d. Bernoullis, or to c + Σ_{i=1..1/ε³} Y_i for some constant c and independent Bernoullis Y_1, …, Y_{1/ε³}." Implies: "In every n-player 2-strategy anonymous game, there exists an ε-Nash equilibrium in which either at most 1/ε³ players randomize, or all players who randomize use the same mixed strategy."

127 Escape 3: Alternative Solution Concepts
If Nash equilibrium is intractable for a family of games, chances are it is not always discovered by players playing those games. So focus on alternatives that are tractable and thereby more plausible. Two canonical and plausible alternatives: Correlated equilibrium: generalizes Nash equilibrium, and is tractable No-regret learning behavior (more soon): Natural way to axiomatize player dynamical behavior Strong connection to learning, online optimization Generalizes correlated equilibrium (limits: coarse corr. Eq)

128 Correlated vs Nash. We won't give the formal definition of correlated equilibrium; it is similar to Nash, except the players' randomization can be correlated: "no player has an incentive to deviate given her own sampled pure action from the joint distribution." The equilibrium conditions are expressible as linear constraints on the joint action distribution, hence solvable via a linear program. In normal-form games, the linear program has polynomial size in the game description: the LP maintains a variable for every pure strategy profile, the same number of variables as the total number of payoff entries required to specify the game. So correlated equilibrium is in P, while Nash is PPAD-complete.
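Here is a minimal sketch (mine) of that linear program for a two-player normal-form game: one variable per pure strategy profile and one linear incentive constraint per player and ordered pair of actions, illustrated on the game of Chicken (the payoff numbers are my own example, not from the slides).

```python
import numpy as np
from scipy.optimize import linprog

def correlated_equilibrium(payoffs):
    """payoffs[p][s1][s2] = payoff of player p at pure profile (s1, s2)."""
    U = np.asarray(payoffs, dtype=float)
    _, m, n = U.shape
    nvar = m * n                                   # p(s1, s2), flattened row-major
    A_ub, b_ub = [], []
    for i in range(m):                             # row player: recommended i -> deviation i2
        for i2 in range(m):
            if i2 == i: continue
            row = np.zeros(nvar)
            for j in range(n):
                row[i * n + j] = U[0, i2, j] - U[0, i, j]
            A_ub.append(row); b_ub.append(0.0)
    for j in range(n):                             # column player: recommended j -> deviation j2
        for j2 in range(n):
            if j2 == j: continue
            row = np.zeros(nvar)
            for i in range(m):
                row[i * n + j] = U[1, i, j2] - U[1, i, j]
            A_ub.append(row); b_ub.append(0.0)
    A_eq = [np.ones(nvar)]; b_eq = [1.0]           # probabilities sum to 1
    c = -(U[0] + U[1]).flatten()                   # e.g. maximize total welfare
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, 1)] * nvar)
    return res.x.reshape(m, n)

# Chicken (actions Dare/Chicken): the welfare-maximizing correlated equilibrium
# mixes (D,C), (C,D), (C,C), a correlated distribution no Nash equilibrium matches.
chicken = [[[0, 7], [2, 6]],        # row player's payoffs
           [[0, 2], [7, 6]]]        # column player's payoffs
print(correlated_equilibrium(chicken).round(3))
```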

129 Polymatrix Games. Players are nodes of a graph G; edges are 2-player games; each player's payoff is the sum of payoffs from all adjacent edges, e.g. Σ_{i=1..3} x_u^T A^(u,v_i) x_{v_i} for a node u with neighbors v_1, v_2, v_3. Description size: 2·m·s² payoff entries (two payoff tables per edge) ≤ n²·s². A joint distribution over the players' actions has s^n probabilities (one per pure strategy profile), so the size of a correlated equilibrium is exponentially larger than the size of the game ⇒ computing it seems hopeless… [Papadimitriou, Roughgarden'05; Jiang, Leyton-Brown'10]: Poly-time algorithm. Crucial idea: #correlated equilibrium constraints ≤ n·s (one per player-action pair); use the dual LP and properties of the Ellipsoid algorithm. Extends to any game where expected payoffs under independent mixed strategies can be computed in polynomial time.

130 Menu Equilibria Existence proofs Complexity of Equilibria
Minimax Nash Brouwer, Brouwer ⇒ Nash Sperner, Sperner ⇒ Brouwer Complexity of Equilibria Total Search Problems in NP Proof of Sperner’s Lemma PPAD The World Beyond Complexity Online Learning

131 Menu Equilibria Existence proofs Complexity of Equilibria
Minimax Nash Brouwer, Brouwer ⇒ Nash Sperner, Sperner ⇒ Brouwer Complexity of Equilibria Total Search Problems in NP Proof of Sperner’s Lemma PPAD The World Beyond Complexity 2: The Complexity world beyond PPAD Online Learning

132 Other arguments of existence, and resulting complexity classes
“If a graph has a node of odd degree, then it must have another.” PPA “Every directed acyclic graph must have a sink.” PLS “If a function maps n elements to n-1 elements, then there is a collision.” PPP Formally?

133 The Class PPA [Papadimitriou ’94]
"If a graph has a node of odd degree, then it must have another." Suppose that an exponentially large graph with vertex set {0,1}^n is defined by one circuit C: node id → { node id1, node id2 } (the possible neighbors). OddDegreeNode: Given C: If 0^n has odd degree, find another node with odd degree. Otherwise say "yes." PPA = { Search problems in FNP reducible to OddDegreeNode }

134 OddDegreeNode (Figure: the graph on vertex set {0,1}^n; 0^n has odd degree, and any other odd-degree node is a solution.)

135 Smith ∈ PPA. Smith: Given a Hamiltonian cycle in a 3-regular graph and an edge that it uses, find another Hamiltonian cycle using that edge. [Smith's theorem]: There must be another one.

136 The Class PLS [JPY '89]. "Every DAG has a sink." Suppose that a DAG with vertex set {0,1}^n is defined by two circuits: C (node id → {node id1, …, node idk}, the out-neighbors) and F (node id → value). FindSink: Given C, F: Find x s.t. F(x) ≥ F(y) for all y ∈ C(x). PLS = { Search problems in FNP reducible to FindSink }

137 FindSink (Figure: the DAG on vertex set {0,1}^n; any sink, i.e. any local optimum of F, is a solution.)

138 LocalMaxCut is PLS-complete
LocalMaxCut: Given a weighted graph G = (V, E, w), find a partition V = V1 ∪ V2 that is locally optimal (i.e. no single vertex can be moved to the other side to increase the cut weight). [Schäffer-Yannakakis'91]: LocalMaxCut is PLS-complete. [Fabrikant-Papadimitriou-Talwar'04]: Pure Nash equilibria in potential games are PLS-complete.
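For concreteness, a sketch (mine) of the local-search procedure whose stopping points are exactly the LocalMaxCut solutions: keep flipping single vertices while doing so increases the cut weight. Each flip strictly increases the cut, so the search walks up the corresponding DAG; on worst-case instances it can take exponentially many steps.

```python
import random

def local_max_cut(n, w):
    """w[(u, v)] = weight of edge {u, v} (u < v); returns a locally optimal side map."""
    side = {v: random.choice([0, 1]) for v in range(n)}
    def gain(v):                       # change in cut weight if v switches sides
        g = 0.0
        for (a, b), weight in w.items():
            if v in (a, b):
                u = b if a == v else a
                g += weight if side[u] == side[v] else -weight
        return g
    improved = True
    while improved:
        improved = False
        for v in range(n):
            if gain(v) > 0:
                side[v] = 1 - side[v]
                improved = True
    return side

edges = {(0, 1): 3.0, (1, 2): 2.0, (0, 2): 1.0, (2, 3): 4.0}
print(local_max_cut(4, edges))
```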

139 The Class PPP [Papadimitriou ’94]
"If a function maps n elements to n-1 elements, then there is a collision." Suppose that a function on the exponentially large set {0,1}^n is defined by one circuit C: node id → node id. Collision: Given C: Find x s.t. C(x) = 0^n; or find x ≠ y s.t. C(x) = C(y). PPP = { Search problems in FNP reducible to Collision }

140

141 Menu Equilibria Existence proofs Complexity of Equilibria
Minimax Nash Brouwer, Brouwer ⇒ Nash Sperner, Sperner ⇒ Brouwer Complexity of Equilibria Total Search Problems in NP Proof of Sperner’s Lemma PPAD The World Beyond Complexity Online Learning

142 Menu Equilibria Existence proofs Complexity of Equilibria
Minimax Nash Brouwer, Brouwer ⇒ Nash Sperner, Sperner ⇒ Brouwer Complexity of Equilibria Total Search Problems in NP Proof of Sperner’s Lemma PPAD The World Beyond Complexity Online Learning

143 A Temporary Change in Perspective
(Figure: a large payoff table, actions A-E against columns U-Z, only partially shown.) Experiment with actions repeatedly to learn how to play optimally.

144 Online learning: simplified example
You just moved to Boston (in the '90s, before Google Maps). You rented a place on Mass. Ave. and your work is downtown. Every morning you debate whether to take the Longfellow or the Harvard Bridge. You have no idea which one is best and want to learn by doing. Every morning for the next 365 days you decide which route to take based on past experience. Your goal: at the end of the year, do not regret not having taken either always the Harvard Bridge or always the Longfellow Bridge. Is there an algorithm that guarantees such a no-regret condition (or at least a small-regret condition)?

145 Simplified example (cont’d)
If there is a traffic jam on the bridge you take, you incur a loss of 1; otherwise a loss of 0. Given how chaotic the Boston area is with construction, we make no assumptions about how traffic jams are generated. Worst-case analysis: assume that an adversary picks the traffic jams given knowledge of your algorithm and the history of traffic jams and decisions you made in the past. (Table: an example sequence of daily traffic jams on the two bridges, the total loss of always taking a single bridge, and the loss of your algorithm.) Regret = (loss of your algorithm) - min over the two bridges of (total loss of always taking that bridge). Goal if you play for T days: Regret/T → 0 as T → ∞. Clearly this can't be achieved by a deterministic algorithm; we need a randomized algorithm.

146 More generally For a sequence of 𝑇 days, on each day 𝑡: Goal:
You choose a distribution x_t over K possible actions. An adversary chooses a loss ℓ_t(i) ∈ [0,1] for each action i ∈ [K]; you don't know the loss vector before picking x_t, and the adversary picks ℓ_t = (ℓ_t(1), …, ℓ_t(K)) knowing your algorithm and all ℓ_τ, x_τ, i_τ for τ < t. Then you play an action i_t ∼ x_t, incurring an expected loss of x_t ⋅ ℓ_t (and an actual loss of ℓ_t(i_t)). At the end of day t you learn ℓ_t.
Goal: Regret = (expected loss of the algorithm) - (loss of the best fixed action in hindsight) = Σ_t x_t ⋅ ℓ_t - min_{i∈[K]} Σ_t ℓ_t(i) = o(T).
[Blackwell'56, Hannan'57, Littlestone-Warmuth'94, Freund-Schapire'97, ++]: There exist simple no-regret learning algorithms with regret O(√(log K ⋅ T)).
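One such algorithm is multiplicative weights (Hedge). The sketch below (mine; the step size and the randomly generated losses are illustrative choices, not part of the slides) runs it on the two-bridge example with K = 2 actions and 0/1 losses.

```python
import numpy as np

class Hedge:
    def __init__(self, K, T):
        self.w = np.ones(K)
        self.eta = np.sqrt(np.log(K) / T)     # step size tuned for horizon T

    def play(self):                            # the distribution x_t
        return self.w / self.w.sum()

    def update(self, loss):                    # loss = vector (l_t(1), ..., l_t(K))
        self.w *= np.exp(-self.eta * np.asarray(loss))

T, K = 365, 2
rng = np.random.default_rng(0)
alg, total, cum = Hedge(K, T), 0.0, np.zeros(K)
for t in range(T):
    x = alg.play()
    loss = rng.integers(0, 2, size=K)          # stand-in for the daily traffic jams
    total += x @ loss
    cum += loss
    alg.update(loss)
print("regret:", total - cum.min(), " bound ~", np.sqrt(T * np.log(K)))
```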

147 No-Regret ⇒ Minimax Theorem
Let (R, C) be a two-player zero-sum game, i.e. R + C = 0. Suppose both the row and the column player use a no-regret learning algorithm. Let x_1, …, x_T be the sequence of strategies played by the row player and y_1, …, y_T the sequence played by the column player, and assume both observe the mixed strategy of the other player at the end of each day t (this also works if players only observe each other's sampled actions i_t, j_t). Write x̄ = (1/T) Σ_t x_t and ȳ = (1/T) Σ_t y_t. Then by no-regret:
(1/T) Σ_t x_t^T R y_t ≥ max_i e_i^T R ȳ - o(1)    (*)
(1/T) Σ_t x_t^T C y_t ≥ max_j x̄^T C e_j - o(1)    (**)
Since C = -R, (**) gives: (1/T) Σ_t x_t^T R y_t ≤ min_j x̄^T R e_j + o(1) ≤ x̄^T R ȳ + o(1)    (***)
(*) + (***) ⇒ x̄^T R ȳ ≥ max_i e_i^T R ȳ - o(1); similarly one shows x̄^T C ȳ ≥ max_j x̄^T C e_j - o(1).
Hence (x̄, ȳ) is an (o(1)-approximate) Nash equilibrium. One can also use (*), (**) to derive the minimax theorem.
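A small simulation (my own sketch) of this argument: two Hedge learners play the Penalty Shot Game against each other, each observing the other's mixed strategy; their time-averaged strategies approach the (1/2, 1/2) equilibrium and the average payoff approaches the value 0. The asymmetric starting weights are just to make the dynamics non-trivial.

```python
import numpy as np

R = np.array([[1.0, -1.0], [-1.0, 1.0]])     # row player's payoffs; C = -R
T = 5000
eta = np.sqrt(np.log(2) / T)
wr, wc = np.array([1.0, 3.0]), np.array([2.0, 1.0])   # deliberately asymmetric start
xbar, ybar = np.zeros(2), np.zeros(2)
for t in range(T):
    x, y = wr / wr.sum(), wc / wc.sum()
    xbar += x / T
    ybar += y / T
    # Hedge updates; losses are payoffs rescaled from [-1, 1] into [0, 1].
    wr *= np.exp(-eta * (1.0 - R @ y) / 2.0)          # row wants to maximize x^T R y
    wc *= np.exp(-eta * (1.0 + R.T @ x) / 2.0)        # column receives -x^T R y
print(xbar.round(3), ybar.round(3), round(float(xbar @ R @ ybar), 4))
# time-averaged strategies ~ (1/2, 1/2) each; value ~ 0, as the argument above predicts
```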

148 Summary: Equilibrium Complexity
Equilibria may be computationally intractable: Nash equilibria in normal-form games are PPAD-complete, and the same is true of many equilibrium notions in economics. When intractable, their universality is questionable: we cannot hope that players always discover them, and the analyst cannot count on always finding them. It is important to identify game families where equilibria are tractable; several classes of tractable games have been found, e.g. polymatrix zero-sum and anonymous games. Consider alternative solution concepts with better computational properties, e.g. correlated equilibria and no-regret learning. Understand the complexity of approximate solution concepts. Investigating the complexity of equilibria has offered complexity theory challenging problems that enriched the field, and it continues to provide interesting challenges going forward.

