A useful reduction (SAT -> game)

Theorem. SAT solutions correspond to mixed-strategy equilibria of the following game (in which each agent randomizes uniformly over its support).

SAT formula: (x1 or -x2) and (-x1 or x2)
Solutions: x1 = true, x2 = true; and x1 = false, x2 = false

Game (row player's payoff listed first in each cell; "-" denotes a large negative payoff):

             x1      x2      +x1     -x1     +x2     -x2     (x1 or -x2)  (-x1 or x2)  default
x1           -,-     -,-     0,-     0,-     2,-     2,-     -,-          -,-          -,1
x2           -,-     -,-     2,-     2,-     0,-     0,-     -,-          -,-          -,1
+x1          -,0     -,2     1,1     -,-     1,1     1,1     -,0          -,2          -,1
-x1          -,0     -,2     -,-     1,1     1,1     1,1     -,2          -,0          -,1
+x2          -,2     -,0     1,1     1,1     1,1     -,-     -,2          -,0          -,1
-x2          -,2     -,0     1,1     1,1     -,-     1,1     -,0          -,2          -,1
(x1 or -x2)  -,-     -,-     0,-     2,-     2,-     0,-     -,-          -,-          -,1
(-x1 or x2)  -,-     -,-     2,-     0,-     0,-     2,-     -,-          -,-          -,1
default      1,-     1,-     1,-     1,-     1,-     1,-     1,-          1,-          0,0
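To make the construction concrete, here is a minimal Python sketch (not from the slides) that rebuilds this payoff matrix from the patterns visible in the table and checks the claimed equilibria by comparing against all pure-strategy deviations. The constant BIG_NEG is an assumption standing in for the "-" entries; any sufficiently negative value gives the same answers.

```python
# Sketch only: reconstructs the example game from the table above and checks
# equilibria by best-response comparison. BIG_NEG stands in for the "-" entries
# (assumed value; any sufficiently negative payoff gives the same answers).
from itertools import product

VARS = ["x1", "x2"]
LITS = ["+x1", "-x1", "+x2", "-x2"]
CLAUSES = {"(x1 or -x2)": {"+x1", "-x2"}, "(-x1 or x2)": {"-x1", "+x2"}}
STRATS = VARS + LITS + list(CLAUSES) + ["default"]

N = len(VARS)       # n = 2 variables
BIG_NEG = -10       # the "-" entries of the table

def var_of(lit):    # "+x1" -> "x1"
    return lit[1:]

def neg(lit):       # "+x1" -> "-x1"
    return ("-" if lit[0] == "+" else "+") + lit[1:]

def u1(r, c):
    """Row player's payoff for pure strategies r (row) and c (column)."""
    if r == "default":
        return 0 if c == "default" else N - 1
    if r in VARS:       # variable strategy: 0 vs its own literals, n vs other literals
        return (0 if var_of(c) == r else N) if c in LITS else BIG_NEG
    if r in LITS:       # literal strategy: n-1 vs consistent literals, else large negative
        return (BIG_NEG if c == neg(r) else N - 1) if c in LITS else BIG_NEG
    # clause strategy: 0 vs literals in the clause, n vs literals outside it
    return (0 if c in CLAUSES[r] else N) if c in LITS else BIG_NEG

def u2(r, c):       # the game is symmetric
    return u1(c, r)

def is_nash(row_mix, col_mix, eps=1e-9):
    """True if no pure-strategy deviation improves either player's expected payoff."""
    v1 = sum(p * q * u1(r, c) for (r, p), (c, q) in product(row_mix.items(), col_mix.items()))
    v2 = sum(p * q * u2(r, c) for (r, p), (c, q) in product(row_mix.items(), col_mix.items()))
    best1 = max(sum(q * u1(r, c) for c, q in col_mix.items()) for r in STRATS)
    best2 = max(sum(p * u2(r, c) for r, p in row_mix.items()) for c in STRATS)
    return best1 <= v1 + eps and best2 <= v2 + eps

sat = {"+x1": 0.5, "+x2": 0.5}   # uniform over the literals of x1=true, x2=true
bad = {"+x1": 0.5, "-x2": 0.5}   # x1=true, x2=false does NOT satisfy the formula

print(is_nash(sat, sat))                             # True  (satisfying assignment)
print(is_nash({"default": 1.0}, {"default": 1.0}))   # True  (the default equilibrium)
print(is_nash(bad, bad))                             # False (violates (-x1 or x2))
```

With the assumed BIG_NEG, the profile built from the satisfying assignment and (default, default) pass the check, while the literal profile of a non-satisfying assignment does not.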

Proof sketch:

Playing opposite literals (with any probability) is unstable, because the other player will then prefer to play a variable move. For example, if the row player plays -x1, the column player will play x2, which in turn prompts the row player to switch to default. (Note that as soon as one player plays default, the other player's best response is also default, leading to the Nash equilibrium (default, default).)

If you randomize over literals, you must make sure that, for every clause, the probability of playing a literal in that clause is high enough, and, for every variable, the probability of playing a literal of that variable is high enough; otherwise the other player will play that clause or variable and hurt you.

In particular, a player will not play any literal with probability greater than 1/n, because then some variable's literals receive total probability less than 1/n, and the other player prefers to play that variable, which gives the first player a large negative payoff (see the worked check after this sketch). For example, if the row player plays -x1 with probability 2/3 and -x2 with probability 1/3, the column player is motivated to play x2, which is a big hit to the row player's payoff.

So equilibria in which both players randomize over literals can only occur when both randomize over the same SAT solution.

A player will not play a variable strategy, because the other player would respond with the default strategy, which gives the first player a negative payoff; the first player would then switch to default as well, for a payoff of zero to both players. The same argument applies to the clause strategies.

The default move is needed so that the SAT formula has a satisfying assignment iff the game has more than one mixed-strategy Nash equilibrium. To see why, note that if there is no satisfying assignment, then any mixed strategy over literals fails to satisfy some clause and motivates the other player to play that clause, which leads to the default equilibrium; (default, default) is then the only Nash equilibrium, and from any starting point a sequence of best replies leads to it. If there is a satisfying assignment, then not only is (default, default) a Nash equilibrium, but so is the mixed-strategy profile in which each player puts probability 1/n on each literal of that satisfying assignment.
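A worked check of the 1/n point above, using the payoff constants from the table (with n = 2, a variable earns n against a literal of a different variable and 0 against its own literals, while consistent literal pairs earn n - 1). If the row player randomizes over literals and puts total probability p_v on the literals of variable v, the column player's payoff from deviating to v is

$$u_{\mathrm{col}}(v) = n\,(1 - p_v),$$

which exceeds the n - 1 available from literal play exactly when p_v < 1/n. In the example, p_{x_2} = 1/3 < 1/2, so

$$u_{\mathrm{col}}(x_2) = \tfrac{2}{3}\cdot 2 + \tfrac{1}{3}\cdot 0 = \tfrac{4}{3} > 1,$$

and the deviation to x2 is profitable, leaving the row player with the large negative payoff.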

Complexity of mixed-strategy Nash equilibria with certain properties

This reduction implies that there is an equilibrium in which each player gets expected utility 1 iff the SAT formula is satisfiable. Any reasonable objective would prefer such an equilibrium to the 0-payoff equilibrium (this is made precise after this slide).

Corollary. Deciding whether a "good" equilibrium exists is NP-hard:
1. equilibrium with high social welfare
2. Pareto-optimal equilibrium
3. equilibrium with high utility for a given player i
4. equilibrium with high minimal utility

Also NP-hard (from the same reduction):
5. Does more than one equilibrium exist?
6. Is a given strategy ever played in any equilibrium?
7. Is there an equilibrium in which a given strategy is never played?

(5) and weaker versions of (4), (6), (7) were already known [Gilboa & Zemel, GEB-89].
All of these results hold even for symmetric, 2-player games.
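To make the first point precise for the example game (n = 2), and assuming, as argued in the proof sketch, that the only equilibria are (default, default) and the satisfying-assignment profiles: the satisfying-assignment equilibria give each player expected utility 1 (social welfare 2), while (default, default) has social welfare 0. Hence, for any threshold 0 < k <= 2,

$$\exists\ \text{Nash equilibrium with social welfare} \ge k \iff \varphi \text{ is satisfiable},$$

which is how the "good equilibrium" questions above inherit NP-hardness from SAT.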

Counting the number of mixed-strategy Nash equilibria

Why count equilibria? If we cannot even count the equilibria, there is little hope of getting a good overview of the overall strategic structure of the game.

Unfortunately, our reduction implies:

Corollary. Counting Nash equilibria is #P-hard!
Proof. #SAT is #P-hard, and the number of equilibria is 1 + #SAT (a brute-force check for the running example follows this slide).

Corollary. Counting connected sets of equilibria is just as hard.
Proof. In our game, each equilibrium is alone in its connected set.

These results hold even for symmetric, 2-player games.
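As a toy check of the 1 + #SAT count (a sketch, not from the slides; brute force is exponential and only feasible at this size), the running example has two satisfying assignments, so the constructed game has three Nash equilibria:

```python
# Brute-force #SAT for the running example; the game built from this formula
# then has 1 + #SAT Nash equilibria (the default equilibrium plus one per
# satisfying assignment). Exponential in the number of variables.
from itertools import product

variables = ["x1", "x2"]
clauses = [{"+x1", "-x2"}, {"-x1", "+x2"}]   # (x1 or -x2) and (-x1 or x2)

def literal_true(lit, assignment):
    value = assignment[lit[1:]]              # "+x1" -> look up "x1"
    return value if lit[0] == "+" else not value

num_sat = 0
for values in product([True, False], repeat=len(variables)):
    assignment = dict(zip(variables, values))
    if all(any(literal_true(lit, assignment) for lit in clause) for clause in clauses):
        num_sat += 1

print(num_sat)       # 2  (x1=x2=true and x1=x2=false)
print(1 + num_sat)   # 3  Nash equilibria in the game built from this formula
```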