
1 Course: Applications of Information Theory to Computer Science CSG195, Fall 2008 CCIS Department, Northeastern University Dimitrios Kanoulas

2

3 Information Theory

4 Algorithmic Game Theory

5

6 Game Theory: Studies the behavior of players in competitive and collaborative situations [Christos Papadimitriou in SODA 2001]

7 Problem (Game): Two cars, a red one and a white one [the players of the game], arrive at the same time at a road intersection with no traffic light. Each driver decides to stop (S) or go (G) [the two pure strategies of the game]. The payoffs for the red/white car are given by the matrix:

                        white car
                        S           G
  red car   S        (1, 1)      (0, 5)
            G        (5, 0)      (-10, -10)

GOAL for each player: maximize his payoff.

8 Equilibrium in a Game: each player picks a strategy such that no one wants to unilaterally deviate from it.

9 Payoff matrix:

                        white car
                        S           G
  red car   S        (1, 1)      (0, 5)
            G        (5, 0)      (-10, -10)

Nash Equilibria:
(1) White car stops && red car goes (pure NE)
(2) Red car stops && white car goes (pure NE)
(3) Both cars stop with probability 5/7 and go with probability 2/7 (mixed NE)
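A quick sanity check of these equilibria (a minimal Python sketch; the payoff numbers are taken from the matrix above). For the symmetric mixed NE, the indifference condition p·1 + (1−p)·0 = p·5 + (1−p)·(−10) gives p = 5/7 for stopping:

```python
# Chicken-game payoffs from the slide: M[(my action, opponent's action)].
M = {('S', 'S'): 1, ('S', 'G'): 0, ('G', 'S'): 5, ('G', 'G'): -10}

def expected(action, p_stop):
    """Expected payoff of `action` when the opponent stops with prob p_stop."""
    return p_stop * M[(action, 'S')] + (1 - p_stop) * M[(action, 'G')]

# Mixed NE: both actions must yield the same payoff (indifference).
p = 5 / 7
print(expected('S', p), expected('G', p))   # both 5/7, so 5/7-stop is a NE

# Pure NE (S, G): neither driver gains by deviating unilaterally.
assert M[('S', 'G')] >= M[('G', 'G')]       # red: stopping beats going into traffic
assert M[('G', 'S')] >= M[('S', 'S')]       # white (by symmetry): going beats stopping
```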

10 John Nash. Movie: A Beautiful Mind

11 There always exists a mixed strategy Nash Equilibrium.

12 Correlated Equilibrium: a traffic light makes a suggestion to each car individually: go if you see a green light, stop if you see a red light. [This is a mixture of the two pure NE; for each car the marginal is ½ go and ½ stop.] The joint distribution over signals:

                              white car
                              Red Light     Green Light
  red car   Red Light            0             0.5
            Green Light          0.5           0
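The traffic light's joint distribution can be checked directly against the CE condition (a minimal sketch; the test below is just the standard "no profitable unilateral deviation from the suggestion" inequality):

```python
# Joint signal distribution from the slide, over (red's action, white's action).
P = {('S', 'S'): 0.0, ('S', 'G'): 0.5, ('G', 'S'): 0.5, ('G', 'G'): 0.0}
M_red = {('S', 'S'): 1, ('S', 'G'): 0, ('G', 'S'): 5, ('G', 'G'): -10}

# CE condition for the red car: for every suggested action a and deviation b,
#   sum over white's actions w of P[(a, w)] * (M_red[(b, w)] - M_red[(a, w)])
# must not be positive, i.e. deviating from the suggestion never helps.
for a in 'SG':
    for b in 'SG':
        gain = sum(P[(a, w)] * (M_red[(b, w)] - M_red[(a, w)]) for w in 'SG')
        assert gain <= 0, (a, b, gain)
print("no profitable deviation for the red car (white follows by symmetry)")
```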

13 The general problem of equilibrium computation is fundamental in Computer Science [Christos Papadimitriou]

14 Game:
Players: a set of n players.
Pure strategies: each player i has a set A_i of pure strategies (actions). A joint action is a = (a_1, ..., a_n) in A = A_1 × ... × A_n, where player i plays action a_i. A_-i denotes the joint-action space of all players except player i, and a_-i a joint action in A_-i.
Payoffs: a payoff matrix M_i gives player i's payoff, M_i : A -> Real Numbers.
Mixed strategies: each player i may play a probability distribution P over A_i, called a mixed strategy.
P(a_i): the mixed strategy of player i.
P(a_-i | a_i): the conditional joint mixed strategy of all players except i, given the action of player i.
Formally, player i's expected payoff for playing a_i is Σ_{a_-i} P(a_-i | a_i) M_i(a_i, a_-i), as in the sketch below.
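A minimal illustration of the expected-payoff formula (the conditional distribution used here is the traffic-light suggestion from the earlier slides, chosen only for concreteness):

```python
# Expected payoff of player i for playing a_i under P(a_-i | a_i),
# i.e. the sum over a_-i of P(a_-i | a_i) * M_i(a_i, a_-i)  (2-player case).
M_i = {('S', 'S'): 1, ('S', 'G'): 0, ('G', 'S'): 5, ('G', 'G'): -10}
P_cond = {'S': {'S': 0.0, 'G': 1.0},   # P(opponent's action | i plays S)
          'G': {'S': 1.0, 'G': 0.0}}   # P(opponent's action | i plays G)

def payoff(a_i):
    return sum(P_cond[a_i][a] * M_i[(a_i, a)] for a in 'SG')

print(payoff('S'), payoff('G'))   # 0.0 and 5.0
```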

15 Equilibrium: every player is "happy" with his [pure or mixed] strategy, meaning he cannot increase his payoff by unilaterally deviating from it.
Correlated equilibrium (CE): a joint probability distribution P(a_1, ..., a_n) such that every player individually receives a "suggestion" drawn from P, and, knowing P, players are happy with this suggestion and do not want to deviate from it.
Nash equilibrium (NE): a special case of CE in which P is a product distribution: P = Π_i P(a_i).
A NE always exists, but the problem of finding one is hard even for 2-player games. [Chen & Deng]

16 Is the equilibrium "good" or "bad"? What if I want to add some properties to my equilibrium?

17

18 In a game there is always at least one correlated equilibrium P, where P is the joint mixed strategy. Given P, let H(P) = Σ_{a in A} P(a) ln(1/P(a)) be its (Shannon) entropy.
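For example, the traffic-light CE above has entropy ln 2 (a two-line computation; the distribution is the one from slide 12):

```python
from math import log

P = {('S', 'G'): 0.5, ('G', 'S'): 0.5}   # traffic-light CE (zero cells omitted)
H = sum(p * log(1 / p) for p in P.values())
print(H)   # ln 2 ≈ 0.693 nats
```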

19 Changed game: a player is willing to negotiate and agree to some form of "joint" strategy with the other players, BUT at the same time the player wants to hide his own behavior as much as he can, by making it difficult to predict. OR: we want to suggest a joint strategy that satisfies all the players but complicates their prediction of each other's individual strategies.

20 The conditional entropy in information theory provides a measure of the predictability of one random process from another: the larger the conditional entropy, the harder the prediction. [Cover and Thomas]
A_i: the strategy of player i (random variable).
A_-i: the strategies of the rest of the players (random variable).
P(a_i | a_-i): the conditional mixed strategy in which player i picks a_i given that the rest of the players pick a_-i.

21 H_{A_i|A_-i}(P) = − Σ_{a_-i in A_-i} P(a_-i) Σ_{a_i in A_i} P(a_i | a_-i) log P(a_i | a_-i)
is the conditional entropy of the strategy of player i given the strategies of the rest of the players. SO: the larger the conditional entropy, the harder the prediction.
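Applied to the traffic-light CE, this conditional entropy is exactly zero: once a car sees the other's action, its own suggested action is fully determined. A short sketch (same distribution as before):

```python
from math import log

P = {('S', 'G'): 0.5, ('G', 'S'): 0.5}   # traffic-light CE (zero cells omitted)

# H(A_red | A_white) = -sum_w P(w) sum_r P(r|w) log P(r|w)
P_white = {}
for (r, w), p in P.items():
    P_white[w] = P_white.get(w, 0.0) + p

H = 0.0
for (r, w), p in P.items():
    cond = p / P_white[w]                # P(r | w)
    H -= P_white[w] * cond * log(cond)
print(H)   # 0.0: each car's action is perfectly predictable from the other's
```

This perfect predictability is exactly what the MaxEnt CE of the next slide is designed to avoid.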

22 MaxEnt CE: the joint mixed strategy P* = arg max_{P in CE} H_{A_i|A_-i}(P)
[the probability distribution over joint strategies that is a CE and maximizes the conditional entropy].
A MaxEnt CE satisfies all the players and makes their individual strategies maximally hard to predict.

23 MaxEnt CE has other interesting properties as well, having to do with its representation, which is much more compact than that of an arbitrary CE.
Two algorithms are proposed that converge to a MaxEnt CE, using LP to solve the maximization problem.
There is also another algorithm for computing a MaxEnt CE in which, at each iteration, each player "learns" from the previous iteration and updates his payoff; it also converges to a MaxEnt CE [but not to a NE]. A numerical sketch of the optimization follows.
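A minimal numerical sketch of the optimization (not the LP-based or learning algorithms referenced above): a general-purpose solver maximizes Shannon entropy over the CE constraints of the chicken game, with the joint entropy standing in for the conditional-entropy objective for brevity:

```python
import numpy as np
from scipy.optimize import minimize

# Chicken payoffs, indexed [a_red, a_white] with 0 = S, 1 = G.
M_red = np.array([[1, 0], [5, -10]], dtype=float)
M_white = np.array([[1, 5], [0, -10]], dtype=float)

def neg_entropy(x):
    x = np.clip(x, 1e-12, 1.0)
    return float(np.sum(x * np.log(x)))    # minimize -H(P)

def ce_constraints(x):
    """CE incentive constraints; a feasible P makes every entry >= 0."""
    p = x.reshape(2, 2)                    # P[a_red, a_white]
    cons = []
    for a in range(2):                     # suggested action
        for b in range(2):                 # possible deviation
            cons.append(np.sum(p[a, :] * (M_red[a, :] - M_red[b, :])))
            cons.append(np.sum(p[:, a] * (M_white[:, a] - M_white[:, b])))
    return np.array(cons)

res = minimize(neg_entropy, x0=np.full(4, 0.25), method='SLSQP',
               bounds=[(0.0, 1.0)] * 4,
               constraints=[{'type': 'eq', 'fun': lambda x: np.sum(x) - 1.0},
                            {'type': 'ineq', 'fun': ce_constraints}])
print(res.x.reshape(2, 2).round(3))        # a max-entropy CE of the chicken game
```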

24 A mathematician is a device for turning coffee into theorems. ~ Paul Erdős

