Presentation on theme: "The Duality Theorem" (transcript):

1 The Duality Theorem
Primal P: Maximize c^T x subject to Ax ≤ b, x ≥ 0.
Dual D: Minimize b^T y subject to A^T y ≥ c, y ≥ 0.
Weak Duality Theorem: If x is feasible for P and y is feasible for D, then c^T x ≤ b^T y.
(Strong) Duality Theorem: If x* is optimal for P and y* is optimal for D, then c^T x* = b^T y*.
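Weak and strong duality can be checked numerically on any concrete feasible pair. A minimal Python sketch, using a small hypothetical LP (my own example, not from the slides):

```python
# Toy LP (hypothetical example, not from the slides):
#   maximize 3*x1 + 2*x2  s.t.  x1 + x2 <= 4,  x1 + 3*x2 <= 6,  x >= 0.
c = [3, 2]
b = [4, 6]
A = [[1, 1],
     [1, 3]]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

x = [4, 0]   # feasible for P: Ax <= b, x >= 0
y = [3, 0]   # feasible for D: A^T y = (3, 3) >= (3, 2) = c, y >= 0

# Weak duality: c^T x <= b^T y for any feasible pair.
assert dot(c, x) <= dot(b, y)

# Here both objective values equal 12, so by the duality theorem
# x and y are in fact both optimal.
assert dot(c, x) == dot(b, y) == 12
```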

2 Generalized Duality
Primal: Maximize c_1 x_1 + c_2 x_2 + c_3 x_3
Subject to
y_1:  a_11 x_1 + a_12 x_2 + a_13 x_3 ≤ b_1
y_2:  a_21 x_1 + a_22 x_2 + a_23 x_3 ≥ b_2
y_3:  a_31 x_1 + a_32 x_2 + a_33 x_3 = b_3
x_1 ≥ 0,  x_2 ≤ 0,  x_3 free

Dual: Minimize b_1 y_1 + b_2 y_2 + b_3 y_3
Subject to
x_1:  a_11 y_1 + a_21 y_2 + a_31 y_3 ≥ c_1
x_2:  a_12 y_1 + a_22 y_2 + a_32 y_3 ≤ c_2
x_3:  a_13 y_1 + a_23 y_2 + a_33 y_3 = c_3
y_1 ≥ 0,  y_2 ≤ 0,  y_3 free

3 Rules for Taking the Dual
Constraints in the primal correspond to variables in the dual (and vice versa). Coefficients of the objective function in the primal correspond to the constant terms in the dual (and vice versa).

Primal (Maximization)    Dual (Minimization)
≤ constraint             variable ≥ 0
≥ constraint             variable ≤ 0
= constraint             free variable
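The table can be applied mechanically. A small sketch (the function and the string encodings are my own illustrative choices) that maps the constraint senses of a maximization primal to the sign restrictions of the dual variables, and the primal variable signs to the dual constraint senses:

```python
# Dualization rules for a maximization primal (illustrative encoding).
CONSTRAINT_TO_VARIABLE = {'<=': '>=0', '>=': '<=0', '=': 'free'}
VARIABLE_TO_CONSTRAINT = {'>=0': '>=', '<=0': '<=', 'free': '='}

def dualize(constraint_senses, variable_signs):
    """Given the senses of the primal constraints and the sign restrictions
    of the primal variables, return those of the (minimization) dual."""
    dual_variable_signs = [CONSTRAINT_TO_VARIABLE[s] for s in constraint_senses]
    dual_constraint_senses = [VARIABLE_TO_CONSTRAINT[s] for s in variable_signs]
    return dual_constraint_senses, dual_variable_signs

# The generalized-duality example from the previous slide:
senses, signs = dualize(['<=', '>=', '='], ['>=0', '<=0', 'free'])
assert senses == ['>=', '<=', '=']
assert signs == ['>=0', '<=0', 'free']
```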

4 Certifying optimality
Suppose we wish to prove to someone that a solution x* is optimal. If we supply both x* and an optimal solution y* to the dual, then one may easily verify that x* is in fact optimal: check that both are feasible and that the two objective values agree. Can we similarly prove that a linear program is infeasible or unbounded?
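The verification is mechanical: primal feasibility of x*, dual feasibility of y*, and equality of objective values. A minimal Python sketch against a small hypothetical LP (my own instance):

```python
from fractions import Fraction as F

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def certifies_optimality(A, b, c, x, y):
    """Verify an optimality certificate for  max c^T x  s.t.  Ax <= b, x >= 0:
    x must be primal-feasible, y dual-feasible, and the objectives equal."""
    m, n = len(A), len(A[0])
    primal_ok = all(xj >= 0 for xj in x) and \
                all(dot(A[i], x) <= b[i] for i in range(m))
    dual_ok = all(yi >= 0 for yi in y) and \
              all(sum(A[i][j] * y[i] for i in range(m)) >= c[j]
                  for j in range(n))
    return primal_ok and dual_ok and dot(c, x) == dot(b, y)

# Hypothetical toy instance: maximize 3*x1 + 2*x2
# s.t. x1 + x2 <= 4, x1 + 3*x2 <= 6, x >= 0.
A = [[F(1), F(1)], [F(1), F(3)]]
assert certifies_optimality(A, [F(4), F(6)], [F(3), F(2)],
                            [F(4), F(0)], [F(3), F(0)])
```

Exact rationals (`fractions.Fraction`) avoid false negatives from floating-point round-off in the equality test.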

5 Certifying infeasibility
Farkas' Lemma: Exactly one of the following is true:
1. There exists x such that Ax ≤ b.
2. There exists y ≥ 0 such that A^T y = 0 and b^T y < 0.
Proof: Consider the following program and its dual:
P: max 0 s.t. Ax ≤ b
D: min b^T y s.t. A^T y = 0, y ≥ 0

6 Proof of Farkas' Lemma
P: max 0 s.t. Ax ≤ b
D: min b^T y s.t. A^T y = 0, y ≥ 0
Assume P is infeasible. Then D is either infeasible or unbounded. But D is always feasible (take y = 0), hence unbounded. Thus we may find a feasible solution y with b^T y < 0.
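A concrete illustration (my own toy system, not from the slides): the system x ≤ 1 and −x ≤ −2 (i.e. x ≥ 2) is infeasible, and y = (1, 1) is a Farkas certificate of that fact:

```python
# Infeasible system (illustrative): x <= 1 and -x <= -2, i.e. x >= 2.
A = [[1], [-1]]
b = [1, -2]

# Farkas certificate: y >= 0 with A^T y = 0 and b^T y < 0.
y = [1, 1]

assert all(yi >= 0 for yi in y)
assert sum(A[i][0] * y[i] for i in range(2)) == 0   # A^T y = 0
assert sum(b[i] * y[i] for i in range(2)) < 0       # b^T y = -1 < 0
```

Summing the two inequalities with weights y gives 0 ≤ −1, a contradiction, so no x can satisfy both.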

7 The Complementary Slackness Property
Primal P: Maximize c^T x subject to Ax ≤ b, x ≥ 0.
Dual D: Minimize b^T y subject to A^T y ≥ c, y ≥ 0.
Solutions x and y are said to satisfy Complementary Slackness if and only if:
For all i = 1, 2, …, m, at least one of the following holds:
∑_{j=1}^n a_ij x_j = b_i (the ith primal constraint has slack 0)
y_i = 0 (the ith dual variable is 0)
For all j = 1, 2, …, n, at least one of the following holds:
∑_{i=1}^m a_ij y_i = c_j (the jth dual constraint has slack 0)
x_j = 0 (the jth primal variable is 0)

8 Complementary Slackness Theorem
Primal P: Maximize c^T x subject to Ax ≤ b, x ≥ 0.
Dual D: Minimize b^T y subject to A^T y ≥ c, y ≥ 0.
Theorem: Let x and y be feasible solutions to P and D. Then:
x and y are both optimal if and only if x and y satisfy complementary slackness.
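The conditions can be checked mechanically. A minimal Python sketch against a small hypothetical LP whose optimal pair is known (my own instance, not from the slides):

```python
def complementary_slackness(A, b, c, x, y):
    """Check the complementary slackness conditions for
    max c^T x s.t. Ax <= b, x >= 0 and its dual."""
    m, n = len(A), len(A[0])
    rows_ok = all(sum(A[i][j] * x[j] for j in range(n)) == b[i] or y[i] == 0
                  for i in range(m))
    cols_ok = all(sum(A[i][j] * y[i] for i in range(m)) == c[j] or x[j] == 0
                  for j in range(n))
    return rows_ok and cols_ok

# Toy instance (illustrative): maximize 3*x1 + 2*x2
# s.t. x1 + x2 <= 4, x1 + 3*x2 <= 6, with optimal x = (4, 0), y = (3, 0).
A, b, c = [[1, 1], [1, 3]], [4, 6], [3, 2]
assert complementary_slackness(A, b, c, x=[4, 0], y=[3, 0])
```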

9 Strict Complementary Slackness Property
Primal P: Maximize c^T x subject to Ax ≤ b, x ≥ 0.
Dual D: Minimize b^T y subject to A^T y ≥ c, y ≥ 0.
Solutions x and y are said to satisfy Strict Complementary Slackness if and only if:
For all i = 1, 2, …, m, exactly one of the following holds:
∑_{j=1}^n a_ij x_j = b_i (the ith primal constraint has slack 0)
y_i = 0 (the ith dual variable is 0)
For all j = 1, 2, …, n, exactly one of the following holds:
∑_{i=1}^m a_ij y_i = c_j (the jth dual constraint has slack 0)
x_j = 0 (the jth primal variable is 0)

10 Strict Complementary Slackness Theorem
If P has an optimal solution, then there exist optimal solutions x to P and y to D that satisfy strict complementary slackness.
Proof: Let z* be the value of an optimal solution. Suppose j is such that x_j = 0 in all optimal solutions. Then
P': max x_j s.t. Ax ≤ b, −c^T x ≤ −z*, x ≥ 0
has optimal value 0. Its dual is
D': min b^T y − z* t s.t. A^T y − ct ≥ e_j, y ≥ 0, t ≥ 0

11 Proof (cont'd)
If x_j = 0 in all optimal solutions, then
D': min b^T y − z* t s.t. A^T y − ct ≥ e_j, y ≥ 0, t ≥ 0
has optimal value 0. Let (y*, t*) be any optimal solution to D'.
Case t* = 0: Then we have
b^T y* = 0
A^T y* ≥ e_j
y* ≥ 0
Let y be an optimal solution to D. Then y + y* is also an optimal solution of D, and has nonzero slack in the jth constraint.

12 Proof (cont'd)
Case t* > 0: Then we have
b^T y* − z* t* = 0
A^T y* − c t* ≥ e_j
y* ≥ 0, t* ≥ 0
Define y = y*/t*. Then we have
b^T y = z*
A^T y ≥ c + e_j/t*
y ≥ 0
Hence y is an optimal solution to D and has nonzero slack in the jth constraint.

13 Proof (cont'd)
For each j where all optimal solutions to P satisfy x_j = 0, we have found an optimal solution y^(j) to D with nonzero slack in the jth constraint. The average of all these y^(j) is an optimal solution to D with nonzero slack in the jth constraint for every such j. Finally, do the same for the dual, finding a solution x. Then x and y satisfy strict complementary slackness.

14 Game Theory
Just last week: the CMU poker AI Libratus beat top human poker players in heads-up no-limit Texas Hold'em. A monumental achievement! (Compare to Chess (1997), Jeopardy! (2011), Go (2016).)

15 Guess the coin game
Minnie hides some money in an envelope.
It is either a 1 DKK coin, a 5 DKK coin, or a 10 DKK coin. If Max can guess what it is, he gets the coin!

16 How should Max play? - Worst case approach
Max does not know Minnie and has no clue as to which coin she has hidden. He wants a strategy with a good worst-case guarantee. He cannot get a worst-case guarantee on his actual winnings. But he can get a non-trivial worst-case guarantee on his expected winnings, if he uses a randomized strategy!

17 Guess the coin game - Minnie's perspective
Somehow Minnie has been convinced to play the game with Max. But she does not actually like giving her money away. Can we find the randomized strategy for Minnie that minimizes her expected loss?

18 Matrix Games A Matrix Game is given by a payoff matrix 𝐴 =( π‘Ž 𝑖𝑗 )∈ 𝑅 π‘šΓ—π‘› . Row player (Minnie) chooses row π‘–βˆˆ 1,…,π‘š (without seeing Max's move). Column player (Max) chooses column π‘—βˆˆ{1,…𝑛} (without seeing Minnies's move). Max gains π‘Ž 𝑖𝑗 β€œdollars” and Minnie loses π‘Ž 𝑖𝑗 β€œdollars”. Matrix games are zero-sum games as Max gains exactly what Minnie loses. Negative numbers can be used to model money transferred in the other direction. Warning: In many texts and old exam problems, the Row player is the maximizer and the Column player is the minimizer.

19 Guess the coin as a matrix game
             Guess DKK 1   Guess DKK 5   Guess DKK 10
Hide DKK 1        1             0              0
Hide DKK 5        0             5              0
Hide DKK 10       0             0             10

(Max wins the coin's value only on a correct guess; all other entries are 0.)

20 Optimal randomized Strategy (for Max)
Play the game in a randomized way so that the expected gain is as large as possible, assuming worst-case behavior by Minnie.

21 Column Player's (Max's) Optimal Randomized Strategy
The optimal randomized strategy, together with the guaranteed lower bound g on his expected gain, is (p_1, p_2, …, p_n; g), a solution to the LP:
max g
s.t. ∑_{j=1}^n a_ij p_j ≥ g,  i = 1, …, m
     ∑_{j=1}^n p_j = 1
     p_j ≥ 0,  j = 1, …, n

22 Row Player's (Minnie's) Optimal Randomized Strategy
The optimal randomized strategy, together with the guaranteed upper bound l on her expected loss, is (q_1, q_2, …, q_m; l), a solution to the LP:
min l
s.t. ∑_{i=1}^m a_ij q_i ≤ l,  j = 1, …, n
     ∑_{i=1}^m q_i = 1
     q_i ≥ 0,  i = 1, …, m
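Both LPs can be checked against a claimed solution using exact rational arithmetic. For Guess the coin, the candidate below (probabilities proportional to the reciprocals of the coin values; my own computation, not stated on the slides) satisfies Max's LP with g = 10/13 and Minnie's LP with l = 10/13, so by weak duality it is optimal for both players:

```python
from fractions import Fraction as F

# Payoff matrix for Guess the coin: Max gains the coin's value only on a
# correct guess, and 0 otherwise.
A = [[F(v) if i == j else F(0) for j in range(3)]
     for i, v in enumerate([1, 5, 10])]

# Candidate strategies (my own computation): probabilities proportional
# to the reciprocals 1, 1/5, 1/10 of the coin values.
p = [F(10, 13), F(2, 13), F(1, 13)]   # Max's guessing distribution
q = [F(10, 13), F(2, 13), F(1, 13)]   # Minnie's hiding distribution
g = F(10, 13)                          # claimed value of the game

assert sum(p) == sum(q) == 1
# Max's LP: every row i gives expected gain >= g.
assert all(sum(A[i][j] * p[j] for j in range(3)) >= g for i in range(3))
# Minnie's LP: every column j gives expected loss <= g.
assert all(sum(A[i][j] * q[i] for i in range(3)) <= g for j in range(3))
```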

23 Crucial observation
Max's program and Minnie's program are each other's duals!

24 Consequence
For any matrix game, Max's guaranteed lower bound on his expected gain when he plays his optimal randomized strategy is equal to Minnie's guaranteed upper bound on her expected loss when she plays her optimal randomized strategy. The common value is thus called the value of the game. A priori, this is not obvious: intuitively, the "optimal" strategies are very timid and "pessimistic", as they are optimal only when assuming a worst-case opponent.

25 "Guess the basis" trick for solving linear programs and matrix games
If we know the basis of a basic solution to an LP in standard form, we can find the solution directly by setting the non-basic variables to zero and solving the resulting system of linear equations. If we can guess the basis of an optimal solution to the primal and the basis of an optimal solution to the dual, we can therefore find the corresponding basic solutions by solving two systems of linear equations, and verify that both are optimal by checking that there is no gap between their objective values.

26 "Guess the basis" trick for solving linear programs and matrix games
"Knowing the basis" = "knowing which variables are strictly positive" (unless the LP is degenerate). "Default" guess for the bases of the LPs corresponding to a square matrix game: the probabilities of playing the rows/columns are all strictly positive (no strategy is "stupid"), so the variables to set to zero are the slack variables of all the inequalities.
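Following the default guess for the 3×3 Guess the coin game, all three guessing probabilities are positive, so every row inequality in Max's LP is tight: v_j p_j = g for each coin value v_j, together with p_1 + p_2 + p_3 = 1. This small system can be solved exactly (a sketch; the closed form is my own derivation):

```python
from fractions import Fraction as F

# Guess-the-basis for Guess the coin (coin values 1, 5, 10): guess that all
# three probabilities are positive, so every row inequality is tight:
#   v_j * p_j = g  for each j,   p_1 + p_2 + p_3 = 1.
values = [F(1), F(5), F(10)]

# From v_j * p_j = g we get p_j = g / v_j; substituting into sum(p_j) = 1
# gives g = 1 / sum(1 / v_j).
g = 1 / sum(1 / v for v in values)
p = [g / v for v in values]

assert g == F(10, 13)
assert p == [F(10, 13), F(2, 13), F(1, 13)]
assert sum(p) == 1
```

The same guess applied to Minnie's LP yields the same bound l = 10/13, confirming optimality with no duality gap.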

27 Fair and symmetric games
A matrix game is called symmetric if its matrix is antisymmetric(!). Example: Rock/Scissors/Paper. Counterexample: Guess the coin. A matrix game is called fair if its value is 0. Theorem: Every symmetric game is fair. The converse is not necessarily true. Example?
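A quick check of the theorem's example (the sign convention for who beats whom in Rock/Scissors/Paper is my own encoding):

```python
from fractions import Fraction as F

# Rock/Scissors/Paper: an antisymmetric payoff matrix (A^T = -A).
A = [[0, 1, -1],
     [-1, 0, 1],
     [1, -1, 0]]
assert all(A[i][j] == -A[j][i] for i in range(3) for j in range(3))

# The uniform strategy gives expected gain exactly 0 against every row,
# so each player can guarantee 0 and the value of the game is 0: fair.
p = [F(1, 3)] * 3
assert all(sum(A[i][j] * p[j] for j in range(3)) == 0 for i in range(3))
```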

28 Matrix algebra formulation
Note: If Max plays the (not necessarily optimal) randomized strategy p = (p_1, …, p_n) and Minnie plays the randomized strategy q = (q_1, …, q_m), the expected gain of Max is q^T A p. If the value of the game is v, we have
v = max_p min_q q^T A p
Proof: v is Max's best possible expected gain assuming a worst-case move by Minnie, and hence also assuming a worst-case randomized strategy by Minnie.
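The bilinear form q^T A p is easy to evaluate directly. An illustrative computation on Guess the coin (the uniform strategies here are deliberately non-optimal):

```python
from fractions import Fraction as F

def expected_gain(A, q, p):
    """Max's expected gain q^T A p when Minnie plays rows with distribution q
    and Max plays columns with distribution p."""
    return sum(q[i] * A[i][j] * p[j]
               for i in range(len(q)) for j in range(len(p)))

# Guess the coin: if both players play uniformly (not optimally), Max's
# expected gain is (1 + 5 + 10) / 9 = 16/9 -- well above the game's
# worst-case value of 10/13.
A = [[F(1), F(0), F(0)], [F(0), F(5), F(0)], [F(0), F(0), F(10)]]
u = [F(1, 3)] * 3
assert expected_gain(A, u, u) == F(16, 9)
```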

29 von Neumann Min-Max Theorem (Theorem 11.1)
For any real matrix A,
max_p min_q q^T A p = min_q max_p q^T A p,
where p (resp. q) ranges over probability distributions on the columns (resp. rows) of A.

30 Exploiting weak opponents
Can we play an optimal strategy and still exploit opponents that do not play by the optimal strategy? Exploit: achieve a better expectation than the value of the game.

31 Bad news: Principle of indifference
Suppose p* is optimal for Max and q* is optimal for Minnie. Let S ⊆ {1, 2, …, m} be the set of rows to which q* assigns non-zero probability. Suppose Max plays p*. Then his expected payoff is the same, namely the value of the game, no matter which actual row in S Minnie chooses.
Proof: the Complementary Slackness Theorem! If q*_i > 0, then the ith constraint is satisfied with equality by p*.
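For Guess the coin this indifference can be verified directly (the optimal p* used here is my own computation):

```python
from fractions import Fraction as F

# Principle of indifference in Guess the coin: against Max's optimal
# strategy p*, every row that Minnie can hide gives her the same loss.
A = [[F(1), F(0), F(0)], [F(0), F(5), F(0)], [F(0), F(0), F(10)]]
p_star = [F(10, 13), F(2, 13), F(1, 13)]   # Max's optimal strategy

gains = [sum(A[i][j] * p_star[j] for j in range(3)) for i in range(3)]
# Every row in Minnie's support yields exactly the value of the game.
assert gains == [F(10, 13)] * 3
```

Here Minnie's optimal strategy has full support, so Max's p* makes her indifferent between all three hiding choices.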

32 Exploiting weak opponents - good news!
Let π‘†βŠ†{1,2,…,π‘š} be the set of rows to which some optimal strategy π‘ž βˆ— for Minnie assigns non-zero probability. There exists an optimal strategy 𝑝 βˆ— for Max so that his expected payoff is strictly bigger than the value of the game if Minnie chooses a row that is not in 𝑆 βˆ— . Proof: Strict Complementary Slackness Theorem!

33 Best replies and Nash equilibrium
Let y be a randomized strategy for Minnie. A best reply to y is a randomized strategy for Max that maximizes his expected winnings assuming Minnie plays y. A pair of randomized strategies x* and y* is called a Nash equilibrium if each is a best reply to the other.
Theorem: A pair of randomized strategies (x*, y*) for a matrix game is a Nash equilibrium if and only if both are optimal.

