Chap 9. General LP problems: Duality and Infeasibility

Extend the duality theory to a more general form of LP. Consider the following form of LP:

maximize $\sum_{j=1}^{n} c_j x_j$
subject to $\sum_{j=1}^{n} a_{ij} x_j \le b_i$, $(i \in I)$
           $\sum_{j=1}^{n} a_{ij} x_j = b_i$, $(i \in E)$        (9.1)
           $x_j \ge 0$, $(j \in R)$

where $F = N \setminus R$, $N = \{1, 2, \dots, n\}$. (Below, the functional constraints of (9.1) are referred to as (1) and the sign restrictions as (2).)

We want to define a dual problem for this LP so that the dual objective value gives an upper bound on the primal optimal value.
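As a concrete illustration of the index sets (a made-up two-variable instance, not from the text), take $I = \{1\}$, $E = \{2\}$, $R = \{1\}$, $F = \{2\}$:

```latex
% A small instance of (9.1), used only to illustrate the notation:
%   maximize    x_1 + x_2
%   subject to  x_1 + x_2 \le 4      (i = 1 \in I)
%               x_1 - x_2  =  1      (i = 2 \in E)
%               x_1 \ge 0  (1 \in R),   x_2 free  (2 \in F = N \setminus R)
\max\ x_1 + x_2 \quad \text{s.t.} \quad x_1 + x_2 \le 4,\ \ x_1 - x_2 = 1,\ \ x_1 \ge 0,\ x_2\ \text{free}
```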

Take a linear combination of the constraints, with multiplier $y_i$ for constraint $i$: $y_i \ge 0$ for $i \in I$, $y_i$ unrestricted in sign for $i \in E$, so the direction of each inequality is not changed:

$y_i \sum_{j=1}^{n} a_{ij} x_j \le y_i b_i$,   $y_i \ge 0$, $i \in I$
$y_i \sum_{j=1}^{n} a_{ij} x_j = y_i b_i$,   $y_i$ unrestricted, $i \in E$

Adding up both sides, $\sum_{i=1}^{m} y_i \sum_{j=1}^{n} a_{ij} x_j \le \sum_{i=1}^{m} y_i b_i$ holds for any $x$ satisfying (1) and any $y$ with $y_i \ge 0$ ($i \in I$), $y_i$ unrestricted ($i \in E$).

Now $\sum_{i=1}^{m} y_i \sum_{j=1}^{n} a_{ij} x_j = \sum_{j=1}^{n} \left( \sum_{i=1}^{m} a_{ij} y_i \right) x_j \le \sum_{i=1}^{m} y_i b_i$. Compare the coefficient $\sum_{i=1}^{m} a_{ij} y_i$ of $x_j$ with the primal objective coefficient $c_j$; we want the right-hand side $\sum_{i=1}^{m} y_i b_i$ to serve as an upper bound on the primal objective.

$\sum_{i=1}^{m} y_i \sum_{j=1}^{n} a_{ij} x_j = \sum_{j=1}^{n} \left( \sum_{i=1}^{m} a_{ij} y_i \right) x_j \le \sum_{i=1}^{m} y_i b_i$

Make $\sum_{i=1}^{m} a_{ij} y_i \ge c_j$ if $j \in R$, and $\sum_{i=1}^{m} a_{ij} y_i = c_j$ if $j \in F$.

Then $\sum_{j=1}^{n} c_j x_j \le \sum_{j=1}^{n} \left( \sum_{i=1}^{m} a_{ij} y_i \right) x_j$ for all $x$ satisfying (2), and hence $\sum_{j=1}^{n} c_j x_j \le \sum_{i=1}^{m} b_i y_i$ for all $x$ satisfying (1), (2) (i.e. for primal feasible $x$) and all $y$ satisfying the given conditions. This gives the weak duality relationship. We want the strongest such bound, hence we solve

min $\sum_{i=1}^{m} b_i y_i$
s.t. $\sum_{i=1}^{m} a_{ij} y_i \ge c_j$, $(j \in R)$
     $\sum_{i=1}^{m} a_{ij} y_i = c_j$, $(j \in F)$        (9.9)  (dual problem)
     $y_i \ge 0$, $(i \in I)$   ($y_i$ free, $i \in E$)
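Continuing the small made-up instance above, its dual in the form (9.9) can be written out by hand and checked against weak duality (values verified directly, not taken from the text):

```latex
% Primal: max x_1 + x_2  s.t.  x_1 + x_2 <= 4 (i in I),  x_1 - x_2 = 1 (i in E),
%         x_1 >= 0 (j in R),  x_2 free (j in F).
% Dual (9.9): one variable per primal row, one constraint per primal column.
\min\ 4y_1 + y_2 \quad \text{s.t.} \quad y_1 + y_2 \ge 1\ (j = 1 \in R),\quad
      y_1 - y_2 = 1\ (j = 2 \in F),\quad y_1 \ge 0,\ y_2\ \text{free}
% Weak duality check: x = (2.5, 1.5) is primal feasible with value 4, and y = (1, 0) is
% dual feasible with value 4, so the bound is tight and both happen to be optimal.
```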

Primal-Dual Correspondence

  Primal (maximize)          Dual (minimize)
  $x_j \ge 0$                $j$th constraint: $\ge$
  $x_j$ free                 $j$th constraint: $=$
  $i$th constraint: $\le$    $y_i \ge 0$
  $i$th constraint: $=$      $y_i$ free
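The correspondence table is mechanical enough to encode directly. Below is a minimal sketch (my own helper, not from the text) that, given the data of (9.1) as NumPy arrays together with 0-based index sets I, E, R, writes down the data of the dual (9.9):

```python
import numpy as np

def build_dual(A, b, c, I, E, R):
    """Build the data of the dual (9.9) of the primal (9.1):
    primal: max c'x,  row i is '<=' for i in I and '=' for i in E,
            x_j >= 0 for j in R, x_j free otherwise.
    dual:   min b'y,  column j gives a '>=' row for j in R and an '=' row for j in F,
            y_i >= 0 for i in I, y_i free for i in E."""
    m, n = A.shape
    F = {j for j in range(n) if j not in R}              # free primal variables
    return {
        "objective": ("min", b),                         # minimize b'y
        "matrix": A.T,                                   # dual row j = primal column j
        "rhs": c,
        "row_sense": ["=" if j in F else ">=" for j in range(n)],
        "var_sign": [">= 0" if i in I else "free" for i in range(m)],
    }
```

Calling it on the two-variable instance above (0-based: I = {0}, E = {1}, R = {0}) reproduces the dual written out earlier.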

The dual of the dual is the primal: problem (9.9) may be presented as

max $\sum_{i=1}^{m} (-b_i) y_i$
s.t. $\sum_{i=1}^{m} (-a_{ij}) y_i \le -c_j$, $(j \in R)$
     $\sum_{i=1}^{m} (-a_{ij}) y_i = -c_j$, $(j \in F)$
     $y_i \ge 0$, $(i \in I)$

and its dual problem is

min $\sum_{j=1}^{n} (-c_j) x_j$
s.t. $\sum_{j=1}^{n} (-a_{ij}) x_j \ge -b_i$, $(i \in I)$
     $\sum_{j=1}^{n} (-a_{ij}) x_j = -b_i$, $(i \in E)$
     $x_j \ge 0$, $(j \in R)$

which is just another presentation of (9.1).

Obtaining the dual of an unusual form

Ex)  max $3x_1 + 2x_2 + 5x_3$
     s.t. $5x_1 + 3x_2 + x_3 = -8$
          $4x_1 + 2x_2 + 8x_3 \le 23$
          $6x_1 + 7x_2 + 3x_3 \ge 1$
          $x_1 \le 4$, $x_3 \ge 0$

Converting the $\ge$ constraint to $\le$ form gives the equivalent problem

     max $3x_1 + 2x_2 + 5x_3$
     s.t. $5x_1 + 3x_2 + x_3 = -8$
          $4x_1 + 2x_2 + 8x_3 \le 23$
          $-6x_1 - 7x_2 - 3x_3 \le -1$
          $x_1 \le 4$
          $x_3 \ge 0$

The dual problem is

     min $-8y_1 + 23y_2 - y_3 + 4y_4$
     s.t. $5y_1 + 4y_2 - 6y_3 + y_4 = 3$
          $3y_1 + 2y_2 - 7y_3 = 2$
          $y_1 + 8y_2 - 3y_3 \ge 5$
          $y_2, y_3, y_4 \ge 0$   ($y_1$ free)

If the LP is given in minimization form, present it in the form of (9.9) and read off its dual in the form of (9.1).
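Such hand conversions can be sanity-checked numerically by feeding the primal to an LP solver. A minimal sketch using scipy.optimize.linprog (my own check, not part of the text; scipy minimizes, so the objective is negated and the $\ge$ row is flipped):

```python
from scipy.optimize import linprog

# Primal of the example: max 3x1 + 2x2 + 5x3
#   5x1 + 3x2 +  x3  = -8
#   4x1 + 2x2 + 8x3 <= 23
#   6x1 + 7x2 + 3x3 >=  1    (passed to scipy as -6x1 - 7x2 - 3x3 <= -1)
#   x1 <= 4, x2 free, x3 >= 0
res = linprog(
    c=[-3, -2, -5],                                # negate: scipy minimizes
    A_ub=[[4, 2, 8], [-6, -7, -3]], b_ub=[23, -1],
    A_eq=[[5, 3, 1]], b_eq=[-8],
    bounds=[(None, 4), (None, None), (0, None)],   # x1 <= 4, x2 free, x3 >= 0
    method="highs",
)
print(res.status, res.message)                     # 0 = optimal, 2 = infeasible, 3 = unbounded
```

Whatever status is reported must be consistent with the dual under the duality results of this chapter: an optimal primal value equals the dual optimum, while an unbounded primal forces the dual to be infeasible. (For this particular data the primal appears to be unbounded: $x = (-10, 14, 0)$ is feasible and moving along $(-1, 5/3, 0)$ preserves feasibility while increasing the objective, so the dual above should be infeasible.)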

Thm 9.1 (The Duality Theorem): If a linear programming problem has an optimal solution, then its dual has an optimal solution and the optimal values of the two problems coincide.

Pf) The proof parallels the idea used for the standard LP: at termination of the simplex method, we identify the dual vector $y^*$ from $y'B = c_B'$ and show that it is dual feasible and that $b'y^* = c'x^*$. See the text for details. ∎

So the weak and strong duality relationships hold for the general primal-dual pair.

Consider a special case of the general LP:

max $c'x$   s.t. $Ax = b$, $x \ge 0$,

which is used as the standard LP problem by some people (perhaps in minimization form). It is also the augmented form we used when we developed the simplex method in Chapters 2 and 3. ($A$: $m \times n$, full row rank)

Its dual is

min $y'b$   s.t. $y'A \ge c'$, $y$ unrestricted.

Suppose we solve the primal problem above using the simplex method and find an optimal basis $B$. Then the updated tableau is expressed in the same way as we have seen before.

$-z + 0'x_B + (c_N' - c_B'B^{-1}N)x_N = -c_B'B^{-1}b$
$x_B + B^{-1}N x_N = B^{-1}b$

Here no slack variables appear. Since $y$ is obtained from $y'B = c_B'$, the updated objective coefficients in the $z$-row can be regarded as $c_j - y'A_j$ for all basic and nonbasic variables. At optimality we have $c_j - y'A_j \le 0$, i.e. $y'A_j \ge c_j$, hence $y$ is a dual feasible vector. The dual objective value is $y'b$, which is the same as the current primal objective value $c_B'B^{-1}b = c_B'x_B$. This proves that the current solution $x$ is optimal to the primal and $y$ is optimal to the dual.
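This argument is easy to check numerically. A minimal sketch (my own helper, not from the text): given the data of max $c'x$, $Ax = b$, $x \ge 0$ and an optimal basis, recover $y$ from $y'B = c_B'$ and verify the two certificates used above.

```python
import numpy as np

def dual_from_basis(A, b, c, basis, tol=1e-9):
    """Assuming `basis` (a list of column indices) is an optimal basis of
    max c'x s.t. Ax = b, x >= 0, recover y from y'B = c_B' and verify that
    (i) c_j - y'A_j <= 0 for every column j (dual feasibility), and
    (ii) y'b equals the primal objective value c'x of the basic solution."""
    B = A[:, basis]
    y = np.linalg.solve(B.T, c[basis])      # y'B = c_B'  <=>  B^T y = c_B
    x = np.zeros(A.shape[1])
    x[basis] = np.linalg.solve(B, b)        # basic solution: x_B = B^{-1} b
    assert np.all(c - A.T @ y <= tol), "z-row coefficients must be <= 0 at optimality"
    assert abs(y @ b - c @ x) <= tol, "dual and primal objective values must coincide"
    return y, x
```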

Unsolvable Systems of Linear Inequalities and Equations

Consider the following pair of systems:

$\sum_{j=1}^{n} a_{ij} x_j \le b_i$   $(i \in I)$
$\sum_{j=1}^{n} a_{ij} x_j = b_i$   $(i \in E)$        (9.13)
(the number of constraints is $|I| + |E| = m$)

$y_i \ge 0$ whenever $i \in I$
$\sum_{i=1}^{m} a_{ij} y_i = 0$ for all $j = 1, 2, \dots, n$        (9.16)
$\sum_{i=1}^{m} b_i y_i < 0$

Then (9.13) is infeasible if and only if (9.16) is feasible. In other words, exactly one of (9.13) and (9.16) has a feasible solution (Theorem 9.2). (This is called a theorem of the alternatives; there are many other versions, and it is a very important tool with many applications.)
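A one-variable illustration of Theorem 9.2 (my own example, not from the text):

```latex
% The system  x_1 <= 1,  -x_1 <= -2  (i.e. x_1 >= 2), both rows in I, is clearly unsolvable.
% A certificate in the sense of (9.16) is y = (1, 1) >= 0:
%   \sum_i a_{i1} y_i = 1 \cdot 1 + (-1) \cdot 1 = 0,   \sum_i b_i y_i = 1 + (-2) = -1 < 0,
% so exactly one of (9.13), (9.16) is feasible here, as Theorem 9.2 asserts.
x_1 \le 1,\quad -x_1 \le -2; \qquad y = (1,1):\ \ 1 - 1 = 0,\ \ 1 - 2 = -1 < 0.
```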

Pf in the text) ((9.16) feasible ⇒ (9.13) infeasible) Suppose (9.16) has a feasible solution $y$. Multiply both sides of each constraint in (9.13) by $y_i$ ($y_i \ge 0$ for $i \in I$) and add up the left-hand and right-hand sides, respectively. We obtain

$\sum_{j=1}^{n} \left( \sum_{i=1}^{m} a_{ij} y_i \right) x_j \le \sum_{i=1}^{m} b_i y_i$.

Hence $\sum_{j=1}^{n} 0 \times x_j \le \sum_{i=1}^{m} b_i y_i < 0$ must be satisfied by any $x$ feasible to (9.13). Since no $x$ can satisfy $\sum_{j=1}^{n} 0 \times x_j < 0$, (9.13) is infeasible.

((9.13) infeasible ⇒ (9.16) feasible) Consider the linear program

max $\sum_{i=1}^{m} -x_{n+i}$   (or min $\sum_{i=1}^{m} x_{n+i}$)
s.t. $\sum_{j=1}^{n} a_{ij} x_j + w_i x_{n+i} \le b_i$   $(i \in I)$
     $\sum_{j=1}^{n} a_{ij} x_j + w_i x_{n+i} = b_i$   $(i \in E)$        (9.18)
     $x_{n+i} \ge 0$   $(i = 1, 2, \dots, m)$

with $w_i = 1$ if $b_i \ge 0$ and $w_i = -1$ if $b_i < 0$. (9.18) has a feasible solution (with $x = 0$ for the original variables). Also, 0 is an upper bound on its optimal value, hence it has a finite optimum.
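The phase-one construction (9.18) is straightforward to set up with an LP solver. A minimal sketch (my own code, assuming scipy is available and passing the rows of $I$ and $E$ separately):

```python
import numpy as np
from scipy.optimize import linprog

def system_is_solvable(A_I, b_I, A_E, b_E, tol=1e-9):
    """Phase-one test of (9.13) via (9.18): add one artificial variable per row with
    coefficient w_i (= +1 if b_i >= 0, else -1) and minimize the sum of artificials.
    The optimum is 0 iff the original system (9.13) is solvable."""
    m_I, n = A_I.shape
    m_E = A_E.shape[0]
    m = m_I + m_E
    w = np.where(np.concatenate([b_I, b_E]) >= 0, 1.0, -1.0)
    A_ub = np.hstack([A_I, np.diag(w[:m_I]), np.zeros((m_I, m_E))])   # rows i in I
    A_eq = np.hstack([A_E, np.zeros((m_E, m_I)), np.diag(w[m_I:])])   # rows i in E
    cost = np.concatenate([np.zeros(n), np.ones(m)])                  # min sum of artificials
    bounds = [(None, None)] * n + [(0, None)] * m                     # x_j free, artificials >= 0
    res = linprog(cost, A_ub=A_ub, b_ub=b_I, A_eq=A_eq, b_eq=b_E,
                  bounds=bounds, method="highs")
    return res.status == 0 and res.fun <= tol
```

When the optimum is positive (i.e. (9.13) is unsolvable), the dual multipliers of this phase-one LP would supply a certificate $y$ as in (9.16); they are not extracted here because solver sign conventions vary.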

(continued) The optimal value of (9.18) is 0 if and only if (9.13) has a feasible solution. If (9.13) is unsolvable, then the optimal value of (9.18) is negative. The duality theorem then guarantees that the dual of (9.18) also has a negative optimal value:

min $\sum_{i=1}^{m} b_i y_i$
s.t. $\sum_{i=1}^{m} a_{ij} y_i = 0$   $(j = 1, 2, \dots, n)$
     $w_i y_i \ge -1$   $(i = 1, 2, \dots, m)$
     $y_i \ge 0$   $(i \in I)$

The optimal dual solution $y_1, y_2, \dots, y_m$ then satisfies (9.16). ∎

Alternative proof) Consider the following primal-dual pair:

(P) max $\sum_{j=1}^{n} 0 \cdot x_j$   (the coefficients of $x_j$ are all 0)
    s.t. $\sum_{j=1}^{n} a_{ij} x_j \le b_i$   $(i \in I)$
         $\sum_{j=1}^{n} a_{ij} x_j = b_i$   $(i \in E)$,   $|I| + |E| = m$

(D) min $\sum_{i=1}^{m} b_i y_i$
    s.t. $\sum_{i=1}^{m} a_{ij} y_i = 0$   for all $j = 1, 2, \dots, n$
         $y_i \ge 0$ whenever $i \in I$

((9.16) feasible ⇒ (9.13) infeasible) Suppose (9.16) has a feasible solution $y$ with $b'y < 0$. Then $\lambda y$ is feasible to (D) for all $\lambda > 0$, and $b'(\lambda y) \to -\infty$ as $\lambda \to \infty$, hence (D) is unbounded. Therefore (P) is infeasible, i.e. (9.13) is infeasible, by the possible primal-dual statuses.

(pf continued) ((9.13) infeasible ⇒ (9.16) feasible) Suppose (9.13) is infeasible, i.e. (P) does not have a feasible solution. Then (D) is either infeasible or unbounded. But $y = 0$ is a feasible solution to (D), hence the only remaining possibility is that (D) is unbounded. Then (9.16) has a feasible solution with $b'y < 0$. ∎

Thm 9.3: If a system of $m$ linear equations has a nonnegative solution, then it has a solution with at most $m$ variables positive.

Pf) If the system

$\sum_{j=1}^{n} a_{ij} x_j = b_i$   $(i = 1, 2, \dots, m)$        (9.19)
$x_j \ge 0$   $(j = 1, 2, \dots, n)$

has a solution, then, by Theorem 8.3, there is some set $I$ of subscripts from $\{1, 2, \dots, m\}$ such that (i) system (9.19) and

$\sum_{j=1}^{n} a_{ij} x_j = b_i$   $(i \in I)$        (9.20)

have precisely the same set of solutions, and (ii) system (9.20) has a basic feasible solution $x_1^*, x_2^*, \dots, x_n^*$. At most $|I| \le m$ variables are positive at a basic feasible solution. ∎
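A small illustration of Theorem 9.3 with $m = 1$ (my own example):

```latex
% x_1 + x_2 + x_3 = 1,  x >= 0  has the nonnegative solution (1/3, 1/3, 1/3) with three
% positive entries, but also the basic feasible solution (1, 0, 0), which has at most
% m = 1 positive variable, as Theorem 9.3 guarantees.
x_1 + x_2 + x_3 = 1,\ x \ge 0: \qquad \left(\tfrac13,\tfrac13,\tfrac13\right)\ \text{feasible}, \qquad (1,0,0)\ \text{feasible with one positive entry}.
```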

Note that if $b$ is considered as a vector in $\mathbb{R}^m$, (9.19) means that $b$ can be expressed as a nonnegative linear combination of the columns of the coefficient matrix $A$ ($b$ is in the cone generated by the columns of $A$).

(Carathéodory's Thm) Theorem 9.3 can also be applied to the system

$\sum_{j=1}^{n} a_{ij} x_j = b_i$   $(i = 1, 2, \dots, m)$   (i.e. $\sum_j A_j x_j = b$)        (9.19)
$\sum_{j=1}^{n} x_j = 1$
$x_j \ge 0$   $(j = 1, 2, \dots, n)$,

which says that $b \in \mathbb{R}^m$ can be expressed as a convex combination of the column vectors of $A$. The theorem now says that at most $m + 1$ variables need to be positive. Carathéodory's theorem states that if a vector $b \in \mathbb{R}^m$ is in the convex hull of a set $S$, then $b$ can be expressed as a convex combination of at most $m + 1$ points of $S$.
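A tiny Carathéodory illustration in $\mathbb{R}^2$, i.e. $m = 2$ (my own example):

```latex
% b = (1/2, 1/2) lies in the convex hull of S = {(0,0), (1,0), (0,1), (1,1)} (four points)
% and can be written as a convex combination of at most m + 1 = 3 of them; here two suffice:
b = \tfrac12 (1,0) + \tfrac12 (0,1).
```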

Thm 9.4: Every unsolvable system of linear inequalities in $n$ variables contains an unsolvable subsystem of at most $n + 1$ inequalities.

Pf) If $Ax \le b$ is unsolvable, then, by Theorem 9.2, there exists $y^*$ satisfying $y^* \ge 0$, $y^{*\prime}A = 0$, $y^{*\prime}b < 0$. Denote $y^{*\prime}b$ by $c$ (e.g. $c = -1$; note that if $y^*$ is feasible to the above system, then so is $\lambda y^*$ for any $\lambda > 0$, so the value of $y^{*\prime}b$ can be scaled to any negative value). Consider the system

$y'A = 0$,   $y'b = c$,

consisting of $n + 1$ equations. Since $y^*$ is a nonnegative solution of it, Theorem 9.3 guarantees the existence of a nonnegative solution $y$ with at most $n + 1$ positive components $y_i$. The desired subsystem consists of those inequalities $\sum_{j=1}^{n} a_{ij} x_j \le b_i$ for which $y_i > 0$: since $\sum_i y_i a_{ij} = 0$ for all $j$ but $\sum_i b_i y_i = c < 0$, this subsystem is unsolvable. ∎
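A one-variable illustration of Theorem 9.4, i.e. $n = 1$ (my own example):

```latex
% The system  x_1 <= 0,  x_1 <= 1,  -x_1 <= -2  is unsolvable.
% A certificate with at most n + 1 = 2 positive multipliers is y = (1, 0, 1):
%   1 \cdot 1 + 0 \cdot 1 + 1 \cdot (-1) = 0,    1 \cdot 0 + 0 \cdot 1 + 1 \cdot (-2) = -2 < 0,
% so the subsystem  x_1 <= 0,  -x_1 <= -2  of two inequalities is already unsolvable.
y = (1, 0, 1): \qquad \textstyle\sum_i y_i a_{i1} = 0, \quad \sum_i y_i b_i = -2 < 0.
```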