Chapter 5. Sensitivity Analysis


Investigate how the optimal solution depends on changes in the problem data:
(1) Find the range of data variation for which the current basis remains optimal.
(2) Reoptimize after the data change.

Linear Programming 2015

5.1 Local sensitivity analysis

The current basis $B$ is optimal if $B^{-1}b \ge 0$ and $c' - c_B' B^{-1} A \ge 0'$.

(a) New variable added:

min $c'x + c_{n+1} x_{n+1}$
s.t. $Ax + A_{n+1} x_{n+1} = b$
     $x \ge 0$, $x_{n+1} \ge 0$

$(x, x_{n+1}) = (x^*, 0)$ is a b.f.s.; check whether $\bar{c}_{n+1} = c_{n+1} - c_B' B^{-1} A_{n+1} \ge 0$.
If $\bar{c}_{n+1} \ge 0$, the current solution is optimal.
If $\bar{c}_{n+1} < 0$, add the new column to the tableau and reoptimize starting from the current basis $B$.
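As a quick numerical sketch of this test (all data below are made up, not from the text): given an optimal basis $B$, the reduced cost of a candidate column follows from the dual vector $p' = c_B' B^{-1}$.

```python
import numpy as np

# Hypothetical data: an optimal basis matrix B, basic costs c_B,
# and a candidate new column A_{n+1} with cost c_{n+1}.
B = np.array([[1.0, 0.0], [0.0, 1.0]])   # basis matrix (assumed optimal)
c_B = np.array([3.0, 5.0])               # costs of the basic variables
A_new = np.array([1.0, 2.0])             # new column A_{n+1}
c_new = 4.0                              # new cost coefficient c_{n+1}

# Reduced cost: c_{n+1} - c_B' B^{-1} A_{n+1}
p = np.linalg.solve(B.T, c_B)            # dual vector p' = c_B' B^{-1}
reduced_cost = c_new - p @ A_new

if reduced_cost >= 0:
    print("current solution stays optimal")      # new variable stays at 0
else:
    print("add the column and reoptimize from basis B")
```

Solving with $B^T$ avoids forming $B^{-1}$ explicitly, which is also how the revised simplex method computes the dual vector.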

(b) New inequality added: $a_{m+1}' x \ge b_{m+1}$.

If $a_{m+1}' x^* \ge b_{m+1}$, $x^*$ is still optimal. Otherwise, add a surplus variable: $a_{m+1}' x - x_{n+1} = b_{m+1}$, $x_{n+1} \ge 0$.

New basis $\bar{B} = \begin{bmatrix} B & 0 \\ a' & -1 \end{bmatrix}$, where $a$ consists of the components of $a_{m+1}$ corresponding to the basic variables. (Then $\bar{B} \begin{pmatrix} x_B \\ x_{n+1} \end{pmatrix} = \begin{pmatrix} b \\ b_{m+1} \end{pmatrix}$.)

The new basic solution is $(x^*, \, a_{m+1}' x^* - b_{m+1})$: primal infeasible.

Dual feasibility? (The reduced costs are unchanged.)
$\bar{B}^{-1} = \begin{bmatrix} B^{-1} & 0 \\ a' B^{-1} & -1 \end{bmatrix}$, so $(c_B', 0)\bar{B}^{-1} = (c_B' B^{-1}, \, 0)$
⇒ $(c', 0) - (c_B', 0) \bar{B}^{-1} \begin{bmatrix} A & 0 \\ a_{m+1}' & -1 \end{bmatrix} = (c' - c_B' B^{-1} A, \, 0) \ge 0'$
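The block form of $\bar{B}^{-1}$ is easy to check numerically; the sketch below uses made-up values for $B$ and $a$.

```python
import numpy as np

# Numerical check (hypothetical data) of the block inverse used above:
# for Bbar = [[B, 0], [a', -1]], Bbar^{-1} = [[B^{-1}, 0], [a' B^{-1}, -1]].
B = np.array([[2.0, 1.0], [1.0, 3.0]])
a = np.array([4.0, 5.0])                 # components of a_{m+1} on basic variables

Bbar = np.block([[B, np.zeros((2, 1))],
                 [a.reshape(1, 2), -np.ones((1, 1))]])
B_inv = np.linalg.inv(B)
Bbar_inv = np.block([[B_inv, np.zeros((2, 1))],
                     [(a @ B_inv).reshape(1, 2), -np.ones((1, 1))]])

print(np.allclose(Bbar @ Bbar_inv, np.eye(3)))   # True
```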

Use the dual simplex method; the constraint part of the new tableau is
$\bar{B}^{-1} \begin{bmatrix} A & 0 \\ a_{m+1}' & -1 \end{bmatrix} = \begin{bmatrix} B^{-1} & 0 \\ a' B^{-1} & -1 \end{bmatrix} \begin{bmatrix} A & 0 \\ a_{m+1}' & -1 \end{bmatrix} = \begin{bmatrix} B^{-1} A & 0 \\ a' B^{-1} A - a_{m+1}' & 1 \end{bmatrix}$

Equivalently, perform elementary row operations on the tableau so that the coefficients of the basic variables in the added constraint become 0 (after multiplying the constraint by $-1$ so that the coefficient of $x_{n+1}$ becomes 1). (See Example 5.2.)

Note: the dual vector $(\bar{p}', p_{m+1})$ can also be obtained directly:
$(\bar{p}', p_{m+1}) \bar{B} = (c_B', 0)$ ⇒ $\bar{p}' B + p_{m+1} a' = c_B'$ and $-p_{m+1} = 0$ ⇒ $(\bar{p}, p_{m+1}) = (p^*, 0)$
Hence the dual variable for the added constraint is 0 and the original dual values are unchanged ⇒ no change in the reduced costs.

(c) New equality constraint added: $a_{m+1}' x = b_{m+1}$, violated by $x^*$.

$(p^*, 0)$ is dual feasible, but we may not have a primal basic solution. Instead of finding a new $\bar{B}$, solve (assuming $a_{m+1}' x^* > b_{m+1}$):

min $c'x + M x_{n+1}$
s.t. $Ax = b$
     $a_{m+1}' x - x_{n+1} = b_{m+1}$
     $x \ge 0$, $x_{n+1} \ge 0$

Add $x_{n+1}$ to the basis (as in (b)) to get a primal b.f.s., then use the primal simplex method.

Remark: See V. Chvátal, Linear Programming, Freeman, 1983, Chapter 10 (Sensitivity Analysis) for reoptimization approaches for LP problems with bounded variables.

(d) Changes in $b$: $b \to b + \delta e_i$.

The reduced costs do not change, but we need $B^{-1}(b + \delta e_i) \ge 0$.
Let $g$ be the $i$-th column of $B^{-1}$. Then $B^{-1}(b + \delta e_i) = x_B + \delta g \ge 0$; find the range of $\delta$ satisfying these inequalities.
If $\delta$ falls outside this range, use the dual simplex method to reoptimize.
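A small sketch of the ratio test for the allowable range of $\delta$ (the basis and right-hand side below are made up): each component of $x_B + \delta g \ge 0$ yields one bound.

```python
import numpy as np

# Hypothetical data: find the range of delta with x_B + delta * g >= 0,
# where g is the i-th column of B^{-1}.
B = np.array([[2.0, 1.0], [1.0, 3.0]])   # assumed optimal basis matrix
b = np.array([8.0, 9.0])
i = 0                                    # perturb b_1: b -> b + delta * e_1

B_inv = np.linalg.inv(B)
x_B = B_inv @ b                          # current basic solution
g = B_inv[:, i]                          # i-th column of B^{-1}

# x_B[k] + delta * g[k] >= 0 gives a lower bound when g[k] > 0
# and an upper bound when g[k] < 0.
lower = max((-x_B[k] / g[k] for k in range(len(g)) if g[k] > 0), default=-np.inf)
upper = min((-x_B[k] / g[k] for k in range(len(g)) if g[k] < 0), default=np.inf)
print(f"basis stays optimal for delta in [{lower}, {upper}]")
```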

(e) Changes in $c$.

(e-1) $x_j$ nonbasic, $c_j \to c_j + \delta$: primal feasibility is not affected.
Optimality: $c_B' B^{-1} A_j \le c_j + \delta$ ⇒ $\delta \ge -\bar{c}_j$.

(e-2) $x_j$ basic, say $j = B(l)$, so $c_B \to c_B + \delta e_l$.
Optimality condition: $(c_B + \delta e_l)' B^{-1} A_i \le c_i$, $\forall \, i \ne j$
⇒ $c_B' B^{-1} A_i + \delta e_l' B^{-1} A_i \le c_i$
⇒ $\delta g_{li} \le c_i - c_B' B^{-1} A_i = \bar{c}_i$, where $g_{li}$ is the $l$-th entry of $B^{-1} A_i$.
Note that $g_{li} = 0$ for all basic variables except $x_j$, so only the nonbasic $x_i$'s need to be checked for the range of $\delta$.
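For case (e-2), the same kind of ratio test gives the allowable range of $\delta$; the sketch below uses hypothetical data ($B$, $c_B$, the nonbasic columns $A_N$, and costs $c_N$ are invented for illustration).

```python
import numpy as np

# Hypothetical data: range of delta for c_B(l) -> c_B(l) + delta,
# from delta * g_li <= cbar_i for each nonbasic variable i.
B = np.array([[2.0, 1.0], [1.0, 3.0]])      # assumed optimal basis
c_B = np.array([4.0, 3.0])                  # costs of the basic variables
A_N = np.array([[1.0, 0.0], [0.0, 1.0]])    # columns of the nonbasic variables
c_N = np.array([5.0, 6.0])                  # their cost coefficients
l = 0                                       # perturb the cost of basic variable B(0)

G = np.linalg.solve(B, A_N)                 # B^{-1} A_N
cbar_N = c_N - c_B @ G                      # reduced costs of nonbasic variables
g_l = G[l, :]                               # l-th row: the entries g_li

# delta * g_li <= cbar_i: an upper bound when g_li > 0, a lower bound when g_li < 0.
upper = min((cbar_N[i] / g_l[i] for i in range(len(g_l)) if g_l[i] > 0), default=np.inf)
lower = max((cbar_N[i] / g_l[i] for i in range(len(g_l)) if g_l[i] < 0), default=-np.inf)
print(f"basis stays optimal for delta in [{lower}, {upper}]")
```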

Alternatively (for (e-2)), we may think of $\delta$ as remaining in the 0-th row coefficient of $x_j$ under the optimal basis $B$. Pivot to make that coefficient 0; this changes the 0-th row coefficients of the nonbasic variables, and we need the range of $\delta$ for which those coefficients stay nonnegative.

(f) Change in a nonbasic column $A_j$: $a_{ij} \to a_{ij} + \delta$.
Optimality: $c_j - p'(A_j + \delta e_i) \ge 0$ ⇒ $\bar{c}_j - \delta p_i \ge 0$, which gives the range of $\delta$.

5.2 Global dependence on $b$

Investigate the change of the optimal value as a function of $b$.
Let $P(b) = \{x \in R^n : Ax = b, \, x \ge 0\}$ and $S = \{b \in R^m : P(b) \text{ is nonempty}\} = \{Ax : x \ge 0\}$ (a convex set).
Define $F(b) = \min_{x \in P(b)} c'x$ (the value function).
Assume the dual feasible set $\{p : p'A \le c'\}$ is nonempty ⇒ $F(b)$ is finite for all $b \in S$.

Suppose that at some $b^* \in S$ the primal has a nondegenerate optimal solution ($x_B = B^{-1} b^* > 0$). By the nondegeneracy assumption, the current basis $B$ remains optimal for small changes in $b$:
⇒ $F(b) = c_B' B^{-1} b = p'b$ for $b$ close to $b^*$
⇒ $F(b)$ is a linear function of $b$ near $b^*$, with gradient $p$.

Thm 5.1: $F(b)$ is convex on $S$.
pf) Let $b^1, b^2 \in S$ with $F(b^1) = c'x^1$ and $F(b^2) = c'x^2$.
For $y = \lambda x^1 + (1 - \lambda) x^2$, $\lambda \in [0, 1]$, we have $Ay = \lambda b^1 + (1 - \lambda) b^2$, so $y$ is a feasible solution when $b$ equals $\lambda b^1 + (1 - \lambda) b^2$.
⇒ $F(\lambda b^1 + (1 - \lambda) b^2) \le c'y = \lambda c'x^1 + (1 - \lambda) c'x^2 = \lambda F(b^1) + (1 - \lambda) F(b^2)$. □

A different reasoning uses the dual problem max $p'b$ s.t. $p'A \le c'$, assumed feasible. Then strong duality holds for every $b \in S$, so $F(b) = p_i' b$ for some extreme point $p^i$ of the dual feasible set ($A$ has full row rank, hence the dual has an extreme point if it is feasible).
⇒ $F(b) = \max_{i = 1, \dots, N} p_i' b$ for $b \in S$: the maximum of finitely many linear functions, hence piecewise linear convex.
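The convexity of $F(b)$ can be illustrated numerically with `scipy.optimize.linprog` (the LP below is invented: for min $x_1 + x_2$ subject to $x_1 - x_2 = b$, $x \ge 0$, the value function is $F(b) = |b|$, which is piecewise linear convex).

```python
from scipy.optimize import linprog

# Illustration (made-up data): F(b) = min{ c'x : Ax = b, x >= 0 } is convex in b,
# so F(lam*b1 + (1-lam)*b2) <= lam*F(b1) + (1-lam)*F(b2).
c = [1.0, 1.0]
A_eq = [[1.0, -1.0]]                     # single constraint x1 - x2 = b

def F(b):
    res = linprog(c, A_eq=A_eq, b_eq=[b],
                  bounds=[(0, None), (0, None)], method="highs")
    return res.fun                       # optimal value, here |b|

b1, b2, lam = 2.0, -1.0, 0.4
lhs = F(lam * b1 + (1 - lam) * b2)       # F at the convex combination
rhs = lam * F(b1) + (1 - lam) * F(b2)    # combination of the values
print(f"F({lam * b1 + (1 - lam) * b2:.1f}) = {lhs:.1f} <= {rhs:.1f}")
```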

Now consider $b = b^* + \theta d$, $\theta \in R$, and let $f(\theta) = F(b^* + \theta d)$.
$f(\theta) = \max_{i = 1, \dots, N} p_i'(b^* + \theta d)$ for $b^* + \theta d \in S$: the maximum of affine functions of $\theta$.

[Figure 5.1: $f(\theta)$ is piecewise linear; the affine pieces $p_1'(b^* + \theta d)$, $p_2'(b^* + \theta d)$, $p_3'(b^* + \theta d)$ meet at breakpoints $\theta_1$ and $\theta_2$.]

5.4 Global dependence on $c$

Investigate the optimal cost as a function of $c$; assume the primal is feasible.
Let $Q(c) = \{p : p'A \le c'\}$ and $T = \{c \in R^n : Q(c) \text{ is nonempty}\}$.
$T$ is a convex set: if $c^1, c^2 \in T$, there exist $p^1, p^2$ with $p_1' A \le c_1'$ and $p_2' A \le c_2'$; then $(\lambda p^1 + (1 - \lambda) p^2)' A \le (\lambda c^1 + (1 - \lambda) c^2)'$ for $\lambda \in [0, 1]$ ⇒ $\lambda c^1 + (1 - \lambda) c^2 \in T$.

If $c \notin T$: the dual is infeasible while the primal is feasible ⇒ the primal is unbounded (optimal cost $-\infty$).
If $c \in T$: the optimal cost $G(c)$ is finite, and $G(c) = \min_{i = 1, \dots, N} c' x^i$, where $x^1, \dots, x^N$ are the b.f.s.'s of the primal.
⇒ $G(c)$ is piecewise linear concave on $T$.

If $x^i$ is the unique optimal solution for $c = c^*$ (that is, $c^{*\prime} x^i < c^{*\prime} x^j$ for all $j \ne i$), then $x^i$ remains optimal for $c$ near $c^*$, so $G(c) = c' x^i$ there and the gradient of $G(c)$ is $x^i$.
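Concavity of $G(c)$ can be illustrated the same way (again with an invented LP): with the single constraint $x_1 + x_2 = 1$, the b.f.s.'s are $(1, 0)$ and $(0, 1)$, so $G(c) = \min(c_1, c_2)$, a minimum of linear functions of $c$.

```python
from scipy.optimize import linprog

# Illustration (made-up data): G(c) = min{ c'x : Ax = b, x >= 0 } is concave in c,
# so G(lam*c1 + (1-lam)*c2) >= lam*G(c1) + (1-lam)*G(c2).
A_eq = [[1.0, 1.0]]                      # single constraint x1 + x2 = 1
b_eq = [1.0]

def G(c):
    res = linprog(c, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None), (0, None)], method="highs")
    return res.fun                       # optimal value, here min(c1, c2)

c1, c2, lam = [3.0, 1.0], [1.0, 4.0], 0.5
mid = [lam * a + (1 - lam) * b for a, b in zip(c1, c2)]
lhs = G(mid)                             # G at the convex combination
rhs = lam * G(c1) + (1 - lam) * G(c2)    # combination of the values
print(f"G(mid) = {lhs} >= {rhs}")
```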

In summary:

Thm 5.3: Consider a feasible LP in standard form.
(a) The set $T$ of all $c$ for which the optimal cost is finite is convex.
(b) The optimal cost $G(c)$ is a concave function of $c$ on the set $T$.
(c) If for some value of $c$ the primal problem has a unique optimal solution $x^*$, then $G$ is linear in the vicinity of $c$ and its gradient is equal to $x^*$.