Chapter 5. Sensitivity Analysis

Investigate the dependence of the optimal solution on changes in the problem data:
(1) the range of data variation for which the current basis remains optimal;
(2) reoptimization after the data change.

Linear Programming 2015
5.1 Local sensitivity analysis

The current basis is optimal if $B^{-1}b \ge 0$ and $c' - c_B' B^{-1} A \ge 0'$.

(a) A new variable is added:
  min $c'x + c_{n+1} x_{n+1}$
  s.t. $Ax + A_{n+1} x_{n+1} = b$, $x \ge 0$, $x_{n+1} \ge 0$.
$(x, x_{n+1}) = (x^*, 0)$ is a b.f.s.; check whether $\bar{c}_{n+1} = c_{n+1} - c_B' B^{-1} A_{n+1} \ge 0$.
If $\bar{c}_{n+1} \ge 0$, the current solution remains optimal. If $\bar{c}_{n+1} < 0$, add the new column to the tableau and reoptimize starting from the current basis $B$.
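The pricing step for a new column can be sketched as follows; this is a minimal sketch with hypothetical basis data (the matrix `B`, costs `c_B`, and candidate column are made up for illustration, not taken from the text):

```python
import numpy as np

def reduced_cost_new_column(c_new, A_new, B, c_B):
    """Reduced cost of a candidate column: c_new - c_B' B^{-1} A_new."""
    p = np.linalg.solve(B.T, c_B)     # dual vector p' = c_B' B^{-1}
    return float(c_new - p @ A_new)

# Hypothetical optimal basis of a standard-form LP
B = np.eye(2)
c_B = np.array([2.0, 3.0])

# Candidate column prices out nonnegative (6 - (2+3) = 1): basis stays optimal
print(reduced_cost_new_column(6.0, np.array([1.0, 1.0]), B, c_B))  # 1.0
```

If the returned reduced cost were negative, the column would enter the tableau and the primal simplex method would resume from $B$, as the slide states.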
(b) A new inequality $a_{m+1}' x \ge b_{m+1}$ is added.
If $a_{m+1}' x^* \ge b_{m+1}$, then $x^*$ is still optimal. Otherwise, introduce a surplus variable: $a_{m+1}' x - x_{n+1} = b_{m+1}$, $x_{n+1} \ge 0$. The new basis is
$\bar{B} = \begin{pmatrix} B & 0 \\ a' & -1 \end{pmatrix}$,
where $a$ consists of the components of $a_{m+1}$ corresponding to the basic variables, so that $\bar{B} \begin{pmatrix} x_B \\ x_{n+1} \end{pmatrix} = \begin{pmatrix} b \\ b_{m+1} \end{pmatrix}$.
The new basic solution is $(x^*, a_{m+1}' x^* - b_{m+1})$, which is primal infeasible (the last component is negative). Dual feasibility? The reduced costs are not changed:
$\bar{B}^{-1} = \begin{pmatrix} B^{-1} & 0 \\ a' B^{-1} & -1 \end{pmatrix}$,
$(c', 0) - (c_B', 0) \begin{pmatrix} B^{-1} & 0 \\ a' B^{-1} & -1 \end{pmatrix} \begin{pmatrix} A & 0 \\ a_{m+1}' & -1 \end{pmatrix} = (c' - c_B' B^{-1} A,\ 0) \ge 0$,
using $(c_B', 0)\,\bar{B}^{-1} = (c_B' B^{-1},\ 0)$.
To use the dual simplex method, the constraints in the current tableau are
$\bar{B}^{-1} \begin{pmatrix} A & 0 \\ a_{m+1}' & -1 \end{pmatrix} = \begin{pmatrix} B^{-1} & 0 \\ a' B^{-1} & -1 \end{pmatrix} \begin{pmatrix} A & 0 \\ a_{m+1}' & -1 \end{pmatrix} = \begin{pmatrix} B^{-1} A & 0 \\ a' B^{-1} A - a_{m+1}' & 1 \end{pmatrix}$.
Alternatively, we perform elementary row operations on the tableau to make the coefficients of the basic variables in the added constraint zero (after making the coefficient of $x_{n+1}$ equal to 1 by multiplying both sides by $-1$). (See Example 5.2.)
Note: the dual vector $(p', p_{m+1})$ can also be obtained from $(p', p_{m+1})\,\bar{B} = (c_B', 0)$:
$(p', p_{m+1}) \begin{pmatrix} B & 0 \\ a' & -1 \end{pmatrix} = (c_B', 0)$ gives $p'B + p_{m+1} a' = c_B'$ and $-p_{m+1} = 0$,
hence $(p, p_{m+1}) = (p^*, 0)$. The dual variable for the added constraint is 0, the original dual variable values are not changed, and there is no change in the reduced costs.
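The bordered-basis identities above are easy to verify numerically. This sketch uses hypothetical data (a made-up basis $B$, costs $c_B$, and basic components $a$ of the new row) and checks both the claimed form of $\bar{B}^{-1}$ and that the added constraint receives dual value 0:

```python
import numpy as np

# Hypothetical data: optimal basis B and the basic components a of a_{m+1}
B = np.array([[2.0, 1.0], [1.0, 3.0]])
c_B = np.array([1.0, 2.0])
a = np.array([4.0, 5.0])

# Bordered basis Bbar = [[B, 0], [a', -1]] and its claimed inverse
m = B.shape[0]
Bbar = np.zeros((m + 1, m + 1))
Bbar[:m, :m] = B
Bbar[m, :m] = a
Bbar[m, m] = -1.0

Binv = np.linalg.inv(B)
Bbar_inv = np.zeros((m + 1, m + 1))
Bbar_inv[:m, :m] = Binv
Bbar_inv[m, :m] = a @ Binv
Bbar_inv[m, m] = -1.0
print(np.allclose(Bbar @ Bbar_inv, np.eye(m + 1)))   # True

# Dual vector of the enlarged problem: (p', p_{m+1}) Bbar = (c_B', 0)
p_ext = np.linalg.solve(Bbar.T, np.append(c_B, 0.0))
print(abs(p_ext[-1]) < 1e-12)   # True: the added constraint has dual value 0
```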
(c) A new equality $a_{m+1}' x = b_{m+1}$ is added (violated by $x^*$).
$(p^*, 0)$ is dual feasible, but we may not have a primal basic solution. Instead of finding a new $\bar{B}$, solve (assuming $a_{m+1}' x^* > b_{m+1}$)
  min $c'x + M x_{n+1}$
  s.t. $Ax = b$, $a_{m+1}' x - x_{n+1} = b_{m+1}$, $x \ge 0$, $x_{n+1} \ge 0$.
Add $x_{n+1}$ to the basis (as in (b)), obtain a primal b.f.s., and use the primal simplex method.
Remark: see 'Linear Programming', V. Chvátal, Freeman, 1983, Chapter 10 (Sensitivity Analysis), for reoptimization approaches for LP problems with bounded variables.
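A tiny big-M instance of this construction can be solved with an off-the-shelf LP solver. All the data here are hypothetical: the original LP min $x_1 + 2x_2$ s.t. $x_1 + x_2 = 2$, $x \ge 0$ has optimum $x^* = (2, 0)$, and the added equality $x_1 = 1$ is violated ($a_{m+1}'x^* = 2 > 1$), so we solve the big-M problem with artificial $x_3$:

```python
from scipy.optimize import linprog

# Big-M reformulation: min x1 + 2*x2 + M*x3
#   s.t. x1 + x2 = 2,  x1 - x3 = 1,  x >= 0.
M = 1e6
res = linprog(c=[1.0, 2.0, M],
              A_eq=[[1.0, 1.0, 0.0], [1.0, 0.0, -1.0]],
              b_eq=[2.0, 1.0],
              bounds=[(0, None)] * 3)
print(res.x)   # the artificial x3 is driven to 0; new optimum is (1, 1)
```

Since $M$ is large, any solution with $x_3 > 0$ costs more than $3 + (M-1)x_3$, so the solver returns $x = (1, 1, 0)$ with cost 3.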
(d) Changes in $b$: $b \to b + \delta e_i$.
No changes in the reduced costs, but we need $B^{-1}(b + \delta e_i) \ge 0$.
Let $g$ be the $i$-th column of $B^{-1}$. Then $B^{-1}(b + \delta e_i) = x_B + \delta g \ge 0$; find the range of $\delta$. If $\delta$ is out of this range, use the dual simplex method to reoptimize.
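The allowed range of $\delta$ follows from the componentwise inequalities $x_{B,k} + \delta g_k \ge 0$. A minimal sketch, with a hypothetical $B^{-1}$ and basic solution:

```python
import numpy as np

def rhs_range(Binv, x_B, i):
    """Range of delta keeping x_B + delta*g >= 0, g the i-th column of B^{-1}."""
    g = Binv[:, i]
    lo, hi = -np.inf, np.inf
    for xk, gk in zip(x_B, g):
        if gk > 0:
            lo = max(lo, -xk / gk)    # delta >= -xk/gk
        elif gk < 0:
            hi = min(hi, -xk / gk)    # delta <= -xk/gk
    return float(lo), float(hi)

# Hypothetical data: g = (0.5, -0.25) for i = 0
Binv = np.array([[0.5, 0.0], [-0.25, 0.5]])
x_B = np.array([2.0, 1.0])
print(rhs_range(Binv, x_B, 0))   # (-4.0, 4.0)
```

Outside this interval some basic variable goes negative, which is exactly the situation the dual simplex method handles.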
(e) Changes in $c$.
(e-1) $x_j$ nonbasic: $c_j \to c_j + \delta$. Primal feasibility is not affected; optimality requires $c_B' B^{-1} A_j \le c_j + \delta$, i.e., $\delta \ge -\bar{c}_j$.
(e-2) $x_j$ basic (suppose $j = B(l)$): $c_B \to c_B + \delta e_l$. The optimality condition is
$(c_B + \delta e_l)' B^{-1} A_i \le c_i$, $\forall\, i \ne j$,
i.e., $c_B' B^{-1} A_i + \delta e_l' B^{-1} A_i \le c_i$, so $\delta g_{li} \le c_i - c_B' B^{-1} A_i = \bar{c}_i$, where $g_{li}$ is the $l$-th entry of $B^{-1} A_i$.
Note that $g_{li} = 0$ for every basic variable except $x_j$; hence we only need to check the range over the nonbasic $x_i$'s.
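Case (e-2) can be sketched directly from the inequalities $\delta\, g_{li} \le \bar{c}_i$ over the nonbasic columns. The tableau data below (basis inverse, constraint matrix, reduced costs) are hypothetical:

```python
import numpy as np

def cost_range_basic(Binv, A, cbar, nonbasic, l):
    """Allowed delta for c_{B(l)} -> c_{B(l)} + delta: delta*g_{li} <= cbar_i."""
    lo, hi = -np.inf, np.inf
    for i in nonbasic:
        g_li = (Binv @ A[:, i])[l]
        if g_li > 0:
            hi = min(hi, cbar[i] / g_li)
        elif g_li < 0:
            lo = max(lo, cbar[i] / g_li)
    return float(lo), float(hi)

# Hypothetical data: identity basis inverse, columns 2 and 3 nonbasic
Binv = np.eye(2)
A = np.array([[1.0, 0.0, 1.0, -1.0],
              [0.0, 1.0, 2.0,  1.0]])
cbar = np.array([0.0, 0.0, 3.0, 2.0])   # reduced costs (0 for basic vars)
print(cost_range_basic(Binv, A, cbar, nonbasic=[2, 3], l=0))   # (-2.0, 3.0)
```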
(f) Changes in a nonbasic column $A_j$: $a_{ij} \to a_{ij} + \delta$. Optimality requires
$c_j - p'(A_j + \delta e_i) \ge 0$, i.e., $\bar{c}_j - \delta p_i \ge 0$.
We may also think of $\delta$ as remaining in the coefficient of $x_j$ in the 0-th row under the optimal basis $B$; pivoting to make this coefficient 0 then affects the 0-th-row coefficients of the nonbasic variables, and we need the range of $\delta$ for which those coefficients remain nonnegative.
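The condition $\bar{c}_j - \delta p_i \ge 0$ gives a one-sided range depending on the sign of $p_i$; a minimal sketch with hypothetical values:

```python
def column_range(cbar_j, p_i):
    """Range of delta for a_{ij} -> a_{ij} + delta with x_j nonbasic:
    keep cbar_j - delta * p_i >= 0."""
    if p_i > 0:
        return (-float("inf"), cbar_j / p_i)
    if p_i < 0:
        return (cbar_j / p_i, float("inf"))
    return (-float("inf"), float("inf"))     # p_i = 0: any delta is fine

print(column_range(3.0, 2.0))   # delta <= 1.5 keeps the basis optimal
```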
5.2 Global dependence on $b$

Investigate the change of the optimal value as a function of $b$.
Let $P(b) = \{x \in R^n : Ax = b,\ x \ge 0\}$ and $S = \{b \in R^m : P(b) \text{ is nonempty}\} = \{Ax : x \ge 0\}$ (convex).
Define $F(b) = \min_{x \in P(b)} c'x$ (called the value function).
Assume the dual feasible set $\{p : p'A \le c'\}$ is nonempty; then $F(b)$ is finite for all $b \in S$.
Suppose that at $b^* \in S$ there exists a nondegenerate optimal solution to the primal ($x_B = B^{-1}b$). By the nondegeneracy assumption, the current basis $B$ remains optimal for small changes in $b$, so
$F(b) = c_B' B^{-1} b = p'b$ for $b$ close to $b^*$:
$F(b)$ is a linear function of $b$ near $b^*$, and its gradient is $p$.
Thm 5.1: $F(b)$ is convex on $S$.
pf) Let $b^1, b^2 \in S$ with $F(b^1) = c'x^1$, $F(b^2) = c'x^2$. For $y = \lambda x^1 + (1-\lambda) x^2$, $\lambda \in [0,1]$, we have $Ay = \lambda b^1 + (1-\lambda) b^2$, so $y$ is a feasible solution when $b$ is $\lambda b^1 + (1-\lambda) b^2$. Hence
$F(\lambda b^1 + (1-\lambda) b^2) \le c'y = \lambda c'x^1 + (1-\lambda) c'x^2 = \lambda F(b^1) + (1-\lambda) F(b^2)$.
A different argument uses the dual problem max $p'b$, $p'A \le c'$, under the assumption that it is feasible. Then strong duality holds for all $b \in S$, so $F(b) = (p^i)'b$ for some extreme point $p^i$ of the dual feasible set ($A$ has full row rank, hence the dual feasible set has an extreme point if it is nonempty). Therefore
$F(b) = \max_{i=1,\dots,N} (p^i)'b$, $b \in S$:
a maximum of linear functions, hence piecewise linear convex.
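The dual-based representation $F(b) = \max_i (p^i)'b$ makes convexity easy to check numerically. The dual extreme points below are hypothetical, chosen only to illustrate the midpoint inequality:

```python
import numpy as np

# Hypothetical dual extreme points p^i (one per row)
P = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]])

def F(b):
    """Value function as a maximum of linear functions of b."""
    return float(np.max(P @ b))

b1, b2 = np.array([2.0, 0.0]), np.array([0.0, 2.0])
lam = 0.5
# Convexity at the midpoint: F(lam*b1+(1-lam)*b2) <= lam*F(b1)+(1-lam)*F(b2)
print(F(lam * b1 + (1 - lam) * b2) <= lam * F(b1) + (1 - lam) * F(b2))  # True
```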
Now consider $b = b^* + \theta d$, $\theta \in R$, and let $f(\theta) = F(b^* + \theta d)$. Then
$f(\theta) = \max_{i=1,\dots,N} (p^i)'(b^* + \theta d)$, for $b^* + \theta d \in S$:
a maximum of affine functions of $\theta$.
[Figure 5.1: $f(\theta)$ as the upper envelope of the affine functions $(p^1)'(b^* + \theta d)$, $(p^2)'(b^* + \theta d)$, $(p^3)'(b^* + \theta d)$, with breakpoints at $\theta_1$ and $\theta_2$.]
5.4 Global dependence on $c$

Optimal cost variation depending on $c$. Assume the primal is feasible.
Let $Q(c) = \{p : p'A \le c'\}$ and $T = \{c \in R^n : Q(c) \text{ is nonempty}\}$.
$T$ is a convex set: if $c^1, c^2 \in T$, there exist $p^1, p^2$ such that $(p^1)'A \le (c^1)'$ and $(p^2)'A \le (c^2)'$, so $(\lambda p^1 + (1-\lambda) p^2)'A \le (\lambda c^1 + (1-\lambda) c^2)'$ for $\lambda \in [0,1]$, hence $\lambda c^1 + (1-\lambda) c^2 \in T$.
If $c \notin T$, the dual is infeasible while the primal is feasible, so the primal is unbounded ($-\infty$). If $c \in T$, the optimal cost $G(c)$ is finite and
$G(c) = \min_{i=1,\dots,N} c'x^i$ ($x^i$: the b.f.s. of the primal),
so $G(c)$ is piecewise linear concave on $T$.
If $x^i$ is the unique optimal solution when $c = c^*$, then $(c^*)'x^i < (c^*)'x^j$ for all $j \ne i$, so $x^i$ remains optimal for $c$ near $c^*$; then $G(c) = c'x^i$ and the gradient of $G(c)$ is $x^i$.
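Mirroring the sketch for $F(b)$, concavity of $G(c) = \min_i c'x^i$ can be checked numerically. The basic feasible solutions below are hypothetical, chosen only to illustrate the midpoint inequality:

```python
import numpy as np

# Hypothetical basic feasible solutions x^i (one per row)
X = np.array([[2.0, 0.0],
              [0.0, 2.0],
              [1.0, 1.0]])

def G(c):
    """Optimal cost as a minimum of linear functions of c."""
    return float(np.min(X @ c))

c1, c2 = np.array([1.0, 3.0]), np.array([3.0, 1.0])
lam = 0.5
# Concavity at the midpoint: G(lam*c1+(1-lam)*c2) >= lam*G(c1)+(1-lam)*G(c2)
print(G(lam * c1 + (1 - lam) * c2) >= lam * G(c1) + (1 - lam) * G(c2))  # True
```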
In summary,
Thm 5.3: Consider a feasible LP in standard form.
(a) The set $T$ of all $c$ for which the optimal cost is finite is convex.
(b) The optimal cost $G(c)$ is a concave function of $c$ on the set $T$.
(c) If for some value of $c$ the primal problem has a unique optimal solution $x^*$, then $G$ is linear in the vicinity of $c$ and its gradient is equal to $x^*$.