Chapter 5. Sensitivity Analysis


1 Chapter 5. Sensitivity Analysis
Investigate the dependence of the optimal solution on changes in the problem data:
(1) the range of data variation for which the current basis remains optimal;
(2) reoptimization after the data change.
Linear Programming 2015

2 5.1 Local sensitivity analysis
The current basis is optimal if $B^{-1}b \ge 0$ and $c' - c_B'B^{-1}A \ge 0'$.
(a) A new variable is added:
$$\min\; c'x + c_{n+1}x_{n+1} \quad \text{s.t.}\;\; Ax + A_{n+1}x_{n+1} = b,\;\; x \ge 0,\; x_{n+1} \ge 0$$
$(x, x_{n+1}) = (x^*, 0)$ is a b.f.s.; check whether $\bar{c}_{n+1} = c_{n+1} - c_B'B^{-1}A_{n+1} \ge 0$.
If $\bar{c}_{n+1} \ge 0$, the current solution is optimal. If $\bar{c}_{n+1} < 0$, add the new column to the tableau and reoptimize starting from the current basis $B$.
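As a quick sketch (the instance, basis, and numbers below are made up for illustration, not taken from the text), the reduced-cost check for a new variable can be carried out with NumPy:

```python
import numpy as np

# Illustrative instance (an assumption, not from the text):
# min -x1 - 2x2  s.t.  x1 + x2 + s1 = 3,  x2 + s2 = 2,  x >= 0
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
b = np.array([3.0, 2.0])
c = np.array([-1.0, -2.0, 0.0, 0.0])

basis = [0, 1]                       # optimal basis: x1, x2
B_inv = np.linalg.inv(A[:, basis])
x_B = B_inv @ b                      # = [1, 2], the optimal basic solution
p = c[basis] @ B_inv                 # dual vector p' = c_B' B^{-1} = [-1, -1]

def reduced_cost_new_var(c_new, A_new):
    """c_{n+1} - c_B' B^{-1} A_{n+1}; nonnegative keeps the basis optimal."""
    return c_new - p @ A_new

A_new = np.array([1.0, 1.0])         # column of the candidate variable
print(reduced_cost_new_var(-1.0, A_new))   # 1.0  -> current solution stays optimal
print(reduced_cost_new_var(-3.0, A_new))   # -1.0 -> reoptimize from basis B
```

The dual vector is computed once from the optimal basis; each candidate column then costs only one inner product to screen.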

3 (b) A new inequality $a_{m+1}'x \ge b_{m+1}$ is added
If $a_{m+1}'x^* \ge b_{m+1}$, then $x^*$ is still optimal. Otherwise, let $a_{m+1}'x - x_{n+1} = b_{m+1}$, $x_{n+1} \ge 0$.
New basis $\bar{B} = \begin{pmatrix} B & 0 \\ a' & -1 \end{pmatrix}$, where $a'$ collects the entries of $a_{m+1}$ for the basic variables $\left(\bar{B}\begin{pmatrix} x_B \\ x_{n+1} \end{pmatrix} = \begin{pmatrix} b \\ b_{m+1} \end{pmatrix}\right)$.
The new basic solution is $(x^*,\; a_{m+1}'x^* - b_{m+1})$, which is primal infeasible. Dual feasibility? (The reduced costs are not changed.)
$$\bar{B}^{-1} = \begin{pmatrix} B^{-1} & 0 \\ a'B^{-1} & -1 \end{pmatrix} \;\Rightarrow\; (c', 0) - (c_B', 0)\begin{pmatrix} B^{-1} & 0 \\ a'B^{-1} & -1 \end{pmatrix}\begin{pmatrix} A & 0 \\ a_{m+1}' & -1 \end{pmatrix} = \left(c' - c_B'B^{-1}A,\; 0\right) \ge 0$$
since $(c_B', 0)\,\bar{B}^{-1} = (c_B'B^{-1},\; 0)$.

4 Use the dual simplex method; the constraints in the current tableau are
$$\bar{B}^{-1}\begin{pmatrix} A & 0 \\ a_{m+1}' & -1 \end{pmatrix} = \begin{pmatrix} B^{-1} & 0 \\ a'B^{-1} & -1 \end{pmatrix}\begin{pmatrix} A & 0 \\ a_{m+1}' & -1 \end{pmatrix} = \begin{pmatrix} B^{-1}A & 0 \\ a'B^{-1}A - a_{m+1}' & 1 \end{pmatrix}$$
Equivalently, perform elementary row operations on the tableau so that the coefficients of the basic variables in the added constraint become 0 (after making the coefficient of $x_{n+1}$ equal to 1 by multiplying both sides by $-1$). (See Example 5.2.)
Note: the dual vector $(p', p_{m+1})$ can also be obtained as follows:
$$(p', p_{m+1})\,\bar{B} = (c_B', 0) \;\Rightarrow\; p'B + p_{m+1}a' = c_B',\;\; -p_{m+1} = 0 \;\Rightarrow\; \begin{pmatrix} p \\ p_{m+1} \end{pmatrix} = \begin{pmatrix} p^* \\ 0 \end{pmatrix}$$
Hence the dual variable for the added constraint is 0 and the original dual variable values are unchanged $\Rightarrow$ no change in the reduced costs.

5 Add ๐‘Ž ๐‘š+1 โ€ฒ ๐‘ฅ= ๐‘ ๐‘š+1 ( violated by ๐‘ฅ โˆ— )
(c) new equality added. Add ๐‘Ž ๐‘š+1 โ€ฒ ๐‘ฅ= ๐‘ ๐‘š+1 ( violated by ๐‘ฅ โˆ— ) ๐‘ โˆ— 0 dual feasible, but may not have a primal basic solution. Instead of finding new ๐ต , solve ( assumning ๐‘Ž ๐‘š+1 โ€ฒ ๐‘ฅ โˆ— > ๐‘ ๐‘š+1 ) min ๐‘ โ€ฒ ๐‘ฅ+๐‘€ ๐‘ฅ ๐‘›+1 ๐ด๐‘ฅ =๐‘ ๐‘Ž ๐‘š+1 โ€ฒ ๐‘ฅโˆ’ ๐‘ฅ ๐‘›+1 = ๐‘ ๐‘š+1 ๐‘ฅโ‰ฅ0, ๐‘ฅ ๐‘›+1 โ‰ฅ0 Add ๐‘ฅ ๐‘›+1 to basis (same as (b)), get primal b.f.s and use primal simplex Remark : See โ€˜Linear Programmingโ€™, V. Chvatal, Freeman, for reoptimization approaches for bounded variable LP problem (Chapter 10. Sensitivity Analysis). Linear Programming 2015
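A minimal sketch of this big-$M$ reoptimization, assuming SciPy's `linprog` is available; the instance and the added equality below are made up for illustration:

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative instance (an assumption):
# min -x1 - 2x2  s.t.  x1 + x2 <= 3,  x2 <= 2;  optimum x* = (1, 2).
# Add the equality x2 = 1, violated by x* with a'x* = 2 > 1, so write
# x2 - x5 = 1 with artificial x5 >= 0 and penalty M in the objective.
M = 1e4
c = [-1.0, -2.0, M]                      # variables (x1, x2, x5)
A_ub = [[1.0, 1.0, 0.0], [0.0, 1.0, 0.0]]
b_ub = [3.0, 2.0]
A_eq = [[0.0, 1.0, -1.0]]
b_eq = [1.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, method="highs")
print(res.x)      # approximately [2, 1, 0]: x5 driven to 0, new optimum (2, 1)
print(res.fun)    # approximately -4.0
```

With a sufficiently large $M$, the artificial variable is priced out of the basis and the solver lands on the reoptimized solution directly.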

6 (d) Changes in $b$: $b \to b + \delta e_i$
No changes in the reduced costs, but we need $B^{-1}b \to B^{-1}(b + \delta e_i) \ge 0$.
Let $g$ be the $i$-th column of $B^{-1}$. Then $B^{-1}(b + \delta e_i) = x_B + \delta g \ge 0$; find the range of $\delta$.
If $\delta$ falls outside this range, use the dual simplex method to reoptimize.
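The admissible range of $\delta$ follows from a ratio test on $x_B + \delta g \ge 0$. A sketch, with an illustrative (assumed) basis inverse and basic solution:

```python
import numpy as np

# Illustrative data (assumptions): B^{-1} and x_B = B^{-1} b >= 0
B_inv = np.array([[1.0, -1.0],
                  [0.0,  1.0]])
x_B = np.array([1.0, 2.0])

def delta_range_for_b(i):
    """Range of delta keeping x_B + delta * g >= 0, g = i-th column of B^{-1}."""
    g = B_inv[:, i]
    lo = max((-x_B[k] / g[k] for k in range(len(g)) if g[k] > 0),
             default=-np.inf)
    hi = min((-x_B[k] / g[k] for k in range(len(g)) if g[k] < 0),
             default=np.inf)
    return lo, hi

print(delta_range_for_b(0))   # (-1.0, inf): b1 may decrease by at most 1
print(delta_range_for_b(1))   # (-2.0, 1.0)
```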

7 (e) Changes in $c$
(e-1) $x_j$ nonbasic, $c_j \to c_j + \delta$: primal feasibility is not affected.
Need $c_B'B^{-1}A_j \le c_j + \delta \;\Rightarrow\; \delta \ge -\bar{c}_j$.
(e-2) $x_j$ basic (suppose $j = B(l)$), $c_B \to c_B + \delta e_l$. Optimality condition:
$$(c_B + \delta e_l)'B^{-1}A_i \le c_i,\;\; \forall\, i \ne j \;\Rightarrow\; c_B'B^{-1}A_i + \delta e_l'B^{-1}A_i \le c_i \;\Rightarrow\; \delta g_{li} \le c_i - c_B'B^{-1}A_i = \bar{c}_i$$
($g_{li}$ = $l$-th entry of $B^{-1}A_i$.)
Note that $g_{li} = 0$ for every basic variable $i \ne j$ (each $B^{-1}A_i$ is a unit vector), hence we only need to check the range over the nonbasic $x_i$'s.
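A sketch of the ratio test for (e-2), on an illustrative instance (all numbers below are assumptions, not from the text):

```python
import numpy as np

# Illustrative instance (an assumption):
# min -x1 - 2x2  s.t.  x1 + x2 + s1 = 3,  x2 + s2 = 2,  optimal basis {x1, x2}
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
c = np.array([-1.0, -2.0, 0.0, 0.0])
basis, nonbasic = [0, 1], [2, 3]
B_inv = np.linalg.inv(A[:, basis])
c_bar = c - c[basis] @ B_inv @ A          # reduced costs (0 on basic columns)

def delta_range_basic(l):
    """Range of delta for c_{B(l)} -> c_{B(l)} + delta: need delta * g_li <= c̄_i
    for every nonbasic i, where g_li is the l-th entry of B^{-1} A_i."""
    lo, hi = -np.inf, np.inf
    for i in nonbasic:
        g_li = (B_inv @ A[:, i])[l]
        if g_li > 0:
            hi = min(hi, c_bar[i] / g_li)
        elif g_li < 0:
            lo = max(lo, c_bar[i] / g_li)
    return lo, hi

print(delta_range_basic(0))   # (-1.0, 1.0): allowed change for the cost of x1
```

As the slide notes, basic columns other than $j$ contribute nothing to the test, so the loop runs over nonbasic indices only.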

8 (f) Changes in a nonbasic column $A_j$: $a_{ij} \to a_{ij} + \delta$
Dual feasibility requires $c_j - p'(A_j + \delta e_i) \ge 0 \;\Rightarrow\; \bar{c}_j - \delta p_i \ge 0$.
Alternatively, we may think of $\delta$ as remaining in the coefficient of $x_j$ in the 0-th row under the optimal basis $B$. We need to pivot to make that coefficient 0, which changes the coefficients of the nonbasic variables in the 0-th row; we then need the range of $\delta$ for which those coefficients stay nonnegative.
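A minimal sketch of this range computation, with an assumed dual vector and reduced cost (illustrative values, not from the text):

```python
import numpy as np

# Illustrative data (assumptions): optimal dual vector p and the reduced cost
# c̄_j = 1 of a nonbasic variable x_j.
p = np.array([-1.0, -1.0])
c_bar_j = 1.0

def delta_range_column(i):
    """Range of delta for a_{ij} -> a_{ij} + delta (x_j nonbasic):
    need c̄_j - delta * p_i >= 0."""
    if p[i] > 0:
        return -np.inf, c_bar_j / p[i]
    if p[i] < 0:
        return c_bar_j / p[i], np.inf
    return -np.inf, np.inf           # p_i = 0: any delta keeps optimality

print(delta_range_column(0))         # (-1.0, inf) since p_0 = -1
```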

9 5.2 Global dependence on $b$
Investigate the change of the optimal value as a function of $b$.
Let $P(b) = \{x \in R^n : Ax = b,\; x \ge 0\}$ and $S = \{b \in R^m : P(b) \text{ is nonempty}\} = \{Ax : x \ge 0\}$ (convex).
Define $F(b) = \min_{x \in P(b)} c'x$ (called the value function).
Assume the dual feasible set $\{p : p'A \le c'\}$ is nonempty $\Rightarrow$ $F(b)$ is finite for all $b \in S$.
Suppose that at $b^* \in S$ there exists a nondegenerate optimal basic solution to the primal ($x_B = B^{-1}b$).
By the nondegeneracy assumption, the current basis $B$ remains optimal for small changes in $b$:
$$F(b) = c_B'B^{-1}b = p'b \quad \text{for } b \text{ close to } b^*$$
so $F(b)$ is a linear function of $b$ near $b^*$ and its gradient is $p$.

10 pf) Let ๐‘ 1 , ๐‘ 2 โˆˆ๐‘†. ๐น ๐‘ 1 =๐‘โ€ฒ ๐‘ฅ 1 , ๐น ๐‘ 2 =๐‘โ€ฒ ๐‘ฅ 2 .
Thm 5.1 : ๐น(๐‘) is convex on ๐‘†. pf) Let ๐‘ 1 , ๐‘ 2 โˆˆ๐‘†. ๐น ๐‘ 1 =๐‘โ€ฒ ๐‘ฅ 1 , ๐น ๐‘ 2 =๐‘โ€ฒ ๐‘ฅ 2 . For ๐‘ฆ=๐œ† ๐‘ฅ 1 + 1โˆ’๐œ† ๐‘ฅ 2 , ๐œ†โˆˆ 0,1 , have ๐ด๐‘ฆ=๐œ† ๐‘ 1 + 1โˆ’๐œ† ๐‘ 2 ๐‘ฆ feasible solution when ๐‘ is ๐œ† ๐‘ 1 + 1โˆ’๐œ† ๐‘ 2 ๏ƒž ๐น ๐œ† ๐‘ 1 + 1โˆ’๐œ† ๐‘ 2 โ‰ค ๐‘ โ€ฒ ๐‘ฆ=๐œ†๐‘โ€ฒ ๐‘ฅ 1 + 1โˆ’๐œ† ๐‘โ€ฒ ๐‘ฅ 2 =๐œ†๐น ๐‘ โˆ’๐œ† ๐น( ๐‘ 2 ) ๏‚„ Different reasoning using dual problem max ๐‘ โ€ฒ ๐‘, ๐‘ โ€ฒ ๐ดโ‰ค๐‘โ€ฒ with the assumption that dual feasibility holds. Then, strong duality holds for all ๐‘โˆˆ๐‘†. Hence ๐น ๐‘ = ๐‘ ๐‘– โ€ฒ ๐‘ for some extreme point ๐‘ ๐‘– in dual. ( ๐ด is full row rank, hence dual has extreme point if feasible) ๏ƒž ๐น(๐‘)= max ๐‘–=1,โ€ฆ,๐‘ ๐‘ ๐‘– โ€ฒ ๐‘ , ๐‘โˆˆ๐‘† max of linear functions ๏ƒž piecewise linear convex. Linear Programming 2015
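The convexity inequality can be checked numerically at sampled points, assuming SciPy is available (the instance below is illustrative, not from the text):

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative value function (an assumption):
# F(b) = min -x1 - 2x2  s.t.  x1 + x2 <= b1,  x2 <= b2,  x >= 0
def F(b):
    res = linprog([-1.0, -2.0], A_ub=[[1.0, 1.0], [0.0, 1.0]], b_ub=b,
                  method="highs")
    return res.fun

b_a = np.array([3.0, 2.0])
b_b = np.array([1.0, 2.0])
for lam in (0.25, 0.5, 0.75):
    mid = F(lam * b_a + (1 - lam) * b_b)
    chord = lam * F(b_a) + (1 - lam) * F(b_b)
    # convexity: F(lam*b1 + (1-lam)*b2) <= lam*F(b1) + (1-lam)*F(b2)
    assert mid <= chord + 1e-9
print("convexity inequality holds at the sampled points")
```

This is only a spot check at a few $\lambda$ values, not a proof; the proof is the argument above.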

11 Now consider $b = b^* + \theta d$, $\theta \in R$, and let $f(\theta) = F(b^* + \theta d)$.
$$f(\theta) = \max_{i=1,\dots,N} p_i'(b^* + \theta d), \quad b^* + \theta d \in S$$
a maximum of affine functions of $\theta$, hence piecewise linear convex in $\theta$.
[Figure 5.1: $f(\theta)$ drawn as the upper envelope of the affine functions $p_1'(b^* + \theta d)$, $p_2'(b^* + \theta d)$, $p_3'(b^* + \theta d)$, with breakpoints at $\theta_1$ and $\theta_2$.]

12 5.4 Global dependence on $c$
Optimal cost variation depending on $c$. Assume the primal is feasible.
Let $Q(c) = \{p : p'A \le c'\}$ and $T = \{c \in R^n : Q(c) \text{ is nonempty}\}$.
$T$ is a convex set. (If $c^1, c^2 \in T$, there exist $p^1, p^2$ such that $(p^1)'A \le (c^1)'$ and $(p^2)'A \le (c^2)'$. Then $(\lambda p^1 + (1-\lambda)p^2)'A \le (\lambda c^1 + (1-\lambda)c^2)'$ for $\lambda \in [0,1]$, so $\lambda c^1 + (1-\lambda)c^2 \in T$.)
If $c \notin T$: the dual is infeasible and the primal is feasible $\Rightarrow$ the primal is unbounded ($-\infty$).
If $c \in T$: finite optimal cost $G(c)$, with $G(c) = \min_{i=1,\dots,N} c'x^i$ ($x^i$: b.f.s. of the primal) $\Rightarrow$ $G(c)$ is piecewise linear concave on $T$.
If $x^i$ is the unique optimal solution when $c = c^*$, then $(c^*)'x^i < (c^*)'x^j$ for all $j \ne i$, so $x^i$ is still optimal near $c^*$, $G(c) = c'x^i$, and the gradient of $G(c)$ is $x^i$.
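Concavity of $G(c)$ can be spot-checked the same way at sampled points, assuming SciPy is available (the instance below is illustrative, not from the text):

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative optimal-cost function (an assumption):
# G(c) = min c'x  s.t.  x1 + x2 <= 3,  x2 <= 2,  x >= 0
def G(c):
    res = linprog(c, A_ub=[[1.0, 1.0], [0.0, 1.0]], b_ub=[3.0, 2.0],
                  method="highs")
    return res.fun

c_a = np.array([-1.0, -2.0])
c_b = np.array([-2.0, -1.0])
for lam in (0.25, 0.5, 0.75):
    mid = G(lam * c_a + (1 - lam) * c_b)
    chord = lam * G(c_a) + (1 - lam) * G(c_b)
    # concavity: G(lam*c1 + (1-lam)*c2) >= lam*G(c1) + (1-lam)*G(c2)
    assert mid >= chord - 1e-9
print("concavity inequality holds at the sampled points")
```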

13 In summary, Thm 5.3: Consider a feasible LP in standard form.
(a) The set $T$ of all $c$ for which the optimal cost is finite is convex.
(b) The optimal cost $G(c)$ is a concave function of $c$ on the set $T$.
(c) If for some value of $c$ the primal problem has a unique optimal solution $x^*$, then $G$ is linear in the vicinity of $c$ and its gradient is equal to $x^*$.

