
1 The Simplex Method and Linear Programming Duality. Ashish Goel, Department of Management Science and Engineering, Stanford University, Stanford, CA 94305, U.S.A. http://www.stanford.edu/class/msande211/ (Based on slides by Yinyu Ye)

2 THE SIMPLEX METHOD

3 Basic and Basic Feasible Solution In the LP standard form, select m linearly independent columns of A, denoted by the variable index set B. Solve A_B x_B = b for the dimension-m vector x_B. By setting the variables x_N of x corresponding to the remaining columns of A equal to zero, we obtain a solution x such that Ax = b. Then x is said to be a basic solution to (LP) with respect to the basic variable set B. The variables in x_B are called basic variables, those in x_N are nonbasic variables, and A_B is called a basis. If a basic solution satisfies x_B ≥ 0, then x is called a basic feasible solution, or BFS. Note that A_B and x_B follow the same index order in B. Two BFS are adjacent if they differ by exactly one basic variable. A BFS is non-degenerate if x_B > 0.
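As an illustration of this definition (our addition, not part of the original slides), the following NumPy sketch extracts the basic solution for a chosen index set B and checks whether it is a BFS; the matrix, right-hand side, and basis in the example are made up for illustration.

```python
# A minimal sketch: basic solution for a chosen index set B in A x = b, x >= 0.
import numpy as np

def basic_solution(A, b, B):
    """Return the basic solution x with basis B, or None if A_B is singular."""
    m, n = A.shape
    A_B = A[:, B]
    if np.linalg.matrix_rank(A_B) < m:       # the chosen columns must be linearly independent
        return None
    x = np.zeros(n)
    x[B] = np.linalg.solve(A_B, b)           # solve A_B x_B = b; nonbasic variables stay 0
    return x

# Hypothetical example data: a 2x4 system with basis {0, 1} (0-indexed).
A = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
b = np.array([2.0, 3.0])
x = basic_solution(A, b, [0, 1])
print(x, "BFS" if (x is not None and np.all(x[[0, 1]] >= 0)) else "not a BFS")
```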

4 Simplex Method George B. Dantzig’s Simplex Method for linear programming stands as one of the most significant algorithmic achievements of the 20th century. It is now over 60 years old and still going strong. The basic idea of the simplex method is to confine the search to corner points of the feasible region (of which there are only finitely many) in a most intelligent way, so that the objective always improves. The key for the simplex method is to make computers see corner points; the key for interior-point methods is to stay in the interior of the feasible region. (Figure: a feasible region with axes x1 and x2.)

5 From Geometry to Algebra How to make a computer recognize a corner point? A BFS. How to make a computer terminate and declare optimality? How to make a computer identify a better neighboring corner?

6 Feasible Directions at a BFS and Optimality Test Non-degenerate BFS: A_B x_B + A_N x_N = b, with x_B > 0 and x_N = 0. Thus the feasible directions d are the ones that satisfy A_B d_B + A_N d_N = 0, d_N ≥ 0. For the BFS to be optimal, any feasible direction must be an ascent direction, that is, c^T d = c_B^T d_B + c_N^T d_N ≥ 0. From d_B = -(A_B)^{-1} A_N d_N, we must have, for all d_N ≥ 0, c^T d = -c_B^T (A_B)^{-1} A_N d_N + c_N^T d_N = (c_N^T - c_B^T (A_B)^{-1} A_N) d_N ≥ 0. Thus, c_N^T - c_B^T (A_B)^{-1} A_N ≥ 0 is necessary and sufficient. This vector is called the reduced cost vector for the nonbasic variables.

7 Computing the Reduced Cost Vector We compute the shadow prices y^T = c_B^T (A_B)^{-1}, i.e., we solve the system of linear equations y^T A_B = c_B^T. Then we compute r^T = c^T - y^T A, where r_N is the reduced cost vector for the nonbasic variables (and r_B = 0 always). If a component of r_N is negative, then an improving feasible direction is found by increasing the corresponding nonbasic variable. Increase along this direction until one of the basic variables becomes 0 and hence nonbasic; we are left with m basic variables again. The process always converges and produces an optimal solution if one exists (special care is needed for an unbounded optimum and when two basic variables become 0 at the same time).
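The pricing computation just described can be sketched in a few lines of NumPy. This is an illustrative sketch, not the course's implementation; the function name and tolerance are our own choices, and the example data anticipate the production LP shown on the next two slides, in standard form with the slack basis {3, 4, 5} (the second right-hand side is reconstructed as 1).

```python
# A sketch of the simplex pricing step: shadow prices, reduced costs, entering index.
import numpy as np

def pricing_step(A, c, B):
    """Return shadow prices y, reduced costs r, and an entering index (None if optimal)."""
    A_B = A[:, B]
    y = np.linalg.solve(A_B.T, c[B])          # y^T A_B = c_B^T  (shadow prices)
    r = c - A.T @ y                           # r^T = c^T - y^T A; r_B = 0 automatically
    N = [j for j in range(A.shape[1]) if j not in B]
    entering = min((j for j in N if r[j] < -1e-9), default=None, key=lambda j: r[j])
    return y, r, entering                     # entering is None => current BFS is optimal

# Production example in standard form (second RHS reconstructed as 1); 0-indexed columns.
c = np.array([-1.0, -2.0, 0.0, 0.0, 0.0])
A = np.array([[1.0, 0.0, 1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 1.0, 0.0],
              [1.0, 1.0, 0.0, 0.0, 1.0]])
y, r, j = pricing_step(A, c, [2, 3, 4])       # slack basis {x3, x4, x5}
print(y, r, j)                                # entering index 1 (x2): most negative reduced cost
```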

8 In the LP production example, suppose the basic variable set is B = {1, 2, 3}.
min  −x1 − 2x2
s.t.  x1        + x3           = 1
           x2        + x4      = 1
      x1 + x2             + x5 = 1.5
      x1, x2, x3, x4, x5 ≥ 0.

9 In the LP production example, suppose the basic variable set is B = {3, 4, 5}.
min  −x1 − 2x2
s.t.  x1        + x3           = 1
           x2        + x4      = 1
      x1 + x2             + x5 = 1.5
      x1, x2, x3, x4, x5 ≥ 0.
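For reference, here is a sketch (our addition) that hands the same reconstructed production LP to SciPy's linprog; the solver stands in for working through the simplex iterations by hand, and the comments note the BFS associated with each of the two bases mentioned above.

```python
# Solve the (reconstructed) production LP in standard equality form with SciPy.
import numpy as np
from scipy.optimize import linprog

c = [-1.0, -2.0, 0.0, 0.0, 0.0]
A_eq = [[1.0, 0.0, 1.0, 0.0, 0.0],
        [0.0, 1.0, 0.0, 1.0, 0.0],
        [1.0, 1.0, 0.0, 0.0, 1.0]]
b_eq = [1.0, 1.0, 1.5]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 5, method="highs")
print(res.x, res.fun)   # expect x1 = 0.5, x2 = 1 with objective -2.5

# Basis B = {3, 4, 5} (slack columns) gives the starting BFS x = (0, 0, 1, 1, 1.5);
# basis B = {1, 2, 3} gives x = (0.5, 1, 0.5, 0, 0), which is the optimal corner.
```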

10 Summary The theory of Basic Feasible Solutions leads to a solution method. The Simplex algorithm is one of the most influential and practical algorithms of all time. However, we will not test or assign problems on the Simplex method in this class (a testament to the fact that this method has been so successful that we can use it as a basic technology).

11 SENSITIVITY ANALYSIS

12 LP Shadow Price Vector The dimension of the shadow price (SP) vector equals the dimension of the right-hand-side (RHS) vector, i.e., the number of linear constraints. In general, the optimal SP on a given active constraint is the rate of change in the optimal value (OV) of the objective as the RHS of the constraint increases within an interval, ceteris paribus. All inactive or nonbinding constraints have zero SP. In the non-degenerate case, a small change in the RHS would change the OV and the optimal solution (OS), but not the basis and the optimal SP.

13 Why the OV Net Change = y^T Δb Given a non-degenerate BFS in the LP standard form with basis A_B, we have x_B = (A_B)^{-1} b > 0 and x_N = 0, so a small change in b does not change the optimal basis or the shadow price vector y^T = c_B^T (A_B)^{-1}. At optimality, the OV is c^T x = c_B^T x_B = c_B^T (A_B)^{-1} b = y^T b. Thus, when b is changed to b + Δb, the new OV is OV_+ = c_B^T x_B = c_B^T (A_B)^{-1} (b + Δb) = y^T (b + Δb) = OV + y^T Δb, as long as the basis is unchanged.
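A quick numerical sanity check of this identity on the reconstructed production example (an illustration we added, not from the slides): compute y from the optimal basis, perturb b slightly, re-solve, and compare the change in OV with y^T Δb.

```python
# Verify OV_+ - OV = y^T Δb on the reconstructed production LP.
import numpy as np
from scipy.optimize import linprog

c = np.array([-1.0, -2.0, 0.0, 0.0, 0.0])
A = np.array([[1.0, 0, 1, 0, 0], [0, 1, 0, 1, 0], [1, 1, 0, 0, 1]])
b = np.array([1.0, 1.0, 1.5])

B = [0, 1, 2]                                  # optimal basis {x1, x2, x3}, 0-indexed
y = np.linalg.solve(A[:, B].T, c[B])           # y^T A_B = c_B^T
db = np.array([0.0, 0.0, 0.05])                # small RHS perturbation (basis stays optimal)

old = linprog(c, A_eq=A, b_eq=b, method="highs").fun
new = linprog(c, A_eq=A, b_eq=b + db, method="highs").fun
print(new - old, y @ db)                       # both should equal y^T Δb = -0.05
```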

14 LP Reduced Cost Vector The dimension of the reduced-cost (RC) vector equals the dimension of the objective coefficient vector, i.e., the number of decision variables. In general, the RC value of any non-basic variable is the amount the objective coefficient of that variable would have to change, ceteris paribus, in order for it to become a basic variable at optimality. All basic variables have zero RC. Upon termination, all non-basic variables have RC ≥ 0. In the non-degenerate case, a small change in the objective coefficients may change the OV and optimal SP, but not the basis and OS.

15 Why the OV Net Change = Δc_B^T x_B Given a BFS in the LP standard form with basis A_B, its companion SP vector is y^T = c_B^T (A_B)^{-1} and the RC vector satisfies r_N^T = c_N^T - y^T A_N > 0. If c_N makes a small change, nothing would change. But if it is reduced enough that one of the reduced costs becomes negative, then the current BFS is no longer optimal. On the other hand, if c_B makes a small change, say c_B is changed to c_B + Δc_B, then the new SP and OV are y_+^T = (c_B + Δc_B)^T (A_B)^{-1} = y^T + Δc_B^T (A_B)^{-1} and OV_+ = (y^T + Δc_B^T (A_B)^{-1}) b = OV + Δc_B^T (A_B)^{-1} b = OV + Δc_B^T x_B.
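A companion check for a change in a basic cost coefficient, again on the reconstructed production example (our illustration): with the basis unchanged, the optimal value should move by exactly Δc_B^T x_B.

```python
# Verify OV_+ - OV = Δc_B^T x_B on the reconstructed production LP.
import numpy as np
from scipy.optimize import linprog

c = np.array([-1.0, -2.0, 0.0, 0.0, 0.0])
A = np.array([[1.0, 0, 1, 0, 0], [0, 1, 0, 1, 0], [1, 1, 0, 0, 1]])
b = np.array([1.0, 1.0, 1.5])

old = linprog(c, A_eq=A, b_eq=b, method="highs")
dc = np.array([0.0, -0.1, 0.0, 0.0, 0.0])      # small change in the (basic) cost of x2
new = linprog(c + dc, A_eq=A, b_eq=b, method="highs")
print(new.fun - old.fun, dc @ old.x)           # both should equal Δc_B^T x_B = -0.1
```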

16 LP DUALITY

17 Dual Problem of Linear Programming Every LP problem is associated with another LP problem called the dual (the original problem is called the primal). Every variable of the dual is associated with a constraint of the primal; every constraint of the dual is associated with a variable of the primal. The dual is a max (min) if the primal is a min (max); the objective coefficients of the dual are the RHS of the primal; and the RHS of the dual are the objective coefficients of the primal. The constraint matrix of the dual is the transpose of the constraint matrix of the primal. The final shadow price vector of the primal is an optimal solution of the dual.

18 The Dual of the Production Problem (Figure: the primal production LP and its dual, shown side by side.)

19 More Rules to Construct the Dual

  Max model            Min model
  obj. coef. vector    right-hand side
  right-hand side      obj. coef. vector
  A                    A^T
  x_j ≥ 0              jth constraint ≥
  x_j ≤ 0              jth constraint ≤
  x_j free             jth constraint =
  ith constraint ≤     y_i ≥ 0
  ith constraint ≥     y_i ≤ 0
  ith constraint =     y_i free

The dual of the dual is the primal.

20 Dual of LP in Standard Equality Form
(LP) min c^T x s.t. Ax = b, x ≥ 0, x ∈ R^n.
(LD) max b^T y s.t. A^T y ≤ c, y ∈ R^m.
Usually, we let r = c - A^T y ∈ R^n, called the dual slacks; r must be non-negative for any dual feasible solution. In the simplex method, the final reduced cost vector is a feasible slack vector of the dual.
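A tiny sketch (ours, not from the slides) of the dual-slack computation r = c − A^T y and the feasibility check r ≥ 0, using the reconstructed production LP and its optimal shadow price vector as the candidate y.

```python
# Dual slacks and dual feasibility check for a candidate y.
import numpy as np

c = np.array([-1.0, -2.0, 0.0, 0.0, 0.0])
A = np.array([[1.0, 0, 1, 0, 0], [0, 1, 0, 1, 0], [1, 1, 0, 0, 1]])
y = np.array([0.0, -1.0, -1.0])   # optimal shadow prices of the production LP

r = c - A.T @ y                   # dual slacks r = c - A^T y
print(r, "dual feasible" if np.all(r >= -1e-9) else "dual infeasible")
```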

21 Dual Feasible Region of LP in Standard Equality Form
(LD) max b^T y s.t. A^T y ≤ c, y ∈ R^m.
This is an LP in the standard inequality form. Given a basis A_B, the dual vector y satisfying A_B^T y = c_B is said to be a dual basic solution. If a dual basic solution is also feasible, that is, c − A^T y ≥ 0, it is said to be a dual basic feasible solution (BFS). Every dual BFS is a corner point of the dual feasible region!

22 Dual Theorem
Theorem 1 (Weak duality theorem) Let both the primal feasible region F_p and the dual feasible region F_d be non-empty. Then, c^T x ≥ b^T y for all x ∈ F_p, y ∈ F_d.
Proof: c^T x − b^T y = c^T x − (Ax)^T y = x^T (c − A^T y) = x^T r ≥ 0.
This theorem shows that a feasible solution to either problem yields a bound on the value of the other problem. We call c^T x − b^T y the duality gap. If the duality gap is zero, then x and y are optimal for the primal and dual, respectively! Is the reverse true?

23 Dual of LP in Standard Equality Form
(LP) min c^T x s.t. Ax = b, x ≥ 0, x ∈ R^n.
(LD) max b^T y s.t. A^T y ≤ c, y ∈ R^m.
Usually, we let r = c - A^T y ∈ R^n, called the dual slacks; r must be non-negative for any dual feasible solution. In the simplex method, the final reduced cost vector is a feasible slack vector of the dual, and the final shadow price vector is an optimal solution of the dual, since c^T x = y^T b.

24 Dual Theorem continued
Theorem 2 (Strong duality theorem) Let both the primal feasible region F_p and the dual feasible region F_d be non-empty. Then, x* ∈ F_p is optimal for (LP) and y* ∈ F_d is optimal for (LD) if and only if the duality gap c^T x* − b^T y* = 0. (Proved by the Simplex Method.)
Corollary If (LP) and (LD) both have feasible solutions, then both problems have optimal solutions and the optimal values of the objective functions are equal. If one of (LP) or (LD) has no feasible solution, then the other is either unbounded or has no feasible solution. If one of (LP) or (LD) is unbounded, then the other has no feasible solution.
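To see strong duality numerically, the sketch below (our addition) solves both the primal and the dual of the reconstructed production LP with SciPy and compares the optimal values; the dual max b^T y s.t. A^T y ≤ c is passed to the solver as min −b^T y with free y.

```python
# Strong duality on the reconstructed production LP: equal optimal values.
import numpy as np
from scipy.optimize import linprog

c = np.array([-1.0, -2.0, 0.0, 0.0, 0.0])
A = np.array([[1.0, 0, 1, 0, 0], [0, 1, 0, 1, 0], [1, 1, 0, 0, 1]])
b = np.array([1.0, 1.0, 1.5])

primal = linprog(c, A_eq=A, b_eq=b, method="highs")
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(None, None)] * 3, method="highs")
print(primal.fun, -dual.fun)   # zero duality gap: both equal -2.5
```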

25 Possible Combinations of Primal and Dual
(F-B: feasible with bounded optimum; F-UB: feasible but unbounded; IF: infeasible)

                Dual F-B   Dual F-UB   Dual IF
  Primal F-B      Yes         No          No
  Primal F-UB     No          No          Yes
  Primal IF       No          Yes         Yes

26 Application of the Theorem: Optimality Condition
Check if a pair of primal x and dual y, with slack r, is optimal:
(x, y, r) ∈ (R^n_+, R^m, R^n_+):
  c^T x − b^T y = 0
  Ax = b
  A^T y + r = c,
which is a system of linear inequalities and equations. Thus it is easy to verify by computer whether or not a pair (x, y, r) is optimal. These conditions can be classified as Primal Feasibility, Dual Feasibility, and Zero Duality Gap.
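These conditions translate directly into a short checker. The sketch below (illustrative, with a hypothetical function name and tolerance) verifies primal feasibility, dual feasibility, and zero duality gap for a candidate triple; the certificate shown is the optimal pair of the reconstructed production LP.

```python
# Check primal feasibility, dual feasibility, and zero duality gap.
import numpy as np

def is_optimal_triple(A, b, c, x, y, tol=1e-8):
    r = c - A.T @ y                                          # dual slacks
    primal_feasible = np.all(x >= -tol) and np.allclose(A @ x, b, atol=tol)
    dual_feasible = np.all(r >= -tol)
    zero_gap = abs(c @ x - b @ y) <= tol
    return primal_feasible and dual_feasible and zero_gap

# Optimality certificate for the (reconstructed) production LP:
A = np.array([[1.0, 0, 1, 0, 0], [0, 1, 0, 1, 0], [1, 1, 0, 0, 1]])
b = np.array([1.0, 1.0, 1.5])
c = np.array([-1.0, -2.0, 0.0, 0.0, 0.0])
x = np.array([0.5, 1.0, 0.5, 0.0, 0.0])
y = np.array([0.0, -1.0, -1.0])
print(is_optimal_triple(A, b, c, x, y))   # True
```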

27 Application of the Theorem: Complementarity Slackness
For a feasible primal x ≥ 0 and dual (y, r ≥ 0), x^T r = x^T (c − A^T y) = c^T x − b^T y is also called the complementarity gap. Since both x and r are nonnegative, a zero duality gap 0 = x^T r = x_1 r_1 + x_2 r_2 + … + x_n r_n implies that x_j r_j = 0 for all j = 1, ..., n, and we say x and r are complementary to each other. Note that r_j = 0 implies that the corresponding inequality constraint is active at the solution.

28 Implication of the Complementarity
(Table: the primal-dual correspondence table of slide 19, with the roles of primal and dual interchangeable.)
The complementarity condition implies that at optimality: every inactive inequality constraint has a zero dual value; every non-zero variable value implies that the corresponding dual constraint is active; every equality constraint is viewed as active.

29 The Ideology of the (Primal) Simplex Method
The simplex method described earlier is the primal simplex method, meaning that the method maintains and improves a primal basic feasible solution x_B. The shadow price vector y in the method is a dual basic solution, and it is not feasible until termination; the reduced cost vector r in the method is the dual slack vector. Note that x_N = 0 and r_B = 0, so that x and r are complementary to each other at any basis A_B. When the method terminates, x_B is primal optimal and (y, r) becomes dual feasible, so it is also dual optimal, since they are complementary.

30 Interpretation of the Dual of the Production Problem
Primal: max c^T x s.t. Ax ≤ b, x ≥ 0.
Dual: min b^T y s.t. A^T y ≥ c, y ≥ 0.
Acquisition pricing interpretation: y gives prices of the resources; A^T y ≥ c says the prices are competitive for each product; min b^T y minimizes the total liquidation cost.

31 The Transportation Problem
(Figure: a bipartite network with m supply nodes 1, ..., m having supplies s_1, ..., s_m and n demand nodes 1, ..., n having demands d_1, ..., d_n; each supply-demand arc (i, j) carries a unit cost c_ij and a shipment x_ij.)

32 The Transportation Dual
Shipping company's new charge scheme:
u_i: supply site unit charge
v_j: demand site unit charge
u_i + v_j ≤ c_ij: competitiveness
(Figure: the primal transportation LP and its dual.)

33 The Transportation Example

            1      2      3      4    Supply
  1        12     13      4      6    500 (u_1)
  2         6      4     10     11    700 (u_2)
  3        10      9     12      4    800 (u_3)
  Demand  400    900    200    500    2000
         (v_1)  (v_2)  (v_3)  (v_4)
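A sketch (our addition) that feeds this example to SciPy's linprog; the cost, supply, and demand figures are the numbers as reconstructed from the table above, so treat them as illustrative.

```python
# Solve the transportation example as an LP: min sum_ij c_ij x_ij.
import numpy as np
from scipy.optimize import linprog

cost = np.array([[12, 13, 4, 6],
                 [6, 4, 10, 11],
                 [10, 9, 12, 4]], dtype=float)
supply = np.array([500.0, 700.0, 800.0])
demand = np.array([400.0, 900.0, 200.0, 500.0])
m, n = cost.shape

# Equality constraints: each supply site ships exactly s_i, each demand site
# receives exactly d_j (total supply equals total demand here, 2000 = 2000).
A_eq = np.zeros((m + n, m * n))
for i in range(m):
    A_eq[i, i * n:(i + 1) * n] = 1.0        # sum_j x_ij = s_i
for j in range(n):
    A_eq[m + j, j::n] = 1.0                 # sum_i x_ij = d_j
b_eq = np.concatenate([supply, demand])

res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq, method="highs")
print(res.fun)                              # minimum total shipping cost
print(res.x.reshape(m, n))                  # optimal shipment plan x_ij
```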

34 Wrapping up: Range Analyses
Theorem When b is changed to b + Δb, the current optimal basis A_B remains optimal if and only if (A_B)^{-1} (b + Δb) ≥ 0, or equivalently x_B + (A_B)^{-1} Δb ≥ 0. When c_B is changed to c_B + Δc_B, the current optimal basis A_B remains optimal if and only if c_N^T - y_+^T A_N = r_N^T - Δc_B^T (A_B)^{-1} A_N ≥ 0. This establishes a range for each coefficient of b or c_B.
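The RHS condition above can be turned into a small range-analysis routine. The sketch below (illustrative, with our own function name) computes, one coordinate of b at a time, how far Δb_i can move while x_B + (A_B)^{-1} Δb stays nonnegative; the example uses the reconstructed production LP with its optimal basis.

```python
# RHS range analysis: allowable change in each b_i keeping the basis optimal.
import numpy as np

def rhs_ranges(A, b, B):
    """Return (allowable decrease, allowable increase) of each b_i for basis B."""
    A_B_inv = np.linalg.inv(A[:, B])
    x_B = A_B_inv @ b
    lo, hi = [], []
    for i in range(len(b)):
        col = A_B_inv[:, i]                 # effect on x_B of changing b_i alone
        # Require x_B + delta * col >= 0 componentwise.
        ups = [-x_B[k] / col[k] for k in range(len(b)) if col[k] < 0]
        downs = [-x_B[k] / col[k] for k in range(len(b)) if col[k] > 0]
        hi.append(min(ups) if ups else np.inf)
        lo.append(max(downs) if downs else -np.inf)
    return list(zip(lo, hi))

# Production example, optimal basis {x1, x2, x3} (0-indexed [0, 1, 2]):
A = np.array([[1.0, 0, 1, 0, 0], [0, 1, 0, 1, 0], [1, 1, 0, 0, 1]])
b = np.array([1.0, 1.0, 1.5])
print(rhs_ranges(A, b, [0, 1, 2]))          # (Δb_i low, Δb_i high) per constraint
```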

