3.3 Implementation
(1) naive implementation, (2) revised simplex method, (3) full tableau implementation.

(1) Naive implementation: Given a basis $B$.
Compute $p' = c_B'B^{-1}$ (solve $p'B = c_B'$).
Choose $j$ such that $\bar{c}_j = c_j - c_B'B^{-1}A_j = c_j - p'A_j < 0$, $j \in N$.
Move to $(x_B, x_j) = (B^{-1}b, 0) + \theta(d_B, d_j)$, where $d_B = -B^{-1}A_j$ and $d_j = 1$.
Let $u = B^{-1}A_j$ (solve $Bu = A_j$). Determine $\theta^* = \min_{\{i = 1,\dots,m:\ u_i > 0\}} (B^{-1}b)_i / u_i$.
Let $B(l)$ be the index of the leaving basic variable. Replace $A_{B(l)}$ by $A_j$ in the basis, update the basis indices, and update the solution $x$. (A sketch of one such iteration is given below.)
The naive implementation itself is frequently called the revised simplex method, in contrast to the full tableau implementation.
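The following is a minimal NumPy sketch of one iteration of this naive implementation, assuming a standard-form problem $\min c'x$, $Ax = b$, $x \ge 0$ and a known feasible basis. The function name, the tolerance, and the choice of the first negative reduced cost as the entering variable are illustrative assumptions, not from the text.

```python
# One iteration of the "naive" implementation (a sketch, not a full solver).
import numpy as np

def naive_simplex_iteration(A, b, c, basis):
    """One pivot step. `basis` is a list of m column indices of A."""
    m, n = A.shape
    B = A[:, basis]                                   # current basis matrix
    x_B = np.linalg.solve(B, b)                       # B^{-1} b
    p = np.linalg.solve(B.T, c[basis])                # solve p'B = c_B'
    nonbasic = [j for j in range(n) if j not in basis]
    # reduced costs c_bar_j = c_j - p'A_j for the nonbasic columns
    reduced = {j: c[j] - p @ A[:, j] for j in nonbasic}
    entering = next((j for j in nonbasic if reduced[j] < -1e-12), None)
    if entering is None:
        return basis, x_B, True                       # current basis is optimal
    u = np.linalg.solve(B, A[:, entering])            # solve B u = A_j
    if np.all(u <= 1e-12):
        raise ValueError("unbounded: optimal cost is -infinity")
    ratios = [(x_B[i] / u[i], i) for i in range(m) if u[i] > 1e-12]
    theta, l = min(ratios)                            # ratio test; position l leaves
    basis = basis.copy()
    basis[l] = entering                               # A_{B(l)} replaced by A_j
    return basis, np.linalg.solve(A[:, basis], b), False
```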
(2) Revised simplex method:
The naive implementation needs to find $p' = c_B'B^{-1}$ and $u = B^{-1}A_j$ (or solve $p'B = c_B'$, $Bu = A_j$) in each iteration. Instead, update $B^{-1}$ efficiently so that the computational burden is reduced (compute $c_B'B^{-1}$ and $B^{-1}A_j$ easily; a similar idea can be used to update $B$ efficiently and find the $p$, $u$ vectors easily).
$B = [A_{B(1)}, \dots, A_{B(m)}]$, $\bar{B} = [A_{B(1)}, \dots, A_{B(l-1)}, A_j, A_{B(l+1)}, \dots, A_{B(m)}]$
$B^{-1}\bar{B} = [e_1, \dots, e_{l-1}, u, e_{l+1}, \dots, e_m]$, where $u = B^{-1}A_j$.
Premultiply elementary row operation matrices $Q_m Q_{m-1} \cdots Q_1 \equiv Q$ to $B^{-1}\bar{B}$ so that $Q(B^{-1}\bar{B}) = Q[e_1, \dots, e_{l-1}, u, e_{l+1}, \dots, e_m] = I$ $\Rightarrow$ $QB^{-1} = \bar{B}^{-1}$.
Hence applying the same row operations (those that convert $u$ to $e_l$) to $B^{-1}$ results in $\bar{B}^{-1}$. (See the example in the text, and the sketch below.)
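As a small illustration, the sketch below applies the row operations that turn $u$ into $e_l$ directly to a stored copy of $B^{-1}$; the function name and arguments are assumptions of this sketch.

```python
# Update B^{-1} to Bbar^{-1} by the elementary row operations described above.
import numpy as np

def update_basis_inverse(B_inv, u, l):
    """Return Bbar^{-1} from B^{-1} when the column in basis position l is
    replaced and u = B^{-1}A_j is the entering column in current coordinates."""
    B_inv = B_inv.copy()
    m = B_inv.shape[0]
    B_inv[l, :] /= u[l]                        # scale pivot row: pivot element becomes 1
    for i in range(m):
        if i != l:
            B_inv[i, :] -= u[i] * B_inv[l, :]  # eliminate u_i in every other row
    return B_inv

# Usage sketch: Bbar_inv = update_basis_inverse(B_inv, B_inv @ A[:, j], l)
```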
(3) Full tableau implementation:
$Ax = b$ $\Rightarrow$ $B^{-1}Ax = B^{-1}b$. Maintain the tableau $[\,B^{-1}b \mid B^{-1}A_1, \dots, B^{-1}A_n\,]$. We can read the current b.f.s. $x$ from $B^{-1}b$, and $d_B = -B^{-1}A_j$.
Update to $[\,\bar{B}^{-1}b \mid \bar{B}^{-1}A\,]$: we know $\bar{B}^{-1} = QB^{-1}$, hence $\bar{B}^{-1}[\,b \mid A\,] = QB^{-1}[\,b \mid A\,] = Q[\,B^{-1}b \mid B^{-1}A\,]$. So apply the row operations that convert $u = B^{-1}A_j$ to $e_l$ to the matrix $[\,B^{-1}b \mid B^{-1}A\,]$.
(To find the exiting column $A_{B(l)}$ and the step size $\theta^*$, compare $x_{B(i)}/u_i$ for $u_i > 0$, $i = 1, \dots, m$.) (pivot column, pivot row, pivot element)
Also maintain and update information about the reduced costs and the objective value. (A sketch of one such pivot is given below.)
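A minimal sketch of one full-tableau pivot, assuming the tableau stores the right-hand side in column 0, the 0-th row on top, and rows 1..m equal to $[B^{-1}b \mid B^{-1}A]$; the layout, names, and tolerance are assumptions of this sketch.

```python
# One full-tableau pivot on a float array T of shape (m+1, n+1).
import numpy as np

def tableau_pivot(T, j):
    """One pivot for entering variable x_j. Column 0 of T is the right-hand
    side, so column j+1 holds u = B^{-1}A_j; row 0 is the 0-th (cost) row."""
    m = T.shape[0] - 1
    u = T[1:, j + 1]
    rows = [i for i in range(m) if u[i] > 1e-12]
    if not rows:
        raise ValueError("unbounded")
    l = min(rows, key=lambda i: T[i + 1, 0] / u[i])   # ratio test: basis position l leaves
    piv = 1 + l                                       # tableau row of the pivot element
    T[piv, :] /= T[piv, j + 1]                        # make the pivot element 1
    for r in range(m + 1):
        if r != piv:
            T[r, :] -= T[r, j + 1] * T[piv, :]        # zero out column j+1 elsewhere,
                                                      # including the 0-th row
    return l                                          # basis position where x_j entered
```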
To update the 0-th row, add $g \times$ (pivot row) to the 0-th row for some scalar $g$ so that the coefficient of the entering variable $x_j$ in the 0-th row becomes 0. Currently the 0-th row is $[\,0 \mid c'\,] - p'[\,b \mid A\,]$, where $p' = c_B'B^{-1}$.
Let column $j$ be the pivot column and row $l$ be the pivot row. Pivot row $l$ of $[\,B^{-1}b \mid B^{-1}A\,]$ is $h'[\,b \mid A\,]$, where $h'$ is the $l$-th row of $B^{-1}$. Hence, after the addition, the new 0-th row is still of the form $[\,0 \mid c'\,] - p'[\,b \mid A\,]$ for some $p$, while $c_{\bar{B}(l)} - p'A_{\bar{B}(l)} = c_j - p'A_j = 0$.
(continued) Now, for $x_k$ basic with $k = B(i)$, $i \neq l$, the reduced cost $\bar{c}_k$ remains at 0. ($\bar{c}_k = 0$ before the pivot. We have $B^{-1}A_{B(i)} = e_i$, $i \neq l$, hence $(B^{-1}A_{B(i)})_l = 0$, $i \neq l$.)
$\Rightarrow$ $c_{\bar{B}(i)} - p'A_{\bar{B}(i)} = 0$ for all $i = 1, \dots, m$
$\Rightarrow$ $c_{\bar{B}}' - p'\bar{B} = 0'$
$\Rightarrow$ $p' = c_{\bar{B}}'\bar{B}^{-1}$
$\Rightarrow$ the new 0-th row is $[\,0 \mid c'\,] - c_{\bar{B}}'\bar{B}^{-1}[\,b \mid A\,]$, as desired.
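This claim can be checked numerically. The sketch below builds a random instance whose first columns form a basis, performs the 0-th row update described above, and compares the result with $[0 \mid c'] - c_{\bar{B}}'\bar{B}^{-1}[b \mid A]$ computed from scratch; all data, indices, and names here are illustrative.

```python
# Numerical check of the 0-th row update on illustrative random data.
import numpy as np

def zeroth_row(A, b, c, basis):
    """0-th row [0 | c'] - p'[b | A] with p' = c_B'B^{-1}."""
    p = np.linalg.solve(A[:, basis].T, c[basis])
    return np.concatenate(([0.0], c)) - p @ np.column_stack((b, A))

rng = np.random.default_rng(0)
m, n = 3, 6
A = rng.standard_normal((m, n)); A[:, :m] = np.eye(m)   # columns 0..m-1 form a basis
b, c = rng.random(m), rng.standard_normal(n)
basis, j, l = [0, 1, 2], 3, 1                            # bring x_3 in, position l leaves

row0 = zeroth_row(A, b, c, basis)
B_inv = np.linalg.inv(A[:, basis])
u = B_inv @ A[:, j]                                      # assumes u[l] != 0
pivot_row = B_inv[l] @ np.column_stack((b, A))           # h'[b | A], h' = l-th row of B^{-1}
g = -row0[1 + j] / u[l]                                  # makes the x_j entry of row 0 zero
new_basis = basis.copy(); new_basis[l] = j
assert np.allclose(row0 + g * pivot_row, zeroth_row(A, b, c, new_basis))
```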
(The slide shows the tableau iterations of Example 3.5: first with basic variables $x_4$, $x_5$, $x_6$, then with basic variables $x_4$, $x_1$, $x_6$ after a pivot.) See text Example 3.5 for more iterations.
(Remarks)
(1) The tableau form can also be derived as follows. Given $\min c'x$, $Ax = b$, $x \ge 0$, let $A = [\,B \mid N\,]$, $x = (x_B, x_N)$, $c = (c_B, c_N)$, where $B$ is the current basis. Also let $z$ denote the value of the objective function, i.e., $z = c'x$.
Since all feasible solutions must satisfy $Ax = b$, they must satisfy
$[\,B \mid N\,](x_B, x_N) = b$ $\rightarrow$ $Bx_B + Nx_N = b$ $\rightarrow$ $Bx_B = b - Nx_N$ $\rightarrow$ $x_B = B^{-1}b - B^{-1}Nx_N$,
i.e., $Ix_B + B^{-1}Nx_N = B^{-1}b$, or in matrix form, $[\,I \mid B^{-1}N\,](x_B, x_N) = B^{-1}b$.
(continued) Since all feasible solutions must satisfy these equations, we can plug the expression for $x_B$ into the objective function to obtain
$z = c'x = c_B'x_B + c_N'x_N = c_B'(B^{-1}b - B^{-1}Nx_N) + c_N'x_N = c_B'B^{-1}b + 0'x_B + (c_N' - c_B'B^{-1}N)x_N = c_B'B^{-1}b + (c_N' - p'N)x_N$,
where $p' = c_B'B^{-1}$, or $p'B = c_B'$.
Hence we obtain the tableau with respect to the current basis $B$:
$z - c_B'B^{-1}b = 0'x_B + (c_N' - c_B'B^{-1}N)x_N$
$B^{-1}b = Ix_B + B^{-1}Nx_N = B^{-1}Ax$
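A quick numerical sanity check of this identity on illustrative random data, taking the first $m$ columns of $A$ as the basis $B$:

```python
# Check z = c_B'B^{-1}b + (c_N' - p'N)x_N for an arbitrary solution of Ax = b.
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 6
A = rng.standard_normal((m, n)); A[:, :m] = np.eye(m)
b, c = rng.random(m), rng.standard_normal(n)
B, N = A[:, :m], A[:, m:]
c_B, c_N = c[:m], c[m:]

x_N = rng.random(n - m)                    # arbitrary nonbasic values
x_B = np.linalg.solve(B, b - N @ x_N)      # chosen so that Ax = b holds
p = np.linalg.solve(B.T, c_B)              # p' = c_B'B^{-1}

z = c_B @ x_B + c_N @ x_N                  # z = c'x
assert np.isclose(z, c_B @ np.linalg.solve(B, b) + (c_N - p @ N) @ x_N)
```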
(continued) (2) The tableau can also be obtained using the following logic. Note that elementary row operations on the equations do not change the set of feasible solutions, but the representation changes. Suppose the current basis $B$ is known. From
$-z + c_B'x_B + c_N'x_N = 0$
$Bx_B + Nx_N = b$
we compute the multiplier vector $p$ for the constraints by solving $p'B = c_B'$ ($p' = c_B'B^{-1}$). Then we take the linear combination of the constraints with weight vector $-p$ and add it to the objective row, resulting in
$-z + 0'x_B + (c_N' - c_B'B^{-1}N)x_N = -c_B'B^{-1}b$, or $z - c_B'B^{-1}b = 0'x_B + (c_N' - c_B'B^{-1}N)x_N$.
Also, we multiply both sides of the constraints by $B^{-1}$ $\rightarrow$ $Ix_B + B^{-1}Nx_N = B^{-1}b$. Here, the $i$-th row of $B^{-1}$ is the weight vector used for the weighted sum of constraints that gives the updated $i$-th constraint.
(continued) (3) Updating the tableau using elementary row operations amounts to updating the system of linear equations by elementary row operations so that the current basic feasible solution for the updated basis $\bar{B}$ can be easily identified. Hence the submatrix of the updated $A$ matrix corresponding to the new basis $\bar{B}$ is an identity matrix. The set of feasible solutions, however, does not change when we apply elementary row operations to the system of equations.
Practical Performance Enhancements
In commercial LP solvers, $B^{-1}$ is not updated explicitly in the revised simplex method. Instead we update a representation of $B$ as $B = LU$, where $L$ is a lower triangular and $U$ an upper triangular matrix (with some row permutations allowed; called the $LU$ decomposition, or triangular factorization). We solve the systems (with proper updates) $p'LU = c_B'$ and $LUu = A_j$; each system takes $O(m^2)$ to solve and is numerically more stable than using $B^{-1}$. Moreover, less fill-in occurs in the $LU$ decomposition than in $B^{-1}$, which is important when we solve large sparse problems. (A sketch using a standard factorization routine is given below.)
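A hedged sketch of this idea using SciPy's dense LU routines; a real solver would use sparse factorizations with factorization-update formulas, which is not shown here. The function name is an assumption of this sketch.

```python
# Solve the two per-iteration systems via one LU factorization of B.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def simplex_systems_via_lu(B, c_B, A_j):
    """Return p (from p'B = c_B') and u (from Bu = A_j)."""
    lu, piv = lu_factor(B)                   # B = P L U, computed once per basis
    u = lu_solve((lu, piv), A_j)             # solves B u = A_j
    p = lu_solve((lu, piv), c_B, trans=1)    # trans=1 solves B'p = c_B, i.e., p'B = c_B'
    return p, u
```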
3.4 Anticycling
1. Lexicographic pivoting rule
Def: $u \in \mathbb{R}^n$ is said to be lexicographically larger than $v \in \mathbb{R}^n$ if $u \neq v$ and the first nonzero component of $u - v$ is positive (denoted $u \stackrel{L}{>} v$).

Lexicographic pivoting rule:
(1) Choose an entering variable $x_j$ with $\bar{c}_j < 0$. Compute the updated column $u = B^{-1}A_j$.
(2) For each $i$ with $u_i > 0$, divide the $i$-th row of the tableau by $u_i$ and choose the lexicographically smallest row. If row $l$ is smallest, $x_{B(l)}$ leaves the basis. (A sketch of this ratio test follows.)
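A minimal sketch of the lexicographic ratio test on the full tableau, assuming rows 1..m of the tableau are $[B^{-1}b \mid B^{-1}A]$ and are lexicographically positive; the names and tolerance are illustrative.

```python
# Lexicographic choice of the leaving row for entering variable x_j.
import numpy as np

def lexicographic_leaving_row(T, j):
    """Return the pivot row index l (1..m) chosen by the lexicographic rule."""
    m = T.shape[0] - 1
    candidates = [i for i in range(1, m + 1) if T[i, j + 1] > 1e-12]
    if not candidates:
        raise ValueError("unbounded")
    # Divide each candidate row by its entry u_i and compare lexicographically;
    # Python tuple comparison is exactly lexicographic comparison.
    return min(candidates, key=lambda i: tuple(T[i, :] / T[i, j + 1]))
```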
Ex 3.7 (tableau in the text): Suppose the pivot column is the third one ($j = 3$). The ratio is 1/3 for both the 1st and 3rd rows; the lexicographic comparison of these two rows (after dividing each by its pivot-column entry) breaks the tie, so the third row is the pivot row and $x_{B(3)}$ exits the basis.
Thm 3.4: Suppose the rows of the current simplex tableau, except the 0-th row, are lexicographically positive and the lexicographic rule is used. Then
(a) every row except the 0-th remains lexicographically positive;
(b) the 0-th row strictly increases lexicographically;
(c) the simplex method terminates finitely.
Pf) (a) Suppose $x_j$ enters and $x_{B(l)}$ leaves ($u_l > 0$, the pivot row is the $l$-th row).
Then ($l$-th row)$/u_l$ $\stackrel{L}{<}$ ($i$-th row)$/u_i$ for $i \neq l$, $u_i > 0$.
($l$-th row) $\rightarrow$ ($l$-th row)$/u_l$ (lexicographically positive).
For the $i$-th row, $i \neq l$:
(1) $u_i < 0$: a positive multiple of the ($l$-th row) is added to the $i$-th row $\Rightarrow$ lexicographically positive.
(2) $u_i > 0$: (new $i$-th row) = (old $i$-th row) $- \frac{u_i}{u_l}\times$ (old $l$-th row) $= u_i\left(\frac{\text{old } i\text{-th row}}{u_i} - \frac{\text{old } l\text{-th row}}{u_l}\right)$, which is lexicographically positive $\Rightarrow$ (new $i$-th row) is lexicographically positive.
(3) $u_i = 0$: the row remains unchanged.
(b) $\bar{c}_j < 0$ $\Rightarrow$ we add a positive multiple of the $l$-th row to the 0-th row.
(c) The 0-th row is determined by the current basis $\Rightarrow$ no basis is repeated since the 0-th row increases lexicographically $\Rightarrow$ finite termination. □
Remarks:
(1) To have initial lexicographically positive rows, permute the columns (variables) so that the basic variables come first in the current tableau.
(2) The idea of the lexicographic rule is related to the perturbation method. If no degenerate solution occurs, the objective value strictly decreases, hence no cycling (the decrease of the objective function value is $\theta^*\bar{c}_j$). Hence add a small positive $\epsilon_i$ to $x_{B(i)}$, $i = 1, \dots, m$, to obtain $x_{B(i)} = (B^{-1}b)_i + \epsilon_i$, where $0 < \epsilon_m \ll \epsilon_{m-1} \ll \dots \ll \epsilon_2 \ll \epsilon_1 \ll 1$.
(continued) It can be shown that no degenerate solution appears in subsequent iterations (think of the $\epsilon_i$'s as symbols), hence cycling is avoided. The lexicographic rule is an implementation of the perturbation method without using the $\epsilon_i$'s explicitly. Note that the coefficient matrices of the $\epsilon_i$'s and of the basic variables are both identity matrices; hence the simplex iterations (elementary row operations) result in the same coefficient matrices for both.
2. Bland's rule (smallest subscript rule)
(1) Choose the smallest $j$ among the nonbasic variables with $\bar{c}_j < 0$ and let the column $A_j$ enter the basis.
(2) Out of all basic variables $x_i$ that are tied in the minimum ratio test for choosing an exiting variable, choose the one with the smallest index $i$.
Pf) See pp. 37-38, Vasek Chvatal, Linear Programming, Freeman. Note that the proof is given for the maximization problem.
Note that the lexicographic rule and the smallest subscript rule can be started and stopped at any time during the simplex iterations. (A sketch of Bland's rule is given below.)
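A minimal sketch of both parts of Bland's rule, assuming the reduced costs and $u = B^{-1}A_j$ are available; the tolerances and names are assumptions of this sketch.

```python
# Bland's (smallest subscript) rule for entering and leaving variables.
import numpy as np

def bland_entering(reduced_costs):
    """Smallest index j with negative reduced cost, or None if optimal."""
    for j, cbar in enumerate(reduced_costs):
        if cbar < -1e-12:
            return j
    return None

def bland_leaving(x_B, u, basis):
    """Among rows tied in the ratio test, pick the row whose basic variable
    x_{B(i)} has the smallest index."""
    rows = [i for i in range(len(u)) if u[i] > 1e-12]
    if not rows:
        raise ValueError("unbounded")
    theta = min(x_B[i] / u[i] for i in rows)
    tied = [i for i in rows if np.isclose(x_B[i] / u[i], theta)]
    return min(tied, key=lambda i: basis[i])
```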
3.5 Finding an initial b.f.s.
Given (P): $\min c'x$, s.t. $Ax = b$, $x \ge 0$ (assume $b \ge 0$).
Introduce artificial variables and solve
(P-I): $\min y_1 + y_2 + \dots + y_m$, s.t. $Ax + Iy = b$, $x \ge 0$, $y \ge 0$.
Initial b.f.s.: $x = 0$, $y = b$.
If the optimal value $> 0$ $\Rightarrow$ (P) is infeasible (if (P) is feasible $\Rightarrow$ (P-I) has a solution with $y = 0$).
If the optimal value $= 0$ $\Rightarrow$ all $y_i = 0$, so the current optimal solution $x$ gives a feasible solution to (P). Drop the $y_i$ variables and use the original objective function. However, we need a b.f.s. to use the simplex method, and we have trouble if some artificial variables remain basic in the optimal basis. (A sketch of the Phase-I setup is given below.)
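A hedged sketch of setting up the auxiliary (Phase-I) problem; it only builds the data and the initial b.f.s., it does not run the simplex method. Names are illustrative.

```python
# Phase-I setup: min 1'y s.t. Ax + Iy = b, x >= 0, y >= 0.
import numpy as np

def phase_one_setup(A, b):
    """Return (A1, b1, c1, basis, x0) for the auxiliary problem."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    m, n = A.shape
    neg = b < 0                            # enforce b >= 0 by flipping row signs
    A[neg, :] *= -1.0
    b[neg] *= -1.0
    A1 = np.hstack([A, np.eye(m)])         # columns of the artificial variables y_1..y_m
    c1 = np.concatenate([np.zeros(n), np.ones(m)])
    basis = list(range(n, n + m))          # the artificials form the initial basis
    x0 = np.concatenate([np.zeros(n), b])  # initial b.f.s.: x = 0, y = b
    return A1, b, c1, basis, x0
```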
Driving artificial variables out of the basis (in tableau form)
(The slide shows a tableau with the pivot element highlighted.)
Suppose $x_{B(1)}, \dots, x_{B(k)}$, $k < m$, are the basic variables that come from the original variables. Suppose artificial variable $y_i$ is in the $l$-th position of the basis (the $l$-th component of the column for $y_i$ in the optimal tableau is 1 and all other components are 0), and the $l$-th component of $B^{-1}A_j$ is nonzero for some nonbasic original variable $x_j$.
Then $[B^{-1}A_{B(1)}, \dots, B^{-1}A_{B(k)}] = [e_1, \dots, e_k]$ and $B^{-1}A_j$ are linearly independent $\Rightarrow$ $A_{B(1)}, \dots, A_{B(k)}, A_j$ are linearly independent.
So bring $x_j$ into the basis by pivoting (the solution is not changed, since $x_{B(l)} = y_i = 0$).
If there exists no $x_j$ with $(B^{-1}A_j)_l \neq 0$ $\Rightarrow$ $g'A = 0'$, where $g'$ is the $l$-th row of $B^{-1}$. So the rows of $A$ are linearly dependent. We also have $g'b = 0$ since $Ax = b$ is feasible. Hence $g'Ax = g'b$ ($0'x = 0$) is a redundant equation, and it is the $l$-th row of the tableau $\Rightarrow$ we can eliminate it. (A sketch of this procedure follows.)
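A minimal sketch of this procedure on a full tableau: pivot the artificial variable out on any original column with a nonzero entry in its row, or report the row as redundant. The tableau layout (column 0 is the right-hand side, rows 1..m are $[B^{-1}b \mid B^{-1}A]$) and the names are assumptions of this sketch.

```python
# Drive the artificial variable in basis position l out of the basis,
# or detect that tableau row l is redundant.
import numpy as np

def drive_artificial_out(T, l, n_original, basis):
    """l is the basis position (tableau row l, 1-based); basis holds variable
    indices, with original variables numbered 0..n_original-1."""
    for j in range(n_original):                       # scan original variables
        if j not in basis and abs(T[l, j + 1]) > 1e-9:
            # Pivot on element (l, j+1); x_{B(l)} = 0, so the solution is unchanged
            # even if the pivot element is negative.
            T[l, :] /= T[l, j + 1]
            for r in range(T.shape[0]):
                if r != l:
                    T[r, :] -= T[r, j + 1] * T[l, :]
            basis[l - 1] = j
            return j
    return None   # row l is zero on all original columns: redundant row, can be removed
```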
Remarks: Note that although we may eliminate the $l$-th row of the current tableau, this does not necessarily imply that the $l$-th row of the initial tableau is redundant. To see this, suppose that the $k$-th artificial variable (with corresponding column $e_k$ in the initial tableau) is in the $l$-th position of the current basis matrix (hence $B^{-1}e_k = e_l$ in the current tableau). Let $g'$ be the $l$-th row of $B^{-1}$; then from $g'e_k = 1$, we know that the $k$-th component of $g$ is 1. Then, from $g'A = 0'$ and $g'b = 0$, the $k$-th row of $[\,b \mid A\,]$ can be expressed as a linear combination of the other rows, hence the $k$-th row in the original tableau is redundant. (What effect can be observed if there are additional basic artificial variables?)
Sometimes we may want to retain the redundant rows when solving the problem, because we do not want to change the problem data; this lets us perform sensitivity analysis later, i.e., change the data a little and solve the problem again. Then the artificial variables corresponding to the redundant equations should remain in the basis (we should not drop them). They will not leave the basis in subsequent iterations since the corresponding rows have all 0 coefficients.
If we do not drive the artificial variables out of the basis and perform the simplex iterations using the current b.f.s., it may happen that the values of the basic artificial variables become positive, giving an infeasible solution to the original problem. To avoid this, a modification of the simplex method is needed, or we may use the bounded variable simplex method with the upper bounds of the remaining artificial variables set to 0 (the lower bounds are 0).
Two-phase simplex method
See text p. 116 for the two-phase simplex method.
Big-M method: use the objective function $\min \sum_{j=1}^{n} c_j x_j + M \sum_{i=1}^{m} y_i$, where $M$ is a large number. (A sketch of the Big-M setup is given below.)
See text sec. 3.6 for the definition of an $m$-dimensional simplex and the interpretation of the simplex method by column geometry.
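A hedged sketch of the Big-M setup; the particular choice of $M$ below is a common heuristic and an assumption of this sketch, not prescribed by the text.

```python
# Big-M formulation: append artificial variables with a large cost M.
import numpy as np

def big_m_setup(A, b, c, M=None):
    """Return (A_bigM, b, c_bigM, basis) for min c'x + M*1'y, Ax + Iy = b."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    c = np.asarray(c, dtype=float)
    m, n = A.shape
    if M is None:
        M = 1e4 * (1.0 + np.abs(c).max())   # "large enough" relative to the costs (heuristic)
    A_bigM = np.hstack([A, np.eye(m)])
    c_bigM = np.concatenate([c, np.full(m, M)])
    basis = list(range(n, n + m))           # artificials give the initial b.f.s. (b >= 0 assumed)
    return A_bigM, b, c_bigM, basis
```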
3.7 Computational efficiency of the simplex method
See text sec. 1.6, Algorithms and operation counts: count the number of operations of an algorithm; polynomial time vs. exponential time. The size of the numbers involved may also need to be considered (Chapters 8, 9).
Each iteration of the simplex method takes time polynomial in $m$, $n$ and the length of the encoding of the data, but the number of iterations is exponential in the worst case. Empirically, the number of iterations is $O(m)$ and $O(\log n)$.
For the known pivoting rules, there exist counterexamples on which the simplex method takes an exponential number of iterations, hence the simplex algorithm is not a polynomial time algorithm. (Still, there remains a possibility that some other pivoting rule may provide polynomial running time, though this may be very difficult to prove.)