3.3 Implementation (1) naive implementation (2) revised simplex method

3.3 Implementation: (1) naive implementation, (2) revised simplex method, (3) full tableau implementation

(1) Naive implementation: given a basis B,

1. Compute p' = c_B'B^{-1} (solve p'B = c_B').
2. Choose j ∈ N such that the reduced cost c̄_j = c_j − c_B'B^{-1}A_j = c_j − p'A_j < 0.
3. Move along the direction (d_B, d_N) = (−B^{-1}A_j, e_j): (x_B, x_N) = (B^{-1}b, 0) + θ(−B^{-1}A_j, e_j).
4. Let u = B^{-1}A_j (solve Bu = A_j).
5. Determine θ* = min over { i : u_i > 0 } of (B^{-1}b)_i / u_i.
6. Let B(l) be the index of the leaving basic variable. Replace A_B(l) by A_j in the basis, update the basis indices, and update the solution x.

The naive implementation itself is frequently called the revised simplex method, in contrast to the full tableau implementation.

Linear Programming 2011
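The steps above can be sketched in Python. This is a minimal sketch assuming numpy is available; the function name, data layout, and tolerances are all illustrative, not the text's own code.

```python
import numpy as np

def naive_simplex_iteration(A, b, c, basis):
    """One pivot of the naive implementation. Returns the new basis list,
    or None if the current basis is already optimal."""
    m, n = A.shape
    B = A[:, basis]                           # current basis matrix
    p = np.linalg.solve(B.T, c[basis])        # solve p'B = c_B'
    # reduced costs c_j - p'A_j for nonbasic j; pick any negative one
    nonbasic = [j for j in range(n) if j not in basis]
    entering = next((j for j in nonbasic if c[j] - p @ A[:, j] < -1e-9), None)
    if entering is None:
        return None                           # optimal
    u = np.linalg.solve(B, A[:, entering])    # solve Bu = A_j
    xB = np.linalg.solve(B, b)                # B^{-1}b
    # minimum ratio test: theta* = min over u_i > 0 of (B^{-1}b)_i / u_i
    ratios = [(xB[i] / u[i], i) for i in range(m) if u[i] > 1e-9]
    if not ratios:
        raise ValueError("problem is unbounded")
    _, l = min(ratios)
    basis = basis.copy()
    basis[l] = entering                       # A_B(l) leaves, A_j enters
    return basis
```

Note that this version re-solves the linear systems from scratch each iteration; avoiding that is exactly the point of the B^{-1} update described next.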

(2) Revised simplex method:

The naive implementation needs to find p' = c_B'B^{-1} and u = B^{-1}A_j (or solve p'B = c_B', Bu = A_j) in each iteration. Instead, update B^{-1} efficiently so that the computational burden is reduced (compute c_B'B^{-1} and B^{-1}A_j easily; the same idea can be used to update B efficiently and find the p, u vectors easily).

B = [ A_B(1), ..., A_B(m) ]  →  B̄ = [ A_B(1), ..., A_B(l-1), A_j, A_B(l+1), ..., A_B(m) ]

B^{-1}B̄ = [ e_1, ..., e_{l-1}, u, e_{l+1}, ..., e_m ],  where u = B^{-1}A_j

Premultiply B^{-1}B̄ by elementary row operation matrices Q = Q_k Q_{k-1} ... Q_1 so that

Q B^{-1}B̄ = Q [ e_1, ..., e_{l-1}, u, e_{l+1}, ..., e_m ] = I  ⇒  Q B^{-1} = B̄^{-1}

Hence applying the same row operations (which convert u to e_l) to B^{-1} yields B̄^{-1}. (See the example in the text.)
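The update of B^{-1} by these row operations can be sketched as follows (assuming numpy; Binv, u, and l are illustrative names for B^{-1}, the column u = B^{-1}A_j, and the pivot position):

```python
import numpy as np

def update_inverse(Binv, u, l):
    """Apply the row operations that turn column u into e_l to Binv,
    producing the inverse of the new basis matrix Bbar."""
    Binv = Binv.copy()
    Binv[l, :] /= u[l]                        # scale pivot row so u_l -> 1
    for i in range(Binv.shape[0]):
        if i != l:
            Binv[i, :] -= u[i] * Binv[l, :]   # zero out u_i for i != l
    return Binv
```

This costs O(m^2) per iteration, versus O(m^3) for refactoring B from scratch.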

(3) Full tableau implementation:

Ax = b  ⇔  B^{-1}Ax = B^{-1}b. Maintain [ B^{-1}b : B^{-1}A_1, ..., B^{-1}A_n ]; the current b.f.s. x can be read from B^{-1}b, and d_B = −B^{-1}A_j.

Update to B̄^{-1}[ b : A ]. We know B̄^{-1} = QB^{-1}, hence B̄^{-1}[ b : A ] = QB^{-1}[ b : A ] = Q[ B^{-1}b : B^{-1}A ]. So apply the row operations which convert u = B^{-1}A_j to e_l to the matrix [ B^{-1}b : B^{-1}A ]. (To find the exiting column A_B(l) and the step size θ*, compare x_B(i)/u_i for u_i > 0, i = 1, ..., m.)

Also maintain and update information about the reduced costs and the objective value.
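A pivot on the full tableau, including the 0-th (cost) row, can be sketched as follows (assuming numpy; T is an illustrative (m+1) × (n+1) array whose 0-th row holds [ −c_B'B^{-1}b : c' − c_B'B^{-1}A ] and whose rows 1..m hold [ B^{-1}b : B^{-1}A ]):

```python
import numpy as np

def tableau_pivot(T, l, j):
    """Row-reduce the tableau so that column j becomes the unit vector e_l.
    l is the pivot row (1..m), j the pivot column."""
    T = T.copy()
    T[l, :] /= T[l, j]                        # scale pivot row
    for i in range(T.shape[0]):
        if i != l:
            T[i, :] -= T[i, j] * T[l, :]      # includes the 0-th (cost) row
    return T
```

The same elementary row operations update the constraint rows, the reduced costs, and the objective value in one pass.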

Currently the 0-th row is [ 0 : c' ] − g'[ b : A ], where g' = c_B'B^{-1}. To update the 0-th row, add k × (pivot row) to the 0-th row for some scalar k so that the coefficient of the entering variable x_j in the 0-th row becomes 0.

Let column j be the pivot column and row l be the pivot row. Pivot row l of [ B^{-1}b : B^{-1}A ] is h'[ b : A ], where h' is the l-th row of B^{-1}. Hence, after the addition, the new 0-th row is still of the form [ 0 : c' ] − p'[ b : A ] for some p, with c_B̄(l) − p'A_B̄(l) = c_j − p'A_j = 0 (the new l-th basic variable is x_j).

(continued) Now for x_i basic with i ≠ l, c̄_i remains at 0. ( c̄_i = 0 before the pivot, and B^{-1}A_B(i) = e_i for i ≠ l, hence (B^{-1}A_B(i))_l = 0, so adding a multiple of the pivot row does not change these entries. )
⇒ c_B(i) − p'A_B(i) = 0 for all basic indices  ⇒ c_B' − p'B = 0'  ⇒ p' = c_B'B^{-1}
⇒ the new 0-th row is [ 0 : c' ] − c_B'B^{-1}[ b : A ], as desired.

Ex: (A numerical tableau example appears here in the slides; the basis changes from { x4, x5, x6 } to { x4, x1, x6 } after a pivot. The tableau entries are omitted in this transcript.)

(Remark) (1) The tableau form can also be derived as follows. Given min c'x, Ax = b, x ≥ 0, let A = [ B : N ], x = [ x_B, x_N ], c = [ c_B, c_N ], where B is the current basis. Also let z denote the value of the objective function, i.e. z = c'x.

Since all feasible solutions must satisfy Ax = b, they must satisfy
[ B : N ][ x_B, x_N ]' = b  ⇔  Bx_B + Nx_N = b  ⇔  Bx_B = b − Nx_N
⇔  x_B = B^{-1}b − B^{-1}Nx_N  ( Ix_B + B^{-1}Nx_N = B^{-1}b )
or, in matrix form, [ I : B^{-1}N ][ x_B, x_N ]' = B^{-1}b.

z = c’x = cB’xB + cN’xN = cB’ (B-1b – B-1NxN) + cN’xN (continued) Since all feasible solutions must satisfy these equations, we can plug in the expression for xB into the objective function to obtain z = c’x = cB’xB + cN’xN = cB’ (B-1b – B-1NxN) + cN’xN = cB’ B-1b + 0’xB + ( cN’ - cB’ B-1N) xN = cB’ B-1b + ( cN’ – p’N) xN , where p’ = cB’ B-1 or p’B = cB’ Hence obtain the tableau with respect to the current basis B z - cB’ B-1b = 0’xB + ( cN’ - cB’ B-1N) xN B-1b = I xB + B-1N xN ( =(B-1A)x ) See text for an example of full tableau implementation. Linear Programming 2011

(continued) (2) The tableau can also be obtained using the following logic. Note that elementary row operations on equations do not change the solution space, but the representation is changed. Starting from
−z + c_B'x_B + c_N'x_N = 0
Bx_B + Nx_N = b
we compute the multiplier vector y for the constraints by solving y'B = c_B' ( y' = c_B'B^{-1} ). Then we take a linear combination of the constraints using the weight vector −y and add it to the objective row, resulting in
−z + 0'x_B + ( c_N' − c_B'B^{-1}N )x_N = −c_B'B^{-1}b.
We multiply both sides of the constraints by B^{-1}:  Ix_B + B^{-1}Nx_N = B^{-1}b.

Practical Performance Enhancements

In commercial LP solvers, B^{-1} is not updated explicitly in the revised simplex method. Instead, we update a representation of B as B = LU, where L is lower triangular and U is upper triangular (with some row permutation allowed; LU decomposition, triangular factorization). We solve the systems (with proper updates) p'LU = c_B' and LUu = A_j; each system takes O(m^2) to solve and is numerically more stable than using B^{-1}. Moreover, less fill-in occurs in the LU decomposition than in B^{-1}, which is important when we solve large sparse problems.
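The LU idea can be illustrated with a toy Doolittle factorization. This is a sketch in plain Python without row pivoting (so it assumes nonzero pivots) and without sparsity handling; production codes use pivoted, sparsity-preserving factorizations with update schemes:

```python
def lu_decompose(B):
    """Doolittle LU: returns (L, U) with L unit lower triangular,
    U upper triangular, such that B = LU. No pivoting."""
    m = len(B)
    L = [[1.0 if i == k else 0.0 for k in range(m)] for i in range(m)]
    U = [row[:] for row in B]
    for k in range(m):
        for i in range(k + 1, m):
            L[i][k] = U[i][k] / U[k][k]
            for j in range(k, m):
                U[i][j] -= L[i][k] * U[k][j]
    return L, U

def solve_lu(L, U, rhs):
    """Solve (LU)x = rhs by forward then backward substitution, O(m^2)."""
    m = len(rhs)
    y = [0.0] * m
    for i in range(m):                        # forward: Ly = rhs
        y[i] = rhs[i] - sum(L[i][k] * y[k] for k in range(i))
    x = [0.0] * m
    for i in reversed(range(m)):              # backward: Ux = y
        x[i] = (y[i] - sum(U[i][k] * x[k] for k in range(i + 1, m))) / U[i][i]
    return x
```

The factorization is computed once per basis (and updated cheaply between pivots); each triangular solve, e.g. for Bu = A_j, then costs only O(m^2).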

3.4 Anticycling

1. Lexicographic pivoting rule

Def: u ∈ R^n is said to be lexicographically larger than v ∈ R^n if u ≠ v and the first nonzero component of u − v is positive (denoted u >_L v).

Lexicographic pivoting rule:
(1) Choose the entering variable x_j with c̄_j < 0. Compute the updated column u = B^{-1}A_j.
(2) For each i with u_i > 0, divide the i-th row of the tableau by u_i and choose the lexicographically smallest row. If row l is smallest, x_B(l) leaves the basis.
(See the example in the text.)

Ex: Suppose the pivot column is the third one ( j = 3 ). (The tableau for this example is omitted in the transcript.) The ratio is 1/3 for both the 1st and 3rd rows; the lexicographic comparison of the scaled rows then makes the third row the pivot row, and x_B(3) exits the basis.
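The lexicographic ratio test can be sketched in plain Python (an illustrative sketch; rows[i] is the i-th constraint row of the tableau [ B^{-1}b : B^{-1}A ] and u is the pivot column):

```python
def lexicographic_pivot_row(rows, u):
    """Among i with u_i > 0, return the index of the lexicographically
    smallest scaled row rows[i] / u_i, or None if no u_i > 0 (unbounded)."""
    best, best_row = None, None
    for i, ui in enumerate(u):
        if ui > 0:
            scaled = [x / ui for x in rows[i]]
            # Python list comparison is exactly lexicographic comparison
            if best is None or scaled < best:
                best, best_row = scaled, i
    return best_row
```

Since the scaled rows are distinct (the rows of [ B^{-1}b : B^{-1}A ] contain the rows of the invertible B^{-1}), the minimum is attained by a unique row, which breaks the ties of the ordinary ratio test.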

Thm: Suppose the rows in the current simplex tableau are lexicographically positive except the 0-th row, and the lexicographic rule is used. Then
(a) every row except the 0-th remains lexicographically positive;
(b) the 0-th row strictly increases lexicographically;
(c) the simplex method terminates finitely.

Pf) (a) Suppose x_j enters and x_B(l) leaves ( u_l > 0 ). Then
( l-th row ) / u_l  <_L  ( i-th row ) / u_i  for all i ≠ l with u_i > 0.
( l-th row ) → ( l-th row ) / u_l  (lexicographically positive).
For the i-th row, i ≠ l:
(1) u_i < 0: we add a positive multiple of the (lexicographically positive) l-th row to the i-th row → lexicographically positive.
(2) u_i > 0: (new i-th row) = (old i-th row) − ( u_i / u_l )(old l-th row) → lexicographically positive by the choice of l.
(3) u_i = 0: the row remains unchanged.
(b) Since c̄_j < 0, we add a positive multiple of the (lexicographically positive) l-th row to the 0-th row, hence the 0-th row strictly increases lexicographically.
(c) The 0-th row is determined by the current basis → no basis is repeated, since the 0-th row increases lexicographically → finite termination. ∎

Remarks: (1) To have initial lexicographically positive rows, permute the columns (variables) so that the basic variables come first in the current tableau.

(2) The idea of the lexicographic rule is related to the perturbation method. If no degenerate solution appears, the objective value strictly decreases, hence there is no cycling. So add a small positive ε_i to x_B(i), i = 1, ..., m, to obtain x_B(i) = (B^{-1}b)_i + ε_i, where 0 < ε_m ≪ ε_{m−1} ≪ ... ≪ ε_2 ≪ ε_1 ≪ 1.

(continued) It can be shown that no degenerate solution appears in subsequent iterations (think of the ε_i's as symbols), hence cycling is avoided. The lexicographic rule is an implementation of the perturbation method without using the ε_i's explicitly. Note that the coefficient matrices of the ε_i's and of the basic variables are both identity matrices; hence the simplex iterations (elementary row operations) result in the same coefficient matrices.

2. Bland's rule (smallest subscript rule)

(1) Find the smallest j for which c̄_j < 0 and have the column A_j enter the basis.
(2) Out of all variables x_i that are tied in the test for choosing an exiting variable, choose the one with the smallest value of i.

Pf) See Vašek Chvátal, Linear Programming, Freeman, 1983.

Note that the lexicographic rule and Bland's rule can be adopted and stopped at any time during the simplex iterations.
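Bland's two choices can be sketched in plain Python (an illustrative sketch; reduced_costs, xB, u, and basis name the current reduced costs, basic variable values, pivot column, and basic variable indices):

```python
def blands_entering(reduced_costs):
    """Smallest index j with negative reduced cost, or None if optimal."""
    for j, cj in enumerate(reduced_costs):
        if cj < 0:
            return j
    return None

def blands_leaving(xB, u, basis):
    """Among rows attaining the minimum ratio, return the row whose basic
    variable has the smallest subscript; None if no u_i > 0 (unbounded)."""
    candidates = [i for i in range(len(u)) if u[i] > 0]
    if not candidates:
        return None
    theta = min(xB[i] / u[i] for i in candidates)
    ties = [i for i in candidates if xB[i] / u[i] == theta]
    return min(ties, key=lambda i: basis[i])
```

Both rules are pure tie-breaking conventions, so they can be switched on only when degeneracy is detected and switched off again without affecting correctness.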

3.5 Finding an initial b.f.s.

Given (P): min c'x, s.t. Ax = b, x ≥ 0 ( b ≥ 0 ), introduce artificial variables and solve
(P-I): min y_1 + y_2 + ... + y_m, s.t. Ax + Iy = b, x ≥ 0, y ≥ 0.
Initial b.f.s.: x = 0, y = b.
If the optimal value > 0 → (P) is infeasible (if (P) is feasible → (P-I) has a solution with y = 0).
If the optimal value = 0 → all y_i = 0, so the current optimal solution x gives a feasible solution to (P). Drop the y_i variables and use the original objective function. However, we need a b.f.s. to use the simplex method, and we have trouble if some artificial variables remain basic in the optimal basis.
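Constructing the Phase-I auxiliary problem can be sketched as follows (assuming numpy; phase_one_problem is an illustrative name). Rows with b_i < 0 are flipped first so that b ≥ 0 and y = b is a feasible start:

```python
import numpy as np

def phase_one_problem(A, b):
    """Build the data of (P-I): min sum(y) s.t. Ax + Iy = b, x, y >= 0,
    returning (A_aux, b, c_aux, basis) with the artificials as initial basis."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    m, n = A.shape
    # flip rows with negative b_i so that b >= 0
    sign = np.where(b < 0, -1.0, 1.0)
    A, b = A * sign[:, None], b * sign
    A_aux = np.hstack([A, np.eye(m)])          # columns n..n+m-1 are artificial
    c_aux = np.concatenate([np.zeros(n), np.ones(m)])
    basis = list(range(n, n + m))              # initial b.f.s.: x = 0, y = b
    return A_aux, b, c_aux, basis
```

Any simplex routine can then be run on (A_aux, b, c_aux) from the given basis; an optimal value of 0 certifies feasibility of (P).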

Driving artificial variables out of the basis

(A tableau illustrating the pivot element appears here in the slides; the entries are omitted in this transcript.)

Suppose { x_B(1), ..., x_B(k) }, k < m, are the basic variables which come from the original variables. Suppose the artificial variable y_i is in the l-th position of the basis (the l-th component of the column for y_i in the optimal tableau is 1 and all other components are 0), and the l-th component of B^{-1}A_j is nonzero for some nonbasic original variable x_j. Then { B^{-1}A_B(1), ..., B^{-1}A_B(k) } = { e_1, ..., e_k } together with B^{-1}A_j are linearly independent
→ { A_B(1), ..., A_B(k), A_j } are linearly independent.
So bring x_j into the basis by pivoting (the solution is not changed).
If there is no x_j with (B^{-1}A_j)_l ≠ 0, then g'A = 0' (where g' is the l-th row of B^{-1}), so the rows of A are linearly dependent. Also g'b = 0, since Ax = b is feasible. Hence g'Ax = g'b ( 0'x = 0 ) is a redundant equation, and it is the l-th row of the tableau → eliminate it.

Remarks: Note that although we may eliminate the l-th row of the current tableau, this does not imply that the l-th row of the original tableau is redundant. To see this, suppose that the i-th artificial variable (with corresponding column e_i in the initial tableau) is in the l-th position of the current basis (hence B^{-1}e_i = e_l in the current tableau). Let g' be the l-th row of B^{-1}; then from g'e_i = 1, we know that the i-th component of g is 1. Then, from g'A = 0' and g'b = 0, the i-th row of [ b : A ] can be expressed as a linear combination of the other rows, hence the i-th row in the original tableau is redundant.

Sometimes we may want to retain the redundant rows when we solve the problem, because we do not want to change the problem data, so that we can perform sensitivity analysis later (i.e. change the data a little and solve the problem again). Then the artificial variables corresponding to the redundant equations should remain in the basis (we should not drop those variables). Such a variable will not leave the basis in subsequent iterations, since the corresponding row has all 0 coefficients.

If we do not drive the artificial variables out of the basis and perform the simplex iterations using the current b.f.s., it may happen that the values of the basic artificial variables become positive, which gives an infeasible solution to the original problem. To avoid this, a modification of the simplex method is needed, or we may use the bounded variable simplex method by setting the upper bounds of the remaining artificial variables to 0 (the lower bounds are 0).

Two-phase simplex method: see text p. 116.

Big-M method: use the objective function min c'x + M( y_1 + ... + y_m ), where M is a large number.

See text sec. 3.6 for the definition of a k-dimensional simplex and the interpretation of the simplex method by column geometry.

3.7 Computational efficiency of the simplex method

Each iteration of the simplex method takes time polynomial in m, n and the length of the encoding of the data, but the number of iterations is exponential in the worst case. Empirically, the number of iterations is O(m) and O(log n). For the known pivoting rules, there exist counterexamples on which the simplex method takes an exponential number of iterations, hence the simplex algorithm is not a polynomial time algorithm. (Still, there exists a possibility that some other pivoting rule may provide polynomial running time, though it may be very difficult to prove.)