Chapter 2. Simplex method


1 Chapter 2. Simplex method
Geometric view: max x_1 + 2x_2, s.t. x_1 ≤ 2, x_2 ≤ 2, 4x_1 + 3x_2 ≤ 10, x_1, x_2 ≥ 0.
(Figure: the feasible region in the (x_1, x_2)-plane, with extreme points (0,2), (1,2), (2,2/3), and (2,0).)
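As a quick numerical check of this example (not part of the original slides), here is a minimal sketch using scipy.optimize.linprog; since linprog minimizes, the objective is negated.

```python
# Numerical check of the slide's example LP (illustrative, not from the notes).
from scipy.optimize import linprog

c = [-1, -2]                      # minimize -(x1 + 2*x2), i.e. maximize x1 + 2*x2
A_ub = [[1, 0],                   #  x1        <= 2
        [0, 1],                   #        x2  <= 2
        [4, 3]]                   # 4x1 + 3x2  <= 10
b_ub = [2, 2, 10]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)            # expected: optimum at (1, 2) with value 5
```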

2 Let a ∈ R^n, b ∈ R. Geometric intuition for the solution sets of
{x : a'x = 0}, {x : a'x ≤ 0}, {x : a'x ≥ 0},
{x : a'x = b}, {x : a'x ≤ b}, {x : a'x ≥ b}.

3 Geometry in 2-D
(Figure: the line {x : a'x = 0} through the origin, the normal vector a, and the two halfspaces {x : a'x ≥ 0} (the side a points to) and {x : a'x ≤ 0}.)

4 Let z be any point satisfying a'z = b. Then
{x : a'x = b} = {x : a'x = a'z} = {x : a'(x − z) = 0}.
Hence x − z = y, where y is any solution to a'y = 0, i.e., x = y + z. So the points x satisfying a'x = b are obtained by adding z to every point y satisfying a'y = 0. Similarly for {x : a'x ≤ b} and {x : a'x ≥ b}.
(Figure: the hyperplane {x : a'x = b} as a translate of {x : a'x = 0} by z, with the normal vector a and the halfspaces {x : a'x ≤ b} and {x : a'x ≥ b}.)
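A small numerical illustration of this translation argument (my own sketch; it uses the example constraint 4x_1 + 3x_2 = 10 as the hyperplane a'x = b and an arbitrary particular point z).

```python
# Check {x : a'x = b} = z + {y : a'y = 0} on a concrete 2-D example.
import numpy as np

a = np.array([4.0, 3.0])
b = 10.0
z = np.array([1.0, 2.0])              # particular point: a'z = 4 + 6 = 10 = b
assert np.isclose(a @ z, b)

# Any y orthogonal to a, e.g. y = t * (-3, 4), satisfies a'y = 0,
# so x = z + y stays on the hyperplane a'x = b.
for t in (-2.0, 0.5, 3.0):
    y = t * np.array([-3.0, 4.0])
    x = z + y
    assert np.isclose(a @ y, 0.0)
    assert np.isclose(a @ x, b)
print("translation check passed")
```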

5 Points satisfying 4x_1 + 3x_2 ≤ 10 (a halfspace)
(Figure: the halfspace 4x_1 + 3x_2 ≤ 10 in the (x_1, x_2)-plane, with the normal vector (4, 3) and the boundary point (2, 2/3) marked.)

6 Def: The set of points that can be described in the form {x ∈ R^n : Ax ≤ b} is called a polyhedron (an intersection of a finite number of halfspaces). Hence, linear programming is the problem of optimizing (maximizing or minimizing) a linear function over a polyhedron.
Thm: The polyhedron {x ∈ R^n : Ax ≤ b} is a convex set. Pf) HW earlier.
To understand the simplex method, we will consider geometric intuition and algebraic logic together throughout the lectures.

7 Solving LP graphically
(Figure: the feasible region of the example LP, with extreme points (0,2), (1,2), (2,2/3), and (2,0).)

8 Properties of optimal solutions
Thm: If an LP has a unique optimal solution, the unique optimal solution is an extreme point.
Pf) Suppose x* is the unique optimal solution and it is not an extreme point of the feasible set. Then there exist feasible points y, z ≠ x*, y ≠ z, such that x* = λy + (1 − λ)z for some 0 < λ < 1. Then c'x* = λc'y + (1 − λ)c'z. If c'x* ≠ c'y, then either c'y > c'x* or c'z > c'x*, contradicting the optimality of x*. If c'x* = c'y, then y is also an optimal solution, contradicting the uniqueness of x*. □
Thm: Suppose the polyhedron P has at least one extreme point. If an LP over P has an optimal solution, it has an extreme point optimal solution.
Pf) Not given here. □
The above theorems indicate that we need to examine only extreme points to find an optimal solution of an LP.

9 Multiple optimal solutions
(Figure: the same feasible region with extreme points (0,2), (1,2), (2,2/3), (2,0), illustrating an objective for which an entire edge of the polyhedron is optimal.)

10 Obtaining extreme points algebraically
The extreme point (0, 2) can be identified by solving the system of equations x_1 = 0, x_2 = 2. Similarly, (1, 2) can be identified by 4x_1 + 3x_2 = 10, x_2 = 2. Note that 4x_1 + 3x_2 = 10, x_2 = 0 gives (5/2, 0), which is not an extreme point since it is not in the polyhedron.
(Figure: the feasible region with the points (0,2), (1,2), (2,2/3), and (2,0) marked.)
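The same computation can be sketched in a few lines of numpy (illustrative code, not from the slides). The constraint matrix below encodes the example polyhedron, with the nonnegativity constraints written as −x ≤ 0.

```python
# Identify candidate extreme points by setting two inequalities to equality
# and checking the remaining inequalities.
import numpy as np

A = np.array([[1.0, 0.0],     #  x1        <= 2
              [0.0, 1.0],     #        x2  <= 2
              [4.0, 3.0],     # 4x1 + 3x2  <= 10
              [-1.0, 0.0],    # -x1        <= 0
              [0.0, -1.0]])   #       -x2  <= 0
b = np.array([2.0, 2.0, 10.0, 0.0, 0.0])

def candidate(i, j):
    """Solve the two chosen constraints as equalities, then test feasibility."""
    x = np.linalg.solve(A[[i, j]], b[[i, j]])
    feasible = bool(np.all(A @ x <= b + 1e-9))
    return x, feasible

print(candidate(3, 1))   # x1 = 0, x2 = 2          -> (0, 2), feasible: extreme point
print(candidate(2, 1))   # 4x1 + 3x2 = 10, x2 = 2  -> (1, 2), feasible: extreme point
print(candidate(2, 4))   # 4x1 + 3x2 = 10, x2 = 0  -> (5/2, 0), infeasible (x1 <= 2 violated)
```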

11 Thm: Suppose the polyhedron P = {x ∈ R^n : Ax ≤ b} is given (A: m×n) and P has at least one extreme point (the space is n-dimensional). Let M = {1, 2, …, m} be the index set of the constraints. Let x* ∈ P and let I ⊆ M be the set of constraints that hold at equality at x*, i.e., Σ_{j=1}^n a_ij x*_j = b_i for i ∈ I and Σ_{j=1}^n a_ij x*_j < b_i for i ∉ I. Then x* is an extreme point of P if and only if the rank of A_I is n, where A_I is the submatrix of A obtained by choosing the rows i ∈ I of A. Pf) Not given here. □
Using the above theorem, an extreme point of a polyhedron can be identified by setting some n of the inequalities to equalities and obtaining the solution satisfying those equalities (the chosen rows of the A matrix must be linearly independent so that the system gives a unique solution). If the obtained point satisfies the other inequalities, it is in P and is an extreme point of the polyhedron. If the obtained point is not in P, it is not an extreme point. See the earlier example.
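A hedged sketch of how this rank test could be checked numerically for the running example; the helper is_extreme_point is hypothetical, not from the lecture.

```python
# x* in P is an extreme point iff the rows active at x* have rank n (here n = 2).
import numpy as np

A = np.array([[1.0, 0.0], [0.0, 1.0], [4.0, 3.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.array([2.0, 2.0, 10.0, 0.0, 0.0])

def is_extreme_point(x, tol=1e-9):
    x = np.asarray(x, dtype=float)
    if np.any(A @ x > b + tol):               # x must be in P first
        return False
    active = np.isclose(A @ x, b, atol=tol)   # index set I of tight constraints
    return np.linalg.matrix_rank(A[active]) == A.shape[1]

print(is_extreme_point([1.0, 2.0]))   # True  (x2 <= 2 and 4x1 + 3x2 <= 10 active, rank 2)
print(is_extreme_point([0.5, 2.0]))   # False (only x2 <= 2 active, rank 1)
```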

12 Note: If P is given as P = {x ∈ R^n : Ax ≤ b, Cx = d}, where A: m×n and C: k×n, the rows of the matrix C (linearly independent row vectors) should be included among the n equalities used to identify an extreme point.
The above result needs a rigorous proof, but we use it in this class without proof since the proof is quite involved and the result itself is crucial to understanding the behavior of the simplex method.
From the previous theorems, we know that we only need to consider extreme points of the polyhedron to find an optimal solution. The preceding theorem gives a means of identifying an extreme point algebraically.

13 Idea of the algorithm? Enumeration of all extreme points?
There are at most C(m+n, n) choices for a standard LP (Ax ≤ b, −x ≤ 0; A: m×n, so (m+n) constraints and n variables), which is quite large. (This is the number of ways to choose the n inequalities that hold at equality out of the (m+n) inequalities.)
Algorithm strategy: from an extreme point, move in each iteration to a neighboring extreme point that gives a better (precisely speaking, not worse) solution.
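For a sense of scale, a short computation of C(m+n, n) for a few sizes (illustrative only; the sizes are made up).

```python
# How fast the number of candidate systems C(m+n, n) grows for a standard-form LP
# with m inequalities Ax <= b plus n nonnegativity constraints.
from math import comb

for m, n in [(3, 2), (10, 10), (50, 50), (100, 100)]:
    print(f"m={m:>3}, n={n:>3}: C(m+n, n) = {comb(m + n, n)}")
# Complete enumeration becomes hopeless even for modest m and n,
# which motivates moving between neighboring extreme points instead.
```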

14 Algebraic derivation of extreme points for a standard LP
Any LP problem must be converted to a problem having only equations and no inequalities, except the nonnegativity constraints, before the simplex method can be applied (details later).
Consider the LP problem max c'x, Ax = b, −x ≤ 0 (x ≥ 0) (we solve this form throughout), where A: m×n has full row rank (n ≥ m), and let P = {x ∈ R^n : Ax = b, −x ≤ 0}.
An extreme point of P can be obtained as the solution satisfying Ax = b and x_i = 0 for some indices i, where the rank of the coefficient matrix is n. We choose a nonsingular matrix that includes the matrix A as a submatrix (note that the number of equations can be > n). Let N be the index set such that x_i = 0 for i ∈ N (there are n − m of them).

15 (continued) Let A = [B : N], where B: m×m and nonsingular, N: m×(n−m), and N is the submatrix of A having the columns associated with the variables set to 0 (x_i, i ∈ N). Then an extreme point can be found by solving Ax = b, −x_N = 0:
[B : N] [x_B; x_N] = b, −x_N = 0
⇒ B x_B + N x_N = b, −x_N = 0 (or B x_B = b − N x_N, −x_N = 0)
⇒ multiplying both sides by B^{-1}: B^{-1}B x_B + B^{-1}N x_N = B^{-1}b, i.e.,
I x_B + B^{-1}N x_N = B^{-1}b, −x_N = 0 (x_B = B^{-1}b − B^{-1}N x_N, −x_N = 0).
The solution is x_B = B^{-1}b, x_N = 0. This is the basic solution we mentioned earlier (refer to note-2, p22, p29). By the choice of the variables we set to 0, we obtain different basic solutions. The variables x_B are called basic variables, and x_N are called nonbasic variables.
If the obtained solution also satisfies nonnegativity, x_B = B^{-1}b ≥ 0, we have a basic feasible solution (it satisfies nonnegativity of the variables). Such a point is in the polyhedron P = {x : Ax = b, −x ≤ 0} and hence is an extreme point.
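A minimal numpy sketch of this computation, assuming A, b, and the set of basic column indices are given; the data and the helper basic_solution below are made up for illustration and are not from the notes.

```python
# Compute the basic solution x_B = B^{-1} b for a chosen set of basic columns.
import numpy as np

A = np.array([[1.0, 0.0, 1.0, 0.0],      # A: m x n with full row rank (m = 2, n = 4)
              [0.0, 1.0, 1.0, 1.0]])
b = np.array([4.0, 6.0])

def basic_solution(A, b, basic):
    """Return the basic solution with basic variables 'basic'; fails if B is singular."""
    m, n = A.shape
    B = A[:, basic]                       # m x m basis matrix
    x = np.zeros(n)
    x[basic] = np.linalg.solve(B, b)      # x_B = B^{-1} b, x_N = 0
    return x

x = basic_solution(A, b, basic=[0, 1])
print(x, "feasible:", bool(np.all(x >= 0)))   # [4. 6. 0. 0.] -> basic feasible solution
```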

16 Ex: The extreme point (1, 0, 0) can be obtained from x_1 + x_2 + x_3 = 1, x_2 = 0, x_3 = 0. Since (1, 0, 0) also satisfies −x_1 ≤ 0, it is an extreme point.
(Figure: the set {x ≥ 0 : x_1 + x_2 + x_3 = 1} in R^3, the triangle with vertices (1,0,0), (0,1,0), (0,0,1).)

17 Why do we need B nonsingular?
The (nonsingular) coefficient matrix defining an extreme point (Ax = b is included, assuming A has full row rank) has the block form
[ B   N ] [x_B]   [b]    (Ax = b, m rows)
[ 0  −I ] [x_N] = [0]    (−x_N = 0, n−m rows)
where B is m×m, N is m×(n−m), and −I is the (n−m)×(n−m) negative identity.

18 Suppose A is n×n. Then |A| (the determinant of A) is given by
|A| = Σ_{j=1}^n a_ij A_ij for any row i = Σ_{i=1}^n a_ij A_ij for any column j,
where A_ij = (−1)^{i+j} |M_ij|, and M_ij is the (n−1)×(n−1) square submatrix of A obtained by deleting the i-th row and j-th column of A (|M_ij| is called the minor of a_ij, and A_ij is called the cofactor of a_ij). Also, A is nonsingular if and only if |A| ≠ 0, and a square matrix A is nonsingular ⟺ Ax = b has a unique solution.
Hence the condition that the system in the previous slide has a unique solution is equivalent to the condition that its coefficient matrix is nonsingular ⟺ the determinant of the coefficient matrix ≠ 0 ⟺ the determinant of the B matrix ≠ 0 (the coefficient matrix [B N; 0 −I] is block triangular, so its determinant is ±|B|) ⟺ B is nonsingular. Hence we need the B matrix to be nonsingular when we define an extreme point, in which case the corresponding solution is a basic solution (and a basic feasible solution if it also satisfies nonnegativity).
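For completeness, a small sketch of cofactor expansion along the first row together with the library determinant; this is only illustrative (cofactor expansion is exponential in n), since in practice one would rely on numpy directly.

```python
# Cofactor expansion of the determinant along the first row, as on this slide.
import numpy as np

def det_cofactor(A):
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        M_0j = np.delete(np.delete(A, 0, axis=0), j, axis=1)   # minor of a_0j
        total += (-1) ** j * A[0, j] * det_cofactor(M_0j)       # cofactor term
    return total

B = np.array([[1.0, 0.0], [4.0, 3.0]])
print(det_cofactor(B), np.linalg.det(B))   # both approx. 3 -> B is nonsingular
```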

19 The simplex method searches only basic feasible solutions, which is tantamount to searching the extreme points of the corresponding polyhedron, until it finds an optimal solution.

