Linear Programming 2011: Chap 2. The Geometry of LP


In the text, a polyhedron is defined as P = { x ∈ R^n : Ax ≥ b }, so some of our earlier results should be taken with modifications.

Thm 2.1:
(a) The intersection of convex sets is convex.
(b) Every polyhedron is a convex set.
(c) A convex combination of a finite number of elements of a convex set also belongs to that set. (Recall that S closed under convex combinations of 2 points implies S closed under convex combinations of a finite number of points.)
(d) The convex hull of a finite number of vectors (a polytope) is convex.

Pf)
(a) Let x, y ∈ ∩_{i∈I} S_i, with each S_i convex. Then x, y ∈ S_i for all i, so λx + (1-λ)y ∈ S_i for all i since S_i is convex. Hence λx + (1-λ)y ∈ ∩_{i∈I} S_i, so ∩_{i∈I} S_i is convex.
(b) A halfspace { x : a'x ≥ b } is convex. P is an intersection of halfspaces, so from (a), P is convex. (Or we may directly show A(λx + (1-λ)y) ≥ b.)
(c) Use induction. The statement is true for k = 2 by definition. Suppose it holds for k, and consider λ_1, ..., λ_{k+1} ≥ 0 summing to 1 with λ_{k+1} ≠ 1. Then Σ_{i=1}^{k+1} λ_i x_i = λ_{k+1} x_{k+1} + (1-λ_{k+1}) Σ_{i=1}^{k} (λ_i/(1-λ_{k+1})) x_i. The coefficients λ_i/(1-λ_{k+1}) are ≥ 0 and sum to 1, hence Σ_{i=1}^{k} (λ_i/(1-λ_{k+1})) x_i ∈ S, and therefore Σ_{i=1}^{k+1} λ_i x_i ∈ S.
(d) Let S be the convex hull of the vectors x_1, ..., x_k and let y, z ∈ S, say y = Σ_{i=1}^{k} ξ_i x_i and z = Σ_{i=1}^{k} ζ_i x_i with ξ_i, ζ_i ≥ 0 summing to 1. Then λy + (1-λ)z = λ Σ_i ξ_i x_i + (1-λ) Σ_i ζ_i x_i = Σ_{i=1}^{k} (λξ_i + (1-λ)ζ_i) x_i. The coefficients λξ_i + (1-λ)ζ_i are ≥ 0 and sum to 1, so λy + (1-λ)z is a convex combination of the x_i and hence λy + (1-λ)z ∈ S. ∎
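As a quick numerical illustration of Thm 2.1(b), the sketch below (a minimal example with a polyhedron and points chosen by me, not taken from the slides) checks that a convex combination of two feasible points stays feasible.

```python
import numpy as np

# Hypothetical polyhedron P = { x in R^2 : Ax >= b } (two constraints).
A = np.array([[1.0, 1.0],
              [1.0, 0.0]])
b = np.array([1.0, 0.0])

def in_P(x, tol=1e-9):
    """Feasibility check for P = { x : Ax >= b }."""
    return bool(np.all(A @ x >= b - tol))

x = np.array([1.0, 2.0])          # feasible point
y = np.array([3.0, 0.5])          # another feasible point
lam = 0.3
z = lam * x + (1 - lam) * y       # convex combination
print(in_P(x), in_P(y), in_P(z))  # True True True, as convexity predicts
```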

Extreme points, vertices, and b.f.s.'s

Def:
(a) Extreme point (as we defined earlier).
(b) x ∈ P is a vertex if ∃ c ∈ R^n such that c'x < c'y for all y ∈ P, y ≠ x (i.e. x is the unique optimal solution of min c'x, x ∈ P).
(c) Consider a polyhedron P and x* ∈ R^n. Then x* is a basic solution if all equality constraints are active at x* and there are n linearly independent constraints among the constraints active at x*. (x* is a basic feasible solution if it is a basic solution and x* ∈ P.)

Note: Earlier, we defined the extreme point the same way as in the text, and the vertex as a 0-dimensional face (dim(P) + rank(A=, b=) = n), which coincides with the basic feasible solution in the text. We defined basic solutions (and b.f.s.'s) only for the standard form LP (x_B = B^{-1}b, x_N = 0). Definition (b) is new; it gives an equivalent characterization of an extreme point, and (b) can be extended to characterize a face F of P.

Fig. 2.6: P = { (x1, x2, x3) : x1 + x2 + x3 = 1, x1, x2, x3 ≥ 0 }. Three constraints are active at each of A, B, C, D; only two constraints are active at E. Note that D is not a basic solution since it does not satisfy the equality constraint. However, if P is written as P = { (x1, x2, x3) : x1 + x2 + x3 ≥ 1, x1 + x2 + x3 ≤ 1, x1, x2, x3 ≥ 0 }, then D is a basic solution by the definition in the text, i.e. whether a solution is basic depends on the representation of P.
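A minimal sketch of definition (c) above: a point is a basic solution when all equality constraints are active and the active constraints contain n linearly independent rows. The helper name is_basic_solution is my own; the usage checks a unit vector of the Fig. 2.6 polyhedron and, for contrast, the origin, a point at which the equality constraint fails (as happens for D).

```python
import numpy as np

def is_basic_solution(x, A_ge, b_ge, A_eq=None, b_eq=None, tol=1e-9):
    """Textbook definition (c): every equality constraint is active at x and
    the constraints active at x contain n linearly independent rows."""
    n = len(x)
    rows = []
    if A_eq is not None:
        if not np.allclose(A_eq @ x, b_eq, atol=tol):
            return False                      # some equality constraint is violated
        rows.append(A_eq)
    active = np.abs(A_ge @ x - b_ge) <= tol   # inequality constraints a_i'x >= b_i active at x
    rows.append(A_ge[active])
    return np.linalg.matrix_rank(np.vstack(rows), tol=1e-8) == n

# Usage on P = { x : x1 + x2 + x3 = 1, x >= 0 } (Fig. 2.6):
A_eq, b_eq = np.array([[1.0, 1.0, 1.0]]), np.array([1.0])
A_ge, b_ge = np.eye(3), np.zeros(3)           # the constraints x_i >= 0
print(is_basic_solution(np.array([1.0, 0.0, 0.0]), A_ge, b_ge, A_eq, b_eq))  # True (a b.f.s.)
print(is_basic_solution(np.array([0.0, 0.0, 0.0]), A_ge, b_ge, A_eq, b_eq))  # False (equality not satisfied)
```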

Fig. 2.7: A, B, C, D, E, F are all basic solutions; C, D, E, F are basic feasible solutions.

Comparison of definitions in the notes and the text:
- Extreme point. Notes: geometric definition. Text: the same geometric definition.
- Vertex. Notes: 0-dimensional face. Text: existence of a c vector which makes x* the unique optimal solution of the LP.
- Basic solution, b.f.s. Notes: defined for the standard form; set n-m variables to 0 and solve the remaining system; b.f.s. if nonnegative. Text: defined for a general polyhedron; the equality constraints are satisfied and n linearly independent constraints are active (0-dimensional face if feasible).

Thm 2.3: For x* ∈ P, the statements "x* is a vertex", "x* is an extreme point", and "x* is a b.f.s." are equivalent.
Pf) We follow the definitions given in the text. We already showed in the notes that extreme point and 0-dimensional face (A_I x* = b_I, A_I of rank n; b.f.s. in the text) are equivalent. To show all are equivalent, take the following steps: (1) x* vertex ⇒ x* extreme point, (2) x* extreme point ⇒ x* b.f.s., (3) x* b.f.s. ⇒ x* vertex.
(1) x* vertex ⇒ x* extreme point:
Suppose x* is a vertex, i.e. ∃ c ∈ R^n such that x* is the unique minimizer of min c'x, x ∈ P. If y, z ∈ P and y, z ≠ x*, then c'x* < c'y and c'x* < c'z. Hence c'x* < c'(λy + (1-λ)z) for 0 ≤ λ ≤ 1, so λy + (1-λ)z ≠ x*. Hence x* cannot be expressed as a convex combination of two other points of P, i.e. x* is an extreme point.

(continued)
(2) x* extreme point ⇒ x* b.f.s.:
Suppose x* is not a b.f.s. Let I = { i : a_i'x* = b_i }. Since x* is not a b.f.s., the number of linearly independent vectors a_i, i ∈ I, is less than n. Hence there exists a nonzero d ∈ R^n such that a_i'd = 0 for all i ∈ I. Consider y = x* + εd and z = x* - εd. Then y, z ∈ P for sufficiently small positive ε, and x* = (y + z)/2, which implies x* is not an extreme point.
(3) x* b.f.s. ⇒ x* vertex:
Let x* be a b.f.s. and let I = { i : a_i'x* = b_i }. Let c = Σ_{i∈I} a_i. Then c'x* = Σ_{i∈I} a_i'x* = Σ_{i∈I} b_i. For every x ∈ P we have c'x = Σ_{i∈I} a_i'x ≥ Σ_{i∈I} b_i = c'x*, hence x* is optimal. For uniqueness, equality holds iff a_i'x = b_i for all i ∈ I. Since x* is a b.f.s., it is the unique solution of a_i'x = b_i, i ∈ I. Hence x* is a vertex. ∎

Note: Whether x* is a basic solution depends on the representation of P. However, x* is a b.f.s. iff x* is an extreme point, and being an extreme point is independent of the representation of P. Hence the property of being a b.f.s. is also independent of the representation of P.
Cor 2.1: For a polyhedron P ≠ ∅, there can be only a finite number of basic or basic feasible solutions.
Def: Two distinct basic solutions are said to be adjacent if we can find n-1 linearly independent constraints that are active at both of them. (In Fig 2.7, D and E are adjacent to B; A and C are adjacent to D.) If two adjacent basic solutions are also feasible, then the line segment that joins them is called an edge of the feasible set (a one-dimensional face).

Polyhedra in standard form

Thm 2.4: Let P = { x : Ax = b, x ≥ 0 }, where A is m × n with full row rank. Then x is a basic solution iff x satisfies Ax = b and there exist indices B(1), ..., B(m) such that the columns A_B(1), ..., A_B(m) are linearly independent and x_i = 0 for i ≠ B(1), ..., B(m).
Pf) See text. (To find a basic solution, choose m linearly independent columns A_B(1), ..., A_B(m), set x_i = 0 for all i ≠ B(1), ..., B(m), then solve Ax = b for x_B(1), ..., x_B(m).)
Def: basic variable, nonbasic variable, basis, basic columns, basis matrix B (see text). (Bx_B = b ⇒ x_B = B^{-1}b.)
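A small sketch of the recipe in Thm 2.4, under my own toy data: enumerate choices of m linearly independent columns, set the other variables to 0, and solve B x_B = b. The function name basic_solutions is hypothetical.

```python
import itertools
import numpy as np

def basic_solutions(A, b, tol=1e-9):
    """Enumerate basic solutions of { x : Ax = b, x >= 0 } via Thm 2.4."""
    m, n = A.shape
    out = []
    for cols in itertools.combinations(range(n), m):
        B = A[:, cols]
        if np.linalg.matrix_rank(B, tol=1e-8) < m:
            continue                              # chosen columns not linearly independent
        x = np.zeros(n)
        x[list(cols)] = np.linalg.solve(B, b)     # x_B = B^{-1} b, x_N = 0
        out.append((cols, x, bool(np.all(x >= -tol))))  # last entry: basic *feasible*?
    return out

# Toy example: the Fig. 2.6 constraint x1 + x2 + x3 = 1 with x >= 0 (m = 1, n = 3).
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
for cols, x, feasible in basic_solutions(A, b):
    print(cols, x, "b.f.s." if feasible else "infeasible")
```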

Def: For standard form problems, we say that two bases are adjacent if they share all but one basic column.
Note: A basis uniquely determines a basic solution. Hence two different basic solutions come from two different bases, but two different bases may correspond to the same basic solution (e.g. when b = 0). Similarly, two adjacent basic solutions come from two adjacent bases, and two adjacent bases with different basic solutions give two adjacent basic solutions. However, two adjacent bases alone do not necessarily give two adjacent basic solutions: the two solutions may coincide.
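Adjacency of bases is just a set comparison on the basic column indices; a minimal sketch (helper name of my own choosing):

```python
def bases_adjacent(B1, B2):
    """Two bases, given as collections of basic column indices of the same size,
    are adjacent iff they share all but one column."""
    return len(set(B1) ^ set(B2)) == 2  # exactly one column swapped out

print(bases_adjacent([0, 1, 2], [0, 1, 3]))  # True
print(bases_adjacent([0, 1, 2], [0, 4, 3]))  # False
```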

Check that the full row rank assumption on A results in no loss of generality.
Thm 2.5: Let P = { x : Ax = b, x ≥ 0 } ≠ ∅, where A is m × n with rank k < m, and let Q = { x : A_I x = b_I, x ≥ 0 }, where I = { i_1, ..., i_k } indexes k linearly independent rows. Then P = Q.
Pf) Suppose the first k rows of A are linearly independent. P ⊆ Q is clear; we show Q ⊆ P. Every row a_i' of A can be expressed as a_i' = Σ_{j=1}^{k} λ_{ij} a_j'. Hence, for x ∈ P, b_i = a_i'x = Σ_{j=1}^{k} λ_{ij} a_j'x = Σ_{j=1}^{k} λ_{ij} b_j for i = 1, ..., m, i.e. each b_i is the same linear combination of the b_j, j ∈ I. Now suppose y ∈ Q. Then for i = 1, ..., m, a_i'y = Σ_{j=1}^{k} λ_{ij} a_j'y = Σ_{j=1}^{k} λ_{ij} b_j = b_i. Hence y ∈ P, so Q ⊆ P. ∎
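In practice the index set I of Thm 2.5 can be found by a greedy rank test on the rows; a small sketch with a made-up redundant row (the helper independent_rows is my own name):

```python
import numpy as np

def independent_rows(A, tol=1e-8):
    """Greedily pick a maximal set I of linearly independent row indices of A."""
    I = []
    for i in range(A.shape[0]):
        if np.linalg.matrix_rank(A[I + [i], :], tol=tol) == len(I) + 1:
            I.append(i)                     # row i adds rank, keep it
    return I

# Made-up example: the third row equals the sum of the first two (redundant).
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])
print(independent_rows(A))  # [0, 1]
```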

Degeneracy

Def 2.10: A basic solution x ∈ R^n is said to be degenerate if more than n of the constraints are active at x.
Def 2.11: Let P = { x ∈ R^n : Ax = b, x ≥ 0 }, with A an m × n matrix of full row rank. Then x is a degenerate basic solution if more than n - m of the components of x are 0 (i.e. some basic variables have value 0).
For a standard form LP, having more than n - m variables at 0 in a basic feasible solution x* means that more than n - m of the nonnegativity constraints are active at x*, in addition to the m constraints in Ax = b. The solution is identified by choosing n - m nonbasic variables (value 0); hence, depending on the choice of nonbasic variables, we obtain different bases, but the solution is the same.
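Def 2.11 reduces to counting zero components; a minimal sketch on made-up numbers (is_degenerate is a hypothetical helper name):

```python
import numpy as np

def is_degenerate(x, m, tol=1e-9):
    """Def 2.11: a basic solution of { Ax = b, x >= 0 }, with A m-by-n of full
    row rank, is degenerate if more than n - m of its components are zero."""
    n = len(x)
    return int(np.sum(np.abs(x) <= tol)) > n - m

# Made-up example with n = 4, m = 2 (so nondegenerate means exactly 2 zeros).
print(is_degenerate(np.array([2.0, 0.0, 0.0, 0.0]), m=2))  # True: 3 zeros > n - m = 2
print(is_degenerate(np.array([2.0, 3.0, 0.0, 0.0]), m=2))  # False: exactly n - m zeros
```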

Fig 2.9: A and C are degenerate basic feasible solutions; B and E are nondegenerate; D is a degenerate basic solution.

Fig 2.11: (n-m)-dimensional illustration of degeneracy; the figure shows the constraint boundaries x_1 = 0, ..., x_6 = 0. Here n = 6, m = 4. A is nondegenerate, with basic variables x_1, x_2, x_3, x_6. B is degenerate: we can choose x_1, x_6 as the nonbasic variables, and other possibilities are to choose x_1, x_5, or x_5, x_6.

Degeneracy is not a purely geometric property; it may depend on the representation of the polyhedron.
Example: P = { x : Ax = b, x ≥ 0 } with A an m × n matrix, and P' = { x : Ax ≤ b, -Ax ≤ -b, x ≥ 0 }. We know that P = P', but the representations are different. Suppose x* is a nondegenerate basic feasible solution of P; then exactly n - m of the variables x_i* are equal to 0. For P', at the same basic feasible solution x*, we have n - m variables set to 0 and an additional 2m constraints satisfied with equality. Hence n + m constraints are active and x* is degenerate.

Existence of extreme points

Def 2.12: A polyhedron P ⊆ R^n contains a line if there exist a vector x ∈ P and a nonzero d ∈ R^n such that x + λd ∈ P for all λ ∈ R. Note that if d defines a line in P, then A(x + λd) ≥ b for all λ ∈ R, which forces Ad = 0. Hence d is a vector in the lineality space S (in the decomposition P = S + K + Q).
Thm 2.6: Let P = { x ∈ R^n : a_i'x ≥ b_i, i = 1, ..., m } ≠ ∅. Then the following are equivalent:
(a) P has at least one extreme point.
(b) P does not contain a line.
(c) There exist n vectors among a_1, ..., a_m that are linearly independent.
Pf) See proof in the text.
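Condition (c) is a rank test on the constraint matrix; a minimal sketch with two toy constraint matrices of my own (has_extreme_point is a hypothetical name):

```python
import numpy as np

def has_extreme_point(A, tol=1e-8):
    """Thm 2.6(c): a nonempty P = { x : Ax >= b } has an extreme point
    iff n of the rows of A are linearly independent, i.e. rank(A) = n."""
    return np.linalg.matrix_rank(A, tol=tol) == A.shape[1]

A1 = np.array([[1.0, 0.0], [0.0, 1.0]])  # e.g. the nonnegative orthant: pointed
A2 = np.array([[1.0, 1.0]])              # a single halfspace in R^2: contains a line
print(has_extreme_point(A1), has_extreme_point(A2))  # True False
```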

Note that the conditions given in Thm 2.6 mean that the lineality space S = {0}.
Cor 2.2: Every nonempty bounded polyhedron (polytope) and every nonempty polyhedron in standard form has at least one basic feasible solution (extreme point).

Optimality of extreme points

Thm 2.7: Consider the LP of minimizing c'x over a polyhedron P. Suppose P has at least one extreme point and there exists an optimal solution. Then there exists an optimal solution which is an extreme point of P.
Pf) See text.
Thm 2.8: Consider the LP of minimizing c'x over a polyhedron P. Suppose P has at least one extreme point. Then either the optimal cost is -∞, or there exists an extreme point which is optimal.
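Thm 2.7 can be seen numerically on a small bounded LP: the solver returns a vertex of the feasible set. The example below is my own and uses SciPy's linprog (assumed available); it is only an illustration, not part of the slides.

```python
import numpy as np
from scipy.optimize import linprog

# Toy LP: minimize -x1 - x2 subject to x1 + 2*x2 <= 4, 2*x1 + x2 <= 4, x >= 0.
c = np.array([-1.0, -1.0])
A_ub = np.array([[1.0, 2.0],
                 [2.0, 1.0]])
b_ub = np.array([4.0, 4.0])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
print(res.x, res.fun)  # optimum attained at the extreme point (4/3, 4/3), cost -8/3
```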

(continued) Idea of the proof in the text: Consider any x ∈ P and let I = { i : a_i'x = b_i }. Move to y = x + λd, where a_i'd = 0 for i ∈ I and c'd ≤ 0. Then either the optimal cost is -∞ (if the half-line along d stays in P and c'd < 0), or we meet a new inequality which becomes active (and the cost does not increase). By repeating the process, we eventually arrive at an extreme point whose value is not inferior to that of x. Therefore, for any x in P, there exists an extreme point y such that c'y ≤ c'x. We then choose the extreme point which gives the smallest objective value with respect to c.

(Alternative proof of Thm 2.8) Write P = S + K + Q. Pointedness of P implies S = {0}. Hence x ∈ P ⇒ x = Σ_i q_i d_i + Σ_j r_j u_j, where the d_i are extreme rays of K, the u_j are extreme points of P, q_i ≥ 0, r_j ≥ 0, and Σ_j r_j = 1.
Suppose there exists i such that c'd_i < 0. Then the LP is unbounded: for x ∈ P, x + λd_i ∈ P for all λ ≥ 0, and c'(x + λd_i) → -∞ as λ → ∞.
Otherwise c'd_i ≥ 0 for all i. Take u* such that c'u* = min_j c'u_j. Then for all x ∈ P, c'x = Σ_i q_i (c'd_i) + Σ_j r_j (c'u_j) ≥ Σ_j r_j (c'u_j) ≥ (c'u*) Σ_j r_j = c'u*.
Hence the LP is either unbounded or has an extreme point of P as an optimal solution.
The proof here shows that the existence of an extreme ray d_i of the pointed recession cone { d : Ad ≥ 0 } (for a min problem over the polyhedron Ax ≥ b) with c'd_i < 0 is necessary and sufficient for unboundedness of the LP. (If P has at least one extreme point, then the LP is unbounded iff there exists an extreme ray d_i of the recession cone K such that c'd_i < 0.) ∎
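To connect the unboundedness criterion with practice, the toy problem below (my own example, again using SciPy's linprog) has an extreme ray d = (1, 1) of its recession cone with c'd < 0, and the solver reports the problem as unbounded.

```python
from scipy.optimize import linprog

# Toy LP: minimize -x1 subject to x1 - x2 <= 1, x >= 0.
# The recession cone is { d >= 0 : d1 <= d2 }; its extreme ray d = (1, 1) has c'd = -1 < 0.
res = linprog(c=[-1.0, 0.0], A_ub=[[1.0, -1.0]], b_ub=[1.0],
              bounds=[(0, None), (0, None)], method="highs")
print(res.status, res.message)  # status 3 means the problem is unbounded in SciPy's convention
```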