
1 Chapter 4 Geometry of Linear Programming
- There are strong relationships between the geometrical and algebraic features of LP problems.
- It is convenient to examine this aspect in two dimensions (n = 2) and try to extrapolate to higher dimensions (be careful!).

2 4.1 Example
z* := max z = 4x1 + 3x2
subject to:
2x1 + x2 ≤ 40 (production machine time)
x1 + x2 ≤ 30 (packaging machine time)
x1 ≤ 15 (market demand)
x1 ≥ 0, x2 ≥ 0
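The graphical analysis on the following slides can be mirrored numerically. Since a candidate optimum of a two-variable LP lies at an intersection of two constraint boundaries, a small sketch (helper names here are illustrative, not from the notes) can enumerate those intersections, discard the infeasible ones, and evaluate z = 4x1 + 3x2 at the rest:

```python
from itertools import combinations

# Constraint boundaries as lines a1*x1 + a2*x2 = b, including the axes x1 = 0, x2 = 0.
lines = [
    (2, 1, 40),  # production machine time
    (1, 1, 30),  # packaging machine time
    (1, 0, 15),  # market demand
    (1, 0, 0),   # x1 = 0
    (0, 1, 0),   # x2 = 0
]

def feasible(x1, x2, tol=1e-9):
    return (2*x1 + x2 <= 40 + tol and x1 + x2 <= 30 + tol
            and x1 <= 15 + tol and x1 >= -tol and x2 >= -tol)

candidates = []
for (a1, a2, b1), (c1, c2, b2) in combinations(lines, 2):
    det = a1*c2 - a2*c1
    if det == 0:          # parallel boundaries: no intersection point
        continue
    x1 = (b1*c2 - b2*a2) / det   # Cramer's rule on the 2x2 system
    x2 = (a1*b2 - c1*b1) / det
    if feasible(x1, x2):
        candidates.append((x1, x2))

best = max(candidates, key=lambda p: 4*p[0] + 3*p[1])
print(best, 4*best[0] + 3*best[1])   # expect x = (10, 20) with z = 100
```

This brute-force pass is only practical in two dimensions; the point of the chapter is that the same "look at corner points" idea generalizes algebraically.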

3 Feasible Region
- First constraint: 2x1 + x2 ≤ 40
- Corresponding hyperplane: 2x1 + x2 = 40


5 [figure: the line 2x1 + x2 = 40 splits the (x1, x2) plane into the half-planes 2x1 + x2 ≥ 40 and 2x1 + x2 ≤ 40]


8 [figure: the market-demand boundary x1 = 15]

9 - Objective function: f(x) = z = 4x1 + 3x2. Hence x2 = (z − 4x1)/3, so that for a given value of z the level curve is a straight line with slope −4/3. We can plot it for various values of z and identify the (x1, x2) pair yielding the largest feasible value of z.

10 [figure: level curves x2 = (z − 4x1)/3 of z = 4x1 + 3x2 over the feasible region]

11 [figure: the level curve for z = 60]

12 [figure: level curves for z = 60 and z = 140]

13 [figure: level curves for z = 60, z = 100 and z = 140]

14 [figure: the optimal level curve z = 100 touching the feasible region]

15 Important Observations
- The graphical method is used to identify the hyperplanes specifying the optimal solution.
- The optimal solution itself is determined by solving the respective equations.
- Don't be tempted to "read" the optimal solution directly from the graph!
- The optimal solution in this example is an extreme point of the feasible region.
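Solving the respective equations rather than eyeballing the graph is easy to do exactly. A minimal sketch using Python's exact rational arithmetic for the two binding constraints of this example (2x1 + x2 = 40 and x1 + x2 = 30):

```python
from fractions import Fraction as F

# Binding constraints at the optimum of the example:
#   2*x1 + x2 = 40   (production machine time)
#     x1 + x2 = 30   (packaging machine time)
# Subtracting the second from the first eliminates x2.
x1 = F(40 - 30, 2 - 1)      # (40 - 30) / (2 - 1) = 10
x2 = F(40) - 2 * x1         # back-substitute into 2*x1 + x2 = 40: x2 = 20
z  = 4 * x1 + 3 * x2        # objective value at the corner: 100
print(x1, x2, z)            # 10 20 100
```

Exact arithmetic avoids the rounding one would introduce by reading coordinates off the plot.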

16 Questions????
- What guarantee is there that an optimal solution exists?
- In fact, is there any a priori guarantee that a feasible solution exists?
- Could there be more than one optimal solution?

17 4.2 Multiple Optimal Solutions

18 No Feasible Solutions

19 Unbounded Feasible Region
[figure: level curves z = 120, z = 160, z = 200 with an arrow in the direction of increasing z; the feasible region is unbounded in that direction]


21 Feasible Region Not Closed
[figure: a feasible region that is not closed]

22 Feasible Region Not Closed
[figure: the corner point at (10, 20) is no longer feasible]

23 4.4 Geometry in Higher Dimensions
Time out! We need the first section of Appendix C.

24 Appendix C Convex Sets and Functions
- C.1 Convex Sets
- C.1.1 Definition: Given a collection of points x(1),...,x(k) in R^n, a convex combination of these points is a point w such that w = λ1 x(1) + λ2 x(2) + ... + λk x(k), where λi ≥ 0 for all i and Σi λi = 1.
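Definition C.1.1 translates directly into code. A small sketch (the helper name is illustrative, not from the notes) that forms w and validates the two conditions on the weights:

```python
def convex_combination(points, weights, tol=1e-12):
    """Return w = sum_i weights[i] * points[i], after checking the two
    conditions of Definition C.1.1: lambda_i >= 0 and sum lambda_i = 1."""
    if any(l < -tol for l in weights):
        raise ValueError("weights must be non-negative")
    if abs(sum(weights) - 1) > tol:
        raise ValueError("weights must sum to 1")
    n = len(points[0])
    return tuple(sum(l * p[j] for l, p in zip(weights, points))
                 for j in range(n))

# A convex combination of three points in R^2:
print(convex_combination([(0, 0), (4, 2), (0, 4)], [0.25, 0.5, 0.25]))  # (2.0, 2.0)
```

Dropping either condition (non-negativity or summing to 1) turns this into a general linear combination, which slides 39-43 contrast with the convex case.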

25 - C.1.2 Definition: The line segment joining two points p, q in R^n is the collection of all points x such that x = λp + (1 − λ)q for some λ ∈ [0,1].

26 (NILN) [figure: the segment between p and q, with w = λp + (1 − λ)q an intermediate point]

27 - C.1.3 Definition: A subset C of R^n is convex if for every pair of points (p, q) in C and any λ ∈ [0,1], the point w = λp + (1 − λ)q is also in C.
- Namely, for any pair of points (p, q) in C, the line segment connecting these points is in C.
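Definition C.1.3 can be spot-checked numerically: sample λ over [0,1] and verify that every point λp + (1 − λ)q stays in the set. A sketch for one of the half-spaces from the Chapter 4 example (helper names are illustrative; sampling is evidence, not a proof):

```python
def in_half_space(x):
    # The closed half-space 2*x1 + x2 <= 40 from the Chapter 4 example.
    return 2*x[0] + x[1] <= 40 + 1e-9

def segment_stays_inside(inside, p, q, samples=100):
    """Spot-check Definition C.1.3: every sampled point
    w = lam*p + (1 - lam)*q must lie in the set."""
    for k in range(samples + 1):
        lam = k / samples
        w = tuple(lam*pi + (1 - lam)*qi for pi, qi in zip(p, q))
        if not inside(w):
            return False
    return True

# Both endpoints satisfy the constraint, and so does every sampled point between them.
print(segment_stays_inside(in_half_space, (5, 10), (15, 5)))  # True
```

Because 2x1 + x2 is linear, its value along the segment interpolates between the endpoint values, which is exactly why half-spaces are convex (Theorem C.1.8).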


29 - C.1.5 Theorem: The intersection of any finite number of convex sets is a convex set.
[figure: the intersection of two convex sets]

30 - C.1.6 Definition: A set of points H ⊆ R^n satisfying a linear equation of the form a1x1 + a2x2 + ... + anxn = b, for a ≠ (0,0,...,0), is a hyperplane.
- Observation: Such hyperplanes are of "dimension" n − 1. (Why?)

31 Example (not in the notes)
[figure: the line x1 + x2 = 3 in the (x1, x2) plane]

32 - C.1.7 Definition: The two closed half-spaces of the hyperplane defined by a1x1 + a2x2 + ... + anxn = b are the sets defined by a1x1 + a2x2 + ... + anxn ≥ b (positive half-space) and a1x1 + a2x2 + ... + anxn ≤ b (negative half-space).

33 Example (not in the notes)
[figure: the line x1 + x2 = 3 with the positive half-space above it and the negative half-space below it]

34 - C.1.8 Theorem: Hyperplanes and their half-spaces are convex sets.
- C.1.9 Definition: A convex polytope is a set that can be expressed as the intersection of a finite number of closed half-spaces.

35 - C.1.9 Definition: A polyhedron is a non-empty bounded polytope.

36 - C.1.10 Definition: A point x of a convex set C is said to be an extreme point if it cannot be expressed as a convex combination of other points in C.
- More specifically, there are no points y and z in C (different from x) such that x lies on the line segment connecting these points.

37 Examples (not in notes)
[figure: a polygon; its corner points are its extreme points]

38 Examples (not in notes)
[figure: a disk; all of its boundary points are extreme points]

39 Linear Combination (Not in Lecture Notes)
- A linear combination is similar to a convex combination, except that the coefficients are not restricted to the interval [0,1]. Thus, formally:

40 - Definition: A vector x in R^n is said to be a linear combination of vectors {x(1),...,x(s)} in R^n if and only if there are scalars {α1,...,αs}, not all zero, such that x = Σt=1,...,s αt x(t).

41 Example
- x = (3,2,1) is a linear combination of x(1) = (1,0,0), x(2) = (0,1,0), x(3) = (0,0,1), using the coefficients α1 = 3, α2 = 2 and α3 = 1.
- y = (9,4,1) is not a linear combination of x(1) = (3,1,0), x(2) = (2,4,0), x(3) = (4,3,0). Why?
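The "Why?" has a one-line answer: every vector in the second list has third coordinate 0, so any linear combination also has third coordinate 0 and can never equal 1. A quick check (the helper name is illustrative):

```python
def combine(coeffs, vectors):
    """Form the linear combination sum_t coeffs[t] * vectors[t], coordinate-wise."""
    n = len(vectors[0])
    return tuple(sum(c * v[j] for c, v in zip(coeffs, vectors)) for j in range(n))

e = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
print(combine((3, 2, 1), e))          # (3, 2, 1), as claimed on the slide

xs = [(3, 1, 0), (2, 4, 0), (4, 3, 0)]
# Every combination of xs has third coordinate c1*0 + c2*0 + c3*0 = 0,
# so y = (9, 4, 1) is unreachable, whatever the coefficients.
print(combine((1, 1, 1), xs)[2])      # 0
```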

42 Geometrically
[figure: linear combination of a and b: λa + (1 − λ)b with λ unrestricted; convex combination: λa + (1 − λ)b with 0 ≤ λ ≤ 1]

43 [figure: the set of all convex combinations of a and b is the segment between them; linear combinations of these two vectors span the entire plane]

44 Linear Independence
A collection of vectors x(1),...,x(s) in R^n is said to be linearly independent if no vector in the collection can be expressed as a linear combination of the other vectors in the collection. This means that if Σt=1,...,s αt x(t) = (0,...,0), then αt = 0 for t = 1,2,...,s. Try to show this equivalence on your own!

45 Example
- The vectors (1,0,0), (0,1,0), (0,0,1) are linearly independent.
- The vectors (2,4,3), (1,2,3), (1,2,0) are not linearly independent.
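Independence of such lists can be tested mechanically: the vectors are linearly independent iff the matrix they form has full row rank. A small Gaussian-elimination sketch with no external libraries (the helper name is illustrative):

```python
def rank(rows, tol=1e-9):
    """Rank via Gauss-Jordan elimination; the vectors are
    linearly independent iff rank == len(rows)."""
    m = [list(map(float, r)) for r in rows]
    r = 0
    for col in range(len(m[0])):
        # Find a pivot row for this column among the unused rows.
        pivot = next((i for i in range(r, len(m)) if abs(m[i][col]) > tol), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        # Eliminate the column from every other row.
        for i in range(len(m)):
            if i != r and abs(m[i][col]) > tol:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

print(rank([(1, 0, 0), (0, 1, 0), (0, 0, 1)]))  # 3: independent
print(rank([(2, 4, 3), (1, 2, 3), (1, 2, 0)]))  # 2: dependent, since (2,4,3) = (1,2,3) + (1,2,0)
```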

46 [figure: two non-collinear vectors a and b are linearly independent; two collinear vectors are linearly dependent]

47 Back to Chapter 4.....
4.4 Geometry in Higher Dimensions

48 - The region of contact between the optimal hyperplane of the objective function and the polytope of the feasible region is either an extreme point or a face of the polytope. (NILN)
[figure: one objective-function hyperplane touching the feasible region at an extreme point, another touching it along a face]

49 - 4.4.1 Theorem: The set of feasible solutions of the standard LP problem is a convex polytope. (Standard problem: opt Σj=1,...,n cj xj subject to Σj=1,...,n aij xj ≤ bi, i = 1,...,m, and xj ≥ 0 for all j.)

50 - Proof: Follows directly from the definition of a convex polytope: the feasible region is the intersection of finitely many closed half-spaces (one for each functional constraint and each non-negativity constraint), and is therefore a convex polytope.

51 - 4.4.2 Theorem: If a linear programming problem has exactly one optimal solution, then this solution must be an extreme point of the feasible region.
Proof: We shall prove this theorem by contradiction!!!

52 So, contrary to the theorem, assume that the problem has exactly one optimal solution, call it x, and that x is not an extreme point of the feasible region.

53 This means that there are two distinct feasible solutions, say x' and x'', and a scalar λ, 0 < λ < 1, such that x = λx' + (1 − λ)x''. (NILN)
[figure: x on the segment between x' and x'']

54 - If we rewrite the objective function in terms of x' and x'' rather than x, we obtain, by linearity:
f(x) = f(λx' + (1 − λ)x'') = λf(x') + (1 − λ)f(x'')    (4.13)

55 - Now, because 0 < λ < 1, there are only three cases to consider with regard to the relationship between f(x), f(x') and f(x''):
1. f(x') < f(x) < f(x'')
2. f(x'') < f(x) < f(x')
3. f(x) = f(x') = f(x'')
- But since x is an optimal solution, the first two cases are impossible (why?).
- Thus, the third case must be true.
- But this contradicts the assertion that x is the only optimal solution to the problem. [Recall f(x) = λf(x') + (1 − λ)f(x'').]

56 On your own, prove the following:
- 4.4.3 Lemma: If the LP has more than one optimal solution, it must have infinitely many optimal solutions. Furthermore, the set of optimal solutions is convex.

57 - 4.4.5 Proposition: If a linear programming problem has an optimal solution, then at least one optimal solution is an extreme point of the feasible region.
- Observation: This result does not say that all the optimal solutions are extreme points.

58 - This result is so important that we discuss it under the header:
- 4.5 The Fundamental Theorem of Linear Programming

59 - 4.5.1 Canonical Form:
opt z = c1x1 + ... + cnxn
subject to: ai1x1 + ... + ainxn + xn+i = bi, i = 1,...,m; all variables ≥ 0.
As in the standard format, bi ≥ 0 for all i.

60 - 4.5.2 Corollary: The canonical form has at least one feasible solution, namely x = (0,0,...,0, b1, b2,..., bm) (n zeros).
- Note: This solution is obtained by:
– setting all the original variables to zero;
– setting the new variables to the respective right-hand-side values.
- The new variables are called slack variables.

61 4.5.3 Definition: Basic Feasible Solutions
- Bla, bla, bla....................
- Given a system of m linear equations with k variables such that k > m:
- Select m columns whose coefficients are linearly independent.
- Solve the system comprising these columns and the right-hand side.

62 - Set the other k − m variables to zero.
- Any solution of this nature is called a basic solution.
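The recipe on slides 61-62 (pick m independent columns, solve the resulting m × m system, and zero the remaining k − m variables) can be sketched as follows. Helper names are illustrative, and the 2 × 4 system at the end is only a made-up example:

```python
def solve_square(A, b):
    """Solve an m x m linear system by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [list(map(float, row)) + [float(bi)] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda i: abs(M[i][c]))
        if abs(M[p][c]) < 1e-12:
            raise ValueError("selected columns are linearly dependent")
        M[c], M[p] = M[p], M[c]
        for i in range(n):
            if i != c:
                f = M[i][c] / M[c][c]
                M[i] = [a - f * v for a, v in zip(M[i], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def basic_solution(A, b, basis):
    """Set the k - m non-basis variables to zero and solve for the basis variables."""
    k = len(A[0])
    sub = [[row[j] for j in basis] for row in A]   # keep only the selected columns
    vals = solve_square(sub, b)
    x = [0.0] * k
    for j, v in zip(basis, vals):
        x[j] = v
    return x

A = [[2, 1, 1, 0], [1, 2, 0, 1]]     # an illustrative 2 x 4 system (m = 2, k = 4)
b = [4, 3]
print(basic_solution(A, b, [2, 3]))  # [0.0, 0.0, 4.0, 3.0]: the "trivial" choice
print(basic_solution(A, b, [0, 1]))  # approximately [5/3, 2/3, 0, 0]
```

A basic solution with all components non-negative is a basic feasible solution, as the next slide states.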

63 (NILN) [figure: an m × k coefficient matrix; m linearly independent columns are selected to form an m × m basis]

64 - A basic feasible solution is a basic solution satisfying the non-negativity constraints xj ≥ 0 for all j.

65 4.5.4 Example
(Problem data, reconstructed to be consistent with the basic solutions on the following slides: constraints 2x1 + x2 ≤ 4 and x1 + 2x2 ≤ 3, with x1, x2 ≥ 0.)

66 Canonical form:
2x1 + x2 + x3 = 4
x1 + 2x2 + x4 = 3
x1, x2, x3, x4 ≥ 0
- Trivial basic feasible solution: x = (0, 0, 4, 3)

67 Other basic feasible solutions? Suppose we select x2 and x3 to be basic. Then the reduced system is
x2 + x3 = 4
2x2 = 3
This yields the basic feasible solution x = (0, 3/2, 5/2, 0).

68 - If we select x1 and x2 to be the basic variables, the reduced system is
2x1 + x2 = 4
x1 + 2x2 = 3
This yields the basic feasible solution x = (5/3, 2/3, 0, 0).

69 - If we select x1 and x3 as basic variables, the reduced system is
2x1 + x3 = 4
x1 = 3
This yields the basic solution x = (3, 0, −2, 0). This solution is not feasible.
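All C(4,2) = 6 column choices of an example like this can be enumerated in one loop. The system below is an assumption, chosen to be consistent with every basic solution quoted on these slides (trivial BFS (0,0,4,3), then (0,3/2,5/2,0), (5/3,2/3,0,0) and the infeasible (3,0,−2,0)):

```python
from itertools import combinations
from fractions import Fraction as F

# Assumed canonical system, consistent with the slides' basic solutions:
#   2*x1 +   x2 + x3      = 4
#     x1 + 2*x2      + x4 = 3
A = [[F(2), F(1), F(1), F(0)],
     [F(1), F(2), F(0), F(1)]]
b = [F(4), F(3)]

results = {}
for basis in combinations(range(4), 2):
    j0, j1 = basis
    a, c = A[0][j0], A[0][j1]
    d, e = A[1][j0], A[1][j1]
    det = a*e - c*d
    if det == 0:
        continue                      # dependent columns: no basic solution
    v0 = (b[0]*e - b[1]*c) / det      # Cramer's rule on the 2 x 2 subsystem
    v1 = (a*b[1] - d*b[0]) / det
    x = [F(0)] * 4
    x[j0], x[j1] = v0, v1
    results[basis] = x
    print(basis, x, "feasible" if min(x) >= 0 else "infeasible")
```

Exact fractions reproduce values like 3/2 and 5/3 without rounding, and the basis {x1, x3} shows up as infeasible, matching the slide.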

70 Next Result
- Relation between the geometric and algebraic representations of LP problems:
Geometry: extreme points ⟷ Algebra: basic feasible solutions
(NILN)

71 4.5.5 Theorem
- Consider the LP problem:
opt Σj=1,...,k cj xj subject to Σj=1,...,k aij xj = bi, i = 1,...,m; xj ≥ 0 for all j,

72 - where:
- k > m,
- bi ≥ 0 for all i,
- and the coefficient matrix has m linearly independent columns.
- Then: a vector x in R^k is an extreme point of the feasible region of this problem if, and only if, x is a basic feasible solution of this problem.
Proof: In the Lecture Notes (NE).

73 4.5.6 The Fundamental Theorem of Linear Programming
Consider the LP problem featured in Theorem 4.5.5.
- If this problem has a feasible solution, then it must have a basic feasible solution.
- If this problem has an optimal solution, then it must have an optimal basic feasible solution.
- Proof: In the Lecture Notes (NE).

74 - 4.5.7 Corollary: If the set determined by (4.17) is not empty, then it must have at least one extreme point.
- 4.5.8 Corollary: The convex set determined by (4.17) possesses at most a finite number of extreme points. (Can you suggest an upper bound?)

75 - 4.5.9 Corollary: If the linear programming problem determined by (4.16)-(4.17) possesses a finite optimal solution, then there is a finite optimal solution which is an extreme point of the feasible region.

76 - 4.5.10 Corollary: If the feasible region determined by (4.17) is not empty and bounded, then the feasible region is a polyhedron.
- 4.5.11 Corollary: At least one of the points that optimize a linear objective function over a polyhedron is an extreme point of the polyhedron.

77 - Direct Proof (utilising the fact that the feasible region is a polyhedron):
Let {x(1),...,x(s)} be the set of extreme points of the feasible region (note: x(q) is a k-vector). Thus, any point in the feasible region can be expressed as a convex combination of these points, namely
x = Σt=1,...,s λt x(t), where Σt=1,...,s λt = 1 and λt ≥ 0, t = 1,2,...,s.

78 Thus, the objective function can be rewritten as follows:
z(x) = Σj=1,...,k cj xj
= Σj=1,...,k cj {Σt=1,...,s λt xj(t)}
= Σt=1,...,s λt {Σj=1,...,k cj xj(t)}
= Σt=1,...,s λt z(t)    (4.41)
where z(t) := Σj=1,...,k cj xj(t), t = 1,...,s.

79 Because λt ≥ 0 and Σt=1,...,s λt = 1, it follows from (4.41) that
max {z(t) : t = 1,...,s} ≥ z ≥ min {z(t) : t = 1,...,s}    (4.43)
where z = Σj=1,...,k cj xj. Since x is an arbitrary feasible solution, (4.43) entails that at least one extreme point is an optimal solution (regardless of whether opt is max or min).

80 A Subtlety (NILN)
- Given a list of numbers (y1,...,yp) and a list of coefficients (λ1,...,λp), each in the unit interval [0,1] and summing to 1, we have:
max {y1,...,yp} ≥ Σj=1,...,p λj yj ≥ min {y1,...,yp}
In words, any convex combination of a collection of numbers is in the interval specified by the smallest and largest elements of the collection.

81 Let b = max {y1,...,yp} and a = min {y1,...,yp}. Any convex combination of y1,...,yp must lie in the interval [a, b].
- Try to prove it on your own!
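The sandwich a ≤ Σ λj yj ≤ b is easy to check numerically for random data. This is a spot check of the claim, not a proof:

```python
import random

random.seed(1)
y = [random.uniform(-10, 10) for _ in range(7)]
lam = [random.random() for _ in range(7)]
total = sum(lam)
lam = [l / total for l in lam]            # normalize: the weights now sum to 1

combo = sum(l * yi for l, yi in zip(lam, y))
print(min(y) <= combo <= max(y))          # True for any such weight vector
```

The proof mirrors the code: replace each yj by min(y) for a lower bound and by max(y) for an upper bound, and use Σ λj = 1.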

82 4.6 Solution Strategies: Bottom Line
- Given an LP with an optimal solution, at least one of the optimal solutions is an extreme point of the feasible region.
- So how about solving the problem by enumerating all the extreme points of the feasible region?

84 - Since each extreme point of the feasible region of the standard problem is a basic solution of the system of linear constraints having n + m variables and m (functional) constraints, it follows that there are at most
C(n + m, m) = (n + m)! / (n! m!)
extreme points.
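Assuming the bound is the binomial coefficient C(n + m, m) (the number of ways to choose m basic columns out of n + m), Python's math.comb makes the next slide's count concrete:

```python
import math

# Number of candidate bases for n = m = 50: choose 50 columns out of 100.
n = m = 50
count = math.comb(n + m, m)
print(count)            # about 1.0089e29 candidate bases
```

Even at a billion bases per second, exhaustive enumeration would take on the order of 10^12 years, which is why the chapter turns to smarter strategies.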

85 - For large n and m this yields a very large number (Curse of Dimensionality!), e.g. for n = m = 50 this yields about 10^29.

86 Most Popular Methods
- Simplex [Dantzig, 1940s]: visits only extreme points.
- Interior Point [Karmarkar, 1980s]: moves from the (relative) interior of the region or faces towards the optimal solution.
- In this year's version of 620-261 we shall focus on the Simplex Method.

