1 Chapter 4 Geometry of Linear Programming
• There are strong relationships between the geometrical and algebraic features of LP problems
• Convenient to examine this aspect in two dimensions (n = 2) and try to extrapolate to higher dimensions (be careful!)

2 4.1 Example
z* := max z = 4x1 + 3x2
subject to:
2x1 + x2 ≤ 40 (production machine time)
x1 + x2 ≤ 30 (packaging machine time)
x1 ≤ 15 (market demand)
x1 ≥ 0, x2 ≥ 0

3 Feasible Region
• First constraint: 2x1 + x2 ≤ 40. Corresponding hyperplane: 2x1 + x2 = 40.

4–8 [Figures: the lines 2x1 + x2 = 40, x1 + x2 = 30 and x1 = 15 plotted in the (x1, x2) plane, building up the feasible region constraint by constraint.]

9 • Objective function: f(x) = z = 4x1 + 3x2. Hence x2 = (z - 4x1)/3, so for a given value of z the level curve is a straight line with slope -4/3. We can plot it for various values of z and identify the (x1, x2) pair yielding the largest feasible value of z.

10–14 [Figures: level curves x2 = (z - 4x1)/3 of z = 4x1 + 3x2 plotted for z = 60, 100, 140; the largest value attained on the feasible region is z = 100.]

15 Important Observations
• The graphical method is used to identify the hyperplanes specifying the optimal solution.
• The optimal solution itself is determined by solving the respective equations.
• Don’t be tempted to “read” the optimal solution directly from the graph!
• The optimal solution in this example is an extreme point of the feasible region.
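As a cross-check (not in the lecture notes), the Section 4.1 example can be handed to an off-the-shelf LP solver; a sketch assuming SciPy is available (`linprog` minimizes, so the objective is negated):

```python
import numpy as np
from scipy.optimize import linprog

c = [-4, -3]                 # maximize 4x1 + 3x2  ->  minimize -4x1 - 3x2
A_ub = [[2, 1],              # production machine time
        [1, 1],              # packaging machine time
        [1, 0]]              # market demand
b_ub = [40, 30, 15]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)       # optimal point and z*
```

The solver confirms the graphical analysis: x* = (10, 20) with z* = 100, the extreme point where the first two constraints intersect.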

16 Questions????
• What guarantee is there that an optimal solution exists?
• In fact, is there any a priori guarantee that a feasible solution exists?
• Could there be more than one optimal solution?

17 Multiple Optimal Solutions

18 No Feasible Solutions

19 Unbounded Feasible Region [Figure: level curves z = 120, 160, 200 and the direction of increasing z; z can be increased without bound.]

20–21 Feasible Region Not Closed [Figures: a feasible region whose boundary is partly excluded.]

22 Feasible Region Not Closed [Figure: with the boundary excluded, the corner point at (10, 20) is no longer feasible.]

23 Geometry in Higher Dimensions. Time out!!! We need the first section of Appendix C.

24 Appendix C Convex Sets and Functions
• C.1 Convex Sets
• C.1.1 Definition: Given a collection of points x(1),...,x(k) in Rn, a convex combination of these points is a point w such that w = α1 x(1) + α2 x(2) + ... + αk x(k), where αi ∈ [0,1] and Σi αi = 1.

25 • C.1.2 Definition: The line segment joining two points p, q in Rn is the collection of all points x such that x = λp + (1 - λ)q for some λ ∈ [0,1].

26 (NILN) [Figure: the line segment between p and q; a point on it is w = λp + (1 - λ)q.]

27 • C.1.3 Definition: A subset C of Rn is convex if for every pair of points (p, q) in C and any λ ∈ [0,1], the point w = λp + (1 - λ)q is also in C.
• Namely, for any pair of points (p, q) in C, the line segment connecting these points is in C.


29 • C.1.5 Theorem: The intersection of any finite number of convex sets is a convex set. [Figure: intersection]

30 • C.1.6 Definition: A set of points H ⊂ Rn satisfying a linear equation of the form a1x1 + a2x2 + ... + anxn = b, for a ≠ (0,0,0,...,0), is a hyperplane.
• Observation: Such hyperplanes are of “dimension” n - 1. (why?)

31 Example (not in the notes) [Figure: a hyperplane x1 + x2 = constant in the (x1, x2) plane.]

32 • C.1.7 Definition: The two closed half-spaces of the hyperplane defined by a1x1 + a2x2 + ... + anxn = b are the sets defined by a1x1 + a2x2 + ... + anxn ≥ b (positive half-space) and a1x1 + a2x2 + ... + anxn ≤ b (negative half-space).

33 Example (not in the notes) [Figure: the hyperplane x1 + x2 = constant with its positive and negative half-spaces.]

34 • C.1.8 Theorem: Hyperplanes and their half-spaces are convex sets.
• C.1.9 Definition: A convex polytope is a set that can be expressed as the intersection of a finite number of closed half-spaces.

35 • C.1.9 Definition: A polyhedron is a non-empty bounded polytope.

36 • C.1.10 Definition: A point x of a convex set C is said to be an extreme point if it cannot be expressed as a convex combination of other points in C.
• More specifically, there are no points y and z in C (different from x) such that x lies on the line segment connecting these points.

37 Examples (not in notes) [Figures: for a polygon the corner points are extreme points; for a disk all boundary points are extreme points.]


39 Linear Combination (Not in Lecture Notes)
• A linear combination is similar to a convex combination, except that the coefficients are not restricted to the interval [0,1]. Thus, formally:

40 • Definition: A vector x in Rn is said to be a linear combination of vectors {x(1),...,x(s)} in Rn if and only if there are scalars {λ1,...,λs}, not all zero, such that x = Σt=1,...,s λt x(t).

41 Example
• x = (3,2,1) is a linear combination of x(1) = (1,0,0), x(2) = (0,1,0), x(3) = (0,0,1) using the coefficients λ1 = 3, λ2 = 2 and λ3 = 1.
• y = (9,4,1) is not a linear combination of x(1) = (3,1,0), x(2) = (2,4,0), x(3) = (4,3,0). Why?

42 Geometrically [Figures: a convex combination λa + (1 - λ)b with 0 ≤ λ ≤ 1 traces the segment from a to b; a linear combination λa + (1 - λ)b with λ unrestricted traces the whole line through a and b.]

43 [Figure: the set of all convex combinations of a and b is the segment joining them; linear combinations of these two vectors span the entire plane.]

44 Linear Independence
A collection of vectors x(1),...,x(s) in Rn is said to be linearly independent if no vector in this collection can be expressed as a linear combination of the other vectors in the collection. This means that if Σt=1,...,s λt x(t) = (0,...,0) then λt = 0 for t = 1,2,...,s. Try to show this equivalence on your own!

45 Example
• The vectors (1,0,0), (0,1,0), (0,0,1) are linearly independent.
• The vectors (2,4,3), (1,2,3), (1,2,0) are not linearly independent.
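These two examples can be checked numerically (not in the notes): a set of vectors is linearly independent exactly when the matrix whose rows are those vectors has full rank.

```python
import numpy as np

independent = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
dependent = np.array([[2, 4, 3], [1, 2, 3], [1, 2, 0]])   # (2,4,3) = (1,2,3) + (1,2,0)

print(np.linalg.matrix_rank(independent))  # 3: full rank, linearly independent
print(np.linalg.matrix_rank(dependent))    # 2: rank-deficient, linearly dependent
```

The dependency in the second set is explicit: the first vector is the sum of the other two.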

46 [Figures: two non-collinear vectors a and b are linearly independent; two collinear vectors are linearly dependent.]

47 Back to Chapter 4: Geometry in Higher Dimensions

48 • The region of contact between the optimal hyperplane of the objective function and the polytope of the feasible region is either an extreme point or a face of the polytope. (NILN) [Figures: the objective-function hyperplane touching the feasible region at an extreme point, and along a face.]

49 • Theorem: The set of feasible solutions of the standard LP problem, {x ∈ Rn : Ax ≤ b, x ≥ 0}, is a convex polytope.

50 • Proof: Follows directly from the definition of a convex polytope, i.e. a convex polytope is the intersection of finitely many half-spaces.

51 • Theorem: If a linear programming problem has exactly one optimal solution, then this solution must be an extreme point of the feasible region.
Proof: We shall prove this theorem by contradiction!!!

52 So, contrary to the theorem, assume that the problem has exactly one optimal solution, call it x, and that x is not an extreme point of the feasible region.

53 This means that there are two distinct feasible solutions, say x′ and x″, and a scalar λ, 0 < λ < 1, such that x = λx′ + (1 - λ)x″. (NILN) [Figure: x on the segment between x′ and x″.]

54 • If we rewrite the objective function in terms of x′ and x″ rather than x, we obtain: f(x) = f(λx′ + (1 - λ)x″), hence f(x) = λf(x′) + (1 - λ)f(x″). (4.13)

55 • Now, because 0 < λ < 1, there are only three cases to consider with regard to the relationship between f(x), f(x′) and f(x″):
1. f(x′) < f(x) < f(x″)
2. f(x″) < f(x) < f(x′)
3. f(x) = f(x′) = f(x″)
• But since x is an optimal solution, the first two cases are impossible (why?).
• Thus, the third case must be true.
• But this contradicts the assertion that x is the only optimal solution to the problem. [f(x) = λf(x′) + (1 - λ)f(x″)]

56 On your own, prove the following:
• Lemma: If the LP has more than one optimal solution, it must have infinitely many optimal solutions. Furthermore, the set of optimal solutions is convex.

57 • Proposition: If a linear programming problem has an optimal solution, then at least one optimal solution is an extreme point of the feasible region.
• Observation: This result does not say that all the optimal solutions are extreme points.

58 • This result is so important that we discuss it under the header: 4.5 The Fundamental Theorem of Linear Programming

59 • Canonical Form: max z = c1x1 + ... + cnxn subject to ai1x1 + ... + ainxn + xn+i = bi (i = 1,...,m), xj ≥ 0 for all j. As in the standard format, bi ≥ 0 for all i.

60 • Corollary: The canonical form has at least one feasible solution, namely x = (0,0,0,...,0, b1, b2,..., bm) (n zeros).
• Note: This solution is obtained by:
– setting all the original variables to zero
– setting the new variables to the respective right-hand-side values.
• The new variables are called slack variables.
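A small sketch (not in the notes) of this construction for the Section 4.1 constraints: append one slack per row to form [A | I], and check that the corollary's trivial point x = (0,...,0, b1,...,bm) satisfies the equations.

```python
import numpy as np

A = np.array([[2., 1.], [1., 1.], [1., 0.]])   # constraints of the 4.1 example
b = np.array([40., 30., 15.])
m, n = A.shape

A_canon = np.hstack([A, np.eye(m)])            # [A | I]: one slack per constraint
x = np.concatenate([np.zeros(n), b])           # original variables 0, slacks = b
print(A_canon @ x)                             # equals b, so x is feasible
```

Since the slack columns form an identity matrix, the equations reduce to "slack_i = b_i", which is why b_i ≥ 0 guarantees feasibility.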

61 4.5.1 Definition: Basic Feasible Solutions
• Given a system of m linear equations with k variables such that k > m:
• Select m columns whose coefficients are linearly independent.
• Solve the system comprising these columns and the right-hand side.

62 • Set the other k - m variables to zero.
• Any solution of this nature is called a basic solution.

63 (NILN) [Figure: an m × k coefficient matrix, partitioned into an m × m basis of linearly independent columns and the remaining k - m columns.]

64 • A basic feasible solution is a basic solution satisfying the non-negativity constraints xj ≥ 0 for all j.

65 4.5.2 Example

66 Canonical form: 2x1 + x2 + x3 = 4, x1 + 2x2 + x4 = 3, xj ≥ 0.
• Trivial basic feasible solution: x = (0,0,4,3).

67 Other basic feasible solutions? Suppose we select x2 and x3 to be basic. Then the reduced system is 2x2 = 3, x2 + x3 = 4. This yields the basic feasible solution x = (0, 3/2, 5/2, 0).

68 • If we select x1 and x2 to be the basic variables, the reduced system is 2x1 + x2 = 4, x1 + 2x2 = 3. This yields the basic feasible solution x = (5/3, 2/3, 0, 0).

69 • If we select x1 and x3 as basic variables, the reduced system is 2x1 + x3 = 4, x1 = 3. This yields the basic solution x = (3, 0, -2, 0). This solution is not feasible.
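This enumeration can be mechanized (not in the notes). The sketch below assumes the example system is 2x1 + x2 + x3 = 4 and x1 + 2x2 + x4 = 3, which is consistent with every basic solution quoted above: it tries all C(4,2) = 6 choices of basic columns, as in Definition 4.5.1.

```python
from itertools import combinations
import numpy as np

A = np.array([[2., 1., 1., 0.],
              [1., 2., 0., 1.]])
b = np.array([4., 3.])
m, k = A.shape

solutions = []
for basis in combinations(range(k), m):        # all C(4,2) = 6 column choices
    B = A[:, basis]
    if abs(np.linalg.det(B)) < 1e-12:
        continue                               # columns not linearly independent
    x = np.zeros(k)
    x[list(basis)] = np.linalg.solve(B, b)     # nonbasic variables stay at zero
    solutions.append((basis, x, bool((x >= -1e-12).all())))

for basis, x, feasible in solutions:
    print(basis, np.round(x, 3), "feasible" if feasible else "infeasible")
```

Besides the solutions listed on the slides, the loop also turns up the basis {x1, x4}, whose basic feasible solution (2, 0, 0, 1) is not shown above, and the infeasible basic solution for the basis {x2, x4}.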

70 Next Result
• Relation between the geometric and algebraic representations of LP problems:
Geometry: extreme points ↔ Algebra: basic feasible solutions (NILN)

71 4.5.1 Theorem
• Consider the LP problem: opt Σj=1,...,k cj xj subject to Σj=1,...,k aij xj = bi (i = 1,...,m), xj ≥ 0 (j = 1,...,k).

72 • Where:
– k > m
– bi ≥ 0 for all i
– and the coefficient matrix has m linearly independent columns.
• Then, a vector x in Rk is an extreme point of the feasible region of this problem if, and only if, x is a basic feasible solution of this problem.
Proof: In the Lecture Notes (NE).

73 4.5.3 The Fundamental Theorem of Linear Programming
Consider the LP problem featured in the preceding theorem.
• If this problem has a feasible solution, then it must have a basic feasible solution.
• If this problem has an optimal solution, then it must have an optimal basic feasible solution.
• Proof: In the Lecture Notes (NE).

74 • Corollary: If the set determined by (4.17) is not empty, then it must have at least one extreme point.
• Corollary: The convex set determined by (4.17) possesses at most a finite number of extreme points. (Can you suggest an upper bound?)

75 • Corollary: If the linear programming problem determined by (4.16)–(4.17) possesses a finite optimal solution, then there is a finite optimal solution which is an extreme point of the feasible region.

76 • Corollary: If the feasible region determined by (4.17) is not empty and bounded, then the feasible region is a polyhedron.
• Corollary: At least one of the points that optimize a linear objective function over a polyhedron is an extreme point of the polyhedron.

77 • Direct Proof (utilising the fact that the feasible region is a polyhedron). Let {x(1),...,x(s)} be the set of extreme points of the feasible region (note: each x(t) is a k-vector). Thus, any point in the feasible region can be expressed as a convex combination of these points, namely x = Σt=1,...,s λt x(t), where Σt=1,...,s λt = 1 and λt ≥ 0, t = 1,2,...,s.

78 Thus, the objective function can be rewritten as follows:
z(x) = Σj=1,...,k cj xj = Σj=1,...,k cj {Σt=1,...,s λt xj(t)} = Σt=1,...,s λt {Σj=1,...,k cj xj(t)} = Σt=1,...,s λt z(t) (4.41)
where z(t) := Σj=1,...,k cj xj(t), t = 1,...,s.

79 Because λt ≥ 0 and Σt=1,...,s λt = 1, it follows from (4.41) that
max {z(t) : t = 1,...,s} ≥ z ≥ min {z(t) : t = 1,...,s} (4.43)
where z = Σj=1,...,k cj xj. Since x is an arbitrary feasible solution, (4.43) entails that at least one extreme point is an optimal solution (regardless of what opt is).

80 A Subtlety (NILN)
• Given a list of numbers (y1,...,yp) and a list of coefficients (λ1,...,λp), each in the unit interval [0,1] and summing to 1, we have:
max {y1,...,yp} ≥ Σj=1,...,p λj yj ≥ min {y1,...,yp}
In words, any convex combination of a collection of numbers lies in the interval specified by the smallest and largest elements of the collection.

81 [Figure: with b = max {y1,...,yp} and a = min {y1,...,yp}, any convex combination of y1,...,yp must lie in the interval [a, b].] Try to prove it on your own!
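A quick numerical illustration of this fact (not in the notes), using randomly drawn convex weights:

```python
import numpy as np

rng = np.random.default_rng(0)
y = np.array([3.0, -1.0, 7.0, 2.0])
lam = rng.random(y.size)
lam /= lam.sum()                   # now each lam_j is in [0,1] and they sum to 1

combo = lam @ y                    # a convex combination of the y's
print(y.min() <= combo <= y.max())
```

Whatever the seed, the printed check holds: the combination can never escape the interval [min y, max y].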

82 4.6 Solution Strategies: Bottom Line
• Given an LP with an optimal solution, at least one of the optimal solutions is an extreme point of the feasible region.
• So how about solving the problem by enumerating all the extreme points of the feasible region?

83 • Since each extreme point of the feasible region of the standard problem is a basic solution of the system of linear constraints having n+m variables and m (functional) constraints, it follows that there are at most
C(n+m, m) = (n+m)! / (n! m!)
extreme points.

85 • For large n and m this yields a very large number (Curse of Dimensionality!), e.g. for n = m = 50 this yields C(100, 50) ≈ 10^29.
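The bound can be evaluated directly (not in the notes, just arithmetic):

```python
from math import comb

n = m = 50
print(comb(n + m, m))   # about 1.01e29 candidate extreme points
```

Even at a billion basic solutions per second, exhaustive enumeration would take on the order of 10^20 seconds, which is why smarter search (the Simplex method) is needed.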

86 Most Popular Methods
• Simplex [Dantzig, 1940s]
– Visits only extreme points.
• Interior Point [Karmarkar, 1980s]
– Moves from the (relative) interior of the region, or of faces, towards the optimal solution.
• In this year’s version of the course we shall focus on the Simplex Method.