CS6234: Lecture 4. Linear Programming: LP and Simplex Algorithm [PS82]-Ch2. Hon Wai Leong, NUS (CS6234, Spring 2009). Copyright © 2009 by Leong Hon Wai.

1  CS6234: Lecture 4
- Linear Programming
- LP and Simplex Algorithm [PS82]-Ch2
- Duality [PS82]-Ch3
- Primal-Dual Algorithm [PS82]-Ch5
- Additional topics: reading/presentation by students
Lecture notes adapted from the Combinatorial Optimization course by Jorisk, Math Dept, Maastricht Univ.

2  Comments by LeongHW: First review some old transparencies…

3  Combinatorial Optimization (Masters OR), 1/28/2016. Chapter 2: The Simplex Algorithm (Linear Programming)

4  General LP. Given:
- an m × n integer matrix A, with rows a_i';
- M: the set of rows corresponding to equality constraints;
- M': the set of rows corresponding to inequality constraints;
- x ∈ R^n;
- N: the set of columns corresponding to constrained variables;
- N': the set of columns corresponding to unconstrained (free) variables;
- an m-vector b of integers and an n-vector c of integers.

5  General LP (cont.) Definition 2.1: An instance of general LP is defined by
min c'x
s.t.  a_i'x = b_i,  i ∈ M
      a_i'x ≤ b_i,  i ∈ M'
      x_j ≥ 0,  j ∈ N
      x_j free,  j ∈ N'

6  Other forms of LP.
Canonical form:  min c'x  s.t.  Ax ≥ b,  x ≥ 0.
Standard form:  min c'x  s.t.  Ax = b,  x ≥ 0.
It is possible to reformulate the general form into canonical form and standard form, and vice versa; the forms are equivalent (see [PS]).
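The canonical-to-standard reformulation is mechanical: each inequality a_i'x ≥ b_i becomes an equality by subtracting a non-negative surplus variable. A minimal sketch (the function name and the tiny example are mine, not from [PS82]):

```python
def canonical_to_standard(A, b):
    """Rewrite Ax >= b, x >= 0 as [A | -I] x' = b, x' >= 0 by
    appending one non-negative surplus variable per row."""
    m = len(A)
    A_std = [list(row) + [-1 if i == k else 0 for k in range(m)]
             for i, row in enumerate(A)]
    return A_std, list(b)

# x1 + 2*x2 >= 4 and 3*x1 + x2 >= 5 become equalities with surpluses s1, s2
A_std, b_std = canonical_to_standard([[1, 2], [3, 1]], [4, 5])
```

Row i of the result reads a_i'x − s_i = b_i with s_i ≥ 0, which is exactly the standard form above.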

7  Linear algebra basics. Definition 0.1: Two or more vectors v_1, v_2,..., v_m which are not linearly dependent, i.e., for which d_1 v_1 + d_2 v_2 + ... + d_m v_m = 0 has no solution with constants d_1, d_2,..., d_m not all zero, are said to be linearly independent. Definition 0.2: A set of vectors v_1, v_2,..., v_m is linearly independent iff the rank of the matrix V = (v_1, v_2,..., v_m) is m.

8  Linear algebra basics (cont.) Definition 0.3: A square m × m matrix of rank m is called regular or nonsingular. A square m × m matrix of rank less than m is called singular. Alternative definition: a square m × m matrix is called singular if its determinant equals zero; otherwise it is called nonsingular (regular).
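Definitions 0.2 and 0.3 both reduce to computing a rank. A small sketch of Gaussian elimination over exact rationals (my own helper, not from the notes), which avoids the round-off issues a floating-point elimination would have:

```python
from fractions import Fraction as F

def rank(M):
    """Rank of an integer matrix via Gauss-Jordan elimination
    in exact rational arithmetic."""
    M = [[F(v) for v in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue                      # no pivot in this column
        M[r], M[piv] = M[piv], M[r]
        M[r] = [v / M[r][c] for v in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c]
                M[i] = [a - f * p for a, p in zip(M[i], M[r])]
        r += 1
    return r

r1 = rank([[1, 0], [0, 1]])   # identity: nonsingular
r2 = rank([[1, 2], [2, 4]])   # second row = 2 * first row: singular
```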

9  Basis. Assumption 2.1: Matrix A has rank m. Definition 2.3: A basis of A is a linearly independent collection of m columns Q = {A_{j_1},..., A_{j_m}}; thus Q can be viewed as a nonsingular m × m matrix B. The basic solution corresponding to Q is the vector x ∈ R^n such that
x_{j_k} = k-th component of B^{-1}b, for k = 1,…,m,
x_j = 0 otherwise.

10  Finding a basic solution x.
1. Choose a set Q of m linearly independent columns of A.
2. Set all components of x corresponding to columns not in Q to zero.
3. Solve the m resulting equations to determine the remaining components of x. These are the basic variables.
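Steps 1–3 can be sketched directly; here Q holds 0-based column indices and the square system is solved exactly over the rationals (helper names are mine; the test data is the slacked system from the degeneracy example later in the deck):

```python
from fractions import Fraction as F

def solve_square(B, rhs):
    """Solve B y = rhs by Gauss-Jordan elimination, assuming B nonsingular."""
    n = len(B)
    M = [[F(v) for v in row] + [F(rhs[i])] for i, row in enumerate(B)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [a - f * p for a, p in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]

def basic_solution(A, b, Q):
    """Steps 1-3: zero the non-basic components, solve for the basic ones.
    Q lists the chosen column indices (0-based)."""
    B = [[row[j] for j in Q] for row in A]
    xB = solve_square(B, b)
    x = [F(0)] * len(A[0])
    for k, j in enumerate(Q):
        x[j] = xB[k]
    return x

# Columns x1..x7 of the slacked example system, basis {A1, A2, A3, A6}
A = [[1, 1, 1, 1, 0, 0, 0],
     [1, 0, 0, 0, 1, 0, 0],
     [0, 0, 1, 0, 0, 1, 0],
     [0, 3, 1, 0, 0, 0, 1]]
b = [4, 2, 3, 6]
x = basic_solution(A, b, [0, 1, 2, 5])
```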

11  Basic Feasible Solutions. Definition 2.4: If a basic solution is in F, then it is a basic feasible solution (bfs). Lemma 2.2: Let x be a bfs of Ax = b, x ≥ 0, corresponding to basis Q. Then there exists a cost vector c such that x is the unique optimal solution of
min c'x  s.t.  Ax = b,  x ≥ 0.

12  Lemma 2.2 (cont.) Proof: Choose c_j = 0 if A_j ∈ B, and c_j = 1 otherwise. Clearly c'x = 0, and x must be optimal, since all coefficients of c are non-negative and hence c'y ≥ 0 for every feasible y. Now consider any other optimal feasible solution y. Since c'y = 0, it must have y_j = 0 for all A_j not in B. The remaining components then satisfy B y_B = b, and B is nonsingular, so y must be equal to x. Hence x is unique.

13  Existence of a solution. Assumption 2.2: The set F of feasible points is not empty. Theorem 2.1: Under Assumptions 2.1 and 2.2, at least one bfs exists. Proof: see [PS82].

14  And finally, on basic feasible solutions. Assumption 2.3: The set of real numbers {c'x : x ∈ F} is bounded from below. Then, using Lemma 2.1 and Theorem 2.2 (both of which you may skip), one derives that x can be bounded from above, and that the cost function attains some optimal value.

15  Geometry of linear programming. Definition 0.4: A subspace S of R^d is the set of points in R^d satisfying a set of homogeneous equations:
S = {x ∈ R^d : a_{j1}x_1 + a_{j2}x_2 + ... + a_{jd}x_d = 0, j = 1,...,m}.
Definition 0.5: The dimension dim(S) of a subspace S equals the maximum number of linearly independent vectors in it; dim(S) = d − rank(A), where A = (a_{ji}) is the m × d coefficient matrix.

16  Geometry of linear programming. Definition 0.6: An affine subspace S of R^d is the set of points in R^d satisfying a set of nonhomogeneous equations:
S = {x ∈ R^d : a_{j1}x_1 + a_{j2}x_2 + ... + a_{jd}x_d = b_j, j = 1,...,m}.
Consequence: the dimension of the set F defined by the LP
min c'x  s.t.  Ax = b (A an m × d matrix),  x ≥ 0
is at most d − m.

17  Convex Polytopes. [Figure: the affine subspaces a_1 x_1 + a_2 x_2 = b, a line in the (x_1, x_2)-plane, and a_1 x_1 + a_2 x_2 + a_3 x_3 = b, a plane in (x_1, x_2, x_3)-space.]

18  Convex polytopes. Definition 0.7: An affine subspace a_1 x_1 + a_2 x_2 + ... + a_d x_d = b of dimension d − 1 is called a hyperplane. A hyperplane defines two half spaces:
a_1 x_1 + a_2 x_2 + ... + a_d x_d ≤ b
a_1 x_1 + a_2 x_2 + ... + a_d x_d ≥ b

19  Convex polytopes. A half space is a convex set. Lemma 1.1 yields that the intersection of half spaces is a convex set. Definition 0.8: If the intersection of a finite number of half spaces is bounded and non-empty, it is called a convex polytope, or simply a polytope.

20  Example polytope. Theorem 2.3: Every convex polytope is the convex hull of its vertices. Convention: we work only in the non-negative orthant, i.e., with the d inequalities of the form x_j ≥ 0.

21  Convex Polytopes and LP. A convex polytope can be seen as:
1. the convex hull of a finite set of points;
2. the intersection of finitely many half spaces;
3. a representation of an algebraic system Ax = b, x ≥ 0, where A is an m × n matrix.

22  (Cont.) Since rank(A) = m, Ax = b can be rewritten as
x_i = b_i − Σ_{j=1}^{n−m} a_{ij} x_j,  i = n−m+1,...,n.
Thus, F can be defined by
b_i − Σ_{j=1}^{n−m} a_{ij} x_j ≥ 0,  i = n−m+1,...,n
x_j ≥ 0,  j = 1,...,n−m.
The intersection of these half spaces is bounded, and therefore this system defines a convex polytope P which is a subset of R^{n−m}. Thus the set F of an LP in standard form can be viewed both as the intersection of a set of half spaces and as a convex polytope.

23  (Cont.) Conversely, let P be a polytope in R^{n−m}. Then n half spaces defining P can be expressed as
h_{i,1} x_1 + h_{i,2} x_2 + … + h_{i,n−m} x_{n−m} + g_i ≤ 0,  i = 1,...,n.
By convention, we assume that the first n − m inequalities are of the form x_i ≥ 0. Introduce m slack variables for the remaining inequalities to obtain
Ax = b,  x ≥ 0,
where A = [H | I] is an m × n matrix and x ∈ R^n.

24  (Cont.) Thus every polytope can indeed be seen as the feasible region of an LP. Any point x* = (x_1, x_2,..., x_{n−m}) in P can be transformed to x = (x_1, x_2,..., x_n) by letting
x_i = −g_i − Σ_{j=1}^{n−m} h_{ij} x_j,  i = n−m+1,...,n.  (*)
Conversely, any x = (x_1, x_2,..., x_n) ∈ F can be transformed to x* = (x_1, x_2,..., x_{n−m}) by truncation.

25  Vertex theorem. Theorem 2.4: Let P be a convex polytope, F = {x : Ax = b, x ≥ 0} the corresponding feasible set of an LP, and x* = (x_1, x_2,..., x_{n−m}) ∈ P. Then the following are equivalent:
a. The point x* is a vertex of P.
b. x* cannot be written as a strict convex combination of points of P.
c. The corresponding vector x defined in (*) is a basic feasible solution of F.
Proof: DIY, show a ⇒ b ⇒ c ⇒ a (see [PS82]).

26  A glimpse at degeneracy. Different bfs's lead to different vertices of P (see the proof of c ⇒ a), and hence lead to different bases, because they have different non-zero components. However, in the augmentation process from a ⇒ b, different bases may lead to the same bfs.

27  Example.
x_1 + x_2 + x_3 ≤ 4
x_1 ≤ 2
x_3 ≤ 3
3x_2 + x_3 ≤ 6
x_1 ≥ 0, x_2 ≥ 0, x_3 ≥ 0

28  Example (cont.) With slack variables:
x_1 + x_2 + x_3 + x_4 = 4
x_1 + x_5 = 2
x_3 + x_6 = 3
3x_2 + x_3 + x_7 = 6
First basis: A_1, A_2, A_3, A_6  ⇒  x_1 = 2, x_2 = 2, x_6 = 3 (basic variable x_3 = 0).
Second basis: A_1, A_2, A_4, A_6  ⇒  x_1 = 2, x_2 = 2, x_6 = 3 (basic variable x_4 = 0).
Two distinct bases, the same bfs.
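A quick numeric check of the claim (plain Python; the matrix is the slacked system above, columns x_1..x_7):

```python
A = [[1, 1, 1, 1, 0, 0, 0],
     [1, 0, 0, 0, 1, 0, 0],
     [0, 0, 1, 0, 0, 1, 0],
     [0, 3, 1, 0, 0, 0, 1]]
b = [4, 2, 3, 6]
x = [2, 2, 0, 0, 0, 3, 0]  # the bfs shared by bases {A1,A2,A3,A6} and {A1,A2,A4,A6}

# Feasibility: Ax = b and x >= 0
feasible = all(sum(a * xi for a, xi in zip(row, x)) == bi
               for row, bi in zip(A, b)) and all(xi >= 0 for xi in x)
# Degeneracy (Definition 2.5 on the next slide): more than n - m zeros
n, m = 7, 4
degenerate = x.count(0) > n - m
```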

29  Example (cont.)
x_1 + x_2 + x_3 ≤ 4
x_1 ≤ 2
x_3 ≤ 3
3x_2 + x_3 ≤ 6
x_1 ≥ 0, x_2 ≥ 0, x_3 ≥ 0

30  Degeneracy. Definition 2.5: A basic feasible solution is called degenerate if it contains more than n − m zeros. Theorem 2.5: If two distinct bases correspond to the same bfs x, then x is degenerate. Proof: Suppose Q and Q' determine the same bfs x. Then x has zeros in all columns not in Q, but also in the columns of Q \ Q' (these lie outside Q', so their components are zero as well). Since Q \ Q' is not empty, x has more than n − m zeros and is degenerate.

31  Optimal solutions. Theorem 2.6: There is an optimal bfs in any instance of LP. Furthermore, if q bfs's are optimal, then so are all convex combinations of these q bfs's. Proof: By Theorem 2.4 we may equivalently prove that the corresponding polytope P has an optimal vertex, and that if q vertices are optimal, then so are their convex combinations. Assume a linear cost d'x. P is closed and bounded, and therefore d'x attains its minimum on P.

32  (Cont.) Let x_0 be a solution at which this minimum is attained and let x_1,..., x_N be the vertices of P. Then, by Theorem 2.3, x_0 = Σ_{i=1}^{N} α_i x_i, where Σ_{i=1}^{N} α_i = 1 and α_i ≥ 0. Let x_j be the vertex with lowest cost. Then
d'x_0 = Σ_{i=1}^{N} α_i d'x_i ≥ d'x_j Σ_{i=1}^{N} α_i = d'x_j,
and therefore x_j is optimal. This proves the first part of the theorem. For the second part, notice that if y is a convex combination of optimal vertices x_1, x_2,..., x_q then, since the objective function is linear, y has the same objective function value and is also optimal.

33  Moving from bfs to bfs. Let x_0 = (x_10, …, x_m0) be such that
Σ_i x_i0 A_B(i) = b.    (1)
Let B be the corresponding basis, namely the set of columns {A_B(i) : i = 1,…,m}. Then every non-basic column A_j can be written as
Σ_i x_ij A_B(i) = A_j.    (2)
Subtracting θ times (2) from (1) and adding θ A_j yields
Σ_i (x_i0 − θ x_ij) A_B(i) + θ A_j = b.

34  Moving from bfs to bfs (2). Consider
Σ_i (x_i0 − θ x_ij) A_B(i) + θ A_j = b
and start increasing θ from 0. For small θ > 0 this corresponds to a feasible solution in which m + 1 variables are positive (assuming x_0 is non-degenerate). Increase θ until at least one of the (x_i0 − θ x_ij) becomes zero. We have arrived at a solution with at most m non-zeros: another bfs.
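The θ computation is the simplex min-ratio test: among rows with x_ij > 0, take the smallest ratio x_i0 / x_ij; if no such row exists, θ can grow without bound. A sketch (names mine):

```python
from fractions import Fraction

def min_ratio(x0, xj):
    """Return (theta_0, leaving_row), where
    theta_0 = min over {i : x_ij > 0} of x_i0 / x_ij.
    Returns (None, None) when no x_ij is positive (theta unbounded)."""
    candidates = [(Fraction(x0[i], xj[i]), i)
                  for i in range(len(x0)) if xj[i] > 0]
    if not candidates:
        return None, None
    return min(candidates)

# Numbers from the tableau example on slide 36:
# x0 = (1, 2, 3) and column 1 has x_i1 = (3, 2, -1)
theta, row = min_ratio([1, 2, 3], [3, 2, -1])
```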

35  Simplex Algorithm in Tableau.
3x_1 + 2x_2 + x_3 = 1
5x_1 + x_2 + x_3 + x_4 = 3
2x_1 + 5x_2 + x_3 + x_5 = 4

    b | x1  x2  x3  x4  x5
    1 |  3   2   1   0   0
    3 |  5   1   1   1   0
    4 |  2   5   1   0   1

36  Tableau.
Tableau 1 (original):
    b | x1  x2  x3  x4  x5
    1 |  3   2   1   0   0
    3 |  5   1   1   1   0
    4 |  2   5   1   0   1

Tableau 2 (basis {x3, x4, x5} diagonalized):
    b | x1  x2  x3  x4  x5
    1 |  3   2   1   0   0
    2 |  2  -1   0   1   0
    3 | -1   3   0   0   1

Tableau 3 (moving to another bfs):
     b | x1    x2    x3  x4  x5
   1/3 |  1   2/3   1/3   0   0
   4/3 |  0  -7/3  -2/3   1   0
  10/3 |  0  11/3   1/3   0   1

Bfs {x_3 = 1, x_4 = 2, x_5 = 3} = {x_i0}. From Tableau 2, A_1 = 3A_3 + 2A_4 − A_5 = Σ_i x_i1 A_B(i). To bring column 1 into the basis, θ_0 = min{1/3, 2/2} = 1/3, corresponding to row 1 (x_3) leaving the basis.
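One pivot of the tableau can be checked mechanically in exact rationals (`pivot` is my own helper). Bringing x_1 in at the row of x_3 reproduces Tableau 3:

```python
from fractions import Fraction as F

def pivot(T, r, j):
    """Divide row r by the pivot element T[r][j], then eliminate
    column j from every other row."""
    T[r] = [v / T[r][j] for v in T[r]]
    for i in range(len(T)):
        if i != r and T[i][j] != 0:
            f = T[i][j]
            T[i] = [a - f * p for a, p in zip(T[i], T[r])]

# Tableau 2 (basis {x3, x4, x5}); columns are [b, x1, x2, x3, x4, x5]
T = [[F(v) for v in row] for row in [[1, 3, 2, 1, 0, 0],
                                     [2, 2, -1, 0, 1, 0],
                                     [3, -1, 3, 0, 0, 1]]]
pivot(T, 0, 1)   # theta_0 = min{1/3, 2/2} = 1/3, so x3's row leaves
```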

37  Choosing a Profitable Column. The cost of a bfs x_0 = (x_10,…, x_m0) with basis B is given by z_0 = Σ_i x_i0 c_B(i). Now consider bringing a non-basic column A_j into the basis. Recall that A_j can be written as
A_j = Σ_i x_ij A_B(i).    (2)
Interpretation: for every unit of the variable x_j that enters the new basis, an amount x_ij of each basic variable x_B(i) must leave. The net change in cost (per unit increase of x_j) is c_j − Σ_i x_ij c_B(i).

38  Choosing a Profitable Column (2). The net change in cost (per unit increase of x_j) is
c~_j = c_j − z_j, where z_j = Σ_i x_ij c_B(i).
Call this quantity the relative cost of column j. Observations:
- It is only profitable to bring column j into the basis if c~_j < 0.
- If c~_j ≥ 0 for all j, then we have reached an optimum.
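The relative cost is one line of arithmetic. In the running example (basis {x3, x4, x5}, all unit costs), column 1 has x_i1 = (3, 2, −1), giving c~_1 = 1 − 4 = −3 (helper name is mine):

```python
def relative_cost(cj, x_col, cB):
    """c~_j = c_j - z_j, with z_j = sum_i x_ij * c_B(i)."""
    return cj - sum(x * c for x, c in zip(x_col, cB))

cbar1 = relative_cost(1, [3, 2, -1], [1, 1, 1])   # -3: profitable to enter
```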

39  Optimal solutions. Theorem 2.8 (Optimality Condition): At a bfs x_0, a pivot step in which x_j enters the basis changes the cost by the amount
θ_0 c~_j = θ_0 (c_j − z_j).
If c~ = (c − z) ≥ 0, then x_0 is optimal.

40  Tableau (Example 2.6).
min z = x_1 + x_2 + x_3 + x_4 + x_5
s.t.
3x_1 + 2x_2 + x_3 = 1
5x_1 + x_2 + x_3 + x_4 = 3
2x_1 + 5x_2 + x_3 + x_5 = 4

         | x1  x2  x3  x4  x5
  -z:  0 |  1   1   1   1   1
       1 |  3   2   1   0   0
       3 |  5   1   1   1   0
       4 |  2   5   1   0   1

41  Simplex Algorithm (Tableau Form).
Initial tableau:
         | x1  x2  x3  x4  x5
  -z:  0 |  1   1   1   1   1
       1 |  3   2   1   0   0
       3 |  5   1   1   1   0
       4 |  2   5   1   0   1

Diagonalize to get the bfs {x3, x4, x5}:
         | x1  x2  x3  x4  x5
  -z:  0 |  1   1   1   1   1
  x3=  1 |  3   2   1   0   0
  x4=  2 |  2  -1   0   1   0
  x5=  3 | -1   3   0   0   1

Determine the reduced costs by making c~_j = 0 for all basic columns:
         | x1  x2  x3  x4  x5
  -z: -6 | -3  -3   0   0   0
  x3=  1 |  3   2   1   0   0
  x4=  2 |  2  -1   0   1   0
  x5=  3 | -1   3   0   0   1

42  Simplex Algorithm (Tableau Form).
Bring column 2 into the basis: θ_0 = min{1/2, 3/3} = 1/2, so the pivot element is the 2 in row x3. Pivoting gives the new basis {x2, x4, x5}:

Before (bfs {x3, x4, x5}):
         | x1  x2  x3  x4  x5
  -z: -6 | -3  -3   0   0   0
  x3=  1 |  3   2   1   0   0
  x4=  2 |  2  -1   0   1   0
  x5=  3 | -1   3   0   0   1

After the pivot (bfs {x2, x4, x5}):
           | x1     x2  x3    x4  x5
  -z: -9/2 |   3/2   0   3/2   0   0
  x2=  1/2 |   3/2   1   1/2   0   0
  x4=  5/2 |   7/2   0   1/2   1   0
  x5=  3/2 | -11/2   0  -3/2   0   1

OPTIMAL! (since c~ ≥ 0): x_2 = 1/2, x_4 = 5/2, x_5 = 3/2, cost 9/2.
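The whole procedure fits in a short routine: diagonalize the starting basis, repeatedly pick a column with negative relative cost, apply the min-ratio test, and pivot. A sketch in exact rational arithmetic (my own code, not from [PS82]; it enters the first column with c~_j < 0 and has no anti-cycling rule, so it is only safe on instances that do not cycle). On Example 2.6, with the coefficients as in the tableau, it reaches cost 9/2:

```python
from fractions import Fraction as F

def simplex(A, b, c, basis):
    """Tableau simplex for min c'x s.t. Ax = b, x >= 0, starting from a
    given feasible basis (list of column indices, 0-based)."""
    m, n = len(A), len(A[0])
    T = [[F(v) for v in row] + [F(bi)] for row, bi in zip(A, b)]
    basis = list(basis)

    def pivot(r, j):
        T[r] = [v / T[r][j] for v in T[r]]
        for i in range(m):
            if i != r and T[i][j] != 0:
                f = T[i][j]
                T[i] = [a - f * p for a, p in zip(T[i], T[r])]

    for r, j in enumerate(basis):          # diagonalize the starting basis
        pivot(r, j)
    while True:
        # relative costs c~_j = c_j - sum_i c_B(i) x_ij
        cbar = [c[j] - sum(c[basis[i]] * T[i][j] for i in range(m))
                for j in range(n)]
        j = next((k for k in range(n) if cbar[k] < 0), None)
        if j is None:                      # optimality: all c~_j >= 0
            break
        rows = [i for i in range(m) if T[i][j] > 0]
        if not rows:
            raise ValueError("cost unbounded below")
        r = min(rows, key=lambda i: T[i][n] / T[i][j])   # min-ratio test
        pivot(r, j)
        basis[r] = j
    x = [F(0)] * n
    for i, j in enumerate(basis):
        x[j] = T[i][n]
    return x, sum(c[j] * x[j] for j in range(n))

# Example 2.6, starting bfs {x3, x4, x5}
x, z = simplex(A=[[3, 2, 1, 0, 0], [5, 1, 1, 1, 0], [2, 5, 1, 0, 1]],
               b=[1, 3, 4], c=[1, 1, 1, 1, 1], basis=[2, 3, 4])
```

This sketch happens to enter column 1 first rather than column 2 as on the slide, but it arrives at the same optimum x_2 = 1/2, x_4 = 5/2, x_5 = 3/2 after one extra pivot.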

43  Remainder of the Chapter.
Ch 2.7: Pivot Selection and Anti-Cycling
Ch 2.8: Simplex Algorithm (two-phase algorithm)

44  Thank you. Q & A

