Hon Wai Leong, NUS (CS6234, Spring 2009). Copyright © 2009 by Leong Hon Wai. CS6234: Lecture 4, Linear Programming.

Hon Wai Leong, NUS (CS6234, Spring 2009). Copyright © 2009 by Leong Hon Wai.
CS6234: Lecture 4
- Linear Programming: LP and the Simplex Algorithm [PS82] Ch. 2
- Duality [PS82] Ch. 3
- The Primal-Dual Algorithm [PS82] Ch. 5
- Additional topics: reading/presentation by students
Lecture notes adapted from the Combinatorial Optimization course by Jorisk, Math Dept, Maastricht Univ.

Combinatorial Optimization (Masters OR, 11/14/2015). Chapter 5 of [PS82]: The Primal-Dual Algorithm.

Primal-Dual Method (source: CUHK, CS5160). Schematic: from the primal-dual pair, a dual solution y defines a restricted primal; either the restricted primal finds x (succeed!), or its restricted dual z is used to construct a better dual.

Primal LP in standard form, and its dual:

(P)  min c'x   s.t.  Ax = b,  x ≥ 0
(D)  max π'b   s.t.  π'A ≤ c,  π free
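As a quick sanity check on this primal-dual pair, here is a minimal pure-Python sketch that verifies feasibility and weak duality (π'b ≤ c'x). The 1×2 example data and the helper names are made up for illustration, not taken from the slides:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def is_primal_feasible(A, b, x, tol=1e-9):
    # x >= 0 and A x = b
    return (all(xj >= -tol for xj in x)
            and all(abs(dot(row, x) - bi) <= tol for row, bi in zip(A, b)))

def is_dual_feasible(A, c, pi, tol=1e-9):
    # pi' A_j <= c_j for every column j
    m, n = len(A), len(c)
    return all(sum(A[i][j] * pi[i] for i in range(m)) <= c[j] + tol
               for j in range(n))

# Toy instance: min 2x1 + 3x2  s.t.  x1 + x2 = 4, x >= 0
A, b, c = [[1, 1]], [4], [2, 3]
x, pi = [4, 0], [2]                       # feasible primal and dual solutions
assert is_primal_feasible(A, b, x) and is_dual_feasible(A, c, pi)
assert dot(pi, b) <= dot(c, x)            # weak duality: pi'b <= c'x
print(dot(pi, b), dot(c, x))              # here both equal 8, so both optimal
```

Since π'b = c'x = 8, this particular pair is in fact optimal, previewing the strong-duality condition used below.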

Definition of the Dual. Definition 3.1: Given an LP in general form, called the primal, the dual is defined as follows:

Primal: min c'x             Dual: max π'b
a_i'x = b_i,  i ∈ M         π_i free,  i ∈ M
x_j ≥ 0,  j ∈ N             π'A_j ≤ c_j,  j ∈ N

Complementary slackness. Theorem 3.4: A pair x, π, respectively feasible in a primal-dual pair, is optimal if and only if
u_i = π_i (a_i'x − b_i) = 0  for all i,   (1)
v_j = (c_j − π'A_j) x_j = 0  for all j.   (2)
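Conditions (1) and (2) can be checked mechanically. A small sketch (the function name and the example data are illustrative, not from the slides):

```python
def complementary_slack(A, b, c, x, pi, tol=1e-9):
    """Check Theorem 3.4 for a feasible pair (x, pi)."""
    m, n = len(A), len(c)
    # u_i = pi_i * (a_i' x - b_i): automatically 0 when A x = b holds
    u = [pi[i] * (sum(A[i][j] * x[j] for j in range(n)) - b[i])
         for i in range(m)]
    # v_j = (c_j - pi' A_j) * x_j: x_j must vanish on non-tight dual constraints
    v = [(c[j] - sum(A[i][j] * pi[i] for i in range(m))) * x[j]
         for j in range(n)]
    return all(abs(t) <= tol for t in u + v)

A, b, c = [[1, 1]], [4], [2, 3]
assert complementary_slack(A, b, c, x=[4, 0], pi=[2])      # optimal pair
assert not complementary_slack(A, b, c, x=[0, 4], pi=[2])  # v_2 = (3-2)*4 != 0
```

The second call fails exactly because x_2 > 0 while its dual constraint has positive slack, which is the situation the primal-dual algorithm below is designed to avoid.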

Idea of a Primal-Dual Algorithm. Suppose we have a dual feasible solution π. If we can find a primal feasible solution x such that x_j = 0 whenever c_j − π'A_j > 0, then: (1) holds because a_i'x = b_i for all i, since x is a feasible solution to the primal; (2) holds because x_j = 0 whenever c_j − π'A_j > 0. Thus the complementary slackness relations hold, and hence x and π are optimal solutions to the primal problem (P) and the dual problem (D) respectively.

Outline of the primal-dual algorithm (schematic): (P), (D) → restricted primal (RP) → restricted dual (DRP); (RP) yields x, (DRP) yields ρ, which determines the adjustment to π.


Getting started: finding a dual feasible π. If c ≥ 0, π = 0 is a dual feasible solution. When c_j < 0 for some j, introduce a variable x_{n+1} and the additional constraint x_1 + x_2 + … + x_n + x_{n+1} = b_{m+1} (with b_{m+1} large enough). The dual (D) then becomes
max π'b + π_{m+1} b_{m+1}
s.t. π'A_j + π_{m+1} ≤ c_j for all j,  π_{m+1} ≤ 0.
The solution π_i = 0, i = 1,…,m, and π_{m+1} = min_j c_j < 0 is feasible for this dual.
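A minimal sketch of this initialization (the helper name `starting_dual` and the example cost vectors are made up for illustration):

```python
def starting_dual(c, m):
    """Return (pi, pi_m1): pi_i = 0 for i = 1..m, plus the extra dual
    variable pi_{m+1} for the added row, or None when it is not needed."""
    if all(cj >= 0 for cj in c):
        return [0.0] * m, None     # pi = 0 already satisfies pi'A_j = 0 <= c_j
    return [0.0] * m, min(c)       # pi_{m+1} = min_j c_j <= c_j for all j

pi, pm1 = starting_dual([3, -2, 5], m=2)
# Feasibility of the augmented dual: 0*A_j + pi_{m+1} <= c_j for every j
assert pm1 == -2 and all(pm1 <= cj for cj in [3, -2, 5])
pi, pm1 = starting_dual([1, 4], m=2)
assert pm1 is None and pi == [0.0, 0.0]
```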

Given a dual feasible solution π. Thus, we assume we have a dual feasible solution π. Consider the set J = { j : π'A_j = c_j }. By complementary slackness, a feasible solution x to the primal (P) with x_j = 0 for all j not in J is optimal. Hence, we aim for an x feasible in
Σ_{j=1}^n a_ij x_j = b_i,  i = 1,…,m
x_j ≥ 0,  for all j in J
x_j = 0,  for all j not in J

Restricted Primal (RP):
min ξ = Σ_{i=1}^m x_i^a
s.t. Σ_{j∈J} a_ij x_j + x_i^a = b_i,  i = 1,…,m   (RP)
x_j ≥ 0,  for all j in J
x_j = 0,  for all j not in J
x_i^a ≥ 0,  i = 1,…,m
If ξ_opt = 0, the corresponding optimal solution to (RP) yields an x which satisfies (together with π) the complementary slackness conditions. Thus, in the remainder we consider the case where ξ_opt > 0.

The dual of the restricted primal (DRP). Taking the dual of (RP) gives:
(DRP)  max π'b
s.t. π'A_j ≤ 0,  for all j in J
π_i ≤ 1,  i = 1,…,m;  π free.
We denote by ρ the optimal solution of this (DRP).

Comparison of (D) and (DRP):
(D)    max π'b   s.t. π'A_j ≤ c_j, j = 1,…,n;  π free.
(DRP)  max π'b   s.t. π'A_j ≤ 0 for all j in J;  π_i ≤ 1, i = 1,…,m;  π free.

A Primal-Dual iteration. Let θ ∈ R, and consider π* = π + θρ. Assume θ is such that π* is feasible in (D). Then π*'b = π'b + θρ'b. Now ρ'b = ξ_opt > 0, since otherwise x together with π would satisfy the complementary slackness conditions. Thus, by choosing θ > 0, π*'b > π'b, yielding an improved solution of the dual (D).

A Primal-Dual iteration. For π* to be feasible in (D), it must hold that
π*'A_j = π'A_j + θρ'A_j ≤ c_j,  j = 1,…,n.
From the definition of (DRP) it holds that ρ'A_j ≤ 0 for all j in J. Thus, if ρ'A_j ≤ 0 for all j, we can choose θ = +∞. But then (D) is unbounded, and hence (P) is infeasible (Theorem 5.1). Thus, we assume that ρ'A_j > 0 for some j not in J.

A Primal-Dual iteration. We are going to allow j to 'enter the basis' of (D). Let θ_1 be the maximum value such that π'A_j + θρ'A_j ≤ c_j for all j not in J with ρ'A_j > 0. Define π* = π + θ_1 ρ. Then π*'b = π'b + θ_1 ρ'b > π'b. The dual constraint attaining this maximum is now satisfied with equality, and then by complementary slackness the corresponding primal variable x_j may take a positive value.

The Primal-Dual Algorithm.
Input: feasible solution π for (D). Output: optimal solution x for (P), if it exists.
infeasible ← false; opt ← false
while not (infeasible or opt):
    J ← { j : π'A_j = c_j }
    solve (RP), giving solution x (and the (DRP) solution ρ)
    if ξ_opt = 0 then opt ← true
    else if ρ'A_j ≤ 0 for all j then infeasible ← true
    else π ← π + θ_1 ρ
end {while}

Admissible columns. Definition: Given a solution π to the dual (D), let J = { j : π'A_j = c_j }. Then any column A_j, j ∈ J, is called an admissible column. Theorem 5.3: A column which is in the optimal basis of (RP) and admissible in some iteration of the Primal-Dual algorithm remains admissible at the start of the next iteration.

Admissible columns. Proof: If A_j is in the optimal basis of (RP) then, by complementary slackness, d_j − ρ'A_j = 0, where d_j is the cost coefficient of x_j in the objective function of (RP). Hence d_j = 0, and thus ρ'A_j = 0. This in turn implies that
π*'A_j = π'A_j + θρ'A_j = π'A_j = c_j,
since j was an admissible column. Thus j remains an admissible column.

Consequence: In each iteration, the optimal solution x of (RP) is also a basic feasible solution of the (RP) in the next iteration. Thus subsequent (RP)s can be solved by taking the optimal solution of the previous one as the starting point for a simplex iteration. Moreover, using an anti-cycling rule, the Primal-Dual algorithm is finite (see Theorem 5.4).

A Primal-Dual method for Shortest Path. Node-arc incidence matrix A:
a_ik = +1 if arc k leaves node i; −1 if arc k enters node i; 0 otherwise.
Primal (P):  min c'f
s.t. Σ_k a_ik f_k = 0  for all i ≠ s, t
     Σ_k a_sk f_k = 1  (i = s)
     f ≥ 0
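The incidence matrix is easy to build explicitly. A small sketch (the helper name and the three-node toy graph are illustrative):

```python
def incidence_matrix(nodes, arcs):
    """a_ik = +1 if arc k leaves node i, -1 if it enters node i, 0 otherwise."""
    idx = {v: i for i, v in enumerate(nodes)}
    A = [[0] * len(arcs) for _ in nodes]
    for k, (u, v) in enumerate(arcs):
        A[idx[u]][k] = 1       # arc k leaves u
        A[idx[v]][k] = -1      # arc k enters v
    return A

nodes = ['s', 'a', 't']
arcs = [('s', 'a'), ('a', 't'), ('s', 't')]
A = incidence_matrix(nodes, arcs)
assert A == [[1, 0, 1], [-1, 1, 0], [0, -1, -1]]
# Every column sums to 0: each arc leaves one node and enters another.
assert all(sum(A[i][k] for i in range(3)) == 0 for k in range(3))
```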

Dual (D) of the shortest path problem:
max π_s
s.t. π_i − π_j ≤ c_ij for every arc (i, j);  π free;  π_t = 0.
Admissible arcs: J = { arcs (i, j) : π_i − π_j = c_ij }.

Restricted Primal (RP) of Shortest Path:
min Σ_i x_i^a
s.t. Σ_{k∈J} a_ik f_k + x_i^a = 0  for all i ≠ s, t
     Σ_{j:(s,j)∈J} f_(s,j) + x_s^a = 1
f_k ≥ 0 for all k;  f_k = 0 for all k not in J;  x_i^a ≥ 0 for all i.

Dual of the Restricted Primal (DRP):
max π_s
s.t. π_i − π_j ≤ 0  for all arcs (i, j) in J
     π_i ≤ 1  for all i
     π_t = 0;  π free.

Solving (DRP). Obviously π_s ≤ 1, and since we aim to maximize π_s we try π_s = 1. But then every node j reachable from s by an admissible arc must also have π_j = 1 (the constraint π_s − π_j ≤ 0 forces π_j ≥ 1). This argument applies recursively. Similarly, since π_t = 0, every node from which the sink can be (recursively) reached by admissible arcs must have π_i = 0.

Optimal solution ρ to (DRP). If the sink can be reached from the source by a path of admissible arcs, then π_s ≤ π_t = 0 along that path, hence (RP) has optimal value zero and we are done. (Figure: a graph with ρ = 1 on the source side and ρ = 0 on the nodes that can reach the sink.)

Solving (DRP), continued. Let θ_1 be the maximum value such that π'A_k + θρ'A_k ≤ c_k for all k not in J with ρ'A_k > 0. For an arc k = (i, j), ρ'A_k = ρ_i − ρ_j, so we must consider the arcs (i, j) with ρ_i − ρ_j = 1. Therefore θ_1 is the minimum over the aforementioned (i, j) of c_ij − (π_i − π_j). Interpretation on the next slide.
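This θ_1 rule codes directly. A sketch using arc costs recoverable from the worked example on the following slides (c_s2 = 1, c_24 = 1, c_43 = 2, c_3t = 2; the figure's full cost data was lost in transcription, so treat these as assumptions):

```python
def theta1(arcs, costs, pi, rho):
    """theta_1 = min over non-admissible arcs (i, j) with rho_i - rho_j = 1
    of c_ij - (pi_i - pi_j); None signals that (D) is unbounded."""
    candidates = [costs[(i, j)] - (pi[i] - pi[j])
                  for (i, j) in arcs
                  if rho[i] - rho[j] == 1                  # rho'A_k > 0
                  and pi[i] - pi[j] != costs[(i, j)]]      # k not in J
    return min(candidates) if candidates else None

# First iteration of the example: pi = 0 everywhere, rho = 1 except rho_t = 0.
costs = {('s', '2'): 1, ('2', '4'): 1, ('4', '3'): 2, ('3', 't'): 2}
pi = dict.fromkeys(['s', '1', '2', '3', '4', 't'], 0)
rho = {v: 1 for v in pi}
rho['t'] = 0
assert theta1(costs.keys(), costs, pi, rho) == 2   # achieved by arc (3, t)
```

Only (3, t) crosses from ρ = 1 to ρ = 0 at this point, which is why the first dual update below raises every π_i except π_t by 2.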

Interpretation of a Primal-Dual iteration. We assume c_ij > 0 for all i, j. Then the solution π_i = 0 for i = 1,…,m is feasible in (D), and is selected as the starting solution. We select a non-admissible arc from a node i with ρ_i = 1 (green or red in the figures) to a node j with ρ_j = 0 (yellow). Since arc (i, j) is non-admissible, it must hold that π_i − π_j < c_ij.

Example (figure: a directed graph from source s to sink t; the arc costs are shown in the figure).

Initial dual feasible solution: π_s = 0, π_1 = 0, π_2 = 0, π_3 = 0, π_4 = 0, π_t = 0. Admissible arcs: ∅.

Initial solution of (RP): x_s^a = 1, x_i^a = 0 for all i ≠ s; there are no variables f_k, since no column A_k is yet admissible.

First (DRP): ρ_s = 1, ρ_1 = 1, ρ_2 = 1, ρ_3 = 1, ρ_4 = 1, ρ_t = 0. θ_1 = c_3t + π_t − π_3 = 2.

Next solution of (RP): x_s^a = 1, x_i^a = 0 for all i ≠ s; there is now a variable f_k for the admissible column A_k of arc (3, t), but the primal solution remains unchanged (and will remain unchanged until some arc leaving s becomes admissible).

Next dual feasible solution: π_s = 2, π_1 = 2, π_2 = 2, π_3 = 2, π_4 = 2, π_t = 0. Admissible arcs: (3, t).

Next (DRP): ρ_s = 1, ρ_1 = 1, ρ_2 = 1, ρ_3 = 0, ρ_4 = 1, ρ_t = 0. θ_1 = c_43 + π_3 − π_4 = 2.

Next dual feasible solution: π_s = 4, π_1 = 4, π_2 = 4, π_3 = 2, π_4 = 4, π_t = 0. Admissible arcs: (3, t), (4, 3).

Next (DRP): ρ_s = 1, ρ_1 = 1, ρ_2 = 1, ρ_3 = 0, ρ_4 = 0, ρ_t = 0. θ_1 = c_24 + π_4 − π_2 = c_13 + π_3 − π_1 = 1.

Next dual feasible solution: π_s = 5, π_1 = 5, π_2 = 5, π_3 = 2, π_4 = 4, π_t = 0. Admissible arcs: (3, t), (4, 3), (1, 3), (2, 4).

Next (DRP): ρ_s = 1, ρ_1 = 0, ρ_2 = 0, ρ_3 = 0, ρ_4 = 0, ρ_t = 0. θ_1 = c_s2 + π_2 − π_s = 1.

Next dual feasible solution: π_s = 6, π_1 = 5, π_2 = 5, π_3 = 2, π_4 = 4, π_t = 0. Admissible arcs: (3, t), (4, 3), (1, 3), (2, 4), (s, 2).

Next solution of (RP): x_i^a = 0 for all i; f_s2 = 1, f_24 = 1, f_43 = 1, f_3t = 1, all other f_ij = 0. The solution value is 0, so f is feasible in the primal (P), and since c'f = π'b, it is optimal in (P)!
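The whole run can be reproduced in pure Python. The figure's arc costs were lost in transcription; the costs below (c_s2 = 1, c_s1 = 5, c_13 = 3, c_24 = 1, c_43 = 2, c_3t = 2) are an assumed reconstruction consistent with every θ_1 value computed in the slides, so each pass of the outer loop matches one (DRP) slide and the successive π_s values are 2, 4, 5, 6:

```python
def pd_shortest_path(nodes, costs, s, t):
    """Primal-dual shortest path: grow pi until an admissible s-t path exists.
    Assumes positive arc costs, so pi = 0 is a feasible start and admissible
    arcs cannot form cycles. Returns (pi, path) or (None, None) if no path."""
    pi = dict.fromkeys(nodes, 0)
    while True:
        # Admissible arcs J = {(i, j) : pi_i - pi_j = c_ij}
        J = {(i, j) for (i, j), c in costs.items() if pi[i] - pi[j] == c}
        # Solve (DRP): rho_i = 0 iff i can reach t along admissible arcs
        reach, changed = {t}, True
        while changed:
            changed = False
            for (i, j) in J:
                if j in reach and i not in reach:
                    reach.add(i)
                    changed = True
        if s in reach:                    # xi_opt = 0: recover an optimal path
            path, v = [s], s
            while v != t:
                v = next(j for (i, j) in J if i == v and j in reach)
                path.append(v)
            return pi, path
        rho = {v: 0 if v in reach else 1 for v in nodes}
        # theta_1 = min over arcs crossing from rho = 1 to rho = 0
        cand = [c - (pi[i] - pi[j]) for (i, j), c in costs.items()
                if rho[i] - rho[j] == 1]
        if not cand:
            return None, None             # (D) unbounded: (P) infeasible
        theta = min(cand)
        for v in nodes:                   # dual update pi <- pi + theta * rho
            pi[v] += theta * rho[v]

nodes = ['s', '1', '2', '3', '4', 't']
costs = {('s', '2'): 1, ('s', '1'): 5, ('1', '3'): 3,
         ('2', '4'): 1, ('4', '3'): 2, ('3', 't'): 2}
pi, path = pd_shortest_path(nodes, costs, 's', 't')
assert pi['s'] == 6 and path == ['s', '2', '4', '3', 't']
```

With these assumed costs, the final dual value π_s = 6 equals the cost of the path s-2-4-3-t found by the flow f above.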

Combinatorialization. (P) has an integer cost vector c and a (0,1) right-hand side. (RP) has a (0,1) cost vector and right-hand side b. Therefore in (RP) the complexity comes not from the numbers but from the combinatorics. Similarly, (D) has a (0,1) vector b and an integer right-hand side c; (DRP) has a (0,1) cost vector and right-hand side. Therefore in (DRP), too, the complexity comes not from the numbers but from the combinatorics. The primal-dual algorithm solves a problem with 'numerical' complexity by repeatedly solving a problem which has only 'combinatorial' complexity. This concept, which Papadimitriou calls 'combinatorialization', is frequently encountered in combinatorial optimization.

Additional notes from [CUHK]

Primal-Dual Method (Lecture 20: March 28). Schematic: a dual solution y defines a restricted primal; either the restricted primal finds x (succeed!), or its restricted dual z is used to construct a better dual.

Primal-Dual Programs. Primal program: min c'x s.t. Ax ≥ b, x ≥ 0. Dual program: max b'y s.t. A'y ≤ c, y ≥ 0. If there is a feasible primal solution x and a feasible dual solution y with c'x = b'y, then both are optimal solutions. Primal-Dual Method: an algorithm to construct such a pair of solutions.

Optimality Condition. Suppose there is a feasible primal solution x and a feasible dual solution y. How do we check that they are optimal solutions? By weak duality, c'x ≥ (A'y)'x = y'(Ax) ≥ y'b; both are optimal exactly when we avoid strict inequality in every step of this chain.


Complementary Slackness Conditions.
Primal complementary slackness condition: x_j > 0 ⇒ Σ_i a_ij y_i = c_j.
Dual complementary slackness condition: y_i > 0 ⇒ Σ_j a_ij x_j = b_i.

Primal-Dual Method. Start from a feasible dual solution. Search for a feasible primal solution satisfying the complementary slackness conditions. If there is none, improve the objective value of the dual solution. Repeat.

Restricted Primal. Given a feasible dual solution y, how do we search for a feasible primal solution x that satisfies the complementary slackness conditions (x_j > 0 only where the j-th dual constraint is tight; y_i > 0 only where the i-th primal constraint is tight)? Formulate this search as an LP itself!

Restricted Primal. Let J be the set of tight dual constraints. If j is not in J, then we need x_j to be zero. If y_i > 0, then we need the i-th primal constraint to hold with equality: Σ_j a_ij x_j = b_i. The restricted primal minimizes the total violation of these requirements; if the optimum is zero, we are done.

Restricted Dual. Suppose the objective of the restricted primal is not zero; what do we do? Then we want to find a better dual solution. The dual of the restricted primal (the restricted dual) supplies the improving direction z.

Restricted Dual. Since the restricted dual has nonzero optimal value, consider y + εz as the new dual solution: for small enough ε > 0 it is still feasible, and it has a larger objective value.

General Framework. Schematic: a dual solution y defines a restricted primal; either the restricted primal finds x (succeed!), or its restricted dual z is used to construct a better dual.

Bipartite Matching. Primal complementary slackness condition: a matched edge must be tight, i.e., if x_uv > 0 then y_u + y_v = w_uv.

Hungarian Method. Start from a feasible (fractional) vertex cover y. Find a perfect matching using tight edges only. If the restricted primal has nonzero optimum (no such perfect matching exists), consider y − εz, a better dual, and repeat.

Remarks. It is not a polynomial-time method in general. It reduces the weighted problem to the unweighted problem, so that the restricted primal linear program is easier to solve, and often there are combinatorial algorithms to solve it. Many combinatorial algorithms (max-flow, matching, min-cost flow, shortest path, spanning tree, ...) can be derived within this framework.

Approximation Algorithms. How do we adapt the primal-dual method for approximation algorithms? We want to construct a primal feasible solution x and a dual feasible solution y so that c'x and b'y are "close", i.e., we avoid losing too much in each inequality of the weak-duality chain.

Approximate Optimality Condition. Instead of requiring that strict inequality be avoided everywhere in the weak-duality chain, allow each inequality to be loose by at most a bounded factor.

Approximate Complementary Slackness Conditions.
Primal condition, relaxed by a factor α ≥ 1: x_j > 0 ⇒ c_j / α ≤ Σ_i a_ij y_i ≤ c_j.
Dual condition, relaxed by a factor β ≥ 1: y_i > 0 ⇒ b_i ≤ Σ_j a_ij x_j ≤ β b_i.
Together these imply c'x ≤ αβ · b'y; note this is only a sufficient condition for an αβ-approximation.

Vertex Cover. LP relaxation: min Σ_v c_v x_v s.t. x_u + x_v ≥ 1 for every edge (u, v), x ≥ 0. Dual: max Σ_e y_e s.t. Σ_{e incident to v} y_e ≤ c_v, y ≥ 0.
Primal complementary slackness condition: x_v > 0 ⇒ Σ_{e incident to v} y_e = c_v.
Dual complementary slackness condition: y_e > 0 ⇒ x_u + x_v = 1.

Approximate Optimality Conditions for vertex cover. Keep the primal condition exactly: pick only vertices that go tight. Relax the dual condition to x_u + x_v ≤ 2, i.e., a paying edge may have both endpoints in the cover rather than exactly one; this relaxed condition holds trivially ("this is nothing"), so we need only focus on the primal condition. Together they imply a 2-approximation.

Algorithm (2-approximation for vertex cover). Initially x = 0, y = 0. While there is an uncovered edge: pick an uncovered edge e and raise y_e until some vertices go tight; add all tight vertices to the vertex cover. Output the vertex cover x. Familiar? When every vertex has the same cost, this is the greedy (maximal) matching 2-approximation.
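A direct transcription of this algorithm in pure Python (the helper name `pd_vertex_cover` and the unit-cost path graph are illustrative):

```python
def pd_vertex_cover(vertices, edges, cost):
    """Raise y_e on each uncovered edge until an endpoint goes tight;
    add all tight vertices to the cover. Returns (cover, y)."""
    slack = dict(cost)                 # c_v minus the y-load on edges at v
    y = dict.fromkeys(edges, 0)
    cover = set()
    for (u, v) in edges:
        if u in cover or v in cover:
            continue                   # edge already covered
        delta = min(slack[u], slack[v])
        y[(u, v)] = delta              # raise y_e until a vertex goes tight
        slack[u] -= delta
        slack[v] -= delta
        cover.update(w for w in (u, v) if slack[w] == 0)
    return cover, y

vertices = ['a', 'b', 'c', 'd']
edges = [('a', 'b'), ('b', 'c'), ('c', 'd')]
cost = dict.fromkeys(vertices, 1)
cover, y = pd_vertex_cover(vertices, edges, cost)
assert all(u in cover or v in cover for (u, v) in edges)   # feasible cover
# Primal slackness exact, dual relaxed by 2:
# cost(cover) <= 2 * sum(y) <= 2 * OPT
assert sum(cost[v] for v in cover) <= 2 * sum(y.values())
```

With unit costs, the edges on which y is raised form a maximal matching, and the cover consists of their endpoints, exactly the greedy matching 2-approximation the slide mentions.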

Thank you. Q & A