Approximation Algorithms based on linear programming.


Integer Programming (IP)
- Integer Programming is simply Linear Programming with an added condition: all variables must be integers.
- Many problems can be stated as integer programs. For example, the Set Cover problem can be stated as an integer program.

Weighted Set Cover as IP
- A set cover of a set X is any collection of subsets of X whose union is X.
- The weighted set cover problem: given a weight w_i for each subset S_i, find a set cover that minimizes the total weight.

- For each subset S_i we introduce an integer variable y_i, 0 or 1, that is 1 if the subset S_i is part of the cover, and 0 if not.
- Then we can state weighted set cover as the IP:
      minimize    Σ_i w_i y_i
      subject to  Σ_{i : x ∈ S_i} y_i ≥ 1   for each element x ∈ X
                  y_i ∈ {0, 1}              for each subset S_i

Relaxed IP to LP for WSC
The LP relaxation replaces the integrality constraint y_i ∈ {0, 1} with 0 ≤ y_i ≤ 1; the objective and the covering constraints stay the same.

Weighted vertex cover as IP
- Given an undirected graph G = (V, E) in which each vertex v ∈ V has an associated positive weight w(v). For any vertex cover V' ⊆ V, we define the weight of the vertex cover as w(V') = Σ_{v ∈ V'} w(v). The goal is to find a vertex cover of minimum weight.

- Suppose that we associate a variable x(v) with each vertex v ∈ V, and let us require that x(v) ∈ {0, 1} for each v ∈ V. This view gives rise to the following 0-1 integer program for finding a minimum-weight vertex cover (WVC):
      minimize    Σ_{v ∈ V} w(v) x(v)
      subject to  x(u) + x(v) ≥ 1   for each edge (u, v) ∈ E
                  x(v) ∈ {0, 1}     for each v ∈ V

- Linear programming relaxation:
      minimize    Σ_{v ∈ V} w(v) x(v)
      subject to  x(u) + x(v) ≥ 1   for each edge (u, v) ∈ E
                  0 ≤ x(v) ≤ 1      for each v ∈ V

- Any feasible solution to the IP is also a feasible solution to the LP. Therefore, the value of an optimal solution to the LP is a lower bound on the value of an optimal solution to the IP.

Using the Relaxed LP as an Approximation
- Let OPT_LP be the optimum cost of the LP and OPT the optimum cost of the IP, so OPT_LP ≤ OPT.
- If we can find a solution of cost no more than R·OPT_LP, then its cost is no more than R·OPT.
- If that solution is also integral, then we have an R-approximation of the IP!
- If not: maybe we can convert it to an integral one, without raising the cost too much.


Basic Steps
1. Write the IP describing the problem.
2. Relax the IP to get an LP.
3. Find an optimal solution to the LP.
4. Transform the fractional solution into an integral solution by some (possibly tricky) strategy.
5. Interpret the integral values as a solution to the problem.

- Note that all steps except one are easy, i.e., can be done in polynomial time.
- The tricky part is step 4. We will see three strategies for doing it: rounding, randomized rounding, and the primal-dual schema.
- The methods are illustrated on set cover and vertex cover in particular, but they apply to many other IPs.

Approximation by LP for WVC
ApproxMinWVC(G, w)
1  C ← Ø
2  compute x̄, an optimal solution to the linear program
3  for each v ∈ V do
4      if x̄(v) ≥ 1/2 then
5          C ← C ∪ {v}
6  return C
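As a concrete sketch of this algorithm in Python (names and input conventions are ours, not from the slides): instead of calling an LP solver, the code below exploits the known fact that the vertex-cover LP always has a half-integral optimal solution, so on tiny instances an optimal fractional solution can be found by brute force over values {0, 1/2, 1}, and then rounded at 1/2.

```python
from itertools import product

def lp_round_vertex_cover(vertices, edges, weight):
    """LP-rounding 2-approximation for weighted vertex cover (sketch).

    The vertex-cover LP has a half-integral optimum, so for small
    instances we can find an optimal LP solution by brute force over
    assignments in {0, 1/2, 1} instead of using an LP solver.
    """
    best, best_cost = None, float("inf")
    for vals in product((0.0, 0.5, 1.0), repeat=len(vertices)):
        x = dict(zip(vertices, vals))
        # LP feasibility: x(u) + x(v) >= 1 for every edge (u, v).
        if all(x[u] + x[v] >= 1 for (u, v) in edges):
            cost = sum(weight[v] * x[v] for v in vertices)
            if cost < best_cost:
                best, best_cost = x, cost
    # Rounding step: keep every vertex whose LP value is >= 1/2.
    return {v for v in vertices if best[v] >= 0.5}
```

On the path a-b-c with unit weights the LP optimum puts 1 on the middle vertex, so rounding returns {b}, which is also the optimal cover; on a unit-weight triangle the LP optimum is all 1/2 (cost 3/2), and rounding returns all three vertices (cost 3, within factor 2 of the optimum 2).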

- Theorem: ApproxMinWVC is a polynomial-time 2-approximation algorithm for the minimum-WVC problem.
Proof. Because there is a polynomial-time algorithm to solve the linear program in line 2, and because the for loop of lines 3-5 runs in polynomial time, ApproxMinWVC is a polynomial-time algorithm. Now we show that ApproxMinWVC is a 2-approximation algorithm.

Let OPT_IP(I) be the value of an optimal solution to the minimum-weight vertex-cover problem, and let OPT_LP(I) be the value of an optimal solution to the linear program. Since an optimal vertex cover is a feasible solution to the linear program, OPT_LP(I) must be a lower bound on OPT_IP(I), that is, OPT_LP(I) ≤ OPT_IP(I).

Next, we claim that by rounding the fractional values of the variables we produce a set C that is a vertex cover and satisfies w(C) ≤ 2·OPT_LP(I). To see that C is a vertex cover, consider any edge (u, v) ∈ E. By the constraint x(u) + x(v) ≥ 1, at least one of x̄(u) and x̄(v) is at least 1/2. Therefore, at least one of u and v will be included in the vertex cover, and so every edge will be covered.

Now we consider the weight of the cover. We have
    w(C) = Σ_{v ∈ C} w(v) ≤ Σ_{v ∈ C} w(v) · 2x̄(v) ≤ 2 Σ_{v ∈ V} w(v) x̄(v) = 2·OPT_LP(I),
where the first inequality holds because 2x̄(v) ≥ 1 for every v ∈ C.

So, writing A(I) = w(C) for the cost of the algorithm's solution, we have (1/2)·A(I) ≤ OPT_LP(I) ≤ OPT_IP(I). Namely, R = 2.

Approximation by LP for WSC
- Definition of f:
      f = max_{x ∈ X} |{ i : x ∈ S_i }|
- In other words, f is the frequency of the most popular element.

Rounding
ApproxMinWSC(X, F, w)
1  C ← Ø
2  compute ȳ, an optimal solution to the linear program
3  for each S_j do
4      if ȳ_j ≥ 1/f then
5          C ← C ∪ {S_j}
6  return C
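A minimal Python sketch of the rounding step (the fractional solution `y_frac` is assumed to come from an external LP solver; the function name and data layout are ours):

```python
def round_set_cover(sets, y_frac):
    """1/f-threshold rounding for weighted set cover (sketch).

    `sets` maps each subset's name to its set of elements; `y_frac`
    maps each name to its fractional LP value (assumed optimal, e.g.
    computed by an external LP solver).  f is the maximum element
    frequency.
    """
    freq = {}
    for members in sets.values():
        for e in members:
            freq[e] = freq.get(e, 0) + 1
    f = max(freq.values())
    # Keep every subset whose fractional value reaches the 1/f threshold.
    return [name for name in sets if y_frac[name] >= 1.0 / f]
```

For example, with sets S1 = {1, 2}, S2 = {2, 3}, S3 = {3, 4} every element appears in at most two sets, so f = 2 and any subset with fractional value at least 1/2 is selected.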

- Claim: the rounding method produces a set cover.
Proof. Assume by contradiction that there is an element x_i that is not covered. Then, by the rounding rule, every subset S_j containing x_i has ȳ_j < 1/f. Since x_i appears in at most f subsets,
    Σ_{j : x_i ∈ S_j} ȳ_j < f · (1/f) = 1.
But this violates the LP constraint Σ_{j : x_i ∈ S_j} ȳ_j ≥ 1.

Theorem: Rounding is an f-approximation algorithm.
Proof. The algorithm runs in polynomial time. Furthermore,
    w(C) = Σ_{S_j ∈ C} w_j ≤ Σ_{S_j ∈ C} w_j · f ȳ_j ≤ f Σ_j w_j ȳ_j = f·OPT_LP ≤ f·OPT.
- The first inequality holds since ȳ_j ≥ 1/f, i.e., f ȳ_j ≥ 1, for every S_j ∈ C.

Randomized rounding
- Maximum Satisfiability: solve a linear program to determine the coin biases.
- Satisfiability vs. MAX-SAT:
  Satisfiability: decision problem, NP-complete.
  MAX-SAT: optimization problem, NP-hard.
- Let P_j = indices of variables that occur unnegated in clause C_j, and N_j = indices of variables that occur negated in clause C_j.

IP formulation and LP relaxation:
    maximize    Σ_j w_j z_j
    subject to  Σ_{i ∈ P_j} y_i + Σ_{i ∈ N_j} (1 − y_i) ≥ z_j   for each clause C_j
                y_i, z_j ∈ {0, 1}
The LP relaxation relaxes y_i, z_j ∈ {0, 1} to y_i, z_j ∈ [0, 1]. Let y*_i, z*_j denote the optimal solution obtained by solving the LP.
Rounding step: independently set x_i = 1 with probability y*_i.

RandomRoundingLP(I)
1  formulate MAX-SAT as an IP
2  relax the IP to an LP
3  compute an optimal solution (y*, z*) to the LP
4  for each variable x_i do
5      set x_i ← 1 with probability y*_i
6      set x_i ← 0 with probability 1 − y*_i
7  return x
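The rounding step can be sketched in Python as follows (a sketch under our own conventions: clauses are lists of DIMACS-style signed integers, e.g. [1, -2] means (x1 OR NOT x2), and `y_star` is assumed to be the fractional solution returned by an external LP solver):

```python
import random

def randomized_rounding(y_star, clauses, seed=0):
    """Randomized-rounding step for MAX-SAT (sketch).

    `y_star` maps each variable index to its fractional LP value;
    `clauses` is a list of clauses as signed variable indices.
    Returns the rounded assignment and the number of satisfied clauses.
    """
    rng = random.Random(seed)
    # Set each variable to True with probability equal to its LP value.
    x = {v: rng.random() < p for v, p in y_star.items()}
    satisfied = sum(
        any(x[abs(l)] if l > 0 else not x[abs(l)] for l in clause)
        for clause in clauses
    )
    return x, satisfied
```

With y* values of exactly 0 or 1 the rounding is deterministic, which makes the behaviour easy to check by hand; in general each run is a random trial whose expected satisfied weight obeys the bound proved below.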

- Theorem: the algorithm is an e/(e−1) ≈ 1.582 approximation algorithm for MAX-SAT.
Fact 1. For any nonnegative a_1, …, a_k, the geometric mean is not greater than the arithmetic mean, i.e.
    (a_1 a_2 ⋯ a_k)^{1/k} ≤ (a_1 + ⋯ + a_k)/k.
Fact 2. If f is a concave function on [a, b], then f lies above the chord through (a, f(a)) and (b, f(b)): for any x ∈ [a, b],
    f(x) ≥ f(a) + (f(b) − f(a)) · (x − a)/(b − a).

Proof. Consider an arbitrary clause C_j of length k. The clause is unsatisfied only if every unnegated variable is set to 0 and every negated variable is set to 1, so
    Pr[C_j not satisfied] = Π_{i ∈ P_j} (1 − y*_i) · Π_{i ∈ N_j} y*_i
                          ≤ [ (Σ_{i ∈ P_j} (1 − y*_i) + Σ_{i ∈ N_j} y*_i) / k ]^k        (Fact 1)
                          = [ 1 − (Σ_{i ∈ P_j} y*_i + Σ_{i ∈ N_j} (1 − y*_i)) / k ]^k
                          ≤ (1 − z*_j / k)^k,
where the last step uses the LP constraint for clause C_j.

Since g(z) = 1 − (1 − z/k)^k is a concave function on [0, 1] with g(0) = 0 and g(1) = 1 − (1 − 1/k)^k, Fact 2 gives
    Pr[C_j satisfied] ≥ 1 − (1 − z*_j/k)^k = g(z*_j) ≥ [1 − (1 − 1/k)^k] · z*_j.

Since 1 − (1 − 1/k)^k ≥ 1 − 1/e for every k ≥ 1, it follows that
    Pr[C_j satisfied] ≥ (1 − 1/e) · z*_j.

Let A(I) = weight of clauses that are satisfied. By linearity of expectation,
    E[A(I)] = Σ_j w_j · Pr[C_j satisfied] ≥ (1 − 1/e) Σ_j w_j z*_j = (1 − 1/e)·OPT_LP ≥ (1 − 1/e)·OPT.
Hence the algorithm is an e/(e−1)-approximation.

- Corollary: if each clause has length at most l, then E[A(I)] ≥ [1 − (1 − 1/l)^l] · OPT.
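A quick numeric sanity check (not from the slides) of the per-clause LP-rounding guarantee 1 − (1 − 1/k)^k: it decreases with the clause length k but never drops below 1 − 1/e, which is exactly what the e/(e−1) theorem above relies on.

```python
import math

# The LP-rounding guarantee 1 - (1 - 1/k)^k decreases toward 1 - 1/e
# as the clause length k grows, so it is at least 1 - 1/e for every k.
bounds = [1 - (1 - 1 / k) ** k for k in range(1, 1001)]
assert all(b >= 1 - 1 / math.e for b in bounds)
assert bounds == sorted(bounds, reverse=True)  # monotonically decreasing
```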

Maximum Satisfiability: Best of Two
- Observation: the two approximation algorithms are complementary.
  - Johnson's algorithm works best when clauses are long.
  - The LP rounding algorithm works best when clauses are short.

  Expected fraction of a length-k clause satisfied:
      Johnson(I):            1 − 2^{−k}
      RandomRoundingLP(I):   1 − (1 − 1/k)^k

- How can we exploit this?
  - Run both algorithms and output the better of the two solutions.
  - A re-analysis gives a 4/3-approximation algorithm.
  - Better performance than either algorithm individually!

Max-k-SATBestoftwo(I)
1  (x1, W1) ← Johnson(I)
2  (x2, W2) ← RandomRoundingLP(I)
3  if W1 ≥ W2 then
4      return x1
5  else
6      return x2
(Here W1 and W2 denote the weights of the clauses satisfied by the two assignments.)

- Lemma. For any integer k ≥ 1,
      (1 − 2^{−k}) + (1 − (1 − 1/k)^k) ≥ 3/2.
Proof. For k = 1 the left side is 1/2 + 1 = 3/2, and for k = 2 it is 3/4 + 3/4 = 3/2. For k ≥ 3, 1 − 2^{−k} ≥ 7/8 and 1 − (1 − 1/k)^k ≥ 1 − 1/e > 0.63, so the sum exceeds 3/2.
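A quick numeric check of the lemma used here — that for every clause length k the average of the two per-clause guarantees, [(1 − 2^{−k}) + (1 − (1 − 1/k)^k)] / 2, is at least 3/4 (`best_of_two_bound` is a name introduced for this sketch):

```python
def best_of_two_bound(k):
    """Average of the two per-clause guarantees for clauses of length k."""
    johnson = 1 - 2 ** (-k)            # random-assignment guarantee
    lp_round = 1 - (1 - 1 / k) ** k    # LP-rounding guarantee
    return (johnson + lp_round) / 2

# The lemma says the average is at least 3/4 for every clause length;
# equality holds at k = 1 and k = 2.
assert all(best_of_two_bound(k) >= 0.75 for k in range(1, 1000))
assert best_of_two_bound(1) == 0.75 and best_of_two_bound(2) == 0.75
```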

- Theorem. The Max-k-SATBestoftwo(I) algorithm is a 4/3-approximation algorithm for MAX-SAT.
Proof. Let W1 and W2 be the weights of the clauses satisfied by Johnson(I) and RandomRoundingLP(I), and let k_j be the length of clause C_j. Using z*_j ≤ 1 for the Johnson term,
    E[max(W1, W2)] ≥ E[(W1 + W2)/2]
                   ≥ (1/2) Σ_j w_j [ (1 − 2^{−k_j}) + (1 − (1 − 1/k_j)^{k_j}) ] z*_j
                   ≥ (3/4) Σ_j w_j z*_j               (by the Lemma)
                   = (3/4)·OPT_LP ≥ (3/4)·OPT.

Duality
- Duality is a very important property. In an optimization problem, the identification of a dual problem is almost always coupled with the discovery of a polynomial-time algorithm. Duality is also powerful in its ability to provide a proof that a solution is indeed optimal.

- Given a linear program (LP) in which the objective is to maximize, we shall describe how to formulate a dual linear program (DLP) in which the objective is to minimize and whose optimal value is identical to that of the original linear program. When referring to dual linear programs, we call the original linear program the primal.
- Given a primal LP, if DLP is the dual LP of LP, then LP is the dual LP of DLP.

- Given a primal linear program (LP) in standard form,
      maximize    Σ_{j=1}^n c_j x_j
      subject to  Σ_{j=1}^n a_ij x_j ≤ b_i   for i = 1, …, m
                  x_j ≥ 0                    for j = 1, …, n,
  we define the dual linear program (DLP) as
      minimize    Σ_{i=1}^m b_i y_i
      subject to  Σ_{i=1}^m a_ij y_i ≥ c_j   for j = 1, …, n
                  y_i ≥ 0                    for i = 1, …, m.

Primal-dual
For a minimization problem such as vertex cover the pair has the symmetric form:
    Primal:  minimize  Σ_j c_j x_j  subject to  Σ_j a_ij x_j ≥ b_i for all i,  x_j ≥ 0
    Dual:    maximize  Σ_i b_i y_i  subject to  Σ_i a_ij y_i ≤ c_j for all j,  y_i ≥ 0

- Now suppose we want to develop a lower bound on the optimal value of this minimization LP. One way to do this is to derive constraints that "look like" Σ_j c_j x_j ≥ Z, for some Z, using the constraints in the LP. To do this, note that any nonnegative combination of constraints from the LP is also a valid constraint. Therefore, if we put nonnegative multipliers y_i on the constraints, we get a new constraint
      Σ_i y_i (Σ_j a_ij x_j) ≥ Σ_i y_i b_i,
  which is satisfied by all feasible solutions to the primal LP.

- That is, if y_i ≥ 0 for all i, then
      Σ_i y_i Σ_j a_ij x_j ≥ Σ_i b_i y_i.
- Note that we require the y_i's to be nonnegative, because multiplying an inequality by a negative number switches the direction of the inequality. Considering the above inequality, if we can also ensure that its left-hand side is at most Σ_j c_j x_j, we will obtain a lower bound of Σ_i b_i y_i on the optimal value of the primal LP.

Switching the order of summation, we get
    Σ_i y_i Σ_j a_ij x_j = Σ_j (Σ_i a_ij y_i) x_j,
and we can ensure this sum is at most Σ_j c_j x_j by requiring Σ_i a_ij y_i ≤ c_j for every j (recall that x_j ≥ 0). Putting it all together: if the y_i are nonnegative and satisfy these constraints, then
    Σ_j c_j x_j ≥ Σ_i b_i y_i
for every primal-feasible x. This is exactly weak duality.
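The derivation above can be checked numerically on a tiny instance (the vertex-cover LP of a 3-vertex path with unit weights; the numbers are ours): any dual-feasible y certifies the lower bound b·y ≤ c·x for every primal-feasible x.

```python
# Primal: min c.x  s.t.  A x >= b, x >= 0.  Rows of A are the edges of
# the path a-b-c, columns are the vertices a, b, c (unit weights).
A = [[1, 1, 0],   # edge (a, b): x_a + x_b >= 1
     [0, 1, 1]]   # edge (b, c): x_b + x_c >= 1
b = [1, 1]
c = [1, 1, 1]

x = [0, 1, 0]     # primal feasible: taking vertex b covers both edges
y = [0.5, 0.5]    # dual feasible: column sums 0.5, 1.0, 0.5 are all <= c_j

# Check primal feasibility (A x >= b) and dual feasibility (A^T y <= c).
assert all(sum(aij * xj for aij, xj in zip(row, x)) >= bi
           for row, bi in zip(A, b))
assert all(sum(A[i][j] * y[i] for i in range(2)) <= c[j] for j in range(3))

# Weak duality: b.y <= c.x.
dual_value = sum(bi * yi for bi, yi in zip(b, y))
primal_value = sum(cj * xj for cj, xj in zip(c, x))
assert dual_value <= primal_value
```

Here both sides equal 1, so this dual solution actually proves that x is an optimal LP solution.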

- We start with a primal-dual pair (X, Y), where X is the primal variable, which is not necessarily feasible, while Y is the dual variable, which is not necessarily optimal.
- At each step of the algorithm, we attempt to make Y "more optimal" and X "more feasible"; the algorithm stops when X becomes feasible.

Primal-dual algorithms
- Approximation algorithms based on LP rounding require solving an LP with a possibly large number of constraints, which is computationally expensive. Another approach, known as primal-dual, allows us to obtain an approximate solution more efficiently.
- The chief idea is that any dual feasible solution gives a lower bound on the optimum of a minimization primal problem.

Primal-Dual algorithm
PrimalDualAlgorithm()
1  write down an LP relaxation of the problem, and find its dual; try to find some intuitive meaning for the dual variables
2  start with vectors X = 0, Y = 0, which are dual feasible but primal infeasible
3  repeat
       (a) increase the dual values y_i in some controlled fashion until some dual constraint(s) become tight, while always maintaining the dual feasibility of Y
       (b) select some subset of the tight dual constraints, and increase the primal variables corresponding to them by an integral amount
   until the primal is feasible
4  for the analysis, prove that the output pair (X, Y) satisfies
       cost(X) ≤ ρ · value(Y)
   for as small a value of ρ as possible; keep this goal in mind when deciding how to raise the dual and primal variables

Constructing the Dual: An Example
- For weighted vertex cover, the dual of the previously defined LP is the following program DLP:
      maximize    Σ_{(u,v) ∈ E} y_uv
      subject to  Σ_{u : (u,v) ∈ E} y_uv ≤ w(v)   for each v ∈ V
                  y_uv ≥ 0                        for each (u, v) ∈ E

- Consider vertex cover. If we could bound the cost of some vertex cover C by ρ·Σ y_uv for some dual feasible y_uv, then we immediately obtain a ρ-approximation algorithm by weak duality.
- Note that the solution in which all y_uv are zero is a feasible solution of DLP, with value 0. Also note that there is no dual of an integer program; we are taking the dual of the LP relaxation of the primal IP.

PrimalDualWVC(G, w)
1  for each dual variable y_uv do y_uv ← 0
2  C ← Ø
3  while C is not a vertex cover do
4      select some edge (u, v) not covered by C
5      increase y_uv until the dual constraint of one of the endpoints becomes tight,
       i.e., Σ_{(u,w) ∈ E} y_uw = w(u) or Σ_{(v,w) ∈ E} y_vw = w(v)
6      if the constraint of v becomes tight then C ← C ∪ {v}
7      else C ← C ∪ {u}
8  return C
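The loop above can be sketched in Python as follows (a sketch under our own input conventions; when both endpoints become tight simultaneously, this version adds both, which only helps feasibility and keeps the same bound). Integer weights are assumed so that exact equality tests are safe.

```python
def primal_dual_vertex_cover(vertices, edges, weight):
    """Primal-dual 2-approximation for weighted vertex cover (sketch).

    For each uncovered edge, its dual variable y_uv is raised until the
    dual constraint of an endpoint becomes tight; every tight vertex
    joins the cover.  Integer weights keep all arithmetic exact.
    """
    y = {e: 0 for e in edges}            # one dual variable per edge
    paid = {v: 0 for v in vertices}      # sum of the duals at each vertex
    cover = set()
    for (u, v) in edges:
        if u in cover or v in cover:
            continue                      # edge already covered
        # Largest raise keeping both constraints sum_{e at v} y_e <= w(v).
        slack = min(weight[u] - paid[u], weight[v] - paid[v])
        y[(u, v)] += slack
        paid[u] += slack
        paid[v] += slack
        if paid[u] == weight[u]:
            cover.add(u)
        if paid[v] == weight[v]:
            cover.add(v)
    return cover
```

On a star with an expensive center (weight 10) and three unit-weight leaves, each edge's dual rises until the cheap leaf becomes tight, so the algorithm returns the three leaves (total weight 3, which is optimal) rather than the center.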

- Theorem: given a graph G with non-negative weights, PrimalDualWVC is a 2-approximation algorithm.
Proof. Let C be the solution obtained by PrimalDualWVC(G, w). By construction, C is a feasible solution: the loop only ends when C is a vertex cover. We observe that for every v ∈ C the dual constraint of v is tight:
    w(v) = Σ_{u : (u,v) ∈ E} y_uv.
Therefore
    w(C) = Σ_{v ∈ C} w(v) = Σ_{v ∈ C} Σ_{u : (u,v) ∈ E} y_uv.

Since C is a subset of V,
    Σ_{v ∈ C} Σ_{u : (u,v) ∈ E} y_uv ≤ Σ_{v ∈ V} Σ_{u : (u,v) ∈ E} y_uv.
Since every edge of E is counted twice in the right-hand double sum (once for each endpoint),
    Σ_{v ∈ V} Σ_{u : (u,v) ∈ E} y_uv = 2 Σ_{(u,v) ∈ E} y_uv.
So
    w(C) ≤ 2 Σ_{(u,v) ∈ E} y_uv ≤ 2·OPT_LP ≤ 2·OPT,
where the second inequality is weak duality. The theorem follows.

Homework
Experiments:
1. Implement ApproxMinWVC
2. Implement ApproxMinWSC