
1 Approximation Algorithms based on linear programming

2 Integer Programming (IP) • Integer Programming is simply Linear Programming with one added condition: all variables must be integers. • Many problems can be stated as integer programs; the Set Cover problem is one example.

3 Weighted Set Cover as IP • A set cover of a set X is any collection of subsets of X whose union is X. • The set cover problem: given a weight w_i for each subset S_i, find the set cover that minimizes the total weight.

4 • For each subset S_i we introduce a 0-1 integer variable y_i that is 1 if the subset S_i is part of the cover, and 0 if not. • Then we can state weighted set cover as the IP:

minimize Σ_i w_i y_i
subject to Σ_{i : x ∈ S_i} y_i ≥ 1 for each x ∈ X
y_i ∈ {0, 1} for each i

5 Relaxed IP to LP for WSC

minimize Σ_i w_i y_i
subject to Σ_{i : x ∈ S_i} y_i ≥ 1 for each x ∈ X
0 ≤ y_i ≤ 1 for each i

6 Weighted vertex cover as IP • Given an undirected graph G = (V, E) in which each vertex v ∈ V has an associated positive weight w(v). For any vertex cover V' ⊆ V, we define the weight of the vertex cover w(V') = Σ_{v ∈ V'} w(v). The goal is to find a vertex cover of minimum weight.

7 • Suppose that we associate a variable x(v) with each vertex v ∈ V, and require that x(v) ∈ {0, 1} for each v ∈ V. This view gives rise to the following 0-1 integer program for finding a minimum weighted vertex cover (WVC):

minimize Σ_{v ∈ V} w(v) x(v)
subject to x(u) + x(v) ≥ 1 for any edge (u, v) ∈ E
x(v) ∈ {0, 1} for each v ∈ V

8 • Linear programming relaxation of the 0-1 program:

minimize Σ_{v ∈ V} w(v) x(v)
subject to x(u) + x(v) ≥ 1 for any edge (u, v) ∈ E
0 ≤ x(v) ≤ 1 for each v ∈ V

9 • Any feasible solution to the IP is also a feasible solution to the LP. Therefore, the value of an optimal solution to the LP is a lower bound on the value of an optimal solution to the IP.

10 Using the Relaxed LP as an Approximation • Let OPT_LP be the optimum cost of the LP and OPT the optimum cost of the IP, so OPT_LP ≤ OPT. • If we can find a solution of cost at most R · OPT_LP, then its cost is at most R · OPT. • If that solution is also integral: we have an R-approximation of the IP! • If not: maybe we can convert it to an integral solution, without raising the cost too much.


12 Basic Steps
1. Write the IP describing the problem.
2. Relax the IP to get an LP.
3. Find an optimal solution to the LP.
4. Transform the fractional solution into an integral solution by a suitable strategy.
5. Reinterpret the integral values as a solution to the problem.

13 • Note that all steps except step 4 are easy, i.e., they can be done in polynomial time. • The tricky part is step 4. We will see three strategies for it: rounding, randomized rounding, and the primal-dual schema. • The methods are illustrated on set cover and vertex cover in particular, but the same framework applies to many other problems stated as IPs.

14 Approximation by LP for WVC
ApproxMinWVC(G, w)
1 C ← Ø
2 compute x̄, an optimal solution to the linear program
3 for each v ∈ V do
4     if x̄(v) ≥ 1/2 then
5         C ← C ∪ {v}
6 return C
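The pseudocode above can be sketched in Python. This is a minimal illustration, not part of the slides: the graph encoding, the function name, and the use of `scipy.optimize.linprog` as the LP solver are all my own assumptions.

```python
# A minimal sketch of ApproxMinWVC, assuming scipy is available as the LP solver.
from scipy.optimize import linprog

def approx_min_wvc(vertices, edges, w):
    """LP-rounding 2-approximation for minimum weighted vertex cover."""
    idx = {v: i for i, v in enumerate(vertices)}
    c = [w[v] for v in vertices]          # objective: minimize sum_v w(v) * x(v)
    A_ub, b_ub = [], []
    for u, v in edges:                    # x(u) + x(v) >= 1  ->  -x(u) - x(v) <= -1
        row = [0.0] * len(vertices)
        row[idx[u]] = row[idx[v]] = -1.0
        A_ub.append(row)
        b_ub.append(-1.0)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, 1))
    # Rounding step: take every vertex whose fractional value is at least 1/2
    # (with a small tolerance against floating-point error).
    return {v for v in vertices if res.x[idx[v]] >= 0.5 - 1e-9}
```

On a unit-weight triangle the LP optimum is x̄ = (1/2, 1/2, 1/2), so the rounded cover contains all three vertices: weight 3, within the 2 · OPT = 4 guarantee.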

15 • Theorem. Algorithm ApproxMinWVC is a polynomial-time 2-approximation algorithm for the minimum-WVC problem. Proof. Because there is a polynomial-time algorithm to solve the linear program in line 2, and because the for loop of lines 3-5 runs in polynomial time, ApproxMinWVC is a polynomial-time algorithm. Now we show that ApproxMinWVC is a 2-approximation algorithm.

16 Let OPT_IP(I) be the value of an optimal solution to the minimum weighted vertex cover problem, and let OPT_LP(I) be the value of an optimal solution to the linear program. Since an optimal vertex cover is a feasible solution to the linear program, OPT_LP(I) must be a lower bound on OPT_IP(I), that is, OPT_LP(I) ≤ OPT_IP(I).

17 Next, we claim that by rounding the fractional values of the variables x̄(v), we produce a set C that is a vertex cover and satisfies w(C) ≤ 2 OPT_LP(I). To see that C is a vertex cover, consider any edge (u, v) ∈ E. By the constraint x̄(u) + x̄(v) ≥ 1, at least one of x̄(u) and x̄(v) is at least 1/2. Therefore, at least one of u and v is included in the vertex cover, and so every edge is covered.

18 Now we consider the weight of the cover. Since x̄(v) ≥ 1/2 for every v ∈ C, we have

w(C) = Σ_{v ∈ C} w(v) ≤ Σ_{v ∈ C} w(v) · 2 x̄(v) ≤ 2 Σ_{v ∈ V} w(v) x̄(v) = 2 OPT_LP(I).

19 So, writing A(I) = w(C), we have (1/2) A(I) ≤ OPT_LP(I) ≤ OPT_IP(I). Namely, R = 2.

20 Approximation by LP for WSC • Definition of f:

f = max_{x ∈ X} |{ j : x ∈ S_j }|

• In other words, f is the frequency of the most frequent element, i.e., the largest number of subsets any single element appears in.

21 Rounding
ApproxMinWSC(X, F, w)
1 C ← Ø
2 compute ȳ, an optimal solution to the linear program
3 for each S_j ∈ F do
4     if ȳ_j ≥ 1/f then
5         C ← C ∪ {S_j}
6 return C
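As with vertex cover, the algorithm can be sketched in Python. Again the data layout, the function name, and the use of `scipy.optimize.linprog` are my own illustrative assumptions:

```python
# A minimal sketch of ApproxMinWSC: solve the LP relaxation, then keep every
# subset whose fractional value is at least 1/f. Assumes scipy is available.
from scipy.optimize import linprog

def approx_min_wsc(X, subsets, w):
    """LP-rounding f-approximation for minimum weighted set cover."""
    # f = largest number of subsets any single element appears in
    f = max(sum(1 for S in subsets if x in S) for x in X)
    # Coverage constraints: sum_{j : x in S_j} y_j >= 1 for each element x
    A_ub = [[-1.0 if x in S else 0.0 for S in subsets] for x in X]
    b_ub = [-1.0] * len(X)
    res = linprog(w, A_ub=A_ub, b_ub=b_ub, bounds=(0, 1))
    # Rounding step (with a small tolerance against floating-point error)
    return [j for j in range(len(subsets)) if res.x[j] >= 1.0 / f - 1e-9]
```

For X = {1, 2, 3} with subsets {1, 2}, {2, 3}, {1, 3} and unit weights, f = 2, the LP optimum is ȳ = (1/2, 1/2, 1/2), and rounding at 1/f = 1/2 selects all three subsets.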

22 • Claim: the rounding method produces a set cover. Proof. Assume by contradiction that there is an element x_i left uncovered. Then, by the rounding rule, ȳ_j < 1/f for every subset S_j containing x_i. Since x_i appears in at most f subsets, Σ_{j : x_i ∈ S_j} ȳ_j < f · (1/f) = 1. But this violates the LP constraint Σ_{j : x_i ∈ S_j} ȳ_j ≥ 1.

23 Theorem. Rounding gives an f-approximation algorithm. Proof. The algorithm runs in polynomial time. Furthermore,

w(C) = Σ_{S_j ∈ C} w_j ≤ Σ_{S_j ∈ C} w_j · f ȳ_j ≤ f Σ_j w_j ȳ_j = f · OPT_LP ≤ f · OPT.

• The first inequality holds since ȳ_j ≥ 1/f for each S_j ∈ C.

24 Randomized rounding • Maximum Satisfiability: solve a linear program to determine the coin biases. • Satisfiability vs. MAX-SAT: Satisfiability is a decision problem, NP-complete; MAX-SAT is an optimization problem, NP-hard. • Let P_j = indices of variables that occur un-negated in clause C_j, and N_j = indices of variables that occur negated in clause C_j.

25 IP formulation of MAX-SAT (w_j is the weight of clause C_j, and z_j indicates whether C_j is satisfied):

maximize Σ_j w_j z_j
subject to Σ_{i ∈ P_j} y_i + Σ_{i ∈ N_j} (1 − y_i) ≥ z_j for each clause C_j
y_i, z_j ∈ {0, 1}

26 LP (linear programming relaxation): relax y_i, z_j ∈ {0, 1} to y_i, z_j ∈ [0, 1]. Let y*_i, z*_j denote an optimal solution obtained by solving the LP. Rounding step: independently set each x_i = 1 with probability y*_i.

27 RandomRoundingLP(I)
1 convert MAX-SAT into an IP
2 relax the IP into an LP
3 compute an optimal solution (y*, z*) to the LP
4 for i ← 1 to m do
5     set x_i ← 1 with probability y*_i
6     set x_i ← 0 with probability 1 − y*_i
7 return x
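The rounding step itself (lines 4-6) is easy to code. Below is a sketch that assumes the fractional values y* have already been obtained from an LP solver; the function names are mine:

```python
import random

def randomized_rounding(y_star, seed=None):
    """Set x_i = 1 with probability y*_i, independently for each variable."""
    rng = random.Random(seed)
    return [1 if rng.random() < p else 0 for p in y_star]

def clause_satisfied(x, P_j, N_j):
    """C_j is satisfied if some variable in P_j is 1 or some variable in N_j is 0."""
    return any(x[i] == 1 for i in P_j) or any(x[i] == 0 for i in N_j)
```

Since `random()` returns values in [0, 1), a variable with y*_i = 1 is always set to 1 and one with y*_i = 0 is always set to 0.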

28 • Theorem. The algorithm is an e/(e−1) ≈ 1.582-approximation algorithm for MAX-SAT. Fact 1. For any nonnegative a_1, …, a_k, the geometric mean is not greater than the arithmetic mean, i.e.,

(a_1 a_2 ⋯ a_k)^{1/k} ≤ (a_1 + a_2 + ⋯ + a_k) / k.

Fact 2. If f(x) is a concave function on [a, b], then f lies above the chord through its endpoints, i.e., for any x ∈ [a, b],

f(x) ≥ f(a) + (x − a) · (f(b) − f(a)) / (b − a).

29 Proof. Consider an arbitrary clause C_j with k literals. The clause is unsatisfied only if every un-negated variable is set to 0 and every negated variable is set to 1, so

Pr[C_j not satisfied] = Π_{i ∈ P_j} (1 − y*_i) · Π_{i ∈ N_j} y*_i
≤ [ ( Σ_{i ∈ P_j} (1 − y*_i) + Σ_{i ∈ N_j} y*_i ) / k ]^k        (Fact 1)
= [ 1 − ( Σ_{i ∈ P_j} y*_i + Σ_{i ∈ N_j} (1 − y*_i) ) / k ]^k
≤ ( 1 − z*_j / k )^k.        (LP constraint)

30 Therefore Pr[C_j satisfied] ≥ 1 − (1 − z*_j / k)^k. Since g(z) = 1 − (1 − z/k)^k is a concave function on [0, 1], Fact 2 with g(0) = 0 and g(1) = 1 − (1 − 1/k)^k gives

Pr[C_j satisfied] ≥ (1 − (1 − 1/k)^k) · z*_j.

31 Since (1 − 1/k)^k ≤ 1/e for every k ≥ 1,

Pr[C_j satisfied] ≥ (1 − 1/e) · z*_j.

32 Let A(I) = weight of the clauses that are satisfied. By linearity of expectation,

E[A(I)] = Σ_j w_j · Pr[C_j satisfied] ≥ (1 − 1/e) Σ_j w_j z*_j = (1 − 1/e) · OPT_LP ≥ (1 − 1/e) · OPT,

so the approximation ratio is 1/(1 − 1/e) = e/(e − 1).

33 • Corollary. If each clause has length at most l, then E[A(I)] ≥ (1 − (1 − 1/l)^l) · OPT.

34 Maximum Satisfiability: Best of Two • Observation: the two approximation algorithms are complementary. – Johnson's algorithm works best when clauses are long. – The LP rounding algorithm works best when clauses are short.

k    Johnson(I): 1 − 2^{−k}    RandomRoundingLP(I): 1 − (1 − 1/k)^k
1    0.5                       1.0
2    0.75                      0.75
3    0.875                     0.704
4    0.938                     0.684
5    0.969                     0.672

35 • How can we exploit this? – Run both algorithms and output the better of the two. – Re-analysis gives a 4/3-approximation algorithm. – Better performance than either algorithm individually!

36 Max-k-SATBestoftwo(I)
1 (x1, A1(I)) ← Johnson(I)
2 (x2, A2(I)) ← RandomRoundingLP(I)
3 if A1(I) ≥ A2(I) then
4     return x1
5 else
6     return x2

37 • Lemma. For any integer k ≥ 1, (1 − 2^{−k}) + (1 − (1 − 1/k)^k) ≥ 3/2. Proof. For k = 1 the sum is 0.5 + 1 = 3/2, and for k = 2 it is 0.75 + 0.75 = 3/2.

38 For k ≥ 3, we have 1 − 2^{−k} ≥ 7/8 and 1 − (1 − 1/k)^k ≥ 1 − 1/e > 0.632, so the sum is strictly greater than 3/2.
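The lemma is easy to verify numerically; a small sketch (the function names are mine):

```python
# Check (1 - 2^-k) + (1 - (1 - 1/k)^k) >= 3/2 for a range of clause lengths k.
def johnson_bound(k):
    return 1 - 2.0 ** (-k)         # per-clause guarantee of Johnson's algorithm

def lp_rounding_bound(k):
    return 1 - (1 - 1.0 / k) ** k  # per-clause guarantee of LP rounding

for k in range(1, 1000):
    assert johnson_bound(k) + lp_rounding_bound(k) >= 1.5 - 1e-12
```

Equality holds exactly at k = 1 and k = 2, the two rows of the table on slide 34 where the columns sum to 3/2.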

39 • Theorem. The Max-k-SATBestoftwo(I) algorithm is a 4/3-approximation algorithm for MAX-SAT. Proof. Let A1(I) and A2(I) denote the weights achieved by Johnson(I) and RandomRoundingLP(I), and let k_j be the length of clause C_j. Then

E[max(A1(I), A2(I))] ≥ (E[A1(I)] + E[A2(I)]) / 2
≥ Σ_j w_j z*_j · [ (1 − 2^{−k_j}) + (1 − (1 − 1/k_j)^{k_j}) ] / 2
≥ (3/4) Σ_j w_j z*_j ≥ (3/4) · OPT,

where the second inequality uses z*_j ≤ 1 for the Johnson term and the third uses the lemma. Hence the approximation ratio is 4/3.


41 Duality • Duality is a very important property. In an optimization problem, the identification of a dual problem is almost always coupled with the discovery of a polynomial-time algorithm. Duality is also powerful in its ability to provide a proof that a solution is indeed optimal.

42 • Given a linear program (LP) in which the objective is to maximize, we can formulate a dual linear program (DLP) in which the objective is to minimize and whose optimal value is identical to that of the original linear program. When referring to dual linear programs, we call the original linear program the primal. • Duality is an involution: if DLP is the dual of LP, then LP is the dual of DLP.

43 • Given a primal linear program (LP) in standard form,

maximize Σ_j c_j x_j subject to Σ_j a_ij x_j ≤ b_i for each i, x_j ≥ 0,

we define the dual linear program (DLP) as

minimize Σ_i b_i y_i subject to Σ_i a_ij y_i ≥ c_j for each j, y_i ≥ 0.

44 Primal-dual (for a minimization primal, as in vertex cover)
Primal: minimize Σ_j c_j x_j
subject to Σ_j a_ij x_j ≥ b_i for each i
x_j ≥ 0

Dual: maximize Σ_i b_i y_i
subject to Σ_i a_ij y_i ≤ c_j for each j
y_i ≥ 0

45 • Now suppose we want to develop a lower bound on the optimal value of this LP. One way to do this is to find constraints that “look like” Σ_j c_j x_j ≥ Z, for some Z, using the constraints in the LP. To do this, note that any non-negative linear combination of constraints from the LP is also a valid constraint. Therefore, if we have non-negative multipliers y_i on the constraints, we get a new constraint

Σ_i y_i ( Σ_j a_ij x_j ) ≥ Σ_i y_i b_i,

which is satisfied by all feasible solutions to the primal LP.

46 • That is, if Σ_j a_ij x_j ≥ b_i for all i, then Σ_i y_i Σ_j a_ij x_j ≥ Σ_i y_i b_i. • Note that we require the y_i's to be non-negative, because multiplying an inequality by a negative number switches the direction of the inequality. If in addition we ensure that this combined constraint is dominated by the objective, we obtain a lower bound of Σ_i b_i y_i on the optimal value of the primal LP.

47 Switching the order of summation, we get

Σ_i y_i Σ_j a_ij x_j = Σ_j ( Σ_i a_ij y_i ) x_j,

and we can ensure this sum is at most Σ_j c_j x_j by requiring Σ_i a_ij y_i ≤ c_j for every j. Putting it all together: if the y_i are non-negative and satisfy these constraints, then

Σ_i b_i y_i ≤ Σ_j c_j x_j for every primal feasible x.
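This weak-duality chain can be checked on a tiny concrete instance; the numbers below are purely illustrative:

```python
# Weak duality on a small minimization LP: min c.x s.t. A x >= b, x >= 0.
# Any dual feasible y (non-negative, with A^T y <= c componentwise) certifies
# the lower bound b.y on the cost of every primal feasible x.
A = [[1, 1],    # x1 +   x2 >= 1
     [2, 1]]    # 2 x1 + x2 >= 2
b = [1, 2]
c = [3, 2]

x = [1.0, 0.5]  # a primal feasible point, cost c.x = 4.0
y = [1.0, 1.0]  # a dual feasible point,  value b.y = 3.0

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

assert all(dot(row, x) >= bi for row, bi in zip(A, b))                       # A x >= b
assert all(sum(A[i][j] * y[i] for i in range(2)) <= c[j] for j in range(2))  # A^T y <= c
assert dot(b, y) <= dot(c, x)   # weak duality: 3.0 <= 4.0
```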

48 • We start with a primal-dual pair (X, Y), where X is a primal solution, which is not necessarily feasible, while Y is a dual solution, which is not necessarily optimal. • At each step of the algorithm, we attempt to make Y “more optimal” and X “more feasible”; the algorithm stops when X becomes feasible.

49 Primal-dual algorithms • Approximation algorithms based on LP require solving an LP with a possibly large number of constraints, which is computationally expensive. Another approach, known as primal-dual, allows us to obtain an approximate solution more efficiently. • The chief idea is that any dual feasible solution gives a lower bound on the optimal value of a minimization primal problem.

50 Primal-Dual algorithm
PrimalDualAlgorithm()
1 write down an LP relaxation of the problem, and find its dual; try to find some intuitive meaning for the dual variables
2 start with vectors X = 0, Y = 0, which will be dual feasible, but primal infeasible
3 repeat
    (a) increase the dual values y_i in some controlled fashion until some dual constraint(s) go tight, while always maintaining the dual feasibility of Y
    (b) select some subset of the tight dual constraints, and increase the primal variables corresponding to them by an integral amount
  until the primal is feasible
4 for the analysis, prove that the output pair of vectors (X, Y) satisfies cost(X) ≤ ρ · value(Y) for as small a value of ρ as possible; keep this goal in mind when deciding how to raise the dual and primal variables

51 Constructing the Dual: An Example • For weighted vertex cover, the dual of the previously defined LP is the following program DLP:

maximize Σ_{(u,v) ∈ E} y_uv
subject to Σ_{u : (u,v) ∈ E} y_uv ≤ w(v) for each v ∈ V
y_uv ≥ 0 for each (u, v) ∈ E

52 • Consider vertex cover. If we could bound the cost of some vertex cover C by ρ Σ y_uv for some dual feasible y_uv, then we would immediately obtain a ρ-approximation algorithm by weak duality. • Note that the solution in which all y_uv are zero is a feasible solution of DLP, with value 0. Also note that there is no dual for an integer program; we are taking the dual of the LP relaxation of the primal IP.

53 PrimalDualWVC(G)
1 for each dual variable y_uv do y_uv ← 0
2 C ← Ø
3 while C is not a vertex cover do
4     select some edge (u, v) not covered by C
5     increase y_uv until the dual constraint of one endpoint becomes tight, i.e., Σ_{e incident to v} y_e = w(v) or Σ_{e incident to u} y_e = w(u)
6     if v's constraint is tight then C ← C ∪ {v}
7     else C ← C ∪ {u}
8 return C
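Unlike the rounding algorithms, PrimalDualWVC needs no LP solver at all. A minimal Python sketch (the graph encoding and names are my assumptions, and weights are assumed exact, e.g. integers): for each uncovered edge it raises y_uv until an endpoint's remaining slack w(v) − Σ_{e incident to v} y_e hits zero, then adds every tight endpoint to the cover.

```python
def primal_dual_wvc(vertices, edges, w):
    """Primal-dual 2-approximation for minimum weighted vertex cover."""
    slack = dict(w)   # remaining slack of each dual constraint: w(v) - sum of incident y_e
    y = {}            # dual variable y_uv for each edge
    cover = set()
    for (u, v) in edges:
        if u in cover or v in cover:
            continue                          # edge already covered
        y[(u, v)] = min(slack[u], slack[v])   # raise y_uv until a constraint goes tight
        slack[u] -= y[(u, v)]
        slack[v] -= y[(u, v)]
        for endpoint in (u, v):               # add every endpoint that became tight
            if slack[endpoint] == 0:
                cover.add(endpoint)
    return cover
```

On a path a–b–c with unit weights, the first edge (a, b) makes both endpoints tight, so the cover is {a, b} of weight 2, within the guaranteed factor 2 of the optimum {b}.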

54 • Theorem. Given a graph G with non-negative weights, PrimalDualWVC(G) is a 2-approximation algorithm. Proof. Let C be the solution obtained by PrimalDualWVC(G). By construction C is a feasible solution. We observe that for every v ∈ C the dual constraint is tight, i.e.,

w(v) = Σ_{u : (u,v) ∈ E} y_uv.

Therefore

w(C) = Σ_{v ∈ C} w(v) = Σ_{v ∈ C} Σ_{u : (u,v) ∈ E} y_uv.

55 Since C is a subset of V,

Σ_{v ∈ C} Σ_{u : (u,v) ∈ E} y_uv ≤ Σ_{v ∈ V} Σ_{u : (u,v) ∈ E} y_uv.

Since every edge of E is counted twice in the double sum,

Σ_{v ∈ V} Σ_{u : (u,v) ∈ E} y_uv = 2 Σ_{(u,v) ∈ E} y_uv.

So w(C) ≤ 2 Σ_{(u,v) ∈ E} y_uv ≤ 2 · OPT_LP ≤ 2 · OPT, by weak duality. The theorem follows.

56 Homework Experiments:
1. Implement ApproxMinWVC
2. Implement ApproxMinWSC

