1 Linear Program Set Cover

2 Given a universe U of n elements, a collection S = {S_1, …, S_k} of subsets of U, and a cost function c: S → Q⁺, find a minimum-cost subcollection of S that covers all elements of U.

3 Frequency Define the frequency f_i of an element e_i to be the number of sets it appears in. Let f = max_{i=1,…,n} f_i.

4 IP (Set Cover)
minimize   Σ_{S ∈ S} c(S)·x_S
subject to Σ_{S : e ∈ S} x_S ≥ 1 for each e ∈ U
           x_S ∈ {0, 1} for each S ∈ S

5 LP-relaxation (Set Cover)
minimize   Σ_{S ∈ S} c(S)·x_S
subject to Σ_{S : e ∈ S} x_S ≥ 1 for each e ∈ U
           x_S ≥ 0 for each S ∈ S

6 Algorithm LP-rounding 1. Find an optimal solution to the LP-relaxation. 2. Pick all sets S for which x_S ≥ 1/f in this solution.
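As a concrete illustration (not from the slides), the rounding step can be run on the triangle vertex-cover instance viewed as set cover: elements are edges, sets are vertices, and f = 2. The fractional optimum, x_v = 1/2 for every vertex, is hard-coded rather than computed with an LP solver; all names are illustrative.

```python
# LP-rounding sketch on the triangle graph as a set cover instance:
# elements = edges, sets = vertices, every edge lies in exactly f = 2 sets.
sets = {                       # vertex -> edges it covers
    "a": {"ab", "ac"},
    "b": {"ab", "bc"},
    "c": {"ac", "bc"},
}
cost = {"a": 1, "b": 1, "c": 1}
universe = {"ab", "ac", "bc"}
f = 2

# Optimal fractional cover of the triangle (cost 3/2), hard-coded.
x = {"a": 0.5, "b": 0.5, "c": 0.5}

# Step 2 of Algorithm LP-rounding: pick every set with x_S >= 1/f.
cover = [S for S in sets if x[S] >= 1 / f]

lp_cost = sum(cost[S] * x[S] for S in x)                    # 1.5
assert set().union(*(sets[S] for S in cover)) == universe   # cover is feasible
assert sum(cost[S] for S in cover) <= f * lp_cost           # factor-f bound
```

Here all three vertices are picked (cost 3), which is within the factor-f bound of f · 3/2 = 3.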

7 f-approximation Theorem 10.1 Algorithm LP-rounding achieves an approximation factor of f for the set cover problem. Proof. Consider an arbitrary element e ∈ U. Since e is in at most f sets and the fractional cover satisfies Σ_{S : e ∈ S} x_S ≥ 1, at least one set S containing e has x_S ≥ 1/f, so e is covered. Moreover, picking a set S increases x_S by a factor of at most f (from at least 1/f to 1), so the cost of the solution is at most f times the cost of the fractional cover.

8 2-approximation Corollary 10.2 Algorithm LP-rounding achieves an approximation factor of 2 for the vertex cover problem, since every edge is incident to exactly two vertices, i.e., f = 2.

9 Tight example (hypergraph) [Figure: vertex set V = V_1 ∪ V_2 ∪ … ∪ V_k and n^k hyperedges.] Each hyperedge picks one vertex from each V_i. In the set cover instance, elements correspond to hyperedges and sets correspond to vertices.

10 Primal and Dual programs
Primal: minimize Σ_{j=1..n} c_j x_j subject to Σ_{j=1..n} a_ij x_j ≥ b_i (i = 1,…,m), x_j ≥ 0.
Dual: maximize Σ_{i=1..m} b_i y_i subject to Σ_{i=1..m} a_ij y_i ≤ c_j (j = 1,…,n), y_i ≥ 0.

11 The 1-st LP-Duality Theorem The primal program has a finite optimum iff its dual has a finite optimum. Moreover, if x* = (x_1*,…, x_n*) and y* = (y_1*,…, y_m*) are optimal solutions for the primal and dual programs, respectively, then Σ_{j=1..n} c_j x_j* = Σ_{i=1..m} b_i y_i*.

12 Weak Duality Theorem If x = (x_1,…, x_n) and y = (y_1,…, y_m) are feasible solutions for the primal and dual programs, respectively, then Σ_i b_i y_i ≤ Σ_j c_j x_j. Proof. Since y is dual feasible and the x_j are nonnegative, Σ_j c_j x_j ≥ Σ_j (Σ_i a_ij y_i) x_j. Similarly, since x is primal feasible and the y_i are nonnegative, Σ_i (Σ_j a_ij x_j) y_i ≥ Σ_i b_i y_i.

13 Weak Duality Theorem (2) We obtain Σ_i b_i y_i ≤ Σ_i (Σ_j a_ij x_j) y_i = Σ_j (Σ_i a_ij y_i) x_j ≤ Σ_j c_j x_j. By the 1-st LP-Duality Theorem, x and y are both optimal iff both inequalities hold with equality. Hence we get the following result about the structure of optimal solutions.
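Weak duality can be sanity-checked numerically. The 2×2 instance and the two feasible points below are made up for illustration only:

```python
# Numerical check of weak duality (b.y <= c.x) on a small LP pair:
#   primal: min c.x  s.t. A x >= b, x >= 0
#   dual:   max b.y  s.t. A^T y <= c, y >= 0
A = [[2, 1],
     [1, 3]]
b = [4, 6]
c = [3, 5]

x = [2.0, 2.0]      # primal feasible: A x = [6, 8] >= b
y = [1.0, 1.0]      # dual feasible:  A^T y = [3, 4] <= c

assert all(sum(A[i][j] * x[j] for j in range(2)) >= b[i] for i in range(2))
assert all(sum(A[i][j] * y[i] for i in range(2)) <= c[j] for j in range(2))

primal_value = sum(c[j] * x[j] for j in range(2))   # c.x = 16
dual_value = sum(b[i] * y[i] for i in range(2))     # b.y = 10
assert dual_value <= primal_value                   # weak duality holds
```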

14 The 2-nd LP-Duality Theorem Let x and y be primal and dual feasible solutions, respectively. Then x and y are both optimal iff all of the following conditions are satisfied:
Primal complementary slackness: for each 1 ≤ j ≤ n, either x_j = 0 or Σ_i a_ij y_i = c_j.
Dual complementary slackness: for each 1 ≤ i ≤ m, either y_i = 0 or Σ_j a_ij x_j = b_i.

15 Primal-Dual Schema The primal-dual schema is the method of choice for designing approximation algorithms, since it yields combinatorial algorithms with good approximation factors and good running times. We will first present the central ideas behind this schema and then use it to design a simple factor-f algorithm for set cover, where f is the frequency of the most frequent element.

16 Complementary slackness conditions Let α, β ≥ 1.
Primal complementary slackness conditions: for each 1 ≤ j ≤ n, either x_j = 0 or c_j/α ≤ Σ_i a_ij y_i ≤ c_j.
Dual complementary slackness conditions: for each 1 ≤ i ≤ m, either y_i = 0 or b_i ≤ Σ_j a_ij x_j ≤ β·b_i.

17 Proposition 10.3 If x and y are primal and dual feasible solutions satisfying the conditions stated above, then Σ_j c_j x_j ≤ α·β·Σ_i b_i y_i.

18 Proof Σ_j c_j x_j ≤ α Σ_j (Σ_i a_ij y_i) x_j = α Σ_i (Σ_j a_ij x_j) y_i ≤ α·β·Σ_i b_i y_i. The first inequality uses the primal conditions, and the last uses the dual conditions.

19 Primal-Dual Scheme The algorithm starts with a primal infeasible solution and a dual feasible solution; usually these are the trivial solutions x = 0 and y = 0. It iteratively improves the feasibility of the primal solution and the optimality of the dual solution, ensuring that in the end a primal feasible solution is obtained and all the conditions stated above, with a suitable choice of α and β, are satisfied. The primal solution is always extended integrally, ensuring that the final solution is integral. The improvements to the primal and the dual go hand in hand: the current primal solution is used to determine the improvement to the dual, and vice versa. Finally, the cost of the dual solution is used as a lower bound on OPT.

20 LP-relaxation (Set Cover)
minimize   Σ_{S ∈ S} c(S)·x_S
subject to Σ_{S : e ∈ S} x_S ≥ 1 for each e ∈ U
           x_S ≥ 0 for each S ∈ S

21 Dual Program (Set Cover)
maximize   Σ_{e ∈ U} y_e
subject to Σ_{e ∈ S} y_e ≤ c(S) for each S ∈ S
           y_e ≥ 0 for each e ∈ U

22 α = 1, β = f
Primal complementary slackness conditions: x_S ≠ 0 ⇒ Σ_{e ∈ S} y_e = c(S).
Dual complementary slackness conditions: y_e ≠ 0 ⇒ Σ_{S : e ∈ S} x_S ≤ f.
A set S will be said to be tight if Σ_{e ∈ S} y_e = c(S). Pick only tight sets in the cover. Each element having a nonzero dual value can be covered at most f times.

23 Primal-Dual Algorithm 0) Input (U, Ω, c: Ω → Q⁺). 1) x ← 0, y ← 0. 2) Until all elements are covered, do: pick an uncovered element, say e, and raise y_e until some set goes tight; pick all tight sets in the cover and update x; declare all the elements occurring in these sets as "covered". 3) Output x.
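A minimal Python sketch of this algorithm; the instance at the bottom and all names are illustrative, not from the slides:

```python
# Primal-dual set cover: repeatedly raise the dual value y_e of an
# uncovered element until some set containing it goes tight, then pick
# all tight sets into the cover.
def primal_dual_set_cover(universe, sets, cost):
    """sets: dict name -> set of elements; cost: dict name -> nonnegative cost."""
    y = {e: 0 for e in universe}
    picked, covered = [], set()
    while covered != universe:
        e = min(universe - covered)               # any uncovered element
        # current slack c(S) - sum of y over S, for sets containing e
        slack = {S: cost[S] - sum(y[v] for v in sets[S])
                 for S in sets if e in sets[S]}
        y[e] += min(slack.values())               # raise y_e until a set goes tight
        for S in sets:                            # pick every tight set
            if S not in picked and sum(y[v] for v in sets[S]) >= cost[S]:
                picked.append(S)
                covered |= sets[S]
    return picked, y

universe = {1, 2, 3, 4}
sets = {"A": {1, 2}, "B": {2, 3}, "C": {3, 4}}
cost = {"A": 1, "B": 3, "C": 1}
cover, y = primal_dual_set_cover(universe, sets, cost)
assert set().union(*(sets[S] for S in cover)) == universe        # primal feasible
assert all(sum(y[e] for e in sets[S]) <= cost[S] for S in sets)  # dual feasible
```

On this instance the algorithm picks the tight sets A and C (total cost 2) and never pays for the expensive set B.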

24 f-factor approximation Theorem 10.4 Primal-Dual Algorithm achieves an approximation factor of f.

25 Proof The algorithm terminates when all elements are covered (feasibility). Only tight sets are picked by the algorithm, and the dual values y_e of the elements contained in them are never changed afterwards (feasibility and the primal condition). Each element having a nonzero dual value can be covered at most f times (the dual condition). By Proposition 10.3, the approximation factor is f.

26 Tight example [Figure: a tight instance with elements e_1, …, e_n and e_{n+1}; sets of cost 1 and a set of cost 1+ε; f = n.]

27 Maximum Satisfiability (MAX-SAT) Given a conjunctive normal form formula f on Boolean variables x_1,…, x_n, and nonnegative weights w_c for each clause c of f, find a truth assignment to the Boolean variables that maximizes the total weight of satisfied clauses.

28 Clauses Each clause is a disjunction of literals; each literal is either a Boolean variable or its negation. Let size(c) denote the size of clause c, i.e., the number of literals in it. We will assume that the sizes of clauses in f are arbitrary. A clause is said to be satisfied if one of its nonnegated variables is set to True or one of its negated variables is set to False.

29 Terminology Random variable W will denote the total weight of satisfied clauses. For each clause c ∈ f, random variable W_c denotes the weight contributed by clause c to W.

30 Johnson's Algorithm 0) Input (x_1,…, x_n, f, w: f → Q⁺). 1) Set each Boolean variable to True independently with probability 1/2. 2) Output the resulting truth assignment, say τ.

31 A good algorithm for large clauses For k ≥ 1, define α_k = 1 − 2^(−k). Lemma 10.5 If size(c) = k, then E[W_c] = α_k w_c. Proof. Clause c is not satisfied by τ iff all its literals are set to False; the probability of this event is 2^(−k). Corollary 10.6 E[W] ≥ (1/2)·OPT. Proof. For k ≥ 1, α_k ≥ 1/2. By linearity of expectation, E[W] = Σ_c E[W_c] ≥ (1/2) Σ_c w_c ≥ (1/2)·OPT.
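Lemma 10.5 can be checked exhaustively on a toy formula. The clause encoding below (a (literals, weight) pair, with literal +i standing for x_i and −i for its negation) is an illustrative assumption of this sketch:

```python
# Exhaustive check of Lemma 10.5: under a uniform random assignment,
# a size-k clause is satisfied with probability 1 - 2^(-k).
from itertools import product

clauses = [([1], 1.0), ([1, -2], 2.0), ([-1, 2, 3], 4.0)]  # (literals, weight)
n = 3

def satisfied(clause, a):          # a: tuple of 0/1 values for x_1..x_n
    return any(a[abs(l) - 1] == (1 if l > 0 else 0) for l in clause)

# Average total satisfied weight over all 2^n truth assignments.
total = sum(sum(w for cl, w in clauses if satisfied(cl, a))
            for a in product((0, 1), repeat=n))
expected = total / 2 ** n

# Lemma 10.5 + linearity: E[W] = sum over clauses of (1 - 2^(-k)) * w_c.
predicted = sum((1 - 2.0 ** -len(cl)) * w for cl, w in clauses)
assert abs(expected - predicted) < 1e-9
```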

32 Conditional Expectation Let a_1,…, a_i be a truth assignment to x_1,…, x_i. Lemma 10.7 E[W | x_1 = a_1,…, x_i = a_i] can be computed in polynomial time.

33 Proof Fix an assignment x_1 = a_1,…, x_i = a_i of the variables x_1,…, x_i. Let φ be the Boolean formula on variables x_{i+1},…, x_n obtained for this node via self-reducibility. The expected weight of satisfied clauses of φ under a random truth assignment to the variables x_{i+1},…, x_n can be computed in polynomial time. Adding to this the total weight of clauses of f already satisfied by the partial assignment x_1 = a_1,…, x_i = a_i gives the answer.

34 Derandomizing Theorem 10.8 We can compute, in polynomial time, an assignment x_1 = a_1,…, x_n = a_n such that W(a_1,…, a_n) ≥ E[W].

35 Proof E[W | x_1=a_1,…, x_i=a_i] = E[W | x_1=a_1,…, x_i=a_i, x_{i+1} = True]/2 + E[W | x_1=a_1,…, x_i=a_i, x_{i+1} = False]/2. It follows that either E[W | x_1=a_1,…, x_i=a_i, x_{i+1} = True] ≥ E[W | x_1=a_1,…, x_i=a_i], or E[W | x_1=a_1,…, x_i=a_i, x_{i+1} = False] ≥ E[W | x_1=a_1,…, x_i=a_i]. Take the assignment with the larger expectation. The procedure requires n iterations, and Lemma 10.7 implies that each iteration can be done in polynomial time.
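A sketch of this derandomization (the method of conditional expectations), assuming clauses encoded as (literals, weight) pairs with literal +i for x_i and −i for its negation; the helper cond_expectation plays the role of Lemma 10.7:

```python
# Derandomized Johnson algorithm: fix variables one by one, always
# keeping the branch whose conditional expectation is at least as large.
def cond_expectation(clauses, assignment):
    """E[W | fixed partial assignment], free variables uniform in {0, 1}."""
    total = 0.0
    for literals, w in clauses:
        sat, free = False, 0
        for l in literals:
            v, want = abs(l), 1 if l > 0 else 0
            if v in assignment:
                sat = sat or assignment[v] == want
            else:
                free += 1
        # already satisfied, else satisfied with probability 1 - 2^(-free)
        total += w if sat else w * (1 - 2.0 ** -free)
    return total

def derandomized_johnson(clauses, n):
    a = {}
    for i in range(1, n + 1):
        e_true = cond_expectation(clauses, {**a, i: 1})
        e_false = cond_expectation(clauses, {**a, i: 0})
        a[i] = 1 if e_true >= e_false else 0   # never decreases E[W | ...]
    return a

clauses = [([1], 1.0), ([1, -2], 2.0), ([-1, 2, 3], 4.0)]
a = derandomized_johnson(clauses, 3)
weight = cond_expectation(clauses, a)           # all variables fixed
assert weight >= cond_expectation(clauses, {})  # W(a) >= E[W], as in Theorem 10.8
```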

36 Remark The technique outlined above can, in principle, be used to derandomize more complex randomized algorithms. Suppose the algorithm does not set the Boolean variables independently of each other. Now E[W | x_1=a_1,…, x_i=a_i] = E[W | x_1=a_1,…, x_i=a_i, x_{i+1} = True]·Pr[x_{i+1} = True | x_1=a_1,…, x_i=a_i] + E[W | x_1=a_1,…, x_i=a_i, x_{i+1} = False]·Pr[x_{i+1} = False | x_1=a_1,…, x_i=a_i]. The sum of the two conditional probabilities is again 1, since the two events are exhaustive: Pr[x_{i+1} = True | x_1=a_1,…, x_i=a_i] + Pr[x_{i+1} = False | x_1=a_1,…, x_i=a_i] = 1.

37 Conclusion So the conditional expectation of the parent is still a convex combination of the conditional expectations of the two children. Thus either E[W | x_1=a_1,…, x_i=a_i, x_{i+1} = True] ≥ E[W | x_1=a_1,…, x_i=a_i], or E[W | x_1=a_1,…, x_i=a_i, x_{i+1} = False] ≥ E[W | x_1=a_1,…, x_i=a_i]. If we can determine, in polynomial time, which of the two children has the larger value, we can again derandomize the algorithm.

38 A good algorithm for small clauses We design an integer program for MAX-SAT. For each clause c ∈ f, let S_c⁺ (S_c⁻) denote the set of Boolean variables occurring nonnegated (negated) in c. The truth assignment is encoded by y: picking y_i = 1 (y_i = 0) denotes setting x_i to True (False). The constraint for clause c ensures that z_c can be set to 1 only if at least one of the literals occurring in c is set to True, i.e., only if clause c is satisfied by the picked truth assignment.

39 ILP of MAX-SAT
maximize   Σ_{c ∈ f} w_c z_c
subject to Σ_{i ∈ S_c⁺} y_i + Σ_{i ∈ S_c⁻} (1 − y_i) ≥ z_c for each c ∈ f
           y_i ∈ {0, 1}, z_c ∈ {0, 1}

40 LP-relaxation of MAX-SAT
maximize   Σ_{c ∈ f} w_c z_c
subject to Σ_{i ∈ S_c⁺} y_i + Σ_{i ∈ S_c⁻} (1 − y_i) ≥ z_c for each c ∈ f
           0 ≤ y_i ≤ 1, 0 ≤ z_c ≤ 1

41 Algorithm LP-MAX-SAT 0) Input (x_1,…, x_n, f, w: f → Q⁺). 1) Solve the LP-relaxation of MAX-SAT; let (y*, z*) denote the optimal solution. 2) Independently set each x_i to True with probability y_i*. 3) Output the resulting truth assignment, say τ.

42 Expected weight of disjunction For k ≥ 1, define β_k = 1 − (1 − 1/k)^k. Lemma 10.9 If size(c) = k, then E[W_c] ≥ β_k w_c z*(c).

43 Proof We may assume w.l.o.g. that all literals in c appear nonnegated. Further, by renaming variables, we may assume c = (x_1 ∨ … ∨ x_k). The probability that c is satisfied is 1 − ∏_{i=1..k} (1 − y_i*), which by the arithmetic–geometric mean inequality is at least 1 − (1 − (y_1* + … + y_k*)/k)^k ≥ 1 − (1 − z*(c)/k)^k.

44 Proof (continued) The function g(z) = 1 − (1 − z/k)^k is concave on [0, 1], with g(0) = 0 and g(1) = β_k = 1 − (1 − 1/k)^k. Hence g(z) ≥ β_k·z for z ∈ [0, 1], and therefore E[W_c] ≥ w_c·g(z*(c)) ≥ β_k·w_c·z*(c).

45 1−1/e Corollary 10.10 E[W] ≥ β_k·OPT if all clauses are of size at most k. Proof. Notice that β_k is a decreasing function of k. Thus, if all clauses are of size at most k, E[W] = Σ_c E[W_c] ≥ β_k Σ_c w_c z*(c) = β_k·OPT_f ≥ β_k·OPT, where OPT_f is the optimum of the LP-relaxation.

46 (1−1/e)-factor approximation Since β_k > 1 − 1/e for every k (as (1 − 1/k)^k increases to 1/e), we obtain the following result. Theorem 10.11 Algorithm LP-MAX-SAT is a (1−1/e)-factor algorithm for MAX-SAT.

47 A (3/4)-factor algorithm We combine the two algorithms as follows. Let b be the flip of a fair coin: if b = 0, run Johnson's Algorithm, and if b = 1, run Algorithm LP-MAX-SAT. Let z* be the optimal solution of the LP-relaxation on the given instance. Lemma 10.12 E[W_c] ≥ (3/4) w_c z*(c).

48 E[W_c] ≥ (3/4) w_c z*(c) Proof. Let size(c) = k.
By Lemma 10.5, E[W_c | b = 0] = α_k w_c ≥ α_k w_c z*(c), since z*(c) ≤ 1.
By Lemma 10.9, E[W_c | b = 1] ≥ β_k w_c z*(c).
Hence E[W_c] = (1/2)(E[W_c | b = 0] + E[W_c | b = 1]) ≥ (1/2) w_c z*(c)(α_k + β_k).
Now α_1 + β_1 = α_2 + β_2 = 3/2, and for k ≥ 3, α_k + β_k ≥ 7/8 + (1 − 1/e) > 3/2.
Therefore E[W_c] ≥ (3/4) w_c z*(c).

49 E[W] By linearity of expectation, E[W] = Σ_c E[W_c] ≥ (3/4) Σ_c w_c z*(c) = (3/4)·OPT_f ≥ (3/4)·OPT. Finally, consider the following deterministic algorithm.

50 Goemans-Williamson Algorithm 0) Input (x_1,…, x_n, f, w: f → Q⁺). 1) Use Johnson's Algorithm to get a truth assignment, τ_1. 2) Use Algorithm LP-MAX-SAT to get a truth assignment, τ_2. 3) Output the better of the two assignments.

51 (3/4)-approximation Theorem 10.13 The Goemans-Williamson Algorithm is a deterministic factor-3/4 approximation algorithm for MAX-SAT.

52 Exercise Consider the following instance I of the MAX-SAT problem. –Each clause has two or more literals. –If a clause has exactly two literals, it contains at least one nonnegated variable. Consider Algorithm Random(p): set each Boolean variable to True independently with probability p. Determine the value of p for which Algorithm Random(p) finds the best solution for instance I in the worst case.

