
Slide 1. Chapter 11: Approximation Algorithms. Slides by Kevin Wayne. Copyright © 2005 Pearson-Addison Wesley. All rights reserved.

Slide 2. Approximation Algorithms

Q. Suppose I need to solve an NP-hard problem. What should I do?
A. Theory says you're unlikely to find a poly-time algorithm. You must sacrifice one of three desired features:
- Solve the problem to optimality.
- Solve the problem in poly-time.
- Solve arbitrary instances of the problem.

ρ-approximation algorithm:
- Guaranteed to run in poly-time.
- Guaranteed to solve arbitrary instances of the problem.
- Guaranteed to find a solution within ratio ρ of the true optimum.

Challenge. Need to prove a solution's value is close to optimum, without even knowing what the optimum value is!

Slide 3. 11.1 Load Balancing

Slide 4. Load Balancing

Input. m identical machines; n jobs, job j has processing time t_j.
- Job j must run contiguously on one machine.
- A machine can process at most one job at a time.

Def. Let J(i) be the subset of jobs assigned to machine i. The load of machine i is L_i = Σ_{j ∈ J(i)} t_j.
Def. The makespan is the maximum load on any machine, L = max_i L_i.

Load balancing. Assign each job to a machine so as to minimize the makespan.

Slide 5. Load Balancing: List Scheduling

List-scheduling algorithm.
- Consider the n jobs in some fixed order.
- Assign job j to the machine whose load is smallest so far.

   List-Scheduling(m, n, t_1, t_2, …, t_n) {
      for i = 1 to m {
         L_i ← 0                       // load on machine i
         J(i) ← ∅                      // jobs assigned to machine i
      }
      for j = 1 to n {
         i ← argmin_k L_k              // machine i has smallest load
         J(i) ← J(i) ∪ {j}             // assign job j to machine i
         L_i ← L_i + t_j               // update load of machine i
      }
      return J(1), …, J(m)
   }

Implementation. O(n log m) using a priority queue.
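As a cross-check (not part of the original slides), here is a minimal Python sketch of list scheduling with a binary heap; the test instance is the tight example analyzed on slides 9–10.

   import heapq

   def list_scheduling(m, times):
       """Greedy list scheduling: assign each job, in the given order,
       to the machine with the currently smallest load. O(n log m)."""
       heap = [(0, i) for i in range(m)]        # (load, machine index)
       heapq.heapify(heap)
       loads = [0] * m
       assignment = [[] for _ in range(m)]
       for j, t in enumerate(times):
           load, i = heapq.heappop(heap)        # machine with smallest load
           assignment[i].append(j)
           loads[i] = load + t
           heapq.heappush(heap, (loads[i], i))
       return assignment, max(loads)

   # Tight instance: m(m-1) unit jobs followed by one job of length m.
   m = 10
   jobs = [1] * (m * (m - 1)) + [m]
   _, makespan = list_scheduling(m, jobs)
   print(makespan)   # 19 = 2m - 1, vs. optimal makespan m = 10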

Slide 6. Load Balancing: List Scheduling Analysis

Theorem. [Graham, 1966] The greedy algorithm is a 2-approximation.
- First worst-case analysis of an approximation algorithm.
- Need to compare the resulting solution with the optimal makespan L*.

Lemma 1. The optimal makespan L* ≥ max_j t_j.
Pf. Some machine must process the most time-consuming job. ▪

Lemma 2. The optimal makespan L* ≥ (1/m) Σ_j t_j.
Pf.
- The total processing time is Σ_j t_j.
- One of the m machines must do at least a 1/m fraction of the total work. ▪

Slide 7. Load Balancing: List Scheduling Analysis

Theorem. The greedy algorithm is a 2-approximation.
Pf. Consider the load L_i of the bottleneck machine i.
- Let j be the last job scheduled on machine i.
- When job j was assigned to machine i, machine i had the smallest load. Its load before the assignment was L_i − t_j, so L_i − t_j ≤ L_k for all 1 ≤ k ≤ m.

[Figure: timeline of machine i; the blue jobs scheduled before j fill it up to L_i − t_j, and job j ends at L = L_i.]

Slide 8. Load Balancing: List Scheduling Analysis

Theorem. The greedy algorithm is a 2-approximation.
Pf. Consider the load L_i of the bottleneck machine i.
- Let j be the last job scheduled on machine i.
- When job j was assigned to machine i, machine i had the smallest load. Its load before the assignment was L_i − t_j, so L_i − t_j ≤ L_k for all 1 ≤ k ≤ m.
- Sum these inequalities over all k and divide by m: m(L_i − t_j) ≤ Σ_k L_k, so L_i − t_j ≤ (1/m) Σ_k L_k = (1/m) Σ_j t_j ≤ L* (by Lemma 2).
- Now L_i = (L_i − t_j) + t_j ≤ L* + L* = 2L*, since t_j ≤ max_j t_j ≤ L* (by Lemma 1). ▪

Slide 9. Load Balancing: List Scheduling Analysis

Q. Is our analysis tight?
A. Essentially yes. Ex: m machines, m(m−1) jobs of length 1, and one job of length m.

[Figure: m = 10; list-scheduling makespan = 19; after the unit jobs, machines 2–10 sit idle while the length-m job runs.]

Slide 10. Load Balancing: List Scheduling Analysis

Q. Is our analysis tight?
A. Essentially yes. Ex: m machines, m(m−1) jobs of length 1, and one job of length m.

[Figure: m = 10; the optimal schedule runs the length-10 job on its own machine and packs ten unit jobs on each of the other nine, so the optimal makespan = 10.]

Slide 11. Load Balancing: LPT Rule

Longest processing time (LPT). Sort the n jobs in descending order of processing time, then run the list-scheduling algorithm.

   LPT-List-Scheduling(m, n, t_1, t_2, …, t_n) {
      Sort jobs so that t_1 ≥ t_2 ≥ … ≥ t_n
      for i = 1 to m {
         L_i ← 0                       // load on machine i
         J(i) ← ∅                      // jobs assigned to machine i
      }
      for j = 1 to n {
         i ← argmin_k L_k              // machine i has smallest load
         J(i) ← J(i) ∪ {j}             // assign job j to machine i
         L_i ← L_i + t_j               // update load of machine i
      }
      return J(1), …, J(m)
   }
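An illustrative Python sketch (an assumption-laden companion to the pseudocode, not from the slides): sorting first is the only change from list scheduling, and on the tight instance of slides 9–10 LPT is already optimal.

   import heapq

   def lpt_scheduling(m, times):
       """LPT rule: sort jobs in non-increasing processing time,
       then run greedy list scheduling."""
       heap = [(0, i) for i in range(m)]   # (load, machine index)
       heapq.heapify(heap)
       loads = [0] * m
       for t in sorted(times, reverse=True):
           load, i = heapq.heappop(heap)
           loads[i] = load + t
           heapq.heappush(heap, (loads[i], i))
       return max(loads)

   m = 10
   jobs = [1] * (m * (m - 1)) + [m]
   print(lpt_scheduling(m, jobs))   # 10: optimal on the earlier tight instance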

Slide 12. Load Balancing: LPT Rule

Observation. If there are at most m jobs, then list scheduling is optimal.
Pf. Each job is put on its own machine. ▪

Lemma 3. If there are more than m jobs, then L* ≥ 2 t_{m+1}.
Pf.
- Consider the first m+1 jobs t_1, …, t_{m+1}.
- Since the t_i's are in descending order, each takes at least t_{m+1} time.
- There are m+1 jobs and m machines, so by the pigeonhole principle, at least one machine gets two of them. ▪

Theorem. The LPT rule is a 3/2-approximation algorithm.
Pf. Same basic approach as for list scheduling (next slide). ▪

Slide 13. Load Balancing: LPT Rule

(By the observation, we can assume the number of jobs is > m.)
Let t_j be the last job assigned to the bottleneck machine. Observe that j ≥ m+1: if the bottleneck machine had only one job the schedule would be optimal, and otherwise all m machines are non-empty when j is assigned. So by Lemma 3:
t_j ≤ t_{m+1} ≤ (1/2) L*.
Now repeat the reasoning of the list-scheduling analysis and get:
L_i = (L_i − t_j) + t_j ≤ L* + (1/2) L* = (3/2) L*. ▪

Slide 14. Load Balancing: LPT Rule

Q. Is our 3/2 analysis tight?
A. No.

Theorem. [Graham, 1969] The LPT rule is a 4/3-approximation.
Pf. A more sophisticated analysis of the same algorithm.

Q. Is Graham's 4/3 analysis tight?
A. Essentially yes. Ex (at home): m machines, n = 2m+1 jobs: two jobs of each length m+1, m+2, …, 2m−1, and three jobs of length m (so that n = 2m+1).

Slide 15. Bin Packing

Bin Packing.
Input: I = {a_1, a_2, …, a_n}, with a_i ∈ (0, 1].
Solution: a partition B = {B_1, …, B_k} of I into k subsets (bins), each of total size at most 1.
Goal: minimize k.

Thm. 1. Bin Packing is NP-hard.
Approximation algorithms?

Slide 16. Bin Packing

First step: a lower bound on the optimum k*. Since each bin can hold total size at most 1:

Lemma 2. k* ≥ S, where S = Σ_i a_i (the "liquid" bound: even if the items could be poured like liquid, S bins' worth of volume is needed).

Slide 17. Bin Packing

Algorithm NEXT FIT:
- The first item is assigned to bin 1.
- A generic item i is assigned to the last opened bin if there is space; otherwise, open a new bin and put it there.

Thm. 3. NEXT FIT is a 2-approximation algorithm for Bin Packing.
Proof. The total size of the items in any two consecutively opened bins is larger than 1 (otherwise the second bin would not have been opened). So k(NEXT FIT) < 2S = 2 Σ_i a_i. From Lemma 2, k* ≥ S, and the thesis follows.
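An illustrative Python sketch of NEXT FIT (assumed details: unit capacity, items processed in arrival order); the test family is the near-tight instance discussed on the next slide.

   def next_fit(items, capacity=1.0):
       """Next Fit: keep a single open bin; if the next item doesn't fit,
       close the bin and open a new one. Online, O(n)."""
       bins = [[]]
       load = 0.0
       for a in items:
           if load + a <= capacity:
               bins[-1].append(a)
               load += a
           else:
               bins.append([a])      # open a new bin
               load = a
       return bins

   # Items alternate 1/2 and 1/n: Next Fit pairs them, one pair per bin.
   n = 20
   items = [1/2, 1/n] * (2 * n)
   print(len(next_fit(items)))       # 2n = 40 bins; the optimum needs only n + 2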

Slide 18. Bin Packing

Remark. The bound 2 for NEXT FIT is almost tight. Consider instances such as the following 4n items: 1/2, 1/n, 1/2, 1/n, …, 1/2, 1/n.
Homework: analyze the approximation ratio on this family.

Slide 19. Bin Packing

How can we improve on NEXT FIT? Two ideas:
- Order the items by non-increasing size.
- For every new item, try ALL open bins before opening a new one; if some bin fits, choose the first such bin.

This gives the FIRST FIT DECREASING algorithm: FFD.
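A Python sketch of FFD for illustration (the quadratic first-fit scan is the naive implementation; integer sizes against an integer capacity avoid floating-point edge cases).

   def first_fit_decreasing(items, capacity=1.0):
       """First Fit Decreasing: sort items by non-increasing size, then put
       each item into the first open bin with room (a new bin if none fits)."""
       bins, loads = [], []
       for a in sorted(items, reverse=True):
           for i, load in enumerate(loads):
               if load + a <= capacity:
                   bins[i].append(a)
                   loads[i] += a
                   break
           else:                     # no open bin fits: open a new one
               bins.append([a])
               loads.append(a)
       return bins

   items = [6, 5, 5, 4, 3, 2]        # sizes scaled to an integer capacity of 10
   print(first_fit_decreasing(items, capacity=10))   # 3 bins: [6,4] [5,5] [3,2]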

Slide 20. Bin Packing

Lemma 4. FFD is a 1.5-approximation algorithm for Bin Packing.
Proof. Assume I = {a_1, …, a_n} is ordered by non-increasing size, and partition I into:
A = {a_i | a_i > 2/3}; B = {b_i | 1/2 < b_i ≤ 2/3}; C = {c_i | 1/3 < c_i ≤ 1/2}; D = {d_i | d_i ≤ 1/3}.

Claim 1. IF there is at least one bin with only D-items, THEN at most one bin (the last one opened) has load < 2/3.
In this case the 1.5-approximation follows: if FFD opens k bins with loads S_1, …, S_k, then
S = Σ_i a_i ≥ Σ_{j=1}^{k−1} S_j ≥ (k−1) · (2/3),
and from Lemma 2, k* ≥ S, so k ≤ (3/2) k* + 1 (the next slide sharpens this to k ≤ (3/2) k*).

Slide 21. Bin Packing

Another way to see it, with the liquid bound:
- The approximate solution: k bins, each with load at least 2/3 (forgetting the last bin). Worst case: each bin has load exactly 2/3, i.e., free space 1/3.
- The liquid/optimal solution can use this free space to save bins, going from k down to k*: the free space in the optimal bins, (1/3) k*, must hold the rest of the liquid, (2/3)(k − k*). So (1/3) k* ≥ (2/3)(k − k*), i.e., k ≤ (3/2) k*.

Slide 22. Bin Packing

So we can assume that NO bin exists that has ONLY D-items.
Claim 2. In this case, FFD finds the optimal solution.
Proof. W.l.o.g. we may consider the new instance in which all D-items are discarded, since the number of bins FFD uses is the same (every D-item goes into a bin that is opened anyway). So we can analyze the new instance:
- A-items cannot be matched with any other item (= optimal).
- No bin can contain more than 2 items, since all remaining items are > 1/3 (= optimal).
- B-items are processed first and matched with C-items (= optimal).
- The remaining C-items are then matched among themselves.

Slide 23. Euclidean TSP

We consider a complete weighted graph G(V, E, w) where w : E → R+ satisfies the Δ-inequality: w(x, z) ≤ w(x, y) + w(y, z). Euclidean TSP is TSP restricted to such (e.g., Euclidean) graphs.

THM. Euclidean TSP is 2-approximable.
Proof.
Claim 1 (lower bound on the optimum). TSP(G) ≥ MST(G).
Proof of the claim: a tour with one edge removed is a spanning tree!

Slide 24. Euclidean TSP

Idea: take any MST T and transform it into a TOUR!
Take any MST and start from any node. Follow the tree according to a DEPTH-FIRST SEARCH.
- Every edge is used at most twice, so the walk costs ≤ 2 · MST (2-approximation, OK!).
- To transform the walk into a tour: whenever you would have to return to an already-visited node, jump directly to the next unvisited node; by the Δ-inequality the shortcut is no longer than the walk it replaces.
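For concreteness, a self-contained Python sketch of the MST-doubling heuristic (Prim's algorithm plus a DFS preorder walk, which performs exactly the shortcutting described above); the point set is an illustrative assumption.

   import math

   def mst_tsp_tour(dist):
       """2-approximate metric TSP: build an MST (Prim), then visit nodes in
       DFS preorder, shortcutting repeated nodes via the triangle inequality."""
       n = len(dist)
       in_tree = [False] * n
       best = [math.inf] * n         # cheapest edge connecting v to the tree
       parent = [-1] * n
       best[0] = 0
       for _ in range(n):
           u = min((v for v in range(n) if not in_tree[v]), key=lambda v: best[v])
           in_tree[u] = True
           for v in range(n):
               if not in_tree[v] and dist[u][v] < best[v]:
                   best[v], parent[v] = dist[u][v], u
       children = [[] for _ in range(n)]
       for v in range(1, n):
           children[parent[v]].append(v)
       order, stack = [], [0]        # DFS preorder = shortcut tour order
       while stack:
           u = stack.pop()
           order.append(u)
           stack.extend(reversed(children[u]))
       tour = order + [0]
       return tour, sum(dist[a][b] for a, b in zip(tour, tour[1:]))

   # Four points on a unit square; Euclidean distances satisfy the Δ-inequality.
   pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
   d = [[math.dist(p, q) for q in pts] for p in pts]
   print(mst_tsp_tour(d))            # a tour of cost ≤ 2 · MST; here cost 4.0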

Slide 25. General TSP: APX-Hardness

THM. If there is a c-approximation poly-time algorithm for Min-TSP for some constant c, then P = NP.
Proof. The GAP technique. Assume a c-approximation algorithm exists for TSP. Strong reduction from Hamiltonian Circuit to TSP: given an (unweighted) graph G(V, E), we construct the following complete weighted graph G'(V, E', w):
w(e) = 1 if e ∈ E, and w(e) = 1 + c·n otherwise.

Slide 26. TSP: APX-Hardness

Claim 1: G admits a Hamiltonian circuit iff G' admits a tour of cost n.
Claim 2: If there is no H.C., then the minimum tour has cost ≥ (n−1) + (1 + c·n) = n + c·n = n(c+1), since some non-edge of G must be used.

We can use the c-approximation algorithm to DECIDE the existence of an H.C. in G:
- If an H.C. exists, then the optimal tour has cost n, and any tour using a non-edge costs ≥ (c+1)n > c·n. So the c-approximation algorithm must find the optimal tour of cost n. Say YES for H.C.
- If no H.C. exists, then the c-approximation algorithm finds a tour of cost at least (c+1)n. Say NO for H.C.
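A small Python sketch of the reduction's construction (illustrative; the 4-cycle example is an assumption):

   def tsp_gap_instance(n, edges, c):
       """Build the weighted complete graph G' of the reduction: weight 1 for
       edges of G, weight 1 + c*n for non-edges. A tour of cost n in G'
       certifies a Hamiltonian circuit in G."""
       E = {frozenset(e) for e in edges}
       w = [[0] * n for _ in range(n)]
       for u in range(n):
           for v in range(u + 1, n):
               w[u][v] = w[v][u] = 1 if frozenset((u, v)) in E else 1 + c * n
       return w

   # A 4-cycle has a Hamiltonian circuit, so G' has a tour of cost exactly 4.
   print(tsp_gap_instance(4, [(0, 1), (1, 2), (2, 3), (3, 0)], c=2))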

Slide 27. 11.2 Center Selection

Slide 28. Center Selection Problem

Input. A set of n sites s_1, …, s_n and an integer k > 0.
Center selection problem. Select k centers C so that the maximum distance from a site to its nearest center is minimized.

[Figure: sites in the plane, k = 4 centers, covering radius r(C).]

Slide 29. Center Selection Problem

Input. A set of n sites s_1, …, s_n and an integer k > 0.
Center selection problem. Select k centers C so that the maximum distance from a site to its nearest center is minimized.

Notation.
- dist(x, y) = distance between x and y.
- dist(s_i, C) = min_{c ∈ C} dist(s_i, c) = distance from s_i to the closest center.
- r(C) = max_i dist(s_i, C) = smallest covering radius.

Goal. Find a set of centers C that minimizes r(C), subject to |C| = k.

Distance function properties.
- dist(x, x) = 0 (identity)
- dist(x, y) = dist(y, x) (symmetry)
- dist(x, y) ≤ dist(x, z) + dist(z, y) (triangle inequality)

Slide 30. Center Selection Example

Ex: each site is a point in the plane, a center can be any point in the plane, and dist(x, y) = Euclidean distance.
Remark: the search space can be infinite!

[Figure: sites and centers in the plane with covering radius r(C).]

Slide 31. Greedy Algorithm: A False Start

Greedy algorithm. Put the first center at the best possible location for a single center, then keep adding centers so as to reduce the covering radius as much as possible each time.
Remark: this can be arbitrarily bad!

[Figure: k = 2; the first greedy center lands between two clusters of sites, an arbitrarily bad start.]

Slide 32. Center Selection: Greedy Algorithm

Greedy algorithm. Repeatedly choose the next center to be the site farthest from any existing center.

   Greedy-Center-Selection(k, n, s_1, s_2, …, s_n) {
      C ← ∅
      repeat k times {
         Select a site s_i with maximum dist(s_i, C)   // site farthest from any center
         Add s_i to C
      }
      return C
   }

Observation. Upon termination, all centers in C are pairwise at least r(C) apart.
Pf. By construction of the algorithm.
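A Python sketch of the greedy rule (one assumption: the first center is an arbitrary site, which is what the first iteration of the pseudocode amounts to when C = ∅; the point set is illustrative):

   import math

   def greedy_center_selection(k, sites, dist):
       """Greedy 2-approximation for k-center: repeatedly add the site
       farthest from the centers chosen so far."""
       centers = [sites[0]]          # first center: an arbitrary site
       while len(centers) < k:
           farthest = max(sites, key=lambda s: min(dist(s, c) for c in centers))
           centers.append(farthest)
       radius = max(min(dist(s, c) for c in centers) for s in sites)
       return centers, radius

   sites = [(0, 0), (0, 1), (10, 0), (10, 1), (5, 5)]
   print(greedy_center_selection(2, sites, math.dist))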

Slide 33. Center Selection: Analysis of Greedy Algorithm

Theorem. Let C* be an optimal set of centers. Then r(C) ≤ 2 r(C*).
Pf. (by contradiction) Assume r(C*) < ½ r(C).
- For each center c_i in C, consider the ball of radius ½ r(C) around it.
- These balls are disjoint (the centers in C are ≥ r(C) apart), and each contains exactly one optimal center; let c_i* be the one paired with c_i.
- Consider any site s and its closest optimal center c_i* in C*.
- dist(s, C) ≤ dist(s, c_i) ≤ dist(s, c_i*) + dist(c_i*, c_i) ≤ r(C*) + ½ r(C) < r(C), using the Δ-inequality, dist(s, c_i*) ≤ r(C*) (c_i* is s's closest optimal center), and dist(c_i*, c_i) ≤ ½ r(C).
- So every site is within distance < r(C) of C, contradicting the definition of r(C). ▪

[Figure: sites, optimal centers C*, and disjoint balls of radius ½ r(C) around the greedy centers c_i.]

Slide 34. Center Selection

Theorem. Let C* be an optimal set of centers. Then r(C) ≤ 2 r(C*).
Theorem. The greedy algorithm is a 2-approximation for the center selection problem.
Remark. The greedy algorithm always places centers at sites, but it is still within a factor of 2 of the best solution that is allowed to place centers anywhere (e.g., points in the plane).

Question. Is there hope of a 3/2-approximation? 4/3?
Theorem. Unless P = NP, there is no ρ-approximation for the center selection problem for any ρ < 2.

Slide 35. 11.4 The Pricing Method: Vertex Cover

Slide 36. Weighted Vertex Cover

Definition. Given a graph G = (V, E), a vertex cover is a set S ⊆ V such that each edge in E has at least one endpoint in S.
Weighted vertex cover. Given a graph G with vertex weights, find a vertex cover of minimum weight.

[Figure: two vertex covers of the same graph with vertex weights 4, 9, 2, 2; one of weight 2 + 2 + 4 = 8, another of weight 11.]

Slide 37. Pricing Method

Pricing method. Each edge must be covered by some vertex. Edge e = (i, j) pays a price p_e ≥ 0 to use vertices i and j.
Fairness. The edges incident to vertex i should pay at most w_i in total: Σ_{e = (i, j)} p_e ≤ w_i for each vertex i.

Lemma. For any vertex cover S and any fair prices p_e: Σ_e p_e ≤ w(S).
Pf. Each edge e is covered by at least one node in S, so Σ_e p_e ≤ Σ_{i ∈ S} Σ_{e = (i, j)} p_e; summing the fairness inequalities over the nodes in S bounds this by Σ_{i ∈ S} w_i = w(S). ▪

Slide 38. Pricing Method

Pricing method. Set prices and find a vertex cover simultaneously. (A vertex i is tight when its incident edges pay exactly w_i.)

   Weighted-Vertex-Cover-Approx(G, w) {
      foreach e in E
         p_e ← 0
      while (∃ edge i-j such that neither i nor j is tight) {
         select such an edge e
         increase p_e as much as possible, until i or j becomes tight
      }
      S ← set of all tight nodes
      return S
   }
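A Python sketch of the pricing method (illustrative weights; a single pass over the edges suffices here, since after an edge is processed one of its endpoints is tight and stays tight):

   def pricing_vertex_cover(weights, edges):
       """Pricing method: raise each edge's price until an endpoint becomes
       tight (its incident prices sum to its weight); return all tight nodes."""
       paid = {v: 0 for v in weights}   # total price charged to each vertex
       def tight(v):
           return paid[v] >= weights[v]
       for (i, j) in edges:             # any order of uncovered edges works
           if not tight(i) and not tight(j):
               # raise p_e as far as fairness allows: until i or j is tight
               p = min(weights[i] - paid[i], weights[j] - paid[j])
               paid[i] += p
               paid[j] += p
       return {v for v in weights if tight(v)}

   w = {'a': 4, 'b': 3, 'c': 3, 'd': 5}
   print(pricing_vertex_cover(w, [('a', 'b'), ('b', 'c'), ('c', 'd'), ('a', 'd')]))
   # {'a', 'b', 'c'}: a vertex cover of weight 10 <= 2 * optimum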

Slide 39. Pricing Method

[Figure 11.8: a sample execution of the pricing method, showing the vertex weights and the growing price of each edge, e.g., the edge a-b.]

Slide 40. Pricing Method: Analysis

Theorem. The pricing method is a 2-approximation.
Pf.
- The algorithm terminates, since at least one new node becomes tight after each iteration of the while loop.
- Let S = the set of all tight nodes upon termination. S is a vertex cover: if some edge i-j were uncovered, then neither i nor j would be tight, and the while loop would not have terminated.
- Let S* be an optimal vertex cover. We show w(S) ≤ 2 w(S*):
w(S) = Σ_{i ∈ S} w_i = Σ_{i ∈ S} Σ_{e = (i, j)} p_e ≤ Σ_{i ∈ V} Σ_{e = (i, j)} p_e = 2 Σ_e p_e ≤ 2 w(S*).
(All nodes in S are tight; S ⊆ V and prices ≥ 0; each edge is counted twice; fairness lemma.) ▪

Slide 41. 11.6 LP Rounding: Vertex Cover

Slide 42. Weighted Vertex Cover

Weighted vertex cover. Given an undirected graph G = (V, E) with vertex weights w_i ≥ 0, find a minimum-weight subset of nodes S such that every edge is incident to at least one vertex in S.

[Figure: an example graph on vertices A–J with vertex weights; a highlighted vertex cover of total weight 55.]

Slide 43. Weighted Vertex Cover: IP Formulation

Weighted vertex cover. Given an undirected graph G = (V, E) with vertex weights w_i ≥ 0, find a minimum-weight subset of nodes S such that every edge is incident to at least one vertex in S.

Integer programming formulation.
- Model the inclusion of each vertex i using a 0/1 variable x_i. Vertex covers are in 1-1 correspondence with 0/1 assignments: S = {i ∈ V : x_i = 1}.
- Objective function: minimize Σ_i w_i x_i.
- Must take either i or j for each edge: x_i + x_j ≥ 1.

Slide 44. Weighted Vertex Cover: IP Formulation

Weighted vertex cover. Integer programming formulation:

   (ILP)  min Σ_{i ∈ V} w_i x_i
          s.t. x_i + x_j ≥ 1   for each (i, j) ∈ E
               x_i ∈ {0, 1}    for each i ∈ V

Observation. If x* is an optimal solution to (ILP), then S = {i ∈ V : x*_i = 1} is a minimum-weight vertex cover.

Slide 45. Integer Programming

INTEGER-PROGRAMMING. Given integers a_ij and b_i, find integers x_j that satisfy:

   Σ_j a_ij x_j ≥ b_i      for 1 ≤ i ≤ m
   x_j ≥ 0, x_j integral   for 1 ≤ j ≤ n

Observation. The vertex cover formulation proves that integer programming is an NP-hard search problem, even if all coefficients are 0/1 and there are at most two variables per inequality.

Slide 46. Linear Programming

Linear programming. Max/min a linear objective function subject to linear inequalities.
- Input: integers c_j, b_i, a_ij.
- Output: real numbers x_j.

Linear. No x², xy, arccos(x), x(1−x), etc.

Simplex algorithm. [Dantzig 1947] Can solve LP in practice.
Ellipsoid algorithm. [Khachiyan 1979] Can solve LP in poly-time.

Slide 47. LP Feasible Region

LP geometry in 2D.

[Figure: the feasible region bounded by the lines x_1 + 2x_2 = 6, 2x_1 + x_2 = 6, x_1 = 0, and x_2 = 0.]

Slide 48. Weighted Vertex Cover: LP Relaxation

Weighted vertex cover. Linear programming formulation:

   (LP)  min Σ_{i ∈ V} w_i x_i
         s.t. x_i + x_j ≥ 1   for each (i, j) ∈ E
              x_i ≥ 0         for each i ∈ V

Observation. The optimal value of (LP) is ≤ the optimal value of (ILP).
Pf. The LP has fewer constraints.

Note. The LP is not equivalent to vertex cover.
Q. How can solving the LP help us find a small vertex cover?
A. Solve the LP and round the fractional values.

[Figure: a triangle with unit weights; the LP optimum sets every x_i = ½.]

Slide 49. Weighted Vertex Cover

Theorem. If x* is an optimal solution to (LP), then S = {i ∈ V : x*_i ≥ ½} is a vertex cover whose weight is at most twice the minimum possible weight.

Pf. [S is a vertex cover]
- Consider an edge (i, j) ∈ E.
- Since x*_i + x*_j ≥ 1, either x*_i ≥ ½ or x*_j ≥ ½, so (i, j) is covered.

Pf. [S has the desired cost]
- Let S* be an optimal vertex cover. Then
w(S*) = Σ_{i ∈ S*} w_i ≥ Σ_{i ∈ V} w_i x*_i ≥ Σ_{i ∈ S} w_i x*_i ≥ ½ Σ_{i ∈ S} w_i = ½ w(S),
since the LP is a relaxation and x*_i ≥ ½ for every i ∈ S. ▪
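A sketch of the LP-rounding algorithm using SciPy's linprog (an assumption: a recent SciPy is available; the triangle example echoes the previous slide):

   import numpy as np
   from scipy.optimize import linprog

   def lp_round_vertex_cover(weights, edges):
       """Solve the LP relaxation (min Σ w_i x_i, x_i + x_j >= 1, 0 <= x_i <= 1)
       and keep every vertex with x*_i >= 1/2."""
       n = len(weights)
       A = np.zeros((len(edges), n))
       for row, (i, j) in enumerate(edges):
           A[row, i] = A[row, j] = -1.0      # -(x_i + x_j) <= -1
       res = linprog(c=weights, A_ub=A, b_ub=-np.ones(len(edges)),
                     bounds=[(0, 1)] * n, method="highs")
       return {i for i, x in enumerate(res.x) if x >= 0.5 - 1e-9}

   # Triangle with unit weights: the LP optimum is x = (1/2, 1/2, 1/2), value 3/2;
   # rounding returns all three vertices, weight 3 <= 2 * optimum (= 2).
   print(lp_round_vertex_cover([1, 1, 1], [(0, 1), (1, 2), (0, 2)]))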

Slide 50. Weighted Vertex Cover

Theorem. There is a 2-approximation algorithm for weighted vertex cover.
Theorem. [Dinur-Safra 2001] If P ≠ NP, then there is no ρ-approximation for ρ < 1.3607 (= 10√5 − 21), even with unit weights.

Open research problem. Close the gap.

Slide 51. 11.7 Load Balancing Reloaded

Slide 52. Generalized Load Balancing

Input. A set of m machines M; a set of n jobs J.
- Job j must run contiguously on an authorized machine, i.e., some i ∈ M_j ⊆ M.
- Job j has processing time t_j.
- Each machine can process at most one job at a time.

Def. Let J(i) be the subset of jobs assigned to machine i. The load of machine i is L_i = Σ_{j ∈ J(i)} t_j.
Def. The makespan is the maximum load on any machine, L = max_i L_i.

Generalized load balancing. Assign each job to an authorized machine so as to minimize the makespan.

Slide 53. Generalized Load Balancing: Integer Linear Program and Relaxation

ILP formulation. Let x_ij = the time machine i spends processing job j:

   (IP)  min L
         s.t. Σ_i x_ij = t_j    for all j ∈ J
              Σ_j x_ij ≤ L      for all i ∈ M
              x_ij ∈ {0, t_j}   for all j ∈ J and i ∈ M_j
              x_ij = 0          for all j ∈ J and i ∉ M_j

LP relaxation. Replace x_ij ∈ {0, t_j} by x_ij ≥ 0:

   (LP)  min L
         s.t. Σ_i x_ij = t_j    for all j ∈ J
              Σ_j x_ij ≤ L      for all i ∈ M
              x_ij ≥ 0          for all j ∈ J and i ∈ M_j
              x_ij = 0          for all j ∈ J and i ∉ M_j

Slide 54. Generalized Load Balancing: Lower Bounds

Lemma 1. Let L be the optimal value of the LP. Then the optimal makespan L* ≥ L.
Pf. The LP has fewer constraints than the IP formulation.

Lemma 2. The optimal makespan L* ≥ max_j t_j.
Pf. Some machine must process the most time-consuming job. ▪

Slide 55. Generalized Load Balancing: Structure of LP Solution

Lemma 3. Let x be a solution to the LP. Let G(x) be the bipartite graph on machines and jobs with an edge from machine i to job j if x_ij > 0. Then we can assume G(x) is acyclic.
Pf. (deferred; see the flow argument below.) If the LP solver doesn't return such an x, we can transform x into another LP solution of the same value whose G(x) is acyclic.

[Figure: a cyclic G(x) on machine and job nodes, and an acyclic one.]

Slide 56. Generalized Load Balancing: Rounding

Rounded solution. Find an LP solution x where G(x) is a forest. Root the forest G(x) at some arbitrary machine node r.
- If job j is a leaf node, assign j to its parent machine i.
- If job j is not a leaf node, assign j to any one of its children (which are machine nodes).

Lemma 4. The rounded solution only assigns jobs to authorized machines.
Pf. If job j is assigned to machine i, then x_ij > 0, and the LP solution can only assign positive values to authorized machines. ▪

Slide 57. Generalized Load Balancing: Analysis

Lemma 5. If job j is a leaf node and machine i = parent(j), then x_ij = t_j.
Pf. Since j is a leaf, x_kj = 0 for all machines k ≠ i = parent(j). The LP constraint guarantees Σ_i x_ij = t_j. ▪

Lemma 6. At most one non-leaf job is assigned to any machine.
Pf. The only possible non-leaf job assigned to machine i is parent(i). ▪

Slide 58. Generalized Load Balancing: Analysis

Theorem. The rounded solution is a 2-approximation.
Pf.
- Let J(i) be the jobs assigned to machine i.
- By Lemma 6, the load L_i on machine i has two components:
  – leaf nodes: Σ_{j ∈ J(i), j leaf} t_j = Σ_{j ∈ J(i), j leaf} x_ij ≤ Σ_j x_ij ≤ L ≤ L* (by Lemma 5, the LP constraints, and Lemma 1, since L is the optimal LP value);
  – parent(i): t_{parent(i)} ≤ max_j t_j ≤ L* (by Lemma 2).
- Thus the overall load L_i ≤ 2L*. ▪

Slide 59. Generalized Load Balancing: Flow Formulation

Flow formulation of the LP. Each job j is a source of t_j units of flow, which it may send only along edges (j, i) with i ∈ M_j; each machine i forwards at most L units to the sink.

Observation. Solutions to the feasible flow problem with value L are in one-to-one correspondence with LP solutions of value L.

Slide 60. Generalized Load Balancing: Structure of Solution

Lemma 3. Let (x, L) be a solution to the LP. Let G(x) be the graph with an edge between machine i and job j if x_ij > 0. We can find another solution (x', L) such that G(x') is acyclic.
Pf. Let C be a cycle in G(x).
- Augment the flow along the cycle C (flow conservation is maintained).
- At least one edge of C is removed (and none are added).
- Repeat until G(x') is acyclic.

[Figure: a flow G(x) containing a cycle C, and the flow G(x') after augmenting along C; one cycle edge drops to zero.]

Slide 61. Conclusions

Running time. The bottleneck operation in our 2-approximation is solving one LP with mn + 1 variables.
Remark. Can solve the LP using flow techniques on a graph with m + n + 1 nodes: given L, find a feasible flow if one exists, and binary search for L*.

Extensions: unrelated parallel machines. [Lenstra-Shmoys-Tardos 1990]
- Job j takes t_ij time if processed on machine i.
- 2-approximation algorithm via LP rounding.
- No 3/2-approximation algorithm unless P = NP.

Slide 62. 11.8 Knapsack Problem

Slide 63. Polynomial-Time Approximation Scheme

PTAS. A (1 + ε)-approximation algorithm for any constant ε > 0.
- Load balancing. [Hochbaum-Shmoys 1987]
- Euclidean TSP. [Arora 1996]

Consequence. A PTAS produces an arbitrarily high-quality solution, but trades off accuracy for time.
This section. A PTAS for the knapsack problem via rounding and scaling.

Slide 64. Knapsack Problem

Knapsack problem.
- Given n objects and a "knapsack."
- Item i has value v_i > 0 and weighs w_i > 0 (we'll assume w_i ≤ W).
- The knapsack can carry weight up to W.
- Goal: fill the knapsack so as to maximize the total value.

Ex: W = 11; the subset {3, 4} has value 40.

   Item | Value | Weight
     1  |   1   |   1
     2  |   6   |   2
     3  |  18   |   5
     4  |  22   |   6
     5  |  28   |   7

Slide 65. Knapsack is NP-Complete

KNAPSACK: Given a finite set X, nonnegative weights w_i, nonnegative values v_i, a weight limit W, and a target value V, is there a subset S ⊆ X such that
Σ_{i ∈ S} w_i ≤ W and Σ_{i ∈ S} v_i ≥ V?

SUBSET-SUM: Given a finite set X, nonnegative values u_i, and an integer U, is there a subset S ⊆ X whose elements sum to exactly U?

Claim. SUBSET-SUM ≤_P KNAPSACK.
Pf. Given an instance (u_1, …, u_n, U) of SUBSET-SUM, create the KNAPSACK instance with v_i = w_i = u_i and V = W = U, so the constraints read
Σ_{i ∈ S} u_i ≤ U and Σ_{i ∈ S} u_i ≥ U.

Slide 66. Knapsack Problem: Dynamic Programming I

Def. OPT(i, w) = the max value of a subset of items 1, …, i with weight limit w.
- Case 1: OPT does not select item i: OPT selects the best of 1, …, i−1 using weight limit w.
- Case 2: OPT selects item i: the new weight limit is w − w_i, and OPT selects the best of 1, …, i−1 using weight limit w − w_i.

   OPT(i, w) = 0 if i = 0;
             = OPT(i−1, w) if w_i > w;
             = max { OPT(i−1, w), v_i + OPT(i−1, w − w_i) } otherwise.

Running time. O(n W).
- W = weight limit.
- Not polynomial in the input size!

Slide 67. Knapsack Problem: Dynamic Programming II

Def. OPT(i, v) = the min weight of a subset of items 1, …, i that yields value exactly v.
- Case 1: OPT does not select item i: OPT selects the best of 1, …, i−1 that achieves value exactly v.
- Case 2: OPT selects item i: it consumes weight w_i, and the new value needed is v − v_i; OPT selects the best of 1, …, i−1 that achieves value exactly v − v_i.

   OPT(i, v) = 0 if v = 0;
             = ∞ if i = 0 and v > 0;
             = min { OPT(i−1, v), w_i + OPT(i−1, v − v_i) } otherwise.

Running time. O(n V*) = O(n² v_max), since V* ≤ n v_max.
- V* = optimal value = the maximum v such that OPT(n, v) ≤ W.
- Not polynomial in the input size!
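A Python sketch of this value-indexed DP, run on the instance from slide 64 (an illustration, not from the slides):

   import math

   def knapsack_min_weight(values, weights, W):
       """DP over achievable values: opt[v] = min weight of a subset with
       value exactly v. Returns the best value whose min weight is <= W."""
       V = sum(values)
       opt = [0] + [math.inf] * V
       for v_i, w_i in zip(values, weights):
           for v in range(V, v_i - 1, -1):   # descending: each item used once
               opt[v] = min(opt[v], w_i + opt[v - v_i])
       return max(v for v in range(V + 1) if opt[v] <= W)

   # The instance from slide 64: optimum {3, 4} with value 40 and weight 11.
   print(knapsack_min_weight([1, 6, 18, 22, 28], [1, 2, 5, 6, 7], W=11))  # 40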

Slide 68. Knapsack: FPTAS

Intuition for the approximation algorithm.
- Round all values up to lie in a smaller range.
- Run the dynamic programming algorithm on the rounded instance.
- Return the optimal items of the rounded instance.

Original instance (W = 11):

   Item | Value      | Weight
     1  |    934,221 |   1
     2  |  5,956,342 |   2
     3  | 17,810,013 |   5
     4  | 21,217,800 |   6
     5  | 27,343,199 |   7

Rounded instance (W = 11):

   Item | Value | Weight
     1  |   1   |   1
     2  |   6   |   2
     3  |  18   |   5
     4  |  22   |   6
     5  |  28   |   7

Slide 69. Knapsack: FPTAS

Knapsack FPTAS. Round up all values:
v̄_i = ⌈v_i / θ⌉ · θ and v̂_i = ⌈v_i / θ⌉, where
- v_max = largest value in the original instance,
- ε = precision parameter,
- θ = scaling factor = ε v_max / n.

Observation. Optimal solutions to the problems with values v̄ or v̂ are equivalent.
Intuition. v̄ is close to v, so an optimal solution using v̄ is nearly optimal; v̂ is small and integral, so the dynamic programming algorithm is fast.

Running time. O(n³ / ε). Dynamic program II runs in O(n² v̂_max), where v̂_max = ⌈v_max / θ⌉ = ⌈n / ε⌉.

Slide 70. Knapsack: FPTAS

Knapsack FPTAS. Round up all values: v̄_i = ⌈v_i / θ⌉ · θ.

Theorem. If S is the solution found by our algorithm and S* is any other feasible solution, then (1 + ε) Σ_{i ∈ S} v_i ≥ Σ_{i ∈ S*} v_i.
Pf. Let S* be any feasible solution satisfying the weight constraint:
Σ_{i ∈ S*} v_i ≤ Σ_{i ∈ S*} v̄_i   (we always round up)
≤ Σ_{i ∈ S} v̄_i                  (we solve the rounded instance optimally)
≤ Σ_{i ∈ S} (v_i + θ)             (we never round up by more than θ)
≤ Σ_{i ∈ S} v_i + nθ              (|S| ≤ n)
= Σ_{i ∈ S} v_i + ε v_max         (nθ = ε v_max)
≤ (1 + ε) Σ_{i ∈ S} v_i           (v_max ≤ Σ_{i ∈ S} v_i: the DP algorithm can always take the item of value v_max alone). ▪
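Putting rounding and the value DP together, a hedged FPTAS sketch (the choice ε = 0.1 and the subset tracking are illustrative assumptions):

   import math

   def knapsack_fptas(values, weights, W, eps):
       """FPTAS sketch: scale values by theta = eps * v_max / n, round up,
       run the exact value DP on the small integers, and return a subset
       whose true value is >= (1 - O(eps)) * optimum."""
       n = len(values)
       theta = eps * max(values) / n
       vhat = [math.ceil(v / theta) for v in values]   # small integral values
       V = sum(vhat)
       opt = [0] + [math.inf] * V
       choice = [set() for _ in range(V + 1)]          # subset realizing opt[v]
       for idx, (v_i, w_i) in enumerate(zip(vhat, weights)):
           for v in range(V, v_i - 1, -1):
               if w_i + opt[v - v_i] < opt[v]:
                   opt[v] = w_i + opt[v - v_i]
                   choice[v] = choice[v - v_i] | {idx}
       best = max(v for v in range(V + 1) if opt[v] <= W)
       return choice[best]

   vals = [934_221, 5_956_342, 17_810_013, 21_217_800, 27_343_199]
   wts = [1, 2, 5, 6, 7]
   print(knapsack_fptas(vals, wts, W=11, eps=0.1))     # {2, 3}: items 3 and 4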

Slide 71. Extra Slides

Slide 72. Load Balancing on 2 Machines

Claim. Load balancing is hard even with only 2 machines.
Pf. NUMBER-PARTITIONING ≤_P LOAD-BALANCE. (NUMBER-PARTITIONING is NP-complete by Exercise 8.26.)

[Figure: jobs a–g arranged on machines 1 and 2 so that both machines finish at time L; a yes-instance of the partition problem corresponds to a schedule of makespan L.]

Slide 73. Center Selection: Hardness of Approximation

Theorem. Unless P = NP, there is no ρ-approximation algorithm for the metric k-center problem for any ρ < 2.
Pf. We show how we could use a (2 − ε)-approximation algorithm for k-center to solve DOMINATING-SET in poly-time (see Exercise 8.29).
- Let G = (V, E), k be an instance of DOMINATING-SET.
- Construct an instance G' of k-center with sites V and distances
  – d(u, v) = 1 if (u, v) ∈ E,
  – d(u, v) = 2 if (u, v) ∉ E.
- Note that G' satisfies the triangle inequality.
- Claim: G has a dominating set of size k iff there exist k centers C* with r(C*) = 1.
- Thus, if G has a dominating set of size k, a (2 − ε)-approximation algorithm on G' must find a solution C* with r(C*) = 1, since it cannot use any edge of distance 2. ▪

