
1 Approximation Algorithms
Chapter 15 Approximation Algorithms

2 Introduction to Approximation Algorithms
Many hard combinatorial optimization problems cannot be solved efficiently using backtracking or randomization. A combinatorial optimization problem asks for the best solution out of finitely many possibilities. An approximation algorithm gives a reasonable solution that approximates an optimal one. A distinguishing characteristic of (most) approximation algorithms is that they are fast. One should not be too optimistic, however, about finding an efficient approximation algorithm: there are hard problems for which even the existence of a reasonable approximation algorithm is unlikely unless P = NP.

3 Combinatorial Optimization Problem (COP)
INPUT: Instance I of the COP.
Feasible Set: FEAS(I) = the set of all feasible (valid) solutions for instance I, usually expressed by a set of constraints.
Objective Cost Function: Instance I includes a description of the objective cost function Cost[I] that maps each solution Sol (feasible or not) to a real number or ±∞.
Goal: Optimize (i.e., minimize or maximize) the objective cost function.
Optimum Set: OPT(I) = { Sol ∈ FEAS(I) | Cost[I](Sol) ≤ Cost[I](Sol'), ∀Sol' ∈ FEAS(I) }, the set of all minimum-cost feasible solutions for instance I (for a minimization problem).
Combinatorial: means the problem structure implies that only a finite number of solutions need to be examined to find the optimum.
OUTPUT: A solution Sol ∈ OPT(I), or a report that FEAS(I) = ∅.

4 COP Examples
“Easy” (polynomial-time solvable): Shortest (simple) Path, Minimum Spanning Tree, Graph Matching.
“NP-hard” (no known polynomial-time solution): Longest (simple) Path, Traveling Salesman, Vertex Cover, Set Cover, K-Cluster, 0/1 Knapsack.

5 Approximation Algorithm for NP-Hard COP
Algorithm A: polynomial time on any input (bit-) size n.
S_A = feasible solution obtained by algorithm A; S_OPT = optimum solution.
C(S_A) > 0: cost of the solution obtained by algorithm A; C(S_OPT) > 0: cost of the optimum solution.
r = r(n) > 1: (worst-case) approximation ratio.
A is an r-approximation algorithm if for all instances:
C(S_A) / C(S_OPT) ≤ r for minimization, and C(S_OPT) / C(S_A) ≤ r for maximization;
equivalently, r = max( C(S_A) / C(S_OPT), C(S_OPT) / C(S_A) ).

6 Design Methods
Greedy Methods [e.g., Weighted Set Cover, Clustering]
Cost Scaling or Relaxation [e.g., 0/1 Knapsack]
Constraint Relaxation [e.g., Weighted Vertex Cover]
Combination of both [e.g., Lagrangian relaxation]
LP-relaxation of an Integer Linear Program (ILP):
  LP-rounding method
  LP primal-dual method [e.g., the Pricing Method]
Trimmed Dynamic Programming [e.g., Euclidean-TSP]
Local Search Heuristics [e.g., 2-exchange in TSP]
Tabu Search
Genetic Heuristics
Simulated Annealing
•••

7 Analysis Method
Establish cost lower/upper bounds LB and UB such that:
LB ≤ C(S_A) ≤ UB, LB ≤ C(S_OPT) ≤ UB, UB ≤ r·LB.
Minimization: LB ≤ C(S_OPT) ≤ C(S_A) ≤ UB ≤ r·LB ≤ r·C(S_OPT), hence C(S_OPT) ≤ C(S_A) ≤ r·C(S_OPT).
Also: C(S_A) ≤ (1+ε)·C(S_OPT) [ε > 0 is the relative error; r = 1+ε].
Maximization: LB = C(S_A) ≤ C(S_OPT) ≤ UB ≤ r·LB = r·C(S_A), hence C(S_A) ≤ C(S_OPT) ≤ r·C(S_A).
Also: C(S_A) ≥ (1−ε)·C(S_OPT) [0 < ε < 1 is the relative error; 1/r = 1−ε].

8 Classes of Approximation Methods
Difference-bounds algorithm: |C(S_A) − C(S_OPT)| ≤ K for some constant K.
r-approximation algorithm A (relative performance bounds):
(1) runs in time polynomial in the input (bit-) size, and
(2) 1/r ≤ C(S_A) / C(S_OPT) ≤ r.
PTAS (Polynomial-Time Approximation Scheme): takes an additional input parameter ε (the desired relative error bound);
(1) finds a solution with relative error at most ε, and
(2) for any fixed ε, runs in time polynomial in the input (bit-) size. [For example, O(n^(1/ε)).]
FPTAS (Fully Polynomial-Time Approximation Scheme):
(1) is a PTAS, and
(2) its running time is polynomial in both the input (bit-) size and 1/ε. [For example, O(n²/ε).]

9 Difference bounds
The best we can hope for from an approximation algorithm is that the difference between the value of the optimal solution and the value of the solution obtained by the approximation algorithm is always bounded by a constant: for all instances I of the problem, the most desirable guarantee for an approximation algorithm A is |C(S_A) − C(S_OPT)| ≤ K, for some constant K. However, there are very few NP-hard optimization problems for which approximation algorithms with difference bounds are known.

10 NP-hard COPs Considered here
Bin Packing Problem
Weighted Vertex Cover Problem
Weighted Set Cover Problem
Traveling Salesman Problem
K-Cluster Problem
0/1 Knapsack Problem
Subset-Sum Problem

11 The bin packing problem
Given a collection of items u1, ..., un of sizes s1, ..., sn, where each sj is between 0 and 1, we are required to pack these items into the minimum number of bins of unit capacity.
Four heuristics: First Fit (FF), Best Fit (BF), First Fit Decreasing (FFD), Best Fit Decreasing (BFD).
Theorem 15.1: For all instances I of the BIN PACKING problem, FF(I) ≤ 2·OPT(I).
Theorem 15.2: For all instances I of the BIN PACKING problem, FFD(I) ≤ (11/9)·OPT(I) + 4.
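For concreteness, here is a minimal Python sketch of First Fit; FFD is the same procedure run on the items sorted in non-increasing size. The function names and the representation (a parallel list of remaining capacities) are illustrative, not from the textbook.

def first_fit(sizes):
    # Pack items (sizes in (0, 1]) into unit-capacity bins with First Fit:
    # each item goes into the first bin that still has room; a new bin is
    # opened only when no existing bin fits.  Returns the list of bins.
    bins = []   # bins[i] holds the sizes packed into bin i
    free = []   # free[i] = remaining capacity of bin i
    for s in sizes:
        for i, cap in enumerate(free):
            if s <= cap:
                bins[i].append(s)
                free[i] -= s
                break
        else:                          # no existing bin fits: open a new one
            bins.append([s])
            free.append(1.0 - s)
    return bins

def first_fit_decreasing(sizes):
    # FFD = First Fit on the items sorted in non-increasing size
    return first_fit(sorted(sizes, reverse=True))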

12 Weighted Vertex Cover Problem (WVCP)
Input: an undirected graph G(V,E) with vertex weights w(v) > 0 for each v ∈ V.
Output: a vertex cover C: a subset C ⊆ V that “hits” (covers) all edges, i.e., ∀(u,v) ∈ E: u ∈ C or v ∈ C (or both).
Goal: minimize the weight of the vertex cover C: W(C) = Σ_{v∈C} w(v).
Our textbook considers the unweighted case only, i.e., w(v) = 1 ∀v ∈ V.
[Figure: example graph with vertices a–e; w(a)=4, w(b)=3, w(c)=8, w(d)=6, w(e)=2; edges (a,b), (a,c), (a,d), (b,e), (c,d), (d,e).]

13 Weighted Vertex Cover Problem (WVCP)
(Same instance as the previous slide.)
C_OPT = {a, d, e}, W(C_OPT) = 4 + 6 + 2 = 12.
[Figure: the example graph with the optimum cover {a, d, e} highlighted.]

14 WVCP as an ILP
0/1 variables on vertices: x(v) = 1 if v ∈ C, x(v) = 0 otherwise.
(P1) WVCP as an ILP:
minimize Σ_{v∈V} w(v)·x(v)
subject to: x(u) + x(v) ≥ 1 ∀(u,v) ∈ E; x(v) ∈ {0,1} ∀v ∈ V.
(P2) LP Relaxation:
minimize Σ_{v∈V} w(v)·x(v)
subject to: x(u) + x(v) ≥ 1 ∀(u,v) ∈ E; x(v) ≥ 0 ∀v ∈ V.

15 WVCP LB & UB
(P2) LP Relaxation: the minimization LP above. (P3) Dual LP (maximization):
maximize Σ_{(u,v)∈E} p(u,v)
subject to: Σ_{(u,v) incident to v} p(u,v) ≤ w(v) ∀v ∈ V; p(u,v) ≥ 0 ∀(u,v) ∈ E.   (3)
OPTcost(P1) ≥ OPTcost(P2) = OPTcost(P3) ≥ any feasible cost of (P3)
(min relaxation)   (LP duality)   (max problem)
LB: cost of any feasible solution to (P3).
UB: feasible VC by the pricing method (LP primal-dual).

16 Approx WVCP by Pricing Method
Define dual (real) price variables p(u,v) ≥ 0 for each (u,v) ∈ E.
Price Invariant (PI) [maintains (P3) feasibility]: Σ_{(u,v) incident to v} p(u,v) ≤ w(v) ∀v ∈ V.
Interpretation: a vertex (server) v ∈ C covers its incident edges (clients). The weight (cost) of v is distributed as charged price among these clients, without over-charging.
We say vertex v is tight if its inequality (3) is met with equality, i.e., Σ_{(u,v) incident to v} p(u,v) = w(v).
We say (the price of) an edge (u,v) is final if u or v is tight.

17 Approx WVCP by Pricing Method
ALGORITHM Approximate-Vertex-Cover (G(V,E), w(V))
    for each edge (u,v) ∈ E do p(u,v) ← 0
    for each edge (u,v) ∈ E do
        finalize (u,v), i.e., increase p(u,v) until u or v becomes tight
    C ← { v ∈ V | v is tight }
    return C
end
[Figure: the example graph, before any edge is finalized.]
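A minimal Python sketch of the algorithm above; the graph representation (a dict of vertex weights plus an edge list) and the function name are my own, and the example weights are those recovered from the slides' trace. Finalizing an edge raises its price by the smaller residual weight of its two endpoints, which makes at least one endpoint tight.

def approx_weighted_vertex_cover(weights, edges):
    # weights: dict vertex -> positive weight w(v); edges: list of pairs (u, v).
    # Returns the set of tight vertices, which covers every edge.
    residual = dict(weights)   # residual[v] = w(v) minus prices charged to v
    price = {}
    for (u, v) in edges:
        # finalize (u, v): raise p(u,v) until u or v becomes tight
        delta = min(residual[u], residual[v])
        price[(u, v)] = delta
        residual[u] -= delta
        residual[v] -= delta
    return {v for v in weights if residual[v] == 0}

# The slides' example instance (weights assumed as recovered above):
w = {'a': 4, 'b': 3, 'c': 8, 'd': 6, 'e': 2}
E = [('a','b'), ('a','c'), ('a','d'), ('b','e'), ('c','d'), ('d','e')]
C = approx_weighted_vertex_cover(w, E)   # {'a', 'b', 'd'}, total weight 13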

18–23 Approx WVCP by Pricing Method (example run)
Finalize the edges in order:
(a,b): b becomes tight [p(a,b) = 3]
(a,c): a becomes tight [p(a,c) = 1]
(a,d): no change
(b,e): no change
(c,d): d becomes tight [p(c,d) = 6]
(d,e): no change
C = {a, b, d}, W(C) = 4 + 3 + 6 = 13.
C_OPT = {a, d, e}, W(C_OPT) = 4 + 6 + 2 = 12.
[Figures: the example graph after each finalize step.]

24 This is a 2-approximation algorithm
THEOREM: The algorithm has the following properties:
(1) Correctness: outputs a feasible vertex cover C.
(2) Running Time: polynomial (in fact, linear time).
(3) Approximation Bound: W(C) ≤ 2·W(C_OPT) (and r = 2 is tight).
Proof idea for (3): W(C) = Σ_{v∈C} w(v) = Σ_{v∈C} Σ_{(u,v) incident to v} p(u,v) ≤ 2·Σ_{(u,v)∈E} p(u,v) ≤ 2·W(C_OPT), since each edge is counted at most twice and any feasible dual cost is a lower bound on the optimum.

25 Weighted Set Cover Problem (WSCP)
Input: a set X = {x1, x2, …, xn} of n elements, a family F = {S1, S2, …, Sm} of m subsets of X, and w(F): weights w(S) > 0 for each S ∈ F.
Pre-condition: F covers X, i.e., X = ∪_{S∈F} S = S1 ∪ S2 ∪ … ∪ Sm.
Output: a subset C ⊆ F that covers X, i.e., X = ∪_{S∈C} S.
Goal: minimize the weight of the set cover C: W(C) = Σ_{S∈C} w(S).
Set Cover (SC) generalizes Vertex Cover (VC): in VC, elements (clients) are edges, and sets (servers) correspond to vertices; the set corresponding to a vertex v is the set of edges incident to v.

26 Example
[Figure: X = {x1, …, x8}; F = {S1, …, S5} with w(S1)=8, w(S2)=9, w(S3)=12, w(S4)=5, w(S5)=10.]

27 Example
[Figure: the same instance with the optimum cover highlighted.]
C_OPT = {S2, S3, S5}, W(C_OPT) = 9 + 12 + 10 = 31.

28 Approx WSCP by Pricing Method
ALGORITHM Greedy-Set-Cover (X, F, w(F))
1. U ← X (* uncovered elements *)
2. C ← ∅ (* set cover *)
3. while U ≠ ∅ do
4.     select S ∈ F that minimizes the price p = w(S) / |S ∩ U|
5.     U ← U − S
6.     C ← C ∪ {S}
7. return C
end
Definition [for the sake of analysis]: the price p(x) charged to an element x ∈ X is the price (at line 4) at the earliest iteration at which x gets covered (i.e., removed from U at line 5). [Note: p(x) is charged to x only once, at the first time x gets covered.]
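A Python sketch of Greedy-Set-Cover; the representation (a name → frozenset dict for the family and a name → weight dict) is illustrative. It also records the prices p(x) used in the analysis.

def greedy_set_cover(X, sets, weight):
    # X: iterable of elements; sets: dict name -> frozenset of elements;
    # weight: dict name -> positive weight w(S).
    # Returns (cover, price) where price[x] is p(x) from the analysis.
    uncovered = set(X)
    cover, price = [], {}
    while uncovered:
        # select S in F minimizing p = w(S) / |S ∩ U|
        name = min((s for s in sets if sets[s] & uncovered),
                   key=lambda s: weight[s] / len(sets[s] & uncovered))
        newly = sets[name] & uncovered
        p = weight[name] / len(newly)
        for x in newly:
            price[x] = p       # charged once, when x is first covered
        uncovered -= newly
        cover.append(name)
    return cover, price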

29–38 Example run of the algorithm
Prices p = w(S) / |S ∩ U| at each iteration, with the selected set:
Iteration 1: S1: 8/4, S2: 9/4, S3: 12/3, S4: 5/2, S5: 10/3 → select S1; x1, x2, x3, x4 each charged p = 2.
Iteration 2: S2: 9/2, S3: 12/1, S4: 5/1, S5: 10/2 → select S2; x5, x6 each charged p = 9/2.
Iteration 3: S3: 12/1, S4: 5/1, S5: 10/1 → select S4; x8 charged p = 5.
Iteration 4: S3: 12/1 → select S3; x7 charged p = 12.
C = {S1, S2, S4, S3}, W(C) = 8 + 9 + 5 + 12 = 34 = Σ_i p(x_i).
C_OPT = {S2, S3, S5}, W(C_OPT) = 9 + 12 + 10 = 31.
[Figures: the instance with the covered elements and charged prices after each iteration.]

40 This is an H(n)-approximation algorithm
LEMMA: ∀S ∈ F: Σ_{x∈S} p(x) ≤ w(S)·H(|S|), where H(k) = 1 + 1/2 + … + 1/k is the k-th harmonic number.
THEOREM: The Greedy-Set-Cover algorithm has the following properties:
(1) Correctness: outputs a feasible set cover C.
(2) Running Time: polynomial.
(3) Approximation Bound: W(C) ≤ H(d_max)·W(C_OPT) ≤ H(n)·W(C_OPT), where d_max = max_{S∈F} |S|.

41 Traveling Salesman Problem (TSP)
Input: an n×n positive distance matrix D = (dij), i,j = 1..n, where dij is the travel distance from city i to city j.
Output: a traveling salesman tour T: T starts from the home city (say, city 1), visits each city exactly once, and returns to the home city.
Goal: minimize the total distance traveled on tour T: C(T) = Σ_{(i,j)∈T} dij.
Example: T_OPT = (1, 3, 4, 2) = ((1,3), (3,4), (4,2), (2,1)), with C(T_OPT) = d13 + d34 + d42 + d21 = 15.

42 Some Classes of TSP
General TSP: the distance matrix D is arbitrary.
Metric-TSP: D satisfies the metric axioms.
Euclidean-TSP: n cities as n points in the plane, with Euclidean inter-city distances.
These are all NP-hard.
Related problems: Minimum Spanning Tree, Hamiltonian Cycle, Graph Matching, Eulerian Graphs.

43 Hamiltonian Cycle Problem (HCP)
HCP: given a graph G(V,E), does G have a Hamiltonian Cycle (HC)? An HC is a simple spanning cycle of G, i.e., a cycle that goes through each vertex exactly once. HCP is known to be NP-hard.
[Figure: the Petersen graph (non-Hamiltonian) and the skeleton of the dodecahedron (Hamiltonian).]

44 General TSP
THEOREM: Let r > 1 be any constant. Then r-approximation of general TSP is also NP-hard. [So there is no polynomial-time r-approximation algorithm for general TSP, unless P = NP.]
Proof idea: reduce from HCP: give the edges of G distance 1 and the non-edges a distance greater than r·n; then an r-approximate tour has length ≤ r·n if and only if G is Hamiltonian.

45 Metric & Euclidean TSP
Metric Traveling Salesman Problem (metric-TSP): a special case of general TSP (the distance matrix is a metric); NP-hard; 2-approximation; 1.5-approximation.
Euclidean Traveling Salesman Problem (ETSP): a special case of metric-TSP (with the Euclidean metric); PTAS.

46 Eulerian Graph
Definition: a graph G(V,E) is Eulerian if it has an Euler walk. An Euler walk is a closed walk that goes through each edge of G exactly once.
[Figure: an Eulerian graph G.]
FACT 1: a graph G is Eulerian if and only if (a) G is connected and (b) every vertex of G has even degree.
FACT 2: an Euler walk of an Eulerian graph can be found in linear time.

47 2-approximation of metric-TSP
[Rosenkrantz–Stearns–Lewis, 1974]: C(T) ≤ 2·C(T_OPT).
1. Compute a Minimum Spanning Tree (MST).
2. Take an Euler walk around the doubled MST.
3. Bypass repeated nodes on the Euler walk to obtain the tour T.
FACT 1: the triangle inequality implies that bypassing nodes cannot increase the length of the walk.
FACT 2: LB = C(MST) ≤ C(T_OPT) ≤ C(T) = UB ≤ 2·C(MST) = 2·LB.
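A self-contained Python sketch of this heuristic, assuming the input is a symmetric distance matrix satisfying the triangle inequality. It uses the standard observation that a preorder walk of the MST visits the vertices in the same order as the Euler walk of the doubled MST with repeated nodes bypassed.

import math

def double_mst_tour(dist):
    # dist: n x n symmetric matrix obeying the triangle inequality.
    # Returns a 2-approximate tour as a list of city indices starting at 0.
    n = len(dist)
    # Prim's algorithm from city 0, recording each vertex's children
    in_tree = [False] * n
    parent = [0] * n
    best = [math.inf] * n
    best[0] = 0
    children = [[] for _ in range(n)]
    for _ in range(n):
        u = min((v for v in range(n) if not in_tree[v]), key=lambda v: best[v])
        in_tree[u] = True
        if u != 0:
            children[parent[u]].append(u)
        for v in range(n):
            if not in_tree[v] and dist[u][v] < best[v]:
                best[v], parent[v] = dist[u][v], u
    # Preorder DFS = Euler walk of the doubled MST with repeats bypassed
    tour, stack = [], [0]
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    return tour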

48 r = 2 is tight (even for Euclidean instances)
[Figure: a Euclidean instance; its MST; the Euler walk of the doubled MST; the resulting tour T; and the optimal tour T_OPT.]

49 Graph Matching
Definition: a matching M in a graph G(V,E) is a subset of the edges of G such that no two edges in M are incident to a common vertex. The weight of M is the sum of its edge weights. A perfect matching is one in which every vertex is matched.
[Figure: a weighted graph G and a perfect matching M of weight 5 + 2 + 3 + 6 = 16.]
FACT: a minimum-weight maximum-cardinality matching can be obtained in polynomial time [Jack Edmonds, 1965].

50 1.5-approximation for metric-TSP
[Christofides, 1976]: C(T) ≤ 1.5·C(T_OPT).
1. Compute the MST; find the odd-degree nodes in the MST.
2. M = minimum-weight perfect matching on the odd-degree nodes.
3. E = MST + M is Eulerian; find an Euler walk of E.
4. Bypass repeated nodes on the Euler walk to get a TSP tour T.
FACTS:
Any graph has an even number of odd-degree nodes (hence M exists).
C(MST) ≤ C(T_OPT).
C(M) ≤ 0.5·C(T_OPT).
C(E) = C(MST) + C(M) ≤ 1.5·C(T_OPT).
C(T) ≤ C(E) ≤ 1.5·C(T_OPT).
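A sketch of the same pipeline in Python using NetworkX (an assumption: minimum_spanning_tree, min_weight_matching, and eulerian_circuit are helpers in recent NetworkX releases, and on a complete subgraph with an even number of nodes min_weight_matching yields a minimum-weight perfect matching). This is a sketch under those assumptions, not the textbook's pseudocode.

import networkx as nx

def christofides_tour(G):
    # G: complete weighted nx.Graph whose 'weight' attributes satisfy
    # the triangle inequality.  Returns a 1.5-approximate tour (node list).
    T = nx.minimum_spanning_tree(G, weight="weight")
    odd = [v for v in T.nodes if T.degree(v) % 2 == 1]  # always an even count
    M = nx.min_weight_matching(G.subgraph(odd), weight="weight")
    H = nx.MultiGraph(T)        # MST + M: every degree is even, so Eulerian
    H.add_edges_from(M)
    tour, seen = [], set()
    for u, _ in nx.eulerian_circuit(H):                 # bypass repeated nodes
        if u not in seen:
            seen.add(u)
            tour.append(u)
    return tour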

51–54 Why C(M) ≤ 0.5·C(T_OPT)
Shortcut T_OPT so that it visits only the odd-degree nodes of the MST; by the triangle inequality this cannot increase its length. The resulting cycle has an even number of nodes, so its alternating edges split into two perfect matchings M1 and M2 on the odd-degree nodes.
Since M is a minimum-weight perfect matching on those nodes: C(M) ≤ C(M1) and C(M) ≤ C(M2).
Therefore 2·C(M) ≤ C(M1) + C(M2) ≤ C(T_OPT), i.e., C(M) ≤ 0.5·C(T_OPT).
[Figures: T_OPT with the odd-degree MST nodes marked, and the two matchings M1 and M2.]

55 r = 1.5 is tight (even for Euclidean instances)
[Figure: an instance showing the MST; MST + M; the resulting tour T; and the optimal tour T_OPT.]

56–57 The K-Cluster Problem
Input: points X = {x1, x2, …, xn} with an underlying distance metric d(xi, xj), i,j = 1..n, and a positive integer K.
Output: a partition of X into K clusters C1, C2, …, CK.
Goal: minimize the largest diameter of the clusters: max_k diam(Ck), where diam(C) = max{ d(x,y) | x,y ∈ C }.
A Euclidean version: given n points in the plane, find K equal, minimum-diameter circular disks that collectively cover the n points. This Euclidean version is also NP-hard.
[Figures: n = 17 points, and the same points in K = 4 clusters.]

58 Greedy Approximation
IDEA: (1) Pick K points {m1, m2, …, mK} from X as cluster “centers”: greedily and incrementally pick cluster centers, each farthest from the previously selected ones. (2) Assign the remaining points of X to the cluster with the closest center (break ties arbitrarily).
ALGORITHM Approximate-K-Cluster (X, d, K)
    pick any point m1 ∈ X as the first cluster center
    for i ← 2 .. K do
        let mi ∈ X be the point farthest from {m1, …, m(i−1)}
        (i.e., mi maximizes r(i) = min_{j<i} d(mi, mj))
    for i ← 1 .. K do
        Ci ← { x ∈ X | mi is the closest center to x } (* break ties arbitrarily *)
    return the K clusters {C1, C2, …, CK}
end
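A Python sketch of Approximate-K-Cluster; the representation (a list of point identifiers and a distance function d) is illustrative.

def approx_k_cluster(points, d, K):
    # Greedy farthest-point 2-approximation for the K-cluster problem.
    # points: list of points; d: metric d(x, y) -> distance; K: #clusters.
    centers = [points[0]]                 # any point may serve as m1
    # dist_to_c[x] = distance from x to its closest chosen center so far
    dist_to_c = {x: d(x, centers[0]) for x in points}
    for _ in range(1, K):
        m = max(points, key=lambda x: dist_to_c[x])   # farthest point
        centers.append(m)
        for x in points:
            dist_to_c[x] = min(dist_to_c[x], d(x, m))
    clusters = [[] for _ in range(K)]
    for x in points:                      # assign each point to closest center
        i = min(range(K), key=lambda i: d(x, centers[i]))
        clusters[i].append(x)
    return clusters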

59–66 Greedy Approximation Example
n = 20, K = 4.
Step 1: pick the first center m1 arbitrarily.
Steps 2–4: pick each next center m2, m3, m4 farthest from the previous ones.
Step 5: assign each point to its closest center.
Step 6: form the K clusters C1, C2, C3, C4.
Objective cost = largest cluster diameter.
[Figures: the 20 points, the centers chosen one by one, and the resulting 4 clusters.]

67 This is a 2-approximation algorithm
Definition: let x* ∈ X be the point farthest from {m1, m2, …, mK} (i.e., x* = m(K+1) if we wanted K+1 centers), and let r* = r(K+1) = min{ d(x*, mj) | j = 1..K }.
LEMMA: The algorithm has the following properties:
(a) Every point is within distance at most r* of its cluster center.
(b) The K+1 points {m1, m2, …, mK, m(K+1) = x*} are all at distance at least r* from each other.
THEOREM: Approximate-K-Cluster is a polynomial-time 2-approximation algorithm.
Proof idea: by (a), every cluster has diameter at most 2r*; by (b) and the pigeonhole principle, two of the K+1 points must share an optimal cluster, so the optimal diameter is at least r*.

68 0/1 Knapsack Problem
Input: n items with weights w1, w2, …, wn and values v1, v2, …, vn, and knapsack weight capacity W (all positive integers).
Output: a subset S of the items (to be placed in the knapsack) whose total weight does not exceed the knapsack capacity: Σ_{i∈S} wi ≤ W.
Goal: maximize the total value of the items in S: V(S) = Σ_{i∈S} vi.
Later we will consider a special case of this problem called the Subset-Sum Problem (SSP); in SSP, wi = vi for i = 1..n. Both problems are NP-hard.

69 0/1 vs Fractional KP
In the 0/1 version (01KP), each item is either taken whole or not at all; in the fractional version (FKP), any fraction of an item may be taken, contributing proportionally to both its weight and its value.

70 The Fractional KP
FACT: the FKP optimum solution can be obtained in O(n log n) time.
Proof (sketch): greedy strategy: consider the items in decreasing order of vi/wi, and place the items in the knapsack in that order until it is filled up. Only the last item placed in the knapsack may be fractionally selected. The first step can be done by sorting in O(n log n) time.
Exercise: complete the proof, and show an instance for which this greedy strategy fails to find the exact 01KP solution (when the fractional item is discarded).
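A Python sketch of this greedy strategy; the function name and return convention (the optimum total value) are illustrative.

def fractional_knapsack(values, weights, W):
    # Exact greedy solution to FKP: take items in decreasing value/weight
    # order; only the last item taken may be fractional.
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    total, room = 0.0, W
    for i in order:
        if weights[i] <= room:            # take item i whole
            total += values[i]
            room -= weights[i]
        else:                             # take the fitting fraction and stop
            total += values[i] * room / weights[i]
            break
    return total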

71 01KP by Dynamic Programming
Item value vi and weight wi, for i = 1..n. For i = 0..n and C = 0..W, define:
S_OPT(i,C) = a maximum-value subset of items {1..i} with knapsack capacity C;
V_OPT(i,C) = total value of the items in S_OPT(i,C).
Return S_OPT(n,W).
FACT:
(a) This DP1 finds an exact solution to 01KP in O(nW) time.
(b) There is a similar DP2 with O(nV) time (V = sum of the n item values), recurring on total value rather than weight capacity.
(c) Both algorithms can take exponential time in the input bit-size, since W and V need not be polynomial in it!
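A Python sketch of DP1 (capacity-indexed); the 1-D table with a reverse capacity loop is a standard space-saving form of the same recurrence, and the subset-recovery bookkeeping is my own.

def knapsack_dp(values, weights, W):
    # Exact 0/1 knapsack in O(nW):
    # best[c] = max value with capacity c over the items seen so far.
    n = len(values)
    best = [0] * (W + 1)
    take = [[False] * (W + 1) for _ in range(n)]
    for i in range(n):
        for c in range(W, weights[i] - 1, -1):   # reverse: each item used once
            if best[c - weights[i]] + values[i] > best[c]:
                best[c] = best[c - weights[i]] + values[i]
                take[i][c] = True
    # recover the chosen subset by walking the items backwards
    subset, c = [], W
    for i in range(n - 1, -1, -1):
        if take[i][c]:
            subset.append(i)
            c -= weights[i]
    return best[W], subset[::-1]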

72 01KP approximation: Greedy Algorithm
Input: 2n+1 positive integers corresponding to item weights {w1, …, wn}, item values {v1, …, vn}, and the knapsack capacity W.
Output: a subset Z of the items whose total weight is at most W.
1. renumber the items so that v1/w1 ≥ … ≥ vn/wn
2. j ← 0, K ← 0, V ← 0, Z ← ∅
3. while j < n and K < W
4.     j ← j + 1
5.     if wj ≤ W − K then
6.         Z ← Z ∪ {uj}
7.         K ← K + wj
8.         V ← V + vj
9.     end if
10. end while
R_KNAPSACKGREEDY = ?

73 01KP approximation: Greedy Algorithm
(Lines 1–10 as on the previous slide, then:)
11. let Z' = {us}, where us is an item of maximum value
12. if V ≥ vs then return Z
13. else return Z'
R_KNAPSACKGREEDY = 2
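A runnable Python version of lines 1–13 above; item indices stand in for the items uj, and the fallback considers the most valuable single item that fits on its own (an assumption of this sketch).

def greedy_knapsack(values, weights, W):
    # Greedy 2-approximation: fill greedily in decreasing value/weight
    # order, then return the better of that subset and the single most
    # valuable item (the comparison on lines 11-13 gives the ratio 2).
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    Z, K, V = [], 0, 0
    for j in order:
        if weights[j] <= W - K:
            Z.append(j)
            K += weights[j]
            V += values[j]
    s = max((i for i in range(len(values)) if weights[i] <= W),
            key=lambda i: values[i], default=None)
    if s is None or V >= values[s]:
        return Z
    return [s]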

74 Let =1/k for some positive integer k
Let =1/k for some positive integer k. Algorithm A consists of two steps. First, choose a subset of at most k items and put them in the knapsack. Then run Algorithm KNAPSACKGREEDY on the remaining items in order to complete the packing. These two steps are repeated times, once for each subset of size j, 0jk Theorem 15.4(PTAS): Let =1/k for some k1. Then the running time of Algorithm A is O(knk+1) and its performance ratio is 1+

75 01KP approximation by scaling values
Consider the O(nV)-time DP2 solution. Scale down (with rounding) the item values by some factor s < 1; don't alter the weights or the knapsack capacity. This does not affect the set of feasible solutions to 01KP. The running time is scaled down to O(nVs). How much is the value of each feasible solution scaled down? The optimum subset of items may not be optimum in the scaled version!
Example: v1 = 327,901,682, v2 = 605,248, v3 = 451,773, s = 1/1,000. Scaled values: û1 = 327,901, û2 = 605, û3 = 451.

76 FPTAS for 01KP by scaling values
ALGORITHM Approximate-01KP (v[1..n], w[1..n], W, ε)
    vmax ← max { v[i] | i = 1..n }
    s ← n / (ε·vmax) (* scaling factor — why? *)
    for i ← 1 .. n do û[i] ← ⌊s·v[i]⌋ (* item values scaled down *)
    S_A ← DP2 (û[1..n], w[1..n], W) (* subset of items selected by DP2 *)
    return S_A
end
THEOREM: Approximate-01KP has the following properties:
(1) Correctness: outputs a feasible solution S_A.
(2) Running Time: O(n³/ε).
(3) Approximation Bound: V(S_A) ≥ (1−ε)·V(S_OPT).
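A Python sketch of Approximate-01KP. DP2 is implemented inline as a least-weight-per-scaled-value table, which is one standard realization of the value-indexed DP; the variable names otherwise follow the pseudocode.

import math

def fptas_knapsack(values, weights, W, eps):
    # FPTAS for 0/1 knapsack by value scaling.
    # min_w[v] = least weight needed to reach scaled value exactly v.
    n = len(values)
    s = n / (eps * max(values))
    scaled = [math.floor(s * v) for v in values]
    V = sum(scaled)
    min_w = [0] + [math.inf] * V
    choice = [dict() for _ in range(n)]      # choice[i][v]: item i used at v
    for i in range(n):
        for v in range(V, scaled[i] - 1, -1):
            if min_w[v - scaled[i]] + weights[i] < min_w[v]:
                min_w[v] = min_w[v - scaled[i]] + weights[i]
                choice[i][v] = True
    best_v = max(v for v in range(V + 1) if min_w[v] <= W)
    subset, v = [], best_v                   # recover the chosen subset
    for i in range(n - 1, -1, -1):
        if choice[i].get(v):
            subset.append(i)
            v -= scaled[i]
    return subset[::-1]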

77 The subset-sum problem
The subset-sum problem is a special case of the knapsack problem in which the item values are identical to their sizes. Given n items of sizes s1, …, sn and a positive integer C (the knapsack capacity), the objective is to find a subset of the items that maximizes the total sum of their sizes without exceeding the knapsack capacity C.

78 Algorithm 15.4 SUBSETSUM
Input: a set of items U = {u1, …, un} with sizes s1, …, sn and a knapsack capacity C.
Output: the maximum value of Σ_{ui∈S} si subject to Σ_{ui∈S} si ≤ C, for some subset of items S ⊆ U.
1. for i ← 0 to n
2.     T[i,0] ← 0
3. end for
4. for j ← 0 to C
5.     T[0,j] ← 0
6. end for
7. for i ← 1 to n
8.     for j ← 1 to C
9.         T[i,j] ← T[i−1,j]
10.        if si ≤ j then
11.            x ← T[i−1, j−si] + si
12.            if x > T[i,j] then T[i,j] ← x
13.        end if
14.    end for
15. end for
16. return T[n,C]
Time: Θ(nC).
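A compact Python equivalent of Algorithm 15.4, using a 1-D table T[j] updated in decreasing j (so each item is used at most once); it computes the same values as the 2-D table above.

def subset_sum(sizes, C):
    # T[j] = best achievable sum <= j using the items seen so far.
    T = [0] * (C + 1)
    for s in sizes:
        for j in range(C, s - 1, -1):    # reverse order: 0/1, not unbounded
            T[j] = max(T[j], T[j - s] + s)
    return T[C]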

79 Approximation algorithm A
Let ε = 1/k for some positive integer k, and let K = C/[2(k+1)n]. First we set C' = ⌊C/K⌋ and sj' = ⌊sj/K⌋ to obtain a new instance I'. Next we apply Algorithm SUBSETSUM to I'. The running time is now reduced to Θ(nC/K) = Θ(kn²) = Θ(n²/ε).
OPT(I) − K·OPT(I') ≤ Kn, so A(I) ≥ OPT(I) − C/(2(k+1)), and hence R_A ≤ 1 + 1/k.

