Chapter 15 Approximation Algorithms
Introduction
Basic Definitions
Difference Bounds
Relative Performance Bounds
Polynomial Approximation Schemes
Fully Polynomial Approximation Schemes
15.1 Introduction There are many hard combinatorial optimization problems that cannot be solved efficiently using backtracking or randomization. For some of these problems, an alternative is to devise an approximation algorithm, provided we are content with a "reasonable" solution that approximates an optimal one. A marked characteristic of approximation algorithms is that they are fast, as they are mostly greedy heuristics.
15.2 Basic Definitions A combinatorial optimization problem Π is either a minimization problem or a maximization problem. It consists of three components: (1) A set D_Π of instances. (2) For each instance I ∈ D_Π, a finite set S_Π(I) of candidate solutions for I. (3) Associated with each solution σ ∈ S_Π(I) to an instance I in D_Π, a value f_Π(σ) called the solution value for σ.
15.2 Basic Definitions If Π is a minimization problem, then an optimal solution σ* for an instance I ∈ D_Π has the property that, for all σ ∈ S_Π(I), f_Π(σ*) ≤ f_Π(σ); for a maximization problem, f_Π(σ*) ≥ f_Π(σ). An approximation algorithm A for an optimization problem Π is a (polynomial-time) algorithm that, given an instance I ∈ D_Π, outputs some solution σ ∈ S_Π(I). We denote by A(I) the value f_Π(σ).
Example 15.1 Consider the problem BIN PACKING: given a collection of items of sizes between 0 and 1, it is required to pack these items into the minimum number of bins of unit capacity. Obviously, this is a minimization problem. The set of instances D_Π consists of all sets I = {s_1, s_2, …, s_n} such that for all j, 1 ≤ j ≤ n, s_j is between 0 and 1. The set of solutions S_Π consists of all collections σ = {B_1, B_2, …, B_k} that partition I into disjoint subsets such that for all j, 1 ≤ j ≤ k, the total size of the items in B_j is at most 1. Given a solution σ, its value f(σ) is simply |σ| = k. An optimal solution for this problem is a solution σ of least cardinality. Let A be the (trivial) algorithm that assigns one bin to each item. Then, by definition, A is an approximation algorithm. Clearly, it is not a good approximation algorithm.
15.3 Difference Bounds Perhaps the most we can hope for from an approximation algorithm is that the difference between the value of the optimal solution and the value of the solution obtained by the approximation algorithm is always constant. That is, for all instances I of the problem, the most desirable solution that can be obtained by an approximation algorithm A is such that |A(I) − OPT(I)| ≤ K, for some constant K. There are very few NP-hard optimization problems for which approximation algorithms with difference bounds are known.
15.3.1 Planar graph coloring Let G = (V, E) be a planar graph. By the Four Color Theorem, every planar graph is four-colorable. It is fairly easy to determine whether a graph is 2-colorable. Given an instance I of G, an approximation algorithm A may proceed as follows. Assume G is nontrivial, i.e., it has at least one edge. Determine whether the graph is 2-colorable. If it is, output 2; otherwise output 4. If G is 2-colorable, then |A(I) − OPT(I)| = 0. If it is not 2-colorable, then OPT(I) is 3 or 4, so |A(I) − OPT(I)| ≤ 1. Thus A is an approximation algorithm with a difference bound.
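A minimal sketch of this coloring heuristic (the function name approx_chromatic and the edge-list input format are our own choices, not from the text): 2-colorability is checked by a breadth-first 2-coloring, and 4 is returned otherwise, which is always sufficient by the Four Color Theorem.

```python
from collections import deque

def approx_chromatic(n, edges):
    """For a nontrivial planar graph on vertices 0..n-1: output 2 if G
    is bipartite (2-colorable), else 4 (valid by the Four Color Theorem)."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    color = [-1] * n
    for s in range(n):
        if color[s] != -1:
            continue
        color[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if color[w] == -1:
                    color[w] = 1 - color[u]
                    q.append(w)
                elif color[w] == color[u]:
                    return 4   # odd cycle found: G is not 2-colorable
    return 2
```

On a path (bipartite) the answer 2 is exact; on a triangle the answer 4 differs from the optimum 3 by exactly 1.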
15.3.2 Hardness result: the knapsack problem Knapsack problem: given n items {u_1, u_2, …, u_n} with integer sizes s_1, s_2, …, s_n and integer values v_1, v_2, …, v_n, and a knapsack capacity C that is a positive integer, the problem is to fill the knapsack with some of these items whose total size is at most C and whose total value is maximum. In other words, find a subset S ⊆ U such that Σ_{u_j ∈ S} s_j ≤ C and Σ_{u_j ∈ S} v_j is maximum.
15.3.2 Hardness result: the knapsack problem Suppose there is an approximation algorithm A that solves the knapsack problem with difference bound K. Given an instance I, we can use algorithm A to output an optimal solution as follows. Construct a new instance I' such that for all j, 1 ≤ j ≤ n, s'_j = s_j and v'_j = (K + 1)v_j. It is easy to see that any solution to I' is a solution to I and vice versa; the only change is that the value of every solution is multiplied by K + 1. Since A(I') and OPT(I') are both multiples of K + 1 and |A(I') − OPT(I')| ≤ K, we must have A(I') = OPT(I'). This means that A always gives the optimal solution, i.e., it solves the knapsack problem exactly.
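The scaling step can be checked numerically on a toy instance; here a brute-force optimizer stands in for the hypothetical algorithm A (the name knapsack_opt and the particular numbers are illustrative only):

```python
from itertools import combinations

def knapsack_opt(sizes, values, C):
    """Exact knapsack optimum by brute force (fine for tiny instances)."""
    best = 0
    for r in range(len(sizes) + 1):
        for S in combinations(range(len(sizes)), r):
            if sum(sizes[i] for i in S) <= C:
                best = max(best, sum(values[i] for i in S))
    return best

sizes, values, C, K = [3, 4, 5], [4, 5, 6], 7, 10
scaled_values = [(K + 1) * v for v in values]   # the instance I'
# every solution value is scaled by exactly K+1, so the optima correspond
assert knapsack_opt(sizes, scaled_values, C) == (K + 1) * knapsack_opt(sizes, values, C)
```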
15.4 Relative Performance Bounds Approximation ratio: let Π be a minimization problem and I an instance of Π. Let A be an approximation algorithm for Π. We define the approximation ratio R_A(I) to be R_A(I) = A(I) / OPT(I). If Π is a maximization problem, then we define R_A(I) to be R_A(I) = OPT(I) / A(I). In both cases, R_A(I) ≥ 1.
15.4 Relative Performance Bounds Absolute performance ratio: the absolute performance ratio R_A for the approximation algorithm A is defined by R_A = inf{r ≥ 1 | R_A(I) ≤ r for all instances I ∈ D_Π}. The asymptotic performance ratio R_A^∞ for the approximation algorithm A is defined by R_A^∞ = inf{r ≥ 1 | for some N ∈ ℕ, R_A(I) ≤ r for all instances I ∈ D_Π with OPT(I) ≥ N}.
15.4.1 The bin packing problem Problem: given a collection of items u_1, u_2, …, u_n of sizes s_1, s_2, …, s_n, where each s_j is between 0 and 1, we are required to pack these items into the minimum number of bins of unit capacity.
15.4.1 The bin packing problem Heuristics for the bin packing problem:
First Fit (FF). In this method, the bins are indexed as 1, 2, …. All bins are initially empty. The items are considered for packing in the order u_1, u_2, …, u_n. To pack item u_i, find the least index j such that the contents of bin j total at most 1 − s_i, and add item u_i to the items packed in bin j.
Best Fit (BF). This method is the same as the FF method except that when item u_i is to be packed, we look for the bin that is filled to level l ≤ 1 − s_i with l as large as possible.
15.4.1 The bin packing problem
First Fit Decreasing (FFD). In this method, the items are first sorted in decreasing order of size, and then packed using the FF method.
Best Fit Decreasing (BFD). In this method, the items are first sorted in decreasing order of size, and then packed using the BF method.
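A short sketch of FF and FFD (hypothetical function names; sizes are given as fractions of the unit bin capacity):

```python
def first_fit(sizes):
    """Pack items into unit-capacity bins with the First Fit rule:
    each item goes into the lowest-indexed bin that can hold it."""
    bins = []                       # bins[j] = total size packed in bin j
    for s in sizes:
        for j, load in enumerate(bins):
            if load + s <= 1.0:     # bin j contains at most 1 - s
                bins[j] = load + s
                break
        else:
            bins.append(s)          # no bin fits: open a new one
    return len(bins)

def first_fit_decreasing(sizes):
    """FFD: sort by decreasing size, then apply First Fit."""
    return first_fit(sorted(sizes, reverse=True))
```

For example, first_fit([0.5, 0.7, 0.5, 0.3]) packs into 2 bins, which here matches the optimum.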
Theorem 15.1 For all instances I of the BIN PACKING problem, FF(I) ≤ (17/10) OPT(I) + 2. That is, the asymptotic performance ratio of First Fit is at most 17/10.
Theorem 15.2 For all instances I of the BIN PACKING problem, FFD(I) ≤ (11/9) OPT(I) + 4. That is, the asymptotic performance ratio of First Fit Decreasing is at most 11/9.
15.4.2 The Euclidean traveling salesman problem Problem: given a set S of n points in the plane, find a tour on these points of shortest length. Here a tour is a circular path that visits every point exactly once. This problem is a special case of the traveling salesman problem and is commonly referred to as the EUCLIDEAN TRAVELING SALESMAN problem (ETSP).
15.4.2 The Euclidean traveling salesman problem Solution: Nearest Neighbor (NN). Let p_1 be an arbitrary starting point. An intuitive method would proceed in a greedy manner, visiting first the point closest to p_1, say p_2, then the point closest to p_2, and so on. This method is referred to as the nearest neighbor (NN) heuristic, and its performance ratio is R_NN(I) = NN(I)/OPT(I) = O(log n).
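A short sketch of the NN heuristic (function names and the point-list format are our own choices):

```python
from math import dist

def nn_tour(points):
    """Nearest-neighbor heuristic: start at points[0] and repeatedly
    move to the closest unvisited point; the tour implicitly closes
    back to the start."""
    unvisited = list(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: dist(last, points[i]))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

def tour_length(points, tour):
    """Length of the closed tour (includes the edge back to the start)."""
    return sum(dist(points[tour[i - 1]], points[tour[i]])
               for i in range(len(tour)))
```

On the four corners of the unit square the heuristic happens to find the optimal tour of length 4; on adversarial instances its ratio grows logarithmically with n.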
15.4.2 The Euclidean traveling salesman problem Solution: the minimum matching (MM) heuristic (see Algorithm 15.1). The performance ratio of this algorithm is at most 3/2, i.e., R_MM(I) = MM(I)/OPT(I) < 3/2.
Algorithm 15.1 ETSPAPPROX
Input: An instance I of EUCLIDEAN TRAVELING SALESMAN, i.e., a set S of points in the plane.
Output: A tour τ for instance I.
1. Find a minimum spanning tree T of S.
2. Identify the set X of vertices of odd degree in T.
3. Find a minimum weight matching M on X.
4. Find an Eulerian tour τ_e in the multigraph T ∪ M.
5. Traverse τ_e edge by edge and bypass each previously visited vertex. Let τ be the resulting tour.
15.4.3 The vertex cover problem Problem: a vertex cover C in a graph G = (V, E) is a set of vertices such that each edge in E is incident to at least one vertex in C. The problem of deciding whether a graph contains a vertex cover of size k, where k is a positive integer, is NP-complete.
15.4.3 The vertex cover problem Solution: repeat the following step until E becomes empty. Pick an edge e arbitrarily and add one of its endpoints, say v, to the vertex cover. Then delete e and all other edges incident to v. This is an approximation algorithm that outputs a vertex cover. However, it can be shown that the performance ratio of this algorithm is unbounded. Surprisingly, if when considering an edge e we add both of its endpoints to the vertex cover, the performance ratio becomes 2.
Algorithm 15.2 VCOVERAPPROX
Input: An undirected graph G = (V, E).
Output: A vertex cover C for G.
1. C ← {}
2. while E ≠ {}
3.   Let e = (u, v) be any edge in E.
4.   C ← C ∪ {u, v}
5.   Remove e and all edges incident to u or v from E.
6. end while
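Algorithm 15.2 in executable form (a minimal sketch; the edge-list representation and the function name are our own):

```python
def vcover_approx(edges):
    """2-approximate vertex cover: repeatedly take any remaining edge,
    put BOTH endpoints in the cover, and discard every edge they cover."""
    cover = set()
    remaining = list(edges)
    while remaining:
        u, v = remaining[0]          # any edge works; take the first
        cover.update((u, v))
        remaining = [e for e in remaining if u not in e and v not in e]
    return cover
```

The ratio 2 follows because the chosen edges are pairwise disjoint, so any cover, including an optimal one, must contain at least one endpoint of each.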
15.4.4 Hardness result: the traveling salesman problem In the last sections, we presented approximation algorithms with reasonable performance ratios. It turns out, however, that there are many problems that do not admit bounded performance ratios. For example, the problems COLORING, CLIQUE, INDEPENDENT SET, and the general TRAVELING SALESMAN problem have no known approximation algorithms with bounded ratios.
Theorem 15.3 There is no approximation algorithm A for the problem TRAVELING SALESMAN with R_A < ∞ unless NP = P.
15.5 Polynomial Approximation Schemes So far we have seen that for some NP-hard problems there exist approximation algorithms with bounded approximation ratio, while for others it is impossible to devise an approximation algorithm with bounded ratio unless NP = P. At the other extreme, there are problems for which there exists a series of approximation algorithms whose performance ratio converges to 1. Examples include the problems KNAPSACK, SUBSET-SUM, and MULTIPROCESSOR SCHEDULING.
Definition 15.1 An approximation scheme for an optimization problem is a family of algorithms {A_ε | ε > 0} such that R_{A_ε} ≤ 1 + ε.
Definition 15.2 A polynomial approximation scheme (PAS) is an approximation scheme {A_ε}, where each algorithm A_ε runs in time that is polynomial in the length of the input instance I.
15.5.1 The knapsack problem Problem: let U = {u_1, u_2, …, u_n} be a set of items to be packed in a knapsack of capacity C. For 1 ≤ j ≤ n, let s_j and v_j be the size and value of the jth item, respectively. The objective is to fill the knapsack with some items in U whose total size is at most C and whose total value is maximum.
15.5.1 The knapsack problem Solution: consider the greedy algorithm that first orders the items by decreasing value-to-size ratio (v_j / s_j) and then considers the items one by one for packing. If the current item fits in the available space, it is included; otherwise the next item is considered. The procedure terminates as soon as all items have been considered or no more items can be included in the knapsack. The performance ratio of this greedy algorithm is unbounded. A simple modification of the above algorithm results in a performance ratio of 2.
Algorithm 15.3 KNAPSACKGREEDY
Input: 2n + 1 positive integers corresponding to item sizes {s_1, s_2, …, s_n}, item values {v_1, v_2, …, v_n}, and the knapsack capacity C.
Output: A subset Z of the items whose total size is at most C.
1. Renumber the items so that v_1/s_1 ≥ v_2/s_2 ≥ … ≥ v_n/s_n
2. j ← 0; K ← 0; V ← 0; Z ← {}
3. while j < n and K < C
4.   j ← j + 1
5.   if s_j ≤ C − K then
6.     Z ← Z ∪ {u_j}
7.     K ← K + s_j
8.     V ← V + v_j
9.   end if
10. end while
11. Let Z' = {u_s}, where u_s is an item of maximum value.
12. if V ≥ v_s then return Z
13. else return Z'
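A sketch of Algorithm 15.3 in Python (the function name is our own; it returns item indices and assumes at least one item fits in the knapsack):

```python
def knapsack_greedy(sizes, values, C):
    """Greedy by value/size ratio, then return the better of the greedy
    packing and the single most valuable item that fits (ratio 2)."""
    n = len(sizes)
    order = sorted(range(n), key=lambda j: values[j] / sizes[j], reverse=True)
    Z, K, V = [], 0, 0
    for j in order:
        if sizes[j] <= C - K:       # item j fits in the remaining space
            Z.append(j)
            K += sizes[j]
            V += values[j]
    # the single most valuable item that fits on its own
    s = max((j for j in range(n) if sizes[j] <= C), key=lambda j: values[j])
    return Z if V >= values[s] else [s]
```

On the classic bad case for the unmodified greedy, sizes [1, 100], values [2, 100], C = 100, the density ordering packs only the tiny item, but step 11 rescues the solution by returning the single item of value 100.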
Theorem 15.4 Let ε = 1/k, for some k ≥ 1. Then the running time of Algorithm A_ε is O(k n^(k+1)), and its performance ratio is 1 + ε.
15.6 Fully Polynomial Approximation Schemes The polynomial approximation scheme described in Sec. 15.5 runs in time that is exponential in 1/ε, the reciprocal of the desired error bound. In this section, we demonstrate an approximation scheme in which the approximation algorithm runs in time that is also polynomial in 1/ε.
Definition 15.3 A fully polynomial approximation scheme (FPAS) is an approximation scheme {A_ε}, where each algorithm A_ε runs in time that is polynomial in both the length of the input instance and 1/ε.
Definition 15.4 A pseudopolynomial time algorithm is an algorithm that runs in time that is polynomial in both the length of the input instance and the value of L, where L is the largest number in the instance.
15.6.1 The subset-sum problem The subset-sum problem is a special case of the knapsack problem in which the item values are identical to their sizes. Given n items of sizes s_1, s_2, …, s_n and a positive integer C, the knapsack capacity, the objective is to find a subset of the items that maximizes the total sum of their sizes without exceeding the knapsack capacity C. A dynamic programming solution is shown below as Algorithm SUBSETSUM.
Algorithm 15.4 SUBSETSUM
Input: A set of items U = {u_1, u_2, …, u_n} with sizes s_1, s_2, …, s_n and a knapsack capacity C.
Output: The maximum value of Σ_{u_i ∈ S} s_i subject to Σ_{u_i ∈ S} s_i ≤ C for some subset S ⊆ U.
1. for i ← 0 to n
2.   T[i, 0] ← 0
3. end for
4. for j ← 0 to C
5.   T[0, j] ← 0
6. end for
7. for i ← 1 to n
8.   for j ← 1 to C
9.     T[i, j] ← T[i−1, j]
10.    if s_i ≤ j then
11.      x ← T[i−1, j − s_i] + s_i
12.      if x > T[i, j] then T[i, j] ← x
13.    end if
14.  end for
15. end for
16. return T[n, C]
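The table-filling of Algorithm 15.4 translates directly to Python (hypothetical function name; T[i][j] is the largest sum achievable with items 1..i that does not exceed j):

```python
def subset_sum(sizes, C):
    """Dynamic program of Algorithm 15.4; runs in O(nC) time and space."""
    n = len(sizes)
    T = [[0] * (C + 1) for _ in range(n + 1)]   # row 0 / column 0 stay 0
    for i in range(1, n + 1):
        s = sizes[i - 1]
        for j in range(1, C + 1):
            T[i][j] = T[i - 1][j]               # skip item i
            if s <= j:
                x = T[i - 1][j - s] + s         # take item i
                if x > T[i][j]:
                    T[i][j] = x
    return T[n][C]
```

For example, subset_sum([2, 3, 7], 11) returns 10 (= 3 + 7), the largest achievable sum not exceeding 11. The O(nC) bound is pseudopolynomial: C can be exponential in the length of its binary encoding.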
15.6.1 The subset-sum problem Algorithm SUBSETSUM runs in time Θ(nC), which is polynomial in the value of C but not in the length of the input, i.e., it is a pseudopolynomial time algorithm. We now develop from it an approximation algorithm A_ε, where ε = 1/k for some positive integer k. The algorithm is such that, for any instance I, OPT(I)/A_ε(I) ≤ 1 + ε.
15.6.1 The subset-sum problem First, we set K = C/(2kn) and scale the sizes to obtain a new instance I' with s'_j = ⌊s_j/K⌋ and capacity ⌊C/K⌋. Next, we apply Algorithm SUBSETSUM to I'. The running time is now reduced to Θ(nC/K) = Θ(kn²). Now we estimate the error in the approximate solution. Scaling back, each item loses less than K of its size, so OPT(I) − A_ε(I) ≤ nK = C/(2k). Assuming OPT(I) ≥ C/2 (if the total size of all items is at most C, taking all of them is optimal), this loss is at most OPT(I)/k = ε·OPT(I). Thus, the algorithm's performance ratio is 1 + ε, and its running time is Θ(kn²) = Θ(n²/ε).
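A different but standard FPAS for SUBSET-SUM keeps a trimmed list of achievable sums instead of scaling the sizes (the list-trimming approach of Cormen et al.); we sketch it here because it is easy to make self-contained. The function name and the sample instance are our own, not from the text.

```python
def approx_subset_sum(sizes, C, eps):
    """Trimmed-list FPAS for SUBSET-SUM: maintain the sorted list of
    achievable sums <= C, dropping any sum within a factor (1 + delta)
    of the previously kept one.  Returns a value >= OPT/(1 + eps)."""
    delta = eps / (2 * len(sizes))
    L = [0]
    for s in sizes:
        # merge old sums with old sums shifted by s, capped at C
        merged = sorted(set(L + [x + s for x in L if x + s <= C]))
        trimmed = [merged[0]]
        for y in merged[1:]:
            if y > trimmed[-1] * (1 + delta):
                trimmed.append(y)
        L = trimmed                 # list length stays polynomial in n/eps
    return max(L)
```

On the instance sizes = [104, 102, 201, 101] with C = 308 the exact optimum is 307 (= 104 + 102 + 101), and the returned value is guaranteed to be at least 307/(1 + ε); the trimming keeps the list length, and hence the running time, polynomial in both n and 1/ε.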