Polynomial time approximation scheme Lecture 17: Mar 13.


Polynomial Time Approximation Scheme (PTAS) We have seen the definition of a constant factor approximation algorithm. The following is something even better. An algorithm A is an approximation scheme if for every є > 0, A runs in polynomial time (which may depend on є) and returns a solution SOL with: SOL ≤ (1+є)OPT for a minimization problem; SOL ≥ (1-є)OPT for a maximization problem. For example, A may run in time n^(100/є). There is a time-accuracy tradeoff.

Knapsack Problem A set of items, each with its own size and value. We have only one knapsack. Goal: pick a subset that fits into the knapsack and maximizes the total value of the subset.

Knapsack Problem (The Knapsack Problem) Given a set S = {a1, …, an} of objects with specified sizes and profits, size(ai) and profit(ai), and a knapsack capacity B, find a subset of objects whose total size is bounded by B and whose total profit is maximized. Assume size(ai), profit(ai), and B are all integers. We will design an approximation scheme for the knapsack problem.

Greedy Methods General greedy method: sort the objects by some rule, and then put the objects into the knapsack in this order. Natural rules: sort by object size in non-decreasing order; sort by profit in non-increasing order; sort by profit/size ratio in non-increasing order. Greedy won't work.
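To see why greedy won't work, here is a small hypothetical instance (not from the slides) on which even the profit/size rule is suboptimal, sketched in Python:

```python
# Hypothetical instance: knapsack capacity B = 4, items given as (size, profit).
items = [(3, 5), (2, 3), (2, 3)]

def greedy_by_density(items, capacity):
    """Pack items in non-increasing order of profit/size, skipping items
    that don't fit."""
    total_size = total_profit = 0
    for size, profit in sorted(items, key=lambda it: it[1] / it[0], reverse=True):
        if total_size + size <= capacity:
            total_size += size
            total_profit += profit
    return total_profit

print(greedy_by_density(items, 4))  # 5: greedy takes (3,5), then nothing fits
# The optimum is 6, taking the two (2,3) items.
```

Replacing (3,5) by an item of size B and profit B (and keeping one tiny high-density item) shows the greedy ratio can be made arbitrarily bad.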

Exhaustive Search n objects, 2^n possibilities in total; view it as a search tree. At each level we branch on one object: choose it or don't choose it. At the bottom we can calculate the total size and total profit of each subset, and choose the optimal subset.

Exhaustive Search size(a1)=2, profit(a1)=4; size(a2)=3, profit(a2)=5; size(a3)=2, profit(a3)=3; size(a4)=1, profit(a4)=2. Each node of the search tree records the pair (total size, total profit) of the objects chosen so far: the root is (0,0), choosing object 1 gives (2,4), then choosing object 2 gives (5,9), and so on. There are many redundancies: pairs such as (2,4), (3,5), (4,7) and (5,9) each appear at several different nodes of the tree.

The Idea Consider two subproblems P and Q at the same level (i.e. the same number of objects have been considered). If size(P)=size(Q) and profit(P)=profit(Q), compute just one of them. If size(P)=size(Q) and profit(P)>profit(Q), compute only P. If profit(P)=profit(Q) and size(P)>size(Q), compute only Q. Important: the history doesn't matter (i.e. which subset we chose to achieve profit(P) and size(P)).

Dynamic Programming Dynamic programming is just exhaustive search with a polynomial number of subproblems. We only need to compute each subproblem once, and each subproblem is looked up at most a polynomial number of times, so the total running time is at most a polynomial.

Dynamic Programming for Knapsack Suppose we have considered objects 1 to i. We want to remember which profits are achievable, and for each achievable profit we want to minimize the size. Let S(i,p) denote a subset of {a1,…,ai} whose total profit is exactly p and whose total size is minimized, and let A(i,p) denote the size of the set S(i,p) (A(i,p) = ∞ if no such set exists). For example, A(1,p) = size(a1) if p = profit(a1), and otherwise A(1,p) = ∞.

Recurrence Formula How do we compute A(i+1,p) if we know A(i,q) for all q? Idea: we either choose object i+1 or not. If we do not choose object i+1, then A(i+1,p) = A(i,p). (Recall: A(i,p) denotes the minimum size needed to achieve profit p using objects 1 to i.) If we choose object i+1, then A(i+1,p) = size(a_{i+1}) + A(i, p - profit(a_{i+1})), provided p ≥ profit(a_{i+1}). A(i+1,p) is the minimum of these two values.
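The recurrence can be sketched directly in Python. This version uses a one-dimensional table indexed by profit only (a standard space-saving variant of the two-dimensional A(i,p) above), iterating p downward so each object is used at most once; the names are illustrative, not from the lecture.

```python
from math import inf

def knapsack_min_size(sizes, profits, B):
    """Exact DP for knapsack: A[p] = minimum total size achieving profit
    exactly p. Returns max{ p | A[p] <= B }. Runs in O(n^2 * P) time,
    where P is the largest profit."""
    n, P = len(sizes), max(profits)
    A = [inf] * (n * P + 1)
    A[0] = 0  # the empty set achieves profit 0 with size 0
    for i in range(n):
        # Iterate p downward so object i is used at most once.
        for p in range(n * P, profits[i] - 1, -1):
            A[p] = min(A[p], sizes[i] + A[p - profits[i]])
    return max(p for p in range(n * P + 1) if A[p] <= B)

# The example instance from the slides, with B = 8:
print(knapsack_min_size([2, 3, 2, 1], [4, 5, 3, 2], 8))  # 14
```

The two-dimensional table A(i,p) of the slides is recovered by taking a snapshot of A after processing object i.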

An Example size(a1)=2, profit(a1)=4; size(a2)=3, profit(a2)=5; size(a3)=2, profit(a3)=3; size(a4)=1, profit(a4)=2. Recall: A(i,p) denotes the minimum size needed to achieve profit exactly p using objects 1 to i, and A(i+1,p) = min{A(i,p), size(a_{i+1}) + A(i, p - profit(a_{i+1}))}. So A(2,p) = min{A(1,p), A(1,p-5)+3}, A(3,p) = min{A(2,p), A(2,p-3)+2}, and A(4,p) = min{A(3,p), A(3,p-2)+1}, giving the table:

p       0  1  2  3  4  5  6  7  8  9 10 11 12 13 14
A(1,p)  0  ∞  ∞  ∞  2  ∞  ∞  ∞  ∞  ∞  ∞  ∞  ∞  ∞  ∞
A(2,p)  0  ∞  ∞  ∞  2  3  ∞  ∞  ∞  5  ∞  ∞  ∞  ∞  ∞
A(3,p)  0  ∞  ∞  2  2  3  ∞  4  5  5  ∞  ∞  7  ∞  ∞
A(4,p)  0  ∞  1  2  2  3  3  4  5  5  6  6  7  ∞  8

Optimal solution: max{ p | A(n,p) ≤ B }, where B is the size of the knapsack. For example, if B=8 then OPT=14; if B=7 then OPT=12; if B=6 then OPT=11.

Running Time For the dynamic programming algorithm, there are n rows and at most nP columns (profits range from 0 to nP). Each entry can be computed in constant time (looking up two entries), so the total time complexity is O(n²P). But the input consists of 2n numbers, each at most P, so the input has total length about 2n·log(P). The running time is therefore not polynomial in the input size if P is very large compared to n.

Approximation Algorithm We know that the knapsack problem is NP-complete. Can we use the dynamic programming technique to design an approximation algorithm?

Scaling Down Idea: scale down the numbers and compute the optimal solution of the modified instance. Suppose P ≥ 1000n. Then OPT ≥ 1000n. Now scale down each profit by a factor of 100 (profit* := ⌊profit/100⌋) and compute the optimal solution using these new profits. We can no longer distinguish elements whose profits differ by less than 100 (say, a profit of 2199 from one of 2100: both become 21). So each element contributes an error of at most 100, and the total error is at most 100n. This is at most 1/10 of the optimal solution, but the running time is 100 times faster.

Approximation Scheme Goal: find a solution of value at least (1-є)OPT for any є > 0. Approximation Scheme for Knapsack:
1. Given є > 0, let K = єP/n, where P is the largest profit of an object.
2. For each object ai, define profit*(ai) = ⌊profit(ai)/K⌋.
3. With these as the profits of the objects, use the dynamic programming algorithm to find the most profitable set, say S'.
4. Output S' as the approximate solution.
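The whole scheme can be sketched in Python (function and variable names are my own): scale the profits by K = єP/n, round down, and run the minimum-size dynamic program on the scaled instance, remembering which objects realize each entry.

```python
from math import inf

def knapsack_fptas(sizes, profits, B, eps):
    """Approximation scheme sketch: round profits down to multiples of
    K = eps*P/n, then solve the rounded instance exactly by DP."""
    n, P = len(sizes), max(profits)
    K = eps * P / n
    scaled = [int(p // K) for p in profits]  # profit*(a_i) = floor(profit(a_i)/K)
    top = sum(scaled)
    # A[p] = (minimum size achieving scaled profit exactly p, chosen indices)
    A = [(inf, ())] * (top + 1)
    A[0] = (0, ())
    for i in range(n):
        for p in range(top, scaled[i] - 1, -1):  # downward: use each object once
            prev_size, prev_set = A[p - scaled[i]]
            if prev_size + sizes[i] < A[p][0]:
                A[p] = (prev_size + sizes[i], prev_set + (i,))
    best = max(p for p in range(top + 1) if A[p][0] <= B)
    return sorted(A[best][1])

# Slide instance with B = 8 and eps = 0.5: all four objects fit.
print(knapsack_fptas([2, 3, 2, 1], [4, 5, 3, 2], 8, 0.5))  # [0, 1, 2, 3]
```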

Quality of Solution Theorem. Let S denote the set returned by the algorithm. Then profit(S) ≥ (1-є)OPT. Proof. Let O denote the optimal set. For each object a, because of rounding down, K·profit*(a) can be smaller than profit(a), but by not more than K. Since there are at most n objects in O, profit(O) - K·profit*(O) ≤ nK. Since the algorithm returns an optimal solution under the new profits, profit(S) ≥ K·profit*(S) ≥ K·profit*(O) ≥ profit(O) - nK = OPT - єP ≥ (1-є)OPT, because OPT ≥ P.

Running Time For the dynamic programming algorithm, there are n rows and at most n⌈P/K⌉ columns. Each entry can be computed in constant time (looking up two entries), so the total time complexity is O(n²·P/K) = O(n³/є). Therefore we have an approximation scheme for Knapsack, and in fact a fully polynomial one, since the running time is polynomial in both n and 1/є.

Approximation Scheme Quick Summary
1. Modify the instance by rounding the numbers.
2. Use dynamic programming to compute an optimal solution S of the modified instance.
3. Output S as the approximate solution.

Bin Packing (Bin Packing) Given n items with sizes a1,…,an ∈ (0,1], find a packing into unit-sized bins that minimizes the number of bins used (e.g. paper cutting). A greedy algorithm gives a 2-approximation. Theorem. For any 0 < є ≤ 1/2, there is a polynomial time algorithm which finds a packing using at most (1+2є)OPT + 1 bins.
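The slide does not name the greedy rule; one standard choice achieving the 2-approximation is First-Fit, sketched here in Python:

```python
def first_fit(sizes):
    """First-Fit: place each item into the first open bin with enough room,
    opening a new unit-sized bin when none fits. At most one bin can be
    less than half full (otherwise its items would have fit in an earlier
    such bin), so the number of bins used is at most 2*OPT."""
    bins = []  # remaining capacity of each open bin
    for s in sizes:
        for j, cap in enumerate(bins):
            if s <= cap:
                bins[j] = cap - s
                break
        else:
            bins.append(1.0 - s)  # open a new bin
    return len(bins)

print(first_fit([0.6, 0.6, 0.3, 0.3]))  # 2
```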

Exhaustive Search How do we solve the problem optimally? Try all possible bin configurations, and then all possible combinations of bins. Suppose each bin can pack at most M items, and there are only K different item sizes. What is the running time of this algorithm?

Counting Doughnut Selections There are five kinds of doughnuts: chocolate, lemon, sugar, glazed and plain. How many different ways are there to select a dozen doughnuts? A ::= all selections of a dozen doughnuts. Hint: define a bijection!

Counting Doughnut Selections A ::= all selections of a dozen doughnuts. B ::= all 16-bit binary strings with exactly four 1's. Define a bijection between A and B: each doughnut is represented by a 0, and four 1's are used to separate the five types of doughnuts (chocolate, lemon, sugar, glazed, plain).

Counting Doughnut Selections A selection of c chocolate, l lemon, s sugar, g glazed and p plain doughnuts maps to the string 0^c 1 0^l 1 0^s 1 0^g 1 0^p. A ::= all selections of a dozen doughnuts; B ::= all 16-bit binary strings with exactly four 1's. This map is a bijection, so |A| = |B|.
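The bijection can be checked by brute force; a small Python sketch counting both sides:

```python
from itertools import combinations_with_replacement
from math import comb

# Count selections of a dozen doughnuts from five kinds directly,
# and via the bijection with 16-bit strings having exactly four 1's.
direct = sum(1 for _ in combinations_with_replacement(range(5), 12))
via_bijection = comb(16, 4)
print(direct, via_bijection)  # 1820 1820
```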

Exhaustive Search Suppose each bin can pack at most M items, and there are only K different item sizes. What is the running time of this algorithm? By the doughnut-style counting, there are at most (M+K choose M) bin configurations (multisets of at most M items drawn from K sizes), and, writing R for this number of configurations, at most (n+R choose R) combinations of bins. This is polynomial time if M and K are constants!
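Under the multiset-counting bounds stated above (my reconstruction of the formulas, which did not survive transcription), the search-space sizes can be checked for small parameters:

```python
from math import comb

# With at most M items per bin and K distinct sizes, there are at most
# comb(M + K, M) single-bin configurations; with R configurations and at
# most n bins, at most comb(n + R, R) packings to enumerate.
M, K, n = 4, 4, 100
R = comb(M + K, M)
print(R)  # 70 bin configurations
# For fixed M and K, R is a constant, so comb(n + R, R) <= (n + R)**R
# is polynomial in n, though with a very large constant degree.
print(comb(n + R, R) <= (n + R) ** R)  # True
```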

Reduction 1 How do we make sure that each bin uses at most M items? Throw away all small items, of size at most є. Suppose there is a (1+є)-approximation algorithm when there are no small items. Then pack all the small items into the remaining space, opening new bins only if necessary. If m bins are used in total, then OPT ≥ (m-1)(1-є): every bin except possibly the last is filled to more than 1-є, since otherwise a small item would have fit. It follows that we finish the packing with at most (1+2є)OPT + 1 bins.

Reduction 2 How do we make sure that there are at most K distinct item sizes? Round the item sizes! Rounding up maintains feasibility but may use more bins; rounding down will not use more bins but may not be feasible.

Reduction 2 Sort the items in non-decreasing order of size and divide them into K groups, each of size n/K. Round every item up to the largest size in its group ("round up"), or down to the smallest ("round down"). Rounding up maintains feasibility but may use more bins; rounding down will not use more bins but may not be feasible. To prove that the "round up" solution is not much worse than OPT, compare it to the "round down" solution.

Reduction 2 Suppose there is a feasible "round down" solution. We can construct an almost feasible "round up" solution in which only the last n/K items are not packed: every rounded-up item in group i is no larger than every rounded-down item in group i+1, so rounded-up groups 1,…,K-1 fit into the space the "round down" solution used for groups 2,…,K.

Reduction 2 In the worst case we therefore use n/K bins more than the optimal (one extra bin per unpacked item). Set K = 1/є². Then n/K = nє² ≤ єOPT, since each of the n remaining items has size greater than є and hence OPT ≥ nє.

Algorithm
1. Remove small items of size at most є (Reduction 1).
2. Round up to obtain a constant number (1/є²) of item sizes (Reduction 2).
3. Find an optimal packing by exhaustive search (counting configurations as with the doughnuts).
4. Use this packing for the original item sizes.
5. Pack the small items back greedily (Reduction 1).

PTAS A PTAS also exists for minimum makespan scheduling, Euclidean TSP, and Euclidean minimum Steiner tree. But most problems do not admit a PTAS unless P=NP. Project proposal: due March 20. Sign up for a meeting.