Polynomial time approximation scheme


Polynomial Time Approximation Scheme (PTAS)

We have seen the definition of a constant-factor approximation algorithm. The following is something even better. An algorithm A is an approximation scheme if, for every ε > 0, A runs in polynomial time (which may depend on ε) and returns a solution with:

- SOL ≤ (1 + ε)·OPT for a minimization problem
- SOL ≥ (1 − ε)·OPT for a maximization problem

For example, A may run in time n^(100/ε). There is a time-accuracy tradeoff: the smaller ε is, the better the solution but the longer the running time.

Knapsack Problem

We are given a set of items, each with its own size and value, and a single knapsack. Goal: pick a subset of items that fits into the knapsack and maximizes the total value of the chosen subset.

Knapsack Problem (formal statement)

Given a set S = {a1, …, an} of objects with specified sizes and profits, size(ai) and profit(ai), and a knapsack capacity B, find a subset of objects whose total size is at most B and whose total profit is maximized. Assume size(ai), profit(ai), and B are all integers. We will design an approximation scheme for the knapsack problem.
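A tiny concrete instance can make the definitions easier to follow; the item names and numbers below are made up purely for illustration:

```python
# A small, made-up knapsack instance (sizes, profits, and capacity are integers).
items = [
    {"name": "a1", "size": 2, "profit": 3},
    {"name": "a2", "size": 3, "profit": 4},
    {"name": "a3", "size": 4, "profit": 5},
    {"name": "a4", "size": 5, "profit": 8},
]
B = 9  # knapsack capacity

# {a1, a2, a3} fits exactly (total size 9) with profit 12,
# but the optimal subset is {a3, a4}: total size 9, total profit 13.
```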

Greedy Methods

General greedy method: sort the objects by some rule, then put the objects into the knapsack in that order as long as they fit. Natural orderings include:

- by object size, in non-decreasing order;
- by profit, in non-increasing order;
- by profit/size ratio, in non-increasing order.

None of these greedy rules gives a good approximation guarantee in general: greedy won't work.
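As a minimal sketch (the helper function is my own, not from the slides), here is the profit/size greedy rule together with an instance on which it performs badly:

```python
def greedy_by_density(items, B):
    """Greedy heuristic: take items in non-increasing profit/size order
    while they still fit. This does NOT guarantee near-optimal solutions."""
    chosen, used = [], 0
    for it in sorted(items, key=lambda it: it["profit"] / it["size"], reverse=True):
        if used + it["size"] <= B:
            chosen.append(it)
            used += it["size"]
    return chosen

# Bad case: a tiny item with the best density gets picked first and then the
# single large, valuable item no longer fits.
bad = [{"name": "small", "size": 1, "profit": 2},
       {"name": "big", "size": 100, "profit": 100}]
# With B = 100, greedy takes "small" (density 2) and then "big" does not fit,
# giving profit 2, while the optimum is "big" alone with profit 100.
```

By scaling up the big item's size and profit, the gap between this greedy rule and the optimum can be made arbitrarily large, which is why the slides turn to dynamic programming instead.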

Dynamic Programming for Knapsack

Suppose we have considered objects 1 through i. We want to remember which profits are achievable and, for each achievable profit, the smallest total size that achieves it. Let S(i,p) denote a subset of {a1, …, ai} whose total profit is exactly p and whose total size is minimized, and let A(i,p) denote the total size of S(i,p) (with A(i,p) = ∞ if no such set exists). For example, A(1,p) = size(a1) if p = profit(a1), and A(1,p) = ∞ otherwise.

Recurrence Formula

Recall: A(i,p) denotes the minimum size needed to achieve profit exactly p using objects 1 to i. How do we compute A(i+1,p) if we know A(i,q) for all q? Idea: we either choose object i+1 or we do not.

- If we do not choose object i+1: A(i+1,p) = A(i,p).
- If we choose object i+1: A(i+1,p) = size(ai+1) + A(i, p − profit(ai+1)), provided p ≥ profit(ai+1).

A(i+1,p) is the minimum of these two values.
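The recurrence translates directly into code. Below is a minimal sketch in Python (the function name knapsack_dp and the item dictionaries follow the hypothetical toy instance above); it keeps only one row of the A(i,p) table and updates it in place:

```python
import math

def knapsack_dp(items, B):
    """Exact knapsack via the profit-indexed DP: A[p] = minimum total size
    needed to achieve profit exactly p using the items considered so far."""
    max_profit = sum(it["profit"] for it in items)
    A = [math.inf] * (max_profit + 1)
    A[0] = 0
    for it in items:
        # Go from high profit to low so each item is used at most once.
        for p in range(max_profit, it["profit"] - 1, -1):
            if A[p - it["profit"]] + it["size"] < A[p]:
                A[p] = A[p - it["profit"]] + it["size"]
    # The answer is the largest profit achievable within capacity B.
    return max(p for p in range(max_profit + 1) if A[p] <= B)
```

Iterating p downwards collapses the two-dimensional table A(i,p) into a single row while still treating each object as either taken once or not at all. On the toy instance above, knapsack_dp(items, B) returns 13.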

Running Time

The input has 2n numbers; say each is at most P. So the input has total length about 2n·log P bits. For the dynamic programming algorithm, the table has n rows and at most nP columns, and each entry can be computed in constant time (by looking up two earlier entries). So the total time complexity is O(n²P). This running time is pseudo-polynomial: it is not polynomial in the input length when P is very large compared to n, since P takes only log P bits to write down.

Approximation Algorithm

We know that the knapsack problem is NP-complete. Can we use the dynamic programming technique to design an approximation algorithm?

Scaling Down

Idea: scale down the profits and compute the optimal solution of this modified instance. Suppose P ≥ 1000n; then OPT ≥ P ≥ 1000n. Now scale each profit down by a factor of 100 (profit* := profit/100, rounded down) and compute the optimal solution with these new profits. We can no longer distinguish between profits such as 2199 and 2100, so each element contributes an error of at most 100, and the total error is at most 100n. This is at most 1/10 of the optimal solution. In exchange, the largest profit shrinks by a factor of 100, so the dynamic program runs about 100 times faster.
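A quick numeric check of the slide's arithmetic (the figures are exactly the ones quoted above):

```python
# Rounding profits down to multiples of 100 merges nearby values:
print(2199 // 100, 2100 // 100)  # both become 21, i.e. profit 2100 after rescaling
# Each object loses at most 100 in profit, so the total loss is at most 100n.
# If P >= 1000n then OPT >= P >= 1000n, so the loss is at most OPT / 10.
```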

Approximation Scheme

Goal: find a solution whose profit is at least (1 − ε)·OPT, for any ε > 0.

Approximation scheme for Knapsack: given ε > 0, let K = εP/n, where P is the largest profit of an object. For each object ai, define profit*(ai) = ⌊profit(ai)/K⌋. With these as the profits of the objects, use the dynamic programming algorithm to find the most profitable set, say S′. Output S′ as the approximate solution.
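Combining the rounding step with the dynamic program gives the whole scheme. The sketch below assumes the same hypothetical item format as before and positive integer profits with 0 < ε; the function name knapsack_fptas is my own. It records the chosen index set in the table so the returned subset can be evaluated under the original profits:

```python
import math

def knapsack_fptas(items, B, eps):
    """Approximation scheme for knapsack: round profits down to multiples of
    K = eps*P/n, solve the rounded instance exactly with the profit-indexed DP,
    and return the chosen subset. Under the original profits its total profit
    is at least (1 - eps) times optimal."""
    n = len(items)
    P = max(it["profit"] for it in items)
    K = eps * P / n
    scaled = [int(it["profit"] // K) for it in items]  # profit*(ai) = floor(profit(ai)/K)

    max_sp = sum(scaled)
    # A[p] = (minimum total size, index set) achieving scaled profit exactly p.
    A = [(math.inf, frozenset())] * (max_sp + 1)
    A[0] = (0, frozenset())
    for i, it in enumerate(items):
        # Go from high scaled profit to low so each item is used at most once.
        for p in range(max_sp, scaled[i] - 1, -1):
            size, chosen = A[p - scaled[i]]
            if size + it["size"] < A[p][0]:
                A[p] = (size + it["size"], chosen | {i})
    best = max(p for p in range(max_sp + 1) if A[p][0] <= B)
    return [items[i] for i in A[best][1]]
```

On the toy instance above, knapsack_fptas(items, B, 0.5) returns the optimal subset {a3, a4}; shrinking ε only increases the running time on such a small input.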

Quality of Solution

Theorem. Let S denote the set returned by the algorithm. Then profit(S) ≥ (1 − ε)·OPT.

Proof. Let O denote the optimal set. For each object a, because of rounding down, K·profit*(a) can be smaller than profit(a), but by at most K. Since there are at most n objects in O, profit(O) − K·profit*(O) ≤ nK. Since the algorithm returns an optimal solution under the new profits,

profit(S) ≥ K·profit*(S) ≥ K·profit*(O) ≥ profit(O) − nK = OPT − εP ≥ (1 − ε)·OPT,

where the last inequality uses OPT ≥ P.

Running Time

For the dynamic programming algorithm on the rounded instance, the table has n rows and at most n·⌊P/K⌋ columns. Each entry can be computed in constant time (by looking up two earlier entries). So the total time complexity is O(n²·⌊P/K⌋) = O(n³/ε). Therefore, we have an approximation scheme for Knapsack; in fact, since the running time is polynomial in both n and 1/ε, it is a fully polynomial-time approximation scheme (FPTAS).

Approximation Scheme: Quick Summary

1. Modify the instance by rounding the numbers.
2. Use dynamic programming to compute an optimal solution S of the modified instance.
3. Output S as the approximate solution.

PTAS

Problems known to admit a polynomial-time approximation scheme include:

- Bin Packing (in the asymptotic sense)
- Minimum makespan scheduling
- Euclidean TSP
- Euclidean minimum Steiner tree

But most problems do not admit a PTAS unless P = NP.