1 Approximation Algorithms – Algorithms and Networks 2015/2016 – Hans L. Bodlaender, Johan M. M. van Rooij

2 What to do if a problem is NP-complete?
 • We have already seen many options to deal with NP-complete problems.
   – FPT algorithms, special cases, exact exponential-time algorithms, heuristics.
   – In other courses: local search, ILP, constraint programming, …
 • Approximation algorithms are one of these options.
   – An approximation algorithm is a heuristic with a performance guarantee.
 • We consider polynomial-time approximation algorithms.
   – Non-optimal solutions, but with a performance guarantee compared to the optimal solution.
 • Also useful as a starting point for other approaches:
   – Local search, branch and bound.

3 Different forms of approximation algorithms (outline of lecture)
Qualities of polynomial-time approximation algorithms:
 1. Absolute constant difference.
    – |OPT – ALG| ≤ c
 2. APX: constant-factor approximation.
    – Approximation ratio ALG/OPT ≤ c for minimisation problems.
    – Approximation ratio OPT/ALG ≤ c for maximisation problems.
 3. f(n)-APX: approximation by a factor of f(n).
    – f(n) depends only on the size of the input.
 4. PTAS: polynomial-time approximation scheme.
    – Approximation ratio 1+ε for any ε > 0, while the algorithm runs in polynomial time for any fixed ε.
 5. FPTAS: fully polynomial-time approximation scheme.
    – Approximation ratio 1+ε for any ε > 0, while the algorithm runs in time polynomial in n and 1/ε.

4 Absolute constant difference
 • Algorithms that run in worst-case polynomial time, with |OPT – ALG| ≤ c.
 • Example: planar graph colouring.
   – Algorithm that tests 1- and 2-colourability and, if no solution is found, always outputs 4.
   – The algorithm has an error of at most 1: every planar graph is 4-colourable, while 3-colourability is NP-hard to decide.
 • Such examples are rare. It is not so difficult to show:
   – TSP cannot be approximated by a constant factor unless P = NP.
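A minimal Python sketch of the planar-colouring idea above (not part of the original slides; function and variable names are illustrative). It assumes the input graph is planar: 1-colourability is trivial, 2-colourability is bipartiteness testing via BFS, and otherwise the answer 4 is always valid by the Four Colour Theorem, so the output is within 1 of the chromatic number.

```python
from collections import deque

def planar_colouring_estimate(n, edges):
    """Return a number of colours within 1 of the chromatic number of a
    planar graph on vertices 0..n-1: 1 and 2 are decided exactly,
    otherwise the answer is 4 (always sufficient for planar graphs)."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    if not edges:
        return 1 if n > 0 else 0  # no edges: one colour suffices

    # 2-colourable iff bipartite: try to 2-colour with BFS.
    colour = [-1] * n
    for s in range(n):
        if colour[s] != -1:
            continue
        colour[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if colour[v] == -1:
                    colour[v] = 1 - colour[u]
                    queue.append(v)
                elif colour[v] == colour[u]:
                    return 4  # odd cycle: not 2-colourable, answer 4
    return 2

# Example: a triangle needs 3 colours; the algorithm answers 4 (error 1).
print(planar_colouring_estimate(3, [(0, 1), (1, 2), (0, 2)]))
```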

5 Constant-factor approximation
 • Approximation ratio:
   – ALG/OPT ≤ c for minimisation problems.
   – OPT/ALG ≤ c for maximisation problems.
   – The ratio is always at least 1.
 • APX is the class of problems with a constant-factor approximation algorithm.
   – Notions of APX-completeness also exist.
 • Examples of constant-factor approximation algorithms from earlier lectures:
   – 2-approximation: MST-based algorithm for metric TSP (TSP with the triangle inequality).
   – 1.5-approximation: Christofides' algorithm for metric TSP.

6 Approximation for vertex cover
Approximation algorithm for vertex cover:
 1. Let E' = E, C = ∅.
 2. While E' ≠ ∅:
    a. Let {u,v} be any edge from E'.
    b. C = C ∪ {u,v}.
    c. Remove every edge incident to u or v from E'.
 3. Return C.
 • Runs in polynomial time.
 • Returns a vertex cover.
 • How good is this vertex cover?
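A short Python sketch of this algorithm (not from the slides; names are illustrative). Scanning the edge list in any order and taking both endpoints of each still-uncovered edge is equivalent to the pseudocode above, since an edge already covered is exactly one that would have been removed from E'.

```python
def vertex_cover_2approx(edges):
    """Matching-based 2-approximation for (unweighted) vertex cover:
    repeatedly pick an uncovered edge and add both endpoints to the cover."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:  # edge not yet covered
            cover.add(u)
            cover.add(v)
    return cover

# Example: a path 0-1-2-3. The optimum is {1, 2}; the algorithm may
# return all four vertices, which is exactly a factor 2 worse.
print(vertex_cover_2approx([(0, 1), (1, 2), (2, 3)]))
```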

7 2-approximation for vertex cover
Theorem:
 • The algorithm on the previous slide is a 2-approximation.
Proof:
 • Let A be the set of edges whose endpoints we picked.
 • The edges in A are pairwise disjoint (they form a matching), so every vertex cover must contain a distinct vertex for each edge in A: OPT ≥ |A|.
 • ALG = 2|A| ≤ 2·OPT, hence ALG/OPT ≤ 2.

8 Approximation by a factor of f(n)
 • Approximation ratio of f(n).
   – The approximation ratio depends on the size of the input (and can be very bad).
Set Cover
 • Given: a finite set U and a family F of subsets of U.
 • Question: does there exist a subfamily C ⊆ F of size at most k such that ∪_{S∈C} S = U?
Greedy algorithm:
 • While U ≠ ∅:
   – Select an S ∈ F that maximises |S ∩ U|.
   – Add S to C and let U := U \ S.
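A possible Python sketch of the greedy algorithm (illustrative, not from the slides): in every round the set covering the most still-uncovered elements is chosen.

```python
def greedy_set_cover(universe, family):
    """Greedy (ln n + 1)-approximation for set cover: repeatedly pick the
    set that covers the largest number of still-uncovered elements."""
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(family, key=lambda s: len(s & uncovered))
        if not (best & uncovered):
            raise ValueError("family does not cover the universe")
        cover.append(best)
        uncovered -= best
    return cover

# Example: the greedy algorithm first picks the largest set {1, 2, 3},
# then covers the rest with {4, 5}.
U = {1, 2, 3, 4, 5}
F = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]
print(greedy_set_cover(U, F))
```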

9 An (ln(n)+1)-approximation algorithm for set cover
Theorem:
 • The greedy set cover algorithm is an (ln(n)+1)-approximation.
Proof:
 • The algorithm runs in polynomial time and returns a set cover.
 • Let S_i be the i-th set from F picked by the algorithm.
 • We assign a cost c_x to each element x:
   – Let x be first covered by the set S_i while running the algorithm.
   – And let X_i = S_i \ (S_1 ∪ S_2 ∪ ... ∪ S_{i-1}).
   – Define: c_x = 1 / |X_i|.
 • Now we have: ALG = |C| = Σ_{x∈U} c_x, because each picked set S_i contributes exactly |X_i| · (1/|X_i|) = 1 to this sum.

10 Proof of the (ln(n)+1)-approximation algorithm for set cover
 • The n-th Harmonic number H(n) is defined as: H(n) = 1 + 1/2 + 1/3 + ... + 1/n.
 • From mathematics (bounding the sum by an integral): H(n) ≤ ln(n) + 1.
 • On the next two slides, we will prove that for any S ∈ F: Σ_{x∈S} c_x ≤ H(|S|).
 • From this we derive our approximation ratio:
   ALG = Σ_{x∈U} c_x ≤ Σ_{S∈C*} Σ_{x∈S} c_x ≤ Σ_{S∈C*} H(|S|) ≤ |C*| · H(n) ≤ (ln(n)+1) · OPT,
   where C* is the optimal set cover.

11 Bounding the cost of any set S
 • What remains is to prove that for any S ∈ F: Σ_{x∈S} c_x ≤ H(|S|).
 • Recall: if x is first covered by S_i, then c_x = 1 / |X_i|, where X_i = S_i \ (S_1 ∪ S_2 ∪ ... ∪ S_{i-1}).
 • For any S ∈ F, let u_i be the number of items of S still uncovered after the algorithm has selected S_1, ..., S_i; so u_0 = |S|.
 • Take the smallest value k such that u_k = 0; then exactly u_{i-1} – u_i elements of S are covered for the first time by S_i.
 • Moreover, |X_i| = |S_i \ (S_1 ∪ ... ∪ S_{i-1})| ≥ |S \ (S_1 ∪ ... ∪ S_{i-1})| = u_{i-1}, with the inequality due to the selection by the algorithm (S_i covers at least as many new elements as S would) and the last equality by the definition of u_{i-1}.
 • Then...

12 Bounding the cost of any set S (continued)
 • To prove: Σ_{x∈S} c_x ≤ H(|S|).
 • Last slide: |X_i| ≥ u_{i-1}, and exactly u_{i-1} – u_i elements of S are first covered by S_i.
 • Thus: Σ_{x∈S} c_x = Σ_{i=1}^{k} (u_{i-1} – u_i) · 1/|X_i| ≤ Σ_{i=1}^{k} (u_{i-1} – u_i)/u_{i-1}.
 • Recall: for m < n, H(n) – H(m) = Σ_{i=m+1}^{n} 1/i ≥ (n – m)/n, so (u_{i-1} – u_i)/u_{i-1} ≤ H(u_{i-1}) – H(u_i).
 • Hence Σ_{x∈S} c_x ≤ Σ_{i=1}^{k} (H(u_{i-1}) – H(u_i)) = H(u_0) – H(u_k) = H(|S|).  Q.E.D.

13 PTAS
 • A Polynomial-Time Approximation Scheme is an algorithm that gets two inputs: the "usual" input X and a real value ε > 0.
 • For each fixed ε > 0, the algorithm
   – uses polynomial time, and
   – is a (1+ε)-approximation algorithm.

14 FPTAS
 • A Fully Polynomial-Time Approximation Scheme is an algorithm that gets two inputs: the "usual" input X and a real value ε > 0.
 • For each fixed ε > 0, the algorithm
   – is a (1+ε)-approximation algorithm, and
   – uses time that is polynomial in the size of X and in 1/ε.

15 Example: Knapsack
 • Given: n items, each with a positive integer weight w(i) and an integer value v(i) (1 ≤ i ≤ n), and an integer B.
 • Question: select items with total weight at most B such that the total value is as large as possible.

16 Dynamic programming for Knapsack
 • Let P be the maximum value of any item.
 • We can solve the problem in O(n²P) time with dynamic programming:
   – Tabulate M(i,Q): the minimum total weight of a subset of items 1, …, i with total value exactly Q, for Q at most nP.
   – M(0,0) = 0
   – M(0,Q) = ∞, if Q > 0
   – M(i+1,Q) = min{ M(i,Q), M(i,Q – v(i+1)) + w(i+1) }
 • This algorithm is clearly correct and runs in the given time; the optimum is the largest Q with M(n,Q) ≤ B.
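A possible Python implementation of this DP (illustrative, not from the slides). It implements the same recurrence with the item dimension compressed into a single row, iterating values downwards so each item is used at most once, and reads off the answer as the largest reachable value of weight at most B.

```python
import math

def knapsack_exact(weights, values, B):
    """Exact knapsack via the value-indexed DP from the slides:
    M[Q] = minimum weight of a subset of the items seen so far with
    total value exactly Q. Runs in O(n^2 * P) time, P = max item value."""
    n = len(values)
    max_value = n * max(values, default=0)
    M = [math.inf] * (max_value + 1)  # math.inf marks "impossible"
    M[0] = 0
    for i in range(n):
        # iterate Q downwards so item i is used at most once
        for Q in range(max_value, values[i] - 1, -1):
            M[Q] = min(M[Q], M[Q - values[i]] + weights[i])
    # best achievable value within the weight budget B
    return max(Q for Q in range(max_value + 1) if M[Q] <= B)

# Example: items (weight, value) = (3,4), (4,5), (2,3) and budget B = 6.
print(knapsack_exact([3, 4, 2], [4, 5, 3], 6))  # 8: take the items of value 5 and 3
```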

17 Scaling for Knapsack
 • Take an input for Knapsack.
 • Throw away all items that have weight larger than B (they are never used).
 • Let c be some scaling constant (to be chosen later).
 • Build a new input: do not change the weights, but set new values v'(i) = ⌊v(i) / c⌋.
 • Solve the scaled instance optimally with the DP.
 • Output this solution: it approximates the optimal solution for the original instance.

18 The question is…
 • How do we set c such that
   – the approximation ratio is good enough, and
   – the algorithm runs in polynomial time?

19 Approximation ratio
 • Consider an optimal solution Y for the original problem, of value OPT.
 • Y's value in the scaled instance is at least OPT/c – n:
   – Y contains at most n items, and for each item v(i)/c – ⌊v(i)/c⌋ < 1.
 • So the DP finds a solution of value at least OPT/c – n for the scaled problem.
 • So the value of the approximate solution for the original instance is at least c · (OPT/c – n) = OPT – nc.

20 Setting c
 • Set c = εP/(2n).
 • This gives an FPTAS.
 • Running time:
   – The largest scaled value of an item is at most P/c = P / (εP/(2n)) = 2n/ε.
   – So the running time is O(n² · 2n/ε) = O(n³/ε).
 • Approximation ratio: … next slide.

21 (1+ε)-approximation
 • Note that each single item is a feasible solution (we removed items with weight more than B).
 • So OPT ≥ P.
 • The algorithm gives a solution of value at least: OPT – nc = OPT – n(εP/(2n)) = OPT – (ε/2)P.
 • Hence the approximation ratio is OPT / (OPT – (ε/2)P) ≤ OPT / (OPT – (ε/2)OPT) = 1/(1 – ε/2) ≤ 1 + ε (for ε ≤ 1).  QED.
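A self-contained Python sketch of the resulting FPTAS (illustrative; the names and the table layout are not from the slides, and positive integer weights and values are assumed). It drops items that cannot fit, rounds the values down by c = εP/(2n), solves the scaled instance exactly with the value-indexed DP, and returns the chosen item set, whose original value is within a factor 1+ε of the optimum by the analysis above.

```python
import math

def knapsack_fptas(weights, values, B, eps):
    """FPTAS for knapsack by value scaling (sketch)."""
    idx = [i for i in range(len(weights)) if weights[i] <= B]  # items that can fit
    if not idx:
        return []
    n = len(idx)
    P = max(values[i] for i in idx)
    c = eps * P / (2 * n)                       # scaling constant from the slides
    scaled = {i: int(values[i] // c) for i in idx}

    # Exact DP on scaled values: M[k][Q] = minimum weight using the first k
    # items of idx to reach scaled value exactly Q.
    max_q = sum(scaled.values())
    INF = math.inf
    M = [[INF] * (max_q + 1) for _ in range(n + 1)]
    M[0][0] = 0
    for k, i in enumerate(idx, start=1):
        for Q in range(max_q + 1):
            M[k][Q] = M[k - 1][Q]               # skip item i
            if Q >= scaled[i] and M[k - 1][Q - scaled[i]] + weights[i] < M[k][Q]:
                M[k][Q] = M[k - 1][Q - scaled[i]] + weights[i]  # take item i

    best_q = max(Q for Q in range(max_q + 1) if M[n][Q] <= B)

    # Backtrack through the table to recover the chosen items.
    chosen, Q = [], best_q
    for k in range(n, 0, -1):
        i = idx[k - 1]
        if Q >= scaled[i] and M[k][Q] == M[k - 1][Q - scaled[i]] + weights[i]:
            chosen.append(i)
            Q -= scaled[i]
    return chosen

# Example: returns a feasible item set whose original value is near-optimal.
print(knapsack_fptas([3, 4, 2], [4, 5, 3], B=6, eps=0.5))
```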

22 2-approximation for minimum weight vertex cover
 • Minimum weight vertex cover:
   – A vertex cover where each vertex has a weight.
   – We look for the vertex cover of minimum total weight.
 • The 2-approximation for vertex cover no longer works.
   – That algorithm may select very heavy vertices.
 • Consider the following ILP:
   – minimise Σ_{v∈V} w(v)·x(v), subject to x(u) + x(v) ≥ 1 for every edge {u,v} ∈ E, and x(v) ∈ {0,1} for every v ∈ V.
 • Its LP relaxation is this ILP with the last constraint replaced by: 0 ≤ x(v) ≤ 1.

23 2-approximation algorithm for minimum weight vertex cover
Algorithm:
 • Compute the optimal solution to the LP relaxation.
 • Output all v with x(v) ≥ ½.
Why this works:
 • The algorithm runs in polynomial time.
   – Linear programming can be solved in polynomial time.
   – Not by the simplex algorithm! Use the ellipsoid method or interior-point methods.
 • The algorithm returns a vertex cover.
   – For every edge, the sum of the variables of its two endpoints is at least 1.
   – Hence at least one of these two variables is at least ½.
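A Python sketch of this LP-rounding algorithm (not from the slides). It uses scipy.optimize.linprog purely for illustration; the slides do not prescribe a solver, and scipy's default is fine in practice even though the theoretical polynomial-time guarantee relies on ellipsoid or interior-point methods.

```python
import numpy as np
from scipy.optimize import linprog

def weighted_vertex_cover_2approx(n, edges, weights):
    """LP-rounding 2-approximation for minimum weight vertex cover:
    solve the LP relaxation and keep every vertex whose fractional
    value is at least 1/2."""
    # minimise sum_v w(v) x(v)  s.t.  x(u) + x(v) >= 1 per edge, 0 <= x <= 1
    A_ub = np.zeros((len(edges), n))
    for row, (u, v) in enumerate(edges):
        A_ub[row, u] = -1.0   # -x(u) - x(v) <= -1  encodes  x(u) + x(v) >= 1
        A_ub[row, v] = -1.0
    b_ub = -np.ones(len(edges))
    res = linprog(c=weights, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * n)
    if not res.success:
        raise RuntimeError("LP relaxation could not be solved")
    return {v for v in range(n) if res.x[v] >= 0.5}

# Example: a triangle with one heavy vertex (weight 10); rounding the LP
# optimum gives the cover {1, 2} of weight 2, within twice the optimum.
print(weighted_vertex_cover_2approx(3, [(0, 1), (1, 2), (0, 2)], [10, 1, 1]))
```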

24 Proof of the 2-approximation algorithm for minimum weight vertex cover
 • Let z* be the value of the optimal solution to the LP relaxation.
 • Because any vertex cover is a feasible solution to the LP, we have: z* ≤ OPT.
 • Also, we can bound ALG in terms of z*: ALG = Σ_{v: x(v) ≥ ½} w(v) ≤ Σ_{v∈V} 2·x(v)·w(v) = 2z*.
 • Hence: ALG ≤ 2z* ≤ 2·OPT.  QED

25 Conclusion
Qualities of polynomial-time approximation algorithms:
 1. Absolute constant difference.
    – Planar graph colouring.
 2. APX: constant-factor approximation.
    – TSP with the triangle inequality; vertex cover.
 3. f(n)-APX: approximation by a factor of f(n).
    – Set cover.
 4. PTAS: polynomial-time approximation scheme.
 5. FPTAS: fully polynomial-time approximation scheme.
    – Scaling for Knapsack.