Approximation algorithms


Approximation algorithms Given a minimization problem (e.g. min vertex cover, TSP, …) and an efficient algorithm that always returns some feasible solution. The algorithm is said to have approximation ratio α if for all instances, cost(sol. found)/cost(optimal sol.) ≤ α.

Approximation algorithms Given a maximization problem (e.g. MAXSAT, MAXCUT) and an efficient algorithm that always returns some feasible solution. The algorithm is said to have approximation ratio α if for all instances, cost(optimal sol.)/cost(sol. found) ≤ α.

General design/analysis trick We do not directly compare our solution to the optimal one – this would be difficult as we typically have no clue about the optimal solution. Instead, we compare our solution to some lower bound (for minimization problems) for the optimal solution. Our approximation algorithm often works by constructing some relaxation providing such a lower bound and turning the relaxed solution into a feasible solution without increasing the cost too much.

Traveling Salesman Problem (TSP) Given an n × n positive distance matrix (d_ij), find a permutation π on {0,1,2,…,n−1} minimizing Σ_{i=0}^{n−1} d_{π(i), π((i+1) mod n)}. The special case where the d_ij are actual distances on a map is called the Euclidean TSP. (NOTE: the input is the points on the map; the distances are implicit!) The special case where the d_ij satisfy the triangle inequality is called the Metric TSP.
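The objective above is just the length of the closed tour, with wrap-around from the last city back to the first. As a small illustration (the helper name `tour_cost` is mine, not from the slides):

```python
# Cost of the tour visiting cities in the order given by permutation pi,
# closing the cycle via the (i+1 mod n) wrap-around from the definition.
def tour_cost(d, pi):
    n = len(pi)
    return sum(d[pi[i]][pi[(i + 1) % n]] for i in range(n))

d = [
    [0, 1, 2],
    [1, 0, 1],
    [2, 1, 0],
]
print(tour_cost(d, [0, 1, 2]))  # 1 + 1 + 2 = 4
```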

Lower bound/relaxation What are suitable relaxations of traveling salesman tours? Ideas from branch and bound: Cycle covers and minimum spanning trees. We can turn a minimum spanning tree into a traveling salesman tour without increasing the cost too much.

Approximation Ratio Theorem: Approx-TSP-Tour has approximation ratio 2.
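The slides name Approx-TSP-Tour but its listing is not reproduced in this transcript. A minimal sketch of the standard algorithm behind the theorem, assuming the metric case: build a minimum spanning tree (Prim's algorithm below) and output its preorder walk, shortcutting repeated vertices, which the triangle inequality allows without increasing the cost. The MST costs at most OPT, and the full walk costs at most twice the MST, giving ratio 2.

```python
import math

def approx_tsp_tour(d):
    """2-approximation for metric TSP: preorder walk of a minimum
    spanning tree, shortcut via the triangle inequality."""
    n = len(d)
    # Prim's algorithm rooted at vertex 0.
    in_tree = [False] * n
    parent = [0] * n
    key = [math.inf] * n
    key[0] = 0
    children = [[] for _ in range(n)]
    for _ in range(n):
        u = min((v for v in range(n) if not in_tree[v]), key=lambda v: key[v])
        in_tree[u] = True
        if u != 0:
            children[parent[u]].append(u)
        for v in range(n):
            if not in_tree[v] and d[u][v] < key[v]:
                key[v], parent[v] = d[u][v], u
    # Preorder walk of the MST; listing each vertex once is the shortcutting.
    tour, stack = [], [0]
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    return tour
```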

Improvements The best known algorithm for metric TSP (Christofides' algorithm) has approximation factor 3/2. It uses a minimum spanning tree, but transforms the tree into an Eulerian graph by adding a minimum-weight matching on the odd-degree vertices instead of duplicating all edges. Why do we only consider the metric case?

Approximating general TSP is NP-hard If there is an efficient approximation algorithm for TSP with any approximation factor α, then P=NP. Proof: We use a modification of the reduction from Hamiltonian cycle to TSP.

Reduction Given an instance (V,E) of Hamiltonian cycle, construct a TSP instance (V,d) as follows: d(u,v) = 1 if (u,v) ∈ E, and d(u,v) = α|V| + 1 otherwise. Suppose we have an efficient approximation algorithm for TSP with approximation ratio α. Run it on the instance (V,d). (V,E) has a Hamiltonian cycle if and only if the returned solution has length at most α|V|.
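The reduction can be sketched in a few lines (function name and graph representation are mine): a Hamiltonian cycle becomes a tour of cost |V|, while any tour using even one non-edge already costs more than α|V|, so an α-approximation must separate the two cases.

```python
def gap_reduction(n, edges, alpha):
    """From a Hamiltonian-cycle instance on vertices 0..n-1, build a TSP
    distance matrix where a Hamiltonian cycle costs exactly n and any
    tour using a non-edge costs more than alpha * n."""
    E = {frozenset(e) for e in edges}
    d = [[0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            if u != v:
                d[u][v] = 1 if frozenset((u, v)) in E else alpha * n + 1
    return d
```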

Remarks The reduction we constructed is called a gap creating reduction. Gap creating reductions have been constructed for many NP-hard optimization problems, proving a limit to their approximability by efficient algorithms. For instance, it is known that min vertex cover does not have an algorithm with approximation ratio better than 1.36. This reduction and most gap creating reductions are much harder to construct than the TSP reduction. Constructing them has been a major theme of complexity theory in the 1990s.

General design/analysis trick Our approximation algorithm often works by constructing some relaxation providing a lower bound and turning the relaxed solution into a feasible solution without increasing the cost too much. The LP relaxation of the ILP formulation of the problem is a natural choice. We may then round the optimal LP solution.

Not obvious that it will work….

Min weight vertex cover Given an undirected graph G=(V,E) with non-negative weights w(v), find the minimum-weight subset C ⊆ V that covers E. Min vertex cover is the special case where w(v)=1 for all v.

ILP formulation Find (x_v)_{v ∈ V} minimizing Σ_v w_v x_v subject to: x_v ∈ {0,1} for all v; x_u + x_v ≥ 1 for all (u,v) ∈ E.

LP relaxation Find (x_v)_{v ∈ V} minimizing Σ_v w_v x_v subject to: x_v ∈ [0,1] for all v; x_u + x_v ≥ 1 for all (u,v) ∈ E.

Relaxation and Rounding Solve the LP relaxation. Round the optimal solution x* to an integer solution x: x_v = 1 iff x*_v ≥ ½. The rounded solution is a cover: if (u,v) ∈ E, then x*_u + x*_v ≥ 1, hence at least one of x*_u, x*_v is at least ½, so at least one of x_u and x_v is set to 1.
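A self-contained sketch of this scheme, avoiding an LP solver: the vertex-cover LP is known to always have a half-integral optimal solution (all x*_v in {0, ½, 1}), so on a tiny graph we can find the LP optimum by enumeration and then apply the rounding rule from the slide. The function name and representation are mine.

```python
from itertools import product

def lp_round_vertex_cover(n, edges, w):
    """Solve the vertex-cover LP relaxation on a small graph by
    enumerating half-integral solutions (the LP optimum is known to be
    half-integral), then round x*_v >= 1/2 up to 1."""
    best, best_x = float("inf"), None
    for x in product((0.0, 0.5, 1.0), repeat=n):
        if all(x[u] + x[v] >= 1 for u, v in edges):
            cost = sum(wv * xv for wv, xv in zip(w, x))
            if cost < best:
                best, best_x = cost, x
    cover = [v for v in range(n) if best_x[v] >= 0.5]
    return best_x, cover
```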

Quality of solution found Let z* = Σ_v w_v x*_v be the cost of the optimal LP solution. Then Σ_v w_v x_v ≤ 2 Σ_v w_v x*_v, as we only round up when x*_v ≥ ½. Since z* ≤ the cost of the optimal ILP solution, our algorithm has approximation ratio 2.

Relaxation and Rounding Relaxation and rounding is a very powerful scheme for getting approximate solutions to many NP-hard optimization problems. In addition to often giving non-trivial approximation ratios, it is known to be a very good heuristic, especially the randomized rounding version. Randomized rounding of x ∈ [0,1]: Round to 1 with probability x and 0 with probability 1-x.
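The randomized-rounding rule at the end of the slide is one line of code (helper name mine):

```python
import random

def randomized_round(x, rng=random):
    """Randomized rounding of a fractional vector: each coordinate is
    independently rounded to 1 with probability x_v, else to 0."""
    return [1 if rng.random() < xv else 0 for xv in x]
```

Note that coordinates already at 0 or 1 stay fixed, since random() lies in [0, 1); only the genuinely fractional coordinates are randomized.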

Min set cover Given a set system S1, S2, …, Sm ⊆ X, find the smallest possible subsystem covering X.

Greedy algorithm for min set cover
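The pseudocode for Greedy-Set-Cover is not reproduced in this transcript; a minimal sketch of the standard greedy rule (pick the set covering the most still-uncovered elements, repeat until X is covered; function name and representation mine):

```python
def greedy_set_cover(X, sets):
    """Greedy set cover: repeatedly choose the set with the largest
    intersection with the uncovered elements."""
    uncovered = set(X)
    chosen = []
    while uncovered:
        i = max(range(len(sets)), key=lambda j: len(uncovered & sets[j]))
        if not uncovered & sets[i]:
            raise ValueError("the sets do not cover X")
        chosen.append(i)
        uncovered -= sets[i]
    return chosen
```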

Approximation Ratio Greedy-Set-Cover does not give a constant approximation ratio. This is even true for Greedy-Vertex-Cover! Quick analysis: approximation ratio ln(n). Refined analysis: approximation ratio H_s, where s is the size of the largest set and H_s = 1/1 + 1/2 + 1/3 + … + 1/s is the s-th harmonic number. s may be small on concrete instances; for example, H_3 = 11/6 < 2.

Approximation Schemes Some optimization problems can be approximated very well, with approximation ratio 1+ε for any ε>0. An approximation scheme takes an additional input ε>0 and outputs a solution within a factor 1+ε of optimal.

PTAS and FPTAS An approximation scheme is a Polynomial Time Approximation Scheme (PTAS) if, for every fixed ε>0, the algorithm runs in time polynomial in the input length n. An approximation scheme is a Fully Polynomial Time Approximation Scheme (FPTAS) if the algorithm runs in time polynomial in both n and 1/ε.

Knapsack problem Given n items with weights w1,...,wn , values v1,...,vn and weight limit W, fit items within weight limit maximizing total value.

FPTAS for Knapsack From the tutorials: we have a pseudo-polynomial time algorithm for Knapsack running in time O(n²V), where V is the largest value. We use this algorithm in step 4.
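The scheme itself is not reproduced in this transcript. A compact sketch of the standard construction: scale every value down by K = ε·vmax/n, run a pseudo-polynomial DP over the scaled values (here a DP computing the minimum weight reaching each scaled value, with bitmasks to recover the chosen items), and return the best feasible set. The returned set has true value at least (1−ε)·OPT. Function name and representation are mine.

```python
def knapsack_fptas(weights, values, W, eps):
    """FPTAS sketch for 0/1 knapsack: scale values by K = eps*vmax/n,
    then run a min-weight-per-value DP on the scaled values."""
    n = len(values)
    K = eps * max(values) / n
    scaled = [int(v / K) for v in values]
    total = sum(scaled)
    INF = float("inf")
    minw = [0.0] + [INF] * total   # minw[v]: min weight reaching scaled value v
    pick = [0] * (total + 1)       # pick[v]: bitmask of items achieving minw[v]
    for i in range(n):
        for v in range(total, scaled[i] - 1, -1):
            cand = minw[v - scaled[i]] + weights[i]
            if cand < minw[v] and cand <= W:
                minw[v] = cand
                pick[v] = pick[v - scaled[i]] | (1 << i)
    best_v = max(v for v in range(total + 1) if minw[v] <= W)
    return [i for i in range(n) if pick[best_v] >> i & 1]
```

The inner loop runs over at most n·vmax/... scaled values, i.e. O(n²/ε) states in total, which is what makes the scheme fully polynomial.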

MAXE3SAT Given a Boolean formula in CNF with exactly three distinct variables per clause, find an assignment satisfying as many clauses as possible.

Randomized algorithm Flip a fair coin for each variable and assign its truth value according to the coin toss. Claim: the expected number of satisfied clauses is at least (7/8)m, where m is the total number of clauses. We say that the algorithm has an expected approximation ratio of 8/7.

Analysis Let Y_i be a random variable which is 1 if the i-th clause gets satisfied and 0 otherwise, and let Y be the total number of satisfied clauses. Pr[Y_i = 1] = 1 if the i-th clause contains some variable together with its negation. Pr[Y_i = 1] = 1 − (1/2)³ = 7/8 if the i-th clause does not include a variable and its negation. So E[Y_i] = Pr[Y_i = 1] ≥ 7/8, and by linearity of expectation E[Y] = E[Σ_i Y_i] = Σ_i E[Y_i] ≥ (7/8)m.
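The expectation computed above can be checked exactly, without sampling: each clause contributes 1 − 2^(−k) for its k distinct literals, or 1 if it contains a variable and its negation. A small sketch (literal encoding (variable, negated) is mine):

```python
from fractions import Fraction

def expected_satisfied(clauses):
    """Exact expected number of clauses satisfied by a uniformly random
    assignment. A literal is a pair (variable, negated)."""
    total = Fraction(0)
    for clause in clauses:
        lits = set(clause)
        if any((v, not neg) in lits for v, neg in lits):
            total += 1  # contains x and not-x: always satisfied
        else:
            total += 1 - Fraction(1, 2 ** len(lits))
    return total

clauses = [((0, False), (1, False), (2, False)),
           ((0, True), (1, False), (3, True))]
print(expected_satisfied(clauses))  # 7/4, i.e. (7/8) * 2 clauses
```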

Remarks It is possible to derandomize the algorithm, achieving a deterministic approximation algorithm with approximation ratio 8/7. An approximation ratio of 8/7 − ε is not possible for any constant ε > 0 unless P=NP. This is very hard to show (proved in 1997).

More Inapproximability Unless P=NP, we cannot have approximation algorithms guaranteeing the following approximation ratios:
Vertex Cover: 1.36
Set Cover: c·ln(n), for some c>0
TSP: any ratio
Metric TSP: 220/219