Dealing with NP-Complete Problems


Dealing with NP-Complete Problems Roman Ginis Mani Chandy

What to do with NP-Complete Problems? Good sense (and humility) suggests that we probably can't find algorithms that take polynomial time in the worst case for NP-complete problems: if we did find a polynomial-time solution to any NP-complete problem, then P = NP, and we would have polynomial-time solutions for all problems in NP.

What to do with NP-Completeness? Worst-case running time may be exponential, yet an algorithm may take only polynomial time on average. Fast average-case solutions are good, but in many cases we don't have those either.

What to do? Find solutions that work well much of the time, even if we can't prove that the average solution time is fast. Find solutions that are provably within specified bounds of the optimal. Example: you are looking for any solution to a traveling salesman problem that is provably within 20% of optimal.

An Approach: Branch and Bound An example: the knapsack problem. Recall what the knapsack problem is: given a knapsack with capacity C and N objects, where the j-th object has weight W[j] and value V[j], put objects into the knapsack to maximize the value of its contents without exceeding its capacity.

0-1 Knapsack Problem Assume that all parameters (capacity, values, weights) are positive integers. Mathematical formulation:

max (sum over all j of V[j]*x[j])
subject to (sum over all j of W[j]*x[j]) <= C
where each x[j] is 0 or 1.

x[j] = 1 if and only if object j is placed in the knapsack.
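The same formulation in standard mathematical notation:

\[
\begin{aligned}
\max\ & \sum_{j=1}^{N} V_j\, x_j \\
\text{s.t.}\ & \sum_{j=1}^{N} W_j\, x_j \le C, \qquad x_j \in \{0,1\},\ j = 1,\dots,N.
\end{aligned}
\]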

Branch and Bound for Knapsack The knapsack problem is NP-complete (strictly speaking, its decision version is NP-complete and the optimization version is NP-hard). Suppose we want to find a solution within 20% of optimal. Or: we want to run an algorithm for 2 days, then pick the best solution found so far, and we'd like the algorithm to tell us that this solution is within P% of optimal, where P is reported by the algorithm.

Bounds Suppose we are maximizing, we have a feasible solution with value V, and we want to prove that this solution is within 20% of optimal. We can do so by proving that V is within 20% of an upper bound on the optimal value. Maximizing: find an upper bound. Minimizing: find a lower bound.
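Written out, with U the computed upper bound and OPT the (unknown) optimum of a maximization problem:

\[
V \ge 0.8\,U \quad \text{and} \quad \mathrm{OPT} \le U \;\Longrightarrow\; V \ge 0.8\,\mathrm{OPT},
\]

so V is certified to be within 20% of the optimum even though OPT itself is never computed.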

Finding bounds One way: relax the constraints. Suppose the given problem is Z = max f(x) subject to x in set B. We relax the constraint by requiring x to be in a set D, where B is contained in D. The relaxed problem is Z’ = max f(x) subject to x in set D. What is the relationship between Z and Z’?

Relaxing Constraints [Figure: a diagram of the feasible set B nested inside the larger relaxed set D.] Z is the best solution over set B; Z' is the best solution over set D.

Solutions to Relaxed Problems The optimal solution to a relaxed problem is at least as good as the optimal solution to the original problem, because every feasible solution to the original problem, including its optimum, is also a feasible solution to the relaxed problem. So we get bounds by relaxing constraints: maximizing gives upper bounds; minimizing gives lower bounds.
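In symbols, since every point of B also lies in D:

\[
B \subseteq D \;\Longrightarrow\; Z' = \max_{x \in D} f(x) \;\ge\; \max_{x \in B} f(x) = Z .
\]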

Finding Bounds We want tight bounds, because the tighter the bound, the smaller the gap we can guarantee between our solution and the optimum. Example: we have a feasible solution with value 100 and an upper bound of 110, so we know that our solution is within 10% of optimal. What can we claim if our bound is 200?

Finding bounds We also want to compute bounds quickly because, as we shall see shortly, we will be computing bounds repeatedly. There is a tradeoff between computing very tight bounds and computing bounds rapidly, and there is no obvious general rule for striking it.

Bounds for Knapsack We find a bound for the knapsack problem by relaxing its constraints. One way to relax the constraints is to drop the requirement that objects are indivisible: in the given problem you are not allowed to put a fraction of an object into the knapsack; in the relaxed problem, fractional objects are allowed.

Bounds for Knapsack
Given problem:
max (sum over j of V[j]*x[j])
subject to (sum over j of W[j]*x[j]) <= C
x[j] = 0 or 1.
Relaxed problem: the same, except that each x[j] may be fractional:
0 <= x[j] <= 1.

The Cheesecake Problem The relaxed knapsack problem is called the cheesecake problem. Why? How would you solve the cheesecake problem fast?

The Cheesecake Name It's called the cheesecake problem because you can think of the knapsack as your stomach and the objects as cheesecakes. The value of a cheesecake is the pleasure you get out of eating it. The weight of a cheesecake is … well, its weight.

Solutions to the Cheesecake Order the objects in decreasing value density, i.e., value divided by weight. Assume this ordering is 1, 2, 3, …, the natural order. The maximum happiness per bite (per gram) comes from cheesecake 1, then cheesecake 2, and so on. Optimal solution: eat the cheesecakes in the order 1, 2, 3, … until you can't eat any more.

Solutions to Cheesecake Go through the cheesecakes in increasing order. Initial weight of cheesecakes in stomach: 0. If the next cheesecake fits in the stomach, eat it all; otherwise eat exactly the fraction that fills the stomach, and stop. Proof of optimality? Any other solution can be shown to be non-optimal by perturbing it: swap some weight of a lower-density cheesecake for an equal weight of a higher-density one, and the total value increases. (A code sketch of the procedure follows.)
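A minimal sketch of this greedy procedure in Python; the function name cheesecake_bound and its list-based interface are illustrative choices, not from the slides:

```python
def cheesecake_bound(values, weights, capacity):
    """Upper bound for the 0-1 knapsack via the fractional (cheesecake) relaxation.

    Eats items in decreasing value-density order; the last item may be eaten
    fractionally, so the result is >= the value of the best 0-1 packing.
    """
    # Item indices sorted by value density, highest first.
    order = sorted(range(len(values)),
                   key=lambda j: values[j] / weights[j], reverse=True)
    total_value, remaining = 0.0, capacity
    for j in order:
        if weights[j] <= remaining:        # the whole item fits: eat it all
            total_value += values[j]
            remaining -= weights[j]
        else:                              # eat the fraction that fills the stomach
            total_value += values[j] * remaining / weights[j]
            break
    return total_value
```

Sorting dominates, so the bound costs O(n log n); that matters because branch and bound will recompute it at every node of the search tree.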

The Solution Tree We can represent all solutions to the knapsack problem as a tree. (Remember the trees we constructed to describe the computations of nondeterministic Turing machines?)

[Figure: a complete binary tree of depth 3. The root branches on x[1] = 0 or x[1] = 1, the second level branches on x[2], the third on x[3]; the eight leaves correspond to the assignments 000, 001, 010, 011, 100, 101, 110, 111.]

The Solution Tree We construct the tree by making a decision about the value of x[j] for some j, setting it to 0 or to 1. At each level of the tree we branch on the same variable x[j]. There are 2^n leaves, so generating the whole tree takes exponential time.

A Brute-Force Approach Generate the whole solution tree. Each leaf corresponds to a solution, which may be infeasible because its total weight exceeds the capacity. Discard infeasible solutions and keep track of the best feasible solution seen so far (sketched in code below).
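A direct transcription of this brute-force search, again as an illustrative sketch:

```python
from itertools import product

def knapsack_brute_force(values, weights, capacity):
    """Try all 2^n assignments; keep the best feasible one. Theta(n * 2^n) time."""
    best_value, best_x = 0, None
    for x in product((0, 1), repeat=len(values)):     # one leaf per assignment
        weight = sum(w * xj for w, xj in zip(weights, x))
        if weight <= capacity:                        # discard infeasible leaves
            value = sum(v * xj for v, xj in zip(values, x))
            if value > best_value:
                best_value, best_x = value, x
    return best_value, best_x
```

This is the baseline that branch and bound has to beat: it visits all 2^n leaves unconditionally.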

Central Question Is there some way to find an optimal solution, or a solution that is guaranteed to be within a specified percentage of optimal, without generating the whole tree? Is there some way to use bounds to keep the tree "pruned"?

The Approach As we generate the tree, we compute an upper bound for each node. The bound associated with a node is an upper bound on the value of every solution in the subtree below that node; for knapsack, it is the value of the objects fixed so far plus the cheesecake bound on the remaining objects and remaining capacity. A feasible solution that is at least as good as the best bound is an optimal solution, as we shall see.

Tree Generation Suppose V = [7, 5, 3], W = [6, 5, 4], and C = 9. Can we use bounds to reach the best solution without expanding the whole tree?
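Plugging this instance into the cheesecake_bound sketch from earlier gives the bound at the root, before any variable has been fixed:

```python
V, W, C = [7, 5, 3], [6, 5, 4], 9
print(cheesecake_bound(V, W, C))   # 10.0: all of object 1, plus 3/5 of object 2
```

So no 0-1 packing of this instance can be worth more than 10.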

The Solution Tree Which node of the tree should we expand next?

[Figure: the partially expanded tree for V = [7, 5, 3], W = [6, 5, 4], C = 9:

root
  x[1] = 1
    x[2] = 1: infeasible (bound = -infinity)
    x[2] = 0
      x[3] = 1: infeasible (bound = -infinity)
      x[3] = 0: feasible leaf of value 7 (bound = 7)
  x[1] = 0
    x[2] = 1: bound = 8
    x[2] = 0: bound = 3]

The Central Idea Always expand the leaf of the (partially expanded) tree with the best bound. Note: an upper bound on the optimal solution is the maximum of the bounds associated with the leaves of the partially expanded tree, since every complete solution lies below exactly one leaf. More next class.
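To make the idea concrete before next class, here is a best-first branch-and-bound sketch. It reuses the cheesecake_bound function sketched above; the max-heap-keyed-by-bound structure is the standard realization of "expand the best bound first", but the interface and helper names are my own:

```python
import heapq

def knapsack_branch_and_bound(values, weights, capacity):
    """Best-first branch and bound for the 0-1 knapsack problem.

    Heap entries are (-bound, value, weight, level, choices); popping the
    smallest negated bound expands the node with the largest upper bound.
    Requires the cheesecake_bound function defined earlier.
    """
    n = len(values)

    def node_bound(level, value, room):
        # Value of the fixed objects plus the cheesecake bound on the rest.
        return value + cheesecake_bound(values[level:], weights[level:], room)

    best_value, best_choices = 0, ()
    heap = [(-node_bound(0, 0, capacity), 0, 0, 0, ())]
    while heap:
        neg_bound, value, weight, level, choices = heapq.heappop(heap)
        if -neg_bound <= best_value:    # cannot beat the incumbent: prune
            continue
        if level == n:                  # feasible leaf that beats the incumbent
            best_value, best_choices = value, choices
            continue
        for x in (1, 0):                # branch on x[level]
            w = weight + x * weights[level]
            if w <= capacity:           # infeasible branches (bound = -infinity) are dropped
                v = value + x * values[level]
                heapq.heappush(heap, (-node_bound(level + 1, v, w),
                                      v, w, level + 1, choices + (x,)))
    return best_value, best_choices

# The instance from the slides; prints (8, (0, 1, 1)): take objects 2 and 3.
print(knapsack_branch_and_bound([7, 5, 3], [6, 5, 4], 9))
```

The search stops expanding as soon as every remaining leaf's bound is at most the best feasible value found, which is exactly the optimality test described on the previous slide.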