NP-completeness 204.302 in 2005 (MIT, James Orlin)


1 MIT and James Orlin — NP-completeness, 204.302, 2005

2 Complexity. How can we show that a problem is efficiently solvable? We can show it constructively, by providing an algorithm and proving that the algorithm solves the problem efficiently. How can we show that a problem is not efficiently solvable? Proving this negative is the aim of complexity theory.

3 What do we mean by a problem? Consider: maximize 3x + 4y subject to 4x + 5y ≤ 23, x ≥ 0, y ≥ 0. This is an "instance" of linear programming. When we say the linear programming problem, we refer to the collection of all instances. Similarly, the shortest path problem refers to the collection of all instances of finding shortest paths, and the traveling salesman problem refers to the collection of all of its instances.

4 Instances versus problems. Complexity theory addresses the following question: when is a problem hard? Note: it does not deal with the question of whether any particular instance is hard.

5 General fact. As problem instances get larger, the time to solve them grows. But how fast? We say that a problem is solvable in polynomial time if there is a polynomial p(n) such that the time to solve any instance of size n is at most p(n).

6 Some examples. Finding a word in a dictionary with n entries: time ≈ log n, depending on assumptions. Sorting n items: time ≈ n log n. Finding the shortest path from s to t: time ≈ n^2. All are solvable in polynomial time.
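The dictionary lookup above is ordinary binary search, which takes O(log n) comparisons on a sorted list. A minimal sketch (the word list and function name here are illustrative, not from the slides):

```python
def binary_search(words, target):
    """Look up target in a sorted list using O(log n) comparisons."""
    lo, hi = 0, len(words) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if words[mid] == target:
            return mid               # found: return its index
        elif words[mid] < target:
            lo = mid + 1             # discard the lower half
        else:
            hi = mid - 1             # discard the upper half
    return -1                        # not in the dictionary

words = ["apple", "banana", "cherry", "date"]
print(binary_search(words, "cherry"))  # 2
print(binary_search(words, "fig"))     # -1
```

Each iteration halves the remaining range, which is where the log n bound comes from.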

7 Running times as n grows

8 Easy problems. Easy problems are those whose running time is guaranteed to grow no faster than some polynomial in the size of the input. Everything else is a hard problem of some description.

9 Polynomial time algorithms. We consider a problem X to be "easy", or efficiently solvable, if there is a polynomial time algorithm A for solving X. We let P denote the class of problems solvable in polynomial time. Problems in the class P include linear programming, and therefore the assignment problem, the transportation problem, and the minimum cost flow problem as well. So are: finding a topological order, finding a critical path, and finding an Eulerian cycle.

10 To determine whether M is prime, one can divide M by every integer less than M. The number of steps taken by one variant of the simplex algorithm on the minimum cost flow problem is 1000 n log n pivots. Linear programming can be solved by a technique called the ellipsoid algorithm in at most n log M iterations, where each iteration takes at most 1000 n^3 steps. Which of these are polynomial time algorithms?
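The first algorithm on this slide, trial division, can be sketched as follows. The step counter returned here is added purely for illustration: it grows roughly like M itself, i.e. exponentially in the input size log M, which is why trial division is not a polynomial time algorithm.

```python
def is_prime_trial(M):
    """Trial division: divide M by every integer d with 2 <= d < M.
    Returns (primality, number of divisions performed).
    The division count is about M, i.e. exponential in log M."""
    if M < 2:
        return False, 0
    steps = 0
    for d in range(2, M):
        steps += 1
        if M % d == 0:
            return False, steps      # found a divisor: not prime
    return True, steps               # no divisor found: prime

print(is_prime_trial(97))   # (True, 95): 95 divisions for a 2-digit input
print(is_prime_trial(91))   # (False, 6): 91 = 7 * 13
```

Doubling the number of digits of M roughly squares the number of divisions, the hallmark of an exponential-time algorithm measured against input size.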

11 Can integer programming be solved in polynomial time? Every algorithm that has ever been developed for integer programming takes exponential time in the worst case. It is generally believed that no polynomial time algorithm for integer programming exists. Complexity theory can be used to prove that integer programming is hard.

12 Hard problems in practice. What can you say to your manager if he or she hands you a problem that is too difficult for you to solve? (Adapted from Garey and Johnson.)

13 MIT and James Orlin13 I cant’ find an efficient algorithm. I guess I’m too dumb.

14 MIT and James Orlin14 I cant’ find an efficient algorithm, because no such algorithm is possible

15 I can't find an efficient algorithm, but neither can these famous researchers.

16 The class NP-easy. Consider an optimization problem X in which, for any instance I, the goal is to find a feasible solution x for I with maximum (or minimum) value f_I(x). We say that X is NP-easy if there is a polynomial p(·) with the following properties for every instance I of X: 1. There is an optimal solution x for I such that size(x) < p(size(I)). "There is a small sized optimum solution." 2. For any proposed solution y, one can evaluate whether y is feasible in fewer than p(size(I) + size(y)) steps. "One can efficiently check feasibility." 3. The number of steps to evaluate f_I(y) is fewer than p(size(I) + size(y)). "One can efficiently evaluate the objective function."

17 The housing problem. 400 students applied in the lottery for a wonderful new dorm that holds 100 students. You have a list of pairs of incompatible students, and no two incompatible students may both be among the 100 students chosen for the dorm. Is there an efficient procedure for finding such a list of 100 students?
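Note the asymmetry in this problem: checking a proposed dorm list against the incompatibility pairs is easy, while finding a feasible list of 100 among 400 is the hard part (this is an independent set problem). A sketch of the easy checking step, using hypothetical small data rather than 400 students:

```python
def dorm_list_feasible(chosen, incompatible_pairs):
    """Check a proposed dorm list in polynomial time:
    feasible iff no incompatible pair is fully contained in it."""
    chosen_set = set(chosen)
    return all(not (a in chosen_set and b in chosen_set)
               for a, b in incompatible_pairs)

# Toy instance: students 1..4, a 2-person "dorm",
# students 1 & 2 incompatible, students 3 & 4 incompatible.
pairs = [(1, 2), (3, 4)]
print(dorm_list_feasible([1, 3], pairs))  # True: no pair violated
print(dorm_list_feasible([1, 2], pairs))  # False: 1 and 2 clash
```

The check scans each pair once, so its cost is linear in the number of incompatibility pairs.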

18 0-1 integer programming is NP-easy. Checking that 0-1 integer programming is NP-easy: 1. There is an optimal solution x for I such that size(x) < p(size(I)) — every solution is an n-vector of 0's and 1's. "There is a small sized optimum solution." 2. For any proposed solution y, one can evaluate whether y is feasible in fewer than p(size(I) + size(y)) steps — evaluating whether a 0-1 vector is feasible means checking each constraint. "One can efficiently check feasibility." 3. The number of steps to evaluate f(y) is fewer than p(size(I) + size(y)) — evaluating f(y) means computing cy for the linear objective function c. "One can efficiently evaluate the objective function."
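Conditions 2 and 3 for 0-1 integer programming can be sketched directly: verify Ax ≤ b constraint by constraint, then compute the objective cx. The data below reuse the coefficients from the LP instance on slide 3 purely as an illustration:

```python
def check_binary_solution(A, b, c, x):
    """For a proposed 0-1 vector x: check feasibility of Ax <= b
    (condition 2) and evaluate the objective cx (condition 3).
    Both take a polynomial number of arithmetic steps."""
    assert all(v in (0, 1) for v in x), "x must be a 0-1 vector"
    feasible = all(sum(a_ij * x_j for a_ij, x_j in zip(row, x)) <= b_i
                   for row, b_i in zip(A, b))
    value = sum(c_j * x_j for c_j, x_j in zip(c, x))
    return feasible, value

# maximize 3x + 4y subject to 4x + 5y <= 23, x, y in {0, 1}
A, b, c = [[4, 5]], [23], [3, 4]
print(check_binary_solution(A, b, c, [1, 1]))  # (True, 7)
```

With m constraints and n variables, the check costs O(mn) arithmetic operations, well within a polynomial bound.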

19 Some more NP-easy problems. TSP: Is there a small sized optimum solution? Can one check feasibility efficiently? Can one evaluate the objective function efficiently?
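For the TSP all three questions have the answer yes: a solution is a permutation of the n cities (small), feasibility is a permutation check (fast), and the objective is a sum of n distances (fast). A sketch with an illustrative 3-city distance matrix:

```python
def check_tour(dist, tour):
    """Verify a proposed TSP tour and evaluate its length.
    Feasibility: tour must visit each of the n cities exactly once.
    Objective: sum of the n leg distances, closing back to the start."""
    n = len(dist)
    if sorted(tour) != list(range(n)):
        return False, None           # not a permutation: infeasible
    length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
    return True, length

dist = [[0, 2, 9],
        [2, 0, 6],
        [9, 6, 0]]
print(check_tour(dist, [0, 1, 2]))   # (True, 17): legs 2 + 6 + 9
print(check_tour(dist, [0, 0, 1]))   # (False, None): city 0 repeated
```

Checking a single tour is cheap; what is believed to be hard is searching among the (n-1)!/2 possible tours for the shortest one.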

20 On NP-easy problems. Theorem: If problem X is NP-easy, and if Y is a special case of X, then Y is NP-easy. Example: 0-1 integer programming is NP-easy, and capital budgeting is a special case of 0-1 integer programming; therefore capital budgeting is NP-easy. "If a problem is easier than an NP-easy problem, it is NP-easy."

21 Other problems that are NP-easy. The set cover problem (fire station problem). The capital budgeting problem. Determining the largest prime number less than n that divides the integer n — solutions are numbers that divide n, the size of any solution x is log x < log n, and a proposed solution can be checked as a divisor in polynomial time. Also, any problem that is a special case of an NP-easy problem is NP-easy, so determining whether a number is prime is NP-easy.

22 On NP-easy optimization problems. Almost any optimization problem that you will ever want to solve is NP-easy. It is a challenge to find optimization problems that are not NP-easy. The next slide illustrates a problem that is not NP-easy.

23 A problem that is not NP-easy. Input: an integer n. Optimization: find the smallest integer k such that k > n and both k and k+2 are prime numbers. It is possible that the size of the optimum solution, which is log k, is exponentially large in the size of the problem instance, which is log n. So this violates condition 1: the size of the optimum solution may be exponential in the size of the problem instance.
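For small n the answer is easy to compute, which makes the point concrete: the difficulty is not in checking a candidate but in the fact that no polynomial bound on the size of the answer k is known (indeed, whether such a k always exists is the twin prime conjecture). A sketch, with the primality test kept deliberately naive:

```python
def next_twin_prime(n):
    """Find the smallest k > n with k and k+2 both prime.
    No bound of the form log k < p(log n) is known, which is
    exactly why condition 1 of NP-easiness fails here."""
    def is_prime(m):
        if m < 2:
            return False
        d = 2
        while d * d <= m:           # trial division up to sqrt(m)
            if m % d == 0:
                return False
            d += 1
        return True

    k = n + 1
    while not (is_prime(k) and is_prime(k + 2)):
        k += 1
    return k

print(next_twin_prime(10))  # 11 (11 and 13 are both prime)
print(next_twin_prime(13))  # 17 (17 and 19 are both prime)
```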

24 NP-easy. Almost every optimization problem that you will ever see is NP-easy. Question: can the NP-easy problems be solved in polynomial time? This is a very famous unsolved problem in mathematics, often represented as "Does P = NP?" Amazing fact 1: if 0-1 integer programming can be solved in polynomial time, then every other NP-easy problem can be solved in polynomial time. Amazing fact 2: if the traveling salesman problem (or the capital budgeting problem, or the independent set problem) can be solved in polynomial time, then every other NP-easy problem can be solved in polynomial time.

25 NP-equivalence and other classes. We say that a problem is NP-equivalent if it is both NP-hard and NP-easy. (Diagram of the classes: NP-hard, NP-easy, NP-equivalent, NP-complete, P.)

26 NP-complete problems. There is a set of problems that are called NP-complete. These problems are equivalent to each other in terms of their solvability. No efficient algorithm is known for any of them, but if an efficient algorithm is found for one of them, it can be adapted to solve all of them.

27 No algorithms. Suppose there is no known way to find an optimal solution efficiently. We still want some solution, so we need strategies for finding good feasible solutions. This is why heuristics exist.

28 Recognizing an NP-complete problem

29 The class NP-hard. An oracle function is a "black box" for solving an optimization problem. An oracle function for integer programming would take an integer programming instance as input and produce a solution in one time unit. Let X be an optimization problem. We say that X is NP-hard if every NP-easy problem can be solved in polynomial time when one is permitted to use an oracle function for X. Theorem: 0-1 integer programming is NP-hard.

30 On NP-hardness. Theorem: If problem X is NP-hard, and if X is a special case of Y, then Y is NP-hard. Example: 0-1 integer programming is NP-hard, and 0-1 integer programming is a special case of integer programming; therefore, integer programming is NP-hard. "If a problem is harder than an NP-hard problem, it is NP-hard."

31 Some examples of NP-hard problems: the traveling salesman problem, the capital budgeting problem (knapsack problem), the independent set problem, the fire station problem (set covering), 0-1 integer programming, integer programming, project management with resource constraints, and thousands more.

32 Proving that a problem is hard. "To prove that problem X is hard, find a problem Y that you know is hard, and show that problem Y is easier than X." To prove that a problem X is NP-hard, start with a "similar" NP-hard problem Y. Then show that Y can be solved in polynomial time if one is permitted to use X as a subroutine, counting each call to X as taking one step.

33 On proving NP-hardness results. Suppose we know that the problem of determining whether a graph has a hamiltonian cycle is NP-hard. We will show that the problem of finding a hamiltonian path is also NP-hard. A hamiltonian cycle is a cycle that passes through each node exactly once. A hamiltonian path is a path that includes every node of G. Proof technique (a standard transformation proof): start with any instance of the hamiltonian cycle problem, denoted G = (N, A). Create an instance G' = (N', A') of the hamiltonian path problem from G with the following property: there is a hamiltonian path in G' if and only if there is a hamiltonian cycle in G.

34 A transformation. (Figure: the original network and the transformed network.) Node 1 of the original network was split into nodes 1 and 21, and new nodes 0 and 22 were connected to the two split nodes.
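The split-and-attach construction on this slide can be sketched in code. To keep the sketch generic, the copy of node 1 is labeled n+1 and the two new endpoints 0 and n+2, rather than the slide's labels 21 and 22; everything else follows the slide's description.

```python
def cycle_to_path_instance(n, edges):
    """Transform a hamiltonian-cycle instance G (nodes 1..n, given as
    an edge list) into a hamiltonian-path instance G': split node 1
    into two copies (1 and n+1), duplicating every edge incident to
    node 1 onto the copy, then attach fresh endpoint nodes 0 and n+2.
    G' has a hamiltonian path iff G has a hamiltonian cycle."""
    copy = n + 1                     # second copy of node 1
    s, t = 0, n + 2                  # the two forced path endpoints
    new_edges = []
    for u, v in edges:
        new_edges.append((u, v))
        if u == 1:
            new_edges.append((copy, v))
        if v == 1:
            new_edges.append((u, copy))
    new_edges += [(s, 1), (copy, t)]
    return new_edges

# Triangle 1-2-3-1: has a hamiltonian cycle, so G' has a hamiltonian
# path 0, 1, 2, 3, 4, 5.
edges = [(1, 2), (2, 3), (3, 1)]
print(cycle_to_path_instance(3, edges))
```

The transformation adds at most three nodes and a handful of edges, so it clearly runs in polynomial time, as an NP-hardness proof requires.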

35 Claim 1: If there is a hamiltonian cycle in the original graph, then there is a hamiltonian path in the transformed graph. (Figure: a hamiltonian cycle.) Take the two arcs of the cycle incident to node 1 in G; connect one of them to node 1 and the other to node 21. Then add in arcs (0, 1) and (21, 22).

36 Claim 2: If there is a hamiltonian path in the transformed graph, then there is a hamiltonian cycle in the original graph. (Figure: a hamiltonian path.) Delete the two arcs (0, 1) and (21, 22). Then take the other arcs in G' incident to nodes 1 and 21, and make them incident to node 1 in G.

37 Proofs of NP-hardness via transformation have two parts. Given an original instance I and a transformed instance I': Part 1, an optimal (or feasible) solution for I induces an optimal (or feasible) solution for I'. Part 2, an optimal (or feasible) solution for I' induces an optimal (or feasible) solution for I. Formulating problems as integer programs illustrates this type of transformation. Note: transformations can be difficult to develop. Great reference: Garey and Johnson 1979.

38 Problem reduction. If a problem P1 polynomially reduces to problem P2, and some polynomial-time algorithm solves P2, then some polynomial-time algorithm solves P1. Recall, for example, the construction of the dual problem to solve the primal problem in linear programming.

39 The real benefits of problem reduction, part 1. If you can reduce your problem, in polynomial time, to a problem that is easy, then the original problem is in fact easy as well. You can of course fall into the trap of reducing your problem to one that is harder to solve; that does not necessarily make the original problem hard in the first place.

40 The real benefits of problem reduction, part 2. If, on the other hand, you can show that a problem known to be NP-complete transforms to your problem, then you won't waste time searching for an efficient algorithm to find its optimal solution. Rather, you will find a heuristic that does the job to the satisfaction of your employer and move on to the next job.

41 Complexity theory takes a worst case perspective. For example, any problem solvable by an algorithm with a running time of O(n^100) is considered easy, despite this enormous running time. It is also still possible for an instance of an NP-complete problem to be solved faster than an easy problem of comparable size.

