Chapter 7 Proving Problems Hard


1 Chapter 7 Proving Problems Hard
By: Tamer Ali Aldwairi Ravi Alapati Pooja Adhikari

2 Chapter 7.2 P-Complete problems

3 Topics
Motivation
NP & NP-Complete
P & P-Complete
Complexity classes
PSA
P-Complete proofs

4 Motivation Why do we study complexity theory?
To investigate the amounts of resources (time, space) required to execute algorithms, and the inherent difficulty of providing efficient algorithms for specific computational problems.

5 NP Problems There are some problems for which no polynomial-time algorithm is known. These problems apparently require exponential time to solve on a Deterministic Turing Machine. However, they can be solved in polynomial time by a Nondeterministic Turing Machine. We call these problems NP problems (for Nondeterministic Polynomial).

6 NP-Complete NP-Complete is the set of decision problems in NP to which every other problem in NP can be reduced in polynomial time. NP-Complete problems are the most difficult problems in NP ("nondeterministic polynomial time") in the sense that they are the subclass of NP most likely to remain outside of P: if any one of them is in P, then P = NP.

7 Examples of NP-complete problems
Traveling salesman problem
Hamiltonian cycle problem
Clique problem
Subset sum problem
Boolean satisfiability problem
The vertex cover problem
The k-colorability problem

8 NP-complete classification
NP-Hard NP-equivalent NP-easy

9 P-Complete P is the complexity class containing decision problems that can be solved by a deterministic Turing machine (DTM) in a polynomial amount of computation time, or polynomial time. A problem is P-complete if it belongs to P and every problem in P reduces to it under log-space reductions.

10 Examples of P-Complete
Recognizing any regular or context-free language. Testing whether there is a path between two points a and b in a graph. Sorting, and most other standard algorithmic problems. (These examples are problems in P; the P-complete problems proper, such as PSA and the Circuit Value problem, appear later in this chapter.)

11 Is P = NP? The most important open question of complexity theory is whether the complexity class P is the same as the complexity class NP, or whether P is merely a proper subset of NP, as is generally believed. If any NP-complete problem can be solved by a polynomial-time deterministic algorithm, then every problem in NP can be solved by a polynomial-time deterministic algorithm. But no polynomial-time deterministic algorithm is known for any of them.

12 One of these two possibilities is correct
[Venn diagram: either P ⊊ NP, with the NP-complete problems lying outside P, or P = NP and the classes coincide]

13 Why it is important The relationship between these complexity classes is an unsolved question in theoretical computer science; it is considered the most important problem in the field. The Clay Mathematics Institute has offered a $1 million US prize for the first correct proof. In essence, the P = NP question asks: if positive solutions to a YES/NO problem can be verified quickly (where "quickly" means "in polynomial time"), can the answers also be computed quickly?

14 What if P NP If you know a problem is NP-complete, or if you can prove that it is reducible to one, then there is no point in looking for a P-time algorithm. You are going to have to do exponential-time work to solve this problem.

15 Complexity classes A complexity class is a class of problems grouped together according to their time and/or space complexity.
NC: can be solved very efficiently in parallel
P: solvable by a DTM in polynomial time (can be solved efficiently by a sequential computer)
NP: solvable by an NTM in polynomial time (a solution can be checked efficiently by a sequential computer)
PSPACE: solvable by a DTM in polynomial space
NPSPACE: solvable by an NTM in polynomial space
EXPTIME: solvable by a DTM in exponential time

16 Relationships between complexity classes
NC ⊆ P ⊆ NP ⊆ PSPACE = NPSPACE ⊆ EXPTIME
Saying a problem is in NP (P, PSPACE, etc.) gives an upper bound on its difficulty. Saying a problem is NP-hard (P-hard, PSPACE-hard, etc.) gives a lower bound on its difficulty: it is at least as hard to solve as any other problem in NP. Saying a problem is NP-complete (P-complete, PSPACE-complete, etc.) means that we have matching upper and lower bounds on its complexity.

17 Path system accessibility (PSA)‏
V: a finite set of vertices
S ⊆ V (starting vertices)
T ⊆ V (terminal vertices)
R ⊆ V × V × V
A vertex v ∈ V is deemed accessible if:
1) it belongs to S, or
2) there exist accessible vertices x and y such that (x, y, v) ∈ R.

18 PSA Example V = {a,b,c,d,e}, S = {a}, T = {e}
R = { (a,a,b), (a,b,c), (a,b,d), (c,d,e) }
PSA is P-complete. Membership in P: a simple iterative algorithm makes repeated passes through all the triples, identifying newly accessible vertices and adding them to the accessible set, until a complete pass fails to produce any addition. P-hardness: every problem in P reduces to PSA in logarithmic space; the following sections reduce PSA to other problems to prove them P-complete.
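The iterative algorithm described above can be sketched directly; this is a minimal Python version (function and variable names are our own), run on the example instance from this slide:

```python
def psa_accessible(V, S, T, R):
    """Decide a PSA instance by repeated passes over all triples,
    adding newly accessible vertices until a full pass adds nothing."""
    accessible = set(S)
    changed = True
    while changed:
        changed = False
        for (x, y, v) in R:
            if x in accessible and y in accessible and v not in accessible:
                accessible.add(v)
                changed = True
    # "yes" iff some terminal vertex is accessible
    return bool(accessible & set(T))

# The example instance from this slide:
V = {'a', 'b', 'c', 'd', 'e'}
S = {'a'}
T = {'e'}
R = [('a', 'a', 'b'), ('a', 'b', 'c'), ('a', 'b', 'd'), ('c', 'd', 'e')]
print(psa_accessible(V, S, T, R))  # True: e becomes accessible via b, c, d
```

Each pass costs O(|R|) and at most |V| passes can add a vertex, so the running time is polynomial, as the slide claims.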

19 P-complete proofs Unit resolution Circuit value Depth first search

20 Unit resolution Can the empty clause be derived by unit resolution? (E.g., {x} and {x', y1, …, yn} yield {y1, …, yn}.) An exhaustive (brute-force) search algorithm works in polynomial time: if we have n variables and m initial clauses, we need at most 2mn resolution steps. We prove the problem P-complete by transforming PSA into it.
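The brute-force unit-resolution test can be sketched as follows; this is an illustrative Python version (the literal encoding, integers with negation for complementation, is our own choice):

```python
def unit_resolution_empty_clause(clauses):
    """Exhaustively apply unit resolution: resolve each unit clause {x}
    against every clause containing the complementary literal, until the
    empty clause appears or no new clause can be derived.
    Literals are nonzero integers; x and -x are complementary."""
    clauses = {frozenset(c) for c in clauses}
    changed = True
    while changed:
        changed = False
        units = [c for c in clauses if len(c) == 1]
        for u in units:
            (lit,) = u
            for c in list(clauses):
                if -lit in c:
                    resolvent = frozenset(c - {-lit})
                    if not resolvent:
                        return True  # derived the empty clause
                    if resolvent not in clauses:
                        clauses.add(resolvent)
                        changed = True
    return False

# {x}, {x', y}, {y'}: unit resolution derives {y}, then the empty clause.
print(unit_resolution_empty_clause([{1}, {-1, 2}, {-2}]))  # True
```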

21 Unit resolution Each initially accessible vertex x becomes a one-literal clause {x}, and the terminal vertex t becomes the one-literal clause {t'}. Each triple becomes a three-literal clause: (x,y,z) becomes {x', y', z}, i.e., x ∧ y → z. The empty clause is then derivable by unit resolution exactly when the terminal vertex is accessible. The transformation runs in logarithmic space.

22 Overview Some P-completeness proofs Circuit Value Problem
Depth-First Search

23 Circuit Value Problem A circuit (a combinational logic circuit realizing some Boolean function) is represented by a sequence a1, a2, …, an, where each ai is one of three entities:
i) a logic value (True or False)
ii) an AND gate
iii) an OR gate
The output of the circuit is the output of the last gate, an. Question: is the output of the circuit True?
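A direct evaluator for such a circuit might look like this; the concrete encoding (booleans for logic values, ('AND', i, j) / ('OR', i, j) tuples with 0-based references to earlier entries) is an assumption for illustration, since the slides do not fix one:

```python
def circuit_value(circuit):
    """Evaluate a circuit given as a sequence: each entry is either a
    boolean constant or a gate ('AND'/'OR', i, j) whose inputs are the
    values of earlier entries i and j. The circuit's output is the
    value of the last entry."""
    values = []
    for entry in circuit:
        if isinstance(entry, bool):
            values.append(entry)
        else:
            op, i, j = entry
            if op == 'AND':
                values.append(values[i] and values[j])
            else:  # 'OR'
                values.append(values[i] or values[j])
    return values[-1]

# a1 = True, a2 = False, a3 = a1 OR a2, a4 = a1 AND a3
print(circuit_value([True, False, ('OR', 0, 1), ('AND', 0, 2)]))  # True
```

Evaluation is a single left-to-right pass, which is why CV is in P; the interesting part is that it is also P-hard.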

24 Circuit Value Problem It is a decision problem!
Proof by a reduction from PSA. We use the version of PSA from Theorem 6.8, which has a single element in its target set. Basic idea: convert a triple (x, y, z) of PSA into an AND gate with inputs x, y and output z.

25 PSA PSA: Path system accessibility
An instance of PSA is composed of a finite set V of vertices, a subset S ⊆ V of starting vertices, a subset T ⊆ V of terminal vertices, and a relation R ⊆ V × V × V. A vertex v ∈ V is deemed accessible if it belongs to S or if there exist accessible vertices x and y such that (x, y, v) ∈ R. Does T have any accessible vertices?

26 PSA Example V = {a, b, c, d, e}, S = {a}, T= {e}
R = { (a, a, b), (a, b, c), (a, b, d), (c, d, e) }

27 Circuit Value Problem The circuit has all elements of PSA as inputs. An input is set to True if it is an element of the initial set; all other inputs are set to False.

28 Circuit Value Problem How do we propagate logical values for each of the inputs? In PSA, elements become accessible through the application of the proper sequence of triples; in our circuit, this corresponds to transforming certain inputs from False to True depending on the output of an AND gate. Each step in the propagation corresponds to the application of one of the triples from PSA. The output of the AND gate could be False even though the value we are propagating is already True, so we combine the previous truth value of the current element and the output of the AND gate through an OR gate to obtain the new truth value.

29 Circuit Value Problem For each element z of PSA, set up a "propagation line" from input to output. Initialize the line value to True for elements of the initial set and False for everything else. Update the truth value for each triple of PSA that has z as its third element. When all propagation is complete, the line corresponding to the element in the target set of PSA is the output of the circuit.
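The propagation-line update just described, new value of z = old value of z OR (value of x AND value of y), can be simulated directly in software rather than as a circuit; this sketch (names assumed) runs the stages on the earlier PSA example:

```python
def propagate(value, triples, stages):
    """One 'propagation line' per element: for each triple (x, y, z),
    the new truth value of z is  value[z] OR (value[x] AND value[y]),
    mirroring the AND-then-OR gate pair described above. The triples
    are processed in a fixed order, repeated for the given number of
    stages."""
    value = dict(value)
    for _ in range(stages):
        for (x, y, z) in triples:
            value[z] = value[z] or (value[x] and value[y])
    return value

# The PSA example from earlier slides: V = {a,b,c,d,e}, S = {a}, T = {e}
value = {v: (v == 'a') for v in 'abcde'}
triples = [('a', 'a', 'b'), ('a', 'b', 'c'), ('a', 'b', 'd'), ('c', 'd', 'e')]
final = propagate(value, triples, stages=4)  # n-1 = 4 stages for n = 5
print(final['e'])  # True: the target line ends up True
```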

30 Circuit Value Problem In what order should we process the triples?
The order is crucial, so we use a fixed ordering. A single pass through the fixed ordering may yield only one new accessible element, so we have to repeat the process with the same ordering.

31 Circuit Value Problem How many times do we need to repeat the process?
N-1 stages, where N is the total number of elements: in order to make a difference, each pass through the ordering must produce at least one newly accessible element. We could use just N-K-L+1 stages, where K is the number of initially accessible elements and L is the size of the target set, but N-K-L+1 and N-1 are asymptotically of the same order.

32 Circuit Value Problem From an instance of PSA with n elements and m triples, we build a circuit with n propagation lines, each with a total of m·(n-1) propagation steps grouped into n-1 stages. We can view this circuit as a matrix of n rows and m·(n-1) columns.

33 Circuit Value Problem Example: V = { a, b, c, d}, S = {a}, T = {d}
R = { (a, a, b), (a, b, c), (b, c, d) }

34 Circuit Value Problem
for i = 1 to n-1 do      (* n-1 stages for n elements *)
  for j = 1 to m do      (* one pass through all m triples *)
    for k = 1 to n do    (* update all n propagation lines *)
      if k = z then      (* z is the third element of triple j *)
        place the real circuit fragment
      else
        place a fake circuit fragment

35 Circuit Value Problem The indices of the gates are simple products of the three loop indices and of the constant size of the AND-OR circuit fragment, and can be computed on the fly, so the transformation takes only logarithmic space. Since the special version of PSA with a single element forming the entire initially accessible set can be used for the reduction, Monotone CV is P-complete even when exactly one of the inputs is set to True and all others are set to False. We can also replace the AND and OR gates with universal gates (NAND or NOR). CV is, in effect, a version of Satisfiability in which the truth assignment is given and we merely ask whether it satisfies the Boolean formula represented by the circuit.

36 Depth First Search Given a rooted graph (directed or not) and two distinguished vertices u and v, will u be visited before or after v in a recursive depth-first search of the graph? We prove this problem P-complete by transforming CV to it.
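The decision question itself is easy to state in code (the graph encoding, adjacency lists with a fixed neighbor order, is our own choice; what makes the problem P-hard is its inherent sequentiality, not the difficulty of computing the answer):

```python
def dfs_order(graph, root):
    """Recursive depth-first search from the root; neighbors are explored
    in the fixed order given by each adjacency list. Returns the list of
    vertices in order of first visit."""
    visited, order = set(), []
    def visit(u):
        visited.add(u)
        order.append(u)
        for w in graph.get(u, []):
            if w not in visited:
                visit(w)
    visit(root)
    return order

def u_before_v(graph, root, u, v):
    """The decision question: is u visited before v?"""
    order = dfs_order(graph, root)
    return order.index(u) < order.index(v)

graph = {'r': ['u', 'v'], 'u': ['v'], 'v': []}
print(u_before_v(graph, 'r', 'u', 'v'))  # True: DFS visits r, u, v
```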

37 Depth First Search To simplify the construction, consider the version of CV in which the circuit has a single input set to True, has a single output, and is composed entirely of NOR gates. We create a gadget to be used for each gate. This graph fragment has two vertices to connect it to the inputs of the gate and as many vertices as needed for the fan-out of the gate.

38 Depth First Search We set up a gadget with m + 6 vertices:
Entrance vertex E(i)
Exit vertex X(i)
In(i,1) and In(i,2): the inputs of the gate
S(i) and T(i): the beginning and end of an up-and-down chain of m vertices that connect to the outputs of the gate

39 Depth First Search We have two ways of traversing this gadget from the entrance to the exit; the two traversals visit S(i) and T(i) in opposite orders. First path: proceed from E(i) through In(i,1) and In(i,2) to S(i), then down the chain, picking up all vertices (in other gadgets) fed by the output of this gate, ending at T(i), then moving to X(i). This traversal visits all of the vertices in the gadget, plus all of the vertices in other gadgets (vertices labeled In(jx, y) where y is 1 or 2 and 1 ≤ x ≤ m) that correspond to the fan-out of the gate. Second path: move from E(i) to T(i), ascend the chain of m vertices without visiting any of the input vertices in other gadgets, reach S(i), and from there move to X(i). This traversal does not visit any vertex corresponding to inputs.

40 Depth First Search We chain all gadgets together by connecting X(i) to E(i+1). The complete construction can easily be accomplished in logarithmic space. We claim that the output of the last gate, gate n, is True if and only if the depth-first search visits S(n) before T(n).

41 Depth First Search The proof is an easy induction:
The output of a NOR gate is True if and only if both of its inputs are False. So, by induction, the vertices In(n,1) and In(n,2) of the last gate have not been visited in the traversal of the previous gadgets and thus must be visited in the traversal of the last gadget, which can be done only by using the first traversal, which visits S(n) before T(n).

42 Chapter 7.3 From Decision to Optimization and Enumeration

43 Motivation Main purpose:
Complexity theory has been very successful in characterizing difficult decision problems, but no comparable treatment exists for search, optimization, and enumeration problems. This subchapter discusses the complexity classification of these other kinds of problems.

44 Background The computational complexity of a problem is its classification in terms of its inherent difficulty, usually measured by time or space usage on a particular computational model. An optimization problem is the problem of finding the best solution among all feasible solutions.

45 7.3.1 Turing Reductions and Search Problems
The previous chapters were mainly restricted to many-one reductions when discussing decision problems, and it was assumed that the complexity classes are closed under many-one reductions. Many-one reduction is a special case of Turing reduction, and Turing reduction enlarges the scope to search and optimization problems.

46 Definition 7.1 A problem is NP-hard if every problem in NP Turing reduces to it in polynomial time; it is NP-easy if it Turing reduces to some problem in NP in polynomial time; and it is NP-equivalent if it is both NP-hard and NP-easy.

47 An NP-hard problem is solvable in polynomial time only if P = NP, in which case all NP-easy problems are tractable. NP-equivalent is analogous to NP-complete: NP-equivalence is a generalization of NP-completeness through Turing reductions.

48 Conversion from the optimization version to the decision version
The search and optimization versions of a problem can be reduced to its decision version. This is done by adding a bound B on the value to optimize and asking: is there a solution whose value is at most B (or at least B)?

49 Relation between the two problems
If we have a solution to the optimization version, we can compare its value to the bound and answer "yes" or "no". Thus if the optimization problem is tractable, so is its decision version; conversely, if the decision problem is hard, then the optimization version is also hard.

50 Technique for reducing the optimization version to the decision version
First find the optimal value of the objective function by binary search (using the decision oracle), then build an optimal solution piece by piece, verifying each choice through further calls to the oracle for the decision version.

51 Steps of the reduction for optimization problems:
Establish lower and upper bounds on the value of the objective function at an optimal solution. Use binary search with the decision-problem oracle to determine the value of an optimal solution.

52 Determine the change to be made to an instance when a first element of the solution has been chosen.
Build a solution one element at a time, starting from the empty set: to determine which element to add next, try all remaining elements, apply the corresponding changes, and ask the oracle whether an optimal solution still exists for the instance formed by the remaining pieces, changed as needed.

53 Knapsack is NP-easy Let an instance of Knapsack have n objects, an integer-valued weight function w, an integer-valued value function v, and weight bound B. Let wmax be the weight of the heaviest object and vmax the value of the most valuable object. The value of the optimal solution is at least zero and no larger than n·vmax. Although this range is exponential in the input size, it can be searched with a polynomial number of comparisons using binary search: the algorithm issues O(log n + log vmax) queries to the decision oracle, with the value bound initially set to some value and then adjusted after each query. The outcome of the search is the value of the optimal solution, which we call Vopt.

54 Now we verify the composition of the optimal solution. We proceed one object at a time: for each object in turn, we determine whether it may be included in an optimal solution. Initially the partial solution under construction includes no objects. To pick the first object, we try each in turn: when trying object i, we ask the oracle whether there exists a solution to the new Knapsack instance formed of the (n-1) objects remaining after removing object i, with the weight bound set to B-w(i) and the value bound set to Vopt-v(i).

55 If the answer is "no", we try the next object; eventually the answer must be "yes", since a solution with value Vopt is known to exist. The corresponding object, say j, is included in the partial solution; the weight bound is then updated to B-w(j) and the value bound to Vopt-v(j), and the process is repeated until the updated value bound reaches zero. At worst, for a solution including k objects, we shall have examined n-k+1 choices and thus called the decision routine n-k+1 times for our first choice, n-k times for our second choice, and so on.

56 Hence the construction phase requires only a polynomial number of calls to the oracle. The conversion of the optimization version of Knapsack to its decision version does only a polynomial amount of work between calls, so the complete reduction runs in polynomial time.
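The whole reduction can be sketched as follows; note that, for illustration only, a brute-force routine stands in for the decision oracle (in the actual reduction the oracle is a black box), and all names are our own:

```python
from itertools import combinations

def knapsack_decision(items, B, V):
    """Decision oracle (brute force, for illustration only): is there a
    subset of items with total weight <= B and total value >= V?"""
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            if sum(w for w, v in combo) <= B and sum(v for w, v in combo) >= V:
                return True
    return False

def knapsack_via_oracle(items, B):
    """Turing reduction sketch: solve the optimization version of
    Knapsack using only calls to the decision oracle."""
    # Step 1: binary search for the optimal value Vopt.
    lo, hi = 0, sum(v for w, v in items)
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if knapsack_decision(items, B, mid):
            lo = mid
        else:
            hi = mid - 1
    vopt = lo
    # Step 2: build an optimal solution one object at a time, checking
    # each candidate with the oracle on the reduced instance.
    solution, remaining, b, v_needed = [], list(items), B, vopt
    while v_needed > 0:
        for i, (w, val) in enumerate(remaining):
            rest = remaining[:i] + remaining[i + 1:]
            if w <= b and knapsack_decision(rest, b - w, v_needed - val):
                solution.append((w, val))
                remaining, b, v_needed = rest, b - w, v_needed - val
                break
    return vopt, solution

items = [(2, 3), (3, 4), (4, 5), (5, 6)]  # (weight, value) pairs
print(knapsack_via_oracle(items, 5))
```

The reduction itself does only polynomial work between oracle calls; it is the stand-in oracle here that is exponential.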

57 Notes: A search or optimization problem is called self-reducible whenever it reduces to its own decision version. To prove a problem NP-easy, we need only show that it reduces to some NP-complete decision problem.

58 Lemma 7.1: Let Φ be some NP-complete problem; then an oracle for any problem in NP can be replaced by an oracle for Φ with at most a polynomial change in the running time. By this lemma, TSP is NP-easy.

59 Conclusion Many-one reductions describe the complexity of decision problems. Turing reductions allow us to extend the classification of decision problems to their search and optimization versions.

60 References
Bernard M. Moret, The Theory of Computation, Pearson Education, 1998.
Dexter C. Kozen, Theory of Computation, Springer, 2006.
"Tractable and Intractable Computational Problems," web.njit.edu/~leung/cis435dl/npc.pdf (current October 28, 2007).
"NP-hard," Wikipedia, the free encyclopedia (current October 28, 2007).

61 Thank you.

