Submitted by: Estrella Eisenberg, Yair Kaufman, Ohad Lipsky, Riva Gonen Shalom.



P/Poly is the class of languages decidable by Turing machines that accept an external “advice” string in the computation. Formally: L ∈ P/Poly if there exist a polynomial-time two-input machine M, a polynomial p(·), and a sequence {a_n} of “advice” strings with |a_n| ≤ p(n), such that for every x of length n: M(x, a_n) = 1 iff x ∈ L.

An alternative definition: L ∈ P/Poly iff there is a sequence of circuits {C_n}, where C_n has n inputs and 1 output, its size is bounded by p(n), and for every x of length n: C_n(x) = 1 iff x ∈ L. Such a sequence is called a non-uniform family of circuits.
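To make the circuit-family definition concrete, here is a small Python sketch (not from the lecture; all names are illustrative): a non-uniform family {C_n} represented as one separate gate list per input length n. Here every C_n happens to compute the parity of its input, but nothing forces the circuits for different n to be related.

```python
def make_parity_circuit(n):
    """Build C_n as a gate list: ('XOR', i, j) combines two earlier wires.
    Wires 0..n-1 are the inputs; wire n-1+k is the output of gate k."""
    gates = []
    acc = 0
    for i in range(1, n):
        gates.append(('XOR', acc, i))
        acc = n + len(gates) - 1   # wire index of the gate just added
    return gates

def eval_circuit(gates, x):
    """Evaluate a gate-list circuit on input bits x; the last wire is the output."""
    wires = list(x)
    for op, i, j in gates:
        if op == 'XOR':
            wires.append(wires[i] ^ wires[j])
    return wires[-1] if gates else x[0]

# One circuit per input length: a non-uniform family.
family = {n: make_parity_circuit(n) for n in range(1, 6)}
print(eval_circuit(family[4], [1, 0, 1, 1]))   # parity of 1011 -> 1
```

Each C_n has size linear in n, well within the polynomial bound p(n) of the definition.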

There is not necessarily any connection between the circuits for different input lengths.

Proof:  Circuit  TM with advice: – – There exists a family of circuits {Cn} deciding L and |C n | is poly- bounded. –Given an input x, use a standard circuit encoding for C |x| as advice. –The TM simulates the circuit and accepts/rejects accordingly. Theorem:P/Poly definitions equivalence

TM with advice ⇒ circuit:
– There exists a TM M taking advice {a_n}.
– As in the proof of Cook’s theorem, build from M(a_n, ·) a circuit which, for input x of length n, outputs M(a_n, x).
– Running over all n, a family of circuits is created from the advice strings.

Using P/Poly to attack P vs NP
Claim: P ⊆ P/Poly — just use empty advice.
If we find a language L ∈ NP with L ∉ P/Poly, then we have proved P ≠ NP.
P/Poly and NP both use an external string in the computation. How are they different?

The difference between P/Poly and NP
1. For a given n, P/Poly has a universal advice string a_n, as opposed to NP, where every x of length n may have a different witness.
2. In NP, for every x ∉ L and every witness w, M(x, w) = 0. This is not true for P/Poly.
3. P/Poly = co-P/Poly, but we do not know whether NP = co-NP.

The power of P/Poly
Theorem: BPP ⊆ P/Poly
Reminder: in BPP, the computation uses a string r of poly(n) coin tosses.
Proof: By standard amplification we get a BPP machine M such that, for any x ∈ {0,1}^n, the probability (over r) that M decides x incorrectly is < 2^-n. By a union bound, the fraction of strings r that are bad for some x of length n is < 2^n · 2^-n = 1, so there exists a single r_n that decides every x of length n correctly. Take a_n = r_n as the advice.
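The counting argument behind BPP ⊆ P/Poly can be illustrated numerically. The sketch below is a toy model (the parameters and the error pattern are invented for illustration): each input has fewer than a 2^-n fraction of bad coin strings, so a union bound leaves at least one string r that is good for every input simultaneously; that r can serve as the advice.

```python
from itertools import product

n, m = 3, 6                                  # input length, number of coin flips
inputs = list(product([0, 1], repeat=n))     # all 2^3 = 8 inputs
coins = list(product([0, 1], repeat=m))      # all 2^6 = 64 coin strings

# Hypothetical error pattern: each input x has 7 < 2^(m-n) = 8 bad coin
# strings, i.e. error probability 7/64 < 2^-n (chosen deterministically
# so the run is reproducible).
bad = {x: {coins[(7 * i + sum(x)) % len(coins)] for i in range(7)}
       for x in inputs}

# Union bound: at most 8 * 7 = 56 < 64 coin strings are bad for some x,
# so some coin string is good for every input at once.
good = [r for r in coins if all(r not in bad[x] for x in inputs)]
print(len(good), "of", len(coins), "coin strings work for all inputs")
```

The guarantee is only existential: the argument shows a good advice string exists for each n, not how to find it efficiently — which is exactly why the result lands in the non-uniform class P/Poly.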

P/Poly includes non-recursive languages
Example: all unary languages (subsets of {1}*) are in P/Poly — the advice string a_n can be the single bit χ_L(1^n), and the machine checks that the input is unary and outputs that bit.

There are non-recursive unary languages: take {1^index(x) | x ∈ L}, where index(x) is the position of x in the lexicographic order of all strings and L is non-recursive.
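A small Python sketch of the unary-advice idea (the set H below is an arbitrary finite stand-in for a possibly non-recursive set):

```python
def machine(x, advice_bit):
    """Poly-time two-input machine M(x, a_n): check the input is unary,
    then just output the advice bit."""
    if any(c != '1' for c in x):
        return False
    return advice_bit == 1

# Hypothetical unary language L = {1^n : n in H}; the advice sequence
# a_n = chi_L(1^n) absorbs all the (possibly uncomputable) hardness of L.
H = {1, 4, 6}
advice = {n: (1 if n in H else 0) for n in range(10)}

print(machine('1111', advice[4]))   # 1^4 in L -> True
print(machine('111', advice[3]))    # 1^3 not in L -> False
print(machine('101', advice[3]))    # not unary -> False
```

The machine itself is trivial; every bit of information about L lives in the advice sequence, which need not be computable.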

Sparse languages
Definition: a language S is sparse if there exists a polynomial p(·) such that for every n, |S ∩ {0,1}^n| ≤ p(n).
Theorem: NP ⊆ P/Poly iff every L ∈ NP is Cook-reducible to a sparse language.

In other words, a language is sparse when it contains a “small” number of words of each length — at most p(n) words of length n. Example: every unary language is sparse (with p(n) = 1).
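As a quick sanity check on the definition, one can count the words of each length in a finite sample of a language (toy data, Python):

```python
from collections import Counter

def words_per_length(language_sample):
    """Map each length n to the number of sample words of that length."""
    return Counter(len(w) for w in language_sample)

unary = {'1', '11', '111', '1111'}
print(words_per_length(unary))   # one word per length: consistent with p(n) = 1

dense = {'00', '01', '10', '11'}
print(words_per_length(dense))   # all 2^n words of length 2: not sparse-looking
```

Of course sparseness is an asymptotic property of an infinite language; the count only illustrates what the bound p(n) constrains at each length.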

Proof: It is enough to prove: SAT ∈ P/Poly ⟺ SAT is Cook-reducible to a sparse language.
(⇒) By the definition of P/Poly, SAT ∈ P/Poly means there exist a sequence of advice strings {a_n} with |a_n| ≤ q(n) for some polynomial q(·), and a polynomial-time Turing machine M such that M(φ, a_n) = χ_SAT(φ) for every formula φ of length n.
Definition: S_n^i = 0^(i-1) 1 0^(q(n)-i)
S = {1^n 0 S_n^i | n > 0 and bit i of a_n is 1}
S is sparse, since for every n, |S ∩ {0,1}^(n+q(n)+1)| ≤ |a_n| ≤ q(n).

S has one element for each ‘1’ in the advice strings. For each n, even if a_n is all 1’s, this contributes at most |a_n| elements of length n+q(n)+1.

Cook-reduction of SAT to S:
Input: φ of length n.
Reconstruct a_n by q(n) queries to S. [The queries are 1^n 0 S_n^i for 1 ≤ i ≤ q(n).]
Run M(φ, a_n), thereby deciding SAT in polynomial time.
We decided SAT with a polynomial number of queries to an S-oracle, which means SAT is Cook-reducible to the sparse language S.

Reconstruction is done bit by bit: bit i of a_n is 1 iff 1^n 0 S_n^i ∈ S.
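The encoding and the bit-by-bit reconstruction can be sketched in Python (the helper names are ours, not the lecture’s):

```python
def encode_advice(n, a_n):
    """Build the slice of the sparse language S contributed by a_n:
    bit i of a_n becomes the string 1^n 0 0^(i-1) 1 0^(q(n)-i)."""
    q = len(a_n)
    S = set()
    for i, bit in enumerate(a_n, start=1):
        if bit == '1':
            S.add('1' * n + '0' + '0' * (i - 1) + '1' + '0' * (q - i))
    return S

def reconstruct_advice(n, q, S):
    """Recover a_n bit by bit with q membership queries to S."""
    bits = []
    for i in range(1, q + 1):
        query = '1' * n + '0' + '0' * (i - 1) + '1' + '0' * (q - i)
        bits.append('1' if query in S else '0')
    return ''.join(bits)

a_n = '10110'
S = encode_advice(3, a_n)
print(reconstruct_advice(3, 5, S))   # -> '10110'
```

Every element of S has length n+q(n)+1 and there are at most |a_n| of them, which is exactly the sparseness bound used in the proof.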

(⇐) [SAT is Cook-reducible to a sparse language ⇒ SAT ∈ P/Poly]
To prove that SAT ∈ P/Poly, we construct a sequence of advice strings {a_n} and a deterministic polynomial-time machine M that decides SAT using {a_n}.
SAT is Cook-reducible to a sparse language S, so there exists an oracle machine M^S which decides SAT in polynomial time t(·). On input φ, M^S makes at most t(|φ|) oracle queries, each of length at most t(|φ|).

Construct a_n by concatenating all strings of length at most t(n) in S.
S is sparse ⇒ there exists a polynomial p(·) such that for every n, |S ∩ {0,1}^n| ≤ p(n).
For every i there are at most p(i) strings of length i in S, so |a_n| ≤ Σ_{i ≤ t(n)} i·p(i), and a_n is polynomial in length.
Now, given a_n, the oracle queries can be answered by scanning a_n: M acts exactly as M^S, except that each oracle query is answered from a_n. M is a deterministic polynomial-time machine that decides SAT using a_n, therefore SAT ∈ P/Poly. ∎
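A Python sketch of this direction (the sparse set S and the bound t(n) below are invented toy values):

```python
def build_advice(S, t_n):
    """a_n lists every string of S of length <= t(n); by sparseness this
    list is polynomial in size."""
    return sorted(w for w in S if len(w) <= t_n)

def answer_query(advice, w):
    """Simulate the oracle query 'w in S?' by scanning the advice list
    instead of asking the oracle."""
    return w in advice

# Hypothetical sparse S and query-length bound t(n) = 6
S = {'0', '10', '1100', '111000'}
a_n = build_advice(S, 6)
print(answer_query(a_n, '1100'))    # True
print(answer_query(a_n, '1111'))    # False
```

The deterministic machine M is just M^S with every oracle call replaced by such a scan, which adds only polynomial overhead.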

P = NP and Karp reductions
Theorem: P = NP iff every language L ∈ NP is Karp-reducible to a sparse language.
Proof: (⇒) P = NP ⇒ any L in NP is Karp-reducible to a sparse language.
Given L ∈ NP, we Karp-reduce it to {1} using the function f(x) = 1 if x ∈ L, and f(x) = 0 otherwise.

{1} is obviously a sparse language. The reduction function f is polynomial-time computable, since it decides whether x ∈ L: L ∈ NP and we assume P = NP, so deciding L takes polynomial time. The reduction is correct because x ∈ L iff f(x) ∈ {1}.
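A minimal sketch of this reduction, with a hypothetical polynomial-time decider standing in for the one that the P = NP assumption provides:

```python
def decide_L(x):
    """Stand-in poly-time decider for L (exists by the P = NP assumption).
    For illustration, L = strings with an even number of ones."""
    return x.count('1') % 2 == 0

def f(x):
    """Karp reduction to the sparse language {1}: x in L <=> f(x) in {'1'}."""
    return '1' if decide_L(x) else '0'

print(f('1010'))   # even number of ones -> '1'
print(f('1000'))   # odd number of ones -> '0'
```

The reduction is trivial precisely because the assumption P = NP already hands us a decider; all the content of the theorem is in the converse direction.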

(⇐) SAT is Karp-reducible to a sparse language ⇒ P = NP.
We prove a weaker claim (even though the stronger claim also holds):
SAT is Karp-reducible to a guarded sparse language ⇒ P = NP.
Definition: a sparse language S is guarded if there exists a sparse language G such that S ⊆ G and G ∈ P.
For example, S = {1^n 0 S_n^i | n > 0 and bit i of a_n is 1} is guarded by the sparse language G = {1^n 0 S_n^i | n > 0 and 0 < i ≤ q(n)}.

It clearly suffices to prove the theorem for SAT instead of for every L in NP, since SAT is NP-complete.

As claimed, we have a Karp reduction f from SAT to a guarded sparse language S.
Input: a boolean formula φ = φ(x_1, …, x_n).
We build the assignment tree of φ by assigning, at each level, the next variable both ways (0 and 1). Each node is labeled by its partial assignment; thus the leaves are labeled with the n-bit strings of all possible assignments to φ.

At the leaves the assignment is full, so the formula evaluates to a boolean constant.

The assignment tree: the two children of the root φ are φ_1 = φ(1, x_2, …, x_n) and φ_0 = φ(0, x_2, …, x_n), and so on down the tree.
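The children of a node are obtained by restricting one variable. This can be sketched in Python, using a DIMACS-style clause list of signed variable indices as the (assumed) formula representation:

```python
def restrict(cnf, var, value):
    """Return the formula with variable `var` set to `value` (0/1):
    satisfied clauses are dropped, falsified literals are deleted
    (an empty clause [] marks a falsified clause)."""
    sat_lit = var if value else -var
    out = []
    for clause in cnf:
        if sat_lit in clause:
            continue
        out.append([l for l in clause if l != -sat_lit])
    return out

# phi = (x1 or x2) and (~x1 or x3)
phi = [[1, 2], [-1, 3]]
print(restrict(phi, 1, 1))   # phi_1 = [[3]]
print(restrict(phi, 1, 0))   # phi_0 = [[2]]
```

At a leaf all variables are assigned, so the restricted clause list is either empty (the constant True) or contains an empty clause (the constant False), matching the boolean constants at the leaves of the tree.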

We decide φ by a DFS on the tree, using branch and bound. The algorithm backtracks from a node only if there is no satisfying assignment in its entire subtree. At each node β it computes x_β = f(φ_β), where f is the reduction, S is the guarded sparse language of the reduction, and G is the sparse language in P by which S is guarded. We also maintain a set B ⊆ G − S.

x_β is computed in polynomial time because a Karp reduction is a polynomial-time reduction. B is well defined since S ⊆ G.

Algorithm: Input φ = φ(x_1, …, x_n).
1. B = ∅
2. Tree-Search(λ) [the empty assignment]
3. If the above call did not halt with an assignment, reject φ as unsatisfiable.

Initially B is empty — we have no information yet about which part of G lies outside S. We call the procedure Tree-Search for the first time with the empty assignment λ.

Tree-Search(β)
if |β| = n // we’ve reached a leaf — a full assignment
  if φ_β ≡ True then output the assignment β and halt
  else return
if |β| < n
  a. compute x_β = f(φ_β)
  b. if x_β ∉ G then return // x_β ∉ G ⇒ x_β ∉ S ⇒ φ_β ∉ SAT
  c. if x_β ∈ B then return // x_β ∈ B ⇒ x_β ∈ G − S ⇒ φ_β ∉ SAT
  d. Tree-Search(β1)
  e. Tree-Search(β0)
  f. add x_β to B // we backtracked from both children, so x_β ∈ G − S
  g. return

x_β ∈ G can be tested in polynomial time because G ∈ P. We backtrack (return from the recursion) when we know that x_β ∉ S [because x_β ∈ B or x_β ∉ G]: then there is no satisfying assignment extending the partial assignment β, so there is no use expanding it further. If we backtrack from both children of β, we add x_β to the set B ⊆ G − S.
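The whole Tree-Search procedure can be made runnable in a toy setting. In the sketch below (all names are ours), the reduction f is a stand-in that returns a canonical string for the restricted formula, and the guard G is taken to be trivial (every value passes the G test), so the set B does all the pruning; with a real sparse G in P, the same code would visit only polynomially many nodes.

```python
def restrict(cnf, var, value):
    """Set `var` to `value`: drop satisfied clauses, delete falsified
    literals ([] marks a falsified clause)."""
    sat_lit = var if value else -var
    out = []
    for clause in cnf:
        if sat_lit in clause:
            continue
        out.append([l for l in clause if l != -sat_lit])
    return out

def f(cnf):
    """Stand-in for the Karp reduction: a canonical string for the formula."""
    return repr(sorted(sorted(c) for c in cnf))

def in_G(x):
    return True   # trivial guard in this toy run; really a poly-time test for G

def solve(cnf, n):
    """Branch-and-bound DFS over the assignment tree (Tree-Search)."""
    B = set()     # f-values we backtracked from (a subset of G - S)

    def tree_search(phi, depth, beta):
        if [] in phi:                          # a clause is already falsified
            return None
        if depth == n:                         # leaf: full assignment
            return beta if not phi else None   # empty formula == all satisfied
        x = f(phi)
        if not in_G(x) or x in B:              # steps b and c: prune
            return None
        for val in (1, 0):                     # steps d and e: both children
            res = tree_search(restrict(phi, depth + 1, val),
                              depth + 1, beta + [val])
            if res is not None:
                return res
        B.add(x)                               # step f: both children failed
        return None

    return tree_search(cnf, 0, [])

# phi = (x1 or x2) and (~x1 or x3) and (~x2 or ~x3)
print(solve([[1, 2], [-1, 3], [-2, -3]], 3))   # -> [1, 0, 1]
print(solve([[1], [-1]], 1))                   # -> None (unsatisfiable)
```

With the trivial guard this is just memoized DFS, which can still take exponential time; the theorem’s polynomial bound comes entirely from the sparseness of the real G.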

Correctness
If φ is satisfiable, we will find some satisfying assignment: for every node β on the path from the root to a satisfying leaf, x_β ∈ S, so we never prune such nodes and keep developing their children. When an assignment is found, it is first checked to evaluate to True, and only then output. The algorithm answers “no” only after backtracking from every node it developed, i.e., after ruling out all possible assignments.

Complexity analysis:
If the number of nodes we develop is polynomially related to the number of strings in G, the algorithm is polynomial, due to G’s sparseness. So it suffices to show that only a polynomial part of the tree is developed.
Claim: let β and γ be different nodes in the tree such that neither is an ancestor of the other and x_β = x_γ. Then it is not possible that the children of both nodes were developed.

To avoid developing, over and over via different nodes, a part of the tree that does not lead to a satisfying assignment, we maintain the set B, which records the values x_β from which we have backtracked. We do not develop a node whose x-value is in B.

Proof: W.l.o.g. assume we reach β before γ. Since β is not an ancestor of γ, we arrive at γ only after backtracking from β. If x_β ∉ G then x_γ ∉ G (since x_β = x_γ), and we develop the children of neither. Otherwise, after backtracking from β, x_β ∈ B, thus x_γ ∈ B, so we do not develop γ’s children.

Last point: only a polynomial part of the tree is developed.
Proof: G is sparse, so at each level of the tree the values x_β that lie in G take at most p(n) distinct values (p applied to the length of x_β, which is polynomial in n); moreover, no two different nodes on the same level are ancestors of each other. Therefore, by the previous claim, the number of nodes on each level whose children are developed is bounded by p(n). ⇒ The overall number of developed nodes is bounded by n·p(n).

Each level of the tree corresponds to a different number of variables already assigned in the formula. For the corresponding length there is only a polynomial number of strings belonging to G, since G is a sparse language.

We have shown that with this algorithm, using the assumption that SAT is Karp-reducible to a guarded sparse language, we can decide SAT in polynomial time ⇒ SAT ∈ P ⇒ P = NP. ∎