
Slide 1: Slides by Golan Weisz, Omer Ben Shalom, Nir Ailon & Tal Moran. Adapted from Oded Goldreich's course lecture notes by Moshe Lewenstein, Yehuda Lindell & Tamar Seeman.

Slide 2: In This Lecture
- The non-uniform polynomial-time class P/poly
- The two equivalent definitions of the class
- The power of P/poly
- BPP is contained in P/poly
- P/poly includes non-recursive languages

Slide 3: What is P/poly? (§8.1)
P/poly comprises all languages accepted by Turing machines working with an external "advice" string. Formally, L ∈ P/poly if there exist a polynomial-time two-input machine M, a polynomial p(·), and a sequence {a_n} of advice strings with |a_n| ≤ p(n), such that the acceptance condition displayed below holds for every input.
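Written out in display form (restating the definition above; p(·) is the polynomial bounding the advice length):
\[ L \in \mathrm{P/poly} \iff \exists\ \text{poly-time } M,\ \exists\,\{a_n\}_{n\ge 1}\ \text{with}\ |a_n|\le p(n),\ \text{s.t.}\ \forall x:\ x\in L \iff M(a_{|x|},x)=1. \]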

Slide 4: What is P/poly? [Diagram: a TM M receives an Input and an Advice string and produces an Answer.] If the advice were not polynomially bounded, it could serve as a lookup table for the language.

Slide 5: P/poly as Circuits
An alternative definition: L ∈ P/poly iff there is a sequence of circuits {C_n}, where C_n has n inputs and one output, its size is bounded by p(n), and for every x, C_|x|(x) = 1 iff x ∈ L. Such a sequence is referred to as a non-uniform family of circuits: circuits for distinct n are not necessarily correlated.

Slide 6: Non-Uniform Circuits
[Diagram: Circuit 1, Circuit 2, Circuit 3, ...] A family of circuits {C_n} where circuit C_n has n inputs and one output.

Slide 7: The two P/poly definitions are equivalent (§8.1.1)
Circuit ⇒ TM with advice: given a family of circuits {C_n} deciding L with |C_n| polynomially bounded, use a standard encoding of C_n as the advice a_n (so on input x the machine receives an encoding of C_|x|), and have a TM simulate the circuit on x and accept/reject accordingly. See the sketch below.
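A minimal Python sketch of the simulation step, assuming a hypothetical advice format in which the circuit is encoded as a topologically ordered list of gates (this encoding and the function names are illustrative, not taken from the lecture notes):

from typing import List, Tuple

# Hypothetical circuit encoding: each gate is (op, in1, in2), where op is
# "AND", "OR" or "NOT", and in1/in2 are indices of earlier wires.
# Wires 0..n-1 carry the input bits; the last wire is the output.
Gate = Tuple[str, int, int]

def eval_advice_circuit(advice: List[Gate], x: str) -> bool:
    """Simulate the circuit C_{|x|} (given as advice) on input x."""
    wires = [bit == "1" for bit in x]          # input wires
    for op, i, j in advice:                    # gates in topological order
        if op == "AND":
            wires.append(wires[i] and wires[j])
        elif op == "OR":
            wires.append(wires[i] or wires[j])
        elif op == "NOT":
            wires.append(not wires[i])
        else:
            raise ValueError(f"unknown gate {op}")
    return wires[-1]                           # output wire = accept/reject

# Example: advice for n = 2 encoding C_2(x1, x2) = x1 AND (NOT x2).
advice_2 = [("NOT", 1, 1), ("AND", 0, 2)]
print(eval_advice_circuit(advice_2, "10"))     # True: 1 AND (NOT 0)

The running time is linear in the circuit size, hence polynomial in n when |C_n| ≤ p(n).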

Slide 8: P/poly definitions are equivalent (2)
TM with advice ⇒ circuit: given a TM M taking advice {a_n}, construct from M(a_n, ·) a circuit that, on input x of length n, outputs M(a_n, x) (as in the circuit construction used in the proof of Cook's theorem). Over all n we obtain a family of circuits, one per advice string.

Slide 9: P/poly and the P = NP question (§8.1.2)
- P ⊆ P/poly: take the empty advice.
- If we find a language L ∈ NP such that L ∉ P/poly, we would prove P ≠ NP.
- P/poly and NP both use an external string in the computation. How do they differ? P/poly uses one universal advice a_n for all inputs of length n, while in NP different inputs of the same length may have different witnesses. Moreover, for L ∈ NP every witness is rejected when x ∉ L, whereas in P/poly there can be bad advice strings!

Slide 10: The power of P/poly (1) (§8.2.4)
Claim: BPP ⊆ P/poly.
Proof: Recall that a BPP computation on input x uses a sequence r of poly(n) coin tosses. By simple amplification, we may assume that for every x ∈ {0,1}^n the probability (over r) that M errs on x is < 2^(-2n).

Slide 11: Apply the Union Bound
There are 2^n inputs x, and for each x the probability of a bad r is at most 2^(-2n). Therefore there exists at least one r that is good for all x's, and that r can serve as the advice a_n. [Illustration for n = 2: boxes are the x's, black areas are their "bad" r's; each black area covers less than 1/4 of its box, so overlapped the black areas do not cover everything, and there remain r's that are good for all x's.]
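The union-bound calculation, written out:
\[ \Pr_{r}\bigl[\exists x\in\{0,1\}^n : M(x,r)\neq \chi_L(x)\bigr] \;\le\; \sum_{x\in\{0,1\}^n}\Pr_r\bigl[M(x,r)\neq\chi_L(x)\bigr] \;<\; 2^{n}\cdot 2^{-2n} \;=\; 2^{-n} \;<\; 1. \]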

Slide 12: The Extreme Power of P/poly (§8.2.3)
In fact, P/poly is a much stronger class: it includes non-recursive languages!

Slide 13: The power of P/poly (3)
Example: all unary languages (subsets of {1}*) are in P/poly: the advice string a_n can be exactly the bit χ_L(1^n), and the machine only checks that the input is unary and answers according to the advice.
Fact: there are non-recursive unary languages. Take {1^index(x) | x ∈ L}, where L is non-recursive and index(x) is x's position in the standard enumeration of strings.
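For a unary language U, the advice and the machine can be as simple as the following (a one-line restatement of the example above):
\[ a_n = \chi_U(1^n)\in\{0,1\}, \qquad M(a_n,x)=\begin{cases} a_n & \text{if } x=1^n,\\ 0 & \text{otherwise.}\end{cases} \]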

Slide 14: Uniform Families of Circuits (§8.3)
P/poly includes non-recursive languages because it is non-uniform: there is no guarantee that the advice is even computable. A more reasonable notion is that of a uniform family of circuits.

Slide 15: Uniform Advice
Def: A family of circuits {C_n} is uniform if there exists a poly-time TM M that generates C_n on input 1^n.
Thm: L is decided by a uniform family of circuits iff L ∈ P.

Slide 16: Simulation
Proof (⇒): If {C_n} is a uniform family of circuits deciding L and M is the generating TM, then on input x (with n = |x|):
- run M on 1^n to obtain C_n;
- simulate C_n on x and output the outcome.
The algorithm's running time is polynomial, so L ∈ P.

Slide 17: Uniform Circuits
Proof (other direction): Let L be a language in P, so there is a poly-time machine M deciding L. As in the proof of Cook's theorem, a poly-time machine M', given input 1^n, can construct a poly-size circuit simulating M on inputs of length n. The family of circuits generated by M' is uniform and decides L.

Slide 18: Sparse languages (1) (§8.4)
Def: A language S is sparse if there exists a polynomial p(·) such that for every n, |S ∩ {0,1}^n| ≤ p(n) (that is, the language contains at most p(n) strings of length n).
Claim: NP ⊆ P/poly iff every L ∈ NP is Cook-reducible to a sparse language. It suffices to consider SAT, as it is NP-complete.

Slide 19: Sparse languages (2)
Proof (⇒): SAT ∈ P/poly implies SAT is Cook-reducible to a sparse language.
From the P/poly definition there exists a family of advice strings {a_n} with |a_n| ≤ q(n) for every n. Let s_n,i be the string of q(n) bits consisting of zeros followed by i ones, and define S = {1^n 0 s_n,i | n > 0 and the i-th bit of a_n is 1} (see the display below). S is in effect a list of unary pointers to the positions of the 1's in the advice strings. S is sparse, since for every n, |S ∩ {0,1}^(n+q(n)+1)| ≤ |a_n| ≤ q(n).
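The construction of S in display form (as described above):
\[ s_{n,i} = 0^{\,q(n)-i}1^{\,i}, \qquad S=\bigl\{\,1^n\,0\,s_{n,i} \ \big|\ n>0,\ 1\le i\le q(n),\ \text{the } i\text{-th bit of } a_n \text{ is } 1 \,\bigr\}, \qquad \bigl|S\cap\{0,1\}^{\,n+q(n)+1}\bigr| \le |a_n| \le q(n). \]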

Slide 20: Sparse languages (3)
The reduction: on input φ (with n = |φ|), reconstruct a_n by q(n) queries of the form 1^n 0 s_n,i to an oracle for S, then run M(a_n, φ), thereby deciding SAT in polynomial time. We exhibited a polynomial-time algorithm for SAT using an S-oracle, so SAT is Cook-reducible to the sparse language S. A sketch of this reduction follows.
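A minimal Python sketch of this Cook reduction; the oracle for S, the bound q, and the advice machine M are passed in as parameters (all hypothetical callables used only for illustration):

from typing import Callable

def cook_reduce_sat(phi: str,
                    q: Callable[[int], int],
                    s_oracle: Callable[[str], bool],
                    advice_machine: Callable[[str, str], bool]) -> bool:
    """Decide SAT(phi) with oracle access to the sparse set S.

    q(n)            -- the polynomial bound on |a_n|
    s_oracle(w)     -- membership oracle for S
    advice_machine  -- the P/poly machine M(a_n, phi) for SAT
    """
    n = len(phi)
    bits = []
    # Reconstruct a_n bit by bit: the i-th bit of a_n is 1 iff 1^n 0 s_{n,i} is in S,
    # where s_{n,i} = 0^{q(n)-i} 1^i.
    for i in range(1, q(n) + 1):
        query = "1" * n + "0" + "0" * (q(n) - i) + "1" * i
        bits.append("1" if s_oracle(query) else "0")
    a_n = "".join(bits)
    # Run the advice machine on (a_n, phi); this decides SAT in polynomial time.
    return advice_machine(a_n, phi)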

Slide 21: Sparse languages (4)
Proof (⇐): SAT Cook-reducible to a sparse language S implies SAT ∈ P/poly.
There exists an oracle machine M^S deciding SAT in polynomial time t(·); hence M makes at most t(|x|) oracle queries, each of size ≤ t(|x|). Construct a_n by concatenating all strings of length ≤ t(n) in S into a 'table'; since S is sparse, the total length of a_n is poly(n). The advice machine takes a_n as advice and uses it to answer the oracle queries.

Slide 22: Sparse languages, Karp reductions and P vs NP (§8.4.9)
Claim: P = NP iff every language L in NP is Karp-reducible to a sparse language.
Proof (⇒): Assume P = NP; then any L ∈ NP (= P) is trivially Karp-reducible to a sparse language over {0,1}: map x to a fixed string inside the sparse language when x ∈ L and to a fixed string outside it otherwise. The reduction f is poly-time computable because P = NP (membership in L can be decided in polynomial time).

Slide 23: Sparse, Karp and P vs NP (2)
Proof of the other direction: SAT Karp-reducible to a sparse language ⇒ P = NP. (We actually prove a weaker version, for guarded sparse languages.)
Def: S is guarded sparse if there exists G such that G is a sparse language in P and S ⊆ G.
Remark: any unary language is guarded by the language {1^n | n ≥ 0}, which is sparse and in P.

Slide 24: Sparse, Karp and P vs NP (3)
Let f be the Karp reduction of SAT to a guarded sparse language S (guarded by G).
Input: a Boolean formula φ = φ(x_1,…,x_n).
The strategy is to decide φ by a DFS on the tree of all partial assignments to φ's variables.

Slide 25: Sparse, Karp and P vs NP (4)
For a partial assignment τ ∈ {0,1}^k (k ≤ n), let φ_τ = φ(τ_1,…,τ_k, x_{k+1},…,x_n), i.e. φ with its first k variables fixed according to τ.
We construct a tree with the φ_τ's as nodes. Each node φ_τ has two children, φ_τ0 and φ_τ1. The root corresponds to the empty assignment, and the leaves are Boolean constants.

Slide 26: Sparse, Karp and P vs NP (5)
[Diagram: the assignment tree. The root φ branches to φ_0 = φ(0,x_2,…,x_n) and φ_1 = φ(1,x_2,…,x_n); their children are φ_00, φ_01, φ_10, φ_11, and so on.]

Slide 27: P = NP using the Karp reduction (5)
The algorithm performs a DFS on the tree. At each node φ_τ it computes x_τ = f(φ_τ). It backtracks from a node if either:
- x_τ ∉ G (then x_τ ∉ S, hence φ_τ ∉ SAT); this can be checked in poly-time because G ∈ P;
- x_τ ∈ B (then x_τ ∈ G−S, hence φ_τ ∉ SAT), where B is a set of strings maintained by the algorithm;
- it has backtracked from both children of φ_τ; in this case we add x_τ to B (so B ⊆ G−S).

Slide 28: P = NP using the Karp reduction (6)
The algorithm stops and accepts if it reaches a leaf and verifies that the corresponding assignment satisfies φ. It stops and rejects if it backtracks from both children of the root node.
Complexity analysis: call a node "bad" if it does not lead to a satisfying assignment but the algorithm nevertheless proceeds to its descendants.

Slide 29: Sparse, Karp and P vs NP (7)
The point is that by maintaining the data structure B we ensure there are no more than polynomially many bad nodes: their number cannot exceed |G − S| (restricted to the relevant, polynomially bounded lengths), which is polynomial due to G's sparseness.

Slide 30: The DFS Algorithm
Start: B ← ∅; call Tree-Search(λ) (the empty assignment). If that call did not halt with accept, reject.

Tree-Search(τ):
  if |τ| = n:                    /* leaf */
    if φ_τ ≡ True: halt and accept
    else: return
  compute x_τ = f(φ_τ)           /* not at a leaf */
  if x_τ ∉ G: return
  if x_τ ∈ B: return
  Tree-Search(τ0)
  Tree-Search(τ1)
  add x_τ to B                   /* both recursive calls returned */
  return
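A runnable Python sketch of the same search; the reduction f, the membership test for the guard G, and the leaf evaluation of φ are parameters (hypothetical callables), and a partial assignment is a tuple of bits:

from typing import Callable, Set, Tuple

def sat_via_guarded_sparse(n: int,
                           f: Callable[[Tuple[int, ...]], str],
                           in_G: Callable[[str], bool],
                           satisfied: Callable[[Tuple[int, ...]], bool]) -> bool:
    """DFS over partial assignments, pruned by the guard G and the set B.

    f(tau)          -- Karp reduction applied to phi restricted by tau
    in_G(x)         -- poly-time membership test for the guard G
    satisfied(tau)  -- evaluates phi on the full assignment tau (|tau| == n)
    """
    B: Set[str] = set()            # strings of G \ S discovered so far

    def tree_search(tau: Tuple[int, ...]) -> bool:
        if len(tau) == n:          # leaf: a full assignment
            return satisfied(tau)
        x_tau = f(tau)
        if not in_G(x_tau):        # not in G  =>  not in S  =>  phi_tau unsatisfiable
            return False
        if x_tau in B:             # already known to be in G \ S  =>  unsatisfiable
            return False
        if tree_search(tau + (0,)) or tree_search(tau + (1,)):
            return True            # found a satisfying assignment below
        B.add(x_tau)               # both subtrees failed: remember x_tau as bad
        return False

    return tree_search(())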

Slide 31: Slides by Golan Weisz & Omer Ben Shalom. Adapted from Oded Goldreich's course lecture notes by Ronen Mizrahi.

Slide 32: Introduction
- The polynomial-time hierarchy is an extension of the classes P and NP.
- Two ways to define the class: by generalizing the notion of Cook reduction, and by using logical quantifiers.
- We show that this class also upper-bounds the notion of 'efficient computation'.
- We find conditions under which the hierarchy collapses.

Slide 33: PH definition via oracle machines (1) (§9.1.1)
Basic definitions:
- M^A: a TM M with oracle A.
- L(M^A): the language of inputs accepted by M^A.
- M^A(x): the output of M^A on input x.
- L(M^C): the set of languages accepted by M with access to some oracle A of class C.

Slide 34: PH definition via oracle machines (2)
L ∈ C_1^C_2 ⟺ there exists a TM of class C_1 with access to an oracle in class C_2 that accepts L.
Examples:
- P^C: languages accepted by a DTM with access to an oracle A ∈ C.
- NP^C: languages accepted by an NTM with access to an oracle A ∈ C.
- BPP^C: languages accepted by a probabilistic TM with access to an oracle A ∈ C.

Slide 35: PH definition via oracle machines (3)
We define by induction:
- Σ_1 := NP
- Σ_{i+1} := NP^{Σ_i}
- Π_i := coΣ_i
- Δ_{i+1} := P^{Σ_i}

Slide 36: PH definition via oracle machines (4)
Proposition: Σ_i ∪ Π_i ⊆ Δ_{i+1} ⊆ Σ_{i+1} ∩ Π_{i+1}.
Proof: Σ_i ∪ Π_i ⊆ Δ_{i+1} = P^{Σ_i} (for example, NP ∪ coNP ⊆ P^NP = Δ_2): given an oracle for L ∈ Σ_i, for a Σ_i language ask the oracle and return its answer; for a Π_i language (which is coΣ_i) ask the oracle and return the opposite.

Slide 37: PH definition via oracle machines (5)
P^{Σ_i} ⊆ Σ_{i+1} ∩ Π_{i+1}:
- P^{Σ_i} ⊆ NP^{Σ_i} = Σ_{i+1}, since an NP machine can compute anything a P machine can.
- P^{Σ_i} is closed under complementation (the deterministic machine can simply reverse its final answer), so L ∈ P^{Σ_i} implies L̄ ∈ P^{Σ_i} ⊆ Σ_{i+1}, i.e. L ∈ Π_{i+1}.

Slide 38: PH definition via quantifiers (1) (§9.1.2)
We build on one of the basic definitions of NP.
A k-ary relation R is polynomially bounded if there exists a polynomial p(·) such that for all (x_1,…,x_k): (x_1,…,x_k) ∈ R ⇒ for every i, |x_i| ≤ p(|x_1|).
Note that the polynomial condition bounds all coordinates by the length of x_1, which we regard as the input.

Slide 39: PH definition via quantifiers (2)
Recall the quantifier definition of NP: L ∈ NP if there exists a polynomially bounded relation R_L, recognizable in poly-time, such that x ∈ L iff ∃y s.t. (x,y) ∈ R_L.

Slide 40: PH definition via quantifiers (3)
One can now define Σ_i^2 (the superscript 2 marks this second, quantifier-based definition): L ∈ Σ_i^2 if there exists a polynomially bounded, polynomial-time recognizable (i+1)-ary relation R_L such that x ∈ L iff ∃y_1 ∀y_2 ∃y_3 … Q_i y_i s.t. (x, y_1,…,y_i) ∈ R_L, where Q_i is "∀" if i is even and "∃" if i is odd. See the display below.
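In display form, with the i = 2 case as a concrete instance:
\[ x\in L \iff \exists y_1\,\forall y_2\,\exists y_3\cdots Q_i y_i:\ (x,y_1,\ldots,y_i)\in R_L; \qquad \text{for } i=2:\quad x\in L \iff \exists y_1\,\forall y_2:\ (x,y_1,y_2)\in R_L. \]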

Slide 41: Equivalence of the PH definitions (side 1) (§9.1.3)
We prove Σ_i^2 ⊆ Σ_i^1 by induction.
Base: for i = 1 both classes are NP.
Step: assume Σ_i^2 ⊆ Σ_i^1 and prove Σ_{i+1}^2 ⊆ Σ_{i+1}^1. The i+1 quantifiers can be viewed as ∃y_1 followed by the last i quantifiers.
(*) We write: x ∈ L iff ∃y_1 s.t. (x, y_1) ∈ L_i, where (x', y') ∈ L_i iff ∀y_2 … Q_{i+1} y_{i+1} : (x', y', y_2,…, y_{i+1}) ∈ R_L.

Slide 42: Equivalence of the PH definitions: L_i ∈ Π_i^1
- R_L^i is polynomially bounded and polynomial-time recognizable (since R_L is).
- x ∈ L_i iff ∀y_1 ∃y_2 … Q y_i : (x, y_1,…,y_i) ∈ R_L^i.
- Hence x ∈ co-L_i iff ∃y_1 ∀y_2 … Q̄ y_i : (x, y_1,…,y_i) ∉ R_L^i (negate the quantifiers and the relation).
- Therefore co-L_i ∈ Σ_i^2, so L_i ∈ Π_i^2.
- By the inductive hypothesis Σ_i^2 ⊆ Σ_i^1, so L_i ∈ Π_i^1.

Slide 43: Equivalence of the PH definitions: L ∈ Σ_{i+1}^1
A TM M in NP^{Σ_i} can check claim (*) by guessing the value of y_1 and using the oracle for L_i (a query to L_i ∈ Π_i can be answered by querying a Σ_i oracle for its complement and negating the reply). Therefore L ∈ Σ_{i+1}^1.

Slide 44: Equivalence of the PH definitions (side 2)
We prove Σ_i^1 ⊆ Σ_i^2 by induction. The base case is NP, as before, with the analogous inductive hypothesis.
L ∈ Σ_{i+1}^1 ⟹ there is an NTM M s.t. L ∈ L(M^{Σ_i}), which means ∃L' ∈ Σ_i^1 s.t. L = L(M^{L'}).

Slide 45: PH def equivalent (3)
L = L(M^{L'}) means that x ∈ L iff ∃ y, q_1, a_1, …, q_t, a_t such that the TM M accepts x with non-deterministic choices y, receiving answer a_j to oracle query q_j, where:
- (a_j = 1) ⇒ q_j ∈ L'
- (a_j = 0) ⇒ q_j ∉ L'

Slide 46: PH def equivalent (4)
The predicate "M accepts x with non-deterministic choices y and oracle answers a_1,…,a_t" is polynomial-time checkable. We assume L' ∈ Σ_i^1 and, by induction, Σ_i^1 ⊆ Σ_i^2, so L' ∈ Σ_i^2. Therefore for each query/answer pair we can write, using the relation R_{L'}:
(a_j = 1) ⟺ ∃y_1^(j,1) ∀y_2^(j,1) … Q_i y_i^(j,1) s.t. (q_j, y_1^(j,1),…, y_i^(j,1)) ∈ R_{L'}
(a_j = 0) ⟺ ∀y_1^(j,2) ∃y_2^(j,2) … Q_{i+1} y_i^(j,2) s.t. (q_j, y_1^(j,2),…, y_i^(j,2)) ∈ co-R_{L'}

Slide 47: PH def equivalent (5)
Let j^1_1,…,j^1_k be the indices j for which a_j = 1, and j^0_1,…,j^0_{t−k} the indices for which a_j = 0. Define the combined quantified variables:
w_1 := y q_1 a_1 … q_t a_t y_1^(j^1_1,1) … y_1^(j^1_k,1)
w_2 := y_1^(j^0_1,2) … y_1^(j^0_{t−k},2) y_2^(j^1_1,1) … y_2^(j^1_k,1)
…
w_i := y_{i−1}^(j^0_1,2) … y_{i−1}^(j^0_{t−k},2) y_i^(j^1_1,1) … y_i^(j^1_k,1)
w_{i+1} := y_i^(j^0_1,2) … y_i^(j^0_{t−k},2)

Slide 48: PH def equivalent (6)
We can now define R_L as the (i+1)-ary relation containing (w_1,…,w_{i+1}) (together with the input x) iff M accepts x with the non-deterministic choices and oracle answers encoded in w_1, and for every j:
- (a_j = 1) ⇒ (q_j, y_1^(j,1),…, y_i^(j,1)) ∈ R_{L'}
- (a_j = 0) ⇒ (q_j, y_1^(j,2),…, y_i^(j,2)) ∈ co-R_{L'}

Slide 49: PH def equivalent (7)
Since both R_{L'} and its complement are polynomially bounded and polynomial-time recognizable, so is R_L, and we have: x ∈ L iff ∃w_1 ∀w_2 … Q_{i+1} w_{i+1} s.t. (x, w_1, w_2,…, w_{i+1}) ∈ R_L, which is exactly the definition of Σ_{i+1}^2.

Slide 50: Thm: PH ⊆ PSPACE (§9.2.1)
Proof: From the quantifier definition, x ∈ L iff ∃y_1 ∀y_2 ∃y_3 … Q_i y_i : (x, y_1,…,y_i) ∈ R_L. Given x we can use i variables to try all combinations of assignments to y_1,…,y_i.

Slide 51: PH ⊆ PSPACE (2)
Since R_L is polynomially bounded we have a polynomial bound on the length of each y_j, and the number of quantified variables i is a constant (for a given language). So we have a deterministic TM that decides L in polynomial space; a recursive sketch follows.
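A minimal Python sketch of this polynomial-space evaluation; the relation R_L and the per-variable length bound are supplied as parameters (hypothetical, for illustration only), and only the current partial assignment is kept in memory:

from itertools import product
from typing import Callable, List

def decide_ph_language(x: str,
                       i: int,
                       length_bound: int,
                       R: Callable[[str, List[str]], bool]) -> bool:
    """Evaluate  exists y1 forall y2 ... Q_i y_i : R(x, y1..yi)  by recursion.

    Only the current partial assignment ys is stored, so the space used is
    polynomial (i is a constant and each y_j has length <= length_bound).
    """
    def strings():
        # all candidate strings up to the polynomial length bound
        for ln in range(length_bound + 1):
            for bits in product("01", repeat=ln):
                yield "".join(bits)

    def evaluate(level: int, ys: List[str]) -> bool:
        if level > i:
            return R(x, ys)                     # innermost: check the relation
        if level % 2 == 1:                      # odd level: existential quantifier
            return any(evaluate(level + 1, ys + [y]) for y in strings())
        else:                                   # even level: universal quantifier
            return all(evaluate(level + 1, ys + [y]) for y in strings())

    return evaluate(1, [])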

Slide 52: NP = coNP ⇒ PH = NP (Intuition)
Intuition: the "rungs" of the polynomial hierarchy's "ladder" are NP Cook reductions, whose extra power over Karp reductions lies in the ability to complement the oracle's answer. If this power is meaningless, the whole hierarchy collapses!

Slide 53: NP = coNP ⇒ PH = NP (Proof)
Proof by induction: for all i, Σ_i = NP.
Assume Σ_i = NP. To prove the claim for i+1 we need to show NP^NP = NP; one inclusion is trivial.
Let L ∈ NP^NP; we need to show L ∈ NP. By definition there exist an NTM M and a language A ∈ NP such that L = L(M^A) (since NP = coNP, also co-A ∈ NP).

Slide 54: NP = coNP ⇒ PH = NP (Proof, cont.)
We simulate M^A using an NTM M' without oracle access: M' guesses the non-deterministic choices of M^A, and for each question M^A asks the oracle, M' guesses the oracle's answer. For a guessed "true", M' runs a simulator for A (and makes sure the answer is indeed "true"); for a guessed "false", M' runs a simulator for co-A (and makes sure the answer is "true").

Slide 55: NP = coNP ⇒ PH = NP (Proof, cont.)
For each query, exactly one of A and co-A contains it; since both A ∈ NP and co-A ∈ NP, the corresponding simulation runs (non-deterministically) in polynomial time. M runs in polynomial time, so at most polynomially many oracle calls are made.

Slide 56: NP = coNP ⇒ PH = NP (Proof, conclusion)
We have found an NTM M' that runs in polynomial time and accepts L. By definition, therefore, L ∈ NP.

Slide 57: Generalization: Σ_i = Π_i ⇒ PH = Σ_i
The previous proof can be generalized: for every i ≥ 1, if Σ_i = Π_i then the PH collapses from that point on and PH = Σ_i. This relation will come in handy in the next proofs.

Slide 58: Thm: BPP ⊆ PH (Overview) (§9.3)
It is unknown whether BPP ⊆ NP; nevertheless we can prove BPP ⊆ Σ_2.
We give a Σ_2 characterization that shifts the gap between the success probabilities for 'good' and 'bad' inputs: in the 'good' case (x ∈ L), where a random string succeeds with high probability, the success probability becomes 1; in the 'bad' case it stays below 1. At that point, x ∈ L iff for every random string our test succeeds.

60 59 xLxL Algorithm errs Algorithm is correct xLxL 1- 1 / Poly 1 / Poly Normally, error probability is polynomially small

61 60 xLxL Algorithm errs Algorithm is correct xLxL xLxLxLxL how to shift the error The algorithm will only make mistakes when the input is “bad”

Slide 61: For a 'good' input: all random strings become accepting.

Slide 62: For a 'bad' input: some random strings must remain non-accepting.

Slide 63: BPP ⊆ PH (Proof Start) (§9.3)
Use a version of BPP in which, for some polynomial p(·), the prediction error is polynomially small in the length of the random string:
∀x ∈ {0,1}^n:  Pr_{r ∈_R {0,1}^p(n)} [A(x,r) ≠ χ_L(x)] < 1/(3·p(n)),
where χ_L(x) = 1 if x ∈ L and χ_L(x) = 0 if x ∉ L.
Note: the error bound here is measured against the randomness complexity of the algorithm; it can be achieved with a large enough (log n) number of repetitions.

Slide 64: BPP ⊆ PH (Claim 1)
For every x ∈ L ∩ {0,1}^n there exists a set of elements s_1,…,s_m ∈ {0,1}^m, where m = p(n), such that ∀r ∈ {0,1}^m, ∃i ∈ {1,…,m}: A(x, r ⊕ s_i) = 1.
That is, there is a polynomially long list of shifts such that for every choice of r, at least one of them, when XOR'ed with r, yields a string on which the algorithm gives the correct answer (if used as the random string).

Slide 65: BPP ⊆ PH (Claim 1: Proof Overview)
Apply a probabilistic argument: to prove such a sequence exists, we show that a random sequence has positive probability of satisfying the claim. We actually upper-bound the probability that a random sequence {s_i} does not satisfy it, i.e. the probability that (for our x ∈ L) there exists r such that the algorithm rejects r ⊕ s_i for every i.

Slide 66: BPP ⊆ PH (Claim 1: Proof)
This probability is
Pr_{s_1,…,s_m ∈_R {0,1}^m} [ ∃r ∈ {0,1}^m : ∧_{i=1..m} (A(x, r ⊕ s_i) = 0) ]
= Pr_{s_1,…,s_m} [ ∪_{r ∈ {0,1}^m} ( ∧_{i=1..m} (A(x, r ⊕ s_i) = 0) ) ]
≤ Σ_{r ∈ {0,1}^m} Pr_{s_1,…,s_m} [ ∧_{i=1..m} (A(x, r ⊕ s_i) = 0) ]          (the sum of event probabilities is at least the probability of their union)
= Σ_{r ∈ {0,1}^m} ∏_{i=1..m} Pr_{s_i ∈_R {0,1}^m} [ A(x, r ⊕ s_i) = 0 ]       (the s_i's are chosen independently).
Since, for fixed r, r ⊕ s_i is uniformly distributed: Pr_{s_i} [A(x, r ⊕ s_i) = 0] ≤ 1/(3m).
Therefore the probability is at most 2^m · (1/(3m))^m < 1, so a good sequence s_1,…,s_m exists.

Slide 67: BPP ⊆ PH (Claim 2)
For every input x ∉ L and for every choice of s_1,…,s_m there exists r ∈ {0,1}^m such that no r ⊕ s_i is accepted.

Slide 68: BPP ⊆ PH (Claim 2: Proof Overview)
Fix x ∉ L and any s_1,…,s_m, and pick r at random. For each i, since r ⊕ s_i is uniformly distributed, the amplified error bound gives Pr_r[A(x, r ⊕ s_i) = 1] ≤ 1/(3m). By the union bound, the probability that some r ⊕ s_i is accepted is at most m · (1/(3m)) = 1/3. Hence the probability that all the r ⊕ s_i are rejected is ≥ 2/3, so such an r exists (in fact, there are many such r).

Slide 69: BPP ⊆ PH (Proof Conclusion)
Combining the two claims (we proved both directions): x ∈ L iff ∃s_1,…,s_m ∈ {0,1}^m ∀r ∈ {0,1}^m ∨_{i ∈ {1..m}} A(x, r ⊕ s_i) = 1. Since the inner predicate is polynomial-time computable, the statement above is in Σ_2, which proves L ∈ Σ_2 as required.

Slide 70: If NP has small circuits, PH collapses (Overview) (§9.4)
We show another collapse result for PH, due to Karp and Lipton: if NP ⊆ P/poly then PH collapses.
Specifically, we show that if NP ⊆ P/poly then Σ_2 = Π_2; by the collapse criterion above this is enough to prove the claim, and to establish it, it suffices to show Π_2 ⊆ Σ_2.

Slide 71: If NP has small circuits, PH collapses (Proof)
If L ∈ Π_2 then, from the Π_2 definition, there exists a polynomially bounded R_L such that x ∈ L iff ∀y ∃z s.t. (x,y,z) ∈ R_L.
We peel off one quantifier and define a coNP statement over the NP language L' := {(x,y) | ∃z s.t. (x,y,z) ∈ R_L}. It follows that x ∈ L iff ∀y, (x,y) ∈ L'.

Slide 72: If NP has small circuits, PH collapses (Proof cont.)
As L' is in NP, there is a Karp reduction function f from L' to 3SAT. From the assumption NP ⊆ P/poly, 3SAT can be computed by a family of circuits {C_m}, where C_m handles inputs of length m and has size polynomial in m.

Slide 73: If NP has small circuits, PH collapses (Proof cont. 2)
We recast membership in L' as the claim that:
- such a family of circuits exists,
- the circuits are polynomially bounded in size,
- they correctly compute 3SAT (this is the problematic statement to verify).

Slide 74: If NP has small circuits, PH collapses (Proof cont. 3)
We can guess the family of circuits and verify that they compute SAT via self-reducibility:
∀φ_1, φ_2,…, φ_n:  [ ∧_{i=2..n} ( C'_i(φ_i) = C'_{i-1}(φ'_i) ∨ C'_{i-1}(φ''_i) ) ∧ "C'_1 is OK" ],
where φ_i ranges over SAT formulas on i variables, and φ'_i and φ''_i are the same formula with the last variable set to 0 and 1 respectively.

Slide 75: If NP has small circuits, PH collapses (Proof cont. 4)
The statement can be tested and proved recursively in polynomial time:
- for one variable, the check is handled by C'_1, the most basic circuit;
- any circuit for i+1 variables can be validated using the already-proven i-variable circuit, by substituting both options for the extra variable and comparing. A sketch of the consistency check follows.
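A minimal Python sketch of the consistency check on one tuple of formulas; the guessed circuits, the restriction operation, and the one-variable base check are parameters (hypothetical callables; the Σ_2 statement quantifies this check over all tuples):

from typing import Callable, List

def consistent_on(circuits: List[Callable[[str], bool]],
                  formulas: List[str],
                  restrict: Callable[[str, int], str],
                  sat1: Callable[[str], bool]) -> bool:
    """Check the guessed circuits on one tuple (phi_1, ..., phi_n).

    circuits[i] -- guessed circuit C'_{i+1} for formulas on i+1 variables
    formulas[i] -- a chosen formula phi_{i+1} on i+1 variables
    restrict    -- sets the last variable of a formula to 0 or 1
    sat1        -- directly decides satisfiability of a 1-variable formula
    """
    # Base case: C'_1 must agree with direct evaluation on the 1-variable formula.
    if circuits[0](formulas[0]) != sat1(formulas[0]):
        return False
    # Chain condition: C'_i(phi_i) = C'_{i-1}(phi_i|last=0) OR C'_{i-1}(phi_i|last=1).
    for i in range(1, len(circuits)):
        phi = formulas[i]
        expected = circuits[i - 1](restrict(phi, 0)) or circuits[i - 1](restrict(phi, 1))
        if circuits[i](phi) != expected:
            return False
    return True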

Slide 76: If NP has small circuits, PH collapses (Proof cont. 5)
The verifying algorithm computes f(x,y), determines φ'_i and φ''_i for all i, and evaluates the circuits on the given inputs; all of this takes polynomial time, so the claim is proven.
We therefore get: x ∈ L iff ∃(C'_1,…,C'_n) s.t. ∀y, (φ_1, φ_2,…, φ_n): (x, (C'_1,…,C'_n), (y, φ_1, φ_2,…, φ_n)) ∈ R_L, where n = #var(f(x,y)).
This statement is in Σ_2; therefore L ∈ Σ_2 and Π_2 ⊆ Σ_2.

