
1 Slides by Dana Moshkovitz. Adapted from Oded Goldreich’s course lecture notes.


2 Outline
• Proof systems: NP revisited
• Interactive proofs
• The complexity class IP
• Example: an interactive proof for Graph Non-Isomorphism
• IP=PSPACE
• Public coins

3 Proof Systems: Back to NP
• In order to understand the notion of Proof Systems, let us look at NP again.
• In a way, the complexity class we will define and discuss later is a probabilistic analog of NP.
• The languages in NP are those whose members all have short certificates of membership, which can be easily verified.

4 Proof Systems: Back to NP
• We can view this as follows:
– There is a mighty, powerful Prover.
– The Prover needs to convince a Verifier that the input is indeed a member of the language.
– So it sends the Verifier a short (polynomial-length) certificate.
– The Verifier has limited resources: verifying the certificate cannot take more than polynomial time.

5 Proof Systems: Back to NP
We will demonstrate this process for 3SAT. We would like to check the membership of a given formula:
(x ∨ y ∨ z') ∧ (x' ∨ y') ∧ z'
The prover must convince the verifier that this formula is satisfiable, so it sends the verifier an assignment which supposedly satisfies the formula, e.g. τ(x)=false, τ(y)=true, τ(z)=false. It is not difficult for the mighty prover to find such an assignment, if one exists.
The verifier simply checks the truth value of the formula under the assignment it received, in order to find out whether the prover was right. This takes merely polynomial time (polynomial in the number of variables).
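The verifier's check can be sketched in a few lines of Python (a hypothetical encoding: each clause is a list of literals, a positive integer for a variable and a negative one for its negation):

```python
def verify(clauses, assignment):
    """Check, in time linear in the formula, that the assignment satisfies it."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x ∨ y ∨ z') ∧ (x' ∨ y') ∧ z'   with x=1, y=2, z=3
clauses = [[1, 2, -3], [-1, -2], [-3]]
certificate = {1: False, 2: True, 3: False}   # the assignment sent by the prover
print(verify(clauses, certificate))           # True: the verifier accepts
```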

6 Proof Systems: Requirements
• Let us define the properties of a Proof System explicitly:
– The verifier's strategy is efficient.
– Correctness requirements:
– Completeness: For a true assertion, there is a convincing proof strategy.
– Soundness: For a false assertion, no convincing proof strategy exists.
Make sure you understand why the proof system we presented for 3SAT satisfies these properties.

7 Interactive Proofs
• We will introduce the notion of Interactive Proofs, a generalization of the concept of a Proof System we have just seen.
• This generalization is obtained by adding two more features to the model:
– allowing a two-way dialog between the parties (interaction)
– allowing the verifier to toss coins (randomness)

8 Interactive Proofs
• An Interactive Proof System for a language L is a two-party game between a verifier and a prover that interact on a common input in a way satisfying the following properties:
– The verifier's strategy is a probabilistic polynomial-time procedure.
– Correctness requirements:
– Completeness: There exists a prover strategy P, such that for every x ∈ L, when interacting on the common input x, the prover P convinces the verifier with probability at least 2/3.
– Soundness: For every x ∉ L, when interacting on the common input x, any prover strategy P* convinces the verifier with probability at most 1/3.

9 IP
• The complexity class IP consists of all the languages having an interactive proof system.
• The number of messages exchanged between the two parties during the protocol is called the number of rounds of the system.
• For every integer function r(·), the complexity class IP(r(·)) consists of all the languages that have an interactive proof system in which, on common input x, at most r(|x|) rounds are used.
• For a set of integer functions R, we denote IP(R) = ∪ r∈R IP(r(·)).

10 IP: Observations
• NP ⊆ IP.
• Since the verifier must run in polynomial time, IP = IP(poly), where poly is the set of polynomial functions.
• The definition of IP can be made to require Perfect Completeness (acceptance probability 1) without changing the class.
• On the other hand, if we demand Perfect Soundness, the class collapses to NP-proof systems.
• As usual, the constants 2/3 and 1/3 in the definition can be amplified to probabilities 1-2^(-p(·)) and 2^(-p(·)), for any polynomial p(·).
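To see why the constants are inessential, run the protocol k times independently and take a majority vote; the error then drops exponentially in k. A quick sketch computing the exact success probability of the majority vote:

```python
from math import comb

def majority_correct(p, k):
    """Probability that more than half of k independent runs succeed,
    when each run succeeds independently with probability p."""
    return sum(comb(k, i) * p**i * (1 - p)**(k - i)
               for i in range(k // 2 + 1, k + 1))

for k in (1, 11, 101):                  # odd k avoids ties
    print(k, majority_correct(2/3, k))  # rises from 2/3 toward 1
```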

11 Would IP Retain Its Strength Without Either Interaction or Randomness?
• If we omit randomness, IP collapses to NP-proof systems (make sure you understand why).
• If we omit the interaction between the parties, we get IP(1) (also denoted AM), which seems to be a randomized (perhaps stronger) version of NP.
• Together, these two features yield a very powerful complexity class. How powerful? This will be clarified later.
• First, let us look at an example.

12 Isomorphism between Graphs
• The graphs G1=(V1,E1) and G2=(V2,E2) are called isomorphic (denoted G1 ≅ G2) if there exists a 1-1 and onto mapping π:V1→V2 such that (u,v) ∈ E1 iff (π(u),π(v)) ∈ E2.
• Such a mapping π between two isomorphic graphs is called an isomorphism between the graphs.
• If no such mapping exists, the graphs are called non-isomorphic.
• We define the language GNI as follows: GNI = {(G1,G2) : G1 and G2 are non-isomorphic}
• We will use this language to demonstrate an interactive proof.
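For small graphs, isomorphism can be decided by brute force over all n! bijections; a sketch (exponential time, of course):

```python
from itertools import permutations

def image(pi, E):
    """The edge set obtained by renaming each vertex v to pi[v]."""
    return {frozenset((pi[u], pi[v])) for (u, v) in E}

def isomorphic(n, E1, E2):
    """Brute force over all bijections of {0,...,n-1}."""
    E2set = {frozenset(e) for e in E2}
    return any(image(p, E1) == E2set for p in permutations(range(n)))

path, relabeled_path = [(0, 1), (1, 2)], [(1, 0), (0, 2)]
triangle = [(0, 1), (1, 2), (2, 0)]
print(isomorphic(3, path, relabeled_path))  # True
print(isomorphic(3, path, triangle))        # False
```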

13 Isomorphic Graphs
• Example: two graphs may seem very different at first sight and nevertheless be isomorphic.

14 GNI: Motivation
• The example above shows that GI, the language of isomorphic pairs, is in NP (why?).
• Interestingly, it is not known whether GI is NP-hard.
• GNI, on the other hand, seems much harder: we need to check that no isomorphism exists.
• Indeed, it is not known whether GNI is in NP.
• Thus it will be interesting to show that if two graphs are non-isomorphic, a Prover can convince a Verifier of this fact.

15 An Interactive Proof for GNI
• Common input: G1=({1,...,n},E1) and G2=({1,...,n},E2). Make sure you understand why we may assume, without loss of generality, that V1=V2={1,...,n}.
• The Verifier chooses at random i ∈ {1,2} and a permutation π of {1,...,n}.
• It then applies π to the i-th graph to get H=({1,...,n},{(π(u),π(v)) : (u,v) ∈ Ei}), and sends H to the Prover.
• The Prover sends j ∈ {1,2} to the Verifier.
• The Verifier accepts iff i=j.
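The verifier's move, together with an honest (exponential-time) prover, can be sketched as follows; on non-isomorphic inputs the prover always recovers i, so the verifier always accepts (completeness):

```python
import random
from itertools import permutations

def image(pi, E):
    """The edge set obtained by renaming each vertex v to pi[v]."""
    return {frozenset((pi[u], pi[v])) for (u, v) in E}

def verifier_challenge(n, E1, E2):
    """Pick i in {1,2} and a random permutation pi, and send H = pi(G_i)."""
    i = random.choice((1, 2))
    pi = list(range(n))
    random.shuffle(pi)
    return i, image(pi, E1 if i == 1 else E2)

def honest_prover(n, E1, E2, H):
    """Answer the index of an input graph isomorphic to H (brute force)."""
    for j, E in ((1, E1), (2, E2)):
        if any(image(p, E) == H for p in permutations(range(n))):
            return j

E1, E2 = [(0, 1), (1, 2)], [(0, 1), (1, 2), (2, 0)]  # a path and a triangle
for _ in range(20):
    i, H = verifier_challenge(3, E1, E2)
    assert honest_prover(3, E1, E2, H) == i
print("the verifier accepted all 20 rounds")
```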

16 An Interactive Proof for GNI: Simulation
• The verifier chooses one of the two input graphs at random.
• The verifier randomly constructs a graph isomorphic to the graph it chose.
• The verifier sends this graph to the prover.
• If the two input graphs are truly non-isomorphic, the prover can find which of the two is isomorphic to the graph it received from the verifier, and send back the correct answer.
• The verifier can check the answer easily (the verifier knows which graph it chose).

17 Conclusions
• The described protocol is indeed an interactive proof system for GNI. Make sure you can prove it.
• Since the proof used only 2 rounds, we can state: GNI ∈ IP(2).

18 IP=PSPACE
We shall prove next a rather surprising result: IP=PSPACE. To do so, we will prove the following two claims:
1. IP ⊆ PSPACE: this will follow if we can simulate every interactive proof using polynomial space.
2. PSPACE ⊆ IP: this will follow if we can exhibit a PSPACE-complete language which is in IP.

19 IP ⊆ PSPACE: The Key Observation
• The proof of this direction is based on a very simple observation: if we know the verifier's strategy, we can build a polynomial-space optimal prover.
• At each point, that prover chooses the message with the highest probability of leading to acceptance.
• How does it know which message is best? It simply goes over all possible continuations of the interaction and checks.
• This takes exponential time, but only polynomial space.

20 An Optimal Prover: Notations
• In order to formalize this observation, we introduce the following notations, where αi and βi denote the i-th messages sent by the verifier and by the prover, respectively:
• Let F(α1,β1,...,αi,βi) be the probability that an interaction beginning with α1,β1,...,αi,βi results in acceptance.
• Let r denote the outcome of all the verifier's coin tosses.
• Let R(α1,β1,...,αi,βi) be the set of all r's consistent with the interaction α1,β1,...,αi,βi.
• Let V(r,β1,...,βi) be the message αi+1 sent by the verifier.
• We will show that F can be computed using polynomial space, and that for every i, a βi maximizing the acceptance probability can be found in the process.

21 An Optimal Prover
• Using these notations, we can write the optimal prover's strategy: after receiving αi, it sends a βi maximizing F(α1,β1,...,αi,βi).
• And we get the recursion formula for F: averaging over the consistent coin outcomes,
F(α1,β1,...,αi,βi) = (1/|R(α1,β1,...,αi,βi)|) · Σ r∈R(α1,β1,...,αi,βi) F(α1,β1,...,αi,βi,V(r,β1,...,βi)),
while the next prover message is chosen optimally,
F(α1,β1,...,αi,βi,αi+1) = max βi+1 F(α1,β1,...,αi,βi,αi+1,βi+1).
• Although these formulas might seem intimidating at first sight, they are quite simple. Make sure you fully understand them.

22 An Optimal Prover
• Why can we compute F according to the previous formula in polynomial space?
• Finding which r's are consistent (r ∈ R(α1,β1,...,αi,βi)) can be done by simulating the verifier, which runs in polynomial time once the random bits are fixed. Note that |r| is polynomial.
• Similarly, we can find the verifier's answer V(r,β1,...,βi).
• The recursion stops once a full transcript of the interaction is reached. Then the acceptance probability can be computed directly, by enumerating all the r's consistent with it.
• The depth of the recursion is bounded by the number of rounds, which is polynomial.
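The recursion can be made concrete on a toy protocol (a hypothetical one-round game, not GNI: the verifier tosses a coin r, sends α1=r, and accepts iff the prover echoes it back; since the coin is revealed, the optimal prover wins with probability 1):

```python
from itertools import product

COIN_LEN, ROUNDS = 1, 1

def V_msg(r, betas):
    """The verifier's next message, given its coins r and the prover's messages so far."""
    return r[0]

def V_accepts(r, betas):
    """The verifier's final decision on a full transcript."""
    return betas[0] == r[0]

def F(alphas, betas):
    """Acceptance probability of the optimal prover, given a partial transcript."""
    # coin outcomes consistent with the verifier messages seen so far
    R = [r for r in product((0, 1), repeat=COIN_LEN)
         if all(V_msg(r, betas[:i]) == a for i, a in enumerate(alphas))]
    if len(betas) == ROUNDS:                    # full transcript: decide directly
        return sum(V_accepts(r, betas) for r in R) / len(R)
    if len(alphas) == len(betas):               # verifier's turn: average over consistent r
        return sum(F(alphas + (V_msg(r, betas),), betas) for r in R) / len(R)
    return max(F(alphas, betas + (b,)) for b in (0, 1))   # prover's turn: best reply

print(F((), ()))   # 1.0
```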

23 An Optimal Prover: Example
• Let us demonstrate this for the GNI example, with the same verifier strategy described earlier:
– choose one of the two graphs at random
– construct a random graph isomorphic to the graph you chose
– send that graph to the prover
– accept iff the prover sent back the index you chose
• What would the optimal prover do?

24 An Optimal Prover: Example
• Suppose the two input graphs are isomorphic to each other, and consider the graph H the prover received from the verifier.
• For each possible answer, the prover goes over all possible random bits and finds which of them are consistent with the message it received.
• In this example both kinds of r's are consistent: the verifier could have chosen either the first graph or the second one.
• The prover should check which possible answer (1 or 2) yields the highest probability of acceptance.
• Here we do not have to go far: in the next move the verifier decides whether to accept, so by simulating it we can find the desired probabilities. In this example each answer is accepted with probability ½.

25 IP ⊆ PSPACE
• Finally, let us prove this containment. Suppose we have a language L in IP.
• Hence, there exists an interactive proof system for L.
• By what we have just proven, there also exists a polynomial-space optimal prover.
• Therefore, for each possible outcome of the verifier's coin tosses, we can simulate an interaction between the verifier and the optimal prover.
• We accept iff at least 2/3 of the outcomes are accepting.
• Clearly, we accept iff the input is in the language.
• Consequently: IP ⊆ PSPACE.

26 PSPACE ⊆ IP: Introducing TQBF
• We will show that the following PSPACE-complete language has an interactive proof:
• TQBF: Let Φ be a quantified boolean formula of the form Φ=Q1x1...Qmxm[φ], where φ is a CNF formula and each Qi is either ∀ or ∃. We ask whether Φ is true.
• Next we will present the ideas which will eventually allow us to write an interactive proof for TQBF.
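Recall why TQBF is in PSPACE: a recursive evaluator needs only one stack frame per quantifier. A sketch with a hypothetical encoding (prefix as a list of quantifier/variable pairs, matrix as CNF clauses over signed variable ids):

```python
def tqbf(prefix, clauses, assignment=None):
    """Evaluate Q1x1...Qmxm[phi]; prefix is a list of ('A'|'E', var) pairs."""
    assignment = assignment or {}
    if not prefix:   # no quantifiers left: evaluate the CNF matrix
        return all(any(assignment[abs(l)] == (l > 0) for l in c) for c in clauses)
    (q, v), rest = prefix[0], prefix[1:]
    vals = [tqbf(rest, clauses, {**assignment, v: b}) for b in (False, True)]
    return all(vals) if q == 'A' else any(vals)

# ∃x1 ∀x2 [(x1 ∨ x2') ∧ x1] is true: take x1 = true.
print(tqbf([('E', 1), ('A', 2)], [[1, -2], [1]]))   # True
```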

27 PSPACE ⊆ IP: Notation
• Suppose we have a TQBF formula Φ=Q1x1...Qmxm[φ].
• For 1 ≤ i ≤ m and a1,...,ai ∈ {0,1}, let fi(a1,...,ai)=1 iff Qi+1xi+1...Qmxm[φ(a1,...,ai)] is true (otherwise fi(a1,...,ai)=0).
• In particular, f0() is the truth value of Φ.

28 PSPACE ⊆ IP: Intuition
• The general idea behind the interactive proof is to convince the verifier that f0()=1, where f0() is indeed the truth value of the formula. This will be done by supplying the verifier all the fi's, so it can check that each one really follows from its successor.
• The problem is that there is an exponential number of assignments to the variables.
• This disqualifies the naive representation of the functions, and also makes ensuring their validity seem impossible for the verifier.
• The solution is based on a technique called arithmetization, which provides a better representation of the functions and allows the verifier to take advantage of its ability to use randomness.

29 Arithmetization
With each CNF formula φ we associate a polynomial p, built from the arithmetizations of its subformulas:
• the variable xi is mapped to xi
• ¬φ is mapped to 1-p
• φ∨ψ is mapped to 1-(1-p)(1-q)
• φ∧ψ is mapped to p·q
• F is mapped to 0, T is mapped to 1
The bottom line: φ(x1,...,xn) is false iff p(x1,...,xn)=0.

30 Arithmetization: Example
Take the formula (x1 ∨ x2') ∧ x1:
• x2' is mapped to 1-x2
• x1 ∨ x2' is mapped to 1-(1-x1)(1-(1-x2)) = 1-(1-x1)x2 = x1x2-x2+1
• (x1 ∨ x2') ∧ x1 is mapped to (x1x2-x2+1)·x1 = x1²x2-x1x2+x1
Note that in the resulting polynomial the degree of each variable is at most n, the length of the formula.
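One can check by brute force that the polynomial really agrees with the formula on all boolean inputs, which is the defining property of the arithmetization:

```python
from itertools import product

phi = lambda x1, x2: (x1 or not x2) and x1   # (x1 ∨ x2') ∧ x1
p   = lambda x1, x2: x1*x1*x2 - x1*x2 + x1   # its arithmetization

for x1, x2 in product((0, 1), repeat=2):
    # nonzero polynomial value exactly where the formula is true
    assert (p(x1, x2) != 0) == bool(phi(x1, x2))
print("p agrees with the formula on all boolean inputs")
```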

31 Arithmetization
• Suppose now we have a QBF Φ=Q1x1...Qmxm[φ]. We define:
Φ' = Q1x1 Rx1 Q2x2 Rx1Rx2 ... Qmxm Rx1...Rxm [φ]
• R is a reduction operator, designed to keep the degree of the polynomials small. Further explanations follow.
• We rewrite this as Φ' = S1y1...Skyk[φ], where Si ∈ {∀,∃,R} and yi ∈ {x1,...,xm}.
• fk(x1,...,xm) is the polynomial obtained by arithmetizing φ.
• If i<k then (reordering the inputs to the functions so that the variable yi+1 is the last argument):
– if Si+1=∀: fi(...) = fi+1(...,0)·fi+1(...,1)
– if Si+1=∃: fi(...) = 1-(1-fi+1(...,0))(1-fi+1(...,1))
– if Si+1=R: fi(...,a) = (1-a)·fi+1(...,0)+a·fi+1(...,1)
• The Rx operation does not change the polynomial's values on boolean inputs, but it does produce a polynomial that is linear in x. Make sure you see why this definition of the fi's agrees with the previous one on boolean inputs.

32 Arithmetization: Example
Take the formula Φ = ∃x1∀x2[(x1 ∨ x2') ∧ x1]. We build:
Φ' = ∃x1 Rx1 ∀x2 Rx1 Rx2 [(x1 ∨ x2') ∧ x1]
Now we can use our former computation in order to calculate:
f5(x1,x2) = x1²x2-x1x2+x1
f4(x1,x2) = (1-x2)·f5(x1,0)+x2·f5(x1,1) = (1-x2)·x1+x2·x1² = x1-x1x2+x1²x2
f3(x1,x2) = (1-x1)·f4(0,x2)+x1·f4(1,x2) = (1-x1)·0+x1·1 = x1
f2(x1) = f3(x1,0)·f3(x1,1) = x1·x1 = x1²
f1(x1) = (1-x1)·f2(0)+x1·f2(1) = x1
f0() = 1-(1-f1(0))(1-f1(1)) = 1-(1-0)(1-1) = 1
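The chain of fi's can be checked numerically; the lambdas below mirror the derivation step by step:

```python
f5 = lambda x1, x2: x1*x1*x2 - x1*x2 + x1             # arithmetization of the matrix
f4 = lambda x1, x2: (1-x2)*f5(x1, 0) + x2*f5(x1, 1)   # R x2
f3 = lambda x1, x2: (1-x1)*f4(0, x2) + x1*f4(1, x2)   # R x1
f2 = lambda x1: f3(x1, 0) * f3(x1, 1)                 # forall x2
f1 = lambda x1: (1-x1)*f2(0) + x1*f2(1)               # R x1
f0 = 1 - (1 - f1(0)) * (1 - f1(1))                    # exists x1

for x1 in (0, 1, 2, 3):
    assert f3(x1, 7) == x1          # f3 = x1, independent of x2
    assert f2(x1) == x1 * x1        # f2 = x1^2
print("f0() =", f0)                 # f0() = 1
```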

33 An Interactive Proof for TQBF
• V chooses a prime q > n⁴. All arithmetic operations are carried out over GF[q].
• Phase 0: V checks that the claimed value of f0() is 1.
• Phase i (for i=1,...,k): P sends V the polynomial fi(...,z) in the single variable z, where "..." denotes the setting of the earlier variables to the previously selected random values. V checks that its degree is at most n, computes fi(...,0) and fi(...,1), and, letting S denote the current operator, checks:
– If S=∀: fi-1(...) = fi(...,0)·fi(...,1)
– If S=∃: fi-1(...) = 1-(1-fi(...,0))(1-fi(...,1))
– If S=R: fi-1(...,r) = (1-r)·fi(...,0)+r·fi(...,1)
V then picks r ∈ GF[q] at random and sends it to P.
• Phase k+1: V evaluates the arithmetization p at the selected random values and compares the result with the value it has for fk at that point.

34 An Interactive Proof for TQBF
• Clearly, when the formula is true, an honest prover can compute the functions, and V will accept (completeness).
• What if the formula is false (soundness)?
• If V has an incorrect value for fi-1(...), then one of the values fi(...,0) and fi(...,1) must be incorrect, so the polynomial sent for fi must be incorrect.
• Consequently, for a random r, the probability that the prover gets lucky in this phase, because the value fi(...,r) happens to be correct, is at most the polynomial's degree divided by the field's size.
• This statement will be clarified next.

35 An Interactive Proof for TQBF
• This statement is the heart of the proof. In order to understand it, we need to take a closer look at some properties of polynomials.
• A polynomial in a single variable of degree at most d has no more than d roots, unless it is identically zero.
• Therefore, any two distinct polynomials in a single variable of degree at most d can agree in at most d places.
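A tiny brute-force illustration over GF[11]: x³ and x are distinct polynomials of degree at most 3, so they agree on at most 3 of the 11 field elements:

```python
q = 11
agreements = [x for x in range(q) if pow(x, 3, q) == x % q]
print(agreements)        # [0, 1, 10]: exactly the roots of x^3 - x = x(x-1)(x+1)
assert len(agreements) <= 3
```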

36 An Interactive Proof for TQBF
• Because of the reduction operator, the degrees of the polynomials obtained are bounded by n, the length of the CNF formula. This follows from n also being a bound on the degree of fk.
• This means that there are at most n places in which fi(...,r) and the incorrect polynomial received instead can agree.
• Hence, the probability that they agree on a random r is at most n divided by the field's size, which exceeds n⁴. This is what we stated earlier.

37 PSPACE ⊆ IP
• Since this protocol proceeds for O(n²) phases (why?), the probability that a prover gets lucky at some phase is at most O(n²)·n/n⁴ ≤ 1/n.
• If a prover is never lucky, V will reject by phase k+1.
• This completes our proof of the correctness of the protocol for TQBF, and allows us to state: PSPACE ⊆ IP.

38 IP=PSPACE
• Let's review what we have accomplished so far:
• We proved that if we have an interactive proof for testing membership in some language, we can build a polynomial-space Turing machine which simulates the interaction between the verifier and an optimal prover, and thus accepts the language.
• We also proved that there is an interactive proof for a PSPACE-complete language.
• It follows that IP=PSPACE.

39 Public Coins vs. Private Coins
• According to our definition of interactive proofs, the coins tossed by the verifier are private; that is, they are not visible to the prover.
• One might wonder whether this property is really necessary, or whether we can allow our coins to be public.

40 Public Coins and GNI
• Clearly, our previous protocol for GNI fails when the verifier has to reveal the outcome of its coin tosses.
• Still, an interactive proof with public coins can be constructed for GNI. Consider the following observation:
• Roughly speaking, in the last protocol the verifier had 2·n! different graphs it could send the prover if the input graphs were indeed non-isomorphic, and only n! different graphs if they were not.

41 Public Coins and GNI
• This motivates the following approach: the prover should try to convince the verifier that the set of all graphs the verifier could have sent in the former protocol is BIG.
• This will be done by mapping the elements of this set (denoted W) into a table T of size 4·n! and looking at the probability that a random entry of T is hit.

42 Public Coins and GNI
The protocol:
• Let S={0,1}ⁿ × {0,1}ⁿ. V chooses s=(a,b) ∈R S and β ∈R {1,...,|T|} and sends them to P.
• P computes a permutation π ∈ Sn and c ∈ {1,2}, and sends V the graph π(Gc).
• V accepts iff hs(π(Gc))=β, where the 2-universal hash function hs(x) is defined as ax+b (the arithmetic operations are over the finite field GF[2ⁿ]).
• Note that V sends P all the random bits it uses, so they are truly public.
• A family H of hash functions is 2-universal if, when h is chosen uniformly from H, the pair (h(x),h(y)) is uniformly distributed for every fixed x≠y. Can you prove that the functions we defined are indeed 2-universal?
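The hash family can be tried out concretely in the small field GF[2³] (with the irreducible polynomial x³+x+1, an assumed choice of modulus); enumerating all 64 seeds s=(a,b) shows that (hs(x),hs(y)) is uniform, i.e. the family is 2-universal:

```python
from collections import Counter

def gf8_mul(a, b):
    """Multiplication in GF[2^3] modulo the irreducible polynomial x^3 + x + 1."""
    r = 0
    for i in range(3):          # carry-less (XOR) multiplication
        if (b >> i) & 1:
            r ^= a << i
    for i in (4, 3):            # reduce the degree-4 and degree-3 terms
        if (r >> i) & 1:
            r ^= 0b1011 << (i - 3)
    return r

def h(s, x):
    a, b = s
    return gf8_mul(a, x) ^ b    # h_s(x) = ax + b over GF[2^3]

x, y = 3, 5                     # any two distinct field elements
counts = Counter((h((a, b), x), h((a, b), y)) for a in range(8) for b in range(8))
assert len(counts) == 64 and set(counts.values()) == {1}
print("(h_s(x), h_s(y)) is uniform over GF[2^3] x GF[2^3]")
```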

43 Public Coins and GNI
• We want to show that if the two input graphs are non-isomorphic, there is a fairly decent chance that the prover P will be able to find a graph in W which is mapped to β by hs.
• Let the table size be 2N, where N=|W|, and fix β ∈ {1,...,2N}. Define Ei to be the event that the i-th element of W is mapped to β.
Pr[at least one element of W is mapped to β]
= Pr[E1 ∪ ... ∪ EN]
≥ Σi Pr[Ei] - Σi<j Pr[Ei ∧ Ej]   (inclusion-exclusion)
= N·(1/2N) - C(N,2)·(1/4N²)   (using the 2-universal hash family)
≥ 3/8
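The final bound can be sanity-checked numerically; with Pr[Ei]=1/2N and Pr[Ei ∧ Ej]=1/4N² from 2-universality, the expression equals 3/8 + 1/8N, so it is at least 3/8 for every N:

```python
from math import comb

for N in (2, 10, 1000):
    bound = N / (2 * N) - comb(N, 2) / (4 * N * N)
    assert bound >= 3 / 8
    print(N, bound)
```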

44 Public Coins and GNI
• If x ∈ L, the probability that V accepts is thus at least 3/8.
• If x ∉ L, W is only 1/4 the size of the table, so the probability that V accepts is at most 1/4.
• The gap between these probabilities can be amplified in the usual way.
• This concludes our proof of the correctness of the public-coin interactive proof for GNI. Yet the question remains: are public coins as powerful as private coins in interactive proofs?
• Next we introduce the related notions and quote some interesting theorems regarding public coins.

45 Public Coins
• Public-Coin Proof Systems (also known as Arthur-Merlin Games) are interactive proof systems in which, at each round, the verifier can only toss coins and send their outcome to the prover. In the last round, the verifier decides whether to accept or reject.
• Intuitively: Arthur cannot ask Merlin tricky questions, only random ones, because Merlin knows all his tricks...
• For every integer function r(·), the complexity class AM(r(·)) consists of all the languages that have an Arthur-Merlin proof system in which, on common input x, at most r(|x|) rounds are used.
• We denote AM=AM(2).

46 Public Coins
• We quote the following results without proof:
• Relating IP to AM: for every r(·), IP(r(·)) ⊆ AM(r(·)+2).
• Linear Speed-Up Theorem: for every r(·) ≥ 2, AM(2r(·))=AM(r(·)).
• We conclude:
– for every r(·) ≥ 2, IP(2r(·))=IP(r(·))
– IP(O(1))=AM(2)

47 Bibliography
• In addition to the lecture notes from Oded Goldreich's course (written by Danny Harnik, Tzvika Hartman and Hillel Kugler), I also used:
• Sipser's "Advanced Topics in Complexity Theory" chapter, for the IP=PSPACE proof.
• Michael Luby and Avi Wigderson, "Pairwise Independence and Derandomization", July 1995, for the public-coin interactive proof for GNI.
