1 Randomized Computation. Roni Parshani 025529199, Orly Margalit 037616638, Eran Mantzur 028015329, Avi Mintz 017629262

2 RP – Random Polynomial Time
Notation: L is a language; M is a probabilistic polynomial-time Turing machine.
Definition: L ∈ RP if ∃ M such that
x ∈ L ⇒ Prob[ M(x) = 1 ] ≥ ½
x ∉ L ⇒ Prob[ M(x) = 0 ] = 1
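A concrete example may help (it is not from the slides): one round of the Miller-Rabin test puts the set of composite numbers in RP, assuming the standard fact that for a composite n at least ¾ of the bases witness compositeness, while a prime is never flagged. A minimal Python sketch:

import random

def composite_witness(n: int) -> int:
    """One Miller-Rabin round: returns 1 ("composite") only when a
    witness is found, 0 otherwise. For composite n a random base is a
    witness with probability >= 3/4 >= 1/2; for prime n the answer is
    always 0 - exactly the one-sided error that RP allows."""
    if n < 4:
        return 0            # 2 and 3 are prime; treat tiny n as "not composite"
    if n % 2 == 0:
        return 1            # an even n > 2 is certainly composite
    d, s = n - 1, 0
    while d % 2 == 0:       # factor n - 1 = 2**s * d with d odd
        d //= 2
        s += 1
    a = random.randrange(2, n - 1)   # the machine's coin tosses
    x = pow(a, d, n)
    if x == 1 or x == n - 1:
        return 0
    for _ in range(s - 1):
        x = pow(x, 2, n)
        if x == n - 1:
            return 0
    return 1                # base a proves that n is composite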

3 RP – Random Polynomial Time
The demanding side of RP (resp. coRP) is that when the input does not belong to the language (resp. does belong to it), the machine must return a correct answer every time.
Definition (characteristic function):
x ∈ L ⇒ χ_L(x) = 1
x ∉ L ⇒ χ_L(x) = 0

4 RP ⊆ NP
Proof:
Given: L ∈ RP. Aim: L ∈ NP.
L ∈ RP ⇒ ∃ M such that
x ∈ L ⇒ more than 50% of the strings y give M(x,y) = 1 ⇒ ∃ y : M(x,y) = 1
x ∉ L ⇒ ∀ y : M(x,y) = 0
So the random string y serves as an NP witness ⇒ L ∈ NP

5 coRP - Complementary Random Polynomial Time
Definition: L ∈ coRP if ∃ M such that
x ∈ L ⇒ Prob[ M(x) = 1 ] = 1
x ∉ L ⇒ Prob[ M(x) = 0 ] ≥ ½
An alternative way to define coRP is coRP = { L̄ : L ∈ RP }

6 coRP ⊆ co-NP
Proof:
Given: L ∈ coRP. Aim: L ∈ co-NP.
L ∈ coRP ⇒ L̄ ∈ RP ⇒ L̄ ∈ NP ⇒ L ∈ co-NP

7 RP1
Notation: p(.) is a positive polynomial.
Definition: L ∈ RP1 if ∃ M, p(.) such that
x ∈ L ⇒ Prob[ M(x,r) = 1 ] ≥ 1/p(|x|)
x ∉ L ⇒ Prob[ M(x,r) = 0 ] = 1

8 RP2
Notation: p(.) is a positive polynomial.
Definition: L ∈ RP2 if ∃ M, p(.) such that
x ∈ L ⇒ Prob[ M(x,r) = 1 ] ≥ 1 − 2^−p(|x|)
x ∉ L ⇒ Prob[ M(x,r) = 0 ] = 1

9 RP1 = RP2 = RP
Aim: RP1 = RP2
RP2 ⊆ RP1: for all large enough |x| we have 1/p(|x|) < 1 − 2^−p(|x|), so a machine meeting the RP2 threshold also meets the RP1 threshold.

10 RP1 = RP2 = RP
RP1 ⊆ RP2:
L ∈ RP1 ⇒ ∃ M, p(.) such that ∀ x ∈ L : Prob[ M(x,r) = 1 ] ≥ 1/p(|x|)
We run M(x,r) t(|x|) times:
If in any of the runs M(x,r) = 1 ⇒ output is 1
If in all of the runs M(x,r) = 0 ⇒ output is 0

11 RP1 ⊆ RP2
Select t(|x|) ≥ p(|x|)² (a standard choice).
Therefore if x ∉ L ⇒ the output is 0.
If x ∈ L, the output is 0 only if M(x,r) = 0 in all t(|x|) runs ⇒
( Prob[M(x,r) = 0] )^t(|x|) ≤ (1 − 1/p(|x|))^t(|x|) ≤ e^−p(|x|) ≤ 2^−p(|x|)

12 RP1 ⊆ RP2
⇒ So the probability of outputting 1 is at least 1 − 2^−p(|x|) ⇒ L ∈ RP2
Conclusion: RP2 ⊆ RP ⊆ RP1 ⊆ RP2
Therefore RP1 = RP = RP2
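A sketch of this amplification (assuming the RP1 machine is given as a Python callable M(x, r) and the repetition count t = t(|x|) is supplied; the random bits stand in for the machine's coin tosses):

import random

def amplified(M, x, t):
    """RP1 -> RP2 amplification: accept iff any of t independent runs
    accepts. A "no" instance is never accepted, since M has one-sided
    error; a "yes" instance is missed with probability at most
    (1 - 1/p)**t <= 2**-p when t >= p**2."""
    for _ in range(t):
        r = random.getrandbits(128)   # fresh coin tosses for each run
        if M(x, r) == 1:
            return 1
    return 0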

13 BPP – Bounded-Probability Polynomial Time
Definition: L ∈ BPP if ∃ M such that
x ∈ L ⇒ Prob[ M(x) = 1 ] ≥ ⅔
x ∉ L ⇒ Prob[ M(x) = 1 ] < ⅓
In other words: ∀ x : Prob[ M(x) = χ_L(x) ] ≥ ⅔

14 coBPP = BPP
coBPP = { L̄ : L ∈ BPP }
= { L̄ : ∃ M : Prob[ M(x) = χ_L(x) ] ≥ ⅔ }
= { L̄ : ∃ M̄ : Prob[ M̄(x) = χ_L̄(x) ] ≥ ⅔ }
= BPP
where M̄(.) = 1 − M(.) (M(.) exists iff M̄(.) exists)

15 BPP1
Previously we defined stricter and weaker versions of RP; in a similar way we now do so for BPP.
Notation: p(.) – positive polynomial; f – polynomial-time computable function.
Definition: L ∈ BPP1 if ∃ M, p(.), f such that
x ∈ L ⇒ Prob[ M(x) = 1 ] ≥ f(|x|) + 1/p(|x|)
x ∉ L ⇒ Prob[ M(x) = 1 ] < f(|x|) − 1/p(|x|)

16 BPP = BPP1
Proof: Aim: BPP ⊆ BPP1
Take f(|x|) = ½ and p(|x|) = 6: then f(|x|) + 1/p(|x|) = ⅔ and f(|x|) − 1/p(|x|) = ⅓, which gives the original definition of BPP.

17 BPP = BPP1
Proof: Aim: BPP1 ⊆ BPP
L ∈ BPP1 ⇒ ∃ M such that
∀ x ∈ L : Prob[ M(x) = 1 ] ≥ f(|x|) + 1/p(|x|)
∀ x ∉ L : Prob[ M(x) = 1 ] < f(|x|) − 1/p(|x|)

18 BPP1 ⊆ BPP
Writing q = Prob[ M(x) = 1 ], we want to know with probability > ⅔ whether 0 ≤ q ≤ f(|x|) − 1/p(|x|) or f(|x|) + 1/p(|x|) ≤ q ≤ 1.
Define: M' runs M(x) n times; let X̄ be the average of the n answers.
If X̄ > f(|x|) M' returns YES, else NO.

19 BPP1 ⊆ BPP
Calculation of n: the n answers X₁, …, Xₙ are independent Bernoulli variables with expectation q, and M' errs only if the average X̄ lands on the wrong side of f(|x|), which forces |X̄ − q| ≥ δ with δ = 1/p(|x|). By the Chernoff bound (slide 24),
Prob[ |X̄ − q| ≥ δ ] ≤ 2·e^(−δ²n/(2q(1−q))) ≤ 2·e^(−2δ²n)

20 BPP1 ⊆ BPP
Choose: δ = 1/p(|x|) and n = p(|x|)², so that 2·e^(−2δ²n) = 2·e^(−2) < ⅓.
Result: M' decides L with probability > ⅔
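A sketch of this M' (assuming M, the threshold f_x = f(|x|), and the number of runs n are supplied as shown; the random bits again stand in for coin tosses):

import random

def threshold_decider(M, x, f_x, n):
    """BPP1 -> BPP: estimate q = Prob[M(x) = 1] by the average of n
    independent runs and compare it with the threshold f(|x|). By the
    Chernoff bound, n polynomial in p(|x|) pushes the error below 1/3."""
    hits = sum(M(x, random.getrandbits(128)) for _ in range(n))
    return 1 if hits / n > f_x else 0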

21 BPP2
Notation: p(.) – positive polynomial.
Definition: L ∈ BPP2 if ∃ M, p(.) such that
∀ x : Prob[ M(x) = χ_L(x) ] ≥ 1 − 2^−p(|x|)

22 BPP = BPP2
Proof: Aim: BPP2 ⊆ BPP
Take p(|x|) = 2: then 1 − 2^−p(|x|) = ¾ ≥ ⅔, which meets the original definition of BPP.

23 BPP = BPP2
Proof: Aim: BPP ⊆ BPP2
L ∈ BPP ⇒ ∃ M : ∀ x Prob[ M(x) = χ_L(x) ] ≥ ⅔
Define: M' runs M(x) n times; each run returns Xᵢ ∈ {0,1}; let X̄ = (1/n)·Σᵢ Xᵢ.
If X̄ > ½ M' returns YES, else NO.
We know: Exp[M(x)] ≥ ⅔ if x ∈ L, and Exp[M(x)] < ⅓ if x ∉ L

24 BPP ⊆ BPP2
Chernoff's bound: Let {X₁, X₂, …, Xₙ} be independent Bernoulli variables with the same expectation p, and let δ satisfy 0 < δ ≤ p(1−p). Then
Prob[ |(1/n)·Σᵢ Xᵢ − p| ≥ δ ] ≤ 2·e^(−δ²n/(2p(1−p)))

25 BPP ⊆ BPP2
From Chernoff's bound with δ = 1/6 (and p(1−p) ≤ 2/9 here):
⇒ Prob[ |X̄ − Exp[M(x)]| ≥ 1/6 ] ≤ 2·e^(−n/16)
But if |X̄ − Exp[M(x)]| < 1/6 then M' returns a correct answer

26 BPP ⊆ BPP2
⇒ Prob[ M'(x) = χ_L(x) ] ≥ 1 − 2·e^(−n/16)
For any polynomial p(.) we can choose n = 16·(p(|x|) + 1), still polynomial in |x|, so that 2·e^(−n/16) ≤ 2^−p(|x|)
⇒ Prob[ M'(x) = χ_L(x) ] ≥ 1 − 2^−p(|x|) ⇒ L ∈ BPP2
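As a sanity check (not from the slides), here is a sketch of this majority-vote M' together with an exact binomial computation of the majority's error, assuming each run of the callable M is independently correct with probability ⅔ and n is odd:

import random
from math import comb

def majority_vote(M, x, n):
    """BPP -> BPP2 amplification: run M n times and answer by majority."""
    ones = sum(M(x, random.getrandbits(128)) for _ in range(n))
    return 1 if ones > n / 2 else 0

def majority_error(n, p=2/3):
    """Exact probability that the majority of n runs is wrong when each
    run is correct independently with probability p (n odd)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n // 2 + 1))

# majority_error(n) shrinks exponentially as n grows, consistent with
# the 2**-p(|x|) bound derived on the slides.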

27 RP ⊆ BPP
Proof: L ∈ RP if ∃ M such that
x ∈ L ⇒ Prob[ M(x) = 1 ] ≥ ½
x ∉ L ⇒ Prob[ M(x) = 0 ] = 1
We previously proved BPP = BPP1. Placing f(|x|) = ⅓ and p(|x|) = 6 in the BPP1 formula gives exactly what RP guarantees:
x ∈ L ⇒ Prob[ M(x) = 1 ] ≥ ½ = f(|x|) + 1/p(|x|)
x ∉ L ⇒ Prob[ M(x) = 1 ] = 0 < ⅙ = f(|x|) − 1/p(|x|)

28 P ⊆ BPP
Proof: L ∈ P ⇒ ∃ M such that M(x) = χ_L(x)
⇒ ∀ x : Prob[ M(x) = χ_L(x) ] = 1 ≥ ⅔ ⇒ L ∈ BPP

29 PSPACE
Definition: L ∈ PSPACE if ∃ M such that M(x) = χ_L(x) and ∃ p(.) such that M uses at most p(|x|) space. (No time restriction.)

30 PP – Probabilistic Polynomial Time
Definition: L ∈ PP if ∃ M such that
x ∈ L ⇒ Prob[ M(x) = 1 ] > ½
x ∉ L ⇒ Prob[ M(x) = 1 ] < ½
In other words: ∀ x : Prob[ M(x) = χ_L(x) ] > ½

31 PP ⊆ PSPACE
Definition (reminder): L ∈ PP if ∃ M such that ∀ x : Prob[ M(x) = χ_L(x) ] > ½
Proof: L ∈ PP ⇒ ∃ M, p(.) such that ∀ x : Prob[ M(x,r) = χ_L(x) ] > ½, where M is polynomial-time and r ranges over the coin sequences of length p(|x|).
Over all choices of r, M is correct on more than 50% of them.

32 PP ⊆ PSPACE
Aim: L ∈ PSPACE
Run M on every single r, counting the number of 1 and 0 answers received. The correct answer is the majority result.

33 PP ⊆ PSPACE
By the definition of PP, for every L ∈ PP this algorithm is always correct.
M(x,r) is polynomial in space ⇒ the new algorithm is polynomial in space ⇒ L ∈ PSPACE
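A sketch of the enumeration (M as a callable, p_len standing for p(|x|)):

from itertools import product

def pp_in_pspace(M, x, p_len):
    """Decide a PP language deterministically: enumerate every coin
    sequence r of length p(|x|), tally M's answers, output the majority.
    Time is exponential, but the storage is just two counters and the
    current r - polynomial space, as PSPACE requires."""
    ones = zeros = 0
    for r in product((0, 1), repeat=p_len):   # all 2**p_len coin sequences
        if M(x, r) == 1:
            ones += 1
        else:
            zeros += 1
    return 1 if ones > zeros else 0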

34 Claim: PP = PP1
PP1 is defined like PP except that for x ∉ L it demands only Prob[ M(x) = 1 ] ≤ ½ (a tie is allowed on the NO side).
A machine that satisfies PP also satisfies PP1, since PP is stricter than PP1: PP demands a correct answer with probability strictly greater than ½ where PP1 demands only greater than or equal to ½. So clearly PP ⊆ PP1.

35 Let L be a language in PP1, decided by machine M.
Motivation: the trick is to build a machine that shifts the answer of M towards NO with a very small probability, smaller than the smallest probability gap that M could have. If M is biased towards YES, our shift does not change the direction of the bias; but if there is no bias (or a bias towards NO), the shift gives a strict bias towards NO.

36 Proof: Let M' be defined as follows.

37 M' chooses one of two moves: with probability 2^−p(|x|) it returns NO; with probability 1 − 2^−p(|x|) it invokes M. (Here p(|x|) bounds M's running time, so every output probability of M is a multiple of 2^−p(|x|).)

38 If x ∈ L: Prob[ M(x) = 1 ] > ½, hence Prob[ M(x) = 1 ] ≥ ½ + 2^−p(|x|), and
Prob[ M'(x) = 1 ] = (1 − 2^−p(|x|)) · Prob[ M(x) = 1 ] ≥ (1 − 2^−p(|x|)) · (½ + 2^−p(|x|)) > ½
(using 2^−p(|x|) < ½).

39 If x ∉ L: Prob[ M(x) = 0 ] ≥ ½, and
Prob[ M'(x) = 0 ] = 2^−p(|x|) + (1 − 2^−p(|x|)) · Prob[ M(x) = 0 ] ≥ 2^−p(|x|) + (1 − 2^−p(|x|)) · ½ > ½
So M' satisfies the PP definition ⇒ PP1 ⊆ PP.
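A sketch of this shift machine under the same assumptions (M given as a callable taking the input and a list of coin tosses; p_len standing for p(|x|)):

import random

def shifted(M, x, p_len):
    """M': with probability 2**-p_len answer NO outright, otherwise
    defer to M. The shift is smaller than M's smallest possible bias
    (all of M's probabilities are multiples of 2**-p_len), so a strict
    YES bias survives and a tie on the NO side becomes a strict NO bias."""
    if all(random.randint(0, 1) for _ in range(p_len)):   # prob 2**-p_len
        return 0
    r = [random.randint(0, 1) for _ in range(p_len)]      # coins for M
    return M(x, r)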

40 Aim: NP ⊆ PP. Suppose that L ∈ NP is decided by a non-deterministic machine M with a running time bounded by the polynomial p(|x|). The following machine M' then decides L:

41 M' uses its random coin tosses as a witness for M, with only one toss that it does not pass to M. This toss is used to choose its move: one of the two possible moves leads to the ordinary computation of M on the same input (with the random string as the witness).

42 The other move leads to a computation that always accepts. Consider a string x. If M has no accepting computation, then the probability that M' answers 1 is exactly ½. On the other hand, if M has at least one accepting computation, the probability that M' answers correctly is greater than ½.

43 Meaning L ∈ PP1, and by the previous claim (PP = PP1) we get that L ∈ PP. So we get that: NP ⊆ PP.
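A sketch of this M' (M_np standing for a verifier view of M: it takes the input plus a guessed witness; all names are illustrative):

import random

def guess_or_accept(M_np, x, p_len):
    """M' for NP <= PP1: one private toss chooses the move. The branch
    that always accepts contributes exactly 1/2 to Prob[M'(x) = 1]; if
    any accepting witness exists, the guessing branch adds at least
    (1/2) * 2**-p_len more, tipping the total strictly above 1/2."""
    if random.randint(0, 1):                        # the private toss
        return 1                                    # always-accepting move
    witness = [random.randint(0, 1) for _ in range(p_len)]
    return M_np(x, witness)                         # ordinary run of M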

44 ZPP – Zero-Error Probabilistic Polynomial Time
We define a probabilistic Turing machine which is allowed to reply “I don't know”, symbolized by “⊥”.
Definition: L ∈ ZPP if ∃ M such that
∀ x : Prob[ M(x) = ⊥ ] ≤ ½
∀ x : Prob[ M(x) = χ_L(x) or M(x) = ⊥ ] = 1
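The zero-error style of use may help before the proof; a sketch, assuming M is a callable returning 0, 1, or None (None playing the role of ⊥):

import random

def las_vegas(M, x):
    """Run a ZPP machine until it commits to an answer. Each attempt
    returns "don't know" with probability at most 1/2, so the expected
    number of attempts is at most 2, and every definite answer is
    correct by the ZPP definition."""
    while True:
        ans = M(x, random.getrandbits(128))
        if ans is not None:
            return ans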

45 Aim: ZPP ⊆ RP. Take L ∈ ZPP and let M be a “ZPP machine” for L. We will build a machine M' that decides L according to the definition of RP: M'(x) returns 0 whenever M(x) = ⊥, and otherwise returns M(x).

46 If x ∉ L then by returning 0 when M(x) = ⊥ we always answer correctly, because in this case Prob[ M(x) = 1 ] = 0.

47 If x ∈ L, the probability that M' gives the right answer is at least ½, since M returns a definite answer with probability at least ½ and M's definite answers are always correct.

48 In the same way it can be seen that by defining M'(x) to return 1 whenever M(x) = ⊥ (and M(x) otherwise) we get that L ∈ coRP. Hence ZPP ⊆ RP ∩ coRP.

49 Aim: RP ∩ coRP ⊆ ZPP. Take L ∈ RP ∩ coRP, with an RP machine M_RP and a coRP machine M_coRP for L. Define M'(x): if M_RP(x) = 1 return YES; if M_coRP(x) = 0 return NO; otherwise return ⊥.

50 If x ∈ L then we get a YES answer from M_RP, and hence from M', with probability at least ½ (and M_coRP never answers 0). If x ∉ L then we get a NO answer from M_coRP, and hence from M', with probability at least ½ (and M_RP never answers 1). Definite answers are always correct, so L ∈ ZPP.
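A sketch of this combined machine (M_rp, M_corp, and the coin arguments r1, r2 are illustrative names):

def combine(M_rp, M_corp, x, r1, r2):
    """ZPP machine from an RP machine and a coRP machine: trust each
    machine only in the direction where it never errs, and say ⊥
    (None) otherwise. For every x, one of the two definite outcomes
    occurs with probability at least 1/2."""
    if M_rp(x, r1) == 1:
        return 1        # M_rp accepts only members of L
    if M_corp(x, r2) == 0:
        return 0        # M_corp rejects only non-members of L
    return None         # ⊥ : no conclusive evidence this round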

51 RSPACE – Randomized Space Complexity
Definition: RSPACE(s) is the class of languages L accepted as in the definition of RP by a machine that uses at most s(|x|) space and exp(s(|x|)) time.
badRSPACE(s) is RSPACE(s) without the time restriction.

52 Claim: badRSPACE(s) = NSPACE(s)
badRSPACE ⊆ NSPACE: Let L ∈ badRSPACE. If x ∈ L, there is at least one witness (a sequence of coin tosses leading to acceptance), and the non-deterministic NSPACE machine can guess it.

53 If x ∉ L, there are no witnesses at all, therefore the non-deterministic NSPACE machine will not find an accepting computation either.

54 NSPACE ⊆ badRSPACE: Take L ∈ NSPACE and let M be the non-deterministic Turing machine which decides L in space S(|x|). If x ∈ L there exists r of length exp(S(|x|)) such that M(x,r) = 1, where r is the sequence of non-deterministic guesses used by M. Therefore the probability of selecting r so that M(x,r) = 1 is at least 2^−exp(S(|x|)).

55 So if we repeatedly invoke M(x,·) on random r's, we can expect to see an accepting computation after about 2^exp(S(|x|)) tries. So what we want our machine M' to do is run M on x and a newly selected random r (of length exp(S(|x|))) about 2^exp(S(|x|)) times, and accept iff M accepts in one of these tries.

56 Problem: in order to count to 2^exp(S(|x|)) we need a counter that uses exp(S(|x|)) space, and we only have S(|x|).

57 Solution: we will use a randomized counter that uses only S(|x|) space. We flip k = exp(S(|x|)) coins; if all come up heads we stop, else we go on. The expected number of tries before stopping is 2^k = 2^exp(S(|x|)). But the real counter only needs to count to k, and therefore only needs space log(k) = S(|x|).
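A sketch of the loop with the randomized counter (try_once stands for one run of M on x with a fresh random r; it is an assumed callable, as is the parameter k):

import random

def all_heads(k):
    """Flip k fair coins; True with probability exactly 2**-k."""
    return all(random.randint(0, 1) for _ in range(k))

def run_until_lucky(try_once, k):
    """Repeat try_once until either it accepts or the all-heads event
    says "stop". The loop runs 2**k times in expectation, yet nothing
    ever counts beyond k, so with k = exp(S(|x|)) the counting fits in
    O(log k) = S(|x|) space."""
    while True:
        if try_once():
            return True     # an accepting computation was found
        if all_heads(k):
            return False    # the randomized "counter" expired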

