1 Talk for Topics course

2 Pseudo-Random Generators. Use a short "seed" of very few truly random bits to generate a long string of pseudo-random bits. Pseudo-randomness: no small circuit can distinguish truly random bits from pseudo-random bits. Nisan-Wigderson setting: the generator is more powerful than the circuits it fools (e.g., the PRG runs in time n^5 against circuits of size n^3). Hardness vs. Randomness paradigm [BM,Y,S]: construct PRGs assuming hard functions, e.g., an f ∈ EXP that is hard (in the worst case) for small circuits. [NW88,BFNW93,I95,IW97,STV99,ISW99,ISW00]
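
In symbols, a standard way to make "no small circuit can distinguish" precise (the stretch m, circuit-size bound s, and error ε are parameters the slide leaves implicit): a generator G: {0,1}^n → {0,1}^m is pseudo-random if every circuit C of size s satisfies

```latex
\Bigl|\;\Pr\bigl[C(G(U_n)) = 1\bigr] \;-\; \Pr\bigl[C(U_m) = 1\bigr]\;\Bigr| \;\le\; \varepsilon,
```

where U_k denotes k uniformly random bits.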

3 Hardness versus Randomness. Initiated by [BM, Yao, Shamir]. Assumption: explicit hard functions exist ⇒ efficient PRGs exist ⇒ derandomization of probabilistic algorithms.

4 Today's talk. Theorem: If there exists a function f such that f ∈ EXP and f is hard (in the worst case) for poly-size circuits, then there is a PRG which is computable in EXP, stretches n bits to n^c bits (for an arbitrary constant c), and "fools" poly-size circuits. Conclusion: BPP ⊆ SUBEXP. Remark: a stronger assumption gives BPP = P.

5 The Impagliazzo-Wigderson assumption. There exists a function f which is computable in time 2^O(n) but cannot be computed by Boolean circuits of size 2^(δn), for some 0<δ<1. (Compare with the assumption NP ≠ P: a function computable in non-deterministic time poly(n) that cannot be computed in time poly(n).)

6 Converting hardness into pseudo-randomness. Basic idea of all PRG constructions: if f is "very hard" for efficient algorithms, then f(x) "looks like" a random coin to an efficient algorithm which gets x. Suggestion: PRG(x) = x, f(x). This PRG is computable in exponential time and fools poly-size circuits.

7 A quick overview. Hardness vs. Randomness: Cryptography: [BM,Y,S,HILL]. Derandomization: [NW88,BFNW93,I95,IW97,IW98,STV99,ISW99,ISW00,SU01,U02]. Important milestone [NW88,IW97]: under suitable hardness assumptions, every probabilistic algorithm can be completely and efficiently derandomized! Deterministic algorithms are just as strong as probabilistic algorithms!

8 Goal: Construct efficient pseudo-random generators. We're given a hard function f on n bits. We want to construct an efficient PRG that stretches a short seed of n truly random bits into n^10 pseudo-random bits.

9 A naive idea. View f by its truth table: f(1), f(2), f(3), ..., f(x), ..., f(2^n). G outputs n^10 successive values of f: G(x) = f(x), f(x+1), ..., f(x+n^10).
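
A minimal sketch of this naive generator, under the simplifying assumption that f is given as an explicit truth table (the names naive_prg and stretch are illustrative, not from the talk):

```python
def naive_prg(truth_table, seed, stretch):
    """Output `stretch` successive values of f starting at the seed."""
    N = len(truth_table)  # N = 2^n entries: f(0), ..., f(2^n - 1)
    # Indices wrap around so the output is well defined for every seed.
    return [truth_table[(seed + i) % N] for i in range(stretch)]

# Toy example: f on 3 bits, seed 5, four output bits.
f_table = [0, 1, 1, 0, 1, 0, 0, 1]
print(naive_prg(f_table, seed=5, stretch=4))  # [0, 0, 1, 0]
```

Of course, a real PRG cannot afford to store the 2^n-entry table; the point is that G only needs to evaluate f, which is computable in exponential time.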

10 Want to prove: f is hard ⇒ G is pseudo-random. Equivalently (the contrapositive): G isn't pseudo-random ⇒ f isn't hard.

11 Outline of Proof. G isn't pseudo-random ⇒ there exists a next-bit predictor P for G ⇒ use P to compute f ⇒ f isn't hard.

12 Next-Bit Predictors. Truly random bits satisfy: every bit is random given the previous bits. That is, for every algorithm P: Pr[P(prefix) = next bit] = 1/2.

13 Next-Bit Predictors. Thm [Yao82]: If G isn't pseudo-random then it has a weak bit: there is an efficient algorithm P which predicts this bit given the previous bits, i.e., P(prefix) = next bit with probability ≥ 1/2 + ε.
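
In symbols, the standard statement behind the slide (the loss ε/m in the advantage comes from the hybrid argument; m is the output length): if G: {0,1}^n → {0,1}^m is not ε-pseudo-random, then there is an index i and an efficient predictor P with

```latex
\Pr_{x}\Bigl[\,P\bigl(G(x)_1,\dots,G(x)_{i-1}\bigr) = G(x)_i\,\Bigr] \;\ge\; \frac{1}{2} + \frac{\varepsilon}{m}.
```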

14 Showing that f is easy. To show that f is easy we'll use P to show that it is efficiently computable: f has a poly-size circuit. Circuits are algorithms with "non-uniform advice": we can choose n^O(1) inputs, query f on these inputs, and hard-wire the answers.

15 Rules of the game. We need to design an algorithm that: queries f at few positions (poly(n)); uses the next-bit predictor P; computes f everywhere (on all 2^n positions).

16 Computing f using few queries. Simplifying assumption: P(prefix) = next bit with probability 1. Queries (non-uniform advice): f(0), ..., f(i-1), i.e., n^10 bits. Then use P to compute f(i), f(i+1), f(i+2), ...: P(f(0)...f(i-1)) = f(i), P(f(1)...f(i)) = f(i+1), P(f(2)...f(i+1)) = f(i+2), and so on. This computes f everywhere.
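
A minimal sketch of this bootstrapping step, under the slide's probability-1 assumption (the names predictor and advice, and the toy rule below, are illustrative; the real P only succeeds with probability 1/2 + ε):

```python
def compute_f_everywhere(predictor, advice, N):
    """Given f(0..i-1) as advice and an errorless predictor, recover all of f."""
    values = list(advice)      # the first i values of f, queried directly
    i = len(advice)
    while len(values) < N:     # N = 2^n positions in the truth table
        prefix = values[-i:]   # the i most recent values form the prefix
        values.append(predictor(prefix))
    return values

# Toy example: f's next bit is the XOR of the previous two bits, and the
# "predictor" knows this rule exactly (probability-1 prediction).
perfect_p = lambda w: w[-1] ^ w[-2]
print(compute_f_everywhere(perfect_p, advice=[0, 1], N=8))
# -> [0, 1, 1, 0, 1, 1, 0, 1]
```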

17 Rules of the game (continued). We need to design an algorithm that: queries f at few positions (poly(n)); uses the next-bit predictor P; computes f everywhere (on all 2^n positions). * To get a small circuit we also need that, for every x, f(x) can be computed in time n^O(1) given the non-uniform advice.

18 A Problem: The predictor makes errors. We've made the simplifying assumption that Pr_x[P(prefix) = next bit] = 1, but we are only guaranteed that Pr_x[P(prefix) = next bit] > 1/2 + ε. As soon as one prediction is wrong, the prefix is corrupted and we cannot continue. Use error-correcting techniques to recover from errors!

19 Using multivariate polynomials. So far we have viewed the truth table of f as a line of length 2^n: one dimension.

20 Using multivariate polynomials. Instead, view the truth table as a cube of side 2^(n/2) in many dimensions, with entries f(x_1, x_2). W.l.o.g. f(x_1, ..., x_d) is a low-degree polynomial in d variables.* * Low-degree extension: we take a field F with about 2^(n/d) elements and extend f to a polynomial of degree about 2^(n/d) in d variables.
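
A minimal sketch of a low-degree extension in the simplest case: f on the cube {0,1}^2 extended to a multilinear polynomial over F_p (the talk uses larger subcubes and degree about 2^(n/d); p = 13 is an arbitrary small prime for illustration):

```python
P = 13  # an arbitrary small prime field F_p

def multilinear_extension(f, x1, x2):
    """Evaluate the unique multilinear extension of f at (x1, x2) mod P."""
    total = 0
    for b1 in (0, 1):
        for b2 in (0, 1):
            # Lagrange basis polynomial: 1 at (b1, b2), 0 elsewhere on {0,1}^2.
            chi = ((x1 if b1 else 1 - x1) * (x2 if b2 else 1 - x2)) % P
            total = (total + f[(b1, b2)] * chi) % P
    return total

f = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}  # f = XOR
assert all(multilinear_extension(f, a, b) == f[(a, b)] for (a, b) in f)
print(multilinear_extension(f, 5, 7))  # 7: a value at a point outside the cube
```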

21 Adjusting to Many Dimensions. Problem: there is no natural meaning to "successive" in many dimensions. Attempt: successive = move one point right. The Generator: G(x_1, x_2) = f(x_1, x_2), ..., f(x_1, x_2 + n^10).

22 Decoding Errors. Apply the predictor in parallel along a line; we get a (1/2+ε)-fraction of correct predictions (with high probability, if the line is chosen at random). A restriction of f to a line is a univariate polynomial, and low-degree univariate polynomials have error-correcting properties. Interpolation: if we know a degree-k polynomial on k+1 points, we can compute it on all points. Coding theory studies the case where we don't know which positions were predicted correctly: if the number of errors is small (<25%), it is possible to recover the correct values. So we can apply error correction and learn all points on the line.
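
A minimal sketch of the interpolation step only, assuming a degree-k polynomial over a prime field F_p and k+1 error-free values (p = 97 is arbitrary; decoding with unknown error positions, e.g. Berlekamp-Welch, needs more machinery than shown here):

```python
P = 97  # an arbitrary prime field F_p

def lagrange_eval(points, x):
    """Evaluate, at x, the unique low-degree polynomial through `points`."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        # pow(den, -1, P) is the inverse of den modulo the prime P.
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

# f(t) = 3t^2 + 1 has degree 2, so any 3 values determine it everywhere.
pts = [(t, (3 * t * t + 1) % P) for t in (0, 1, 2)]
print(lagrange_eval(pts, 10))  # 301 mod 97 = 10
```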

23 Too many errors. The predictor succeeds only with probability 1/2 + ε, so it may make almost 50% errors. Coding theory: that is not enough information on the line to decode.* But we also have the information we previously computed! * It is possible to "list-decode" and obtain a few polynomials, one of which is correct.

24 Curves Instead of Lines. Lines are degree-1 polynomials: L(t) = at + b. Curves have higher degree (n^O(1)): C(t) = a_r t^r + a_(r-1) t^(r-1) + ... + a_0. Observation: f restricted to a low-degree curve is still a low-degree univariate polynomial.

25 A special curve with intersection properties. The curve passes through: a few (random) points, and successive points. This curve intersects itself when moved!

26 Recovering From Errors. Just like before: query f on n^10 successive curves (these are known with no errors: previously computed). Apply the predictor in parallel to the next curve, obtaining a (1/2+ε)-fraction of correct predictions.

27 Recovering From Errors. Lemma: given the "noisy" predicted values plus a few correct values, we can correct!

28 Recovering From Errors. Given the "noisy" predicted values and a few correct values, we can correct: we have implemented an errorless predictor!

29 Story so far… We can "error-correct" a predictor that makes errors. Contribution to Coding Theory: our strategy gives a new (list-)decoding algorithm for Reed-Muller codes [AS97,STV99].

30 How many queries? We make n^10 · |curve| queries, but we want n^O(1). A curve spanning a dimension of size 2^(n/2) is far too long: we want to use short curves.

31 Using many dimensions. 1 dimension: curves of length 2^n. 2 dimensions: 2^(n/2). 3 dimensions: 2^(n/3). d dimensions: 2^(n/d). Taking d = Ω(n/log(n)) gives length n^O(1).

32 Conflict? Many dimensions: error correction, few queries. One dimension: natural meaning to "successive". We'd like to have both!

33 A different Successor Function. View F^d both as a vector space over the base field F and as the extension field of F of degree d. The multiplicative group of the extension field has a generator g: F^d \ {0} = {1, g, g^2, g^3, ...}. Define Successor(v) = g·v. This successor covers the whole space, so we compute f everywhere!

34 A New Successor Function. Successor(v) = g·v covers the space, so we compute f everywhere. Moreover, v ↦ g·v is an invertible linear transformation of the vector space F^d: it maps curves to curves!
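
A minimal sketch of this successor map in the tiny field GF(2^3), built from the primitive polynomial x^3 + x + 1 so that g = x generates the multiplicative group (the field, polynomial, and names are illustrative; the talk needs much larger parameters):

```python
MODULUS = 0b1011   # x^3 + x + 1, a primitive polynomial over F = GF(2)
DEGREE = 3         # d = 3 dimensions; elements are 3-bit vectors over F

def successor(v):
    """Successor(v) = g * v: multiply by x, then reduce mod the polynomial."""
    v <<= 1                  # multiplication by x is a shift...
    if v >> DEGREE:          # ...followed by one reduction step;
        v ^= MODULUS         # note the whole map is F_2-linear in v.
    return v

# Starting from 1, the successor walks through every nonzero vector:
v, seen = 1, []
for _ in range(2 ** DEGREE - 1):
    seen.append(v)
    v = successor(v)
print(seen)                                         # [1, 2, 4, 3, 6, 7, 5]
print(sorted(seen) == list(range(1, 2 ** DEGREE)))  # True: covers the space
```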

35 Nothing Changes! The lemma still holds: given the "noisy" predicted values and a few correct values, we can correct.

36 The final Construction. Ingredients: f(x_1, ..., x_d), a d-variate low-degree polynomial; g, a generator of the multiplicative group of the extension field F^d. Pseudo-Random Generator: G(v) = f(v), f(g·v), f(g^2·v), ..., f(g^(n^10)·v). This is essentially the naive idea we started from.* * The actual construction is a little bit more complicated.
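
A minimal sketch putting the pieces together in the toy field GF(2^3) from the previous sketch (the table f below is an arbitrary stand-in for the d-variate polynomial, and m = 4 stands in for the n^10 output length):

```python
MODULUS, DEGREE = 0b1011, 3  # GF(2^3) as before; g = x

def successor(v):
    v <<= 1
    if v >> DEGREE:
        v ^= MODULUS
    return v

def prg(f, seed, m):
    """Output m successive values of f along the g-orbit of the seed."""
    out, v = [], seed
    for _ in range(m):
        out.append(f[v])
        v = successor(v)     # move to the "next" point: v -> g * v
    return out

f = {v: v % 2 for v in range(2 ** DEGREE)}  # toy stand-in for f(x1,..,xd)
print(prg(f, seed=1, m=4))  # f along the orbit 1, 2, 4, 3 -> [1, 0, 0, 1]
```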

37 Summary of proof. Query f at a few short successive "special curves". Use the predictor to learn the next curve, with errors. Use the intersection properties of the special curve to error-correct the current curve. Successive curves cover the space, and so we compute f everywhere.

38 Conclusion. A simple construction of PRGs. (Almost all the complications we talked about are in the proof, not the construction!) This construction and proof are very versatile and have many applications: randomness extractors, (list-)decoding, hardness amplification, derandomizing Arthur-Merlin games, unbalanced expander graphs. Further research: other uses for the naive approach to PRGs; other uses for the error-correcting technique.

39 That's it…

40 The next step in derandomization? Continue studying relations between various derandomization tasks and hard functions. Recent works [IKW01,IK02] essentially give: derandomization ⇒ (weak) explicit hard functions, i.e., a way to prove that (weak) explicit hard functions exist! The existence of (weak) explicit hard functions may be easier to prove: (NEXP ⊄ P/poly) may be easier than (NP ≠ P).

41 What I didn't show. Next step: use the error-corrected predictor to compute f everywhere. The cost of "error-correction": we're using too many queries just to get started, and we're using many dimensions (f is a polynomial in many variables); it's not clear how to implement the naive strategy in many dimensions! More details from the paper/survey: www.wisdom.weizmann.ac.il/~ronens
