
Slide 1: Simple extractors for all min-entropies and a new pseudo-random generator. Ronen Shaltiel, Chris Umans.

Slide 2: Pseudo-Random Generators. Use a short "seed" of very few truly random bits to generate a long string of pseudo-random bits. Pseudo-randomness: no small circuit can distinguish truly random bits from pseudo-random bits. Nisan-Wigderson setting: the generator is more powerful than the circuit (e.g., the PRG runs in time n^5 for circuits of size n^3). Hardness vs. randomness paradigm [BM,Y,S]: construct PRGs assuming hard functions, i.e., a function f in EXP that is hard (on worst case) for small circuits. [NW88,BFNW93,I95,IW97,STV99,ISW99,ISW00]

Slide 3: Randomness Extractors [NZ]. Extractors extract many random bits from arbitrary distributions that contain sufficient randomness: a sample from a physical source of randomness, modeled as a high min-entropy distribution, is turned into an output statistically close to the uniform distribution. This is impossible for deterministic procedures!

Slide 4: Randomness Extractors [NZ]. Extractors therefore use a short seed of truly random bits to extract many random bits from arbitrary distributions that contain sufficient randomness. Extractors have many applications! There has been a lot of work on explicit constructions [vN53,B84,SV86,Z91,NZ93,SZ94,Z96,T96,T99,RRV99,ISW00,RSW00,TUZ01,TZS02]. A survey is available from my homepage.

Slide 5: Trevisan's argument: PRGs and extractors. A PRG stretches a short seed into pseudo-random bits using a hard function; an extractor stretches a short seed into random bits using imperfect randomness. Trevisan's argument: every PRG construction with certain relativization properties is also an extractor. Extractors built from the Nisan-Wigderson generator: [Tre99,RRV99,ISW00,TUZ01].

Slide 6: The method of Ta-Shma, Zuckerman and Safra [TZS01]. Use Trevisan's argument to give a new method for constructing extractors: extractors by solving a "generalized list-decoding" problem. (List-decoding already played a role in this area [Tre99,STV99].) The solution is inspired by list-decoding algorithms for Reed-Muller codes [AS,STV99], and gives a simple and direct construction.

Slide 7: Our results. We use the ideas of [TZS01] in an improved way:
- Simple and direct extractors for all min-entropies (for every a>0, seed = (1+a) log n, output = k/(log n)^{O(a)}).
- A new list-decoding algorithm for Reed-Muller codes [AS97,STV99].
- Trevisan's argument "the other way": a new PRG construction (does not use the Nisan-Wigderson PRG).
- Optimal conversion of hardness into pseudo-randomness (an HSG construction using only "necessary" assumptions).
- Improved PRGs for nondeterministic circuits (consequence: better derandomization of AM).
A subsequent paper [Uma02] gives quantitative improvements for PRGs.

Slide 8: The construction.

Slide 9: Goal: construct pseudo-random generators. We are given a hard function f on n bits, and we want to construct a PRG mapping a short seed of n bits to n^{10} pseudo-random bits.

Slide 10: A naive idea. Consider the truth table of f: f(1), f(2), f(3), ..., f(2^n). Let G output n^{10} successive values of f: G(x) = f(x), f(x+1), ..., f(x+n^{10}). Previous constructions tried to make the output positions as independent as possible; [TZS01] instead makes the positions as dependent as possible.
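The naive generator can be sketched in a few lines. This is an illustrative toy, not the paper's construction: `naive_prg`, the parity truth table, and the tiny parameters are all our own choices, and indices wrap around cyclically so successive positions are always defined.

```python
# Toy sketch of the naive generator: output successive truth-table values
# of f starting at the seed position. Real parameters would be an n-bit
# seed and n^10 output bits; everything here is scaled down to illustrate.

def naive_prg(f_table, seed, out_len):
    """G(x) = f(x), f(x+1), ..., f(x+out_len-1), indices taken mod 2^n."""
    size = len(f_table)
    return [f_table[(seed + i) % size] for i in range(out_len)]

# f = parity of the bits of a 3-bit input; its truth table has 2^3 entries.
f_table = [bin(x).count("1") % 2 for x in range(8)]
stretch = naive_prg(f_table, seed=2, out_len=5)   # 5 successive values of f
```

Note how strongly dependent the output bits are: shifting the seed by one shifts the whole output window by one, which is exactly the dependence [TZS01] exploits.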

Slide 11: What we want to prove: f is hard implies G is pseudo-random; equivalently (the contrapositive), if G isn't pseudo-random then f isn't hard.

Slide 12: Outline of proof (contrapositive): G isn't pseudo-random => there exists a next-bit predictor P for G => use P to compute f => f isn't hard.

Slide 13: Next-bit predictors. By the hybrid argument, if G isn't pseudo-random then there is a small circuit P which predicts the next bit given the previous bits: P(f(x), ..., f(x+i-1)) = f(x+i) with probability 1/2 + ε.

Slide 14: Showing that f is easy. To show that f is easy, we use P to construct a small circuit for f. Circuits can use "non-uniform advice": we can choose n^{O(1)} inputs and query f on these inputs.

Slide 15: Rules of the game. We need to design an algorithm that: queries f at few positions (poly(n)); uses the next-bit predictor P; and computes f everywhere (on all 2^n positions).

Slide 16: Computing f using few queries. Simplifying assumption: P(prefix) = next bit with probability 1. Queries (non-uniform advice): f(0), ..., f(i-1), i.e. n^{10} bits. Then use P to compute f(i), f(i+1), f(i+2), ...: from f(0)...f(i-1) predict f(i); from f(1)...f(i) predict f(i+1); from f(2)...f(i+1) predict f(i+2); and so on, computing f everywhere.
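The sliding-window bootstrap above is mechanical enough to sketch directly. A minimal sketch under the errorless assumption; the predictor oracle, the Fibonacci-parity toy function, and the window size are all illustrative choices of ours, not the paper's.

```python
# Sketch: given a *perfect* next-bit predictor P and the first i values of f
# as non-uniform advice, compute f everywhere by sliding the window forward.
# P is modeled as a Python callable rather than a circuit.

def compute_f_everywhere(P, advice, total_len):
    """advice = [f(0), ..., f(i-1)]; returns [f(0), ..., f(total_len-1)]."""
    vals = list(advice)
    i = len(advice)
    while len(vals) < total_len:
        # P sees the i most recent values and outputs the next one.
        vals.append(P(vals[-i:]))
    return vals

# Toy f: the Fibonacci-parity sequence, perfectly predictable from a window
# of its last 2 values, so the "advice" is just the first two bits.
perfect_P = lambda window: (window[-1] + window[-2]) % 2
seq = compute_f_everywhere(perfect_P, advice=[1, 1], total_len=8)
```

With i bits of advice and a perfect predictor, every further bit of f comes for free; the rest of the proof is about removing the "perfect" assumption.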

Slide 17: Rules of the game (revisited). As before, the algorithm must query f at few positions (poly(n)), use the next-bit predictor P, and compute f everywhere (on all 2^n positions). * To get a small circuit we also need that for every x, f(x) can be computed in time n^{O(1)} given the non-uniform advice.

Slide 18: A problem: the predictor makes errors. We made the simplifying assumption that Pr_x[P(prefix) = next bit] = 1, but we are only guaranteed that Pr_x[P(prefix) = next bit] > 1/2 + ε. A single wrong prediction corrupts the prefix, and we cannot continue. Use error-correcting techniques to recover from errors!

Slide 19: Using multivariate polynomials. View the truth table of f (of size 2^n) as a line: one dimension.

Slide 20: Using multivariate polynomials. Instead, view f as a cube in many dimensions: f(x_1, x_2) on a 2^{n/2} by 2^{n/2} grid. W.l.o.g. f(x_1, ..., x_d) is a low-degree polynomial in d variables.* (*Low-degree extension [BF]: we take a field F with about 2^{n/d} elements and extend f to a polynomial of degree about 2^{n/d} in d variables.)
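A low-degree extension can be sketched concretely by tensor-product Lagrange interpolation. The field size, grid, and toy function below are illustrative assumptions (a real construction uses a field of size about 2^{n/d}), and `extend` recomputes the basis polynomials on every call rather than precomputing them.

```python
# Sketch of a low-degree extension over a prime field: values of f on a
# small grid H x H are interpolated to a polynomial of degree |H|-1 in
# each variable, which can then be evaluated anywhere in F^2.

P = 13            # the field GF(P); illustrative choice
H = [0, 1, 2]     # evaluation grid H x H inside F x F

def lagrange_basis(h, x):
    """L_h(x): equals 1 at x=h and 0 at the other points of H (mod P)."""
    num, den = 1, 1
    for a in H:
        if a != h:
            num = num * (x - a) % P
            den = den * (h - a) % P
    return num * pow(den, P - 2, P) % P   # inverse via Fermat's little theorem

def extend(f_vals, x1, x2):
    """Evaluate the low-degree extension of f at any point (x1, x2) of F^2."""
    return sum(f_vals[(a, b)] * lagrange_basis(a, x1) * lagrange_basis(b, x2)
               for a in H for b in H) % P

# Toy f on the grid; since f(a,b) = a*b + a already has low degree, its
# extension agrees with this formula on *all* of F^2, not just the grid.
f_vals = {(a, b): (a * b + a) % P for a in H for b in H}
```

The point of the extension is exactly this global polynomial structure: restrictions of the extended f to lines and curves are low-degree univariate polynomials, which is what the decoding steps below exploit.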

Slide 21: Adjusting to many dimensions. Problem: there is no natural meaning to "successive" in many dimensions. Successive in [TZS01]: move one point to the right. The generator: G(x_1, x_2) = f(x_1, x_2), ..., f(x_1, x_2 + n^{10}).

Slide 22: Decoding errors. Apply the predictor in parallel along a random line. With high probability we get a (1/2+ε)-fraction of correct predictions (by pairwise-independence properties of random lines). A restriction of f to a line is a univariate polynomial, and low-degree univariate polynomials have error-correcting properties! Basic idea: use decoding algorithms for Reed-Solomon codes to decode the line, learn all points on it, and continue. If the fraction of errors is small (under 25%) then it is possible to recover the correct values; but the predictor is only correct with probability 1/2 + ε, so it may make almost 50% errors.

Slide 23: Too many errors. Coding theory: there is not enough information on the line to uniquely decode. It is possible to list-decode and obtain a few polynomials, one of which is correct [S97]. [TZS01]: use additional queries to pin down the correct polynomial. But we also have the information we previously computed!

Slide 24: Curves instead of lines. Lines are degree-1 polynomials: L(t) = at + b. Curves have higher degree (n^{O(1)}): C(t) = a_r t^r + a_{r-1} t^{r-1} + ... + a_0. Observation: f restricted to a low-degree curve is still a low-degree univariate polynomial. Points on a degree-r curve are r-wise independent (crucial for the analysis).

Slide 25: A special curve with intersection properties. The curve passes through a few (random) points and through successive points; this curve intersects itself when moved!

Slide 26: Recovering from errors. Just like before: query n^{10} successive curves, then apply the predictor in parallel. This yields a (1/2+ε)-fraction of correct predictions on the next curve, alongside the previously computed, error-free values.

Slide 27: Recovering from errors. Lemma: given "noisy" predicted values plus a few correct values, we can correct!

Slide 28: Recovering from errors. The lemma (noisy predicted values + a few correct values => we can correct) means we have implemented an errorless predictor! Warning: this presentation is oversimplified. The lemma works only for randomly placed points; the actual solution is slightly more complicated and uses two "interleaved" curves.

Slide 29: Story so far. We can "error-correct" a predictor that makes errors. Coding theory: our strategy gives a new list-decoding algorithm for Reed-Muller codes [AS97,STV99].

Slide 30: List decoding. Given a corrupted message p with Pr_x[p(x) = f(x)] > ε, output f_1, ..., f_t such that f is in the list.

Slide 31: Our setup: list decoding with a predictor. Given a predictor P with Pr_x[P(f(x-1), f(x-2), ..., f(x-i)) = f(x)] > ε, use k queries to compute f everywhere.

Slide 32: Our setup: list decoding with a predictor. Given a predictor P with Pr_x[P(x, f(x-1), f(x-2), ..., f(x-i)) = f(x)] > ε, use k queries to compute f everywhere. The standard decoding scenario is the special case i = 0 (a predictor from the empty prefix).

Slide 33: Our setup: list decoding with a predictor. Given a predictor P with Pr_x[P(x, f(x-1), f(x-2), ..., f(x-i)) = f(x)] > ε, use k queries to compute f everywhere. To list-decode, output all possible f's for all 2^k possible answers to the queries.

Slide 34: Reducing the number of queries.

Slide 35: How many queries? We make n^{10} · |Curve| queries but want n^{O(1)}, so we want to use short curves.

Slide 36: Using many dimensions. Field size by dimension: 1 dimension: 2^n; 2 dimensions: 2^{n/2}; 3 dimensions: 2^{n/3}; d dimensions: 2^{n/d}. Taking d = Ω(n/log n) gives curve length n^{O(1)}.
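The arithmetic behind the last line, with an illustrative constant c: choosing d proportional to n/log n makes the field, and hence any curve (whose length is a polynomial in |F|), polynomially small.

```latex
|F| \approx 2^{n/d}, \qquad
d = \frac{n}{c \log n}
\;\Longrightarrow\;
|F| \approx 2^{c \log n} = n^{c},
\qquad\text{so}\quad
n^{10} \cdot |\mathrm{Curve}| = n^{O(1)}.
```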

Slide 37: Conflict? One dimension gives a natural meaning to "successive"; many dimensions give error correction and few queries. We'd like to have both!

Slide 38: A different successor function. View F^d both as a vector space over the base field F and as the extension field of F. The multiplicative group of the extension field has a generator g: F^d \ {0} = {1, g, g^2, g^3, ...}. Define Successor(v) = g · v; iterating it covers the whole space, so we compute f everywhere!
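The successor map is concrete enough to demonstrate at toy scale. A minimal sketch in GF(2^3) with g = x, which happens to generate the multiplicative group; the bit-vector representation and modulus are standard, but this specific instance is our illustrative choice.

```python
# Sketch of Successor(v) = g * v in GF(2^3) = GF(2)[x]/(x^3 + x + 1),
# with field elements stored as 3-bit integers (bit i = coefficient of
# x^i) and g = x. Multiplication by x is a shift followed by reduction,
# i.e. an invertible linear map on the vector space GF(2)^3.

MOD = 0b1011   # x^3 + x + 1, irreducible over GF(2)

def successor(v):
    """Multiply v by g = x: shift left, reduce by the modulus if needed."""
    v <<= 1
    if v & 0b1000:        # degree-3 term appeared: subtract x^3 + x + 1
        v ^= MOD
    return v

# Because g generates the multiplicative group, iterating from 1 visits
# every nonzero vector of GF(2)^3 exactly once before returning to 1.
orbit = [1]
while True:
    nxt = successor(orbit[-1])
    if nxt == 1:
        break
    orbit.append(nxt)
```

This is the point of slide 39 as well: the map v -> g·v is an invertible linear transform on F^d, so it both covers the space and maps curves to curves.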

Slide 39: A new successor function. Successor(v) = g · v covers the space, so we compute f everywhere. Moreover, multiplication by g is an invertible linear transform, so it maps curves to curves!

Slide 40: Nothing changes! We use our decoding algorithm successively; the choice of successor function guarantees that we learn f at every point.

Slide 41: The final construction. Ingredients: f(x_1, ..., x_d), a d-variate polynomial; g, a generator of the multiplicative group of the extension field F^d. The resulting pseudo-random generator is essentially the naive idea we started from. (*The actual construction is a little bit more complicated.)

Slide 42: Summary of proof. Query f at a few short successive "special curves". Use the predictor to learn the next curve, with errors. Use the intersection properties of the special curve to error-correct the current curve. Successive curves cover the space, so we compute f everywhere.

Slide 43: Conclusion. A simple construction of PRGs (almost all the complications we talked about are in the proof, not the construction!). This construction and proof are very versatile and have many applications: randomness extractors, (list-)decoding, hardness amplification, derandomizing Arthur-Merlin games, unbalanced expander graphs. Further research: other uses for the naive approach to PRGs; other uses for the error-correcting technique.

Slide 44: That's it.

Slide 45: What I didn't show. Next step: use the error-corrected predictor to compute f everywhere. The cost of "error-correction": we use too many queries just to get started, and we use many dimensions (f is a polynomial in many variables), so it is not clear how to implement the naive strategy there. More details can be found in the paper and the survey.

