Pseudo-Random Generators (talk for the Topics course)

Pseudo-Random Generators. A PRG uses a short “seed” of very few truly random bits to generate a long string of pseudo-random bits. Pseudo-randomness: no small circuit can distinguish the truly random bits from the pseudo-random bits. Nisan-Wigderson setting: the generator is allowed to be more powerful than the circuit (e.g., the PRG runs in time n^5 to fool circuits of size n^3). Hardness vs. randomness paradigm [BM, Y, S]: construct PRGs assuming hard functions, namely some f ∈ EXP that is hard (in the worst case) for small circuits [NW88, BFNW93, I95, IW97, STV99, ISW99, ISW00].
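
For reference, here is the standard formal definition the slide is describing, written out (the size bound s and error ε are generic parameters, not fixed by the talk): a generator G : {0,1}^n → {0,1}^m fools circuits of size s with error ε if

```latex
\Big|\; \Pr_{x \sim U_n}\!\big[C(G(x)) = 1\big] \;-\; \Pr_{y \sim U_m}\!\big[C(y) = 1\big] \;\Big| \;\le\; \varepsilon
\quad \text{for every circuit } C \text{ of size at most } s.
```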

Hardness versus Randomness. Initiated by [BM, Yao, Shamir]. Assumption: explicit hard functions exist ⇒ efficient PRGs exist ⇒ derandomization of probabilistic algorithms.

Today's talk. Theorem: If there exists a function f such that f ∈ EXP and f is hard (in the worst case) for poly-size circuits, then there is a PRG which is computable in EXP, stretches n bits into n^c bits (for an arbitrary constant c), and “fools” poly-size circuits. Conclusion: BPP ⊆ SUBEXP. Remark: a stronger assumption gives BPP = P.

The Impagliazzo-Wigderson assumption: there exists a function f which is computable in time 2^{O(n)} but cannot be computed by boolean circuits of size 2^{δn}, for some 0 < δ < 1. (Compare with NP ≠ P: a function computable in non-deterministic time poly(n) that cannot be computed in time poly(n).)

Converting hardness into pseudo-randomness. Basic idea of all PRG constructions: if f is “very hard” for efficient algorithms, then f(x) “looks like” a random coin to an efficient algorithm which gets x. Suggestion: PRG(x) = x, f(x). This PRG is computable in exponential time and fools poly-size circuits.
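
Written out (the notation is mine; the slide only gives the formula), the suggestion is a one-bit-stretch generator:

```latex
G : \{0,1\}^n \to \{0,1\}^{n+1}, \qquad G(x) = x \circ f(x)
```

The rest of the talk is about stretching much further than one bit.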

A quick overview. Hardness vs. randomness: in cryptography [BM, Y, S, HILL]; in derandomization [NW88, BFNW93, I95, IW97, IW98, STV99, ISW99, ISW00, SU01, U02]. Important milestone [NW88, IW97]: under suitable hardness assumptions, every probabilistic algorithm can be completely and efficiently derandomized. Deterministic algorithms are just as strong as probabilistic algorithms!

Goal: construct efficient pseudo-random generators. We're given a hard function f on n bits, and we want to construct an efficient PRG that stretches a short seed of n bits into n^10 pseudo-random bits.

A naive idea. View the truth table of f as the string f(1) f(2) f(3) … f(2^n). G outputs n^10 successive values of f, starting at the seed: G(x) = f(x), f(x+1), …, f(x+n^10).
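
A minimal sketch of this naive generator in code, assuming we hold the whole truth table of f in memory (names and parameters are illustrative; output_len stands in for n^10):

```python
# A toy version of the naive generator: given the full truth table of f on
# n-bit inputs, G(x) outputs successive values of f starting at the seed x.

def naive_generator(truth_table: list[int], x: int, output_len: int) -> list[int]:
    """G(x) = f(x), f(x+1), ..., f(x + output_len - 1), indices taken mod 2^n."""
    size = len(truth_table)  # 2^n entries, one output bit per input
    return [truth_table[(x + i) % size] for i in range(output_len)]

# Example with a tiny made-up truth table for n = 4 (2^4 = 16 entries):
f_table = [1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 0, 0]
print(naive_generator(f_table, x=5, output_len=8))  # [1, 0, 1, 0, 0, 1, 0, 1]
```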

Want to prove: f is hard ⇒ G is pseudo-random. Equivalently (the contrapositive): G isn't pseudo-random ⇒ f isn't hard.

Outline of proof. We prove the contrapositive in three steps: G isn't pseudo-random ⇒ there exists a next-bit predictor P for G ⇒ we use P to compute f ⇒ f isn't hard.

Next-bit predictors. Truly random bits satisfy: every bit is random given the previous bits. That is, for every algorithm P, Pr[P(prefix) = next bit] = ½.

Theorem [Yao82]: if G isn't pseudo-random, then it has a weak bit: there is an efficient algorithm P which predicts this bit given the previous bits, i.e., P(prefix) = next bit with probability ½ + ε. In our setting the predictor is given f(x), …, f(x+i-1) and predicts f(x+i).
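
Quantitatively, the hybrid argument behind [Yao82] is usually stated with a loss in the advantage proportional to the output length m (this finer statement is my addition, not on the slide): if some circuit distinguishes G's output from uniform with advantage ε, then for some position i a circuit of comparable size satisfies

```latex
\Pr_{x}\Big[\, P\big(G(x)_1, \dots, G(x)_{i-1}\big) = G(x)_i \,\Big] \;\ge\; \frac{1}{2} + \frac{\varepsilon}{m}.
```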

Showing that f is easy. We'll use P to show that f is efficiently computable, i.e., that f has a poly-size circuit. Circuits are algorithms with “non-uniform advice”: we can choose n^{O(1)} inputs, query f on these inputs, and hard-wire the answers.

Rules of the game. We need to design an algorithm that: queries f at few positions (poly(n)); uses the next-bit predictor P; and computes f everywhere (on all 2^n positions).

Computing f using few queries. Simplifying assumption: P(prefix) = next bit with probability 1. Queries (the non-uniform advice): f(0), …, f(i-1), which is n^10 bits. Now use P to compute the rest: P(f(0), …, f(i-1)) = f(i), then P(f(1), …, f(i)) = f(i+1), then P(f(2), …, f(i+1)) = f(i+2), and so on. We compute f everywhere.
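
A sketch of this reconstruction, under the slide's errorless assumption (the predictor callback and all names are illustrative):

```python
# The advice is the window f(0), ..., f(i-1); each call to the (hypothetical)
# errorless predictor extends the known values by one position.

from typing import Callable

def compute_f_everywhere(advice: list[int],
                         predictor: Callable[[list[int]], int],
                         domain_size: int) -> list[int]:
    """Recover f on {0, ..., domain_size - 1} from an i-bit prefix of advice."""
    values = list(advice)            # f(0), ..., f(i-1): the only queries made
    window = len(advice)             # i, the predictor's prefix length
    while len(values) < domain_size:
        prefix = values[-window:]    # f(x), ..., f(x+i-1)
        values.append(predictor(prefix))  # errorless: returns f(x+i)
    return values
```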

Rules of the game (revisited). The same requirements as before: query f at few positions (poly(n)), use the next-bit predictor P, and compute f everywhere (on all 2^n positions). One addition: to get a small circuit we also need that for every x, f(x) can be computed in time n^{O(1)} given the non-uniform advice.

A problem: the predictor makes errors. We've made the simplifying assumption that Pr_x[P(prefix) = next bit] = 1, but we are only guaranteed that Pr_x[P(prefix) = next bit] > ½ + ε. Once a prediction is wrong, the prefix we feed to P is corrupted and we cannot continue. Solution: use error-correcting techniques to recover from errors!

Using multivariate polynomials. So far the truth table of f is a single line of 2^n values: one dimension.

Now view f as a cube in many dimensions: for example, f(x_1, x_2) on a 2^{n/2} × 2^{n/2} grid. W.l.o.g. f(x_1, …, x_d) is a low-degree polynomial in d variables.* (*Low-degree extension: we take a field F with about 2^{n/d} elements and extend f to a polynomial of degree about 2^{n/d} in d variables.)
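
One standard way to write the low-degree extension (the slide only states its existence): identify the domain of f with H^d for a subset H ⊆ F, and define

```latex
\hat{f}(x_1,\dots,x_d) \;=\; \sum_{h \in H^d} f(h) \prod_{i=1}^{d} \delta_{h_i}(x_i),
\qquad
\delta_{a}(x) \;=\; \prod_{\substack{b \in H \\ b \neq a}} \frac{x - b}{a - b},
```

so that \hat{f} agrees with f on H^d and has degree at most |H| - 1 in each variable.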

Adjusting to many dimensions. Problem: there is no natural meaning to “successive” in many dimensions. First attempt, successive = move one point right: the generator becomes G(x_1, x_2) = f(x_1, x_2), f(x_1, x_2 + 1), …, f(x_1, x_2 + n^10).

Decoding errors. Apply the predictor in parallel along a line; we get a (½+ε)-fraction of correct predictions (with high probability, if the line is chosen at random). A restriction of f to a line is a univariate polynomial, and low-degree univariate polynomials have error-correcting properties. Interpolation: if we know a degree-k polynomial on k+1 points, we can compute it on all points. Coding theory studies the case where we don't know where the errors are: if the number of errors is small (< 25%), it is still possible to recover the correct values. So we apply error correction and learn all points on the line.
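
The interpolation fact in code, as a minimal sketch over a small prime field (the field size and the example polynomial are illustrative; this is the errorless case only, not the decoder):

```python
# A degree-k polynomial over GF(p) is determined by its values on k+1 points.

def interpolate_at(points: list[tuple[int, int]], x: int, p: int) -> int:
    """Evaluate at x the unique polynomial of degree < len(points) through points, mod p."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if j != i:
                num = num * (x - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, p - 2, p)) % p  # inverse via Fermat
    return total

# Over GF(13), the degree-2 polynomial through (0,1), (1,2), (2,5) is x^2 + 1:
print(interpolate_at([(0, 1), (1, 2), (2, 5)], x=3, p=13))  # 10 == 3^2 + 1
```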

Too many errors. The predictor succeeds only with probability ½ + ε, so it may make almost 50% errors. Coding theory: there is not enough information on the line to decode uniquely.* But we also have the information we previously computed! (*It is possible to “list decode” and get a few polynomials, one of which is correct.)

Curves instead of lines. Lines are degree-1 polynomials: L(t) = a·t + b. Curves have higher degree (n^{O(1)}): C(t) = a_r·t^r + a_{r-1}·t^{r-1} + … + a_0. Observation: f restricted to a low-degree curve is still a low-degree univariate polynomial.

A special curve with intersection properties. The curve passes through a few (random) points and through successive points. This curve intersects itself when moved!

Recovering from errors. Just like before: query f on n^10 successive curves (these values have no errors; we computed them), then apply the predictor in parallel to obtain a (½+ε)-fraction of correct predictions on the next curve.

Lemma: given the “noisy” predicted values on the next curve, plus a few correct values (where it intersects the curves we already know), we can correct and learn all values on the curve.

In other words, by combining the noisy predictions with the few correct values from the intersections, we have implemented an errorless predictor!

Story so far: we can “error-correct” a predictor that makes errors. Contribution to coding theory: our strategy gives a new (list-)decoding algorithm for Reed-Muller codes [AS97, STV99].

How many queries? We make n^10 · |curve| queries, but we want n^{O(1)}. So we want to use short curves.

Using many dimensions. The length of each axis shrinks as the dimension grows: 1 dimension: 2^n; 2 dimensions: 2^{n/2}; 3 dimensions: 2^{n/3}; d dimensions: 2^{n/d}. Taking d = Ω(n/log n) gives length n^{O(1)}.
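
The arithmetic behind the last step:

```latex
2^{n/d} = n^{O(1)} \;\iff\; \frac{n}{d} = O(\log n) \;\iff\; d = \Omega\!\left(\frac{n}{\log n}\right).
```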

Conflict? Many dimensions give error correction and few queries; one dimension gives a natural meaning to “successive”. We'd like to have both!

A different successor function. View F^d both as a d-dimensional vector space over the base field F and as the degree-d extension field of F. The multiplicative group of the extension field has a generator g: F^d \ {0} = {1, g, g^2, g^3, …}. Define Successor(v) = g·v. Iterating the successor covers the whole space: 1, g, g^2, g^3, …, g^i, …, so we compute f everywhere!

A new successor function, continued. Moreover, Successor(v) = g·v is an invertible linear transformation of the vector space F^d, so it maps curves to curves!
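
A toy instance of this successor map, assuming the field GF(2^4) represented with the primitive polynomial x^4 + x + 1 and generator g = x (all of these choices are illustrative):

```python
# Elements of GF(2^4) are packed as 4-bit integers. Multiplying by g = x is
# one shift plus a conditional reduction, an invertible linear map on F_2^4,
# and iterating it visits every nonzero vector exactly once.

def successor(v: int) -> int:
    """Return g * v in GF(2^4), i.e. multiply by x modulo x^4 + x + 1."""
    v <<= 1
    if v & 0b10000:      # degree reached 4: reduce using x^4 = x + 1
        v ^= 0b10011     # subtract (XOR) x^4 + x + 1
    return v

v, orbit = 1, []
for _ in range(15):
    orbit.append(v)
    v = successor(v)
print(orbit)                                # 1, 2, 4, 8, 3, 6, 12, 11, ...
assert sorted(orbit) == list(range(1, 16))  # covers all of F_2^4 \ {0}
```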

And nothing changes in the error correction: given the “noisy” predicted values and a few correct values, the lemma still lets us correct.

The final construction. Ingredients: f(x_1, …, x_d), a d-variate polynomial, and g, a generator of the extension field F^d. The pseudo-random generator evaluates f along successive points, which is essentially the naive idea we started from.* (*The actual construction is a little bit more complicated.)
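
Schematically, and with the slide's caveat that the real construction is more involved, the generator evaluates f along the orbit of the seed under the successor map (a hypothetical sketch, not the paper's exact construction):

```python
# The seed names a nonzero point v; the output is f evaluated along the
# orbit v, g*v, g^2 * v, ... given by the successor map.

from typing import Callable

def prg(v: int, f: Callable[[int], int],
        successor: Callable[[int], int], output_len: int) -> list[int]:
    """G(v) = f(v), f(g*v), f(g^2 * v), ..., output_len values in total."""
    out = []
    for _ in range(output_len):
        out.append(f(v))
        v = successor(v)
    return out
```

With the toy successor from the previous sketch and seed v = 1, the generator would evaluate f at 1, 2, 4, 8, 3, 6, 12, 11, …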

Summary of proof. Query f at a few short successive “special curves”. Use the predictor to learn the next curve, with errors. Use the intersection properties of the special curve to error-correct the current curve. Successive curves cover the space, and so we compute f everywhere.

Conclusion. A simple construction of PRGs. (Almost all the complications we talked about are in the proof, not the construction!) This construction and proof are very versatile and have many applications: randomness extractors, (list-)decoding, hardness amplification, derandomizing Arthur-Merlin games, unbalanced expander graphs. Further research: other uses for the naive approach to PRGs, and other uses for the error-correcting technique.

That's it…

The next step in derandomization? Continue studying relations between various derandomization tasks and hard functions. Recent works [IKW01, IK02] essentially give: derandomization ⇒ (weak) explicit hard functions. So derandomization is a way to prove that (weak) explicit hard functions exist! And the existence of (weak) explicit hard functions may be easier to prove: (NEXP ≠ P/poly) may be easier than (NP ≠ P).

What I didn't show. Next step: use the error-corrected predictor to compute f everywhere. The cost of “error-correction”: we're using too many queries just to get started, and we're using many dimensions (f is a polynomial in many variables), so it's not clear how to implement the naive strategy in many dimensions. More details can be found in the paper/survey.
