
RANDOMNESS VS. MEMORY: Prospects and Barriers Omer Reingold, Microsoft Research and Weizmann With insights courtesy of Moni Naor, Ran Raz, Luca Trevisan, Salil Vadhan, Avi Wigderson, many more …

Randomness In Computation (1)
• Distributed computing (breaking symmetry)
• Cryptography: secrets, semantic security, …
• Sampling, simulations, …

Randomness In Computation (2)
• Communication complexity (e.g., equality)
• Routing (on the cube [Valiant]) – drastically reduces congestion

Randomness In Computation (3)
• In algorithms – a useful design tool, but often one can derandomize (e.g., PRIMES in P). Is that always the case?
• RL=L would mean that every randomized log-space algorithm can be derandomized with only a constant-factor increase in memory.

Talk’s Premise: Many Frontiers of RL=L
• RL=L, RL ⊆ L^{3/2}, and beyond
• Barriers of previous proofs ⇒ a wealth of excellent research problems

RL ⊆ NL ⊆ L^2 [Savitch 70]
Configuration graph (one per RL algorithm P and input x):
• ≤ poly(|x|) configurations, duplicated ≤ poly(|x|) times (one layer per step of the running time T); transitions are on the current random bit
• s = start configuration, t = accept configuration
• x ∈ P ⇒ a random walk from s ends at t w.p. ≥ ½
• x ∉ P ⇒ t is unreachable from s
Enumerating all possible paths is too expensive. Main idea: the 1st half of the computation transmits only log n bits to the 2nd half.
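Savitch's midpoint idea can be sketched in a few lines of Python. This is a toy reachability procedure on an explicit digraph, not an actual log-space simulation; the graph and the bound k are illustrative. The O(log^2 n) space bound comes from the recursion depth (log of the path length) times the O(log n) bits stored per level:

```python
def reach(adj, u, v, k):
    """Is there a path from u to v of length at most 2**k?

    Savitch's recursion: guess a midpoint w and solve both halves with
    half the step budget. The depth is k and each level stores only the
    current midpoint, giving O(k * log n) = O(log^2 n) space overall.
    """
    if k == 0:
        return u == v or v in adj[u]
    return any(reach(adj, u, w, k - 1) and reach(adj, w, v, k - 1)
               for w in adj)

# Directed path 0 -> 1 -> 2 -> 3; with k = 2 we allow paths of length 4.
adj = {0: {1}, 1: {2}, 2: {3}, 3: set()}
```

Applied to the configuration graph above, one checks reachability of t from s with k ≈ log(poly(|x|)) = O(log |x|).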

Oblivious Derandomization of RL
• Pseudorandom generators that fool space-bounded algorithms [AKS 87, BNS 89, Nisan 90, NZ 93, INW 94]
• Nisan’s generator has seed length log^2 n
• Proof that RL ⊆ L^2 via oblivious derandomization
• Major tool in the study of RL vs. L
• Applications beyond [Ind 00, Siv 02, KNO 05, …]
• Open problem: PRGs with reduced seed length

Randomness at Your Service
• Basic idea [NZ93] (related to Nisan’s generator):
• Let a log-space algorithm A read a random 100 log n bit string x.
• Since A remembers at most log n bits, x still contains (roughly) 99 log n bits of entropy (independent of A’s state).
• Can recycle x: G(x, y) = (x, Ext(x, y))

Randomness at Your Service
• NZ generator: G(x, y_1, y_2, …) = (x, Ext(x, y_1), Ext(x, y_2), …)
• Possible setting of parameters: x is O(log n) bits long; each y_i is O(log^{1/2} n) bits long, and there are log^{1/2} n of the y_i’s.
• Expands O(log n) bits to O(log^{3/2} n) (can get any polynomial)
• Error ≫ 1/n ([AKS87] gets almost log^2 n bits with error 1/n)
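The shape of the NZ expansion can be sketched as follows. Here `toy_ext` is a stand-in hash; a real construction needs a seeded extractor with provable guarantees, so this illustrates only the structure of the output, not the analysis:

```python
import hashlib

def toy_ext(x: bytes, y: bytes) -> bytes:
    # Illustration-only stand-in for a seeded randomness extractor.
    return hashlib.sha256(x + y).digest()[:len(x)]

def nz_expand(x: bytes, ys: list) -> bytes:
    """NZ-style expansion: output x, Ext(x, y_1), Ext(x, y_2), ...

    The long block x is "recycled": each short seed y_i extracts a
    fresh-looking block from the residual entropy of x that the
    space-bounded machine has not absorbed into its small state.
    """
    return b"".join([x] + [toy_ext(x, y) for y in ys])

out = nz_expand(b"0" * 16, [b"y1", b"y2", b"y3"])
```

With m short seeds the output is (m + 1) blocks of |x| bytes, mirroring the O(log n) → O(log^{3/2} n) expansion on the slide.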

Randomness at Your Service
• NZ generator: G(x, y_1, y_2, …) = (x, Ext(x, y_1), Ext(x, y_2), …)
• Error ≫ 1/n ([AKS87] gets almost log^2 n bits with error 1/n)
• Open: get any polynomial expansion with error 1/n
• Open: super-polynomial expansion with logarithmic seed and constant error (partial result [RR99])

Nisan, INW: Generators via Extractors
• Recall the basic generator: G(x, y) = (x, Ext(x, y))
• Let’s flip it …

Nisan, INW: Generators via Extractors
• G(x, y) = (x, Ext(x, y)), applied recursively: given the state of the machine in the middle of the run, Ext(x, y) is still ε-random.
• Loss at each level: log n (possible entropy in the state) + log(1/ε′) for the extractor seed, where ε′ = ε/n.
• Altogether: seed length = log^2 n.
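The recursive structure can be sketched as follows: at level k the output is the level-(k−1) output on x, followed by the level-(k−1) output on Ext(x, y_k), so the output doubles per level. Again `toy_ext` is an illustration-only stand-in for a real extractor:

```python
import hashlib

def toy_ext(x: bytes, y: bytes) -> bytes:
    # Illustration-only stand-in for a seeded extractor.
    return hashlib.sha256(x + y).digest()[:len(x)]

def inw(x: bytes, ys: list) -> bytes:
    """INW-style recursion:
    G_k(x, y_1..y_k) = G_{k-1}(x, y_1..y_{k-1}) || G_{k-1}(Ext(x, y_k), y_1..y_{k-1})

    k levels turn a seed of length |x| + k*|y| into 2**k blocks of
    length |x|; the log^2 n seed comes from k = O(log n) levels with
    O(log n)-bit seeds per level.
    """
    if not ys:
        return x
    *rest, y = ys
    return inw(x, rest) + inw(toy_ext(x, y), rest)

out = inw(b"x" * 8, [b"y1", b"y2", b"y3"])  # 2**3 = 8 blocks of 8 bytes
```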

Nisan, INW + NZ ⇒ RL=L?
• Let M be an RL machine.
• Using [Nisan], get M′ that uses only log^2 n random bits.
• Fully derandomize M′ using [NZ].
• Or does it work? M′ is not an RL machine (its access to the seed of [Nisan, INW] is not read-once).
• Still, a natural approach – derandomize the seed of [Nisan]. Can we build PRGs from read-once ingredients? Not too promising …

RL ⊆ L^{3/2} [SZ95] – “derandomized” [Nis]
• Nisan’s generator has the following properties:
• The seed is divided into h (length log^2 n) and x (length log n).
• Given h on the input tape, the generator runs in L.
• ∀ M, w.h.p. over h, fixing h and ranging over x yields a good generator for M.
• h is shorter if we generate fewer than n bits.

[SZ95] – basic idea
• Fix h; divide the run of M into segments.
• Enumerate over x and estimate all transition probabilities.
• Replace each segment with a single transition.
• Recurse using the same h.
• Now M′ depends on h, and M′ is close to some t-th power of M. [SZ] perturb M′ to eliminate the dependency on h.
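A toy rendering of the segment-replacement step, on an explicit small machine rather than an actual space-bounded computation (the machine, state count, and segment length are made up for illustration):

```python
import itertools

def segment_matrix(step, n_states, seg_len):
    """Estimate a segment's transition probabilities by enumerating
    every random string of length seg_len (feasible at toy sizes)."""
    M = [[0.0] * n_states for _ in range(n_states)]
    for bits in itertools.product((0, 1), repeat=seg_len):
        for s in range(n_states):
            cur = s
            for b in bits:
                cur = step(cur, b)
            M[s][cur] += 1.0 / 2 ** seg_len
    return M

def compose(A, B):
    """Replacing each segment by a single transition and recursing
    amounts to multiplying the estimated transition matrices."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Toy machine: a random walk on a 4-cycle, segments of 2 random bits.
step = lambda s, b: (s + (1 if b else -1)) % 4
M = segment_matrix(step, 4, 2)
M4 = compose(M, M)  # behaves like one segment of 4 random bits
```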

[SZ95] – further progress
• Open: translate [SZ] into a better generator against space-bounded algorithms!
• Potentially, one could then apply [SZ] recursively and get a better derandomization of RL (after a constant number of iterations this may give RL ⊆ L^{1+ε}).
• Armoni showed an interesting interpolation between [NZ] and [INW], and as a result got a slight improvement (RL ⊆ L^{3/2}/(log L)^{1/2}).

Thoughts on Improving INW
• Loss at each level: log n (possible entropy in the state) + log(1/ε′) for the extractor seed, where ε′ = ε/n.
• Avoiding the loss due to entropy in the state: [RR99] recycle the entropy of the states. Challenge: how to do this when the state probabilities are not known?
• Open: better PRGs against constant-width branching programs. Even for combinatorial rectangles we do not know “optimal” PRGs.

Thoughts on Improving INW
• Avoiding the loss due to extractor seeds: can we recycle y from the previous computation? Challenge: containing the dependencies …
• Do we need a seed at all? Use seedless extractors instead: output x, Ext(x, y(x))?

Thoughts on Improving INW
• The extractor seed is long because we need to work with small error ε′ = ε/n.
• Error reduction for PRGs? If we use error ε′ = ε/(log n), the output sequence still has some unpredictability property – is it usable? (Yes for SL [R04, RozVad05]!)

Final Comment on Improving INW
• Perhaps instead of reducing the loss per level we should reduce the number of levels?
• This means that at each level the number of pseudorandom strings should increase more rapidly (e.g., quadratically).
• A specific approach is based on ideas from cryptography (constructions of PRFs from pseudorandom synthesizers [NR]); it is more complicated to apply here.

It’s All About Graph Connectivity
• Directed connectivity captures NL.
• Undirected connectivity is in L [R04].
• Oblivious derandomization: pseudo-converging walks for consistently labelled regular digraphs [R04, RTV05].
• Where is RL on this scale? Connectivity for digraphs with polynomial mixing time [RTV05].
• Outgoing edges have labels. A consistent labelling means that each label forms a permutation on the vertices. A walk on a consistently labelled graph cannot lose entropy.
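The baseline randomized algorithm behind this picture is the classic RL algorithm for undirected st-connectivity: take a polynomially long random walk from s and accept if it hits t; only the current vertex and a step counter need to be stored, i.e., O(log n) bits. A toy Monte Carlo sketch (graph, step budget, and trial count are illustrative):

```python
import random

def rw_connected(adj, s, t, steps=None, trials=25):
    """Random-walk USTCON: on an undirected graph, a walk of O(n**3)
    steps from s visits all of s's component w.h.p. (cover-time bound).
    Space used by the walk itself: the current vertex plus a counter."""
    n = len(adj)
    steps = steps if steps is not None else 4 * n ** 3
    for _ in range(trials):
        v = s
        for _ in range(steps):
            if v == t:
                return True
            v = random.choice(adj[v])
        if v == t:
            return True
    return False

# Two components: the path {0, 1, 2} and the isolated self-loop {3}.
adj = {0: [1], 1: [0, 2], 2: [1], 3: [3]}
```

[R04] removes the randomness entirely for the undirected case; the directed variants on the slide are exactly where this walk-based picture is still open.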

Towards RL vs. L? A ladder of problems:
• Connectivity for undirected graphs [R04]
• Connectivity for regular digraphs [RTV05]
• Pseudo-converging walks for consistently-labelled regular digraphs [R04, RTV05]
• Pseudo-converging walks for regular digraphs [RTV05]
• Connectivity for digraphs with polynomial mixing time [RTV05] – would suffice to prove RL=L
• RL ⊆ L
It is not about reversibility but about regularity; in fact, it is about having estimates on the stationary probabilities [CRV07].

Some More Open Problems
• Pseudo-converging walks on an (inconsistently labelled) clique. (Similarly, a universal traversal sequence.)
• Undirected Dirichlet problem:
• Input: an undirected graph G, a vertex s, a set B of vertices, and a function f: B → [0, 1].
• Output: an estimate of E[f(b)], where b is the entry point into B of a random walk from s.
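The randomized baseline for the Dirichlet problem is plain Monte Carlo: walk from s until hitting B and average f over the entry points; derandomizing this in small space is the open question. A toy sketch (the graph and boundary are illustrative):

```python
import random

def dirichlet_estimate(adj, s, B, f, trials=4000):
    """Estimate E[f(b)], where b is the first vertex of B hit by a
    random walk from s (assumes the walk reaches B with probability 1)."""
    total = 0.0
    for _ in range(trials):
        v = s
        while v not in B:
            v = random.choice(adj[v])
        total += f(v)
    return total / trials

# Path 0 - 1 - 2 with boundary B = {0, 2}: starting from 1, by symmetry
# each endpoint is hit with probability 1/2, so the estimate should be
# close to 0.5 for the indicator of hitting vertex 2.
adj = {0: [1], 1: [0, 2], 2: [1]}
est = dirichlet_estimate(adj, 1, {0, 2}, lambda b: 1.0 if b == 2 else 0.0)
```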

Conclusions
• A richness of research directions and open problems towards RL=L and beyond:
• PRGs against space-bounded computations
• Directed connectivity
• Even if you think that NL=L is plain crazy, there are many interesting questions and some beautiful research …
