Almost SL=L, and Near-Perfect Derandomization. Oded Goldreich (The Weizmann Institute) and Avi Wigderson (IAS, Princeton, and Hebrew University).

SL vs. L: Theseus and Ariadne, Crete, ~1000 BC.

SL vs. L
Thm (informal): SL = L except on rare inputs.
Thm (formal): For every ε>0 there is a deterministic logspace algorithm which correctly determines undirected st-connectivity, except on at most exp(n^ε) graphs on n vertices, on which it answers "?".
Thm: Fix any language A in SL. Then for every ε>0 there is a deterministic logspace algorithm which correctly determines membership in A, except on at most exp(n^ε) inputs of length n, on which it answers "?".

Derandomization “God does not play dice with the universe”

General Derandomization
Thm (informal): BPP = P except on rare inputs, under a natural complexity assumption.
Thm (formal):
Assumption: There is a function in P which cannot be approximated by n^k-size circuits with SAT oracle, for any k.
Conclusion: Fix any language A in BPP. Then for every ε>0 there is a deterministic poly-time algorithm which, for every n, errs on at most exp(n^ε) inputs of length n.

The Old Paradigm
Alg'(x) = Majority{ Alg(x, G(1)), …, Alg(x, G(2^d)) }
Setting: Alg gets input x (|x|=n) and random bits r (|r|=m); Alg(x,r) is correct for most r. Let W_x = the set of good r's for x, so |W_x|/2^m > 3/4. G maps a seed s (|s|=d) to an m-bit string.
If G is an efficient pseudorandom generator for Alg and d = O(log n), then Alg' is deterministic, efficient, and correct for every x.

The New Idea
Alg'(x) = Majority{ Alg(x, E(x,1)), …, Alg(x, E(x,2^d)) }
Same setting: input x (|x|=n), random bits r (|r|=m), W_x = the good r's for x with |W_x|/2^m > 3/4; now E maps x and a seed s (|s|=d) to an m-bit string. Two caveats: (*) m < n, and (**) the argument needs W_x to be independent of x.
If E is an efficient extractor and d = O(log n), then Alg' is deterministic, efficient, and correct for all but few x.
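To make the contrast concrete, here is a minimal Python sketch of the two wrappers (not from the paper). The names alg, prg and ext stand for Alg, G and E and are hypothetical callables; d = O(log n) is assumed, so enumerating all 2^d seeds is feasible.

```python
from collections import Counter

def derandomize_with_prg(alg, prg, d, x):
    """Old paradigm: majority vote of Alg(x, G(s)) over all 2^d seeds s.
    Correct for every x, provided G fools Alg."""
    votes = Counter(alg(x, prg(s)) for s in range(2 ** d))
    return votes.most_common(1)[0][0]

def derandomize_with_extractor(alg, ext, d, x):
    """New idea: majority vote of Alg(x, E(x, s)) over all 2^d seeds s.
    The input x itself serves as the weak random source; the answer is
    correct for all but roughly 2^k inputs x (extractor counting lemma)."""
    votes = Counter(alg(x, ext(x, s)) for s in range(2 ** d))
    return votes.most_common(1)[0][0]
```

In the st-connectivity application, the role of the good r's is played by the universal traversal sequences, a set that does not depend on the input graph, which is exactly what makes condition (**) hold there.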

Extractors
Def [NZ]: A probability distribution X on {0,1}^n is a k-source if Pr[X=x] ≤ 2^{-k} for every x.
Def [NZ]: A function E: {0,1}^n × {0,1}^d → {0,1}^m is a (k,ε)-extractor if for every k-source X, |E(X,U_d) − U_m|_1 < ε.
Def (informal): Extractors "smooth out" every probability distribution of sufficient "entropy" with the aid of "few" truly random bits.
Lemma [NZ]: Fix any event W ⊆ {0,1}^m. At most 2^k strings x ∈ {0,1}^n satisfy |Pr[E(x,U_d) ∈ W] − |W|/2^m| > ε.
Thm [Z, NZ, T, ISW, SU]: Explicit efficient extractors exist.
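The counting lemma is what the new idea rests on; the following is a standard proof sketch (not taken from the slides), written assuming E is a (k,ε)-extractor.

```latex
% Standard proof sketch of the counting lemma above (not from the slides).
Fix $W \subseteq \{0,1\}^m$ and let
$B_{+} = \{\, x \in \{0,1\}^n : \Pr[E(x,U_d) \in W] > |W|/2^m + \epsilon \,\}$.
If $|B_{+}| > 2^k$, let $X$ be uniform over $B_{+}$: then $\Pr[X=x] \le 2^{-k}$
for every $x$, so $X$ is a $k$-source. Averaging over $x \in B_{+}$ gives
\[
  \Pr[E(X,U_d) \in W] \;>\; \Pr[U_m \in W] + \epsilon ,
\]
whereas $|E(X,U_d) - U_m|_1 < \epsilon$ bounds the difference on every event
by $\epsilon$, a contradiction. Hence $|B_{+}| \le 2^k$; the symmetric argument
bounds the set $B_{-}$ of strings deviating in the other direction, so at most
$2 \cdot 2^k$ strings deviate by more than $\epsilon$ (the slide absorbs the
factor of~$2$).
```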

The LogSpace Arena
NL (non-deterministic space O(log n)): st-connectivity in directed graphs.
L (deterministic space O(log n)): st-connectivity in directed graphs of outdegree 1.
SL (symmetric non-deterministic space O(log n)): st-connectivity in undirected graphs.
RL: probabilistic space O(log n).
L^b: deterministic space O((log n)^b).
L ⊆ SL ⊆ RL ⊆ NL ⊆ L^2.

Theorems
Thm [S]: NL ⊆ L^2.
Thm [IS]: NL = coNL.
Thm [AKLLR]: SL ⊆ RL.
Thm [NSW]: SL ⊆ L^{3/2}.
Thm [SZ]: RL ⊆ L^{3/2}.
Thm [ASTW]: SL ⊆ L^{4/3}.
Open problems: NL = L? RL = L? SL = L?
New Thm: SL = L except on rare instances.

Traversal Sequences
σ = (σ_1, σ_2, …, σ_p) ∈ {0,1}^p; G an undirected graph on n vertices.
A walk w on G starting at v using σ: w ← v, set k = ⌈log deg(v)⌉.
walk(G, v, σ):
(1) If |σ| < k, output w.
(2) Otherwise, let i be the value of the first k bits of σ; σ' ← σ with its first k bits removed; v' ← the i-th neighbour of v in G; w ← w, v'; recurse: walk(G, v', σ').
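The recursion above translates directly into code. Below is a small Python sketch (iterative rather than recursive); the graph representation (a dict from each vertex to an ordered neighbour list) and the rule of reducing the k-bit value mod deg(v) when it exceeds the number of neighbours are assumptions not fixed on the slide.

```python
import math

def walk(G, v, sigma):
    """Walk on undirected graph G starting at v, consuming the bits of sigma.

    G: dict mapping each vertex to an ordered list of neighbours.
    sigma: string of '0'/'1' characters.
    Returns the list of visited vertices.
    """
    w = [v]
    pos = 0
    while True:
        deg = len(G[v])
        k = max(1, math.ceil(math.log2(deg))) if deg > 0 else 0
        if deg == 0 or pos + k > len(sigma):   # (1) too few bits left: stop
            return w
        i = int(sigma[pos:pos + k], 2) % deg   # (2) value of the next k bits
        pos += k
        v = G[v][i]                            # move to the i-th neighbour
        w.append(v)
```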

Universal Traversal Sequences
Def [C]: A sequence σ is n-universal (an n-uts) if for every graph G on n vertices, and for every vertex v of G, walk(G, v, σ) visits all vertices in v's connected component.
Conj [C]: Computing an n-uts is in L.
Thm [AKLLR]: A random walk of length n^4 visits all vertices of a connected n-vertex graph.
Cor [AKLLR]: Most sequences of length n^6 are n-uts.
Thm [N]: There is a pseudorandom generator for RL which uses only O((log n)^2) random bits and space.
Cor [N]: Computing an n-uts is in L^2.
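For very small n one can test universality by brute force. The sketch below reuses walk from the previous sketch; is_universal and component_of are hypothetical helpers, and the caller supplies the list of graphs to check against (e.g. all n-vertex graphs), which is feasible only for tiny n.

```python
def component_of(G, v):
    """Vertices reachable from v in G (simple graph search)."""
    seen, frontier = {v}, [v]
    while frontier:
        u = frontier.pop()
        for x in G[u]:
            if x not in seen:
                seen.add(x)
                frontier.append(x)
    return seen

def is_universal(sigma, graphs):
    """True iff walk(G, v, sigma) covers all of v's connected component,
    for every graph G in `graphs` and every start vertex v."""
    for G in graphs:
        for v in G:
            if component_of(G, v) - set(walk(G, v, sigma)):
                return False
    return True
```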

The NSW Connectivity Algorithm
Main subroutine (in L):
Input: an n-vertex graph G and any k-uts σ.
Output: an (n/k)-vertex graph G', such that G is connected iff G' is.
The algorithm: repeat the main subroutine (log n)/(log k) times.
Total space complexity: (log n)^2/(log k).
In [NSW]: k = exp((log n)^{1/2}), giving SL ⊆ L^{3/2} (since, by [N], a k-uts can be found in L).
Here: k = n^{ε/6} for any ε>0, and σ is a random m-bit string with m = k^6 = n^ε. Most σ of length m = n^ε (a fraction > 3/4) are k-uts; note that m < n and that the set of good σ is independent of G.

The New Connectivity Algorithm
Main subroutine (in L):
Input: an n-vertex graph G and σ = E(G).
Output: a graph G', connected iff G is, with n/k vertices if σ is a k-uts.
The algorithm: repeat the main subroutine (log n)/(log k) times.
Fix ε>0; set m = k^6 = n^ε and d = O(log n).
Fix a logspace (m^2, 1/8)-extractor E: {0,1}^n × {0,1}^d → {0,1}^m, and set E(G) = E(G,1), E(G,2), …, E(G,2^d).
Total space complexity: (log n)^2/(log k) = O(log n), whenever E(G) is a k-uts.
This fails for at most exp(m^2) = exp(n^{2ε}) graphs.
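To show how E(G) replaces the random traversal sequence, here is a rough Python sketch of the driver. It is not the paper's logspace implementation: encode, extractor and contract are hypothetical placeholders (the [NSW]-style contraction subroutine is treated as a black box), k > 1 is assumed, and the sketch answers "?" both when G is disconnected and when E(G) fails to act as a k-uts, conflating two cases that the actual algorithm distinguishes.

```python
import math

def build_sigma(extractor, x, d):
    """sigma = E(G) = E(G,1), E(G,2), ..., E(G,2^d): the extractor's outputs
    over all 2^d seeds, concatenated, with the graph encoding x as the
    weak random source."""
    return "".join(extractor(x, s) for s in range(2 ** d))

def almost_always_connected(G, encode, extractor, contract, k, d):
    """Compute sigma = E(G) once, then run the contraction subroutine
    (log n)/(log k) times with sigma in place of a random traversal
    sequence.  When sigma behaves like a k-uts (all but ~exp(m^2) graphs),
    a connected G shrinks to a single vertex."""
    n = len(G)
    sigma = build_sigma(extractor, encode(G), d)
    rounds = max(1, math.ceil(math.log(n) / math.log(k)))
    for _ in range(rounds):
        if len(G) <= 1:
            break
        G = contract(G, sigma, k)  # shrinks the vertex count by ~k when sigma is a k-uts
    return "connected" if len(G) == 1 else "?"
```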

General Derandomization
Alg'(x) = Majority{ Alg(x, G(E(x,1))), …, Alg(x, G(E(x,2^d))) }
Setting: Alg gets input x (|x|=n) and random bits r (|r|=m > n); W_x = the good r's for x. Amplify so that |W_x|/2^m > 1 − 4^{-n} (say), ensuring that W = ∩_x W_x satisfies |W|/2^m > 3/4. E maps x and a seed s (|s|=d) to n^ε bits, and G stretches n^ε bits to m bits.
If E is an efficient extractor and G a pseudorandom generator fooling W, then Alg' is deterministic, efficient, and correct for all but few x.

Assumptions vs. Conclusions
Thm [IW]: If DTIME(2^{O(n)}) ⊄ SIZE(2^{εn}) for some ε>0, then BPP = P.
New Thm: If P is not approximable by SIZE^SAT(n^k) for any integer k, then BPP = P for all but exp(n^ε) of the n-bit inputs.
Proof sketch: Let n^k bound the running time of Alg on length-n inputs. Then W can be recognized in SIZE^SAT(n^k). Take f ∈ P that cannot be approximated by SIZE^SAT(n^{k/ε}), and let G = NW^f; then G fools W [NW, KvM].

Discussion & Problems OPEN  Find other examples of such algorithms  Prove: SL = L Efficient deterministic algorithms which are correct on all but exp(cn) length n inputs (c<1) correct (whp) on dist with high enough (min) entropy Generalize some known classes of algorithms (1) Derandomizations under uniform assumptions correct (whp) on efficiently samplable distributions (2) Average case analysis correct for specific structured distributions