Some Thoughts regarding Unconditional Derandomization Oded Goldreich Weizmann Institute of Science RANDOM 2010.

Replacing Luca Trevisan: A different perspective (although I don’t know what Luca planned to say…). A focus on what I know (although I know little…). Speculations rather than open problems. I will focus on a new result asserting that “BPP = P if and only if there exist suitable pseudorandom generators”. I will clarify what I mean by all these terms, but don’t be too alert: I mean the standard notions. Warning: I’ll be using BPP and P to denote classes of promise problems.

BPP = P iff there exist “suitable” pseudorandom generators. The term “pseudorandom generator” refers to a general paradigm with numerous incarnations (ranging from general-purpose PRGs (i.e., fooling any efficient observer) to special-purpose PRGs (e.g., pairwise-independence PRGs)). The common themes (and differences) relate to (1) the amount of stretching, (2) the notion of “looking random”, and (3) the complexity of (deterministic) generation (or stretching). For the purpose of derandomizing (e.g., BPP) it suffices to use PRGs that run in exponential time (i.e., exponential in the length of their seeds). Their output should look random to linear-time observers (i.e., linear in the length of the PRG’s output) ⇒ canonical derandomizers.
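As a concrete illustration of a special-purpose PRG of the kind mentioned above, here is a minimal sketch (not from the talk; the function names are mine) of the classic pairwise-independence generator: a seed (a, b) in Z_p × Z_p is stretched into the sequence a·i + b mod p, any two positions of which are uniform and independent over a random seed.

```python
# Pairwise-independence generator over a prime field (illustrative sketch).
# Seed (a, b) in Z_p^2 is stretched to k < p outputs: a*i + b mod p.

def pairwise_prg(seed, k, p=101):
    a, b = seed
    return [(a * i + b) % p for i in range(k)]

# Sanity check of pairwise independence: over a uniformly random seed, each
# fixed pair of distinct positions (i, j) takes every value pair exactly once,
# because (a, b) -> (a*i + b, a*j + b) is a bijection on Z_p^2 when i != j.
def check_pair(i, j, p=101):
    counts = {}
    for a in range(p):
        for b in range(p):
            out = pairwise_prg((a, b), max(i, j) + 1, p)
            key = (out[i], out[j])
            counts[key] = counts.get(key, 0) + 1
    return set(counts.values()) == {1}

print(check_pair(0, 3))  # → True
```

The point of the sketch is only the interface: a short seed, deterministic stretching, and a precisely limited notion of “looking random” (here, pairwise independence rather than fooling all efficient observers).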

Canonical derandomizers (recap, and more details). Canonical derandomizers (PRGs) also come in several flavors. In all of them, the generation time is exponential (in the seed’s length); the small variations refer to the exact formulation of the pseudorandomness condition and to the stretch function. The most standard formulation refers to all (non-uniform) linear-size circuits. Also standard is a uniform formulation: for any fixed polynomial p, no probabilistic p-time algorithm can distinguish the PRG’s output from a truly random string with gap greater than 1/p. We refer to this notion. Indeed, we shall focus on exponential stretch… (The PRG’s running time, in terms of its output length, may be larger than p.)

Canonical derandomizers (a sanity check). NEW: We “reverse” the foregoing connection, showing that if BPP is effectively in P, then one can construct canonical derandomizers of exponential stretch. Well-known: Using canonical derandomizers of exponential stretch, we can effectively put BPP in P; that is, for every problem in BPP and every polynomial p, we obtain a deterministic poly-time algorithm such that no probabilistic p-time algorithm can find (except with probability 1/p) an input on which the deterministic algorithm errs. First, combine the randomized algorithm with the PRG to obtain an effectively equivalent randomized algorithm of logarithmic randomness complexity. Then, use straightforward derandomization.
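The last step above (“straightforward derandomization”) can be sketched as follows. This is a toy illustration, not the talk’s construction: `toy_alg` and all names are hypothetical, and we assume the PRG has already reduced the randomness complexity to O(log n) bits, so that enumerating all seeds takes polynomial time.

```python
from itertools import product

def derandomize(randomized_alg, x, num_seed_bits):
    """Run randomized_alg(x, seed) on every seed and output the majority vote.

    With O(log n) seed bits there are only poly(n) seeds, so this loop is a
    deterministic polynomial-time procedure."""
    votes = 0
    total = 0
    for seed in product([0, 1], repeat=num_seed_bits):
        votes += 1 if randomized_alg(x, seed) else 0
        total += 1
    return votes * 2 > total  # strict majority of seeds accept

# A contrived "BPP-style" algorithm: correct on x iff the seed does not start
# with (1, 1), i.e., it errs on a 1/4 fraction of the seeds.
def toy_alg(x, seed):
    correct = (x % 2 == 0)  # intended answer: is x even?
    return (not correct) if seed[:2] == (1, 1) else correct

print(derandomize(toy_alg, 6, 4))  # → True  (6 is even)
print(derandomize(toy_alg, 7, 4))  # → False
```

The design point is that the majority vote is exactly where the two-sided error of BPP gets eliminated: once the error probability is below 1/2 over the (now polynomially many) seeds, the majority answer is always the correct one.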

Reversing the PRG-to-derandomization connection. Assume (for simplicity) that BPP = P (rather than only effectively so). We construct canonical derandomizers of exponential stretch. Note that a random function of exponential stretch has the desired pseudorandomness feature (w.r.t. gap 1/p, we use a seed of length O(log p)). But we need an explicit (deterministic) construction. Idea: Just derandomize the above construction by using BPP = P. Problem: BPP = P refers to decision problems, whereas we have at hand a construction problem (or a search problem). Solution: Reduce “BPP-search” problems to BPP, via a deterministic poly-time reduction that carefully implements the standard bit-by-bit process. (BPP as a class of promise problems is used here!)

A closer look at the construction (search) problem. Recall: We assume that BPP = P, and construct canonical derandomizers of exponential stretch. The search problem at hand: Given 1^n, find a set S_n of n-bit-long strings such that no p(n)-time observer can distinguish a string selected uniformly in S_n from a totally random string. (W.r.t. gap 1/p(n), where S_n has size poly(p(n)) = poly(n).) Note: the validity of solutions can be checked in BPP. BPP-search ≡ finding solutions in probabilistic poly-time + checking them in BPP. Reduce “BPP-search” problems to BPP, by extending the (current) solution prefix according to an estimate of the probability that a random extension of this prefix yields a valid solution. (The estimate is obtained via a query to a BPP oracle (of a promise type).)
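The bit-by-bit process can be sketched as follows. This is a schematic rendition, not the actual reduction: all names are hypothetical, and the promise-BPP oracle’s probability estimate is simulated here by random sampling, whereas in the reduction it is a deterministic query to a (promise-type) BPP oracle.

```python
import random

def estimate(prefix, n, is_valid, samples=200, rng=random.Random(0)):
    """Estimate Pr[a random extension of `prefix` to n bits is a valid solution].

    Stand-in for the promise-BPP oracle query of the actual reduction."""
    hits = 0
    for _ in range(samples):
        ext = prefix + [rng.randint(0, 1) for _ in range(n - len(prefix))]
        hits += is_valid(ext)
    return hits / samples

def search_by_prefix_extension(n, is_valid):
    """Fix the solution bit by bit, always keeping the extension whose random
    completions are (estimated to be) more often valid."""
    prefix = []
    for _ in range(n):
        if estimate(prefix + [0], n, is_valid) >= estimate(prefix + [1], n, is_valid):
            prefix.append(0)
        else:
            prefix.append(1)
    return prefix

# Toy search problem standing in for "find a good set S_n":
# find an 8-bit string with exactly five 1s.
solution = search_by_prefix_extension(8, lambda s: sum(s) == 5)
print(sum(solution))  # → 5
```

The invariant driving the method: as long as the kept prefix has a noticeable probability of extending to a valid solution, one of its two one-bit extensions does too, so n oracle-assisted steps deterministically pin down a full solution.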

Summary: canonical derandomizers are necessary (not merely sufficient) for placing BPP in P. THM (1st version of equivalence): The following are equivalent. 1. For every polynomial p, BPP is p-effectively in P. 2. For every polynomial p, there exists a p-robust canonical derandomizer of exponential stretch. A problem is p-effectively solved by a function F if no probabilistic p-time algorithm can find an input on which F errs. A PRG is p-robust if no probabilistic p-time algorithm can distinguish its output from a truly random one with gap greater than 1/p. Targeted ≈ auxiliary-input PRG (the same auxiliary input is given to the PRG and to its tester). THM (2nd version of equivalence): BPP = P iff there exists a targeted canonical derandomizer of exponential stretch.

Reflections on our construction of canonical derandomizers. Recall: We assumed that BPP=P, and constructed canonical derandomizers of exponential stretch. The construction of a canonical derandomizer may amount to a fancy diagonalization argument, where the “fancy” aspect refers to the need to estimate the average behavior of machines. Indeed, we saw that the construction of a suitable set S n reduces to obtaining such estimates, which are easy to get from a BPP oracle. One lesson is that BPP=P is equivalent to the existence of canonical derandomizers of exponential stretch. Another lesson is that derandomization may be more related to diagonalization than to “hard” lower bounds…

Time for speculations. Derandomization may be more related to diagonalization than to “hard” lower bounds… The common wisdom (for a decade) has been that derandomization requires proving lower bounds [IW98, IKW01, KI03]. IW98: BPP contained in i.o.-AvSubEXP implies BPP ≠ EXP. Of course, BPP ⊆ SubEXP implies BPP ≠ EXP (by the DTime hierarchy). IKW01: BPP ⊆ NSubEXP (or even less…) implies NEXP ⊄ P/poly, and ditto for MA ⊆ NSubEXP. (Actually, the focus is on the latter.) But this follows from “MA ⊆ NSubEXP implies NEXP ⊄ P/poly”, which in turn follows from “NEXP ⊆ P/poly implies NEXP = MA”. So this is a matter of “Karp–Lipton/[BFL]” + the NTime hierarchy. KI03: derandomizing PIT yields Boolean/arithmetic circuit lower bounds for NEXP. Ditto regarding whether such lower bounds are so much out of reach.

Additional thoughts (or controversies). Some researchers attribute great importance to the difference between promise problems and “pure” decision problems. I have blurred this difference, and believe that whenever it exists we should consider the (general) promise-problem version. Shall we see BPP = P proven in our lifetime? The (only) negative evidence we have is that this would imply circuit lower bounds in NEXP [IKW01, KI03]. But recall that we do know that NEXP ⊆ P/poly if and only if NEXP = MA, so is this negative evidence not similar to saying that derandomizing MA [or BPP] implies a “lower bound” on computing EXP [or NEXP] by MA [or BPP]?

The End.
The slides of this talk are available at
The paper (without the bolder speculations) is available at