In a World of BPP=P Oded Goldreich Weizmann Institute of Science.

Talk’s Outline The concrete content and main message of this talk is that BPP=P if and only if there exist suitable pseudorandom generators. It was known for decades that suitable pseudorandom generators imply BPP=P. The novelty is in the converse. More generally, we explore what follows if BPP=P. Throughout the talk, BPP and P denote classes of promise problems. We shall start with a brief review of pseudorandom generators.

Pseudorandom generators: a general paradigm The term “pseudorandom generator” (PRG) refers to a general paradigm with numerous incarnations, ranging from general-purpose PRGs (i.e., fooling any efficient observer) to special-purpose PRGs (e.g., pairwise-independence PRGs). The common themes (and differences) relate to (1) the amount of stretching, (2) the notion of “looking random”, and (3) the complexity of (deterministic) generation (or stretching). N.B.: In all cases the PRG itself is a deterministic algorithm. (Diagram: seed → G → output sequence.)
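To make the special-purpose end of the spectrum concrete, here is a toy pairwise-independence PRG (an illustration, not taken from the talk): an affine map over Z_P stretches a two-symbol seed (a, b) into a longer sequence, and over a uniformly random seed any two fixed output symbols are uniform and independent.

```python
# Toy special-purpose PRG: pairwise independence via an affine map mod P.
# The seed (a, b) is stretched to the sequence a*i + b mod P.
P = 101  # a small prime modulus, chosen only for this illustration

def pairwise_prg(a, b, m):
    """Stretch the 2-symbol seed (a, b) into m output symbols mod P."""
    return [(a * i + b) % P for i in range(m)]
```

For instance, the first two outputs are (b, a+b mod P), which range over all P² pairs as the seed varies; the generator is deterministic, and all the “randomness” resides in the seed.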

Pseudorandom generators: canonical derandomizers The term “pseudorandom generator” refers to a general paradigm with numerous incarnations. The common themes (and differences) relate to (1) the amount of stretching, (2) the notion of “looking random”, and (3) the complexity of (deterministic) generation (or stretching). For the purpose of derandomization (e.g., of BPP) it suffices to use PRGs that run in exponential time (i.e., exponential in the length of their input seed). Their output should look random to linear-time observers (i.e., linear in the length of the PRG’s output) ⇒ canonical derandomizers. (Diagram: seed → G → output sequence.)

Canonical derandomizers (recap and use) Def (canonical derandomizer): a PRG that runs in exponential time (i.e., exponential in the length of its input seed), producing output that looks random to linear-time observers (i.e., linear in the length of the PRG’s output). THM: If there exist canonical derandomizers of exponential stretch, then BPP is in P. (Start with a linear-time randomized algorithm.) First, combine the randomized algorithm with the PRG to obtain a functionally equivalent randomized algorithm of logarithmic randomness complexity. Note that this increases the running time by an exp(log) = poly term. Functional equivalence follows by indistinguishability! Next, use straightforward derandomization, introducing an overhead of an exp(log) = poly factor.
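The straightforward derandomization step can be sketched as follows (a minimal sketch: `algo` stands for the randomized algorithm already composed with the PRG, so its randomness complexity is logarithmic and the enumeration is of polynomial size).

```python
from itertools import product

def derandomize(algo, x, seed_len):
    """Decide x deterministically by majority vote over all 2^seed_len
    coin sequences of algo; correct because algo errs only on a
    minority of its coin sequences."""
    accepting = sum(bool(algo(x, r)) for r in product((0, 1), repeat=seed_len))
    return 2 * accepting > 2 ** seed_len
```

With seed_len = O(log |x|), the loop runs over exp(log) = poly many seeds, which is exactly the polynomial overhead noted on the slide.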

Canonical derandomizers (recap, in more detail) Canonical derandomizers (PRGs) also come in several flavors. In all of them, the generation time is exponential (in the seed’s length); the small variations refer to the exact formulation of the pseudorandomness condition and to the stretch function. The most standard formulation refers to all (non-uniform) linear-size circuits. (That’s the one we used on the prior slide.) Also standard is a uniform formulation: for any fixed polynomial p, no probabilistic p-time algorithm can distinguish the PRG’s output from a truly random string with gap greater than 1/p. We refer to this notion. Indeed, we shall focus on exponential stretch… (The PRG’s running time, in terms of its output length, may be larger than p.)
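The uniform pseudorandomness condition can be written out explicitly (a restatement of the slide; the symbols $k$, $n$, $U_m$ are chosen here, not fixed in the slide):

```latex
% Uniform p-robustness: G stretches a k-bit seed to an n-bit output,
% with n = 2^{\Omega(k)} in the case of exponential stretch.
\[
  \forall \text{ probabilistic } p\text{-time } D:\quad
  \bigl|\, \Pr[D(G(U_k)) = 1] - \Pr[D(U_n) = 1] \,\bigr| \;\le\; \frac{1}{p(n)},
\]
% where U_m denotes the uniform distribution over {0,1}^m.
```

Note that $D$'s time bound $p$ is fixed in advance, whereas $G$'s running time (as a function of $n$) may exceed $p$; this is what separates canonical derandomizers from general-purpose PRGs.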

Canonical derandomizers (the uniform version, a sanity check) NEW: We “reverse” the foregoing connection, showing that if BPP is effectively in P, then one can construct canonical derandomizers of exponential stretch. Well-known: Using canonical derandomizers of exponential stretch, we can effectively put BPP in P; that is, for every problem in BPP and every polynomial p, we obtain a deterministic poly-time algorithm such that no probabilistic p-time algorithm can find (except with probability 1/p) an input on which the deterministic algorithm errs. First, combine the randomized algorithm with the PRG to obtain an effectively equivalent randomized algorithm of logarithmic randomness complexity. Note: a p-time algorithm finding an error yields a p-time distinguisher! Then, use straightforward derandomization.

Reversing the PRG-to-derandomization connection Assume (for simplicity) that BPP=P (rather than only effectively so). We construct canonical derandomizers of exponential stretch. Note that a random function of exponential stretch has the desired (p-time) pseudorandomness feature (w.r.t. gap 1/p, we use a seed of length O(log p)). But we need an explicit (deterministic) construction. Idea: just derandomize the above construction by using BPP=P. Problem: BPP=P refers to decision(al) problems, whereas we have at hand a construction problem (or a search problem). Solution: reduce “BPP-search” problems to BPP, via a deterministic poly-time reduction that carefully implements the standard bit-by-bit process. (BPP as a class of promise problems is used here!)

A closer look at the construction (search) problem Recall: We assume that BPP=P, and construct canonical derandomizers of exponential stretch. The search problem at hand: Given 1^n, find a set S_n of n-bit strings such that no p(n)-time observer can distinguish a string selected uniformly in S_n from a totally random string. (W.r.t. gap 1/p(n), where S_n has size poly(p(n)) = poly(n).) Note: validity of solutions can be checked in BPP. BPP-search ≡ finding solutions in PPT + checking them in BPP. Reduce “BPP-search” problems to BPP, by extending the (current) solution prefix according to an estimate of the probability that a random extension of this prefix yields a valid solution. (The estimate is obtained via a query to a BPP oracle (of a promise type).)
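The bit-by-bit process can be sketched as follows (a sketch under assumptions: `estimate` is a hypothetical stand-in for the BPP promise-problem oracle, returning an approximation of the probability that a random extension of the given prefix is a valid solution).

```python
def bit_by_bit_search(n, estimate):
    """Greedily build an n-bit solution: at each step, keep the bit value
    under which random extensions are (estimated to be) more likely to
    yield a valid solution."""
    prefix = []
    for _ in range(n):
        prefix.append(0 if estimate(prefix + [0]) >= estimate(prefix + [1]) else 1)
    return prefix
```

The point of the reduction is that each greedy step needs only an estimate of an average, and sufficiently accurate estimates keep the chosen prefix extendable to a valid solution whenever one exists.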

Summary: canonical derandomizers are necessary (not merely sufficient) for placing BPP in P THM (1st version of equivalence): The following are equivalent. 1. For every polynomial p, BPP is p-effectively in P. 2. For every polynomial p, there exists a p-robust canonical derandomizer of exponential stretch. A problem is p-effectively solved by a function F if no probabilistic p-time algorithm can find an input on which F errs. A PRG is p-robust if no probabilistic p-time algorithm can distinguish its output from a truly random one with gap greater than 1/p. Targeted ≡ auxiliary-input PRG (the same auxiliary input is given to the PRG and to its tester). THM (2nd version of equivalence): BPP=P iff there exists a targeted canonical derandomizer of exponential stretch.

Reflections on our construction of canonical derandomizers Recall: We assumed that BPP=P, and constructed canonical derandomizers of exponential stretch. The construction of a canonical derandomizer may amount to a fancy diagonalization argument, where the “fancy” aspect refers to the need to estimate the average behavior of machines. Indeed, we saw that the construction of a suitable set S_n reduces to obtaining such estimates, which are easy to get from a BPP oracle. One lesson is that BPP=P is equivalent to the existence of canonical derandomizers of exponential stretch. Another lesson is that derandomization may be more related to diagonalization than to “hard” lower bounds…

Additional thoughts (or controversies) Some researchers attribute great importance to the difference between promise problems and “pure” decision problems. I have blurred this difference, and believe that whenever it exists we should consider the (general) promise-problem version. Shall we see BPP=P proved in our lifetime? The (only) negative evidence we have is that this would imply circuit lower bounds in NEXP [IKW01, KI03]. But recall that we do know that NEXP ⊆ P/poly if and only if NEXP = MA, so is this negative evidence not similar to saying that derandomizing MA [or BPP] implies a “lower bound” on computing NEXP [or EXP] by MA [or BPP]? Furthermore, maybe this indicates that such lower bounds are within reach (cf. Williams)?

The End
The slides of this talk are available at
The paper is available at