CS151 Complexity Theory Lecture 9 April 27, 2004.


Slide 2: Outline
–The Nisan-Wigderson generator
–Error-correcting codes from polynomials
–Turning worst-case hardness into average-case hardness

Slide 3: Hardness vs. randomness
We have shown: if one-way permutations exist then
  BPP ⊆ ∩_{δ>0} TIME(2^(n^δ)) ⊊ EXP
–the simulation is better than brute force, but just barely
–stronger assumptions on the difficulty of inverting the OWF lead to better simulations…

Slide 4: Hardness vs. randomness
We will show: if E requires exponential-size circuits then BPP = P, by building a different generator from different assumptions.
  E = ∪_k DTIME(2^(kn))

Slide 5: Hardness vs. randomness
BMY: for every δ > 0, G_δ is a PRG with:
–seed length t = m^δ
–output length m
–error ε < 1/m^d (all d)
–fooling size s = m^e (all e)
–running time m^c
The running time of the simulation is dominated by 2^t.

Slide 6: Hardness vs. randomness
To get BPP = P, we would need t = O(log m). The BMY building block is a one-way permutation f: {0,1}^t → {0,1}^t, required to fool circuits of size m^e (all e). With these settings a circuit has time to invert f by brute force, so we can't get BPP = P with this type of PRG.

Slide 7: Hardness vs. randomness
BMY pseudo-random generator:
–one generator fooling all poly-size bounds
–the one-way permutation is the hard function
–implies a hard function in NP ∩ coNP
New idea (Nisan-Wigderson):
–for each poly-size bound, one generator
–the hard function is allowed to be in E = ∪_k DTIME(2^(kn))

Slide 8: Comparison

                 BMY: ∀δ>0, PRG G_δ       NW: PRG G
  seed length    t = m^δ                  t = O(log m)
  running time   t^c·m                    m^c
  output length  m                        m
  error          ε < 1/m^d (all d)        ε < 1/m
  fooling size   s = m^e (all e)          s = m

Slide 9: NW PRG
NW: for a fixed constant δ, G = {G_n} with:
–seed length t = O(log n) = O(log m)
–running time n^c = poly(m)
–output length m = n^δ
–error ε < 1/m
–fooling size s = m
Using this PRG we obtain BPP = P:
–to fool size n^k, use G_(n^(k/δ))
–running time O(n^k + n^(ck/δ))·2^t = poly(n)

Slide 10: NW PRG
First attempt: build a PRG assuming E contains unapproximable functions.
Definition: the function family f = {f_n}, f_n: {0,1}^n → {0,1}, is s(n)-unapproximable if for every family of size-s(n) circuits {C_n}:
  Pr_x[C_n(x) = f_n(x)] ≤ ½ + 1/s(n).

Slide 11: One bit
Suppose f = {f_n} is s(n)-unapproximable for s(n) = 2^(Ω(n)), and f ∈ E. Consider the "1-bit" generator family G = {G_n}:
  G_n(y) = y ◦ f_{log n}(y)
Idea: if G is not a PRG, then there exists a predictor that computes f_{log n} with better than ½ + 1/s(log n) agreement; contradiction.

Slide 12: One bit
Suppose f = {f_n} is s(n)-unapproximable for s(n) = 2^(δn), and f ∈ E. The 1-bit generator G_n(y) = y ◦ f_{log n}(y) achieves:
–seed length t = log n
–output length m = log n + 1 (we want n^δ)
–fooling size s ≈ s(log n) = n^δ
–running time n^c
–error ε ≈ 1/s(log n) = 1/n^δ < 1/m

Slide 13: Many bits
Try outputting many evaluations of f:
  G(y) = f(b_1(y)) ◦ f(b_2(y)) ◦ … ◦ f(b_m(y))
It seems that a predictor must evaluate f(b_i(y)) to predict the i-th bit. Does this work?

Slide 14: Many bits
Try outputting many evaluations of f:
  G(y) = f(b_1(y)) ◦ f(b_2(y)) ◦ … ◦ f(b_m(y))
–a predictor might notice correlations without having to compute f
–but a more subtle argument works for a specific choice of b_1 … b_m

Slide 15: Nearly-Disjoint Subsets
Definition: S_1, S_2, …, S_m ⊆ {1…t} is an (h, a) design if:
–for all i, |S_i| = h
–for all i ≠ j, |S_i ∩ S_j| ≤ a
(figure: overlapping subsets S_1, S_2, S_3 inside the universe {1…t})

Slide 16: Nearly-Disjoint Subsets
Lemma: for every ε > 0 and m < n, we can construct in poly(n) time an (h = log n, a = ε log n) design S_1, S_2, …, S_m ⊆ {1…t} with t = O(log n).

Slide 17: Nearly-Disjoint Subsets
Proof sketch:
–pick a random (log n)-subset of {1…t}
–set t = O(log n) so that the expected overlap with a fixed S_i is (ε log n)/2
–the probability that the overlap with S_i exceeds ε log n is at most 1/n
–union bound: some subset has the required small overlap with all S_i picked so far…
–find it by exhaustive search; repeat n times.
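The proof sketch above translates directly into code. A minimal Python sketch (the function name `greedy_design` and the retry limit are our own; random retries stand in for the exhaustive search, and t is taken as a parameter rather than derived as O(log n)):

```python
import random

def greedy_design(m, h, a, t, rng=None, tries=10000):
    """Build S_1..S_m, subsets of {1..t} of size h with pairwise
    intersections of size at most a, by repeated random sampling:
    a random h-subset has small expected overlap with each earlier
    set, so a few retries almost surely find a suitable one."""
    rng = rng or random.Random()
    universe = list(range(1, t + 1))
    sets = []
    for _ in range(m):
        for _ in range(tries):
            S = frozenset(rng.sample(universe, h))
            if all(len(S & T) <= a for T in sets):
                sets.append(S)
                break
        else:
            raise RuntimeError("no suitable subset found; increase t")
    return sets
```

For example, `greedy_design(8, 4, 2, 20)` produces eight 4-element subsets of {1…20} with pairwise intersections of size at most 2, which the construction guarantees by only appending sets that pass the overlap check.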

Slide 18: The NW generator
–f ∈ E is s(n)-unapproximable, for s(n) = 2^(δn)
–S_1, …, S_m ⊆ {1…t} is a (log n, a = (δ log n)/3) design with t = O(log n)
  G_n(y) = f_{log n}(y|_{S_1}) ◦ f_{log n}(y|_{S_2}) ◦ … ◦ f_{log n}(y|_{S_m})
(figure: each output bit applies f_{log n} to the bits of the seed y indexed by one S_i)
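The generator itself is simple to express in code. A minimal Python sketch (names are ours; the parity function in the usage example is only a placeholder, since a toy script cannot supply an actually hard f):

```python
def restrict(y, S):
    """y|_S: the bits of the seed y at the (sorted) positions in S."""
    return tuple(y[i] for i in sorted(S))

def nw_generator(f, y, design):
    """NW generator sketch: the i-th output bit is the hard function f
    evaluated on the seed restricted to the design set S_i."""
    return [f(restrict(y, S)) for S in design]

# Toy usage: f = parity (NOT a hard function, just a stand-in),
# 4-bit seed, three design sets with pairwise overlap 1.
parity = lambda bits: sum(bits) % 2
output = nw_generator(parity, (1, 0, 1, 1), [{0, 1}, {1, 2}, {2, 3}])
# output == [1, 1, 0]
```

Note how each output bit reads only the |S_i| seed positions in its own design set; the small pairwise overlaps are what make the hybrid argument on the following slides go through.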

Slide 19: The NW generator
Theorem (Nisan-Wigderson): G = {G_n} is a pseudo-random generator with:
–seed length t = O(log n)
–output length m = n^(δ/3)
–running time n^c
–fooling size s = m
–error ε = 1/m

Slide 20: The NW generator
Proof:
–assume G does not ε-pass the statistical test C = {C_m} of size s:
  |Pr_x[C(x) = 1] − Pr_y[C(G_n(y)) = 1]| > ε
–we can transform this distinguisher into a predictor P of size s′ = s + O(m):
  Pr_y[P(G_n(y)_{1…i−1}) = G_n(y)_i] > ½ + ε/m

Slide 21: The NW generator
Proof (continued):
  Pr_y[P(G_n(y)_{1…i−1}) = G_n(y)_i] > ½ + ε/m
–fix the bits of y outside of S_i to values that preserve the advantage; writing y′ for the remaining free bits (those indexed by S_i):
  Pr_{y′}[P(G_n(y)_{1…i−1}) = G_n(y)_i] > ½ + ε/m
(figure: the seed y with the bits outside S_i fixed and the bits in S_i given by y′)

Slide 22: The NW generator
Proof (continued):
–G_n(y)_i is exactly f_{log n}(y′)
–for j ≠ i, as y′ varies, G_n(y)_j varies over at most 2^a values, since |S_j ∩ S_i| ≤ a!
–so we can hard-wire up to (m−1) tables of 2^a values each to provide G_n(y)_{1…i−1}

Slide 23: The NW generator
With the hardwired tables, the predictor P outputs f_{log n}(y′) on input y′:
–size: s + O(m) + (m−1)·2^a < s(log n) = n^δ
–advantage: ε/m = 1/m² > 1/s(log n) = n^(−δ)
–contradiction.

Slide 24: Worst-case vs. Average-case
Theorem (NW): if E contains 2^(Ω(n))-unapproximable functions then BPP = P.
How reasonable is the unapproximability assumption?
Hope: obtain BPP = P from a worst-case complexity assumption
–try to fit into the existing framework without the new notion of "unapproximability"

Slide 25: Worst-case vs. Average-case
Theorem (Impagliazzo-Wigderson, Sudan-Trevisan-Vadhan): if E contains functions that require size-2^(Ω(n)) circuits, then E contains 2^(Ω(n))-unapproximable functions.
Proof:
–main tool: error-correcting codes

Slide 26: Error-correcting codes
An error-correcting code (ECC) is a map C: Σ^k → Σ^n.
–message m ∈ Σ^k
–received word R: C(m) with some positions corrupted
–if there are not too many errors, we can decode: D(R) = m
Parameters of interest:
–rate: k/n
–distance: d = min_{m ≠ m′} Δ(C(m), C(m′))

Slide 27: Distance and error correction
If C is an ECC with distance d, then we can uniquely decode from up to ⌊d/2⌋ errors.
(figure: in Σ^n, balls of radius d/2 around distinct codewords are disjoint)
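A brute-force illustration of unique decoding within radius ⌊d/2⌋ (our own toy example, using the 5-fold binary repetition code, which has distance d = 5 and hence corrects 2 errors):

```python
def hamming(u, v):
    """Number of positions in which u and v differ."""
    return sum(a != b for a, b in zip(u, v))

def nearest_codeword(R, codewords):
    """Unique decoding by brute force: the codeword closest to R."""
    return min(codewords, key=lambda c: hamming(R, c))

code = [(0, 0, 0, 0, 0), (1, 1, 1, 1, 1)]  # repetition code, d = 5
R = (1, 1, 0, 1, 0)                        # all-ones codeword, 2 errors
# nearest_codeword(R, code) recovers (1, 1, 1, 1, 1), since 2 <= floor(d/2)
```

With 3 or more errors the received word crosses the midpoint between the two codewords and unique decoding fails, which is exactly why list decoding (next slide) is needed beyond radius d/2.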

Slide 28: Distance and error correction
We can find a short list of messages (one of them correct) even after closer to d errors!
Theorem (Johnson): a binary code with distance (½ − δ²)n has at most O(1/δ²) codewords in any ball of radius (½ − δ)n.

Slide 29: Example: Reed-Solomon
–alphabet Σ = F_q: the field with q elements
–message m ∈ Σ^k
–polynomial of degree at most k−1: p_m(x) = Σ_{i=0…k−1} m_i x^i
–codeword C(m) = (p_m(x))_{x ∈ F_q}
–rate = k/q

Slide 30: Example: Reed-Solomon
Claim: the distance is d = q − k + 1.
–suppose Δ(C(m), C(m′)) < q − k + 1
–then the polynomials p_m(x) and p_{m′}(x) agree on more than k−1 points in F_q
–so the polynomial p(x) = p_m(x) − p_{m′}(x) has more than k−1 zeros
–but it has degree at most k−1…
–contradiction.
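The encoding and the distance claim can be checked exhaustively for tiny parameters. A minimal Python sketch (our own; it assumes q is prime, so that the integers mod q form the field F_q):

```python
def rs_encode(m, q):
    """Reed-Solomon: evaluate p_m(x) = sum_i m_i x^i at every x in F_q
    (q prime, so arithmetic mod q is field arithmetic)."""
    return tuple(sum(mi * pow(x, i, q) for i, mi in enumerate(m)) % q
                 for x in range(q))

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

# Exhaustively verify d = q - k + 1 for q = 5, k = 2: distinct degree-<=1
# polynomials agree on at most one point, so codewords differ in >= 4.
q, k = 5, 2
words = [rs_encode((a, b), q) for a in range(q) for b in range(q)]
d = min(hamming(u, v) for u in words for v in words if u != v)
# d == q - k + 1 == 4
```

The exhaustive check mirrors the proof on this slide: the minimum is attained by two polynomials sharing exactly one root of their difference.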

Slide 31: Example: Reed-Muller
Parameters: t (dimension), h (degree)
–alphabet Σ = F_q: the field with q elements
–message m ∈ Σ^k
–multivariate polynomial of total degree at most h: p_m(x) = Σ_{i=0…k−1} m_i M_i, where {M_i} are all monomials of degree ≤ h

Slide 32: Example: Reed-Muller
–M_i is a monomial of total degree at most h, e.g. x_1^2 x_2 x_4^3
–need #monomials = (h+t choose t) > k
–codeword C(m) = (p_m(x))_{x ∈ (F_q)^t}
–rate = k/q^t
Claim: distance d = (1 − h/q)·q^t
–proof: by Schwartz-Zippel, a nonzero polynomial of total degree h has at most an h/q fraction of zeros
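The Schwartz-Zippel bound behind the distance claim can be sanity-checked exhaustively for small parameters. A minimal Python sketch (the polynomial p(x_1, x_2) = x_1^2·x_2 + 3·x_1 over F_7 is our own example; it has total degree h = 3, so its zero fraction should be at most h/q = 3/7):

```python
from itertools import product

def zero_count(poly, q, t):
    """Exhaustively count the zeros of poly over (F_q)^t; poly maps a
    point (a tuple of t field elements) to an integer, taken mod q."""
    return sum(1 for x in product(range(q), repeat=t) if poly(x) % q == 0)

q, t, h = 7, 2, 3
zeros = zero_count(lambda p: p[0] ** 2 * p[1] + 3 * p[0], q, t)
# zeros == 13 out of q**t == 49 points, within the Schwartz-Zippel
# bound of (h/q) * q**t == 21
```

Counting by hand confirms this: x_1 = 0 gives 7 zeros (any x_2), and each nonzero x_1 forces the single value x_2 = −3·x_1^(−1), giving 6 more.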

Slide 33: Codes and hardness
Reed-Solomon (RS) and Reed-Muller (RM) codes are efficiently encodable.
–efficient unique decoding? yes (a classic result)
–efficient list-decoding? yes (a recent result of Sudan; on the problem set)

Slide 34: Codes and Hardness
Use codes for worst-case to average-case hardness:
–m: the truth table of f: {0,1}^{log k} → {0,1} (worst-case hard)
–C(m): the truth table of f′: {0,1}^{log n} → {0,1} (average-case hard)

Slide 35: Codes and Hardness
If n = poly(k), then f ∈ E implies f′ ∈ E.
We want to be able to prove: if f′ is s′-approximable, then f is computable by a circuit of size s = poly(s′).

Slide 36: Codes and Hardness
Key: a circuit C that approximates f′ implicitly gives a received word R (the truth table of C), and the decoding procedure D "computes" f exactly from R.
This requires a special notion of efficient decoding.