Local List Decoding
Madhu Sudan, Microsoft Research
IPAM, March 2, 2011

Overview
Last 20 years: lots of work on List Decoding; lots of work on Local Decoding.
Today: a look at the intersection, Local List Decoding.
Part I: The accidental beginnings
Part II: Some applications
Part III: Some LLD codes
Part IV: Current works

Part I: History

List Decodable Code
Encoding function: E: Σ^k → Σ^n.
Code: C = Image(E).
(ρ, L)-List-Decodable Code: for all r ∈ Σ^n, #{w ∈ C | Δ(r, w) ≤ ρn} ≤ L.
List-decoder: outputs the list, given r.
[Elias '57, Wozencraft '58]
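As a concrete sketch of the definition (not from the talk), here is a brute-force (ρ, L)-list decoder, using the Hadamard code (revisited in Part III) as the toy example. The function names are illustrative; a real list decoder would not enumerate all of Σ^k.

```python
from itertools import product

def hamming(u, v):
    """Hamming distance Δ(u, v) between two equal-length tuples."""
    return sum(a != b for a, b in zip(u, v))

def hadamard_encode(m):
    """Toy encoder E: {0,1}^k -> {0,1}^(2^k); position y holds <m,y> mod 2."""
    return tuple(sum(mi * yi for mi, yi in zip(m, y)) % 2
                 for y in product((0, 1), repeat=len(m)))

def list_decode(r, k, rho):
    """Brute force: return every message whose codeword is within rho*n of r."""
    n = len(r)
    return [m for m in product((0, 1), repeat=k)
            if hamming(hadamard_encode(m), r) <= rho * n]

# Example: corrupt a codeword in 1 of 8 positions; radius rho = 1/4 suffices.
m = (1, 0, 1)
r = list(hadamard_encode(m))
r[3] ^= 1                      # flip one bit
candidates = list_decode(tuple(r), k=3, rho=0.25)
print(candidates)              # -> [(1, 0, 1)]
```

Here the list has size 1 because ρ is below half the minimum distance; pushing ρ toward ½ − ε is exactly where the list grows and list decoding becomes interesting.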

Local (Unique) Decoding
ρ-decoder: has access to r such that Δ(r, E(m)) ≤ ρn; outputs m.
ρ-local decoder: has query access to r: [n] → Σ; on input i ∈ [k], outputs m_i.
(ρ, t)-LDC: makes ≤ t queries for every r, i.
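To make the query-access picture concrete, here is a sketch (not from the talk) of the classical 2-query local decoder for the Hadamard code (introduced in Part III): to recover m_i, query r at a random y and at y ⊕ e_i, and XOR the answers. Names and the trial count are illustrative; correctness assumes ρ < 1/4.

```python
import random

def local_decode_bit(r, k, i, trials=101):
    """Recover bit m_i of the message from query access to a received word r,
    where r(y) agrees with the codeword <m,y> mod 2 on > 3/4 of the inputs.
    Each trial makes 2 queries: r(y) XOR r(y + e_i) equals m_i whenever
    both queried positions are uncorrupted."""
    votes = 0
    for _ in range(trials):
        y = [random.randint(0, 1) for _ in range(k)]
        y_flipped = y[:]
        y_flipped[i] ^= 1                 # y + e_i
        votes += r(tuple(y)) ^ r(tuple(y_flipped))
    return int(votes > trials // 2)       # majority vote over the trials

# Demo: r is the Hadamard encoding of m with 2 of 16 positions corrupted.
m = (1, 1, 0, 1)
k = len(m)
corrupted = {(0, 0, 0, 0), (1, 0, 1, 0)}
def r(y):
    bit = sum(mi * yi for mi, yi in zip(m, y)) % 2
    return bit ^ (1 if y in corrupted else 0)

# Almost surely prints [1, 1, 0, 1]: the error rate 1/8 is below the 1/4 threshold.
print([local_decode_bit(r, k, i) for i in range(k)])
```

Note the decoder reads only O(trials) positions of r, never the whole word; that is the point of locality.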

Local List Decoding
ρ-list decoder: has access to r ∈ Σ^n; outputs {m_1, …, m_L} = {m | Δ(r, E(m)) ≤ ρn}.
(ρ, t)-list decoder: has query access to r: [n] → Σ; on inputs i ∈ [k] and j ∈ [L], outputs (m_j)_i.
Note: the numbering m_1, …, m_L may be arbitrary, but must be consistent as we vary i.

(Convoluted) History
1950 [Reed+Muller]: Code due to Muller; decoder due to Reed. The majority-logic decoder is essentially a local decoder for ρ < distance/2, though not stated or analyzed in local terms.
1957 [Elias]: Defined list decoding; analyzed in the random-error setting only.
1980s: Many works on random-self-reducibility; essentially local decoders (for un/natural codes).

(Convoluted) History
1986 [Goldreich-Levin]: Local list-decoder for the Hadamard code. No mention of any of these words in the paper; list-decoding appears in the acknowledgments. But the idea was certainly there, also in [Levin '85]. (Many variations since: KM, GRS.)
[Beaver-Feigenbaum, Lipton, Gemmell-Lipton-Rubinfeld-S.-Wigderson, Gemmell-S.]: Local decoders for generalized RM codes.
1996, '98 [Guruswami+S.]: List-decoder for Reed-Solomon codes.

(Convoluted) History
1999 [S.-Trevisan-Vadhan]: Local list-decoding defined; LLD for the generalized RM code.
2000 [Katz-Trevisan]: Local decoding defined; lower bounds for LDCs.

Why Convoluted?
What is convoluted? The big gap (positive/negative) between definitions and algorithms.
Why? Motivations and applications kept changing; algorithms were not crucial to the early applications; some applications needed specific codes; and different communities were involved: information theory/coding theory, and CS (complexity/crypto/learning).

Part II: Applications

Hardcore Predicates
f: {0,1}^n → {0,1}^n is a owf if:
f is easy to compute;
f^{-1} is hard on random inputs: given y = f(x) for uniform x, output some x in f^{-1}(y); every poly-time algorithm succeeds with negligible probability.
b: {0,1}^n → {0,1} is a hardcore predicate for f if f remains hard to invert given b(x) and f(x).

Hardcore Predicates
b: {0,1}^n × [M] → {0,1} is a (randomized) hardcore predicate for f if b(x, s) is hard to predict w.p. ½ + ε, given f(x) and s.
[Blum-Micali, Yao, Goldreich-Levin]: 1-1 owf f + hardcore b ⇒ pseudorandom generator.
[Goldreich-Levin, Impagliazzo]: If E: {0,1}^k → {0,1}^m is a (½ − ε, poly(n))-LLDC, then b(x, s) = E(x)_s is a hardcore predicate for every owf f.

Proof of [GL, I]
Suppose A predicts b(x, s) given f(x), s.
Fix f(x); let r(s) = A(f(x), s).
Run Decoder(r, i, j) for all i, j to recover {x_1, …, x_L}.
Check whether f(x_j) = f(x)!
(Easy) Claim: This recovers f^{-1}(f(x)) w.h.p.
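The reduction above can be simulated end to end on a toy instance (a sketch, not from the talk): we stand in a brute-force Hadamard list decoder for the local one, use a trivially invertible map as the "one-way" function, and hard-wire a predictor A that errs on a fixed 1/8 of the inputs. All names and the noise pattern are illustrative.

```python
from itertools import product

k = 6
domain = list(product((0, 1), repeat=k))
# Toy "one-way" function (illustrative only): a fixed injective map.
f = {m: tuple(reversed(m)) for m in domain}

x = (1, 0, 1, 1, 0, 0)
fx = f[x]

# A predictor for the hardcore bit b(x,s) = <x,s> mod 2 that errs on the 8
# inputs s with s[0] = s[1] = s[2] = 1: error rate 1/8, i.e. advantage 3/8.
def A(fx_val, s):
    bit = sum(xi * si for xi, si in zip(x, s)) % 2   # the simulation cheats with x
    return bit ^ (1 if s[0] == s[1] == s[2] == 1 else 0)

# The [GL,I] reduction: fix f(x), view s -> A(f(x), s) as a received word,
# list-decode the Hadamard code, keep the candidates x_j with f(x_j) = f(x).
received = tuple(A(fx, s) for s in domain)
encode = lambda m: tuple(sum(mi * si for mi, si in zip(m, s)) % 2 for s in domain)
dist = lambda u, v: sum(a != b for a, b in zip(u, v))
candidates = [m for m in domain if dist(encode(m), received) <= 0.3 * len(domain)]
preimages = [m for m in candidates if f[m] == fx]
print(preimages)   # -> [(1, 0, 1, 1, 0, 0)]
```

The final filter f(x_j) = f(x) is what makes the claim easy: any spurious list entry fails it, while some true preimage survives whenever A really predicts b.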

Thoughts
Did [GL] really need local list-decoding? No; a simple LDC mapping k to poly(k) bits would do. Unfortunately, none was known with a poly(k)-time list-decoder.
GL: Designed a (½ − ε, poly(k))-LLDC for the Hadamard code (which maps k bits to 2^k bits).

Hardness Amplification
Classical quest in complexity: find hard functions (for some class), e.g., f ∈ NP − P, or f ∈ PSPACE − P. Story so far: can't find such.
Modern question: find functions that are really hard: a Boolean f ∈ NP that is hard to distinguish from a random function by algorithms in P.

Hardness Amplification
Thm [Lipton, …, S.-Trevisan-Vadhan]: Let f: {0,1}^k → {0,1} be hard to compute in time poly(k), and let E: {0,1}^K → {0,1}^N be (½ − ε, poly(k))-locally-list-decodable, with K = 2^k and N = 2^n. Then g: {0,1}^n → {0,1} given by g = E(f) is hard to distinguish from random for poly(k)-time algorithms.
Proof: Obvious from the definitions.

Agnostic Learning
General goal of learning theory: given a class of functions F and query/sample access to f ∈ F, learn f (or a circuit (approximately) computing it).
Learning with noise: f is not in F, but is well-approximated by some function in F.
Agnostic learning: no relationship between f and F; learn some approximation of f in F (if one exists). Useful in applications, as well as in theory.

Agnostic Learning (contd.)
GL result (Kushilevitz-Mansour interpretation): can agnostically learn linear approximations to Boolean functions, with queries.
Kushilevitz-Mansour: list-decoding helps even more; can learn decision trees.
Jackson: also CNF/DNF formulae…

Part III: Some LLD Codes

Hadamard Code
Code: maps {0,1}^k → {0,1}^{2^k}.
Codewords: functions from {0,1}^k → {0,1}. The encoding of m = (m_1, …, m_k) is the function E_m(y_1 … y_k) = Σ_{i=1}^{k} m_i y_i (mod 2).
I.e., codewords are homogeneous, k-variate, degree-1 polynomials over GF(2).
(GL/KM version; there is also a Rackoff version.)

Decoding Hadamard Code (GL/KM)
Preliminaries:
View words as functions mapping to {+1, −1}.
⟨f, g⟩ = E_y[f(y)·g(y)].
⟨E(a), E(b)⟩ = 0 if a ≠ b, and 1 otherwise.
Let f̂_a = ⟨f, E(a)⟩. Then f[x] = Σ_a f̂_a E(a)[x], and for all f, Σ_a f̂_a² = 1.
(½ − ε)-list decoding: given f, find all a such that f̂_a > 2ε.

Decoding Hadamard Code [GL/KM]
Consider the binary tree with 2^n leaves; each node is labelled by its path to the root. The value of leaf a is f̂_a²; the value of an internal node is the sum of its children's values.
Main idea: we can approximate the value of any node a:
Σ_b f̂_{ab}² = E_{x,y,z}[f(xz)·f(yz)·E_a(x)·E_a(y)].
Algorithm: explore the tree from the root downwards; stop at any node whose value is less than ε²; report all leaves found.
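The tree-pruning algorithm above can be sketched in code (not from the talk). For clarity this toy version computes each subtree weight exactly by enumerating the expectation, where the real algorithm estimates it by sampling; function names are illustrative.

```python
from itertools import product

def chi(a, x):
    """Character E_a at x in the {+1,-1} convention: (-1)^<a,x>."""
    return -1 if sum(ai * xi for ai, xi in zip(a, x)) % 2 else 1

def subtree_weight(f, n, prefix):
    """Sum of f_hat(a)^2 over all a extending `prefix`, via the identity
    W(p) = E_{x,y,z}[ f(xz) f(yz) chi_p(x) chi_p(y) ],
    computed here exactly by enumeration (a sampling estimate in practice)."""
    l = len(prefix)
    total, count = 0, 0
    for x in product((0, 1), repeat=l):
        for y in product((0, 1), repeat=l):
            for z in product((0, 1), repeat=n - l):
                total += f(x + z) * f(y + z) * chi(prefix, x) * chi(prefix, y)
                count += 1
    return total / count

def km_list_decode(f, n, eps):
    """Walk the prefix tree, pruning subtrees of weight < eps^2; the surviving
    leaves include every a with |f_hat(a)| >= eps, and by Parseval at most
    1/eps^2 nodes survive per level."""
    frontier = [()]
    for _ in range(n):
        frontier = [p + (b,) for p in frontier for b in (0, 1)
                    if subtree_weight(f, n, p + (b,)) >= eps * eps]
    return frontier

# Demo: f is exactly one character, so exactly one leaf survives.
print(km_list_decode(lambda w: chi((1, 0, 1), w), 3, 0.5))   # -> [(1, 0, 1)]
```

The ε² threshold is what bounds the work: every surviving node carries Fourier weight at least ε², and the total weight is 1.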

(Generalized) Reed-Muller Code
Message space: m-variate, degree-r polynomials over GF(q).
Encoding: values at all points.
k = (m + r choose r), n = q^m.
distance = 1 − r/q (if r < q); ≈ q^{−r/(q−1)} if r > q.
Decoding problem: given query access to a function that is close to a polynomial, find all nearby polynomials, locally.
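The parameters above are easy to tabulate (a small helper, not from the talk; the name is illustrative):

```python
from math import comb

def rm_params(q, m, r):
    """Dimension, length, and relative distance of the (generalized) Reed-Muller
    code of m-variate degree-r polynomials over GF(q), in the r < q regime."""
    assert r < q, "the 1 - r/q distance formula assumes r < q"
    k = comb(m + r, r)        # number of monomials of total degree <= r
    n = q ** m                # one coordinate per evaluation point
    return k, n, 1 - r / q

print(rm_params(q=7, m=2, r=2))   # -> (6, 49, 0.7142857142857143)
```

Note the trade-off this exposes: raising r improves the rate k/n but shrinks the distance, which is exactly the tension local list decoding has to manage.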

Decoding (contd.)
Specifically: given query access to f, and x ∈ GF(q)^m, output p_1(x), …, p_L(x) consistently, where the p_j's are the polynomials within distance ρ of f.
How to index the codewords? By their values at a few (random) points in GF(q)^m.
Claim: specifying the value of p at (roughly) log_q L points specifies it uniquely (given f).

Decoding (contd.)
Refined question: given query access to f, the values p_j(y_1), …, p_j(y_t), and x, compute p_j(x).
Algorithm [Rackoff, STV, GKZ]:
Pick a random (low-dimensional) subspace containing y_1, …, y_t and x.
Brute-force decode f restricted to this subspace.
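Here is a sketch of the restrict-and-brute-force step (not from the talk), simplified to the unique-decoding case with a one-dimensional subspace (a line) and no anchor values y_i; the names and parameters are illustrative. Restricted to a line, a degree-r m-variate polynomial becomes a degree-r univariate one, which we find by brute force.

```python
import random
from itertools import product

q, m, r = 7, 2, 2          # GF(7), bivariate, degree-2 polynomials

def poly_eval(coeffs, t):
    """Evaluate a univariate polynomial (low-order coefficients first) at t mod q."""
    return sum(c * pow(t, i, q) for i, c in enumerate(coeffs)) % q

def decode_at(f, x, rnd=random.Random(1)):
    """Locally decode the Reed-Muller codeword near f at the point x:
    restrict f to a random line through x, brute-force the closest
    univariate degree-r polynomial, and return its value at x."""
    while True:
        d = (rnd.randrange(q), rnd.randrange(q))
        if d != (0, 0):
            break
    line = [((x[0] + t * d[0]) % q, (x[1] + t * d[1]) % q) for t in range(q)]
    values = [f(pt) for pt in line]                  # q queries along the line
    best, best_dist = None, q + 1
    for coeffs in product(range(q), repeat=r + 1):   # all q^(r+1) candidates
        dist = sum(poly_eval(coeffs, t) != v for t, v in enumerate(values))
        if dist < best_dist:
            best, best_dist = coeffs, dist
    return poly_eval(best, 0)                        # the line hits x at t = 0

# Demo: f agrees with p(u,v) = u^2 + 3uv + 5 except at two corrupted points;
# decoding returns the codeword's value even where f itself is corrupted.
p = lambda u, v: (u * u + 3 * u * v + 5) % q
corrupted = {(0, 0), (3, 4)}
f = lambda pt: (p(*pt) + 1) % q if pt in corrupted else p(*pt)
print([decode_at(f, x) for x in [(1, 1), (2, 5), (0, 0)]])   # -> [2, 4, 5]
```

With at most 2 corruptions on a 7-point line and distance q − r = 5, the closest degree-2 polynomial is unique, so the brute-force step always recovers the true restriction; the list-decoding version of [STV, GKZ] uses the values at y_1, …, y_t to pick among several nearby restrictions instead.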

Part IV: Current Directions

Many Interpretations of GL
List-decoder for group homomorphisms [Dinur-Grigorescu-Kopparty-S.]: the set of homomorphisms from G to H forms an error-correcting code. Decode up to the minimum distance?
List-decoder for sparse high-distance linear codes [Kopparty-Saraf].
List-decoder for Reed-Muller codes [Gopalan-Klivans-Zuckerman].

Approximate List-Decoding
Given r, approximately compute a w in C that is somewhat close to r. This is an easier problem, so it should be solvable for a broader class of codes C (C need not have good distance).
[O'Donnell, Trevisan, IJK]: If the encoder for C is monotone and local, then we get hardness amplification for NP.
[IJK]: Gives an approximate LLD for the truncated Hadamard code.

Conclusions
The intersection of locality and list-decoding is interesting and challenging. Ought to be explored more?

Thank You!

Decoding Hadamard Code [GL/KM]
The value of the node labelled p is Σ_a f̂_a², summing over all a with prefix p.
Walk down from the root; stop if some node has value < ε².
Value(p) is easily estimated.