Sparse Random Linear Codes are Locally Decodable and Testable Tali Kaufman (MIT) Joint work with Madhu Sudan (MIT)

Error-Correcting Codes A code C ⊆ {0,1}^n is a collection of vectors (codewords) of length n. Linear code: the codewords form a linear subspace. Codeword weight: for c ∈ C, w(c) is the number of non-zero coordinates of c. C is n^t-sparse if |C| = n^t; n^{-γ}-biased if n/2 − n^{1−γ} ≤ w(c) ≤ n/2 + n^{1−γ} for every nonzero c ∈ C; of distance d if w(c) ≥ d for every nonzero c ∈ C.

Local Testing / Correcting / Decoding Given C ⊆ {0,1}^n and a vector v, make k queries into v: k-local testing: decide whether v is in C or far from every c ∈ C. k-local correcting: if v is close to c ∈ C, recover c(i) w.h.p. k-local decoding: if v is close to c ∈ C and c encodes a message m, recover m(i) w.h.p. [Here C = {E(m) | m ∈ {0,1}^s}, E: {0,1}^s → {0,1}^n, s < n.] Example: Hadamard code (linear functions). For a ∈ {0,1}^{log n}, f(x) = ∑ a_i x_i. (k=3) testing: f(x) + f(y) + f(x+y) = 0? for random x, y. (k=2) correcting: correct f(x) by f(x+y) + f(y) for a random y. (k=2) decoding: recover a(i) by f(e_i + y) + f(y) for a random y.
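The Hadamard example above can be sketched end-to-end. This is an illustrative reconstruction, not code from the talk; all function and variable names are mine:

```python
import itertools
import random

def hadamard_table(a):
    """Hadamard encoding of message a: the truth table of f(x) = <a, x> mod 2."""
    m = len(a)
    return {x: sum(ai * xi for ai, xi in zip(a, x)) % 2
            for x in itertools.product((0, 1), repeat=m)}

def blr_test(f, m, trials=100, rng=random):
    """3-query BLR test: check f(x) + f(y) + f(x+y) = 0 for random x, y."""
    for _ in range(trials):
        x = tuple(rng.randint(0, 1) for _ in range(m))
        y = tuple(rng.randint(0, 1) for _ in range(m))
        xy = tuple((xi + yi) % 2 for xi, yi in zip(x, y))
        if (f[x] + f[y] + f[xy]) % 2 != 0:
            return False
    return True

def local_correct(f, m, x, rng=random):
    """2-query correction: recover f(x) as f(x+y) + f(y) for a random y."""
    y = tuple(rng.randint(0, 1) for _ in range(m))
    xy = tuple((xi + yi) % 2 for xi, yi in zip(x, y))
    return (f[xy] + f[y]) % 2

def local_decode_bit(f, m, i, rng=random):
    """2-query decoding of message bit a_i: correct f at the unit vector e_i."""
    e_i = tuple(1 if j == i else 0 for j in range(m))
    return local_correct(f, m, e_i, rng)
```

On an uncorrupted codeword the test always accepts and both 2-query procedures always return the correct bit; the point of the talk is that such local procedures exist far beyond this one algebraic example.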

Brief History Local correction: [Blum, Luby, Rubinfeld], in the context of program checking. Local testability: [Blum, Luby, Rubinfeld], [Rubinfeld, Sudan], [Goldreich, Sudan]; the core hardness of PCPs. Local decoding: [Katz, Trevisan], [Yekhanin], in the context of Private Information Retrieval (PIR) schemes. Most previous results (apart from [K, Litsyn]) focus on specific codes and exploit their "nice" algebraic structure. This work: results for general codes, based only on their density and distance.

Our Results Theorem (local correction): For every constant t, γ > 0, if C ⊆ {0,1}^n is n^t-sparse and n^{-γ}-biased, then it is k = k(t, γ) locally correctable. Corollary (local decoding): For every constant t, γ > 0, if E: {0,1}^{t log n} → {0,1}^n is a linear map such that C = {E(m) | m ∈ {0,1}^{t log n}} is n^t-sparse and n^{-γ}-biased, then E is k = k(t, γ) locally decodable. Proof: C_E = {(m, E(m)) | m ∈ {0,1}^{t log n}} is k-locally correctable. Theorem (local testing): For every constant t, γ > 0, if C ⊆ {0,1}^n is n^t-sparse with distance n/2 − n^{1−γ}, then it is k = k(t, γ) locally testable. Recall: C is n^t-sparse if |C| = n^t; n^{-γ}-biased if n/2 − n^{1−γ} ≤ w(c) ≤ n/2 + n^{1−γ} for every nonzero c ∈ C; of distance d if w(c) ≥ d for every nonzero c ∈ C.

Corollaries Reproduces testability of Hadamard and dual-BCH codes. Random codes: a random code C ⊆ {0,1}^n obtained as the linear span of a random (t log n) × n matrix is n^t-sparse and O(log n / √n)-biased, i.e., it is k = Θ(t) locally correctable, locally decodable and locally testable. Cannot get denser random codes: a similar random code obtained from a random (log n)^2 × n matrix does not have such properties. There are linear subspaces of high-degree polynomials that are sparse and unbiased, so we can locally correct, decode and test them. Example: Tr(a x^{2^{log n / 4}+1} + b x^{2^{log n / 8}+1}), a, b ∈ F_{2^{log n}}. Nice closure properties: subcodes, addition of new coordinates, removal of few coordinates.
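The random-code corollary is easy to check empirically on a small instance. A minimal sketch, with names and the concrete (loose) weight bound chosen by me, not taken from the talk:

```python
import random

def random_sparse_code(t, logn, seed=1):
    """Span of a random (t * logn) x n generator matrix over F_2, n = 2**logn.
    The span has at most 2**(t*logn) = n**t codewords, i.e. the code is n^t-sparse."""
    rng = random.Random(seed)
    n = 2 ** logn
    rows = [[rng.randint(0, 1) for _ in range(n)] for _ in range(t * logn)]
    code = []
    for mask in range(2 ** len(rows)):       # all F_2-linear combinations of the rows
        cw = [0] * n
        for i, row in enumerate(rows):
            if (mask >> i) & 1:
                cw = [(a + b) % 2 for a, b in zip(cw, row)]
        code.append(cw)
    return code

# Every nonzero codeword should have weight close to n/2 (low bias).
code = random_sparse_code(t=2, logn=6)   # n = 64, |C| <= 64^2 = 4096
weights = [sum(c) for c in code[1:]]     # skip the zero codeword (mask 0)
```

With t = 2 and log n = 6 the nonzero weights concentrate around n/2 = 32, within roughly O(√(n log n)), matching the O(log n / √n)-bias claim at this toy scale.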

Main Idea Study the weight distribution of the "dual code" and of some related codes. – Weight distribution = ? – Dual code = ? – Which related codes? How? MacWilliams identities + Johnson bounds.

Weight Distribution, Duals Weight distribution: (B_0^C, …, B_n^C), where B_k^C is the number of codewords of weight k in the code C, 0 ≤ k ≤ n. Dual code: C⊥ ⊆ {0,1}^n, the vectors orthogonal to all codewords of C ⊆ {0,1}^n. A vector v ∈ C iff v ⊥ C⊥: for every c′ ∈ C⊥, ⟨v, c′⟩ = 0.
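These definitions can be made concrete by brute force on a tiny code. A sketch; the repetition-code example and helper names are mine:

```python
import itertools

def dual_code(code, n):
    """C-perp: all length-n binary vectors orthogonal (mod 2) to every codeword of C."""
    return [v for v in itertools.product((0, 1), repeat=n)
            if all(sum(vi * ci for vi, ci in zip(v, c)) % 2 == 0 for c in code)]

def weight_distribution(code, n):
    """(B_0, ..., B_n): B_k counts the codewords of weight k."""
    B = [0] * (n + 1)
    for c in code:
        B[sum(c)] += 1
    return B

# Length-4 repetition code: C = {0000, 1111}; its dual is the even-weight code.
C = [(0, 0, 0, 0), (1, 1, 1, 1)]
D = dual_code(C, 4)
```

For a linear code, |C| · |C⊥| = 2^n, which the toy example confirms: 2 · 8 = 2^4.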

Which Related Codes? Local decoding: the same analysis applied to C′ = {(m, E(m))}, where E: {0,1}^s → {0,1}^n, s < n. Local testing: duals of C and of C ∪ v. Local correction: duals of C, C^{−i} and C^{−ij}, where C^{−i} (length n−1) is C with coordinate i removed and C^{−ij} (length n−2) is C with coordinates i and j removed.

Duals of Sparse Unbiased Codes Have Many k-Weight Codewords C is n^t-sparse and n^{-γ}-biased. What is B_k^{C⊥}? MacWilliams transform: B_k^{C⊥} = ∑_i B_i^C P_k(i) / |C|, where P_k is the Krawtchouk polynomial. [Plot of P_k: P_k(0) = C(n, k) ≈ n^k, |P_k(i)| < (n−2i)^k, with oscillations of size about ±n^{k/2} for n/2 − √(kn) ≤ i ≤ n/2 + √(kn).] Since every nonzero codeword of C has weight in [n/2 − n^{1−γ}, n/2 + n^{1−γ}]: B_k^{C⊥} ≤ [P_k(0) + n^{(1−γ)k} · n^t] / |C|. If k ≥ Ω(t/γ), then B_k^{C⊥} ≈ P_k(0) / |C|.
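The MacWilliams transform on this slide can be computed directly for small codes. A sketch, with helper names mine:

```python
from math import comb

def krawtchouk(n, k, i):
    """Binary Krawtchouk polynomial P_k(i) = sum_j (-1)^j C(i, j) C(n-i, k-j).
    Note P_k(0) = C(n, k), as used on the slide."""
    return sum((-1) ** j * comb(i, j) * comb(n - i, k - j) for j in range(k + 1))

def dual_weight_distribution(B, n):
    """MacWilliams transform: B_k^{C-perp} = (1/|C|) * sum_i B_i^C * P_k(i)."""
    size = sum(B)  # |C| = total number of codewords
    return [sum(B[i] * krawtchouk(n, k, i) for i in range(n + 1)) // size
            for k in range(n + 1)]
```

For the length-4 repetition code (B = [1, 0, 0, 0, 1]) this recovers the even-weight code's distribution [1, 0, 6, 0, 1] without ever enumerating the dual.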

Canonical k-Tester Goal: decide whether v is in C or far from every c ∈ C. Tester: pick a random c′ ∈ [C⊥]_k; if ⟨v, c′⟩ = 0 accept, else reject. Total number of possible tests: |[C⊥]_k| = B_k^{C⊥}. For v ∉ C, bad tests (those that still accept): |[(C ∪ v)⊥]_k| = B_k^{[C∪v]⊥}. Works if the number of bad tests is bounded.
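The canonical tester can be sketched on a toy code. An illustrative implementation, assuming the dual words are enumerated by brute force; the toy code and names are mine:

```python
import itertools
import random

def weight_k_dual_words(code, n, k):
    """[C-perp]_k: dual codewords of Hamming weight exactly k."""
    return [v for v in itertools.product((0, 1), repeat=n)
            if sum(v) == k
            and all(sum(a * b for a, b in zip(v, c)) % 2 == 0 for c in code)]

def canonical_test(v, tests, trials=64, rng=None):
    """Pick a random c' in [C-perp]_k, repeatedly; accept iff <v, c'> = 0 every time."""
    rng = rng or random.Random(0)
    for _ in range(trials):
        c = rng.choice(tests)
        if sum(a * b for a, b in zip(v, c)) % 2 != 0:
            return False
    return True

# Toy instance: the length-4 repetition code, with weight-2 dual tests.
C = [(0, 0, 0, 0), (1, 1, 1, 1)]
tests = weight_k_dual_words(C, 4, 2)  # all six weight-2 vectors
```

Here every weight-2 vector is a valid test, codewords always pass, and a word far from C fails a constant fraction of the tests, which is exactly the "bounded number of bad tests" condition.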

Proof of Local Testing Theorem C is n^t-sparse and n^{-γ}-biased. [Plot of the Krawtchouk polynomial P_k as before: P_k(0) = C(n, k), |P_k(i)| < (n−2i)^k, oscillations of size about ±n^{k/2} for n/2 − √(kn) ≤ i ≤ n/2 + √(kn).] Reduces to showing a gap: for v at distance δ from C, B_k^{[C∪v]⊥} ≤ (1 − ε) B_k^{C⊥}, where C ∪ v = C ∪ (C + v). Using MacWilliams and the estimate for B_k^{C⊥}, this amounts to (1/2) ∑_i B_i^{C∪v} P_k(i) ≤ (1 − ε) P_k(0). Good terms (codewords of C, weight in n/2 ± n^{1−γ}): small loss. Bad terms (codewords of C + v, weight possibly near δn): bounded gain, controlled by the Johnson bound.

Canonical k-Corrector Goal: given v that is δ-close to c ∈ C, recover c(i) w.h.p. Corrector: pick a random c′ ∈ [C⊥]_{k,i}, the k-weight dual words with a 1 in the i-th coordinate. Return ∑_{s ∈ 1_{c′} − {i}} v_s, where 1_{c′} = {j | c′_j = 1}. A random location in v is corrupted w.p. δ. If for every i, every other coordinate j that the corrector considers is "random", then the probability of error is < δk.
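The corrector can likewise be sketched on the same toy code (names and example mine): on an uncorrupted word every test c′ yields the right bit, and a single corrupted coordinate fools only the tests whose support contains it, giving the < δk error bound:

```python
import itertools
import random

def weight_k_dual_words_through(code, n, k, i):
    """[C-perp]_{k,i}: weight-k dual codewords with a 1 in coordinate i."""
    return [v for v in itertools.product((0, 1), repeat=n)
            if sum(v) == k and v[i] == 1
            and all(sum(a * b for a, b in zip(v, c)) % 2 == 0 for c in code)]

def correct_bit(v, i, tests_i, rng=None):
    """Pick a random c' in [C-perp]_{k,i}; return the parity of v on the
    support of c' minus {i} -- the corrector's guess for c(i)."""
    rng = rng or random.Random(0)
    c = rng.choice(tests_i)
    return sum(v[j] for j in range(len(v)) if c[j] == 1 and j != i) % 2

# Toy instance: length-4 repetition code, k = 2, coordinate i = 0.
C = [(0, 0, 0, 0), (1, 1, 1, 1)]
tests_0 = weight_k_dual_words_through(C, 4, 2, 0)  # (1100), (1010), (1001)
```

With one corrupted coordinate, exactly one of the three tests for i = 0 gives the wrong answer, so a random test errs with probability 1/3 ≈ δk for δ = 1/4, k = 2.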

Proof of Self-Correction Theorem Reduces to showing a 2-wise independence property in [C⊥]_k: for every i, j, |[C⊥]_{k,i,j}| / |[C⊥]_{k,i}| ≈ k/n (as if the code were random), where [C⊥]_{k,i} ([C⊥]_{k,i,j}) denotes the k-weight codewords of C⊥ with a 1 in coordinate i (in coordinates i and j). By inclusion-exclusion over the punctured codes C^{−i} (length n−1) and C^{−ij} (length n−2): |[C⊥]_{k,i}| = |[C⊥]_k| − |[(C^{−i})⊥]_k| and |[C⊥]_{k,i,j}| = |[C⊥]_k| − |[(C^{−i})⊥]_k| − |[(C^{−j})⊥]_k| + |[(C^{−ij})⊥]_k|. All the codes involved are sparse and unbiased.

Open Issues Local correction based on distance. Obtain general k-local correction, local decoding and local testing results for denser codes. Which denser codes?

Thank You!!!