Sparse Random Linear Codes are Locally Decodable and Testable
Tali Kaufman (MIT). Joint work with Madhu Sudan (MIT).




1 Sparse Random Linear Codes are Locally Decodable and Testable Tali Kaufman (MIT) Joint work with Madhu Sudan (MIT)

2 Error-Correcting Codes
A code C ⊆ {0,1}^n is a collection of vectors (codewords) of length n.
Linear code: the codewords form a linear subspace.
Codeword weight: for c ∈ C, w(c) is the number of non-zero coordinates of c.
C is n^t-sparse if |C| = n^t.
C is n^{-γ}-biased if n/2 − n^{1−γ} ≤ w(c) ≤ n/2 + n^{1−γ} for every nonzero c ∈ C.
C has distance d if w(c) ≥ d for every nonzero c ∈ C.

3 Local Testing / Correcting / Decoding
Given C ⊆ {0,1}^n and a vector v, make k queries into v:
k-local testing: decide whether v is in C or far from every c ∈ C.
k-local correcting: if v is close to some c ∈ C, recover c(i) w.h.p.
k-local decoding: if v is close to some c ∈ C and c encodes a message m, recover m(i) w.h.p. [C = {E(m) | m ∈ {0,1}^s}, E: {0,1}^s → {0,1}^n, s < n.]
Example: the Hadamard code (linear functions). For a ∈ {0,1}^{log n}, f(x) = Σ_i a_i x_i.
(k=3) testing: check f(x) + f(y) + f(x+y) = 0 for random x, y.
(k=2) correcting: correct f(x) by f(x+y) + f(y) for a random y.
(k=2) decoding: recover a(i) by f(e_i + y) + f(y) for a random y.
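The Hadamard example above can be sketched in code. This is a toy illustration only; the helper names and parameter choices (m = 4, the message 0b1011, the number of correction rounds) are mine, not from the talk:

```python
import random

def hadamard_codeword(a, m):
    # Truth table of the linear function f(x) = sum_i a_i x_i (mod 2) over {0,1}^m.
    return [bin(a & x).count("1") % 2 for x in range(2 ** m)]

def blr_test(f, m):
    # 3-query test: accept iff f(x) + f(y) + f(x+y) = 0 for random x, y.
    x, y = random.randrange(2 ** m), random.randrange(2 ** m)
    return (f[x] + f[y] + f[x ^ y]) % 2 == 0

def local_correct(f, x, m):
    # 2-query correction: f(x) ~ f(x+y) + f(y) for a random y.
    y = random.randrange(2 ** m)
    return (f[x ^ y] + f[y]) % 2

m = 4
f = hadamard_codeword(a=0b1011, m=m)
f_corrupt = list(f)
f_corrupt[3] ^= 1                       # flip one coordinate
votes = [local_correct(f_corrupt, 3, m) for _ in range(51)]
majority = sorted(votes)[len(votes) // 2]
print(majority == f[3])                 # w.h.p. the corrupted value is recovered
```

Each correction round is wrong only when the random y hits one of the two readings of the corrupted position, so a majority over 51 rounds recovers f(3) with overwhelming probability.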

4 Brief History
Local correction: [Blum, Luby, Rubinfeld], in the context of program checking.
Local testability: [Blum, Luby, Rubinfeld], [Rubinfeld, Sudan], [Goldreich, Sudan]; the core hardness of PCPs.
Local decoding: [Katz, Trevisan], [Yekhanin], in the context of Private Information Retrieval (PIR) schemes.
Most previous results (apart from [K, Litsyn]) focus on specific codes and exploit their "nice" algebraic structure. This work: results for general codes, based only on their density and distance.

5 Our Results
Theorem (local correction): For all constants t, γ > 0, if C ⊆ {0,1}^n is n^t-sparse and n^{-γ}-biased, then C is k = k(t, γ) locally correctable.
Corollary (local decoding): For all constants t, γ > 0, if E: {0,1}^{t log n} → {0,1}^n is a linear map such that C = {E(m) | m ∈ {0,1}^{t log n}} is n^t-sparse and n^{-γ}-biased, then E is k = k(t, γ) locally decodable.
Proof: C_E = {(m, E(m)) | m ∈ {0,1}^{t log n}} is k-locally correctable.
Theorem (local testing): For all constants t, γ > 0, if C ⊆ {0,1}^n is n^t-sparse with distance n/2 − n^{1−γ}, then C is k = k(t, γ) locally testable.
Recall: C is n^t-sparse if |C| = n^t; n^{-γ}-biased if n/2 − n^{1−γ} ≤ w(c) ≤ n/2 + n^{1−γ} for every nonzero c ∈ C; of distance d if w(c) ≥ d for every nonzero c ∈ C.

6 Corollaries
Reproduces the testability of the Hadamard and dual-BCH codes.
Random codes: a random code C ⊆ {0,1}^n obtained as the linear span of the rows of a random (t log n) × n matrix is n^t-sparse and O(log n / √n)-biased, i.e. it is k = O(t) locally correctable, locally decodable and locally testable.
Cannot get denser random codes: a similar random code obtained from a random (log n)^2 × n matrix does not have these properties.
There are linear subspaces of high-degree polynomials that are sparse and unbiased, so we can locally correct, decode and test them. Example: Tr(a x^{2^{log n / 4} + 1} + b x^{2^{log n / 8} + 1}) for a, b ∈ F_{2^{log n}}.
Nice closure properties: subcodes, addition of new coordinates, removal of a few coordinates.
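The random-code corollary can be checked empirically on a small instance. This is a minimal sketch; the parameters (n = 256, t = 1, the fixed seed) and helper name are my choices:

```python
import itertools, random

def random_linear_code(rows, n, seed=0):
    # All 2^rows GF(2) combinations of `rows` random generator vectors of length n.
    rng = random.Random(seed)
    gens = [[rng.randrange(2) for _ in range(n)] for _ in range(rows)]
    code = []
    for coeffs in itertools.product((0, 1), repeat=rows):
        cw = [0] * n
        for take, g in zip(coeffs, gens):
            if take:
                cw = [(a + b) % 2 for a, b in zip(cw, g)]
        code.append(tuple(cw))
    return code

n = 256
code = random_linear_code(rows=8, n=n)      # t = 1: |C| = 2^{t log n} = 256
weights = [sum(c) for c in code if any(c)]
print(min(weights), max(weights))           # weights concentrate around n/2 = 128
```

Every nonzero combination of random generators is itself a uniformly random vector, so each nonzero codeword has weight Binomial(n, 1/2), concentrated in n/2 ± O(√(n log n)), matching the O(log n / √n) bias claim.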

7 Main Idea
Study the weight distribution of the "dual code" and of some related codes.
– Weight distribution = ?
– Dual code = ?
– Which related codes?
How? MacWilliams identities + Johnson bounds.

8 Weight Distribution, Duals
Weight distribution: (B_0^C, …, B_n^C), where B_k^C is the number of codewords of weight k in the code C, 0 ≤ k ≤ n.
Dual code: C^⊥ ⊆ {0,1}^n is the set of vectors orthogonal to all codewords of C ⊆ {0,1}^n.
v ∈ C iff v ⊥ C^⊥, i.e. <v, c'> = 0 for every c' ∈ C^⊥.
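Both definitions are easy to make concrete by brute force on a small code. A sketch; the [4,2] example code is mine, not from the talk:

```python
import itertools

def weight_distribution(code, n):
    # (B_0, ..., B_n): B_k counts codewords of weight k.
    B = [0] * (n + 1)
    for c in code:
        B[sum(c)] += 1
    return B

def dual_code(code, n):
    # C^perp: all vectors orthogonal (mod 2) to every codeword of C.
    return [v for v in itertools.product((0, 1), repeat=n)
            if all(sum(a * b for a, b in zip(v, c)) % 2 == 0 for c in code)]

C = [(0, 0, 0, 0), (1, 1, 0, 0), (0, 0, 1, 1), (1, 1, 1, 1)]
Cd = dual_code(C, 4)
print(weight_distribution(Cd, 4))   # this particular code is self-dual: [1, 0, 2, 0, 1]
assert len(C) * len(Cd) == 2 ** 4   # |C| * |C^perp| = 2^n for linear codes
```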

9 Which Related Codes?
Local testing: the duals of C and of C ∪ (C + v).
Local correction: the duals of C, C^{-i} and C^{-ij}, where C^{-i} (length n−1) and C^{-ij} (length n−2) are C with coordinate i, resp. coordinates i and j, removed.
Local decoding: the same, applied to C' = {(m, E(m))}, E: {0,1}^s → {0,1}^n, s < n.

10 Duals of Sparse Unbiased Codes Have Many Weight-k Codewords
C is n^t-sparse and n^{-γ}-biased. What is B_k^{C^⊥}?
MacWilliams transform: B_k^{C^⊥} = Σ_i B_i^C · P_k(i) / |C|, where P_k is the k-th Krawtchouk polynomial.
P_k(0) = C(n, k) ≈ n^k, and |P_k(i)| < (n − 2i)^k; in the range i ∈ [n/2 − √(kn), n/2 + √(kn)] the polynomial oscillates with values of magnitude ≈ n^{k/2}.
Since every nonzero c ∈ C has n/2 − n^{1−γ} ≤ w(c) ≤ n/2 + n^{1−γ}, each such term contributes at most n^{(1−γ)k}, so B_k^{C^⊥} ≤ [P_k(0) + n^{(1−γ)k} · n^t] / |C|.
If k ≥ Ω(t/γ), then B_k^{C^⊥} ≈ P_k(0) / |C|.
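The MacWilliams transform holds exactly and can be verified directly on a toy code. A sketch; the length-6 example code is mine, not from the talk:

```python
import itertools
from math import comb

def krawtchouk(k, i, n):
    # k-th Krawtchouk polynomial: P_k(i) = sum_j (-1)^j C(i,j) C(n-i,k-j).
    return sum((-1) ** j * comb(i, j) * comb(n - i, k - j) for j in range(k + 1))

def weight_enum(code, n):
    B = [0] * (n + 1)
    for c in code:
        B[sum(c)] += 1
    return B

n = 6
code = [(0,0,0,0,0,0), (1,1,1,0,0,0), (0,0,1,1,1,0), (1,1,0,1,1,0)]
dual = [v for v in itertools.product((0, 1), repeat=n)
        if all(sum(a * b for a, b in zip(v, c)) % 2 == 0 for c in code)]
B, Bd = weight_enum(code, n), weight_enum(dual, n)
# MacWilliams: B_k^{C^perp} = (1/|C|) sum_i B_i^C P_k(i), exactly, for every k.
for k in range(n + 1):
    assert Bd[k] * len(code) == sum(B[i] * krawtchouk(k, i, n) for i in range(n + 1))
print("MacWilliams identity verified")
```

Note that krawtchouk(k, 0, n) collapses to C(n, k), matching the slide's P_k(0) ≈ n^k.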

11 Canonical k-Tester
Goal: decide whether v is in C or far from every c ∈ C.
Tester: pick a random c' ∈ [C^⊥]_k (the weight-k codewords of C^⊥); if <v, c'> = 0 accept, else reject.
Total number of possible tests: |[C^⊥]_k| = B_k^{C^⊥}.
For v ∉ C, the bad tests (those that wrongly accept v) number |[(C ∪ (C + v))^⊥]_k| = B_k^{(C ∪ (C+v))^⊥}.
The tester works if the number of bad tests is bounded.
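The tester's mechanics (though not its soundness, which needs the bound on bad tests) can be sketched on a toy code. The length-6 code and all parameters here are my own choices:

```python
import itertools, random

def brute_dual(code, n):
    # All vectors orthogonal (mod 2) to every codeword of C.
    return [v for v in itertools.product((0, 1), repeat=n)
            if all(sum(a * b for a, b in zip(v, c)) % 2 == 0 for c in code)]

def canonical_test(v, tests, rounds, rng):
    # Accept iff every sampled weight-k dual word is orthogonal to v.
    return all(sum(a * b for a, b in zip(v, rng.choice(tests))) % 2 == 0
               for _ in range(rounds))

n, k = 6, 2
code = [(0,0,0,0,0,0), (1,1,1,0,0,0), (0,0,1,1,1,0), (1,1,0,1,1,0)]
tests = [c for c in brute_dual(code, n) if sum(c) == k]   # [C^perp]_k
rng = random.Random(1)
print(canonical_test(code[1], tests, rounds=20, rng=rng))             # codeword: always accepted
print(canonical_test((1, 0, 0, 0, 0, 1), tests, rounds=20, rng=rng))  # non-codeword: rejected w.h.p.
```

A codeword passes every test by definition of the dual; a word violating some weight-k dual constraint is caught as soon as that constraint is sampled.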

12 Proof of Local Testing Theorem (unbiased case)
C is n^t-sparse and n^{-γ}-biased. Write C_v = C ∪ (C + v).
Reduces to showing a gap: for v at distance δn from C, B_k^{C_v^⊥} ≤ (1 − δ) B_k^{C^⊥}.
Using MacWilliams and the estimate for B_k^{C^⊥}, this amounts to ½ Σ_i B_i^{C_v} P_k(i) ≤ (1 − δ) P_k(0).
Good terms (codewords of C, weights in [n/2 − n^{1−γ}, n/2 + n^{1−γ}]): a δ loss. Bad terms (words of C + v): a δ gain, controlled by the Johnson bound together with |P_k(i)| < (n − 2i)^k and P_k(0) ≈ n^k.

13 Canonical k-Corrector
Goal: given v that is δ-close to some c ∈ C, recover c(i) w.h.p.
Corrector: pick a random c' ∈ [C^⊥]_{k,i}, the weight-k dual codewords with a 1 in the i-th coordinate; writing supp(c') = {j | c'_j = 1}, return Σ_{s ∈ supp(c') − {i}} v_s.
A random location of v is corrupted w.p. δ. If for every i, every other coordinate j that the corrector considers is "random", then the probability of error is < δk.
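A deterministic variant of this corrector (taking a majority over all of [C^⊥]_{k,i} instead of sampling one random test, a simplification of mine) can be sketched for the length-8 Hadamard code; the corrupted coordinate and query position are also my choices:

```python
import itertools

def hadamard_code(m):
    # Truth tables of all linear functions f_a(x) = <a, x> over {0,1}^m.
    n = 2 ** m
    return [tuple(bin(a & x).count("1") % 2 for x in range(n)) for a in range(n)]

def corrector_tests(code, n, k, i):
    # [C^perp]_{k,i}: weight-k dual codewords with a 1 in coordinate i.
    dual = (v for v in itertools.product((0, 1), repeat=n)
            if all(sum(a * b for a, b in zip(v, c)) % 2 == 0 for c in code))
    return [v for v in dual if sum(v) == k and v[i] == 1]

def correct(v, tests, i):
    # Each test votes sum_{s in supp(c') - {i}} v_s; return the majority vote.
    votes = sorted(sum(v[j] for j in range(len(v)) if c[j] and j != i) % 2
                   for c in tests)
    return votes[len(votes) // 2]

code = hadamard_code(3)                 # length n = 8
c = code[5]
v = list(c); v[6] ^= 1                  # corrupt one coordinate
tests = corrector_tests(code, 8, k=3, i=3)
print(correct(v, tests, i=3) == c[3])   # only 1 of the 3 tests touches v[6], so majority wins
```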

14 Proof of Self-Correction Theorem
Reduces to showing a 2-wise independence property in [C^⊥]_k: for every i, j,
|[C^⊥]_{k,i,j}| / |[C^⊥]_{k,i}| ≈ k/n (as if the code were random).
Here [C^⊥]_{k,i} (resp. [C^⊥]_{k,i,j}) denotes the weight-k codewords of C^⊥ with a 1 in coordinate i (resp. in coordinates i and j).
By inclusion-exclusion over the punctured codes C^{-i} (length n−1) and C^{-ij} (length n−2):
|[C^⊥]_{k,i}| = |[C^⊥]_k| − |[(C^{-i})^⊥]_k|
|[C^⊥]_{k,i,j}| = |[C^⊥]_k| − |[(C^{-i})^⊥]_k| − |[(C^{-j})^⊥]_k| + |[(C^{-ij})^⊥]_k|
All the codes involved are sparse and unbiased.
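The first counting identity can be checked by brute force on a toy code (weight-k dual words of C with a 1 in coordinate i are exactly those not arising from the punctured code). A sketch; the length-6 code and parameters are mine:

```python
import itertools

def dual_weight_count(code, n, k, coord=None):
    # Number of weight-k vectors orthogonal (mod 2) to all of C;
    # if coord is given, additionally require a 1 in that coordinate.
    return sum(1 for v in itertools.product((0, 1), repeat=n)
               if sum(v) == k and (coord is None or v[coord] == 1)
               and all(sum(a * b for a, b in zip(v, c)) % 2 == 0 for c in code))

n, k, i = 6, 2, 0
code = [(0,0,0,0,0,0), (1,1,1,0,0,0), (0,0,1,1,1,0), (1,1,0,1,1,0)]
punctured = list({c[:i] + c[i+1:] for c in code})    # C^{-i}: coordinate i deleted
lhs = dual_weight_count(code, n, k, coord=i)                          # |[C^perp]_{k,i}|
rhs = dual_weight_count(code, n, k) - dual_weight_count(punctured, n - 1, k)
print(lhs == rhs)
```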

15 Open Issues
Local correction based on distance (rather than bias).
Obtain general k-local correction, local decoding and local testing results for denser codes. Which denser codes?

16 Thank You!!!

