
Slide 1: Local optimality in Tanner codes. Guy Even, Nissim Halabi. June 2011.

Slide 2: Error-Correcting Codes for Memoryless Binary-Input Output-Symmetric (MBIOS) Channels (1)
- Pipeline: channel encoding produces a codeword, the noisy channel corrupts it, and decoding recovers it from the noisy codeword.
- Binary input, but the output may be real-valued.
- Stochastic: characterized by the conditional probability function P(y | c).
- Memoryless: errors are independent from bit to bit.
- Output-symmetric: errors affect '0's and '1's symmetrically.
- Example: the Binary Symmetric Channel (BSC), which flips each input bit c_i to the output bit y_i with crossover probability p, and transmits it intact with probability 1 - p.

Slide 3: Error-Correcting Codes for MBIOS Channels (2)
- Log-likelihood ratio (LLR) λ_i for a received observation y_i: λ_i = log( P(y_i | c_i = 0) / P(y_i | c_i = 1) ).
- λ_i > 0 means y_i is more likely to be '0'; λ_i < 0 means y_i is more likely to be '1'.
- The mapping y → λ is injective, so the decoder may replace the received word y by the LLR vector λ.
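To make the LLR mapping concrete, here is a minimal Python sketch for the BSC example of the previous slide; the function name and the crossover probability 0.1 are illustrative choices, not from the slides.

```python
import math

def bsc_llr(y: int, p: float) -> float:
    """LLR of a single BSC output bit: log P(y | c=0) / P(y | c=1).

    For a BSC with crossover probability p, receiving y=0 gives
    log((1-p)/p) > 0 (bit '0' more likely); y=1 gives log(p/(1-p)) < 0.
    """
    if y == 0:
        return math.log((1 - p) / p)
    return math.log(p / (1 - p))

# Example: p = 0.1, received word 0 1 0 0
llr = [bsc_llr(y, 0.1) for y in [0, 1, 0, 0]]  # [+2.197, -2.197, +2.197, +2.197]
```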

Slide 4: Maximum-Likelihood (ML) Decoding
- For any binary-input memoryless channel, ML decoding can be formulated as x_ML = argmin { <λ, x> : x ∈ C }, where C ⊆ {0,1}^N,
- or as a linear program: x_ML = argmin { <λ, x> : x ∈ conv(C) }, where conv(C) ⊆ [0,1]^N.
- Problem: conv(C) has no efficient representation in general.
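The argmin formulation can be made concrete with a brute-force sketch that scans all of {0,1}^N; this only illustrates the objective <λ, x> and is infeasible beyond toy lengths. Representing C by a parity-check matrix here is our illustrative assumption.

```python
from itertools import product

def ml_decode(H, llr):
    """Brute-force ML decoding: argmin over codewords x of <llr, x>.

    H is a parity-check matrix over GF(2), given as a list of rows;
    scanning all of {0,1}^N is only feasible for toy lengths and is
    shown here just to make the argmin objective concrete.
    """
    n = len(llr)
    best, best_cost = None, float("inf")
    for x in product([0, 1], repeat=n):
        if all(sum(hij * xj for hij, xj in zip(row, x)) % 2 == 0 for row in H):
            cost = sum(li * xi for li, xi in zip(llr, x))
            if cost < best_cost:
                best, best_cost = x, cost
    return best

# Toy run: the code with checks x0+x1+x2 = 0 and x1+x2+x3 = 0 (mod 2)
print(ml_decode([[1, 1, 1, 0], [0, 1, 1, 1]], [2.2, -0.5, 2.2, 2.2]))  # (0, 0, 0, 0)
```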

Slide 5: Linear Programming (LP) Decoding
- LP decoding [Fel03, FWK05] replaces conv(C) by a relaxed polytope P ⊆ [0,1]^N satisfying: (1) all codewords x ∈ C are vertices of P; (2) all new vertices are fractional (therefore the new vertices are not in C); (3) P has an efficient representation.
- The decoder solves the LP: minimize <λ, x> subject to x ∈ P.

Slide 6: Linear Programming (LP) Decoding (cont.)
- If the LP optimum is attained at an integral vertex, the LP decoder finds the ML codeword.

Slide 7: Linear Programming (LP) Decoding (cont.)
- If the LP optimum is attained at a fractional vertex, the LP decoder fails.
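For single-parity-check constraints, Feldman's polytope P has an explicit inequality description (one inequality per odd-sized subset of each check's neighborhood), so a toy LP decoder can be sketched with an off-the-shelf LP solver. This is an illustrative sketch assuming scipy is available; the function names and the toy code are ours, not the slides'.

```python
from itertools import combinations

import numpy as np
from scipy.optimize import linprog

def lp_decode(checks, llr):
    """Toy LP decoder over Feldman's relaxed polytope P.

    checks: one list of variable indices per single-parity-check
    constraint. For each check N(j) and each odd-sized subset S of N(j),
    P contains the inequality
        sum_{i in S} x_i - sum_{i in N(j) \\ S} x_i <= |S| - 1,
    together with the box constraints 0 <= x_i <= 1.
    The decoder minimizes <llr, x> over P.
    """
    n = len(llr)
    A, b = [], []
    for nj in checks:
        for k in range(1, len(nj) + 1, 2):          # odd subset sizes
            for S in combinations(nj, k):
                row = np.zeros(n)
                row[list(S)] = 1.0
                row[[i for i in nj if i not in S]] = -1.0
                A.append(row)
                b.append(len(S) - 1)
    res = linprog(llr, A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(0.0, 1.0)] * n)
    return res.x  # integral optimum <=> LP decoding succeeds (ML codeword)

# Toy run: checks {0,1,2} and {1,2,3}, one mildly unreliable bit.
# The optimum is the all-zeros codeword (an integral vertex).
print(lp_decode([[0, 1, 2], [1, 2, 3]], [2.2, -0.5, 2.2, 2.2]))
```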

Slide 8: Tanner Codes [Tan81]
- Factor-graph representation: a bipartite graph G = (I ∪ J, E) with variable nodes I and local-code nodes J.
- Every local-code node C_j is associated with a linear code of length deg_G(C_j).
- The Tanner code C(G) consists of the words x whose projection onto the neighborhood of each local-code node belongs to that local code.
- Extended local code of C_j, a subset of {0,1}^N: the local code extended arbitrarily to the bits outside the local code.
- Example: expander codes [SS'96], where the Tanner graph is an expander and a simple bit-flipping algorithm decodes.
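A minimal sketch of the Tanner-code membership rule stated above: a word is in C(G) iff each local-code node sees a local codeword on its neighborhood. The representation, names, and the toy parity local code are our illustrative assumptions.

```python
def in_tanner_code(x, local_checks):
    """Membership test for a Tanner code C(G).

    local_checks: list of (neighborhood, local_code) pairs, where
    neighborhood is the ordered list of variable nodes adjacent to a
    local-code node and local_code is the set of allowed sub-words.
    x is a codeword iff every local projection lies in its local code.
    """
    return all(tuple(x[i] for i in nbrs) in code for nbrs, code in local_checks)

# Toy example: two local codes, each the length-3 even-weight (parity) code
parity3 = {(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)}
print(in_tanner_code([0, 1, 1, 0],
                     [([0, 1, 2], parity3), ([1, 2, 3], parity3)]))  # True
```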

Slide 9: LP Decoding of Tanner Codes
- ML decoding: x_ML = argmin { <λ, x> : x ∈ conv(C(G)) }.
- LP decoding [following Fel03, FWK05]: x_LP = argmin { <λ, x> : x ∈ P }, where P is the intersection of the convex hulls of the extended local codes (conv(extended local code 1) ∩ conv(extended local code 2) ∩ ...), and P satisfies the relaxation requirements.

Slide 10: Why Study (Irregular) Tanner Codes?
- Tighter LP relaxation, because the local codes are not just parity codes.
- Irregularity of the Tanner graph yields codes with better performance in the case of LDPC codes [Luby et al. '98].
- A regular low-density Tanner code with non-trivial local codes induces an irregular LDPC code.
- Irregular Tanner codes with non-trivial local codes might therefore yield improved designs.

Slide 11: Criteria of Interest
- Let λ ∈ R^N denote an LLR vector received from the channel, and let x ∈ C(G) denote a codeword.
- Question: is x the optimal (ML) codeword for λ? Deciding this exactly is NP-hard.
- We therefore seek an efficient test with one-sided error ("definitely yes" / "maybe no"), e.g. an efficient test via local computations: a "local optimality" criterion.

Slide 12: Combinatorial Characterization of Local Optimality (1)
- Let x ∈ C(G) ⊆ {0,1}^N, f ∈ [0,1]^N, and λ ∈ R^N.
- [Fel03] Define the relative point x ⊕ f by (x ⊕ f)_i = |x_i - f_i|.
- Consider a finite set B ⊆ [0,1]^N of deviations.
- Definition: a codeword x ∈ C is locally optimal for λ ∈ R^N if for all vectors b ∈ B, <λ, x ⊕ b> > <λ, x>.
- Goal: find a set B of deviations such that: (1) x ∈ LO(λ) ⟹ x ∈ ML(λ) and ML(λ) is unique; (2) x ∈ LO(λ) ⟹ x ∈ LP(λ), with the LP solution integral and unique; (3) Pr{x ∈ LO(λ)} = 1 - o(1).
- By symmetry, the analysis may assume the all-zeros codeword was transmitted.
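The definition transcribes directly into a brute-force test, usable only when the deviation set B is small and listed explicitly; the helper names below are ours.

```python
def relative_point(x, f):
    """Feldman's relative point: (x (+) f)_i = |x_i - f_i|."""
    return [abs(xi - fi) for xi, fi in zip(x, f)]

def is_locally_optimal(x, llr, deviations):
    """Brute-force transcription of the local-optimality definition:
    x is locally optimal for llr w.r.t. the deviation set B iff
    <llr, x (+) b> > <llr, x> for every b in B.
    """
    cost = lambda z: sum(li * zi for li, zi in zip(llr, z))
    return all(cost(relative_point(x, b)) > cost(x) for b in deviations)
```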

Slide 13: Combinatorial Characterization of Local Optimality (2)
- Goal: find a set B with properties (1) local optimality implies unique ML, (2) local optimality implies unique LP, and (3) Pr{x ∈ LO(λ)} = 1 - o(1).
- Suppose we have properties (1) and (2). A large support(b) helps establish property (3) (e.g., via Chernoff-like bounds).
- If B = C, then x ∈ LO(λ) ⟹ x ∈ ML(λ) with ML unique; however, b then has a GLOBAL structure, and the analysis of property (3) is unclear.

Slide 14: Combinatorial Characterization of Local Optimality (3)
- For analysis purposes, consider structures with a local nature: B is a set of TREES [following KV'06] of height less than half the girth.
- Strengthen the analysis by introducing layer weights [following ADS'09], giving better bounds on the probability of local optimality.
- Finally, since height(subtrees(G)) < ½·girth(G) = O(log N) is restrictive, take path-prefix trees instead: their height is not bounded by the girth!

Slide 15: Path-Prefix Tree
- Captures the notion of a computation tree.
- Consider a graph G = (V, E) and a node r ∈ V. Take the set of all backtrackless paths in G emanating from node r with length at most h.
- The path-prefix tree of G rooted at node r with height h has these paths as its nodes, each path connected to its prefixes.
- (Figure: a small graph G and its path-prefix tree rooted at node 1; each tree node is labeled by the backtrackless path reaching it.)
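A short sketch enumerating the nodes of a path-prefix tree, i.e., all backtrackless paths from r of length at most h; the adjacency-list representation and the 4-node toy graph are illustrative assumptions.

```python
def path_prefix_tree(adj, r, h):
    """Enumerate the backtrackless paths from r of length <= h.

    adj: adjacency lists of G. The returned paths are exactly the nodes
    of the path-prefix tree rooted at r with height h (each path's
    parent is its prefix). Backtrackless: a path never immediately
    reverses the edge it just traversed.
    """
    paths, frontier = [(r,)], [(r,)]
    for _ in range(h):
        nxt = []
        for p in frontier:
            for u in adj[p[-1]]:
                if len(p) == 1 or u != p[-2]:   # forbid backtracking
                    nxt.append(p + (u,))
        paths.extend(nxt)
        frontier = nxt
    return paths

# Toy: 4-cycle 1-2-3-4; backtrackless paths of length <= 2 from node 1
adj = {1: [2, 4], 2: [1, 3], 3: [2, 4], 4: [1, 3]}
print(path_prefix_tree(adj, 1, 2))
# [(1,), (1, 2), (1, 4), (1, 2, 3), (1, 4, 3)]
```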

Slide 16: d-Tree
- Consider a path-prefix tree of a Tanner graph G = (I ∪ J, E).
- A d-tree T[r, h, d] is a subgraph of the path-prefix tree with root r such that every variable node v ∈ T ∩ I keeps its full degree, deg_T(v) = deg_G(v), and every local-code node c ∈ T ∩ J has deg_T(c) = d.
- A 2-tree is a skinny tree, i.e., a minimal deviation; the figure also shows a 3-tree and a 4-tree rooted at v_0.
- Note: a d-tree is not necessarily a valid configuration!
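A sketch of the d-tree degree conditions above. The representation is our assumption, and we exempt leaf variable nodes at the bottom level, which the height bound implicitly leaves with degree 1.

```python
def is_d_tree(tree_adj, kind, deg_G, d, leaves=frozenset()):
    """Check the d-tree degree conditions on a subtree T.

    tree_adj maps each tree node to its neighbors; kind maps a node to
    'var' (variable node) or 'code' (local-code node); deg_G gives the
    Tanner-graph degree of the node each tree node is a copy of.
    Variable nodes must keep their full Tanner-graph degree, local-code
    nodes must have degree exactly d; nodes in `leaves` are exempt
    (an assumption for the bottom level of the tree).
    """
    for u, nbrs in tree_adj.items():
        if u in leaves:
            continue
        want = deg_G[u] if kind[u] == 'var' else d
        if len(nbrs) != want:
            return False
    return True
```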

Slide 17: Cost of a Projected Weighted Subtree
- Consider layer weights β: {1, …, h} → R and a subtree of a path-prefix tree.
- Define a weight function for the subtree induced by β, and project the weighted subtree onto the Tanner graph G: each tree node contributes its weight to the Tanner-graph node it is a copy of.
- (Figure: a weighted path-prefix tree and its projection onto G.)

Slide 18: Combinatorial Characterization of Local Optimality
- Setting: C(G) ⊆ {0,1}^N is a Tanner code with minimal local distance d*, and 2 ≤ d ≤ d*.
- Let B ⊆ [0,1]^N \ {0^N} be the set of all vectors corresponding to projections onto G of β-weighted d-trees of height 2h rooted at variable nodes.
- Definition: a codeword x is (h, β, d)-locally optimal for λ ∈ R^N if for all vectors b ∈ B, <λ, x ⊕ b> > <λ, x>.

Slide 19: Theorem: Local Opt ⇒ ML Opt / LP Opt
- Theorem: if x ∈ C(G) is (h, β, d)-locally optimal for λ, then (1) x is the unique ML codeword for λ, and (2) x is the unique optimal LP solution given λ.
- Proof sketch, step 1 (Local Opt ⟹ ML Opt). Key structural lemma: every codeword equals a convex combination of projected β-weighted d-trees in G (up to a scaling factor).
- For every codeword x' ≠ x, the key structural lemma together with the local optimality of x implies that x' has strictly larger LLR cost than x.

Slide 20: Theorem: Local Opt ⇒ ML Opt / LP Opt (cont.)
- Proof sketch, step 2 (Local Opt ⟹ LP Opt): uses step 1 (Local Opt ⟹ ML Opt) as a black box, via a reduction using graph covers and graph-cover decoding [VK'05].

Slide 21: Theorem: Local Opt ⇒ ML Opt / LP Opt (cont.)
- Interesting outcome: the theorem characterizes the event of LP decoding failure (analyzed under the all-zeros assumption).
- Work in progress: design an iterative message-passing decoding algorithm that computes an (h, β, d)-locally optimal codeword after h iterations. If x is a locally optimal codeword for λ, a weighted message-passing algorithm finds x after h iterations, with a guarantee that x is the ML codeword.

Slide 22: Theorem: Local Opt ⇒ ML Opt / LP Opt (cont.)
- Goals achieved: (1) x ∈ LO(λ) ⟹ x ∈ ML(λ) with ML unique; (2) x ∈ LO(λ) ⟹ x ∈ LP(λ), with the LP solution integral and unique.
- Left to show: (3) Pr{x ∈ LO(λ)} = 1 - o(1).

Slide 23: Bounds for LP Decoding Based on d-Trees (analysis for regular factor graphs)
- For (d_L, d_R)-regular Tanner codes with minimum local distance d*, the finite-length bounds take the form: there exist c > 1 and a threshold t such that for every noise level below t, Pr(LP decoder success) ≥ 1 - exp(-c^girth).
- If girth = Θ(log n), then Pr(LP decoder success) ≥ 1 - exp(-n^γ), for some 0 < γ < 1.
- As n → ∞, t is a lower bound on the threshold of LP decoding.
- Results for the BSC using analysis of uniformly weighted d-trees (lower bounds on the tolerable crossover probability p_LP):

  Code                                  Skinny-tree (2-tree) [ADS'09]   3-trees         4-trees
  (3,6)-regular Tanner code, d* = 4     p_LP > 0.02                     p_LP > 0.076    p_LP > 0.4512
  (4,8)-regular Tanner code, d* = 4     p_LP > 0.017                    p_LP > 0.057    p_LP > 0.365

Slide 24: Message-Passing Decoding for Irregular LDPC Codes
- Dynamic programming on coupled computation trees of the Tanner graph.
- Input: channel observations for the variable nodes v ∈ V.
- Output: an assignment to the variable nodes v ∈ V such that the assignment to each v agrees with the "tree" codeword that minimizes an objective function on the computation tree rooted at v.
- Algorithm principle: messages are sent iteratively, variable nodes → check nodes and check nodes → variable nodes.
- Goal: if a locally optimal codeword x exists, the message-passing algorithm finds x.

Slide 25: β-Weighted Min-Sum: Message Rules
- Variable node → check node message (iteration l).
- Check node → variable node message (iteration l).

Slide 26: β-Weighted Min-Sum: Decision Rule
- Decision at variable node v at iteration h.
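The slides' layer-weighted message rules and decision rule did not survive extraction, so as a hedged stand-in here is the attenuated min-sum variant that, per slide 27, the β-weighted algorithm generalizes [Frey and Koetter '00]. The constant attenuation beta (in place of per-layer weights β(l)), the parity-check local codes, and all names are our illustrative assumptions.

```python
import math

def attenuated_min_sum(checks, llr, h, beta=0.8):
    """Attenuated min-sum on a parity-check Tanner graph.

    Message rules (per iteration):
      variable -> check:  m = llr[i] + sum of the other incoming
                          check-to-variable messages;
      check -> variable:  m = beta * prod(signs) * min(|m'|) over the
                          other incoming variable-to-check messages.
    A constant attenuation beta stands in for the per-layer weights.
    """
    n = len(llr)
    var_nbrs = [[j for j, nj in enumerate(checks) if i in nj] for i in range(n)]
    c2v = {(j, i): 0.0 for j, nj in enumerate(checks) for i in nj}
    for _ in range(h):
        v2c = {(i, j): llr[i] + sum(c2v[(k, i)] for k in var_nbrs[i] if k != j)
               for (j, i) in c2v}
        for j, nj in enumerate(checks):
            for i in nj:
                others = [v2c[(k, j)] for k in nj if k != i]
                sign = math.prod(1.0 if m >= 0 else -1.0 for m in others)
                c2v[(j, i)] = beta * sign * min(abs(m) for m in others)
    # Decision at node i after h iterations: '0' iff the aggregate LLR >= 0
    return [0 if llr[i] + sum(c2v[(j, i)] for j in var_nbrs[i]) >= 0 else 1
            for i in range(n)]

# Toy run: one mildly corrupted bit is corrected to the all-zero codeword
print(attenuated_min_sum([[0, 1, 2], [1, 2, 3]], [2.2, -0.5, 2.2, 2.2], h=10))
```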

Slide 27: Theorem: Local Opt ⇒ β-Weighted Min-Sum Opt
- Theorem: if x ∈ C(G) is (h, β, 2)-locally optimal for λ, then (1) the β-weighted min-sum algorithm computes x in h iterations, and (2) x is the unique ML codeword given λ.
- The β-weighted min-sum algorithm generalizes the attenuated max-product algorithm [Frey and Koetter '00], [Jian and Pfister '10].
- Open question: probabilistic analysis of local optimality in the irregular case.

Slide 28: Local Optimality Certificates for LP Decoding: Previous Results
- Koetter and Vontobel '06: characterized LP solutions and provided a criterion certifying the optimality of a codeword for LP decoding of regular LDPC codes. The characterization is based on the combinatorial structure of skinny trees of height h in the factor graph (h < ½·girth(G)).
- Arora, Daskalakis and Steurer '09: a certificate for LP decoding of regular LDPC codes over the BSC, extended to MBIOS channels in [Halabi-Even '10]. The certificate characterization is based on weighted skinny trees of height h (h < ½·girth(G)).
- Note: bounds on the word error probability of LP decoding are computed by analyzing the certificates. By introducing layer weights, the ADS certificate holds with high probability at much larger noise rates.
- Vontobel '10: implies a certificate for LP decoding of Tanner codes based on weighted skinny subtrees of graph covers.

Slides 29-31: Local Optimality Certificates for LP Decoding: Summary of Main Techniques
Comparison of prior work [KV06, ADS09, HE10] with the current work:
- Girth: prior work requires h < ½·girth(G); in the current work h is unbounded, the characterization uses computation trees, and there is no dependence on the girth of the factor graph.
- Regularity: prior work assumes a regular factor graph; the current work handles irregular factor graphs by adding normalization factors according to node degrees.
- Check nodes / constraints: prior work uses parity codes; the current work allows general linear codes, with a tighter relaxation for the generalized fundamental polytope.
- Deviations: prior work uses "skinny" trees, which locally satisfy the parity checks; the current work uses "fat" d-trees, which are not necessarily valid configurations. Certificates based on "fat" structures are likely to hold with high probability at larger noise rates.
- LP solution analysis / characterization: prior work relies on dual/primal LP analysis and polyhedral analysis; the current work uses a reduction to ML via the characterization of graph-cover decoding.

Slide 32: Conclusions
- Combinatorial, graph-theoretic, and algorithmic methods for analyzing the decoding of (modern) error-correcting codes over any memoryless channel.
- A new combinatorial certificate of local optimality for LP decoding of irregular Tanner codes: Local Opt implies LP Opt and ML Opt.
- The certificate is based on weighted "fat" subtrees of computation trees of height h, where h is not bounded by the girth.
- Proof techniques: combinatorial decompositions and graph-cover arguments.
- An efficient dynamic-programming algorithm, running in O(|E|·h) time, computes the local optimality certificate of a codeword given an LLR vector.
- The degree of the local-code nodes in the deviations is not limited to 2.

Slide 33: Future Work
- Work in progress: design an iterative message-passing decoding algorithm for Tanner codes that computes an (h, β, d)-locally optimal codeword after h iterations. If x is a locally optimal codeword for λ, the weighted message-passing algorithm computes x in h iterations, with a guarantee that x is the ML codeword.
- Asymptotic analysis of LP decoding and weighted min-sum decoding for ensembles of irregular Tanner codes: apply density-evolution techniques together with the combinatorial characterization of local optimality.

Slide 34: Thank You!

