
1 Finite-Length Scaling and Error Floors. Abdelaziz Amraoui, Andrea Montanari, Ruediger Urbanke, Tom Richardson

2 Approach to Asymptotic

3 Finite-Length Scaling


7 Analysis (BEC): Covariance evolution. (Figure: the fraction of check nodes of degree greater than one and of degree equal to one, together with their covariance terms, as a function of the residual graph's fractional size.)

8 Covariance evolution

9 Finite-Length Curves

10 Analysis (BEC). Following Luby et al., the decoder is analyzed one variable removal at a time, with the trajectory converging to a differential equation. The covariance of the state-space variables also follows a differential equation; the increments have the Markov property and the required regularity.
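To make the "one variable at a time" picture concrete, here is a minimal Python sketch of the peeling decoder on the BEC; the interface (var_to_checks, peel_bec) and the bookkeeping are illustrative, not taken from the talk. Averaged over many runs, the recorded trajectory of (residual size, number of degree-one checks) concentrates around the solution of the differential equation, and its fluctuations are what covariance evolution tracks.

```python
import random
from collections import defaultdict

def peel_bec(var_to_checks, eps, rng=random):
    """One peeling-decoder run on the BEC (all-zero codeword assumed).

    var_to_checks maps each variable node to its list of check neighbours.
    Returns the trajectory of (residual graph size, number of degree-one
    checks) after each peeling step; decoding succeeds iff the residual
    size reaches 0."""
    erased = {v for v in var_to_checks if rng.random() < eps}
    check_deg = defaultdict(int)        # residual degree of each check node
    check_vars = defaultdict(set)       # unresolved neighbours of each check
    for v in erased:
        for c in var_to_checks[v]:
            check_deg[c] += 1
            check_vars[c].add(v)

    deg_one = {c for c, d in check_deg.items() if d == 1}
    trajectory = [(len(erased), len(deg_one))]

    while erased and deg_one:
        c = deg_one.pop()
        (v,) = check_vars[c]            # the single unresolved variable of c
        erased.remove(v)                # peel: v is now determined
        for c2 in var_to_checks[v]:
            check_vars[c2].discard(v)
            check_deg[c2] -= 1
            if check_deg[c2] == 1:
                deg_one.add(c2)
            elif check_deg[c2] == 0:
                deg_one.discard(c2)
        trajectory.append((len(erased), len(deg_one)))
    return trajectory
```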

11 Results

12 Finite Threshold Shift

13 Generalizing from the BEC? There is no obvious incremental form (differential equation), no state-space characterization of failure, and no clear finite-dimensional state space; it is also not clear what the right coordinates are for the general case (capacity?). Nevertheless, it is useful in practice to have this interpretation of iterative failure and to have the basic form of the scaling law.
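For reference, the basic form of the scaling law on the BEC (from the finite-length scaling work underlying this talk) can be written as follows, where Q is the Gaussian tail function, \epsilon^\ast is the ensemble threshold, and \alpha, \beta are ensemble-dependent scaling parameters; the exact constants are ensemble-specific and are not reproduced here.

P_B(n,\epsilon) \;\approx\; Q\!\left(\frac{\sqrt{n}\,\bigl(\epsilon^\ast - \beta\, n^{-2/3} - \epsilon\bigr)}{\alpha}\right)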

14 Empirical Evidence

15 Error Floors


18 Error Floors: Review of the BEC

19 Error floors on the erasure channel: stopping sets.

20 Error floors on the erasure channel: average performance.

21 Error floors on the erasure channel: decomposition.

22 Error floors on the erasure channel: average and typical performance.

23 Error floors for general channels: expurgated ensemble experiments. (Figure: AWGN channel, rate 51/64, block length 4k; curves for a random construction, a girth-optimized construction (girth 8), and a neighborhood-optimized construction.)

24 Error floors for general channels: trapping-set distribution. (Figure: AWGN channel, rate 51/64, block length 4k; trapping-set classes (3,1), (5,1), and (7,1) are marked.)

25 Observations. The error floor region is dominated by small-weight errors. The subset on which errors occur usually induces a subgraph with only degree-2 and degree-1 check nodes, where the number of degree-1 check nodes is relatively small. Optimized graphs exhibit a concentration of error types.
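As a concrete aid, here is a minimal, hypothetical Python helper (classify_error_set and var_to_checks are illustrative names) for the usual (a, b) convention used on these slides, where a is the number of variable nodes in the set and b is the number of odd-degree check nodes in the induced subgraph; it also returns the induced check-degree profile referred to in the observation above.

```python
from collections import Counter

def classify_error_set(T, var_to_checks):
    """Return the (a, b) class of a candidate error set T and the degree
    profile of the checks in the subgraph induced by T.

    a = |T|; b = number of check nodes of odd induced degree.  For typical
    error-floor events the profile is dominated by degree-1 and degree-2
    checks, with few degree-1 checks."""
    induced_deg = Counter()
    for v in T:
        for c in var_to_checks[v]:
            induced_deg[c] += 1
    b = sum(1 for d in induced_deg.values() if d % 2 == 1)
    profile = Counter(induced_deg.values())
    return (len(T), b), dict(profile)
```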

26 Intuition. In an error floor event, the nodes in the trapping set receive 1's with some reliability, while the other nodes receive typical inputs. After a few iterations, 'exterior' nodes and messages converge to highly reliable 0s, while the internal messages are 1s. Nevertheless, if the internally received values are 1s, the internal messaging reaches highly reliable 1s and the message state gets trapped. (Figure: a (9,3) trapping set, with exterior nodes labeled "Definite 0" and trapping-set nodes labeled "Reliable 1".)

27 Defining Failure: Trapping Sets. A decoder acting on an input $y \in \mathcal{Y}$ is a sequence of maps $D_l : \mathcal{Y} \to \{0,1\}^n$. (Assume the all-0 codeword is the desired decoding; for the BEC let 1 denote an erasure.) We say that bit $j$ is eventually correct if there exists $L$ so that $l > L$ implies $D_l(y)_j = 0$. Assuming failure, the trapping set $T$ is the set of all bits that are not eventually correct.

28 Defining Failure for BP: Practice. Decode for 200 iterations. If decoding is not successful, decode for an additional 20 iterations and take the union, over those extra iterations, of all bits that do not decode to 0.
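A minimal Python transcription of this practical rule; decode_iteration is an assumed callback returning the hard-decision vector after iteration l of a decoder whose state persists between calls, so the interface is illustrative rather than the authors' code.

```python
def practical_trapping_set(decode_iteration, max_iters=200, extra_iters=20):
    """Practical trapping-set extraction for BP as described on the slide:
    run 200 iterations; if decoding has not converged to the all-0 codeword,
    run 20 more iterations and return the union of bit positions that are
    nonzero at any of those extra iterations."""
    for l in range(1, max_iters + 1):
        decision = decode_iteration(l)
        if all(bit == 0 for bit in decision):
            return set()                  # decoding succeeded, no trapping set
    trapping_set = set()
    for l in range(max_iters + 1, max_iters + extra_iters + 1):
        decision = decode_iteration(l)
        trapping_set |= {j for j, bit in enumerate(decision) if bit != 0}
    return trapping_set
```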

29 Trapping Sets: Examples
1. Let the decoder be the maximum-likelihood decoder applied in one step. Then the trapping sets are the non-zero codewords.
2. Let the decoder be belief propagation over the BEC. Then the trapping sets are the stopping sets.
3. Let the decoder be serial (strict) flipping over the BSC. Then T is a trapping set if and only if, in the subgraph induced by T, each node has more even-degree than odd-degree check neighbors, and the same holds for the complement of T. (A check of this condition is sketched below.)
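A direct, hypothetical Python transcription of the condition in example 3 (is_flipping_trapping_set and var_to_checks are illustrative names); the induced degree of a check is the number of its neighbours inside T, and even induced degree means the check is satisfied when exactly the bits in T are in error.

```python
from collections import Counter

def is_flipping_trapping_set(T, var_to_checks):
    """Check the fixed-point condition for serial (strict) flipping on the
    BSC: every variable node, inside T and in its complement, must have
    more check neighbours of even induced degree (satisfied) than of odd
    induced degree (unsatisfied)."""
    induced_deg = Counter()
    for v in T:
        for c in var_to_checks[v]:
            induced_deg[c] += 1
    for v, checks in var_to_checks.items():
        even = sum(1 for c in checks if induced_deg[c] % 2 == 0)
        odd = len(checks) - even
        if even <= odd:
            return False
    return True
```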

30 Analysis with Trapping Sets: decomposition of failure. $\mathrm{FER}(\sigma) = \sum_T P(\mathcal{E}_T, \sigma)$, where $\mathcal{E}_T$ is the set of all inputs giving rise to failure on trapping set $T$ and $\sigma$ is the channel parameter. Error floors are dominated by "small" events.

31 Predicting Error Floors: a two-pronged attack.
1. Find (cover) all trapping sets likely to make a significant contribution in the error floor region: $T_1, T_2, T_3, \ldots, T_m$.
2. Evaluate the contribution of each set to the error floor: $P(\mathcal{E}_{T_1}, \sigma), P(\mathcal{E}_{T_2}, \sigma), \ldots$
Strictly speaking, we get a (tight) lower bound: $\mathrm{FER}(\sigma) \geq \sum_i P(\mathcal{E}_{T_i}, \sigma)$.

32 Finding Trapping Sets. Simulation of decoding can be viewed as a stochastic process for finding trapping sets; it is very inefficient, however. We could use (aided) flipping to get some speed-up, but it is still too inefficient.

33 Finding Trapping Sets (Flipping). Trapping sets can be viewed as "local" extrema of certain functions, e.g., the number of odd-degree induced checks. "Local" means, e.g., under single-element removal, addition, or swap. Therefore, we can look for subsets that are "local" extrema.

34 Finding Trapping Sets (Flipping). Basic idea: build up a connected subset with a bias towards minimizing the number of induced odd-degree checks. Check occasionally for containment of an in-flipping-stable set by applying flipping decoding; eventually such a set is contained. Then check for other types of variation: out-flipping stability, single aided-flip stability (chains), ... (A sketch of the growth step follows.)
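A hypothetical Python sketch of the growth step (grow_candidate_set, var_to_checks, and check_to_vars are illustrative names; the periodic in-flipping/out-flipping stability checks mentioned above are omitted):

```python
import random
from collections import Counter

def grow_candidate_set(var_to_checks, check_to_vars, seed_var, target_size,
                       rng=random):
    """Randomized growth of a connected candidate trapping set: repeatedly
    add a neighbouring variable node, biased towards keeping the number of
    odd-degree induced checks small.  Stability checks would be applied
    periodically to the sets produced here; they are left out of the sketch."""
    T = {seed_var}
    induced_deg = Counter(var_to_checks[seed_var])

    def num_odd_if_added(v):
        deg = induced_deg.copy()
        for c in var_to_checks[v]:
            deg[c] += 1
        return sum(1 for d in deg.values() if d % 2 == 1)

    while len(T) < target_size:
        # Frontier: variables connected to T through some induced check.
        frontier = {v for c in induced_deg for v in check_to_vars[c]} - T
        if not frontier:
            break
        # Bias: prefer additions that create the fewest odd-degree checks,
        # with a little randomness so repeated runs explore different sets.
        scored = sorted(frontier, key=num_odd_if_added)
        pick = rng.choice(scored[:3])
        T.add(pick)
        induced_deg.update(var_to_checks[pick])
    return T
```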

35 Differences: BP and Flipping

36 Differences: BP and Flipping. (Figure: incoming messages r_1, r_2, r_3 combined into r_1 + r_2 + r_3.)

37 Differences: BP and Flipping

38 Differences: BP and Flipping

39 Evaluating Trapping Sets: basic idea. Find a random variable $x$, on which to condition the decoder input $Y$, that "mostly" determines membership in $\mathcal{E}_T$; i.e., $\Pr\{\mathcal{E}_T \mid x\}$ is nearly a step function in $x$. Perform in-situ simulation of the trapping set while varying $x$ to measure $\Pr\{\mathcal{E}_T \mid x\}$. Combine with the density of $x$ to get $\Pr\{\mathcal{E}_T\}$.
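A hypothetical Monte Carlo sketch of this idea (estimate_failure_prob and run_conditioned_trial are illustrative names; run_conditioned_trial is assumed to pin the trapping-set inputs according to x, draw the remaining inputs from the channel, run the decoder, and report whether it failed on T):

```python
import numpy as np

def estimate_failure_prob(x_grid, x_pdf, run_conditioned_trial, trials=1000,
                          rng=np.random.default_rng(0)):
    """Conditional Monte Carlo estimate of Pr{E_T}: sweep the conditioning
    variable x over a grid, estimate Pr{E_T | x} by in-situ simulation,
    then integrate against the density of x.

    x_grid: grid of x values; x_pdf: density of x evaluated on that grid;
    run_conditioned_trial(x, rng) -> bool reports failure on T."""
    cond_prob = np.empty(len(x_grid))
    for i, x in enumerate(x_grid):
        failures = sum(run_conditioned_trial(x, rng) for _ in range(trials))
        cond_prob[i] = failures / trials
    # Pr{E_T} = integral of Pr{E_T | x} f(x) dx, approximated on the grid.
    return np.trapz(cond_prob * x_pdf, x_grid), cond_prob
```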

40 Evaluating Trapping Sets. (Figure: the input on the trapping set is conditioned; the rest of the graph sees simulated channel outputs.)

41 Evaluating Trapping Sets: BEC. X is the number of erasures in T (= S).
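Spelled out (a sketch, assuming erasures are i.i.d. with probability \epsilon and writing $s = |S|$), the conditioning decomposition for the BEC is

\Pr\{\mathcal{E}_S\} \;=\; \sum_{k=0}^{s} \binom{s}{k}\,\epsilon^{k}(1-\epsilon)^{\,s-k}\,\Pr\{\mathcal{E}_S \mid X = k\},

with the conditional probabilities $\Pr\{\mathcal{E}_S \mid X = k\}$ measured by the in-situ simulation of the previous slides.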

42 Evaluating Trapping Sets: AWGN. X is the mean noise input in T.
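Similarly for the AWGN channel (a sketch, assuming the noise samples on $T$ are i.i.d. $\mathcal{N}(0,\sigma^2)$, so with $a = |T|$ the mean noise over $T$ is $\mathcal{N}(0, \sigma^2/a)$):

\Pr\{\mathcal{E}_T\} \;=\; \int_{-\infty}^{\infty} \Pr\{\mathcal{E}_T \mid X = x\}\,\sqrt{\frac{a}{2\pi\sigma^{2}}}\; e^{-a x^{2}/(2\sigma^{2})}\, dx.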

43 Evaluating Trapping Sets: AWGN

44 Evaluating Trapping Sets: Margulis (12,4)

45 A tougher test case: G

46 Evaluating Trapping Sets: G

47 A curve from a single point

48 Extrapolating a curve

49 Variation in Trapping Sets: (10,4), (10,2), (10,0)

50 Variation in Trapping Sets: (12,4), (12,2), (12,0)

51 Variation in Trapping Sets

52 Conclusions. Error floor performance is predictable, at the cost of considerable computational effort. (It would be nice to have a scaling law for the "best" codes.) The trade-off between the error floor and the threshold (waterfall) behavior can be optimized even for very deep error floors.

53 Conclusions. "It is interesting to observe that the search for theoretical understanding of turbo codes has transformed coding theorists into experimental scientists." (The slide amends "scientists" to "physicists".)

