
On Error Correction for Networks and Deadlines
Tracey Ho, Caltech
INC, 8/5/12

Introduction
Network error correction [Yeung & Cai 06]
Classical error correction: errors in some bits, locations unknown → code across bits.
Network error correction: errors in some links/packets, locations unknown → code across links/packets.
(Figure: sources s1, s2 transmit through a network with unknown erroneous links to sinks t1, t2.)

Problem statement
Given a network and error model:
−What communication rates are feasible? (information theory)
−How to achieve them with practical codes? (coding theory)
(Figure: sources s1, s2 with rates r1, r2 transmit through a network with unknown erroneous links to sinks t1, t2.)

Background – network error correction
Rich literature on single-source multicast with uniform errors:
−All sinks demand the same information
−Equal capacity network links/packets, any z can be erroneous
−Various capacity-achieving codes, e.g. [Cai & Yeung 06, Jaggi et al. 08, Koetter & Kschischang 08], and results on code properties, e.g. [Yang & Yeung 07, Balli, Yan & Zhang 07, Prasad & Sundar Rajan 09]

This talk
Generalizations:
−Non-multicast demands, multiple sources, rateless codes, non-uniform link capacities
−New capacity bounding and coding techniques
Applications:
−Streaming (non-multicast nested network)
−Distribution of keys/decentralized data (multi-source network)
−Computationally limited networks (rateless codes)

Outline
−Non-multicast nested networks, streaming communication
−Multiple-source multicast, key distribution
−Rateless codes, computationally limited networks
−Non-uniform link capacities

Outline
−Non-multicast nested networks, streaming communication
 O. Tekin, S. Vyetrenko, T. Ho and H. Yao, "Erasure correction for nested receivers," Allerton
 O. Tekin, T. Ho, H. Yao and S. Jaggi, "On erasure correction coding for streaming," ITA
 D. Leong and T. Ho, "Erasure coding for real-time streaming," ISIT
−Multiple-source multicast, key distribution
−Rateless codes, computationally limited networks
−Non-uniform link capacities

Background – non-multicast
Not all sinks demand the same information. Capacity even without errors is an open problem:
−May need to code across different sinks' data (inter-session coding)
−Not known in general when intra-session coding suffices
Non-multicast network error correction:
−Capacity bounds from analyzing three-layer networks (Vyetrenko, Ho & Dikaliotis 10)
−We build on this work to analyze coding for streaming of stored and online content

Streaming stored content
(Figure: a source sends unit-size packets over a packet erasure link, protected by forward error correction. After an initial play-out delay, demanded information I1, I2, I3 must be decoded by deadlines m1, m2, m3 respectively.)

Nested network model
The temporal coding problem maps to a spatial network problem: sink t1 demands I1 by deadline m1, sink t2 demands I1, I2 by m2, and sink t3 demands I1, I2, I3 by m3.
−Each sink sees a subset of the info received by the next (nested structure)
−Non-multicast demands
−Unit capacity links, unit size packets

Nested network model
A packet error/erasure correction streaming code corresponds to a finite blocklength network error/erasure correction code on the nested network, and a capacity outer bound for one setting gives a capacity outer bound for the other.

Problem and results
Problem: Given an erasure model and deadlines m1, m2, …, what rate vectors u1, u2, … are achievable?
Results:
−We find the capacity and a simple optimal coding scheme for a uniform erasure model: at most z erasures, locations unknown a priori
−We show this scheme achieves at least a guaranteed fraction of the capacity region for a sliding window erasure model: constraints on the number of erasures in sliding windows of a certain length; the exact optimal scheme is sensitive to model parameters


z erasures – upper bounding capacity
We want to find the capacity region of achievable rates u1, u2, …, un. We can write a cut-set bound for each sink:
u1 ≤ m1 − z
u1 + u2 ≤ m2 − z
…
u1 + u2 + … + un ≤ mn − z
Can we combine bounds for multiple erasure patterns and sinks to obtain tighter bounds?
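These nested cut-set constraints are mechanical to check. A minimal sketch (function and variable names are mine, not from the talk):

```python
def cutset_feasible(u, m, z):
    """Check the per-sink cut-set bounds u1 + ... + ui <= m_i - z.

    u[i]: rate demanded by sink i+1 (for message I_{i+1})
    m[i]: number of packets received by sink i+1's deadline
    z:    maximum number of erasures
    """
    total = 0
    for u_i, m_i in zip(u, m):
        total += u_i          # cumulative rate through sink i+1
        if total > m_i - z:
            return False
    return True

# Rate vector from the later example: m = (10, 14, 18, 22), z = 2
print(cutset_feasible([6, 3, 3, 4], [10, 14, 18, 22], 2))  # True
print(cutset_feasible([9, 3, 3, 4], [10, 14, 18, 22], 2))  # False: 9 > 10 - 2
```

As the slide notes, these per-sink bounds are necessary but not tight in general; the combining procedure below sharpens them.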

Cut-set combining procedure
Obtain bounds involving progressively more links and rates ui by iteratively applying two steps:
Extend: H(X | I1, …, Ii−1) + |Y| ≥ H(X, Y | I1, …, Ii−1) = H(X, Y | I1, …, Ii) + ui, where X, Y is a decoding set for Ii
Combine: e.g. for m1 = 3, m2 = 5, m3 = 7, m4 = 11, z = 1:
u1 + H(X1X2 | I1) ≤ 2
u1 + H(X1X3 | I1) ≤ 2
u1 + H(X2X3 | I1) ≤ 2

Upper bound derivation graph
Different choices of links at each step give different upper bounds, and the number of bounds is exponentially large. Only some are tight – how to find them? We use an achievable scheme as a guide and show a matching upper bound.
Example: m1 = 3, m2 = 5, m3 = 7, m4 = 11, z = 1. Capacity region:
u1 ≤ 2
3u1 + 2u2 ≤ 8
3u1 + 2u2 + u3 ≤ 9
6u1 + 5u2 + 4u3 ≤ 24
6u1 + 4u2 + 2u3 + u4 ≤ 20
9u1 + 6u2 + 4u3 + 3u4 ≤ 36
6u1 + 5u2 + 4u3 + 2u4 ≤ 28
6u1 + 5u2 + 4u3 + 3u4 ≤ 30
9u1 + 8u2 + 7u3 + 6u4 ≤ 54

Intra-session coding
Separate erasure coding over each sink's data. Code design becomes a capacity allocation problem: y_{j,k} is the capacity on the k-th link allocated to the j-th sink's data, and we may assume y_{j,k} = 0 for k > m_j.
A rate vector (u1, u2, …, un) is achieved if and only if for every unerased set P of links:
Σ_{k∈P} y_{j,k} ≥ u_j for each sink j, subject to Σ_j y_{j,k} ≤ 1 on each link k.
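For small instances the "every unerased set P" condition can be checked by brute force. A sketch (the allocation y and all names are illustrative, not from the talk):

```python
from itertools import combinations

def intra_session_ok(y, u, m, z):
    """Check that allocation y survives every pattern of z link erasures.

    y[j][k]: capacity on link k allocated to sink j's data (0 for k >= m[j])
    Sink j must recover rate u[j] from its surviving allocated capacity.
    """
    n_links = len(y[0])
    # Link capacity constraint: total allocation on each unit link at most 1
    if any(sum(row[k] for row in y) > 1 + 1e-9 for k in range(n_links)):
        return False
    for erased in combinations(range(n_links), z):
        for j, (u_j, m_j) in enumerate(zip(u, m)):
            survived = sum(y[j][k] for k in range(m_j) if k not in erased)
            if survived < u_j - 1e-9:
                return False
    return True

# Toy nested example: deadlines m = (3, 5), z = 1
y = [[1, 1, 1, 0, 0],
     [0, 0, 0, 1, 1]]
print(intra_session_ok(y, [2, 1], [3, 5], 1))  # True
print(intra_session_ok(y, [2, 2], [3, 5], 1))  # False: erasing link 4 or 5 starves sink 2
```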

"As uniform as possible" intra-session coding scheme
Example: m1 = 10, m2 = 14, m3 = 18, m4 = 22, u1 = 6, u2 = 3, u3 = 3, u4 = 4, z = 2. Sink 1 uses links 1–10; sink 2 may also use links 11–14; sink 3 adds links 15–18; sink 4 adds links 19–22.
For a given rate vector, fill each row as uniformly as possible, subject to constraints from previous rows.
Can we do better?
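One way to realize "as uniform as possible" filling is per-row water-filling: raise a common level over the links the row may use, capped by residual link capacity, until the allocation still delivers the row's rate after the z largest entries are erased. This is my reconstruction of the idea, not the talk's code:

```python
def fill_row(residual, m_j, u_j, z, iters=60):
    """Water-fill row j over links 0..m_j-1 as uniformly as possible.

    Finds the smallest level a such that allocation min(a, residual[k])
    still delivers u_j after the z largest entries are erased.
    Returns the row allocation, or None if infeasible.
    """
    def alloc(a):
        return [min(a, residual[k]) for k in range(m_j)]

    def survives(a):
        row = sorted(alloc(a))
        return sum(row[:-z] if z else row) >= u_j - 1e-9

    if not survives(1.0):          # unit-capacity links: level never exceeds 1
        return None
    lo, hi = 0.0, 1.0
    for _ in range(iters):         # bisection on the fill level
        mid = (lo + hi) / 2
        if survives(mid):
            hi = mid
        else:
            lo = mid
    return alloc(hi) + [0.0] * (len(residual) - m_j)

# Example from the slide: m = (10, 14, 18, 22), u = (6, 3, 3, 4), z = 2
m, u, z = [10, 14, 18, 22], [6, 3, 3, 4], 2
residual = [1.0] * m[-1]
table = []
for m_j, u_j in zip(m, u):
    row = fill_row(residual, m_j, u_j, z)
    assert row is not None, "rate vector infeasible under this scheme"
    residual = [r - x for r, x in zip(residual, row)]
    table.append(row)
print([round(sum(row), 3) for row in table])  # row totals: [7.5, 3.5, 4.0, 5.0]
```

Note how later rows are pushed onto their extra links once earlier rows have consumed the shared ones, exactly the "constraints from previous rows" effect.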

Capacity region
Theorem: The z-erasure (or error) correction capacity region is achieved by the "as uniform as possible" coding scheme.
−Characterization of the capacity region in a form that is simple to specify and calculate
−Intra-session coding is also relatively simple to implement

Proof idea
Consider any given rate vector (u1, u2, …, un) and let T_{i,j} denote its corresponding "as uniform as possible" allocation on the j-th group of links (links m_{j−1}+1, …, m_j):

Links: 1,…,m1 | m1+1,…,m2 | m2+1,…,m3 | … | m_{n−1}+1,…,m_n
I1:    T_{1,1}
I2:    T_{2,1}   T_{2,2}
…
In:    T_{n,1}   T_{n,2}     T_{n,3}     …   T_{n,n}

−Show inductively that the conditional entropy of any set of unerased links given messages I1, …, Ik matches the residual capacity from the table
−Use the T_{i,j} values to find the appropriate path through the upper bound derivation graph

Streaming online content
Messages arrive every c time steps at the source, and each must be decoded within d time steps.
(Figure: message creation times c = 3 apart; each message's decoding deadline is d = 8 steps after creation; unit-size packets over a packet erasure link.)

Problem and results
Problem: Given an erasure model and parameters c and d, what is the maximum size of independent uniform messages?
Results: We find the capacity and a simple coding scheme that is asymptotically optimal for the following erasure models:
−#1: Limited number of erasures per sliding window
−#2: Erasure bursts and guard intervals of certain lengths
For other values of burst length and guard interval, there are optimal inter-session convolutional code constructions [Martinian & Trott 07, Leong & Ho 12]


Code construction (d a multiple of c)
Divide each packet evenly among current messages; intra-session coding within each message. When d is a multiple of c, the number of current messages is constant at each time step (e.g. messages 2, 3, 4 are current at t = 12).
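The bookkeeping of which messages are "current" is simple. A sketch, assuming message j is created at time j·c and must be decoded by time j·c + d (my indexing, not necessarily the talk's):

```python
def current_messages(t, c, d):
    """Messages j with creation time j*c <= t and deadline j*c + d > t."""
    return [j for j in range(1, t // c + 1) if j * c + d > t]

# When d is a multiple of c, the number of current messages is constant:
print([len(current_messages(t, 3, 9)) for t in range(12, 18)])  # [3, 3, 3, 3, 3, 3]
# Otherwise it varies between floor(d/c) and ceil(d/c):
print([len(current_messages(t, 3, 8)) for t in range(12, 18)])  # [3, 3, 2, 3, 3, 2]
```

Dividing each packet evenly among the current set then gives each message a 1/len(current) share of every packet in its window.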

Code construction (d not a multiple of c)
Divide each packet evenly among current messages; intra-session coding within each message. When d is not a multiple of c, the number of current messages varies with the time step (e.g. messages 3, 4 are current at t = 12, while messages 3, 4, 5 are current at t = 13).

Capacity result
As in the stored-content case, the converse is obtained by:
−combining bounds for multiple erasure patterns and sinks (deadlines)
−inductively obtaining upper bounds on the entropy of sets of unerased packets, conditioned on previous messages
The converse bound coincides with the rate achieved by our coding scheme asymptotically in the number of messages n. The gap for small n corresponds to underutilization of capacity at the start and end by the time-invariant coding scheme.

Outline
−Non-multicast nested networks, streaming communication
−Multiple-source multicast, key distribution
 T. Dikaliotis, T. Ho, S. Jaggi, S. Vyetrenko, H. Yao, M. Effros, J. Kliewer and E. Erez, "Multiple-access Network Information-flow and Correction Codes," IT Transactions
 H. Yao, T. Ho and C. Nita-Rotaru, "Key Agreement for Wireless Networks in the Presence of Active Adversaries," Asilomar
−Rateless codes, computationally limited networks
−Non-uniform link capacities

Multiple-source multicast, uniform z errors
Sources with independent information; coherent (known topology) and noncoherent (unknown topology) cases.
−We could partition network capacity among different sources… but could rate be improved by coding across different sources? To what extent can different sources share network capacity?
−Challenge: owing to the need for coding across sources in the network and independent encoding at sources, straightforward extensions of single-source codes are suboptimal
−Related work: the code construction in (Siavoshani, Fragouli & Diggavi 08) achieves capacity for C1 + C2 = C



Capacity region
Theorem: The coherent and non-coherent capacity region under any z link errors is given by the cut-set bounds, where
−U = set of source nodes
−m_S = min-cut capacity between the sources in subset S of U and each sink
−r_i = rate from the i-th source
Redundant capacity can be fully shared via coding.
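For uniform z errors the cut-set form analogous to the single-source "mincut − 2z" result is Σ_{i∈S} r_i ≤ m_S − 2z for every nonempty subset S of sources; treat that exact form as my reading of the theorem rather than a quote. A brute-force region check with m_S supplied as a dictionary (names and numbers are illustrative):

```python
from itertools import combinations

def in_capacity_region(r, mincut, z):
    """Check sum_{i in S} r_i <= m_S - 2z for every nonempty subset S.

    r:      rate per source, e.g. {'s1': 1, 's2': 2}
    mincut: m_S for each frozenset S of sources
    """
    sources = list(r)
    for k in range(1, len(sources) + 1):
        for S in combinations(sources, k):
            if sum(r[i] for i in S) > mincut[frozenset(S)] - 2 * z:
                return False
    return True

# Toy numbers (illustrative, not from the talk):
mincut = {frozenset({'s1'}): 4, frozenset({'s2'}): 4, frozenset({'s1', 's2'}): 5}
print(in_capacity_region({'s1': 1, 's2': 1}, mincut, 1))  # True
print(in_capacity_region({'s1': 2, 's2': 2}, mincut, 1))  # False: 4 > 5 - 2
```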

Capacity-achieving non-coherent code constructions
1. Probabilistic construction
−Joint decoding of sources, using the injection distance metric
−The subspace distance metric used in the single-source case is insufficient in the multi-source case
2. Lifted Gabidulin rank-metric codes over nested fields
−Successive decoding of sources
−The linear transformation used to separate out other sources' interference increases the field size of errors
−Sources encode over nested extension fields

An application: key distribution
Robust distribution of keys from a pool (or other decentralized data):
−Nodes hold subsets of keys, some pre-distributed; further exchange of keys among nodes
−Want to protect against some number of corrupted nodes
Questions:
−How many redundant transmissions are needed?
−Can coding help?
(Figure: nodes V1–V9 hold key subsets, e.g. {k1, k2}, {k1, k3}, {k2, k3}; receiver R wants k1, k2, k3.)

An application: key distribution
The problem is equivalent to multi-source network error correction. Coding across keys strictly outperforms forwarding in general.
(Figure: sources S1, S2, S3 transmit their key subsets via nodes V1–V9 to receiver R, which wants k1, k2, k3.)
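A toy illustration of why coding across keys adds useful redundancy (a made-up example, not the talk's construction): sending the parity k1⊕k2⊕k3 alongside the keys lets the receiver detect a single corrupted transmission, where plain forwarding would need full repetition.

```python
import secrets

def transmissions(k1, k2, k3):
    """Forward the three keys plus one coded packet (their XOR)."""
    return [k1, k2, k3, k1 ^ k2 ^ k3]

def consistent(received):
    """Parity check: the XOR of all four transmissions must be zero."""
    x = 0
    for pkt in received:
        x ^= pkt
    return x == 0

keys = [secrets.randbits(32) for _ in range(3)]
tx = transmissions(*keys)
print(consistent(tx))            # True: unmodified transmissions pass
tx[1] ^= 0xDEADBEEF              # a corrupted node alters one packet
print(consistent(tx))            # False: the corruption is detected
```

With more coded packets (e.g. an MDS code across the keys) the receiver could also locate and correct the error, which is the regime the equivalence with network error correction addresses.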

Outline
−Non-multicast nested networks, streaming communication
−Multiple-source multicast, key distribution
−Rateless codes, computationally limited networks
 S. Vyetrenko, A. Khosla and T. Ho, "On combining information-theoretic and cryptographic approaches to network coding security against the pollution attack," Asilomar
 W. Huang, T. Ho, H. Yao and S. Jaggi, "Rateless resilient network coding against Byzantine adversaries"
−Non-uniform link capacities

Background – adversarial errors in multicast
Information-theoretic network error correction:
−Prior codes designed for a given min-cut and a max number of errors z_U
−Achieve min-cut − 2z_U, e.g. [Cai and Yeung 06, Jaggi et al. 08, Koetter & Kschischang 08]
−No computational assumptions on adversaries
−Use network diversity and redundant capacity as resources
Cryptographic signatures with rateless network codes:
−Signatures for checking network-coded packets, e.g. [Charles et al. 06, Zhao et al. 07, Boneh et al. 09]
−Achieve the realized value of the min-cut after erasures
−Use computation and key infrastructure as resources

Motivation
Cryptographic approach:
+Does not require a priori estimates of network capacity and errors (rateless)
+Achieves higher rate
−Performing signature checks requires significant computation; checking all packets at all nodes can limit throughput if nodes are computationally weak, e.g. low-power wireless nodes
Questions:
−Can we achieve the rateless benefits without the computational drawback?
−Can we use both network diversity and computation as resources, to do better than with each separately?

Rateless network error correction codes
Incrementally send redundancy until decoding succeeds. Without an a priori bound on the number of errors, we need a means to verify decoding. We give code constructions using, respectively:
1. Shared secret randomness between source and sink (small compared to the message size)
2. Cryptographic signatures
These constructions are asymptotically optimal:
−Decoding succeeds w.h.p. once the received information/errors satisfy the cut-set bound
−Overhead becomes negligible with increasing packet length

Rateless code using shared secret
The shared secret is random and independent of the message.
Non-rateless case [Nutman and Langberg 08]:
−Redundancy Y is added to message W so as to satisfy a matrix hash equation [Y W I]V = H defined by the shared secret (V, H)
−The hash is used to extract [Y W I] from the received subspace
Challenges in the rateless case:
1. Calculate redundancy incrementally such that it is cumulatively useful for decoding
2. Send redundancy incrementally; growth in the dimension of the subspace to be recovered in turn necessitates more redundancy
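The hash-based verification can be illustrated over a prime field: the shared secret is a random vector v together with the value h = w·v mod p, and the sink accepts a candidate decoding only if it reproduces h. This is a scalar toy version of the idea; the construction above uses the matrix equation [Y W I]V = H, and all names here are mine:

```python
import random

P = 2**31 - 1  # a prime field size; false-accept probability is about 1/P

def hash_value(w, v):
    """Inner-product hash of message symbols w with secret vector v, mod P."""
    return sum(wi * vi for wi, vi in zip(w, v)) % P

rng = random.Random(0)
w = [rng.randrange(P) for _ in range(8)]        # message symbols
v = [rng.randrange(1, P) for _ in range(8)]     # shared secret, independent of w
h = hash_value(w, v)                            # shared secret hash value

candidate = list(w)
print(hash_value(candidate, v) == h)            # True: correct decoding accepted
candidate[3] = (candidate[3] + 1) % P
print(hash_value(candidate, v) == h)            # False: corrupted decoding rejected
```

Since v is independent of the message, an adversary who alters a symbol changes the hash by a nonzero multiple of a secret coordinate, so a forgery passes only with probability about 1/P.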

Rateless code using shared secret
Each adversarial error packet can correspond to an addition (of erroneous information) and/or an erasure (of good information).
Code structure: y_k = w V^(k) + h_k, where w is the vectorized message, V_ij^(k) = a_k^{ij}, and h_k and a_k are shared secrets.
(Figure: linearly independent redundancy in short packets corrects additions; linearly dependent redundancy in long packets corrects erasures.)

Rateless code using signatures
Each adversarial error packet can correspond to an addition (of erroneous information) and/or an erasure (of good information).
Code structure: y_i = w S_i, where w is the vectorized message and S_i is a generic known matrix.
(Figure: linearly independent redundancy corrects additions; linearly dependent redundancy corrects erasures.)

Example: simple hybrid strategy on a wireless butterfly network
Node D has limited computation and outgoing capacity → it probabilistically checks/codes a fraction of packets.
−The proportion of packets checked/coded is chosen to maximize the expected information rate subject to the computational budget


Outline
−Non-multicast nested networks, streaming communication
−Multiple-source multicast, key distribution
−Rateless codes, computationally limited networks
−Non-uniform link capacities
 S. Kim, T. Ho, M. Effros and S. Avestimehr, "Network error correction with unequal link capacities," IT Transactions
 T. Ho, S. Kim, Y. Yang, M. Effros and A. S. Avestimehr, "On network error correction with limited feedback capacity," ITA

Uniform and non-uniform links
Adversarial errors on any z fixed but unknown links.
Uniform links:
−Multicast error correction capacity = min cut − 2z
−Worst-case errors occur on the min cut
Non-uniform links:
−Not obvious what the worst-case errors are: cut size versus link capacities
−Feedback across cuts matters (can provide information about errors on upstream links)
−Related work: adversarial nodes (Kosut, Tong & Tse 09)

Tighter cut-set bounding approach
The classical cut-set bound is equivalent to adding reliable, infinite-capacity bidirectional links between each pair of nodes on each side of the cut. Tighter bounds can be obtained by taking into account which forward links affect, or are affected by, which feedback links: add a link (i, j) only if there is a directed path from node i to node j that does not cross the cut.
(Figure: zigzag network)

New cut-set bound
For any cut Q, the adversary can erase a set of k ≤ z forward links; it then chooses two sets Z1, Z2 of z − k links each, such that the decoder cannot distinguish which set is adversarial:
−no feedback links are downstream of Z1, Z2, or
−downstream feedback links are included in Zi, or
−downstream feedback links Wi that are not in Zi have capacity small enough that distinct codewords have the same feedback link values
The sum of the capacities of the remaining forward links, plus the capacities of the links in W1, W2, is then an upper bound. The bound is tight on some families of zigzag networks.

Achievability – example
For z = 1, the upper bound is 5. Without the feedback link, the capacity is 2; rate 3 is achieved using the new code construction. Can we use the feedback link to achieve rate 5?

"Detect and forward" coding strategy (z = 1, capacity = 5)
−Some network capacity is allocated to redundancy enabling partial error detection at intermediate nodes
−Nodes that detect errors forward additional information allowing the sink to locate errors
−Feedback capacity is used to increase the number of symbols transmitted with error detection
−The remaining network capacity carries an MDS error correction code over all information symbols

Conclusion
Network error correction:
−New coding and outer bounding techniques for non-multicast demands, multiple sources, non-uniform errors
−A model for analysis and code design in various applications, e.g. robust streaming, key distribution
−Rateless and hybrid codes for computationally limited networks with adversaries

Thank you
