
1 Fighting Byzantine Adversaries in Networks: Network Error-Correcting Codes
Michelle Effros, Michael Langberg, Tracey Ho, Sachin Katti, Muriel Médard, Dina Katabi, Sidharth Jaggi

2 Obligatory Example/History
[Butterfly network: source s multicasts bits b1, b2 to sinks t1 and t2; the bottleneck edge carries b1+b2; each sink decodes (b1, b2); capacity C = 2.]
[ACLY00] Characterization, non-constructive
[LYC03], [KM02] Constructive (linear), exp-time design
[JCJ03], [SET03] Poly-time design, centralized design
[HKMKE03], [JCJ03] Decentralized design
[This work] All the above, plus security
Ever better... tons of work. [SET03] Gap provably exists.

3 Network Model
Multicast, wired and wireless. Simplifying assumptions: all links unit capacity (1 packet/transmission); acyclic network; network = hypergraph [GDPHE04], [LME04] (no interference). ALL of Alice's information decodable EXACTLY by EACH Bob.

4 Multicast Networks Webcasting P2P networks Sensor networks

5 Multicast Network Model
ALL of Alice's information decodable EXACTLY by EACH Bob. [Figure: network with min-cuts 3, 2, 2 to the receivers.] Upper bound for multicast capacity C: C ≤ min{C_i} [ACLY00]. With mixing, C = min{C_i} is achievable! [LCY02], [KM01], [JCJ03], [HKMKE03] Simple (linear) distributed codes suffice!

6 Mixing
F(2^m)-linear network [KM01]. Source: group together m bits b1, b2, ..., bm into one symbol. Every node: perform linear combinations (coefficients β1, β2, ..., βk) over the finite field F(2^m). Generalization: the X_i are length-n vectors over F(2^m): X1, X2, ..., Xk.
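The node operation above can be sketched in code. This is an illustrative sketch rather than anything from the talk: gf_mul multiplies in GF(2^8) modulo the polynomial x^8+x^4+x^3+x+1 (taking m = 8 and this particular reduction polynomial are assumptions), and mix forms the combination β1·X1 + ... + βk·Xk symbol by symbol.

```python
def gf_mul(a, b):
    """Multiply a and b in GF(2^8), reduced modulo x^8+x^4+x^3+x+1 (0x11b)."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a          # addition in GF(2^m) is XOR
        b >>= 1
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B       # low byte of the reduction polynomial 0x11b
    return p

def mix(coeffs, packets):
    """Output packet = sum_i beta_i * X_i, computed symbol-wise over GF(2^8)."""
    out = [0] * len(packets[0])
    for beta, pkt in zip(coeffs, packets):
        for j, sym in enumerate(pkt):
            out[j] ^= gf_mul(beta, sym)
    return out
```

Because addition in a characteristic-2 field is XOR, a node whose coefficients are all 1 reduces to plain packet XOR.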

7 Problem! Eavesdropped links Attacked links Corrupted links

8 Setup
Who knows what, stage by stage (A = Alice, B = Bob, C = Calvin):
1. Scheme: A, B, C
2. Network: C
3. Message: A, C
4. Code: C
5. Bad links: C
6. Coin: A
7. Transmit: B, C
8. Decode: B ("Eureka!")
Eavesdropped links: Z_I. Attacked links: Z_O. Privacy.

9 Result(s)
First codes with: optimal rates (C − 2Z_O, C − Z_O); poly-time; distributed; unknown topology; end-to-end; rateless; information-theoretically secure; information-theoretically private; wired/wireless. [HLKMEK04], [JLHE05], [CY06], [CJL06], [GP06]

10 Error Correcting Codes
Y = TX + E, where T is the generator matrix (e.g. of a Reed-Solomon code) and E is a low-weight error vector.

11 Error Correcting Codes
Y = TX + E = TX + T_Z Z, where T and T_Z are network transform matrices (T_Z unknown) and Z is a low-weight vector.
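A toy instance of the classical Y = TX + E setting can make this concrete. This is a sketch under assumed parameters, not the talk's construction: field GF(13), code length n = 5, dimension k = 2 (T is a Vandermonde / Reed-Solomon generator), and a single corrupted symbol located by brute force.

```python
P = 13                  # field size (a small prime, for readability)
XS = [1, 2, 3, 4, 5]    # evaluation points (rows of the Vandermonde matrix T)

def encode(msg):
    """Y = T X: evaluate the message polynomial at the points XS, mod P."""
    return [sum(c * x**i for i, c in enumerate(msg)) % P for x in XS]

def interpolate(pts, k):
    """Lagrange-interpolate the degree-<k polynomial through k points, mod P."""
    coeffs = [0] * k
    for xi, yi in pts:
        num = [1]       # numerator polynomial prod_{j != i} (x - xj)
        denom = 1
        for xj, _ in pts:
            if xj == xi:
                continue
            num = [(a - xj * b) % P for a, b in zip([0] + num, num + [0])]
            denom = denom * (xi - xj) % P
        inv = pow(denom, P - 2, P)          # inverse mod P (Fermat)
        for d in range(k):
            coeffs[d] = (coeffs[d] + yi * num[d] * inv) % P
    return coeffs

def correct_one_error(y, k=2):
    """Brute force: try erasing each position, interpolate, check the rest."""
    for bad in range(len(y)):
        pts = [(x, v) for i, (x, v) in enumerate(zip(XS, y)) if i != bad]
        msg = interpolate(pts[:k], k)
        fits = all(sum(c * x**d for d, c in enumerate(msg)) % P == v
                   for x, v in pts)
        if fits:
            return msg
    return None
```

Brute force is exponential in the number of errors; real Reed-Solomon decoders (Berlekamp-Welch and relatives) do this in polynomial time.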

12 When stuck…
"ε-rate secret uncorrupted channels": a useful abstraction/building block. Existing model ([GP06], [CJL06]); we improve on it!

13 Example: C = 3, Z_O = 1
[Figure: three n-length vectors X1, X2, X3; redundancy X3 = X1 + X2 added at source, plus 6 secret hashes of X; the receiver must solve for 4n+6 unknown scalars; the naive system is non-linear. Rate R = C − Z_O.]

14 Example: C = 3, Z_O = 1 (continued)
X3 = X1 + X2; 6 secret hashes of X; with 4n+6 knowns for 4n+6 unknowns, the system is invertible with high probability. Z = (0, z(2), z(3), …, z(n)).

15 Thm 1, Proof
Theorem 1: Rate C − Z_O − ε achievable with Z_I = {E}, given an ε-rate secret uncorrupted channel. Improves on [GP06/Avalanche] (decentralized) and [CJL06] (optimal). R = C − Z_O; packets carry a C×C identity-matrix header, n >> C, so the transfer matrix T can be recovered [HKMKE03].

16 Thm 1, Proof (continued)
Theorem 1: Rate C − Z_O − ε achievable with Z_I = {E}, given an ε-rate secret uncorrupted channel. T, T_Z: C×C matrix, invertible w.h.p.

17 Thm 2
Theorem 2: Rate C − 2Z_O − ε achievable with Z_I = {E}.

18 Example revisited
X3 = X1 + X2, plus n more constraints added on X: DX = 0. Distinguishes Z = (0, z(2), z(3), …, z(n)) from Z = (0, 0, 0, …, 0). Rate drops from R = C − Z_O to R = C − Z_O − redundancy = C − 2Z_O (nZ_O extra constraints). Tight (ECC bound, [CY06]).

19 Thm 2, "Proof"
Theorem 2: Rate C − 2Z_O − ε achievable with Z_I = {E}. R = C − 2Z_O; nZ_O extra constraints; D chosen uniformly at random, known to Alice, Bob and Calvin.
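The random parity constraints DX = 0 can be exercised in a toy. The sketch below only illustrates one ingredient, why a uniformly random parity vector catches any fixed nonzero corruption with probability 1 − 1/q; it does not model Calvin, who in the theorem also knows D. The field size Q is an assumption.

```python
import random

Q = 1_000_003   # field size for this sketch (an assumed prime)

def dot(u, v):
    """Inner product mod Q."""
    return sum(a * b for a, b in zip(u, v)) % Q

def detection_rate(z, trials=1000, seed=1):
    """Fraction of random parity vectors d with <d, z> != 0 (mod Q).

    For a fixed nonzero corruption z, a uniformly random d misses it
    only when <d, z> = 0, which happens with probability 1/Q."""
    rng = random.Random(seed)
    caught = sum(1 for _ in range(trials)
                 if dot([rng.randrange(Q) for _ in z], z) != 0)
    return caught / trials
```

With nZ_O independent constraints the miss probability drops geometrically, which is the intuition behind charging Z_O units of rate for the redundancy.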

20 Thm 2, "Proof" (continued)
Theorem 2: Rate C − 2Z_O − ε achievable with Z_I = {E}. Are the spaces disjoint? T'' may not be; a basis change separates the non-linear and linear parts, leaving an invertible system. That D has the appropriate dimensions is crucial.

21 Thm 3, Proof
Theorem 3: Rate C − Z_O − ε achievable with Z_I + 2Z_O < C, i.e. Z_I < C − 2Z_O. Using algorithm 2 for a small header, Alice can transmit secret, correct information… which can then be used for algorithm 1 decoding! Since algorithm 2's rate exceeds the eavesdropping rate (Z_I < R), this also gives information-theoretic privacy (Theorem 4, etc.).

22 Summary
Thm 1: rate C − Z_O (secret channel)
Thm 2: rate C − 2Z_O (omniscient adversary)
Thm 3: rate C − Z_O (limited adversary)
Optimal rates; poly-time; distributed; unknown topology; end-to-end; rateless; information-theoretically secure/private; wired/wireless.


24 Backup slides

25 Network Coding “Justification” R. Ahlswede, N. Cai, S.-Y. R. Li and R. W. Yeung, "Network information flow," IEEE Trans. on Information Theory, vol. 46, pp. 1204-1216, 2000. http://tesla.csl.uiuc.edu/~koetter/NWC/Bibliography.html ≈ 200 papers in 3 years NetCod Workshops, DIMACS working group, ISIT 2005 - 4+ sessions, tutorials, … Several patents, theses…

26 But what IS Network Coding?
"The core notion of network coding is to allow and encourage mixing of data at intermediate network nodes." (Network Coding homepage)

27 Point-to-point flows
[Figure: source s, sink t, capacity C.] Min-cut Max-flow (Menger's) Theorem [M27]; Ford-Fulkerson Algorithm [FF62].

28 Multicasting
Webcasting, P2P networks, sensor networks. [Figure: sources s1 … s|S|, sinks t1 … t|T|.]

29 Justifications revisited - I: Throughput
[Butterfly network: source s, sinks t1, t2. Without coding, the bottleneck edge can carry only one of b1, b2; with coding it carries b1+b2, and each sink decodes both (b1, b2).] [ACLY00]
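The butterfly gain amounts to two XORs, as in this minimal sketch (the tuple layout is schematic): the bottleneck edge carries b1 XOR b2, each sink also receives one bit on its direct edge, and XORing again recovers the other bit.

```python
def butterfly(b1, b2):
    """Route bits b1, b2 through the butterfly; return what each sink decodes."""
    mixed = b1 ^ b2            # the single bottleneck edge carries b1 + b2
    t1 = (b1, mixed ^ b1)      # t1: direct copy of b1, then b2 = (b1+b2) + b1
    t2 = (mixed ^ b2, b2)      # t2: direct copy of b2, then b1 = (b1+b2) + b2
    return t1, t2
```

Both sinks receive both bits per time step, so the multicast rate is 2, while pure routing must time-share the bottleneck.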

30 Gap Without Coding...
[Figure: a network where the coding capacity = h but the routing capacity ≤ 2.] [JSCEEJT05]

31 Multicasting
Upper bound for multicast capacity C: C ≤ min{C_i}. [ACLY00] achievable! [LYC02] linear codes suffice!! [KM01] "finite field" linear codes suffice!!!

32 Multicasting
F(2^m)-linear network [KM01]. Source: group together m bits b1, b2, …, bm. Every node: perform linear combinations (coefficients β1, β2, …, βk) over the finite field F(2^m).

33 Multicasting
Upper bound for multicast capacity C: C ≤ min{C_i}. [ACLY00] achievable! [LYC02] linear codes suffice!! [KM01] "finite field" linear codes suffice!!! [JCJ03], [SET03] polynomial-time code design!!!!

34 Thms: Deterministic Codes
For m ≥ log(|T|), there exists an F(2^m)-linear network code that can be designed in O(|E||T|C(C+|T|)) time [JCJ03], [SET03]. There exist networks for which the minimum m ≈ 0.5 log(|T|) [JCJ03], [LL03].

35 Justifications revisited - II: Robustness/Distributed design
[Figure: one link breaks.]

36 Justifications revisited - II: Robustness/Distributed design
[Butterfly network: the coded edge can carry b1+b2 or b1+2b2 (finite-field arithmetic); the sinks still decode (b1, b2).]

37 Thm: Random Robust Codes
[Figure: original network, source s, sinks t1 … t|T|, min-cuts C1 … C|T|.] C = min{C_i}.

38 Thm: Random Robust Codes
[Figure: faulty network with min-cuts C1' … C|T|'.] C' = min{C_i'}. If the value of C' is known to s, the same code can achieve rate C'! (Interior nodes oblivious.)

39 Thm: Random Robust Codes
For m sufficiently large and rate R < C: choose random coefficients [β] at each node. The probability over [β] that the code works is > 1 − |E||T|·2^(−m(C−R)+|V|) [JCJ03], [HKMKE03] (different notions of linearity). Decentralized design. Much "sparser" linear operations are possible (O(m) instead of O(m^2)) [JCE06]. The price is a probability of error: a necessary evil?
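The role of the field size can be simulated on the butterfly. This sketch makes assumptions not in the slide: a prime field GF(p) stands in for F(2^m), and only the two local coefficients of the single coding node are randomized. Both sinks decode iff their 2×2 transfer matrices are invertible, and the empirical failure probability shrinks roughly like 2/p as the field grows, mirroring the 2^(−m(C−R)) behavior.

```python
import random

def both_sinks_decode(p, beta1, beta2):
    """On the butterfly, t1 sees (x1, beta1*x1 + beta2*x2) and t2 sees
    (beta1*x1 + beta2*x2, x2); their 2x2 transfer matrices are invertible
    mod p iff beta2 != 0 and beta1 != 0, respectively."""
    return beta1 % p != 0 and beta2 % p != 0

def failure_rate(p, trials=20000, seed=0):
    """Empirical probability that uniformly random local coefficients fail."""
    rng = random.Random(seed)
    fails = sum(1 for _ in range(trials)
                if not both_sinks_decode(p, rng.randrange(p), rng.randrange(p)))
    return fails / trials
```

Over GF(2) random choices fail most of the time; over a field of size 257 the failure rate is under a percent, which is why large m makes decentralized random design practical.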

40 Zero-error Decentralized Codes
No a priori network topology information is available; information can only be percolated down links. Desired: zero-error code design. One additional resource: each node v_i has a unique ID number i (GPS coordinates/IP address/…). Requires yet other types of linear codes [JHE06?].

41 Inter-relationships between notions of linearity
[Diagram: implications among notions of linearity: Algebraic (A), Block (B), Convolutional (C); Multicast (M) vs. General (G) connections; global vs. local coefficients; local I/O equal or unequal; acyclic (a). Some implications do not exist; some incur an ε rate loss.] [JEHM04]


43 Justifications revisited - III: Security
An evil adversary hiding in the network, eavesdropping and injecting false information. [JLHE05], [JLHKM06?]

44 Multicasting
Simplifying assumptions (for this talk): single source; directed, acyclic graph; each link has unit capacity; links have zero delay. [Figure: source s, sinks t1 … t|T|, min-cuts C1 … C|T|.]

45 Kinds of linearity
Algebraic codes (symbols b1 … bm, coefficients β1 … βk); block codes (b0 … b(m−1)); convolutional codes (b0, b1, …).

46 Model 1 - Results
[Plot: capacity C vs. noise parameter p, both axes from 0 to 1; threshold at p = 0.5.]

47 Model 1 - Encoding
Packets T_1 … T_|E| of length n(1−ε); random values r_1 … r_|E|; and an nε-size block of hashes D_11 … D_|E||E|, where D_ij = T_j(1)·1 + T_j(2)·r_i + … + T_j(n(1−ε))·r_i^(n(1−ε)).

48 Model 1 - Encoding
[Same construction; the hash D_ij pairs packet T_j with random value r_i.]

49 Model 1 - Transmission
Sent: T_1 … T_|E|, r_1 … r_|E|, D_11 … D_|E||E|. Received (possibly corrupted): T_1' … T_|E|', r_1' … r_|E|', D_11' … D_|E||E|'.

50 Model 1 - Decoding
"Quick consistency check": does D_ij' = T_j(1)'·1 + T_j(2)'·r_i' + … + T_j(n(1−ε))'·r_i'^(n(1−ε)) hold?

51 Model 1 - Decoding
"Quick consistency check": does D_ij' = T_j(1)'·1 + T_j(2)'·r_i' + … + T_j(n(1−ε))'·r_i'^(n(1−ε)) hold, and symmetrically D_ji' = T_i(1)'·1 + T_i(2)'·r_j' + … + T_i(n(1−ε))'·r_j'^(n(1−ε))?

52 Model 1 - Decoding
Edge i is consistent with edge j when both checks hold: D_ij' = T_j(1)'·1 + … + T_j(n(1−ε))'·r_i'^(n(1−ε)) and D_ji' = T_i(1)'·1 + … + T_i(n(1−ε))'·r_j'^(n(1−ε)). This defines the consistency graph.

53 Model 1 - Decoding
[Figure: consistency graph on edges 1 … 5; self-loops are not important.]

54 Model 1 - Decoding
Detection: select vertices connected to at least |E|/2 other vertices in the consistency graph. Decode using the T_i on the corresponding edges.

55 Model 1 - Proof
If a corrupted T_j' ≠ T_j passes the check against an honest D_ij = T_j(1)·1 + T_j(2)·r_i + … + T_j(n(1−ε))·r_i^(n(1−ε)), then ∑_k (T_j(k) − T_j(k)')·r_i^k = 0. This is a polynomial in r_i of degree at most n over F_q, and the value of r_i is unknown to Zorba, so the probability of error is < n/q << 1.
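The degree argument can be exercised directly. A minimal sketch (the modulus Q and the single packet/digest pair are assumptions; the scheme cross-checks all pairs of edges): the digest D = ∑_k T(k)·r^k is evaluated by Horner's rule, and a tampered packet passes only if r happens to be a root of the nonzero difference polynomial, which has probability at most n/q over the choice of r.

```python
Q = (1 << 61) - 1   # field size for this sketch (a Mersenne prime; an assumption)

def digest(packet, r):
    """D = sum_k packet[k] * r^k mod Q, evaluated by Horner's rule."""
    acc = 0
    for sym in reversed(packet):
        acc = (acc * r + sym) % Q
    return acc

def consistent(packet, d, r):
    """The receiver's quick consistency check."""
    return digest(packet, r) == d
```

Here the tampered packet differs in one coefficient, so the difference polynomial is r^2, which is nonzero for any nonzero r; a forger who does not know r cannot do better than guessing a root.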

56 Aha! Network Coding!!!
Greater throughput; robust against random errors...


59 [Figure: Xavier transmits across a network, with Zorba hidden inside, to receivers Yvonne_1 … Yvonne_|T|.]

60 Setup
Who knows what, stage by stage (X = Xavier, Y = Yvonne, Z = Zorba):
1. Scheme: X, Y, Z
2. Network: Z
3. Message: X, Z
4. Code: Z
5. Bad links: Z
6. Coin: X
7. Transmit: Y, Z
8. Decode: Y ("Eureka!")
Wired and wireless (packet losses, fading). Eavesdropped links: Z_I. Attacked links: Z_O.

61 Unicast
1. Code (X, Y, Z); 2. Message (X, Z); 3. Bad links (Z); 4. Coin (X); 5. Transmission (Y, Z); 6. Decode correctly (Y) ("Eureka!").

62 Setup
Zorba sees M_I links (Z_I) and controls M_O links (Z_O); p_I = M_I/C, p_O = M_O/C. Xavier and the Yvonnes share no resources (no private key, no shared randomness). Zorba is computationally unbounded; Xavier and the Yvonnes perform only "simple" computations. Zorba knows the protocols and already knows almost all of Xavier's message (everything except Xavier's private coin tosses). Zorba (hidden) knows the network; Xavier and the Yvonnes don't. Distributed design (interior nodes oblivious; an overlay to network coding). Goal: transmit at "high" rate and decode correctly w.h.p.

63 Background
Noisy channel models (Shannon, …): Binary Symmetric Channel. [Plot: capacity C = 1 − H(p) vs. noise parameter p.]

64 Background
Noisy channel models (Shannon, …): Binary Erasure Channel. [Plot: capacity C = 1 − p vs. erasure probability p.]

65 Background
Adversarial channel models: "limited-flip" adversary, p_I = 1 (Hamming, Gilbert-Varshamov, McEliece et al.); large alphabets (F_q instead of F_2); shared randomness, cryptographic assumptions… [Plot: capacity vs. noise parameter p_O.]

66 Upper bounds
[Plot: C ≤ 1 − p_O.]

67 Upper bounds
[Plot: capacity vs. p_O; behavior beyond p_O = 0.5 marked with question marks (C = 0?).]

68 Unicast - Results [JLHE05]
[Plot: capacity vs. p_I = p_O ("noise parameter" = "knowledge parameter"); threshold at 0.5.]

69 Full knowledge [Folklore]
("Knowledge parameter" p_I = 1.) [Plot: capacity vs. p_O; threshold at 0.5.]

70 Ignorant Zorba
1. Code (X, Y, Z); 2. Message X_p, X_s (X); 3. Bad links (Z); 4. Coin (X); 5. Transmission (Y, Z); 6. Decode correctly (Y, Z) ("Eureka!"). Privacy: I(Z; X_s) = 0.

71 General Multicast Networks
[Plot: capacity (normalized by h) vs. p = |Z|/h; threshold at 0.5.] Source S, receivers R_1 … R_|T|, adversary Z. Slightly more intricate proof.

72 Unicast - Encoding
[Figure: |E| links, of which at least |E| − |Z| are uncorrupted.]

73 Unicast - Encoding
MDS code: X has |E| − |Z| rows of block length n over the finite field F_q; a Vandermonde matrix expands them to T_1 … T_|E|. Each row reserves its last nε symbols (the rate fudge-factor) for "easy to use consistency information"; each symbol is from F_q.

74 Unicast - Encoding
Along with T_1 … T_|E|, transmit a random value r and hashes D_1 … D_|E| (nε symbols), where D_i = T_i(1)·1 + T_i(2)·r + … + T_i(n(1−ε))·r^(n(1−ε)).

75 Unicast - Transmission
Sent: T_1 … T_|E|, r, D_1 … D_|E|. Received (possibly corrupted): T_1' … T_|E|', r', D_1' … D_|E|'.

76 Unicast - Quick Decoding
Choose the majority value of (r, D_1, …, D_|E|). Check whether D_i = T_i(1)'·1 + T_i(2)'·r + … + T_i(n(1−ε))'·r^(n(1−ε)); if so, accept T_i, else reject T_i. If T_i was modified, ∑_k (T_i(k) − T_i(k)')·r^k = 0 is a polynomial in r of degree at most n over F_q, and the value of r is unknown to Zorba, so the probability of error is < n/q << 1. Use the accepted T_i to decode.

77 General Multicast Networks
[Figure: multicast network with hidden adversary.]

78 Multicast Networks [HKMKE03]
y_s(j) = T x_s(j); rate h = C − M_O. Each block is sliced; an h×h identity-matrix header (h << n) lets a receiver recover T and compute x_s(j) = T^(−1) y_s(j).

79 Multicast Networks
Observation 1: the adversaries can be treated as new sources S'_1 … S'_|Z|. [Plot: capacity (normalized by h) vs. p_O; source S, receivers R_1 … R_|T|.]

80 Multicast Networks
With a supersource SS: y'_s(j) = T x_s(j) + T' x'_s(j), where the second term is the (unknown) corruption. Observation 2: w.h.p. over the network code design, the spaces {T x_s(j)} and {T' x'_s(j)} do not intersect (robust codes…).

81 Multicast Networks
y'_s(j) = T x_s(j) + T' x'_s(j), with ε redundancy. If the source imposes x_s(2) + x_s(5) − x_s(3) = 0, the receiver's combination y_s(2) + y_s(5) − y_s(3) is a vector in {T' x'_s(j)}; imposing x_s(3) + 2x_s(9) − 5x_s(1) = 0 yields another vector in {T' x'_s(j)}.

82 Multicast Networks
y'_s(j) = T x_s(j) + T' x'_s(j), with ε redundancy. Repeat M_O times to discover {T' x'_s(j)}; "zero out" {T' x'_s(j)} ("when you have eliminated the impossible, whatever remains, however improbable, must be the truth"); estimate T (the redundant x_s(j) are known); decode by linear algebra.

83 Multicast Networks
y'_s(j) = T x_s(j) + T' x'_s(j). Caveat: the constraint x_s(2) + x_s(5) − x_s(3) = 0 reveals a vector in {T' x'_s(j)} only if Zorba's injection violates it; if x'_s(2) + x'_s(5) − x'_s(3) = 0 as well, then y_s(2) + y_s(5) − y_s(3) = 0.

84 Scheme 1(a) “ε-rate secret uncorrupted channels” Useful abstraction

85 Scheme 1(b) “sub-header based scheme” Works … kind of… … for “many” networks

86 Scheme 2: "distributed network error-correcting code" (knowledge parameter p_I = 1)
[CY06]: bounds, high-complexity construction. [JHLMK06?]: tight, poly-time construction. [Plot: capacity vs. p_O; threshold at 0.5.]

87 Scheme 2: "distributed network error-correcting code"
y'_s(j) = T x_s(j) + T' x'_s(j) (error vector); rate 1 − 2p_O.

88 Scheme 2: "distributed network error-correcting code"
y'_s(j) = T x_s(j) + T' x'_s(j).

89 Scheme 2: "distributed network error-correcting code"
y'_s(j) = T'' x_s(j) + T' x'_s(j), with errors e, e'.

90 Scheme 2: "distributed network error-correcting code"
y'_s(j) = T'' x_s(j) + T' x'_s(j), with errors e, e'; decode by linear algebra.

91 Scheme 3: "non-omniscient adversary"
y'_s(j) = T'' x_s(j) + T' x'_s(j); M_I + 2M_O < C, i.e. M_I < C − 2M_O (Scheme 2's rate exceeds Zorba's observations). Using Scheme 2 as a small header, Xavier can transmit secret, correct information… which can then be used for Scheme 1(a) decoding!

92 Variations - Feedback
[Plot: capacity C vs. p.]

93 Variations - Know thy enemy
[Two plots: capacity C vs. p.]

94 Variations - Omniscient but not Omnipresent
Achievability: Gilbert-Varshamov, algebraic-geometry codes. Converse: generalized MRRW bound. [Plot: capacity C vs. p; threshold at 0.5.]

95 Variations - Random Noise
Separation. [Plot: capacity C vs. p, with the noisy-channel capacity C_N marked.]

96 Ignorant Zorba - Results
[Plot: capacity vs. noise parameter p; rate 1 − 2p; transmit X_p + X_s and X_s.]

97 Ignorant Zorba - Results
[Plot: capacity vs. noise parameter p; rate 1 − 2p.] MDS code example: send a+b+c, a+2b+4c, a+3b+9c.

