Storage Codes: Managing Big Data with Small Overheads
Presented by Anwitaman Datta & Frédérique E. Oggier, Nanyang Technological University, Singapore
Tutorial at NetCod 2013, Calgary, Canada. © 2013, A. Datta & F. Oggier, NTU Singapore
Big Data Storage: Disclaimer
A note from the trenches: "You know you have a large storage system when you get paged at 1 AM because you only have a few petabytes of storage left." – from Andrew Fikes' (Principal Engineer, Google) faculty summit talk 'Storage Architecture and Challenges', 2010.
We never get such calls!!
Big data
June 2011 EMC² study* – the world's data is more than doubling every 2 years, faster than Moore's Law – 1.8 zettabytes of data to be created in 2011.
Big data: big problem? big opportunity?
Zetta: 10^21. Zettabyte: if you stored all of this data on DVDs, the stack would reach from the Earth to the moon and back.
* http://www.emc.com/about/news/press/2011/20110628-01.htm
The data deluge: Some numbers
Facebook "currently" (in 2010) stores over 260 billion images, which translates to over 20 petabytes of data. Users upload one billion new photos (60 terabytes) each week and Facebook serves over one million images per second at peak. [quoted from Facebook's "Haystack" paper]
"On Saturday, photo number four billion was uploaded to photo sharing site Flickr. This comes just five and a half months after the 3 billionth and nearly 18 months after photo number two billion." – Mashable (13th October 2009) [http://mashable.com/2009/10/12/flickr-4-billion/]
Scale how?
To scale vertically (or scale up) means to add resources to a single node in a system.*
To scale horizontally (or scale out) means to add more nodes to a system, such as adding a new computer to a distributed software application.*
At this scale, not distributing is not an option!
* Definitions from Wikipedia
Failure (of parts) is Inevitable
But failure of the system is not an option either! – Failure is the pillar of rivals' success …
Deal with it
Data from Los Alamos National Laboratory (DSN 2006), gathered over 9 years: 4,750 machines, 24,101 CPUs.
Distribution of failures:
– Hardware 60%
– Software 20%
– Network/Environment/Humans 5%
Failures occurred between once a day and once a month.
Failure happens without fail, so …
Solution: Redundancy
Many Levels of Redundancy
Physical – Virtual resource – Availability zone – Region – Cloud
From: http://broadcast.oreilly.com/2011/04/the-aws-outage-the-clouds-shining-moment.html
Redundancy Based Fault Tolerance
Replicate data – e.g., 3 or more copies – in nodes on different racks. Can deal with switch failures. Power back-up using battery between racks (Google).
But At What Cost?
Failure is not an option, but … are the overheads acceptable?
Reducing the Overheads of Redundancy
Erasure codes:
– Much lower storage overhead
– High level of fault-tolerance
In contrast to replication or RAID based systems, erasure codes have the potential to significantly improve the "bottom line" – e.g., both Google's new DFS Colossus and Microsoft's Azure now use ECs.
Erasure Codes (ECs)
An (n,k) erasure code is a map that takes as input k blocks and outputs n blocks, thus introducing n-k blocks of redundancy.
3-way replication is a (3,1) erasure code: encoding takes k=1 block to n=3 encoded blocks.
An erasure code such that the k original blocks can be recreated out of any k encoded blocks is called MDS (maximum distance separable).
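A concrete illustration (not from the slides): a single XOR parity turns k=2 blocks into an n=3 MDS code, since any 2 of the 3 stored blocks recover the originals. A minimal Python sketch:

    # Illustrative (3,2) MDS erasure code using one XOR parity block.
    # Any 2 of the 3 encoded blocks suffice to rebuild the 2 originals.
    def encode(o1: bytes, o2: bytes):
        parity = bytes(a ^ b for a, b in zip(o1, o2))
        return [o1, o2, parity]            # n = 3 encoded blocks

    def decode(blocks):                    # blocks: dict {index: data}, any 2 of {0,1,2}
        if 0 in blocks and 1 in blocks:
            return blocks[0], blocks[1]
        known = blocks[0] if 0 in blocks else blocks[1]
        other = bytes(a ^ b for a, b in zip(known, blocks[2]))
        return (known, other) if 0 in blocks else (other, known)

    b = encode(b"AAAA", b"BBBB")
    assert decode({0: b[0], 2: b[2]}) == (b"AAAA", b"BBBB")   # survives losing block 1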
Erasure Codes (ECs): Decoding
[Figure: the original k blocks O1, …, Ok (the data = message) are encoded into n blocks B1, …, Bn; despite lost blocks, receiving any k' (≥ k) blocks allows decoding to reconstruct O1, …, Ok.]
Originally designed for communication – EC(n,k).
Erasure Codes for Networked Storage
[Figure: the data is now an object; its k blocks O1, …, Ok are encoded into n blocks B1, …, Bn stored in storage devices in a network; despite lost blocks, retrieving any k' (≥ k) blocks allows decoding to reconstruct the object.]
HDFS-RAID (from http://wiki.apache.org/hadoop/HDFS-RAID)
Distributed RAID File system (DRFS) client:
– provides application access to the files in the DRFS
– transparently recovers any corrupt or missing blocks encountered when reading a file (degraded read); does not carry out repairs
RaidNode: a daemon that creates and maintains parity files for all data files stored in the DRFS.
BlockFixer: periodically recomputes blocks that have been lost or corrupted.
– RaidShell allows on-demand repair triggered by an administrator.
Two kinds of erasure codes implemented: XOR code and Reed-Solomon code (typically 10+4, with 1.4x overhead).
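A back-of-envelope check of where the quoted 1.4x comes from (my arithmetic, not from the slide): with a (14,10) Reed-Solomon code, every 10 data blocks are stored alongside 4 parity blocks, so

    \frac{n}{k} = \frac{10+4}{10} = 1.4\times \text{ raw storage per byte, versus } 3\times \text{ for 3-way replication.}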
Replenishing Lost Redundancy for ECs
[Figure: when encoded blocks are lost, retrieve any k' (≥ k) blocks, decode to recover the original k blocks, recreate the lost blocks by re-encoding, and reinsert them in (new) storage devices, so that there are (again) n encoded blocks.]
Repair is needed for long term resilience. Repairs are expensive!
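Why expensive: with a traditional (n,k) MDS code, repairing even a single lost block requires downloading k blocks. For example (illustrative numbers, not from the slides), with (n,k) = (14,10) and block size B = 256 MB:

    \text{repair traffic} = k \cdot B = 10 \times 256\,\mathrm{MB} = 2.56\,\mathrm{GB} \text{ to restore one } 256\,\mathrm{MB} \text{ block,}

a 10x traffic amplification, versus 1x for replication.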
Tailor-Made Codes for Storage
Desired code properties include:
– Low storage overhead and good fault tolerance: traditional MDS erasure codes achieve these.
– Better repairability: smaller repair fan-in, reduced I/O for repairs, possibility of multiple simultaneous repairs, fast repairs, efficient B/W usage, …
– Better …: better data-insertion, better migration to archival, …
Pyramid (Local Reconstruction) Codes
Pyramid Codes: Flexible Schemes to Trade Space for Access Efficiency in Reliable Data Storage Systems, C. Huang et al. @ NCA 2007
Erasure Coding in Windows Azure Storage, C. Huang et al. @ USENIX ATC 2012
– Good for degraded reads (data locality)
– Not all repairs are cheap (only partial parity locality)
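A sketch of the basic construction (a standard example in this line of work; the specific parameters are my illustration): start from an MDS code and split one global parity into per-group local parities, e.g.

    \text{MDS }(11,8):\quad d_1,\dots,d_8,\; p_1,\, p_2,\, p_3
    \text{Pyramid }(12,8):\quad d_1,\dots,d_8,\; p_1^{(a)}(d_1,\dots,d_4),\; p_1^{(b)}(d_5,\dots,d_8),\; p_2,\, p_3

A lost d_2 is now repaired from d_1, d_3, d_4 and p_1^{(a)} – fan-in 4 instead of 8 – at the cost of one extra stored block; but a lost global parity p_2 or p_3 still needs a full-width repair (hence only partial parity locality).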
Regenerating Codes
Network information flow based arguments determine the "optimal" trade-off between storage and repair-bandwidth.
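For reference, the two extreme points of that trade-off curve (standard results from the regenerating codes literature, Dimakis et al.; M is the object size, d the repair fan-in, and the repair bandwidth is γ = dβ with β sent per helper):

    \text{MSR: } \alpha = \frac{M}{k}, \qquad \gamma = \frac{M}{k}\cdot\frac{d}{d-k+1}
    \text{MBR: } \alpha = \gamma = \frac{2Md}{2kd - k^2 + k}

MSR stores the minimum per node (MDS-like), while MBR minimizes repair traffic at the price of extra storage.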
Locally Repairable Codes
Codes satisfying: low repair fan-in, for any failure. The name is reminiscent of "locally decodable codes".
Self-repairing Codes
Usual disclaimer: "To the best of our knowledge", these were the first instances of locally repairable codes:
– Self-repairing Homomorphic Codes for Distributed Storage Systems – Infocom 2011
– Self-repairing Codes for Distributed Storage Systems – A Projective Geometric Construction – ITW 2011
Since then, there have been many other instances from other researchers/groups.
Note: k encoded blocks are enough to recreate the object. Caveat: not any arbitrary k (i.e., SRCs are not MDS). However, there are many such k-combinations.
Self-repairing Codes: Blackbox View
[Figure: n encoded blocks B1, …, Bn are stored in storage devices in a network; to recreate a lost block, retrieve some k'' (< k) blocks (e.g., k'' = 2) and reinsert the recreated block in a (new) storage device, so that there are (again) n encoded blocks.]
PSRC Example
Repair using two nodes (say N1 and N3): four pieces needed to regenerate two pieces.
(o1+o2+o4) + (o1) => o2+o4
(o3) + (o2+o3) => o2
(o1) + (o2) => o1+o2
Repair using three nodes (say N2, N3 and N4): three pieces needed to regenerate two pieces.
(o1+o2+o4) + (o4) => o1+o2
(o2) + (o4) => o2+o4
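A quick sanity check of the XOR algebra behind these repair equations (my illustration, with random byte strings standing in for the pieces):

    import os

    # Illustrative check (not the tutorial's code) of the PSRC repair equations.
    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    o1, o2, o3, o4 = (os.urandom(8) for _ in range(4))
    o124 = xor(xor(o1, o2), o4)   # the stored piece o1+o2+o4
    o23 = xor(o2, o3)             # the stored piece o2+o3

    # Repair using two nodes: four fetched pieces regenerate two pieces.
    assert xor(o124, o1) == xor(o2, o4)   # (o1+o2+o4) + (o1) => o2+o4
    assert xor(o3, o23) == o2             # (o3) + (o2+o3)   => o2
    # Repair using three nodes: three fetched pieces regenerate two pieces.
    assert xor(o124, o4) == xor(o1, o2)   # (o1+o2+o4) + (o4) => o1+o2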
Recap / Next
[Diagram: replicas and erasure coded data, with the operations around them]
– Data insertion: pipelined insertion into replicas (e.g., data for analytics); in-network coding into erasure coded data (e.g., multimedia)
– Data access: fault-tolerant data access (MSR's Reconstruction code)
– (partial) re-encode/repair of erasure coded data (e.g., Self-Repairing Codes)
Inserting Redundant Data
Data insertion:
– Replicas can be inserted in a pipelined manner
– Traditionally, erasure coded systems used a central point of processing
Can the process of redundancy generation be distributed among the storage nodes?
In-network coding
Ref: In-Network Redundancy Generation for Opportunistic Speedup of Backup, Future Generation Comp. Syst. 29(1), 2013
Motivations:
– Reduce the bottleneck at a single point. The "source" (or first point of processing) still needs to inject "enough information" for the network to be able to carry out the rest of the redundancy generation.
– Utilize network resources opportunistically. Data-centers: when network/nodes are not busy doing other things. P2P/F2F: when nodes are online.
More traffic (an ephemeral resource) but higher data insertion throughput.
In-network coding with SRC
Dependencies among self-repairing coded fragments can be exploited for in-network coding!
Consider a SRC(7,3) code with the following dependencies:
– r1, r2, r3=r1+r2, r4, r5=r1+r4, r6=r2+r4, r7=r1+r6
– Note: r6=r1+r7, etc. also hold … [details in the Infocom 2011 paper]
A naïve approach: the source inserts r1, r2 and r4; r6, r7 and the remaining fragments are then generated in-network.
Need a good "schedule" to insert the redundancy (see the sketch below)!
– Avoid cycles/dependencies
– Subject to "unpredictable" availability of resources!!
– Finding an optimal schedule turns out to be O(n!) even with an oracle.
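A minimal sketch of the idea (my illustration, not the paper's algorithm): the source injects only the independent fragments, and storage nodes generate the dependent ones by XORing what their peers already hold, in dependency order:

    import os

    # Hypothetical sketch: in-network generation of SRC(7,3) fragments.
    # The source inserts r1, r2, r4; nodes derive the rest pairwise.
    xor = lambda a, b: bytes(x ^ y for x, y in zip(a, b))

    network = {}                                    # node id -> stored fragment
    r1, r2, r4 = (os.urandom(8) for _ in range(3))  # injected by the source
    network.update({1: r1, 2: r2, 4: r4})

    # A valid schedule: each rule fires only once its inputs exist in the network.
    schedule = [(3, 1, 2),   # r3 = r1 + r2
                (5, 1, 4),   # r5 = r1 + r4
                (6, 2, 4),   # r6 = r2 + r4
                (7, 1, 6)]   # r7 = r1 + r6 (depends on the derived r6)
    for target, a, b in schedule:
        network[target] = xor(network[a], network[b])

    assert network[6] == xor(network[1], network[7])  # r6 = r1 + r7 also holds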
In-network coding: Heuristics
[Figures: redundancy-generation scheduling heuristics.] Several other policies (such as max data) were also tried.
In-network coding: Some results
RndFlw was the best heuristic – among those we tried.
– Provided 40% (out of a possible 57%) bandwidth savings at the source for a SRC(7,3) code
– An increase in the data-insertion throughput between 40-60%
– No free lunch: an increase of 20-30% in overall network traffic
Next: archival of "cold" data (migrating from replicas to erasure coded data).
RapidRAID
Ref: RapidRAID: Pipelined Erasure Codes for Fast Data Archival in Distributed Storage Systems (Infocom 2013). Has some local repairability properties, but that aspect is yet to be explored.
– Another code instance @ ICDCN 2013: Decentralized Erasure Coding for Efficient Data Archival in Distributed Storage Systems. A systematic code (unlike RapidRAID), found using numerical methods; a general theory for the construction of such codes, as well as their repairability properties, are open issues.
Problem statement: Can the existing (replication based) redundancy be exploited to create an erasure coded archive?
Slight change of view
[Figure: the same replicated data on nodes S1, …, S4, viewed in two ways.]
RapidRAID
Decentralizing the hitherto centralized encoding process.
[Figures: the centralized encoding process, and its decentralized counterpart.]
RapidRAID – Example (8,4) code
[Figures: starting from the initial configuration (replicas on the participating nodes), logical phase 1 performs pipelined coding across the nodes; logical phase 2 performs further local coding, yielding the resulting linear code.]
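A toy sketch of pipelined encoding in this spirit (my own illustration over XOR; the actual RapidRAID coefficients and layout are in the Infocom 2013 paper): each node in the pipeline folds its locally stored replica block into a running combination before forwarding it:

    import os

    # Toy pipelined-encoding sketch, not RapidRAID's actual finite-field code:
    # node i holds replica block x_i and forwards a running partial combination.
    xor = lambda a, b: bytes(p ^ q for p, q in zip(a, b))

    blocks = [os.urandom(8) for _ in range(4)]   # replica blocks on 4 nodes

    def pipeline_encode(blocks):
        partials = []
        acc = bytes(len(blocks[0]))              # zero block
        for x in blocks:                         # one hop per node in the pipeline
            acc = xor(acc, x)                    # phase 1: fold in local data
            partials.append(acc)                 # each node keeps its partial
        return partials                          # phase 2 would re-combine these locally

    partials = pipeline_encode(blocks)
    expected = blocks[0]                         # parity of all 4 blocks
    for x in blocks[1:]:
        expected = xor(expected, x)
    assert partials[-1] == expected              # last node holds the full combination

The point of the pipeline is that no single node has to read all k replicas: each hop adds one local block, so the encoding work and traffic are spread across the nodes.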
RapidRAID: Some results
[Figures: experimental results.]
Big pic
[Diagram: replicas and erasure coded data, with pipelined insertion (e.g., data for analytics), in-network coding (e.g., multimedia), archival of "cold" data, fault-tolerant data access (MSR's Reconstruction code), and (partial) re-encode/repair (e.g., Self-Repairing Codes).]
Agenda: a composite system achieving all these properties …
Wrapping up: A moment of reflection
Revisiting repairability – an engineering alternative:
– Redundantly Grouped Cross-object Coding for Repairable Storage (APSys 2012)
– The CORE Storage Primitive: Cross-Object Redundancy for Efficient Data Repair & Access in Erasure Coded Storage (arXiv:1302.5192)
HDFS-RAID compatible implementation: http://sands.sce.ntu.edu.sg/StorageCORE/
Wrapping up: A moment of reflection
[Figure: a grid of encoded pieces e_ij, where row i holds the n erasure coded pieces of object i (erasure coding of individual objects), and column j stacks piece j of the m different objects together with a RAID-4 parity p_j across them (RAID-4 of erasure coded pieces of different objects). Reminiscent of product codes!]
Separation of concerns
Two distinct design objectives for distributed storage systems: fault-tolerance and repairability.
An extremely simple idea – introduce two different kinds of redundancy:
– Any (standard) erasure code – for fault-tolerance
– RAID-4 like parity (across encoded pieces of different objects) – for repairability
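A minimal sketch of the cross-object parity idea (my illustration; CORE's actual primitive is in the papers above): m encoded pieces of different objects stored on one "column" of nodes are protected by a single extra XOR parity, so one lost piece is repaired from its m-1 co-located neighbours plus the parity, instead of from k pieces of its own object:

    import os
    from functools import reduce

    # Hypothetical sketch of RAID-4 parity across encoded pieces of m
    # different objects (one column of the CORE layout above).
    xor = lambda a, b: bytes(x ^ y for x, y in zip(a, b))

    m = 4
    column = [os.urandom(8) for _ in range(m)]   # e_1j, ..., e_mj
    parity = reduce(xor, column)                 # p_j

    # Repairing a single lost piece reads m pieces (the other m-1 plus the
    # parity) rather than k pieces of the object's own erasure code.
    lost = 2
    survivors = [c for i, c in enumerate(column) if i != lost]
    repaired = reduce(xor, survivors + [parity])
    assert repaired == column[lost]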
CORE repairability
Choosing a suitable m < k:
– Reduction in data transfer for repair
– Repair fan-in disentangled from the base code parameter "k" (a large "k" may be desirable for faster (parallel) data access; codes typically have trade-offs between repair fan-in, the code parameter "k" and the code's storage overhead (n/k))
However: the gains from reduced fan-in are probabilistic – for i.i.d. failures with probability "f".
Possible to reduce repair time – by pipelining data through the live nodes, and computing partial parities.
Interested?
– Follow: http://sands.sce.ntu.edu.sg/CodingForNetworkedStorage/
– Also, two surveys on (repairability of) storage codes: one short, at a high level (SIGACT Distr. Comp. News, Mar. 2013); one detailed (FnT, June 2013)
– Get involved: {anwitaman,frederique}@ntu.edu.sg
ଧନ୍ଯବାଦ (Thank you)