Presentation on theme: "XtremIO Data Protection (XDP) Explained"— Presentation transcript:
Slide 1: XtremIO Data Protection (XDP) Explained
Slide 2: XDP Benefits
Combines the best traits of traditional RAID with none of its drawbacks:
- Ultra-low 8% fixed capacity overhead
- No RAID levels, stripe sizes, chunk sizes, etc.
- High levels of data protection:
  - Sustains up to two simultaneous failures per DAE*
  - Sustains multiple consecutive failures (with adequate free capacity)
- "Hot Space": spare capacity is distributed across the array (no dedicated hot spares)
- Rapid rebuild times
- Superior flash endurance
- Predictable, consistent, sub-millisecond performance

*v2.2 encodes data for N+2 redundancy and supports a single rebuild per DAE. A future XIOS release will add double concurrent rebuild support.
Slide 3: XDP Stripe – Logical View
The following slides show a simplified example of XDP. The stripe here is reduced in size to simplify the explanation; in reality, XDP uses a (23+2) x 28 stripe.
- 7 data columns (C1–C7) and 6 data rows
- 2 parity columns, P and Q:
  - P is a column that contains parity per row (P1–P6)
  - Q is a column that contains parity per diagonal (Q1–Q7)
- Every block in the XDP stripe is 4KB in size
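The row/diagonal parity idea on this slide can be sketched in miniature. The toy below uses the same 6x7 geometry but shrinks each block from 4KB to 4 bytes, and assumes a simple (row + col) mod 6 diagonal assignment; that assignment is an illustration only, since the deck does not specify XDP's actual diagonal layout.

```python
# Toy model of an XDP-style stripe: 6 data rows x 7 data columns,
# plus a row-parity column P and a diagonal-parity column Q.
# ASSUMPTIONS: 4-byte blocks instead of 4KB, and diagonals assigned
# by (row + col) % ROWS -- not XtremIO's documented layout.
import os
from functools import reduce

ROWS, COLS, BLOCK = 6, 7, 4

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# Random data blocks: stripe[row][col]
stripe = [[os.urandom(BLOCK) for _ in range(COLS)] for _ in range(ROWS)]

# P: one parity block per row
P = [xor_blocks(stripe[r]) for r in range(ROWS)]

# Q: one parity block per diagonal
Q = [xor_blocks([stripe[r][c] for r in range(ROWS) for c in range(COLS)
                 if (r + c) % ROWS == d]) for d in range(ROWS)]

# Row parity lets us rebuild any single lost block in a row:
lost = stripe[2][4]
rebuilt = xor_blocks([stripe[2][c] for c in range(COLS) if c != 4] + [P[2]])
assert rebuilt == lost
```

The rebuild at the end shows why a single lost block costs one XOR pass over its row: parity is just the XOR of the surviving blocks.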
Slide 4: Physical View
- Each SSD contains the same number of P and Q columns
- The stripe's columns are randomly distributed across the SSDs to avoid hot spots and congestion
- Although each column is represented in this diagram as a logical block, the system can read or write at a granularity of 4KB or less
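A quick simulation makes the placement claim concrete. Assuming a random permutation of SSDs per stripe (my illustration; the deck doesn't state the exact mechanism), parity columns end up spread almost perfectly evenly, so no drive becomes a parity hot spot:

```python
# Sketch of the placement idea: each stripe's 25 columns (23 data +
# P + Q) land on the 25 SSDs in a random order per stripe.
# ASSUMPTION: a fresh random permutation per stripe is used purely
# for illustration of the "no hot spots" claim.
import random
from collections import Counter

random.seed(1)
SSDS, STRIPES = 25, 10_000
columns = ['C%d' % i for i in range(1, 24)] + ['P', 'Q']

parity_per_ssd = Counter()
for _ in range(STRIPES):
    order = random.sample(range(SSDS), SSDS)   # random SSD for each column
    for col, ssd in zip(columns, order):
        if col in ('P', 'Q'):
            parity_per_ssd[ssd] += 1

# Every SSD ends up with roughly the same share of parity columns:
expected = 2 * STRIPES / SSDS                  # 800 parity columns each
assert all(abs(n - expected) < 0.15 * expected
           for n in parity_per_ssd.values())
```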
Slide 5: SSD Failure
If the SSD where C1 is stored has failed, let's see how XDP efficiently recovers the stripe:
1. XDP always reads the first two rows in a stripe and recovers C1's blocks using the row parity stored in P.
2. Next, XDP recovers data using the diagonal parity Q. It first reads the parity information from Q.
3. The system reads the rest of the diagonal data (columns C5, C6 and C7) and computes the value of C1; the data is recovered using the Q parity together with the data blocks from C2 and C3 that are already in Storage Controller memory.
4. The remaining data blocks are recovered using the diagonal parity, blocks previously read and stored in controller memory, and only minimal additional reads from SSD.
The recovered C1 column is placed in spare capacity in the system. XDP reduces the reads required to recover data by 25% (30 vs. 42), increasing rebuild performance compared with traditional RAID: the expedited recovery process completes with fewer reads and parity compute cycles.
Note: this slide presents the XDP capabilities; not all of those capabilities will be available at first GA (but they are planned for a future release).
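The read-saving argument above can be checked by counting I/Os in the toy 6x7 stripe. The sketch below only tallies reads (it does not redo the parity math), and it reuses the assumed (row + col) mod 6 diagonal layout, so it lands on 34 reads rather than the deck's 30; the point it demonstrates is the same: caching blocks in controller memory and mixing row and diagonal parity beats the naive 42-read rebuild.

```python
# Count SSD reads needed to rebuild failed column C1 in the toy
# 6x7 stripe. ASSUMPTION: diagonals follow (row + col) % 6, so the
# exact totals differ from the deck's 30-vs-42 figure.
ROWS, COLS = 6, 7
FAILED = 0                                    # column C1 is lost

# Naive row-parity rebuild: every row reads its 6 surviving data
# blocks plus P, with no reuse across rows.
naive_reads = ROWS * ((COLS - 1) + 1)         # 42 reads

# XDP-style rebuild: keep everything read in controller memory.
cache, reads = set(), 0

def read(block):                # ('data', r, c), ('P', r) or ('Q', d)
    global reads
    if block not in cache:
        reads += 1
        cache.add(block)

# Phase 1: read the first two rows fully, recover C1 via row parity.
for r in (0, 1):
    for c in range(1, COLS):
        read(('data', r, c))
    read(('P', r))
    cache.add(('data', r, FAILED))            # recovered block stays cached

# Phase 2: recover the remaining rows via diagonal parity Q,
# reading only the diagonal blocks that are not already cached.
for r in range(2, ROWS):
    d = r                                     # diagonal of block (r, 0)
    read(('Q', d))
    for c in range(1, COLS):
        read(('data', (d - c) % ROWS, c))     # other blocks on diagonal d
    cache.add(('data', r, FAILED))

print(reads, 'reads vs', naive_reads, 'naive')   # -> 34 reads vs 42 naive
assert reads < naive_reads
```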
Slide 6: XDP Rebuilds & Hot Space
- Allows SSDs to fail in place
- Rapid rebuilds
- No performance impact after rebuild completes, for up to five failed SSDs per X-Brick:

  Failed SSDs   Performance
  3             ~330K IOPS
  4             ~330K IOPS
  5             ~330K IOPS
Slide 7: Stripe Update at 80% Utilization
The diagram shows an array that is 80% full (nine stripes, S1–S9, ranked by % of free blocks).
- The system ranks stripes according to utilization level and always writes to the stripe that is most free.
- Data is written to SSD as soon as enough blocks arrive to fill the entire emptiest stripe in the system (in this example, 17 blocks are required).
- After each write, stripes are re-ranked according to % of free blocks, and subsequent updates are performed using the same algorithm.
- The example shows new I/Os overwriting addresses with existing data, so there is no net increase in capacity consumed (space frees up in other stripes).
- At least one stripe is guaranteed to be 40% empty, so hosts benefit from the performance of a 40% empty array rather than a 20% empty array.
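The ranking-and-fill algorithm above can be sketched with a max-heap keyed on free blocks. The class name, the per-stripe free counts, and the flush policy below are illustrative assumptions matching the slide's example (17 free blocks in a 40%-empty 42-block stripe), not XtremIO's actual implementation:

```python
# Sketch of XDP's write placement: rank stripes by free blocks and
# flush incoming 4KB blocks into the emptiest stripe once it can be
# completely filled. ASSUMPTION: toy stripe sizes and a simple heap;
# the real system re-ranks as overwrites free space in other stripes.
import heapq

class StripeRanker:
    def __init__(self, free_per_stripe):
        # max-heap on free blocks, via negated counts
        self.heap = [(-free, sid) for sid, free in free_per_stripe.items()]
        heapq.heapify(self.heap)
        self.pending = 0                     # buffered 4KB blocks

    def write_blocks(self, n):
        """Buffer n blocks; flush whenever the emptiest stripe can be filled."""
        self.pending += n
        flushed = []
        while self.heap and self.pending >= -self.heap[0][0] > 0:
            free, sid = heapq.heappop(self.heap)
            self.pending -= -free
            flushed.append(sid)              # stripe sid written completely full
            heapq.heappush(self.heap, (0, sid))
        return flushed

# Nine stripes like the slide: 40% free (17 of 42 blocks), 20% free, full.
ranker = StripeRanker({'S5': 17, 'S1': 17, 'S9': 17,
                       'S8': 8, 'S2': 8, 'S6': 8,
                       'S3': 0, 'S7': 0, 'S4': 0})
print(ranker.write_blocks(17))   # 17 blocks fill one 40%-empty stripe
```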
Slide 8: XDP Stripe – The Real Numbers
The previous slides used an XDP stripe based on 7 x 6 data blocks; in reality the system uses 23 data columns x 28 data rows, plus the P and Q parity columns, spread across 25 SSDs.

  Number of 4KB data blocks in a stripe   28 x 23 = 644
  Amount of data in a stripe              4KB x 644 = 2,576KB
  Number of parity blocks in a stripe     57
  Total number of blocks in a stripe      701
  Total stripes in one X-Brick            7.5TB per X-Brick / 2,576KB ≈ 3M stripes

  RAID overhead (of P, Q) = 57/701 = 8%
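The slide's arithmetic checks out end to end; a few lines verify it:

```python
# The stripe arithmetic from this slide, checked end to end.
data_blocks = 28 * 23             # 4KB data blocks per stripe
data_kb = data_blocks * 4         # data per stripe, in KB
parity_blocks = 57                # P and Q blocks per stripe (from the slide)
total_blocks = data_blocks + parity_blocks
overhead = parity_blocks / total_blocks

assert data_blocks == 644
assert data_kb == 2576
assert total_blocks == 701
print(f'overhead = {overhead:.1%}')     # -> 8.1%

# Stripes per X-Brick: 7.5TB of data capacity / 2,576KB of data per stripe
stripes = 7.5e9 / data_kb               # 7.5TB expressed in (decimal) KB
print(f'{stripes / 1e6:.1f}M stripes')  # -> 2.9M stripes, i.e. about 3M
```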
Slide 9: Update Overhead Compared
(The writes-per-update figures for RAID-5/6/1 below are the standard small-write penalties; the flattened source table listed only one number per row.)

  RAID Scheme        Reads per Update   Writes per Update   Capacity Overhead
  RAID-5             2                  2                   N + 1
  RAID-6             3                  3                   N + 2
  RAID-1             0                  2                   N x 2
  XtremIO (at 80%)   1.22               1.22                8%
  XtremIO (at 90%)   1.44               1.44                8%