
XtremIO Data Protection (XDP) Explained


1 XtremIO Data Protection (XDP) Explained

2 XDP Benefits
- Combines the best traits of traditional RAID with none of its drawbacks
- Ultra-low 8% fixed capacity overhead
- No RAID levels, stripe sizes, chunk sizes, etc.
- High levels of data protection:
  - Sustains up to two simultaneous failures per DAE*
  - Sustains multiple consecutive failures (with adequate free capacity)
- “Hot Space”: spare capacity is distributed (no hot spares)
- Rapid rebuild times
- Superior flash endurance
- Predictable, consistent, sub-millisecond performance

*v2.2 encodes data for N+2 redundancy and supports a single rebuild per DAE. A future XIOS release will add double concurrent rebuild support.

3 XDP Stripe – Logical View
The stripe has 7 data columns (C1-C7) and 6 data rows, plus 2 parity columns:
- P is a column that contains parity per row (P1-P6).
- Q is a column that contains parity per diagonal (Q1-Q7).
Every block in the XDP stripe is 4KB in size.
The XDP stripe here is reduced in size to simplify the explanation of the technology; in reality, XDP uses a (23+2) x 28 stripe.
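The row/diagonal parity construction can be sketched in a few lines of Python. This is illustrative only: the diagonal numbering, (r + c) mod 7, and the 4-byte stand-in blocks are assumptions for the toy 6x7 stripe, not XtremIO's actual layout. In this toy the diagonal count (rows + 1 = 7) happens to equal the column count.

```python
import os

# Simplified XDP stripe from the slide: 6 data rows x 7 data columns
# (the real array uses 28 x 23). 4-byte blocks stand in for 4KB blocks.
ROWS, COLS, BLOCK = 6, 7, 4

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def build_parity(stripe):
    """Return (P, Q): P[r] = XOR of data row r, Q[d] = XOR of diagonal d."""
    P = [bytes(BLOCK)] * ROWS
    Q = [bytes(BLOCK)] * COLS
    for r in range(ROWS):
        for c in range(COLS):
            P[r] = xor(P[r], stripe[r][c])
            Q[(r + c) % COLS] = xor(Q[(r + c) % COLS], stripe[r][c])
    return P, Q

stripe = [[os.urandom(BLOCK) for _ in range(COLS)] for _ in range(ROWS)]
P, Q = build_parity(stripe)

# A single-block update touches only one P block and one Q block:
# write the new data, then XOR the old-vs-new delta into the affected
# row parity and diagonal parity.
r, c, new = 2, 5, os.urandom(BLOCK)
delta = xor(stripe[r][c], new)
stripe[r][c] = new
P[r] = xor(P[r], delta)
Q[(r + c) % COLS] = xor(Q[(r + c) % COLS], delta)
assert (P, Q) == build_parity(stripe)  # incremental update matches full rebuild
```

The final assertion shows why small writes are cheap here: updating one 4KB block never requires recomputing the whole stripe's parity.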

4 XDP Stripe – Physical View
- Each SSD contains the same number of P and Q columns.
- The stripe's columns (C1-C7, P, Q) are randomly distributed across the SSDs to avoid hot spots and congestion.
- Although each column is shown in the diagram as a single logical block, the system can read or write at a granularity of 4KB or less.

5 SSD Failure
If the SSD holding column C1 fails, XDP recovers the stripe as follows:
- XDP always reads the first two rows of the stripe and recovers C1's blocks there using the row parity stored in P.
- Next, XDP recovers data using the diagonal parity. It first reads the parity information from the Q column.
- Data is recovered using the Q parity together with data blocks from C2 and C3 that are already in Storage Controller memory; the system reads the rest of the diagonal's data (columns C5, C6 and C7) and computes the value of C1.
- The remaining data blocks are recovered the same way: diagonal parity, plus blocks previously read into controller memory, plus a minimal number of additional SSD reads.
- The recovered C1 column is placed in spare capacity in the system.
By reusing blocks already in memory, XDP cuts the reads required to recover the data by roughly 25% (30 reads vs. 42), increasing rebuild performance compared with traditional RAID: the recovery completes with fewer reads and fewer parity compute cycles.
Note: this slide presents XDP capabilities; not all of them will be available at first GA (but they are planned for a future release).
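The read-saving recovery flow can be simulated to reproduce the 30-vs-42 read count. This is a toy reconstruction under an assumed 6-row x 7-column geometry with diagonal (r + c) mod 7, not XtremIO's real implementation:

```python
import os

ROWS, COLS, BLOCK = 6, 7, 4  # toy stripe; 4-byte blocks stand in for 4KB

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def build_parity(stripe):
    P = [bytes(BLOCK)] * ROWS
    Q = [bytes(BLOCK)] * COLS
    for r in range(ROWS):
        for c in range(COLS):
            P[r] = xor(P[r], stripe[r][c])
            Q[(r + c) % COLS] = xor(Q[(r + c) % COLS], stripe[r][c])
    return P, Q

def recover_column(stripe, P, Q, f):
    """Recover failed column f, counting every SSD read exactly once."""
    reads, memory, out = 0, {}, {}
    for r in (0, 1):                      # rows 0-1: plain row-parity recovery
        acc, reads = P[r], reads + 1      # read P[r]
        for c in range(COLS):
            if c != f:
                memory[(r, c)] = stripe[r][c]
                reads += 1
                acc = xor(acc, memory[(r, c)])
        out[r] = acc
    for r in range(2, ROWS):              # remaining rows: diagonal parity,
        d = (r + f) % COLS                # reusing blocks already in memory
        acc, reads = Q[d], reads + 1      # read Q[d]
        for rr in range(ROWS):
            if rr == r:
                continue
            cc = (d - rr) % COLS          # the diagonal's block in row rr
            if (rr, cc) not in memory:
                memory[(rr, cc)] = stripe[rr][cc]
                reads += 1
            acc = xor(acc, memory[(rr, cc)])
        out[r] = acc
    return out, reads

stripe = [[os.urandom(BLOCK) for _ in range(COLS)] for _ in range(ROWS)]
P, Q = build_parity(stripe)
out, reads = recover_column(stripe, P, Q, f=0)           # column C1 fails
assert all(out[r] == stripe[r][0] for r in range(ROWS))  # fully recovered
print(reads, "reads vs", ROWS * COLS, "for row-parity-only rebuild")  # 30 vs 42
```

Rows 0-1 cost 7 reads each; each later row's diagonal already has its row-0 and row-1 members in memory, so it needs only 4 new reads, giving 14 + 4x4 = 30 versus 6x7 = 42 for a pure row-parity rebuild.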

6 XDP Rebuilds & Hot Space
- Allows SSDs to fail in place
- Rapid rebuilds
- No performance impact after the rebuild completes, for up to five failed SSDs per X-Brick:
  - 3 failed SSDs: ~330K IOPS
  - 4 failed SSDs: ~330K IOPS
  - 5 failed SSDs: ~330K IOPS

7 Stripe Update at 80% Utilization
- The diagram shows an array that is 80% full.
- The system ranks stripes according to utilization level and always writes to the stripe that is most free.
- It writes to SSD as soon as enough blocks arrive to fill the entire emptiest stripe in the system (in this example, 17 blocks are required).
- The example shows new I/Os overwriting addresses with existing data, so there is no net increase in capacity consumed; space frees up in other stripes.
- After the write, stripes are re-ranked according to their percentage of free blocks, and subsequent updates are performed using the same algorithm.
- At least one stripe is guaranteed to be 40% empty, so hosts get the performance of a 40%-empty array rather than a 20%-empty one.

Ranking before the write (stripe: % free blocks):
  S3: 0%, S8: 0%, S2: 0%, S6: 20%, S5: 20%, S1: 20%, S9: 40%, S7: 40%, S4: 40%
Ranking after the write (S9 was filled; the overwrites freed blocks in S3):
  S9: 0%, S8: 0%, S2: 0%, S6: 20%, S5: 20%, S1: 20%, S7: 40%, S4: 40%, S3: 40%
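The ranking-and-fill placement described on this slide can be sketched as follows. The 42-block stripe size and the overwrite pattern (old copies living in S3) are illustrative assumptions chosen to match the slide's numbers:

```python
STRIPE_BLOCKS = 42  # assumed toy stripe size; 40% free rounds to 17 blocks

# % free per stripe, ranked as in the slide (emptiest first)
free = {"S9": 40, "S7": 40, "S4": 40,
        "S1": 20, "S5": 20, "S6": 20,
        "S2": 0, "S3": 0, "S8": 0}

def emptiest(free):
    """The stripe the system fills next: the one with the most free blocks."""
    return max(free, key=free.get)

def flush(free, freed_elsewhere):
    """Fill the emptiest stripe; host overwrites free their old copies elsewhere."""
    target = emptiest(free)
    blocks_needed = round(free[target] * STRIPE_BLOCKS / 100)
    free[target] = 0                   # target stripe is now completely full
    for stripe, pct in freed_elsewhere.items():
        free[stripe] += pct            # overwritten addresses free up
    return target, blocks_needed

# New I/Os overwrite addresses whose old copies live in S3, so S3 frees up:
target, n = flush(free, {"S3": 40})
print(target, n)                       # S9 17
assert emptiest(free) == "S7"          # after re-ranking, S7 is the emptiest
```

Because writes always land in the emptiest stripe and overwrites free space elsewhere, the ranking replenishes itself: there is always a mostly-empty stripe to absorb the next burst of writes.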

8 XDP Stripe - the Real Numbers
The previous slides used a simplified XDP stripe of 7 x 6 data blocks; in reality the system uses 28 x 23 data blocks (28 data rows, 23 data columns) spread across 25 SSDs (23 data columns plus the P and Q parity columns).

Number of 4KB data blocks in a stripe:  28 x 23 = 644
Amount of data in a stripe:             4KB x 644 = 2,576KB
Number of parity (P, Q) blocks:         57
Total number of blocks in a stripe:     644 + 57 = 701
Total stripes in one X-Brick:           7.5TB per X-Brick / 2,576KB ≈ 3M stripes
RAID overhead (of P, Q):                57/701 ≈ 8%
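The slide's arithmetic can be checked end to end. The split of the 57 parity blocks into 28 P blocks plus 29 Q blocks (one per diagonal, i.e. rows + 1) is inferred from the stripe geometry rather than stated on the slide:

```python
# XDP real stripe geometry: 28 data rows x 23 data columns of 4KB blocks.
data_rows, data_cols = 28, 23
data_blocks = data_rows * data_cols          # 644 data blocks per stripe
data_kb = 4 * data_blocks                    # 2,576 KB of data per stripe
parity_blocks = data_rows + (data_rows + 1)  # 28 P + 29 Q = 57
total_blocks = data_blocks + parity_blocks   # 701
overhead = parity_blocks / total_blocks      # ~8%
stripes = 7.5e12 / (data_kb * 1024)          # ~3M stripes per 7.5 TB X-Brick

print(data_blocks, data_kb, parity_blocks, total_blocks)  # 644 2576 57 701
print(f"overhead {overhead:.1%}, stripes {stripes/1e6:.1f}M")
```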

9 Update Overhead Compared
RAID Scheme        Reads per Update   Writes per Update   Capacity Overhead
RAID-5             2                  2                   N + 1
RAID-6             3                  3                   N + 2
RAID-1             0                  2                   N × 2
XtremIO (at 80%)   1.22               1.22                8%
XtremIO (at 90%)   1.44               1.44                8%
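The XtremIO rows can be reproduced from the earlier slides: at 80% utilization the emptiest stripe is 40% free, so each flush absorbs 0.4 x 644 user blocks while also writing the stripe's 57 parity blocks. This derivation is inferred from the slides' numbers, not an official formula:

```python
DATA_BLOCKS, PARITY_BLOCKS = 644, 57  # real stripe geometry from slide 8

def io_per_update(utilization):
    """Average parity-induced I/O amplification per user block written."""
    free_frac = 2 * (1 - utilization)      # emptiest stripe's free fraction
    data_writes = free_frac * DATA_BLOCKS  # user blocks absorbed per flush
    return (data_writes + PARITY_BLOCKS) / data_writes

print(round(io_per_update(0.80), 2))   # 1.22
print(round(io_per_update(0.90), 2))   # 1.44
```

The same model explains why overhead rises with utilization: a fuller array leaves the emptiest stripe less empty, so the fixed 57 parity blocks are amortized over fewer user writes.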
