
1 A. Ian Vogelesang Tools Competency Center (TCC) Hitachi Data Systems
Hitachi Data Systems' WebTech Series: RAID Concepts
A. Ian Vogelesang, Tools Competency Center (TCC), Hitachi Data Systems

2 Hitachi Data Systems WebTech Educational Seminar Series
RAID Concepts
Who should attend:
Systems and Storage Administrators
Storage Specialists & Consultants
IT Team Leads
System and Network Architects
IT Staff
Operations and IT Managers
Others who are looking for storage management techniques

3 How RAID type impacts cost
The factors we will examine:
Disk drive capacity vs. disk drive IOPS capability
The impact of RAID level on disk drive activity
Topics to cover along the way:
RAID concepts (RAID-1 vs. RAID-5 vs. RAID-6)
The 30-second "elevator pitch" on data flow through the subsystem
Conclusion: the I/O access pattern, rather than storage capacity in GB, is very often the determining factor.

4 Growth in recording density drives $/GB
[Chart: areal density progress, in megabits per square inch on a log scale, vs. production year, from the IBM RAMAC (the first hard disk drive) through eras marked 25% CGR, the first MR head at roughly 60% CGR, the first GMR head at roughly 100% CGR, and perpendicular recording at roughly 40% per year.]

5 Areal density growth will continue
[Chart: projected areal density (Gb/in²) vs. time, progressing from longitudinal to perpendicular recording (around 2006), to bit-patterned media at roughly 1,500-4,000 Gb/in² (around 2011), to thermally-assisted writing at roughly 2,000-15,000 Gb/in² (around 2014). At 10,000 Gb/in² (10 Tb/in²) that would mean a 50 TB 3.5-inch drive, a 12 TB 2.5-inch drive, or a 1 TB 1-inch drive: more than a 50-million-fold increase in areal density over 50 years.]

6 Here's the problem
Drive capacities keep doubling every 1.5 years or so.
If you take the data that used to be on two disk drives and put it onto one drive that's twice as big, you will also be combining the I/O activity that was on the original two drives onto the one double-size drive.
The problem is that while drive capacity keeps increasing, the number of I/Os per second (IOPS) that a drive can handle has not been increasing.
An I/O operation consists of a seek, half a turn of rotational latency, and a data transfer.
Data transfer for a 4K block is now down to around 1% of a rotation, while positioning the head (seek plus half-turn latency) takes over a full rotation.
IOPS capability is ALL about mechanical positioning.
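As a rough illustration (the drive figures below are assumptions for the sketch, not vendor specifications), the per-I/O service time and the resulting IOPS budget can be estimated in a few lines of Python:

```python
# Rough service-time model for a single random 4 KiB I/O on a spinning drive.
# The seek time and transfer rate below are illustrative assumptions.

def random_iops(avg_seek_ms: float, rpm: int, transfer_mb_s: float,
                block_kb: float = 4.0, target_busy: float = 0.5) -> float:
    """Estimate random IOPS at a given utilization ceiling (e.g. 50% busy)."""
    latency_ms = 0.5 * 60_000.0 / rpm                    # half a rotation, on average
    transfer_ms = block_kb / 1024.0 / transfer_mb_s * 1000.0
    service_ms = avg_seek_ms + latency_ms + transfer_ms  # positioning dominates
    return target_busy * 1000.0 / service_ms

# Example: a hypothetical 15K RPM drive (3.8 ms average seek, 90 MB/s media rate).
# The 4 KiB transfer (~0.04 ms) is about 1% of the 4 ms rotation, while
# positioning (3.8 + 2.0 ms) is well over one full rotation.
print(round(random_iops(avg_seek_ms=3.8, rpm=15000, transfer_mb_s=90.0)))  # ~86 IOPS at 50% busy
```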

7 IOPS capability at 50% busy by drive type
[Chart: 4K random IOPS per drive at 50% busy, read vs. write, for the 10K73, 10K146, 10K300, 15K73, 15K146, and SATA 7K400 drive types; the values shown range from 23 to 86 IOPS, and the SATA write figure includes a read-verify-after-write.]
Note that IOPS capability is the same for different drive capacities with the same RPM.
These are green-zone upper limits per drive for back-end I/O, including RAID-penalty I/Os.

8 Access density capability
When we combine the data that used to be on two drives onto one double-size drive, we also combine (double) the I/O activity directed at the bigger drive. This illustrates that for a given workload there is a certain amount of I/O activity per GB of data. This activity per GB is called the "access density" of the workload, and is measured in IOPS per GB.
Over the last few decades, as disk drive capacity has become much cheaper, it became economical to store graphics, then audio, and now video. The introduction of these new data types has reduced typical access densities by about a factor of 10 over the last 20 years.
However, access density is going down more slowly than disk drive capacity is going up.
Typical access densities are reported in the 0.6 to 1.0 IOPS per GB range.
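A minimal sketch of the same arithmetic, assuming an illustrative per-drive IOPS budget and drive capacities:

```python
# How much of a drive can you actually fill, given its IOPS budget and the
# workload's access density? The figures below are illustrative assumptions.

def usable_fraction(drive_iops_budget: float, drive_capacity_gb: float,
                    access_density_iops_per_gb: float) -> float:
    """Fraction of the drive's capacity you can fill before running out of IOPS."""
    gb_supported = drive_iops_budget / access_density_iops_per_gb
    return min(1.0, gb_supported / drive_capacity_gb)

# A 300 GB drive with an 80 IOPS green-zone budget, workload at 0.8 IOPS/GB:
print(round(usable_fraction(80, 300, 0.8), 2))   # 0.33 -> only a third of the drive is usable
# The same workload on a 73 GB drive of the same RPM:
print(round(usable_fraction(80, 73, 0.8), 2))    # 1.0  -> the small drive can be filled
```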

9 Random read IOPS capability by drive type
This chart shows what access density each drive type can handle if you fill it up with data. A marker shows each drive's green-zone upper limit at 50% busy; its position from left to right shows the maximum access density the drive can comfortably handle.
[Chart: maximum access density (IOPS per GB) by drive type, increasing from left to right: SATA 7K400, 10K300, 15K300, 10K146, 15K146, 10K73, 15K73; 7200, 10K, and 15K RPM families are distinguished in the legend.]

10 RAID makes the access density problem worse
The basic idea behind RAID is to make sure that you don't lose any data when a single drive fails.
This means that whenever a host writes data to the subsystem, at least two disks need to be updated.
The amount of extra disk drive I/O activity needed to handle write activity is the key factor in determining the lowest cost solution as a combination of disk drive RPM, disk drive capacity, and RAID type. That's why we will look at how the different RAID levels work.
It is very rare that the access density is so low that you can completely fill up the cheapest drive. Only for something like a home PVR will a 750 GB SATA drive make the smallest dent in your wallet while still getting the job done.

11 30 second “elevator pitch” on subsystem data flow
Random read hits are stripped off by cache and do not reach the back end.
Random read misses go through cache unaltered, straight to the appropriate back-end disk drive. This is the only type of I/O operation where the host always "sees" the performance of the back-end disk drive.
Random writes: the host sees random writes complete at electronic speed, and only sees a delay if too many pending writes build up. Each host random write is transformed, on its way through cache, into a multiple-I/O pattern that depends on the RAID type.
Sequential I/O: host sequential I/O runs at electronic speed. Cache acts like a "holding tank"; the back end puts buckets of data into (or takes them out of) the tank to keep it at an appropriate level.

12 What is RAID? A 1988 paper by a group of researchers at UC Berkeley
"Redundant Array of Inexpensive Disks"
The original idea was to use cheap (i.e. PC) disk drives arranged in a RAID to give you "mainframe" reliability. Now most call it Redundant Array of Independent Disks.
A RAID is an arrangement of data on disk drives such that if a disk drive fails, you can still get the data back from the remaining disks.
RAID-1 is mirroring: just keep two copies
RAID-5 uses parity: recovers from single drive failures
RAID-6 uses dual parity: recovers from double drive failures

13 RAID-1 random reads / writes
Also called "mirroring": two copies of the data, requiring 2x the number of disk drives.
For writes, a copy must be written to both disk drives: two parity-group disk drive writes for every host write. We don't care what the previous data was; we just overwrite it with the new data.
For reads, the data can be read from either disk drive. Distributing read activity over both copies reduces disk drive busy (due to reads) to half of what it would be reading from a single (non-RAID) disk drive.

14 RAID-1 sequential read
2 sets of parallel I/O operations, each set reading 4 data chunks (2 MB). Parity group data MB/s = 4 x drive MB/s.
[Diagram: a 2+2 RAID-1 parity group; chunks 1-8 and their mirror copies (1'-8') are striped across the four drives, so sequential reads can be serviced from both copies in parallel.]

15 RAID-1 sequential write
4 sets of parallel I/O operations, each writing 2 data chunks (1 MB) and their 2 mirror chunks. Parity group data MB/s = 2 x drive MB/s.
[Diagram: the same 2+2 RAID-1 parity group; each data chunk (1-8) and its mirror copy (1'-8') must both be written, so only half the drives' aggregate bandwidth delivers host data.]

16 RAID-1 comments
Since RAID-1 requires doubling the number of disk drives to store the data, people tend to think of RAID-1 as the most expensive type of RAID.
However, due to the intensity of host access, in RAID subsystems one often cannot completely "fill up" a disk drive with data, because the disk drive would become too busy.
RAID-1 has the lowest "RAID penalty": only two disk drive I/Os per random write, compared to four for RAID-5 and six for RAID-6.
For this reason, when the workload is sufficiently active and has a lot of random writes, RAID-1 will be the cheapest RAID type, because it requires the fewest disk drive I/O operations per random write.
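A small sketch of this arithmetic, using the per-random-write penalties quoted in this deck (2, 4, and 6) and treating each read miss as one back-end read; the workload mix in the example is an illustrative assumption:

```python
# Back-end disk I/Os generated by a host workload, using the per-random-write
# penalties from this deck: RAID-1 = 2, RAID-5 = 4, RAID-6 = 6 back-end I/Os.
# Read hits are absorbed by cache; each read miss costs one back-end read.

WRITE_PENALTY = {"RAID-1": 2, "RAID-5": 4, "RAID-6": 6}

def backend_iops(host_iops: float, read_fraction: float,
                 read_hit_ratio: float, raid_type: str) -> float:
    reads = host_iops * read_fraction
    writes = host_iops - reads
    read_misses = reads * (1.0 - read_hit_ratio)
    return read_misses + writes * WRITE_PENALTY[raid_type]

# Example: 1,000 host IOPS, 70% reads, 50% read-hit ratio
for raid in WRITE_PENALTY:
    print(raid, backend_iops(1000, 0.70, 0.50, raid))
# RAID-1: 350 + 300*2 =  950 back-end IOPS
# RAID-5: 350 + 300*4 = 1550 back-end IOPS
# RAID-6: 350 + 300*6 = 2150 back-end IOPS
```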

17 RAID-1’s “RAID penalty”
Penalty in space: double the number of disk drives required.
Penalty in disk drive utilization (disk drive % busy): twice the number of I/O operations required for all writes.
No penalty for read operations; reads are distributed over twice the number of drives.

18 RAID-5 parity concept
Data: 10011  Data: 11111  Data: 00000  Parity: 01100
Example bit position: 0 XOR 1 XOR 0 = 1. There is an odd number of 1s in this bit position, so the parity bit is 1.
Another bit position: 1 XOR 1 XOR 0 = 0. With an even number of 1s in this bit position, the parity bit is set to 0.
Each parity bit indicates whether or not there is an odd number of "1" bits in that bit position across the whole parity group ("odd parity").
If you add more data drives, you don't add any more parity.
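The parity computation is just a bitwise XOR across the data chunks; a minimal sketch using the 5-bit values from this slide:

```python
# Parity is the bitwise XOR of the data chunks in a row.
from functools import reduce

def parity(data_chunks: list[int]) -> int:
    """XOR all data chunks together to get the parity chunk."""
    return reduce(lambda a, b: a ^ b, data_chunks)

data = [0b10011, 0b11111, 0b00000]
print(f"{parity(data):05b}")   # 01100, as on the slide
# Adding more data drives just XORs in more chunks; the parity stays one chunk.
```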

19 RAID-5 – if drive containing parity fails
Data: 10011  Data: 11111  Data: 00000  Parity: 01100 (failed)
You still have the data. Better reconstruct the parity on a spare disk drive right away, just in case a second drive fails.

20 RAID-5 – if drive containing data fails
Data: 10011  Data: 11111 (failed)  Data: 00000  Parity: 01100
If a drive that had data on it fails, you can reconstruct the missing data. Read the corresponding "chunk" from all the remaining data drives and the parity drive, and see how many "1" bits there are in each position.
A "1" parity bit says there originally was an odd number of "1" data bits in that position across the data drives. Where the remaining data disks now show an even number of "1" bits but the parity bit is "1", the missing data bit must be a "1".
By comparing the count of "1" bits in each bit position on the remaining disk drives with what the parity tells you there originally was, you can reconstruct the missing data (here, 11111).
Better reconstruct the missing data on a spare disk drive right away, just in case a second drive fails.
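Reconstruction is the same XOR applied to everything that survives; a sketch using the slide's values:

```python
# The missing chunk is the XOR of every surviving chunk in the row,
# data and parity alike. Values match the slide.
from functools import reduce

def reconstruct(surviving_chunks: list[int]) -> int:
    """Rebuild a lost chunk from the surviving data chunks plus parity."""
    return reduce(lambda a, b: a ^ b, surviving_chunks)

surviving_data = [0b10011, 0b00000]
parity = 0b01100
print(f"{reconstruct(surviving_data + [parity]):05b}")   # 11111, the chunk on the failed drive
```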

21 RAID-5 random read hit
Read hits operate at electronic speed: just transfer the data from cache.
[Diagram: the host reads data #3; a copy of data #3 (00000) is already in cache, so the back-end drives (Data #1: 10011, Data #2: 11111, Data #3: 00000, Parity: 01100) are not touched.]

22 RAID-5 random read miss
Read misses are the ONLY operation that "sees" the speed of the disk drive during normal (not overloaded) operation, i.e. read misses are the only type of host I/O operation that does not complete at electronic speed with just an access to cache.
[Diagram: the host reads data #1, which is not in cache; the subsystem reads data #1 (10011) from its back-end drive into cache (alongside the copy of data #3 already there) and then returns it to the host.]

23 RAID-5 random write
Read old data, read old parity.
Remove the old data from the old parity, giving the "partial parity" (the parity of the rest of the row).
Add the new data into the partial parity to generate the "new parity".
Write the new data and the new parity to disk.
[Diagram: the host writes new data #2 (01010). Cache holds the old data (11111) and old parity (01100). Removing the old data gives the partial parity (10011); adding the new data gives the new parity (11001). The new data and new parity are then written to their drives: four back-end I/Os in total, two reads and two writes.]
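A sketch of this read-modify-write parity update, using the values from the slide:

```python
# RAID-5 small-write update: the new parity is the old parity with the old
# data XORed out and the new data XORed in.

def rmw_new_parity(old_parity: int, old_data: int, new_data: int) -> int:
    partial_parity = old_parity ^ old_data      # parity of the rest of the row
    return partial_parity ^ new_data            # fold the new data back in

old_data, old_parity, new_data = 0b11111, 0b01100, 0b01010
print(f"{rmw_new_parity(old_parity, old_data, new_data):05b}")   # 11001
# Four back-end I/Os: read old data, read old parity, write new data, write new parity.
```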

24 RAID-5 sequential read
The subsystem "detects" that the host is reading sequentially after a few sequential I/Os (the first few are treated as random reads).
The subsystem performs "sequential pre-fetch" to load stripes of data from the parity group into cache in advance of when the host will request the data.
The subsystem can usually easily keep up with the host, as transfers from the parity group are performed in parallel.
[Diagram: stripes of data being pre-fetched from the parity-group drives into cache.]

25 RAID-5 sequential read example
In parallel, read a chunk from each drive in the parity group: 3 sets of parallel I/O operations to read 12 chunks (6 MB). Parity group MB/s = 4 x drive MB/s.
[Diagram: a 3+1 RAID-5 parity group; each row holds three data chunks plus one rotating parity chunk (chunks 1-3 with parity 1,2,3; chunks 4-6 with parity 4,5,6; and so on), so all four drives can be read in parallel.]

26 RAID-5 sequential write
First compute the parity chunk for a row, then write the whole row to disk.
4 sets of parallel I/O operations to write 12 data chunks (6 MB) plus 4 parity chunks. Parity group data MB/s = 3 x drive MB/s.
[Diagram: the same 3+1 RAID-5 layout; each row's three data chunks and its parity chunk are written in one parallel operation, with the parity position rotating from row to row.]

27 RAID-5 comments For sequential reads and writes, RAID-5 is very good.
It's very space efficient (smallest space used for parity), and sequential reads and writes are efficient, since they operate on whole stripes.
For low access density (light activity), RAID-5 is very good. The 4x RAID-5 write penalty is (nearly) invisible to the host, because it is applied asynchronously, behind cache.
For workloads with higher access density and more random writes, RAID-5 can be throughput-limited due to all the extra parity-group I/O operations needed to handle the RAID-5 "write penalty".

28 RAID-5 "RAID penalty"
Penalty in space:
For 3+1, 33% extra space for parity
For 7+1, 14% extra space for parity
Penalty in disk drive utilization (disk drive % busy):
Random writes: four times the number of I/O operations (300% extra I/Os)
Sequential writes: for 3+1, 33% extra I/Os; for 7+1, 14% extra I/Os

29 RAID-6 “6D + 2P” parity group
RAID-6 is an extension of the RAID-5 concept which uses two separate parity-type fields, usually called "P" and "Q".
The mathematics are beyond a basic course*, but RAID-6 allows data to be reconstructed from the remaining drives in a parity group when any one or two drives have failed.
*The math is the same as for the ECC used to correct errors in DRAM memory or on the surface of disk drives.
Each RAID-6 host random write turns into 6 parity-group I/O operations: read old data, read old P, read old Q, (compute new P and Q), write new data, write new P, write new Q.
RAID-6 parity group sizes usually start at 6+2, which has the same space efficiency as RAID-5 3+1.
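The deck leaves the P/Q mathematics out of scope. Purely as a hedged illustration of one common construction (P as plain XOR and Q as a Reed-Solomon-style weighted sum over GF(2^8) with generator 2), and not necessarily how any particular subsystem implements it:

```python
# Illustrative RAID-6 P/Q sketch: P is the XOR of the data bytes, and
# Q = sum over i of (g^i * D_i) in GF(2^8), with generator g = 2 and
# field polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11D).

def gf_mul(a: int, b: int) -> int:
    """Multiply two bytes in GF(2^8) modulo 0x11D."""
    product = 0
    for _ in range(8):
        if b & 1:
            product ^= a
        b >>= 1
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1D
    return product

def p_and_q(data_bytes: list[int]) -> tuple[int, int]:
    p, q, weight = 0, 0, 1
    for d in data_bytes:
        p ^= d                      # ordinary RAID-5-style parity
        q ^= gf_mul(weight, d)      # weighted parity; the weights g^i are all distinct
        weight = gf_mul(weight, 2)  # next power of the generator
    return p, q

# Six data bytes, one per data drive in a 6D+2P group (arbitrary example values):
data = [0x13, 0x1F, 0x00, 0x0C, 0xA5, 0x3C]
p, q = p_and_q(data)
print(hex(p), hex(q))   # 0x99 0x1c for these bytes
# With two independent equations (P and Q), any two missing bytes can be solved for.
```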

30 RAID-6 "RAID penalty"
6+2 penalty in space: 33% extra space for parity
6+2 penalty in disk drive utilization (disk drive % busy):
Random writes: six times the number of I/O operations (500% extra I/Os)
Sequential writes: 33% extra I/Os

31 RAID-1 vs RAID-5 vs RAID-6 summary
The concept of RAID with parity groups permits data to be recovered even after a single drive failure for RAID-1 and RAID-5, or a double drive failure for RAID-6.
RAID-1 trades off more space for a lower RAID penalty on writes and less degradation after a drive failure. RAID-1 can be cheaper (require fewer disk drives) than RAID-5 where there is concentrated random write activity.
RAID-5 achieves redundancy with less parity space overhead, but at the expense of a higher "RAID penalty" for random writes and a larger performance degradation after a drive failure.

32 30 second “elevator pitch” on subsystem data flow
Random read hits are stripped off by cache and do not reach the back end.
Random read misses go through cache unaltered, straight to the appropriate back-end disk drive. This is the only type of I/O operation where the host always "sees" the performance of the back-end disk drive.
Random writes: the host sees random writes complete at electronic speed, and only sees a delay if too many pending writes build up. Each host random write is transformed, on its way through cache, into a multiple-I/O pattern that depends on the RAID type.
Sequential I/O: host sequential I/O runs at electronic speed. Cache acts like a "holding tank"; the back end puts buckets of data into (or takes them out of) the tank to keep it at an appropriate level.

33 RAID-5 can often be more expensive
See how much busier the "back end" disk drives are for the RAID-5 configuration, all due to random writes (solid blue in the chart).
In this case, the RAID-1 configuration was cheaper, because fewer disk drives were needed to handle the back-end I/O activity: the RAID-1 drives could be completely filled, whereas the RAID-5 drives could only be filled to 55% of their capacity.
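A sizing sketch of this trade-off; the workload, drive capacity, and IOPS figures are illustrative assumptions rather than the numbers behind the chart on this slide:

```python
# How many drives does each RAID type need once you account for BOTH
# capacity and back-end IOPS? All figures below are illustrative assumptions.
import math

def drives_needed(capacity_gb: float, host_iops: float, read_fraction: float,
                  read_hit_ratio: float, write_penalty: int, space_overhead: float,
                  drive_gb: float, drive_iops_budget: float) -> int:
    reads = host_iops * read_fraction
    writes = host_iops - reads
    backend_iops = reads * (1 - read_hit_ratio) + writes * write_penalty
    by_capacity = capacity_gb * (1 + space_overhead) / drive_gb
    by_iops = backend_iops / drive_iops_budget
    return math.ceil(max(by_capacity, by_iops))      # the binding constraint wins

# 10 TB of data at 0.8 IOPS/GB (8,000 host IOPS), 70% reads, 50% read hits,
# on hypothetical 300 GB drives with an 80 IOPS green-zone budget per drive:
r1 = drives_needed(10_000, 8_000, 0.7, 0.5, write_penalty=2, space_overhead=1.00,
                   drive_gb=300, drive_iops_budget=80)
r5 = drives_needed(10_000, 8_000, 0.7, 0.5, write_penalty=4, space_overhead=0.14,
                   drive_gb=300, drive_iops_budget=80)
print(r1, r5)   # 95 155: both are IOPS-bound, and RAID-5 needs more drives despite using less space
```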

34 Conclusions – factors driving lowest cost
The lowest cost configuration, in terms of disk drive RPM, disk drive capacity, and RAID type, depends strongly on the access density and the read:write ratio.
If there is even moderate access density with significant random write activity, RAID-1 will often turn out to be the lowest cost total solution, because more of each drive's capacity can be filled with data.
Where access densities are higher, 15K RPM drives will often offer the lowest cost overall solution.
SATA drives, due to their low IOPS capability, can only be filled if the data has very low access density, and are therefore rarely the cheapest.

35 www.hds.com/webtech Upcoming WebTech Sessions:
19 September - Enterprise Data Replication Architectures that Work: Overview and Perspectives
17 October - 10 Steps To Determine if SANs Are Right For You

36 Questions/Discussion

