
1 Chapter 7: Storage Systems
Introduction
Magnetic disks
Buses
RAID: Redundant Arrays of Inexpensive Disks

2 I/O Performance
Amdahl's Law: continuing to improve only CPU performance yields diminishing overall gains when I/O time is left unchanged (see the table and sketch below)
Performance is not the only concern
–Reliability
–Availability
–Dependability
–Serviceability

CPU     I/O     Overall improvement
0.9     0.1     1.0
0.09    0.1     ~5.0
0.009   0.1     ~10
(CPU time after a 1x / 10x / 100x CPU-only speedup, with I/O time fixed at 0.1)
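
A minimal sketch of the Amdahl's Law arithmetic behind the table above; the 0.9/0.1 CPU vs. I/O split comes from the slide, while the helper function itself is my own illustration.

```python
def overall_speedup(cpu_time, io_time, cpu_speedup):
    """Overall speedup when only the CPU portion of the workload is made faster."""
    old_total = cpu_time + io_time
    new_total = cpu_time / cpu_speedup + io_time
    return old_total / new_total

for s in (1, 10, 100):
    print(f"CPU {s:3d}x faster -> overall {overall_speedup(0.9, 0.1, s):.2f}x")
# CPU   1x faster -> overall 1.00x
# CPU  10x faster -> overall 5.26x   (~5 in the table)
# CPU 100x faster -> overall 9.17x   (~10 in the table)
```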

3 Magnetic Disks
Average access time (mostly due to seek and rotation) = average seek time + average rotational delay + transfer time + controller delay (worked example below)
Areal density = tracks/inch on the disk surface * bits/inch on a track
–Has been increasing faster than Moore's Law lately
–<$1 per gigabyte today
Cost vs. access time: still a huge gap among SRAM, DRAM, and magnetic disks
–Technology to fill the gap?
Other technology
–Optical disks, flash memory
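
A small sketch of the access-time formula above. All drive parameters here (5 ms seek, 10,000 RPM, 512-byte sector, 50 MB/s media rate, 0.2 ms controller overhead) are assumed values for illustration, not figures from the slides.

```python
def avg_access_time_ms(seek_ms, rpm, sector_bytes, transfer_mb_s, controller_ms):
    rotation_ms = 0.5 * (60_000 / rpm)                      # on average, wait half a revolution
    transfer_ms = sector_bytes / (transfer_mb_s * 1e6) * 1e3
    return seek_ms + rotation_ms + transfer_ms + controller_ms

# ~8.21 ms total: seek (5 ms) and rotation (3 ms) dominate; transfer (~0.01 ms) is negligible
print(avg_access_time_ms(seek_ms=5.0, rpm=10_000, sector_bytes=512,
                         transfer_mb_s=50, controller_ms=0.2))
```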

4 Technology Trend
Component growth rates:
–IC technology: transistor count increases 55% per year
–DRAM: density increases 40-60% per year
–Disk: density increases 100% per year lately
–Network: Ethernet took 10 years to go from 10 Mb to 100 Mb, but only 5 years from 100 Mb to 1 Gb
DRAM/Disk: (comparison chart on the original slide)

5 Buses
Shared communication links between subsystems: CPU bus, I/O bus, multiprocessor system bus, etc.
Bus design considerations:
–Bus physics: driver design, flight time, reflection, skew, glitches, crosstalk, etc.
–Bus width; separate or combined address/data buses
–Multiple bus masters and the bus arbitration mechanism (must be fair and deadlock-free)
–Simple bus (non-pipelined) vs. split-transaction bus (pipelined)
–Synchronous vs. asynchronous buses
–Multiprocessor bus: may include a cache coherence protocol (snooping bus)

6 RAID
RAID 0: striping across a set of disks makes the collection appear as a single large disk, but provides no redundancy (placement sketch below)
RAID 1: mirroring; maintain two copies, and when one fails, switch to the backup
Combined RAID 0 and 1
–RAID 10: striped mirrors
–RAID 01: mirrored stripes
RAID 2: memory-style ECC (not used)
RAID 3: bit-interleaved parity; keep parity bits on a redundant disk to recover from a single failure
–Mirroring is the special case with one parity bit per data bit
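
A minimal sketch (my own illustration, with an assumed 4-disk array) of how RAID 0 striping and RAID 1 mirroring place a logical block on physical disks.

```python
def raid0_place(logical_block, num_disks):
    """RAID 0: round-robin striping across all disks, no redundancy."""
    return [(logical_block % num_disks, logical_block // num_disks)]   # one (disk, offset) copy

def raid1_place(logical_block, num_disks):
    """RAID 1: half the disks mirror the other half, so every block has two copies."""
    half = num_disks // 2
    disk, offset = logical_block % half, logical_block // half
    return [(disk, offset), (disk + half, offset)]

print(raid0_place(9, 4))   # [(1, 2)]          -> single copy, full capacity, no redundancy
print(raid1_place(9, 4))   # [(1, 4), (3, 4)]  -> mirrored pair, 100% capacity overhead
```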

7 RAID (cont.)
RAID 4 and 5: block-interleaved parity and distributed block-interleaved parity (layouts below for 4 data blocks + 1 parity block per stripe; see also the generator sketch that follows)

RAID 4
Disk 0  Disk 1  Disk 2  Disk 3  Disk 4
0       1       2       3       P0
4       5       6       7       P1
8       9       10      11      P2
12      13      14      15      P3
16      17      18      19      P4

RAID 5
Disk 0  Disk 1  Disk 2  Disk 3  Disk 4
0       1       2       3       P0
4       5       6       P1      7
8       9       P2      10      11
12      P3      13      14      15
P4      16      17      18      19
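
A short sketch (my own illustration, not code from the slides) that reproduces the two tables above: RAID 4 pins parity on the last disk, while RAID 5 rotates the parity block by one disk per stripe.

```python
def layout(num_disks, num_stripes, raid_level):
    rows, block = [], 0
    for stripe in range(num_stripes):
        # RAID 4: parity always on the last disk; RAID 5: parity rotates leftward per stripe
        pdisk = num_disks - 1 if raid_level == 4 else (num_disks - 1 - stripe) % num_disks
        row = []
        for disk in range(num_disks):
            if disk == pdisk:
                row.append(f"P{stripe}")
            else:
                row.append(str(block))
                block += 1
        rows.append(row)
    return rows

for row in layout(5, 5, raid_level=5):
    print("\t".join(row))        # stripe 1 prints "4 5 6 P1 7", matching the RAID 5 table above
```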

8 Small Update: RAID 3 vs. RAID 4&5
Assume 4 data disks D0, D1, D2, D3 and one parity disk P
For RAID 3, a small update of D0 requires reading the other data disks D1, D2, D3, computing new P = new D0 xor D1 xor D2 xor D3, and writing D0 and P, so every disk in the array is involved
For RAID 4&5, a small update of D0 only requires reading old D0 and old P, computing new P = old P xor old D0 xor new D0, and writing new D0 and new P: 2 reads + 2 writes (sketch below)
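
A minimal sketch (illustration only, using small byte strings as stand-in disk blocks) showing that the two update strategies above produce the same new parity while touching different numbers of disks.

```python
from functools import reduce

def xor_blocks(*blocks):
    """Byte-wise XOR of equally sized blocks."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

d = [bytes([i] * 4) for i in (1, 2, 3, 4)]   # toy contents of data disks D0..D3
p = xor_blocks(*d)                           # parity disk P
new_d0 = bytes([9] * 4)

# RAID 3 style: read the other data disks and recompute parity from scratch (3 reads + 2 writes)
p_recomputed = xor_blocks(new_d0, d[1], d[2], d[3])

# RAID 4/5 style: read only old D0 and old P, "subtract" old data, "add" new data (2 reads + 2 writes)
p_read_modify_write = xor_blocks(p, d[0], new_d0)

assert p_recomputed == p_read_modify_write   # both strategies yield the same new parity
```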

9 Inspiration for RAID 5
RAID 4 works well for small reads
Small writes (write to one disk):
–Option 1: read the other data disks, compute the new parity, and write it to the parity disk
–Option 2: since P holds the old parity, compare old data to new data and add the difference to P
Small writes are limited by the parity disk: writes to D0 and D5 must both also write to the P disk
(Figure: RAID 4 layout with stripes D0-D3 and D4-D7, each stripe's parity block P on the same dedicated parity disk)

10 Redundant Arrays of Inexpensive Disks: RAID 5, High I/O Rate Interleaved Parity
Independent writes are possible because the parity blocks are interleaved across the disks (see the example below)

Disk 0  Disk 1  Disk 2  Disk 3  Disk 4      (increasing logical disk addresses go down the columns)
D0      D1      D2      D3      P
D4      D5      D6      P       D7
D8      D9      P       D10     D11
D12     P       D13     D14     D15
P       D16     D17     D18     D19
D20     D21     D22     D23     P
...

Example: writes to D0 and D5 use disks 0, 1, 3, 4
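
A small sketch (my own illustration of the example above, assuming the 4-data-block + 1-parity-block stripes pictured) showing why the two writes collide on the parity disk under RAID 4 but use disjoint disks under RAID 5.

```python
NUM_DISKS = 5                      # 4 data blocks + 1 parity block per stripe

def disks_touched(block, raid_level):
    """Physical disks written when data block `block` is updated (its data disk + its parity disk)."""
    stripe = block // (NUM_DISKS - 1)
    parity_disk = NUM_DISKS - 1 if raid_level == 4 else (NUM_DISKS - 1 - stripe) % NUM_DISKS
    slot = block % (NUM_DISKS - 1)                 # data blocks fill the non-parity slots left to right
    data_disk = slot if slot < parity_disk else slot + 1
    return {data_disk, parity_disk}

print(disks_touched(0, 4), disks_touched(5, 4))    # {0, 4} {1, 4} -> both writes queue on disk 4
print(disks_touched(0, 5), disks_touched(5, 5))    # {0, 4} {1, 3} -> disjoint disks 0, 1, 3, 4
```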

11 Problems of Disk Arrays: Small Writes
RAID 5 small write algorithm: 1 logical write = 2 physical reads + 2 physical writes
(Figure: new data D0' is XORed with the old data D0 (1. read) and the old parity P (2. read) to form the new parity P'; D0' (3. write) and P' (4. write) are then written back)

12 RAID 6: Recovering from 2 Failures
Why recover from more than one failure?
–An operator may accidentally replace the wrong disk during a failure
–Since disk bandwidth is growing more slowly than disk capacity, the mean time to repair (MTTR) a disk in a RAID system is increasing, which raises the chance of a 2nd failure during the longer repair
–Reading much more data during reconstruction raises the chance of an uncorrectable media failure, which would result in data loss

13 RAID 6: Recovering from 2 Failures
Network Appliance's row-diagonal parity, or RAID-DP
Like the standard RAID schemes, it uses redundant space based on a parity calculation per stripe
Since it protects against a double failure, it adds two check blocks per stripe of data
–If there are p+1 disks in total, p-1 disks hold data; assume p = 5
The row parity disk is just like in RAID 4
–Even parity across the other 4 data blocks in its stripe
Each block of the diagonal parity disk contains the even parity of the blocks in the same diagonal

14 Example, p = 5
Row-diagonal parity starts by recovering one of the 4 blocks on the failed disk using diagonal parity
–Since each diagonal misses one disk, and all diagonals miss a different disk, 2 diagonals are missing only 1 block
Once the data for those blocks is recovered, the standard RAID recovery scheme can be used to recover two more blocks in the standard RAID 4 stripes
The process continues until both failed disks are restored

Each cell below gives the diagonal group (0-4) of that block; the labels shift by one position from each row to the next (see also the sketch that follows)

Data Disk 0  Data Disk 1  Data Disk 2  Data Disk 3  Row Parity  Diagonal Parity
0            1            2            3            4           0
1            2            3            4            0           1
2            3            4            0            1           2
3            4            0            1            2           3
4            0            1            2            3           4
0            1            2            3            4           0
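
A minimal sketch of the diagonal structure described above (my own illustration of the row-diagonal parity idea, not NetApp's implementation). It assumes p = 5 and p - 1 = 4 rows per stripe, as implied by "one of the 4 blocks on the failed disk".

```python
p = 5                      # disks 0..3 hold data, disk 4 holds row parity; disk 5 holds diagonal parity
rows = p - 1               # each stripe has p - 1 rows

def diagonal(row, disk):
    """Diagonal group of the block in `row` on `disk` (a data or row-parity disk)."""
    return (row + disk) % p

# Each diagonal misses exactly one of the p data/row-parity disks, and every diagonal
# misses a different one -- which is why, after a double failure, at least two diagonals
# are missing only a single block, giving recovery a place to start.
for d in range(p):
    touched = {disk for disk in range(p) for r in range(rows) if diagonal(r, disk) == d}
    missed = set(range(p)) - touched
    print(f"diagonal {d}: misses disk {sorted(missed)}")
# diagonal 0: misses disk [1] ... diagonal 3: misses disk [4], diagonal 4: misses disk [0]
```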

15 Summary: RAID Techniques
Goal was performance; popularity is due to the reliability of storage
Disk mirroring / shadowing (RAID 1)
–Each disk is fully duplicated onto its "shadow"
–Logical write = two physical writes
–100% capacity overhead
Parity data bandwidth array (RAID 3)
–Parity computed horizontally
–Logically a single high-data-bandwidth disk
High I/O rate parity array (RAID 5)
–Interleaved parity blocks
–Independent reads and writes
–Logical write = 2 reads + 2 writes

