EECC722 - Shaaban, Lec # 8, Fall 2000 (10-9-2000): I/O Systems


1 I/O Systems
(Diagram: Processor, Cache, Memory-I/O Bus, Main Memory, and I/O controllers for Disk, Graphics, and Network devices, with interrupt lines back to the processor.)
Time(workload) = Time(CPU) + Time(I/O) - Time(Overlap)

2 I/O Controller Architecture
(Diagram labels: request/response block interface to the host; backdoor access to host memory.)

3 I/O: A System Performance Perspective
CPU performance: improving about 60% per year.
I/O sub-system performance: limited by mechanical delays (disk I/O); improving less than 10% per year (I/O rate per second or MB per second).
From Amdahl's Law, overall system speedup is limited by the slowest component. If I/O is 10% of current processing time:
– Increasing CPU performance by 10 times yields only about a 5 times system speedup (roughly a 50% loss relative to the ideal).
– Increasing CPU performance by 100 times yields only about a 10 times system speedup (roughly a 90% loss relative to the ideal).
The I/O system thus becomes a performance bottleneck that diminishes the benefit of faster CPUs on overall system performance.
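A quick back-of-the-envelope check of these numbers, as a minimal Python sketch (the 10% I/O fraction and the CPU speedup factors are the values from this slide; the function name is just illustrative):

```python
def system_speedup(io_fraction, cpu_speedup):
    """Amdahl's Law: only the CPU portion (1 - io_fraction) gets faster."""
    return 1.0 / (io_fraction + (1.0 - io_fraction) / cpu_speedup)

for k in (10, 100):
    print(f"CPU {k}x faster -> system speedup ~{system_speedup(0.10, k):.1f}x")
# CPU 10x faster  -> system speedup ~5.3x  (roughly 5x, as on the slide)
# CPU 100x faster -> system speedup ~9.2x  (roughly 10x)
```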

4 Magnetic Disks
Characteristics:
– Diameter: 2.5 in - 5.25 in.
– Rotational speed: 3,600 RPM - 10,000 RPM.
– Tracks per surface.
– Sectors per track: outer tracks contain more sectors.
– Recording or areal density: tracks/in x bits/in.
– Cost per megabyte.
– Seek time: the time needed to move the read/write head arm. Reported values: minimum, maximum, average.
– Rotational latency or delay: the time for the requested sector to rotate under the read/write head.
– Transfer time: the time needed to transfer a sector of bits.
– Type of controller/interface: SCSI, EIDE.
– Disk controller delay or time.
Average time to access a sector of data = average seek time + average rotational delay + transfer time + disk controller overhead

5 Since the 1980's, smaller form factor disk drives have grown in storage capacity. Today's 3.5 inch form factor drives designed for the entry-server market can store more than 75 GBytes in a 1.6 inch height using 5 disks (platters).

6 Drive areal density has increased by a factor of 8.5 million since the first disk drive, IBM's RAMAC, was introduced in 1957. Since 1991, the rate of increase in areal density has accelerated to 60% per year, and since 1997 this rate has further accelerated to 100% per year.

7 The price per megabyte of disk storage has been decreasing at about 40% per year, driven by improvements in data density; this is even faster than the price decline for flash memory chips. Recent trends in HDD price per megabyte show an even steeper reduction.

8-11 (Figure-only slides; no transcript text.)

12 Disk Access Time Example
Given the following disk parameters:
– Transfer size is 8 KB
– Advertised average seek is 12 ms
– Disk spins at 7200 RPM
– Transfer rate is 4 MB/sec
– Controller overhead is 2 ms
– Assume the disk is idle, so no queuing delay exists.
What is the average disk access time for this 8 KB transfer?
– Avg. seek + avg. rotational delay + transfer time + controller overhead
– 12 ms + 0.5/(7200 RPM/60) + 8 KB/(4 MB/s) + 2 ms
– 12 + 4.17 + 2 + 2 = about 20 ms
The advertised seek time assumes no locality: the actual average seek is typically 1/4 to 1/3 of the advertised value, which brings the access time from about 20 ms down to about 12 ms.
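The same calculation as a minimal Python sketch (parameter values are those from the example above; the one-third seek factor is the locality rule of thumb from the last bullet, and 1 MB is taken as 1000 KB to match the slide's arithmetic):

```python
def disk_access_ms(seek_ms, rpm, transfer_kb, rate_mb_s, ctrl_ms, seek_factor=1.0):
    rot_ms = 0.5 / (rpm / 60.0) * 1000.0   # half a rotation, on average
    xfer_ms = transfer_kb / rate_mb_s      # KB / (MB/s) in ms, taking 1 MB = 1000 KB
    return seek_ms * seek_factor + rot_ms + xfer_ms + ctrl_ms

print(disk_access_ms(12, 7200, 8, 4, 2))                    # ~20.2 ms, advertised seek
print(disk_access_ms(12, 7200, 8, 4, 2, seek_factor=1/3))   # ~12.2 ms, assuming locality
```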

13 I/O Data Transfer Methods
Programmed I/O (PIO): Polling
– The I/O device puts its status information in a status register.
– The processor must periodically check the status register.
– The processor is totally in control and does all the work.
– Very wasteful of processor time.
Interrupt-Driven I/O:
– An interrupt line from the I/O device to the CPU is used to generate an I/O interrupt indicating that the I/O device needs CPU attention.
– The interrupting device places its identity in an interrupt vector.
– Once an I/O interrupt is detected, the current instruction is completed and an I/O interrupt handling routine is executed to service the device.
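To make the polling pattern concrete, here is a minimal sketch in Python. The device object, its status_register and data_register attributes, and the READY bit are hypothetical stand-ins for a real memory-mapped device; the point is simply that the CPU burns cycles re-reading status until the device is ready:

```python
READY = 0x01  # hypothetical "data ready" bit in a hypothetical status register

def polled_read(device):
    """Programmed I/O (polling): the CPU repeatedly checks status, then reads the data."""
    while not (device.status_register & READY):  # the processor does all the work,
        pass                                      # busy-waiting until the device is ready
    return device.data_register
```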

14 I/O Data Transfer Methods: Direct Memory Access (DMA)
Implemented with a specialized controller that transfers data between an I/O device and memory independently of the processor.
The DMA controller becomes the bus master and directs reads and writes between itself and memory.
Interrupts are still used, but only on completion of the transfer or when an error occurs.
DMA transfer steps:
– The CPU sets up the DMA transfer by supplying the device identity, the operation, the memory addresses of the source and destination of the data, and the number of bytes to be transferred.
– The DMA controller starts the operation. When the data is available, it transfers the data, including generating the memory addresses for the data to be transferred.
– Once the DMA transfer is complete, the controller interrupts the processor, which determines whether the entire operation is complete.
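The three steps map naturally onto a descriptor-style interface. The sketch below is illustrative Python pseudocode only; DmaDescriptor, the controller's load/on_interrupt/go methods, and the completion callback are hypothetical names, not a real driver API:

```python
from dataclasses import dataclass

@dataclass
class DmaDescriptor:      # hypothetical descriptor the CPU fills in (step 1)
    device_id: int
    operation: str        # "read" or "write"
    src_addr: int
    dst_addr: int
    num_bytes: int

def start_transfer(dma_controller, desc, on_complete):
    dma_controller.load(desc)                 # step 1: CPU programs the controller
    dma_controller.on_interrupt(on_complete)  # step 3: CPU is interrupted only at the end
    dma_controller.go()                       # step 2: controller masters the bus, moves the data
```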

15 I/O Performance Measures
Diversity: the variety of I/O devices that can be connected to the system.
Capacity: the maximum number of I/O devices that can be connected to the system.
Producer/server model of I/O: the producer (CPU, human, etc.) creates tasks to be performed and places them in a task buffer (queue); the server (I/O device or controller) takes tasks from the queue and performs them.
I/O throughput: the maximum data rate that can be transferred to/from an I/O device or sub-system, or the maximum number of I/O tasks or transactions completed by the I/O system in a given period of time. Maximized when the task buffer is never empty.
I/O latency or response time: the time an I/O task takes from the time it is placed in the task buffer or queue until the server (I/O system) finishes the task; includes buffer waiting or queuing time. Minimized when the task buffer is always empty (no queuing delay).

16 Producer-Server Model: Throughput vs. Response Time
Response Time = Time_System = Time_Queue + Time_Server

17 Components of a User/Computer System Transaction
In an interactive user/computer environment, each interaction or transaction has three parts:
– Entry time: the time for the user to enter a command.
– System response time: the time between user entry and the system's reply.
– Think time: the time from the system's response until the user begins the next command.

18 User/Interactive Computer Transaction Time (figure-only slide)

19 Introduction to Queuing Theory
Concerned with the long-term, steady state of a system rather than its startup behavior, where: Arrivals = Departures.
Little's Law: Mean number of tasks in system = arrival rate x mean response time.
Applies to any system in equilibrium, as long as nothing inside the black box is creating or destroying tasks.
(Figure: a black box with Arrivals entering and Departures leaving.)

20 I/O Performance & Little's Queuing Law
Given an I/O system in equilibrium (input rate is equal to output rate) and:
– T_ser: average time to service a task
– T_q: average time per task in the queue
– T_sys: average time per task in the system, or the response time; the sum of T_ser and T_q
– r: average number of arriving tasks/sec
– L_ser: average number of tasks in service
– L_q: average length of the queue
– L_sys: average number of tasks in the system; the sum of L_q and L_ser
Little's Law states: L_sys = r x T_sys
Server utilization = u = r / service rate = r x T_ser
u must be between 0 and 1; otherwise more tasks would be arriving than could be serviced.
(Figure: Proc -> Queue -> IOC/Device; the queue plus the server together form the system.)
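A small numerical sanity check of these relations in Python (the arrival rate and times below are made-up illustrative values, not from the slides):

```python
r = 50.0        # arriving tasks per second (illustrative value)
T_ser = 0.010   # average service time in seconds (illustrative value)
T_q = 0.015     # average queuing time in seconds (illustrative value)

u = r * T_ser          # server utilization, must satisfy 0 <= u < 1
T_sys = T_q + T_ser    # response time
L_sys = r * T_sys      # Little's Law: mean tasks in the system
L_q = r * T_q          # Little's Law applied to the queue alone
print(u, T_sys, L_sys, L_q)   # 0.5, 0.025, 1.25, 0.75
```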

21 A Little Queuing Theory
Service-time completions vs. waiting time for a busy server: a randomly arriving event joins a queue of arbitrary length when the server is busy; otherwise it is serviced immediately.
– Unlimited-length queues are the key simplification.
A single-server queue: the combination of a servicing facility that accommodates one customer at a time (the server) plus a waiting area (the queue); together they are called a system.
The server spends a variable amount of time with customers; how do you characterize this variability?
– By the distribution of a random variable: histogram? curve?
(Figure: Proc -> Queue -> IOC/Device; the queue plus the server together form the system.)

22 A Little Queuing Theory
The server spends a variable amount of time with customers:
– Weighted mean: m1 = (f1 x T1 + f2 x T2 + ... + fn x Tn)/F, where F = f1 + f2 + ...
– Variance = (f1 x T1^2 + f2 x T2^2 + ... + fn x Tn^2)/F - m1^2. Must keep track of the unit of measure (100 ms^2 vs. 0.1 s^2).
– Squared coefficient of variance: C = variance/m1^2. A unitless measure (100 ms^2 vs. 0.1 s^2 no longer matters).
Exponential distribution, C = 1: most service times are short relative to the average, a few are long; 90% < 2.3 x average, 63% < average.
Hypoexponential distribution, C < 1: clustered closer to the average; 90% < 2.0 x average, only 57% < average.
Hyperexponential distribution, C > 1: further from the average; C = 2.0 => 90% < 2.8 x average, 69% < average.
(Figure: Proc -> Queue -> IOC/Device system, with "Avg." marking the mean of the distribution.)
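The three formulas above are easy to compute directly; a minimal Python sketch (the sample frequencies and service times are made up for illustration):

```python
def service_time_stats(freqs, times):
    """Weighted mean, variance, and squared coefficient of variance (C)."""
    F = sum(freqs)
    m1 = sum(f * t for f, t in zip(freqs, times)) / F
    var = sum(f * t * t for f, t in zip(freqs, times)) / F - m1 * m1
    C = var / (m1 * m1)   # unitless
    return m1, var, C

# Illustrative data: 3 requests took 10 ms each, 1 request took 40 ms
m1, var, C = service_time_stats([3, 1], [10.0, 40.0])
print(m1, var, C)   # 17.5 ms, 168.75 ms^2, C ~ 0.55
```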

23 A Little Queuing Theory: Variable Service Time
The server spends a variable amount of time with customers:
– Weighted mean: m1 = (f1 x T1 + f2 x T2 + ... + fn x Tn)/F, where F = f1 + f2 + ...
– Squared coefficient of variance: C.
Disk response times have C ≈ 1.5 (the majority of seeks are shorter than the average), yet C = 1.0 is usually picked for simplicity.
Another useful value is the average time a new arrival must wait for the server to complete the task already in progress: m1(z).
– It is not just 1/2 x m1, because that does not capture the variance.
– One can derive m1(z) = 1/2 x m1 x (1 + C).
– With no variance, C = 0 => m1(z) = 1/2 x m1.
(Figure: Proc -> Queue -> IOC/Device system.)

24 A Little Queuing Theory: Average Wait Time
Calculating the average wait time in the queue, T_q:
– If something is at the server, it takes on average m1(z) to complete.
– The chance the server is busy is u; the average delay from the task in service is u x m1(z).
– All customers already in line must also complete, each taking T_ser on average.
T_q = u x m1(z) + L_q x T_ser = 1/2 x u x T_ser x (1 + C) + L_q x T_ser
T_q = 1/2 x u x T_ser x (1 + C) + r x T_q x T_ser
T_q = 1/2 x u x T_ser x (1 + C) + u x T_q
T_q x (1 - u) = T_ser x u x (1 + C) / 2
T_q = T_ser x u x (1 + C) / (2 x (1 - u))
Notation:
– r: average number of arriving customers/second
– T_ser: average time to service a customer
– u: server utilization (0..1): u = r x T_ser
– T_q: average time/customer in queue
– L_q: average length of queue: L_q = r x T_q
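The closed-form result lends itself to a tiny helper; a sketch in Python (parameter names follow the slide's notation, and the function name is just illustrative):

```python
def queue_wait_time(T_ser, r, C=1.0):
    """Average time in the queue for a single-server queue, per the slide's formula.

    T_ser : average service time
    r     : arrival rate (tasks per unit time)
    C     : squared coefficient of variance of service time (C = 1 gives the M/M/1 case)
    """
    u = r * T_ser
    assert 0 <= u < 1, "utilization must be below 1 for a steady state"
    return T_ser * u * (1 + C) / (2 * (1 - u))
```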

25 A Little Queuing Theory: M/G/1 and M/M/1
Assumptions so far:
– System in equilibrium.
– Times between two successive arrivals are random.
– The server can start on the next customer immediately after the prior one finishes.
– No limit to the queue length; it works First-In-First-Out.
– All customers in line must complete, each taking T_ser on average.
These describe a "memoryless" or Markovian request arrival process (M, for C = 1, exponentially random), a General service distribution (no restrictions), and 1 server: an M/G/1 queue.
When service times also have C = 1, it is an M/M/1 queue:
T_q = T_ser x u x (1 + C) / (2 x (1 - u)) = T_ser x u / (1 - u)
Notation:
– T_ser: average time to service a customer
– u: server utilization (0..1): u = r x T_ser
– T_q: average time/customer in queue

26 I/O Queuing Performance: An Example
A processor sends 10 x 8 KB disk I/O requests per second; requests and service times are exponentially distributed; the average disk service time is 20 ms. On average:
– How utilized is the disk, u?
– What is the average time spent in the queue, T_q?
– What is the average response time for a disk request, T_sys?
– What is the number of requests in the queue, L_q? In the system, L_sys?
We have:
– r: average number of arriving requests/second = 10
– T_ser: average time to service a request = 20 ms (0.02 s)
We obtain:
– u: server utilization: u = r x T_ser = 10/s x 0.02 s = 0.2
– T_q: average time/request in queue = T_ser x u / (1 - u) = 20 x 0.2/(1 - 0.2) = 20 x 0.25 = 5 ms (0.005 s)
– T_sys: average time/request in system: T_sys = T_q + T_ser = 25 ms
– L_q: average length of queue: L_q = r x T_q = 10/s x 0.005 s = 0.05 requests in queue
– L_sys: average number of tasks in system: L_sys = r x T_sys = 10/s x 0.025 s = 0.25
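The same numbers, reproduced with a short Python sketch (M/M/1, i.e. C = 1, matching the exponential assumption above):

```python
r = 10.0        # requests per second
T_ser = 0.020   # 20 ms service time, in seconds

u = r * T_ser                 # 0.2
T_q = T_ser * u / (1 - u)     # 0.005 s = 5 ms (M/M/1 case, C = 1)
T_sys = T_q + T_ser           # 0.025 s = 25 ms
L_q = r * T_q                 # 0.05 requests waiting
L_sys = r * T_sys             # 0.25 requests in the system
print(u, T_q, T_sys, L_q, L_sys)
```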

27 A Little Queuing Theory: Another Example
A processor sends 20 x 8 KB disk I/Os per second; requests and service times are exponentially distributed; the average disk service time is 12 ms. On average:
– How utilized is the disk?
– What is the number of requests in the queue?
– What is the average time spent in the queue?
– What is the average response time for a disk request?
Notation and results:
– r: average number of arriving customers/second = 20
– T_ser: average time to service a customer = 12 ms
– u: server utilization (0..1): u = r x T_ser = 20/s x 0.012 s = 0.24
– T_q: average time/customer in queue = T_ser x u / (1 - u) = 12 x 0.24/(1 - 0.24) = 12 x 0.32 = 3.8 ms
– T_sys: average time/customer in system: T_sys = T_q + T_ser = 15.8 ms
– L_q: average length of queue: L_q = r x T_q = 20/s x 0.0038 s = 0.076 requests in queue
– L_sys: average number of tasks in system: L_sys = r x T_sys = 20/s x 0.0158 s = 0.32

28 A Little Queuing Theory: Yet Another Example
Suppose a processor sends 10 x 8 KB disk I/Os per second, the squared coefficient of variance is C = 1.5, and the average disk service time is 20 ms. On average:
– How utilized is the disk?
– What is the number of requests in the queue?
– What is the average time spent in the queue?
– What is the average response time for a disk request?
Notation and results:
– r: average number of arriving customers/second = 10
– T_ser: average time to service a customer = 20 ms
– u: server utilization (0..1): u = r x T_ser = 10/s x 0.02 s = 0.2
– T_q: average time/customer in queue = T_ser x u x (1 + C) / (2 x (1 - u)) = 20 x 0.2 x 2.5 / (2 x 0.8) = 6.25 ms
– T_sys: average time/customer in system: T_sys = T_q + T_ser = 26.25 ms (about 26 ms)
– L_q: average length of queue: L_q = r x T_q = 10/s x 0.00625 s = 0.0625 requests in queue
– L_sys: average number of tasks in system: L_sys = r x T_sys = 10/s x 0.02625 s = 0.26

29 Network Attached Storage
Two trends, decreasing disk diameters and increasing network bandwidth, enable high performance storage service on a high speed network:
– Disk diameters: 14" » 10" » 8" » 5.25" » 3.5" » 2.5" » 1.8" » 1.3" » ..., leading to high bandwidth disk systems based on arrays of disks.
– Network bandwidth: 3 Mb/s » 10 Mb/s » 50 Mb/s » 100 Mb/s » 1 Gb/s » 10 Gb/s, giving networks capable of sustaining high bandwidth transfers.
The network provides well-defined physical and logical interfaces: separate the CPU and the storage system.
OS structures support remote file access (network file services).

30 RAID (Redundant Array of Inexpensive Disks)
The term RAID was coined in a 1988 paper by Patterson, Gibson, and Katz of the University of California at Berkeley. In that paper, the authors proposed that large arrays of small, inexpensive disks could be used to replace the large, expensive disks used on mainframes and minicomputers. In such arrays, files are "striped" and/or mirrored across multiple drives. Their analysis showed that the cost per megabyte could be substantially reduced, while both performance and fault tolerance could be increased.
Array reliability (without redundancy): Reliability of N disks = Reliability of 1 disk / N
– 50,000 hours / 70 disks = about 700 hours
– Disk system MTTF drops from about 6 years to about 1 month!
– Arrays without redundancy are too unreliable to be useful.
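A quick check of the array-reliability arithmetic (values from the slide), as a Python sketch:

```python
mttf_disk_hours = 50_000
n_disks = 70

mttf_array_hours = mttf_disk_hours / n_disks   # independent failures, no redundancy
print(mttf_array_hours)                        # ~714 hours
print(mttf_disk_hours / (24 * 365))            # single disk: ~5.7 years ("6 years")
print(mttf_array_hours / (24 * 30))            # array: roughly one month
```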

31 Basic RAID Organizations
– Non-Redundant (RAID Level 0)
– Mirrored (RAID Level 1)
– Memory-Style ECC (RAID Level 2)
– Bit-Interleaved Parity (RAID Level 3)
– Block-Interleaved Parity (RAID Level 4)
– Block-Interleaved Distributed-Parity (RAID Level 5)
– P+Q Redundancy (RAID Level 6)
– Striped Mirrors (RAID Level 10)

32 Manufacturing Advantages of Disk Arrays
(Figure: conventional disk product families span low end to high end with 4 disk designs at 14", 10", 5.25", and 3.5"; a disk array needs only 1 disk design.)

33 RAID Subsystem Organization
(Figure: host -> host adapter -> array controller -> several single-board disk controllers.)
– The host adapter manages the interface to the host, DMA control, buffering, and parity logic.
– Physical device control is often piggy-backed in small-format devices.
– Striping software is off-loaded from the host to the array controller: no application modifications and no reduction of host performance.

34 Non-Redundant (RAID Level 0)
RAID 0 simply stripes data across all drives (minimum 2 drives) to increase data throughput, but provides no fault protection.
– Sequential blocks of data are written across multiple disks in stripes (see the mapping sketch after this slide).
The size of a data block, known as the "stripe width", varies with the implementation but is always at least as large as a disk's sector size.
This scheme offers the best write performance since it never needs to update redundant information. It does not have the best read performance:
– Redundancy schemes that duplicate data, such as mirroring, can perform better on reads by selectively scheduling requests on the disk with the shortest expected seek and rotational delays.
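A minimal Python sketch of the block-to-disk mapping described above (the disk count is an arbitrary illustrative value, and real implementations stripe in multi-sector units rather than single blocks):

```python
def raid0_map(logical_block, num_disks):
    """Map a logical block number to (disk index, block offset on that disk)."""
    disk = logical_block % num_disks      # round-robin striping across the array
    offset = logical_block // num_disks   # position within that disk
    return disk, offset

# With 4 disks, logical blocks 0..3 land on disks 0..3 at offset 0,
# blocks 4..7 land on disks 0..3 at offset 1, and so on.
print([raid0_map(b, 4) for b in range(8)])
```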

35 Optimal Size of the Data Striping Unit
Lee and Katz [1991] use an analytic model of non-redundant disk arrays to derive an equation for the optimal size of the data striping unit (the formula itself appears only as a figure on the original slide), expressed in terms of:
– P, the average disk positioning time,
– X, the average disk transfer rate,
– L, the concurrency,
– Z, the request size, and
– N, the array size in disks.
Their equation predicts that the optimal size of the data striping unit depends only on the product of the rates at which a disk positions and transfers data, PX, rather than on P or X individually.
Lee and Katz show that the optimal striping unit depends on request size; Chen and Patterson show that this dependency can be ignored without significantly affecting performance.

36 Mirrored (RAID Level 1)
Utilizes mirroring or shadowing of data, using twice as many disks as a non-redundant disk array.
Whenever data is written to a disk, the same data is also written to a redundant disk, so that there are always two copies of the information.
When data is read, it can be retrieved from the disk with the shorter queuing, seek, and rotational delays.
If a disk fails, the other copy is used to service requests.
Mirroring is frequently used in database applications where availability and transaction rate are more important than storage efficiency.

37 Memory-Style ECC (RAID Level 2)
RAID 2 performs data striping with a block size of one bit or byte, so that all disks in the array must be read to perform any read operation.
A RAID 2 system would normally have as many data disks as the word size of the computer, typically 32.
In addition, RAID 2 requires extra disks to store an error-correcting code for redundancy:
– With 32 data disks, a RAID 2 system would require 7 additional disks for a Hamming-code ECC.
– Such an array of 39 disks was the subject of a U.S. patent granted to Unisys Corporation in 1988, but no commercial product was ever released.
For a number of reasons, including the fact that modern disk drives contain their own internal ECC, RAID 2 is not a practical disk array scheme.

38 Bit-Interleaved Parity (RAID Level 3)
One can improve upon memory-style ECC disk arrays (RAID 2) by noting that, unlike memory component failures, disk controllers can easily identify which disk has failed. Thus, one can use a single parity disk rather than a set of parity disks to recover lost information.
As with RAID 2, RAID 3 must read all data disks for every read operation:
– This requires synchronized disk spindles for optimal performance, and works best on a single-tasking system with large sequential data requirements. An example is a system used for video editing, where huge video files must be read sequentially.

39 Block-Interleaved Parity (RAID Level 4)
RAID 4 is similar to RAID 3 except that blocks of data, rather than bits/bytes, are striped across the disks.
Read requests smaller than the striping unit access only a single data disk.
Write requests must update the requested data blocks and must also compute and update the parity block:
– For large writes that touch blocks on all disks, parity is easily computed by exclusive-or'ing the new data for each disk.
– For small write requests that update only one data disk, parity is computed by noting how the new data differs from the old data and applying those differences to the parity block.
This can be an important performance improvement for small or random file accesses (like a typical database application) if the application record size can be matched to the RAID 4 block size.

40 Block-Interleaved Distributed-Parity (RAID Level 5)
The block-interleaved distributed-parity disk array eliminates the parity-disk bottleneck present in RAID 4 by distributing the parity uniformly over all of the disks.
An additional, frequently overlooked advantage of distributing the parity is that it also distributes the data over all of the disks rather than over all but one.
RAID 5 has the best small-read, large-read, and large-write performance of any redundant disk array:
– Small write requests, however, are somewhat inefficient compared with redundancy schemes such as mirroring, due to the need to perform read-modify-write operations to update parity.

41 Problems of Disk Arrays: Small Writes
RAID-5 small write algorithm: 1 logical write = 2 physical reads + 2 physical writes.
(Figure: to update D0 to D0' in a stripe D0 D1 D2 D3 P: (1) read the old data D0, (2) read the old parity P, (3) write the new data D0', (4) write the new parity P' = D0 XOR D0' XOR P.)
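The read-modify-write parity update described above, as a small Python sketch operating on byte strings (the block contents are made-up example values; real arrays work on full sectors or stripe units):

```python
def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def raid5_small_write(old_data: bytes, new_data: bytes, old_parity: bytes) -> bytes:
    """New parity = old parity XOR old data XOR new data (2 reads + 2 writes in total)."""
    return xor_blocks(old_parity, xor_blocks(old_data, new_data))

# Example with a 4-byte "block"; the other data blocks in the stripe never need to be read.
old_d0 = b"\x01\x02\x03\x04"
new_d0 = b"\xff\x02\x03\x00"
parity = b"\xaa\xbb\xcc\xdd"
print(raid5_small_write(old_d0, new_d0, parity).hex())
```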

42 P+Q Redundancy (RAID Level 6)
An enhanced RAID 5 with stronger error-correcting codes.
One such scheme, called P+Q redundancy, uses Reed-Solomon codes, in addition to parity, to protect against up to two disk failures using the bare minimum of two redundant disks.
P+Q redundant disk arrays are structurally very similar to block-interleaved distributed-parity disk arrays (RAID 5) and operate in much the same manner:
– In particular, P+Q redundant disk arrays also perform small write operations using a read-modify-write procedure, except that instead of four disk accesses per write request, P+Q redundant disk arrays require six disk accesses, due to the need to update both the 'P' and 'Q' information.

43 RAID 5/6: High I/O Rate Parity
– A logical write becomes four physical I/Os.
– Independent writes are possible because of interleaved parity.
– Reed-Solomon codes ("Q") provide protection during reconstruction.
– Targeted for mixed applications.
(Figure: stripe units laid out across disk columns with increasing logical disk addresses; the parity block rotates across the disks from stripe to stripe: D0 D1 D2 D3 P, then D4 D5 D6 P D7, then D8 D9 P D10 D11, D12 P D13 D14 D15, P D16 D17 D18 D19, D20 D21 D22 D23 P, ...)

44 RAID 10 (Striped Mirrors)
RAID 10 (also known as RAID 1+0) was not mentioned in the original 1988 article that defined RAID 1 through RAID 5. The term is now used to mean the combination of RAID 0 (striping) and RAID 1 (mirroring).
Disks are mirrored in pairs for redundancy and improved performance, then data is striped across multiple disks for maximum performance. (In the slide's diagram, Disks 0 & 2 and Disks 1 & 3 are mirrored pairs.)
RAID 10 obviously uses more disk space to provide redundant data than RAID 5. However, it also provides a performance advantage by reading from all disks in parallel while eliminating the write penalty of RAID 5.

45-48 RAID Levels Comparison: Throughput per Dollar Relative to RAID Level 0 (figure-only slides).

49 RAID Reliability
Redundancy in disk arrays is motivated by the need to overcome disk failures. When only independent disk failures are considered, a simple parity scheme works admirably. Patterson, Gibson, and Katz derive the mean time to failure for a RAID level 5 to be:
MTTF(RAID 5) = MTTF(disk)^2 / (N x (G - 1) x MTTR(disk))
where:
– MTTF(disk) is the mean time to failure of a single disk,
– MTTR(disk) is the mean time to repair of a single disk,
– N is the total number of disks in the disk array, and
– G is the parity group size.
For illustration, assume we have 100 disks, each with a mean time to failure (MTTF) of 200,000 hours and a mean time to repair of one hour. If we organized these 100 disks into parity groups of average size 16, then the mean time to failure of the system would be an astounding 3,000 years! Mean times to failure of this magnitude lower the chances of failure over any given period of time.
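Plugging the slide's illustration numbers into this formula, as a short Python check:

```python
mttf_disk = 200_000   # hours, per disk
mttr_disk = 1         # hours to repair/replace a failed disk
N = 100               # disks in the array
G = 16                # parity group size

mttf_raid5_hours = mttf_disk**2 / (N * (G - 1) * mttr_disk)
print(mttf_raid5_hours)               # ~26.7 million hours
print(mttf_raid5_hours / (24 * 365))  # ~3,000 years, as stated on the slide
```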

50 System Availability: Orthogonal RAIDs
Redundant support components: power supplies, controller, cables.
(Figure: an array controller feeding several string controllers, each driving its own string of disks; a data recovery group, the unit of data redundancy, spans disks on different strings so that the loss of one string controller affects at most one disk per group.)

51 System-Level Availability
Goal: no single points of failure.
(Figure: a fully dual-redundant configuration: the host has duplicated paths through duplicated I/O controllers and array controllers down to the recovery groups of disks.)
With duplicated paths, higher performance can also be obtained when there are no failures.

52 RAID Case Studies: NCR 6298
The NCR 6298 Disk Array Subsystem, released in 1992, is a low-cost RAID subsystem supporting RAID levels 0, 1, 3, and 5. Designed for commercial environments, the system supports up to four controllers, redundant power supplies and fans, and up to 20 3.5" SCSI-2 drives. All components (power supplies, drives, and controllers) can be replaced while the system services requests.
The array controller architecture features a unique lock-step design that requires almost no buffering. For all requests except RAID level 5 writes, data flows directly through the controller to the drives. The controller duplexes the data stream for mirroring configurations and generates parity for RAID level 3 synchronously with the data transfer.
The host interface is fast, wide, differential SCSI-2 (20 MB/s), while the drive channels are fast, narrow SCSI-2 (10 MB/s). Because of the lock-step architecture, transfer bandwidth to the host is limited to 10 MB/s for RAID levels 0, 1, and 5. However, in RAID level 3 configurations, performance on large transfers has been measured at over 14 MB/s.

53 NCR 6298 (figure-only slide)

54 RAID Case Studies: The RAID-II Storage Server
RAID-II is a high-bandwidth network file server designed and implemented at the University of California at Berkeley. RAID-II interfaces a SCSI-based disk array to a HIPPI network. One of RAID-II's unique features is its ability to provide high-bandwidth access from the network to the disks without transferring data through the relatively slow memory system of the file server (a Sun 4/280 workstation).
To do this, the RAID project designed a custom printed-circuit board called the XBUS card. The XBUS card provides a high-bandwidth path between the major system components: the HIPPI network, four VME busses that connect to VME disk controllers, and an interleaved, multi-ported semiconductor memory. The XBUS card also contains a parity computation engine that generates parity for writes and reconstruction on the disk array. The data path between these system components is a 4 x 8 crossbar switch that can sustain approximately 160 MB/s. The entire system is controlled by an external Sun 4/280 file server through a memory-mapped control register interface.
The maximum bandwidth of RAID-II is between 20 and 30 MB/s, enough to support the full disk bandwidth of approximately 20 disk drives.

55 The RAID-II Storage Server (figure-only slide)

