CS61C L26 RAID & Performance (1) Garcia, Fall 2005 © UCB Lecturer PSOE, new dad Dan Garcia www.cs.berkeley.edu/~ddgarcia inst.eecs.berkeley.edu/~cs61c.

Presentation transcript:

CS61C L26 RAID & Performance (1) Garcia, Fall 2005 © UCB Lecturer PSOE, new dad Dan Garcia inst.eecs.berkeley.edu/~cs61c CS61C : Machine Structures Lecture #26 RAID & Performance. There is one handout today at the front and back of the room! Samsung pleads guilty! They were convicted of "price-fixing" DRAM (1999 through 2002) and ordered to pay $0.3 billion (2nd largest fine in a criminal antitrust case). CPS today!

CS61C L26 RAID & Performance (2) Garcia, Fall 2005 © UCB Review. Protocol suites allow heterogeneous networking: another form of the principle of abstraction. Protocols → operation in the presence of failures. Standardization is key for LAN, WAN. Magnetic disks continue rapid advance: 60%/yr capacity, 40%/yr bandwidth, slow improvements on seek and rotation, MB/$ improving 100%/yr? Designs to fit the high-volume form factor.

CS61C L26 RAID & Performance (3) Garcia, Fall 2005 © UCB State of the Art: Two camps (2005)
Performance camp (enterprise apps, servers), e.g., Seagate Cheetah 15K.4: Serial-Attached SCSI, Ultra320 SCSI, or 2Gbit Fibre Channel interface; 146 GB, 3.5-inch disk; 15,000 RPM; 4 discs, 8 heads; 13 watts (idle); 3.5 ms avg. seek; 200 MB/s transfer rate; 1.4 million hrs MTBF; 5-year warranty; $1000 = $6.8/GB
Capacity camp (mainstream, home uses), e.g., Seagate Barracuda: Serial ATA 3Gb/s, Ultra ATA/100 interface; 500 GB, 3.5-inch disk; 7,200 RPM; ? discs, ? heads; 7 watts (idle); 8.5 ms avg. seek; 300 MB/s transfer rate; ? million hrs MTBF; 5-year warranty; $330 = $0.66/GB

CS61C L26 RAID & Performance (4) Garcia, Fall 2005 © UCB 1 inch disk drive! 2005 Hitachi Microdrive: 40 x 30 x 5 mm, 13g 8 GB, 3600 RPM, 1 disk, 10 MB/s, 12 ms seek 400G operational shock, 2000G non-operational Can detect a fall in 4” and retract heads to safety For iPods, cameras, phones 2006 MicroDrive? 16 GB, 12 MB/s! Assuming past trends continue

CS61C L26 RAID & Performance (5) Garcia, Fall 2005 © UCB Where does Flash memory come in? Microdrives and Flash memory (e.g., CompactFlash) are going head-to-head. Both are non-volatile (no power, data ok). Flash benefits: durable & lower power (no moving parts). Flash limitations: finite number of write cycles (wear on the insulating oxide layer around the charge storage mechanism); OEMs work around this by spreading writes out. How does Flash memory work? An NMOS transistor with an additional conductor between gate and source/drain which "traps" electrons; their presence/absence reads as a 1 or 0. wikipedia.org/wiki/Flash_memory
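To make "spreading writes out" concrete, here is a minimal wear-leveling sketch in C, assuming a hypothetical device with 8 physical blocks and a simple round-robin allocator; real flash translation layers are considerably more sophisticated.

```c
#include <stdio.h>

#define NUM_BLOCKS 8   /* hypothetical flash device size, in blocks */

static unsigned erase_count[NUM_BLOCKS];  /* wear leveling keeps these balanced */
static int next_block = 0;                /* round-robin cursor */

/* Pick the next physical block to write, spreading wear evenly
   instead of rewriting one block until its oxide layer wears out. */
int allocate_block(void) {
    int b = next_block;
    next_block = (next_block + 1) % NUM_BLOCKS;
    erase_count[b]++;   /* each rewrite costs one erase cycle */
    return b;
}

int main(void) {
    for (int i = 0; i < 20; i++)
        allocate_block();
    for (int b = 0; b < NUM_BLOCKS; b++)
        printf("block %d erased %u times\n", b, erase_count[b]);
    return 0;
}
```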

CS61C L26 RAID & Performance (6) Garcia, Fall 2005 © UCB What does Apple put in its iPods? (Thanks to Andy Dahl for the tip.)
shuffle: Toshiba 0.5, 1 GB flash
nano: Samsung 2, 4 GB flash
mini: Hitachi 1-inch 4, 6 GB Microdrive
iPod: Toshiba 1.8-inch 30, 60 GB disk (MK1504GAL)

CS61C L26 RAID & Performance (7) Garcia, Fall 2005 © UCB Use Arrays of Small Disks… [Diagram: conventional designs span 4 disk sizes (14", 10", 5.25", 3.5") from low end to high end; a disk array uses a single 3.5" disk design.] Katz and Patterson asked in 1987: can smaller disks be used to close the gap in performance between disks and CPUs?

CS61C L26 RAID & Performance (8) Garcia, Fall 2005 © UCB Replace a Small Number of Large Disks with a Large Number of Small Disks! (1988 disks)

           IBM 3390K    IBM 3.5" disk   x70 array
Capacity   20 GBytes    320 MBytes      23 GBytes
Volume     97 cu. ft.   0.1 cu. ft.     11 cu. ft.  (9X)
Power      3 KW         11 W            1 KW        (3X)
Data Rate  15 MB/s      1.5 MB/s        120 MB/s    (8X)
I/O Rate   600 I/Os/s   55 I/Os/s       3900 I/Os/s (6X)
MTTF       250 KHrs     50 KHrs         ??? Hrs
Cost       $250K        $2K             $150K

Disk arrays potentially offer high performance, high MB per cu. ft., and high MB per KW, but what about reliability?

CS61C L26 RAID & Performance (9) Garcia, Fall 2005 © UCB Array Reliability. Reliability: whether or not a component has failed, measured as Mean Time To Failure (MTTF). Reliability of N disks = Reliability of 1 disk ÷ N (assuming failures are independent): 50,000 hours ÷ 70 disks = ~700 hours. Disk system MTTF drops from 6 years to 1 month! Disk arrays are too unreliable to be useful!
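A quick sanity check of the slide's arithmetic, as a sketch in C (the 50,000-hour disk MTTF and 70-disk array are the slide's numbers):

```c
#include <stdio.h>

int main(void) {
    double mttf_disk  = 50000.0;             /* hours, one disk (slide) */
    int    n_disks    = 70;
    /* Independent failures: array MTTF = single-disk MTTF / N. */
    double mttf_array = mttf_disk / n_disks; /* ~714 hours */

    printf("single disk: %.1f years\n", mttf_disk / (24.0 * 365));
    printf("70-disk array: %.0f hours (~%.0f days, i.e. ~1 month)\n",
           mttf_array, mttf_array / 24.0);
    return 0;
}
```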

CS61C L26 RAID & Performance (10) Garcia, Fall 2005 © UCB Review. Magnetic disks continue rapid advance: 2x/yr capacity, 2x/2-yr bandwidth, slow improvements on seek and rotation, MB/$ 2x/yr! Designs to fit the high-volume form factor. RAID motivation: in the 1980s there were 2 classes of drives: expensive, big ones for enterprises and small ones for PCs. They thought "make one big out of many small!" Higher performance with more disk arms per $. Adds the option of a small # of extra disks (the "R"). Invented at Cal by CS Profs Katz & Patterson.

CS61C L26 RAID & Performance (11) Garcia, Fall 2005 © UCB Redundant Arrays of (Inexpensive) Disks. Files are "striped" across multiple disks. Redundancy yields high data availability. Availability: service still provided to the user, even if some components have failed. Disks will still fail; contents are reconstructed from data redundantly stored in the array. ⇒ Capacity penalty to store redundant info. ⇒ Bandwidth penalty to update redundant info.

CS61C L26 RAID & Performance (12) Garcia, Fall 2005 © UCB Berkeley History, RAID-I. RAID-I (1989) consisted of a Sun 4/280 workstation with 128 MB of DRAM, four dual-string SCSI controllers, 5.25-inch SCSI disks, and specialized disk-striping software. Today RAID is a > $32 billion industry; 80% of non-PC disks are sold in RAIDs.

CS61C L26 RAID & Performance (13) Garcia, Fall 2005 © UCB "RAID 0": No redundancy = "AID". Assume we have 4 disks of data for this example, organized in blocks. Large accesses are faster since we transfer from several disks at once. (This and the next 5 slides are from RAID.edu.)
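A minimal sketch of the striping arithmetic: with the slide's 4-disk round-robin layout, logical block i lands on disk i mod 4 at offset i / 4 (the layout choice is illustrative).

```c
#include <stdio.h>

#define NUM_DISKS 4   /* the slide's example */

/* RAID 0 mapping: consecutive logical blocks round-robin across disks,
   so a large sequential access reads all four disks in parallel. */
void locate(int logical_block, int *disk, int *offset) {
    *disk   = logical_block % NUM_DISKS;
    *offset = logical_block / NUM_DISKS;
}

int main(void) {
    for (int b = 0; b < 8; b++) {
        int d, off;
        locate(b, &d, &off);
        printf("logical block %d -> disk %d, offset %d\n", b, d, off);
    }
    return 0;
}
```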

CS61C L26 RAID & Performance (14) Garcia, Fall 2005 © UCB RAID 1: Mirror data Each disk is fully duplicated onto its “mirror” Very high availability can be achieved Bandwidth reduced on write: 1 Logical write = 2 physical writes Most expensive solution: 100% capacity overhead
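For contrast, a sketch of the RAID 1 write and read paths, using two in-memory arrays as stand-ins for the mirrored disks (illustrative, not a driver):

```c
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 16
#define NUM_BLOCKS 4

/* Two in-memory "disks" standing in for the mirror pair. */
static char disk[2][NUM_BLOCKS][BLOCK_SIZE];

/* 1 logical write = 2 physical writes: the bandwidth penalty. */
void mirror_write(int block, const char *data) {
    strncpy(disk[0][block], data, BLOCK_SIZE - 1);
    strncpy(disk[1][block], data, BLOCK_SIZE - 1);
}

/* Reads may use either copy; alternating spreads the read load. */
const char *mirror_read(int block) {
    static int turn = 0;
    const char *data = disk[turn][block];
    turn ^= 1;
    return data;
}

int main(void) {
    mirror_write(0, "hello raid1");
    printf("%s\n", mirror_read(0));  /* served by one mirror */
    printf("%s\n", mirror_read(0));  /* served by the other  */
    return 0;
}
```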

CS61C L26 RAID & Performance (15) Garcia, Fall 2005 © UCB RAID 3: Parity (RAID 2 has bit-level striping). Parity is computed across the group to protect against hard disk failures and stored in a P (parity) disk. Logically, a single high-capacity, high-transfer-rate disk. 25% capacity cost for parity in this example vs. 100% for RAID 1 (5 disks vs. 8 disks).
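The parity is plain XOR across the stripe, which is why a single failed disk can be rebuilt from the survivors. A minimal sketch with 4 data blocks plus parity, matching the slide's example (the data values are made up):

```c
#include <stdio.h>

#define DATA_DISKS 4

int main(void) {
    unsigned char stripe[DATA_DISKS] = {0x12, 0x34, 0x56, 0x78};

    /* Parity = XOR of all data blocks in the stripe. */
    unsigned char parity = 0;
    for (int d = 0; d < DATA_DISKS; d++)
        parity ^= stripe[d];

    /* Disk 2 dies: XOR the survivors with parity to rebuild it. */
    unsigned char rebuilt = parity;
    for (int d = 0; d < DATA_DISKS; d++)
        if (d != 2)
            rebuilt ^= stripe[d];

    printf("lost 0x%02x, rebuilt 0x%02x\n", stripe[2], rebuilt);
    return 0;
}
```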

CS61C L26 RAID & Performance (16) Garcia, Fall 2005 © UCB RAID 4: parity plus small-sized accesses. RAID 3 relies on the parity disk to discover errors on a read. But every sector has an error detection field. RAID 4 relies on the error detection field to catch errors on read, not on the parity disk. This allows small independent reads to go to different disks simultaneously.

CS61C L26 RAID & Performance (17) Garcia, Fall 2005 © UCB Inspiration for RAID 5. Small writes (write to one disk): Option 1: read the other data disks, create the new sum, and write it to the parity disk (accesses all disks). Option 2: since P has the old sum, compare old data to new data and add the difference to P: 1 logical write = 2 physical reads + 2 physical writes to 2 disks. The parity disk is the bottleneck for small writes: writing to A0 and B1 means both write to the P disk. [Diagram: stripes A0 B0 C0 D0 and A1 B1 C1 D1 with their parity blocks P.]
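Option 2 works because parity is XOR: P_new = P_old ^ D_old ^ D_new, so only the data disk and the parity disk are touched. A sketch of that update (values illustrative):

```c
#include <stdio.h>

int main(void) {
    unsigned char d_old = 0x3C, d_new = 0x5A;
    unsigned char p_old = 0xA7;  /* old parity covering d_old's stripe */

    /* 2 physical reads (old data, old parity), then fold in the
       difference: p_new = p_old ^ d_old ^ d_new. */
    unsigned char p_new = p_old ^ d_old ^ d_new;

    /* ...followed by 2 physical writes (new data, new parity). */
    printf("p_old = 0x%02x, p_new = 0x%02x\n", p_old, p_new);
    return 0;
}
```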

CS61C L26 RAID & Performance (18) Garcia, Fall 2005 © UCB RAID 5: Rotated Parity, faster small writes Independent writes possible because of interleaved parity Example: write to A0, B1 uses disks 0, 1, 4, 5, so can proceed in parallel Still 1 small write = 4 physical disk accesses

CS61C L26 RAID & Performance (19) Garcia, Fall 2005 © UCB RAID products: Software, Chips, Systems. RAID was a $32B industry in 2002; 80% of non-PC disks were sold in RAIDs.

CS61C L26 RAID & Performance (20) Garcia, Fall 2005 © UCB Margin of Safety in CS&E? Patterson reflects… Failures seen in practice: an operator removing a good disk instead of the bad disk; temperature or vibration causing a second failure before repair. In retrospect, we suggested RAID 5 for what we anticipated, but should have suggested RAID 6 (double failure OK) for the unanticipated, as a safety margin…

CS61C L26 RAID & Performance (21) Garcia, Fall 2005 © UCB Peer Instruction.
1. RAID 1 (mirror) and 5 (rotated parity) help with performance and availability.
2. RAID 1 has higher cost than RAID 5.
3. Small writes on RAID 5 are slower than on RAID 1.
Choices (T/F for statements 1, 2, 3): 1: FFF, 2: FFT, 3: FTF, 4: FTT, 5: TFF, 6: TFT, 7: TTF, 8: TTT

CS61C L26 RAID & Performance (22) Garcia, Fall 2005 © UCB Peer Instruction Answer.
1. RAID 1 (mirror) and 5 (rotated parity) help with performance and availability. TRUE: all RAID levels (0-5) help with performance; only RAID 0 doesn't help availability.
2. RAID 1 has higher cost than RAID 5. TRUE: surely! Must buy 2x disks rather than 1.25x (from the diagram; in practice even less).
3. Small writes on RAID 5 are slower than on RAID 1. TRUE: RAID 5 (2 reads, 2 writes) vs. RAID 1 (2 writes); latency is worse, throughput (parallel writes) is better.
So the answer is 8: TTT.

CS61C L26 RAID & Performance (23) Garcia, Fall 2005 © UCB Administrivia. Please attend Wednesday's lecture! HKN evaluations at the end. Compete in the Performance contest! Deadline is Mon 11:59pm, 1 week from now. Sp04 final exam + solutions are online! Final review: Sun 2pm in 10 Evans. Final: Sat 12:30pm in 2050 VLSB. Only bring pen{,cil}s and two 8.5"x11" handwritten sheets + the green sheet. Leave backpacks, books, calculators, cells & pagers at home!

CS61C L26 RAID & Performance (24) Garcia, Fall 2005 © UCB Upcoming Calendar.
Week #15 (Last Week o' Classes): Mon: Performance. Wed: LAST CLASS: Summary, Review, & HKN Evals. Thu Lab: I/O Networking & 61C Feedback Survey.
Week #16: Sun: 2pm Review, 10 Evans. Mon: Performance competition due midnight. Sat: FINAL EXAM, 12:30pm-3:30pm, 2050 VLSB (Performance awards).

CS61C L26 RAID & Performance (25) Garcia, Fall 2005 © UCB Performance. Purchasing perspective: given a collection of machines (or upgrade options), which has the best performance? the least cost? the best performance/cost? Computer designer perspective: faced with design options, which has the best performance improvement? the least cost? the best performance/cost? All require a basis for comparison and a metric for evaluation. Solid metrics lead to solid progress!

CS61C L26 RAID & Performance (26) Garcia, Fall 2005 © UCB Two Notions of "Performance"

Plane              Top Speed   DC to Paris   Passengers   Throughput (pmph)
Boeing 747         610 mph     6.5 hours     470          286,700
BAD/Sud Concorde   1350 mph    3 hours       132          178,200

Which has higher performance? Time to deliver 1 passenger? Time to deliver 400 passengers? In a computer, the time for 1 job is called Response Time or Execution Time; jobs per day are called Throughput or Bandwidth.

CS61C L26 RAID & Performance (27) Garcia, Fall 2005 © UCB Definitions. Performance is in units of things per second: bigger is better. If we are primarily concerned with response time: performance(x) = 1 / execution_time(x). "F(ast) is n times faster than S(low)" means: n = performance(F) / performance(S) = execution_time(S) / execution_time(F).

CS61C L26 RAID & Performance (28) Garcia, Fall 2005 © UCB Example of Response Time v. Throughput. Time of Concorde vs. Boeing 747? Concorde is 6.5 hours / 3 hours = 2.2 times faster. Throughput of Boeing vs. Concorde? Boeing 747: 286,700 pmph / 178,200 pmph = 1.6 times faster. Boeing is 1.6 times ("60%") faster in terms of throughput; Concorde is 2.2 times ("120%") faster in terms of flying time (response time). We will focus primarily on execution time for a single job.

CS61C L26 RAID & Performance (29) Garcia, Fall 2005 © UCB Confusing Wording on Performance. Will (try to) stick to "n times faster"; it's less confusing than "m % faster". As "faster" means both increased performance and decreased execution time, to reduce confusion we will (and you should) use "improve performance" or "improve execution time".

CS61C L26 RAID & Performance (30) Garcia, Fall 2005 © UCB What is Time? Straightforward definition of time: total time to complete a task, including disk accesses, memory accesses, I/O activities, operating system overhead, ... called "real time", "response time", or "elapsed time". Alternative: just the time the processor (CPU) is working on your program (since multiple processes run at the same time): "CPU execution time" or "CPU time". Often divided into system CPU time (in the OS) and user CPU time (in the user program).

CS61C L26 RAID & Performance (31) Garcia, Fall 2005 © UCB How to Measure Time? User Time → seconds. CPU Time: computers are constructed using a clock that runs at a constant rate and determines when events take place in the hardware. These discrete time intervals are called clock cycles (or informally, clocks or cycles). Length of a clock period: clock cycle time (e.g., 2 nanoseconds or 2 ns) and clock rate (e.g., 500 megahertz or 500 MHz), which is the inverse of the clock period; use these!

CS61C L26 RAID & Performance (32) Garcia, Fall 2005 © UCB Measuring Time using Clock Cycles (1/2)
CPU execution time for a program = Clock Cycles for a program × Clock Cycle Time
or = Clock Cycles for a program ÷ Clock Rate

CS61C L26 RAID & Performance (33) Garcia, Fall 2005 © UCB Measuring Time using Clock Cycles (2/2). One way to define clock cycles: Clock Cycles for a program = Instructions for a program (called "Instruction Count") × Average Clock Cycles Per Instruction (abbreviated "CPI"). CPI is one way to compare two machines with the same instruction set, since the Instruction Count would be the same.

CS61C L26 RAID & Performance (34) Garcia, Fall 2005 © UCB Performance Calculation (1/2) CPU execution time for program = Clock Cycles for program x Clock Cycle Time Substituting for clock cycles: CPU execution time for program = (Instruction Count x CPI) x Clock Cycle Time = Instruction Count x CPI x Clock Cycle Time

CS61C L26 RAID & Performance (35) Garcia, Fall 2005 © UCB Performance Calculation (2/2)
CPU time = (Instructions / Program) × (Cycles / Instruction) × (Seconds / Cycle) = Seconds / Program
Product of all 3 terms: if a term is missing, you can't predict time, the real measure of performance.
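A small worked example of the formula in C; the instruction count, CPI, and clock rate below are illustrative (the 2.2 CPI and 500 MHz figures echo numbers used elsewhere in the lecture, not measurements):

```c
#include <stdio.h>

int main(void) {
    double instructions = 1e9;    /* instruction count (illustrative) */
    double cpi          = 2.2;    /* average clock cycles/instruction */
    double clock_rate   = 500e6;  /* 500 MHz, i.e. 2 ns cycle time    */

    /* CPU time = IC x CPI x cycle time = IC x CPI / clock rate. */
    double cycles   = instructions * cpi;
    double cpu_time = cycles / clock_rate;

    printf("cycles = %.2e, CPU time = %.2f s\n", cycles, cpu_time);
    return 0;
}
```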

CS61C L26 RAID & Performance (36) Garcia, Fall 2005 © UCB How to Calculate the 3 Components? Clock Cycle Time: in the specification of the computer (Clock Rate in advertisements). Instruction Count: count instructions in the loop of a small program; use a simulator to count instructions; or read a hardware counter in a special register (Pentium II, III, 4). CPI: calculate as Execution Time / (Clock Cycle Time × Instruction Count); or read a hardware counter in a special register (Pentium II, III, 4).

CS61C L26 RAID & Performance (37) Garcia, Fall 2005 © UCB Calculating CPI Another Way. First calculate the CPI for each individual instruction type (add, sub, and, etc.). Next calculate the frequency of each instruction type. Finally, multiply these two for each type and add them up to get the final CPI (the weighted sum).

CS61C L26 RAID & Performance (38) Garcia, Fall 2005 © UCB Example (RISC processor)

Op       Freq_i   CPI_i   Prod   (% Time)
ALU      50%      1       0.5    (23%)
Load     20%      5       1.0    (45%)
Store    10%      3       0.3    (14%)
Branch   20%      2       0.4    (18%)
Total CPI                 2.2

Instruction Mix (where time is spent). What if branch instructions were twice as fast?
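The table's weighted sum, in code, along with the slide's what-if (halving the branch CPI from 2 to 1 drops the total from 2.2 to 2.0):

```c
#include <stdio.h>

int main(void) {
    /* Instruction mix from the slide: ALU, Load, Store, Branch. */
    double freq[] = {0.50, 0.20, 0.10, 0.20};
    double cpi[]  = {1.0,  5.0,  3.0,  2.0};

    double total = 0;
    for (int i = 0; i < 4; i++)
        total += freq[i] * cpi[i];  /* weighted sum */
    printf("CPI = %.1f\n", total);  /* 2.2 */

    cpi[3] = 1.0;                   /* branches twice as fast */
    total = 0;
    for (int i = 0; i < 4; i++)
        total += freq[i] * cpi[i];
    printf("CPI with fast branches = %.1f\n", total);  /* 2.0 */
    return 0;
}
```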

CS61C L26 RAID & Performance (39) Garcia, Fall 2005 © UCB "And in conclusion…" RAID motivation: in the 1980s there were 2 classes of drives: expensive, big ones for enterprises and small ones for PCs. They thought "make one big out of many small!" Higher performance with more disk arms per $; adds the option of a small # of extra disks (the R). Invented at Cal by CS Profs Katz & Patterson. Latency v. Throughput. Performance doesn't depend on any single factor: you need Instruction Count, Clocks Per Instruction (CPI), and Clock Rate to get valid estimations. User Time: the time the user waits for the program to execute; depends heavily on how the OS switches between tasks. CPU Time: the time spent executing a single program; depends solely on processor design (datapath, pipelining effectiveness, caches, etc.). CPU time = (Instructions / Program) × (Cycles / Instruction) × (Seconds / Cycle).