CS152 Computer Architecture and Engineering
Lecture 24: Buses (continued), Queueing Theory, Disk I/O
November 28, 2001, John Kubiatowicz (http.cs.berkeley.edu/~kubitron) lecture slides:

Recap: Making address translation practical: TLB
°Virtual memory => main memory acts like a cache for the disk
°The page table maps virtual page numbers to physical frames
°The Translation Look-aside Buffer (TLB) is a cache of recently used translations
(Figure: a virtual address (virtual page number + offset) is translated through the TLB, or through the page table on a TLB miss, into a physical address (frame number + offset) in the physical memory space.)

Recap: Overlapped TLB & Cache Access
(Figure: the 20-bit virtual page number drives an associative TLB lookup while the page offset indexes the 4 KB cache; the frame number (FN) from the TLB is compared against the cache tag to decide hit/miss.)
°If we do this in parallel, we have to be careful, however: what if the cache size is increased to 8 KB? With 4 KB pages, an 8 KB direct-mapped cache would need one index bit that is only known after translation, so it could no longer be indexed entirely with untranslated bits.

Recap: A Three-Bus System (+ backside cache)
°A small number of backplane buses tap into the processor-memory bus
  - The processor-memory bus is used only for processor-memory traffic
  - I/O buses are connected to the backplane bus
°Advantage: loading on the processor bus is greatly reduced
(Figure: the processor has a backside cache bus to an L2 cache; processor and memory share the processor-memory bus; bus adaptors bridge down to the I/O buses.)

Recap: Main components of Intel Chipset: Pentium II/III
°Northbridge: handles memory and graphics
°Southbridge: I/O
  - PCI bus
  - Disk controllers
  - USB controllers
  - Audio
  - Serial I/O
  - Interrupt controller
  - Timers

Recap: Synchronous and Asynchronous Buses
°Synchronous bus:
  - Includes a clock in the control lines and a fixed protocol relative to the clock
  - Advantage: little logic and very fast
  - Disadvantages: every device on the bus must run at the same clock rate, and to avoid clock skew the bus cannot be long if it is fast
°Asynchronous bus:
  - It is not clocked, so it can accommodate a wide range of devices and can be lengthened without worrying about clock skew
  - It requires a handshaking protocol (sketched below)
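To make the handshaking idea concrete, here is a minimal software sketch of a four-phase request/acknowledge exchange for an asynchronous read. It is only an illustration: the class names, signal names, and step ordering are invented for this sketch, not taken from the slides or from any particular bus standard.

```python
# Minimal sketch of a four-phase asynchronous handshake (illustrative only;
# names and ordering are invented, not from any particular bus standard).

class AsyncBus:
    def __init__(self):
        self.req = False   # driven by the master
        self.ack = False   # driven by the slave
        self.data = None   # multiplexed address/data lines

    def master_read(self, slave, address):
        self.data = address
        self.req = True            # phase 1: present the address, raise Req
        slave.observe(self)        # (in hardware the slave simply watches the wires)
        while not self.ack:        # phase 2: wait, at the slave's own pace, for Ack + data
            slave.observe(self)
        value = self.data
        self.req = False           # phase 3: drop Req, releasing the bus
        slave.observe(self)
        while self.ack:            # phase 4: wait for Ack to drop before the next transfer
            slave.observe(self)
        return value

class SlowSlave:
    def __init__(self, memory):
        self.memory = memory
    def observe(self, bus):
        if bus.req and not bus.ack:      # new request: answer whenever we are ready
            bus.data = self.memory[bus.data]
            bus.ack = True
        elif not bus.req and bus.ack:    # master is done: complete the handshake
            bus.ack = False

bus = AsyncBus()
slave = SlowSlave(memory={0x40: 0xDEADBEEF})
print(hex(bus.master_read(slave, 0x40)))   # -> 0xdeadbeef
```

Because each phase waits for the other side rather than for a clock edge, devices of very different speeds can share the same protocol.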

Multiple Potential Bus Masters: the Need for Arbitration
°Bus arbitration scheme:
  - A bus master wanting to use the bus asserts a bus request
  - A bus master cannot use the bus until its request is granted
  - A bus master must signal the arbiter when it has finished using the bus
°Bus arbitration schemes usually try to balance two factors:
  - Bus priority: the highest-priority device should be serviced first
  - Fairness: even the lowest-priority device should never be completely locked out of the bus
°Bus arbitration schemes can be divided into four broad classes:
  - Daisy chain arbitration
  - Centralized, parallel arbitration
  - Distributed arbitration by self-selection: each device wanting the bus places a code indicating its identity on the bus
  - Distributed arbitration by collision detection: each device just "goes for it", and problems are found after the fact

Arbitration: Obtaining Access to the Bus
°One of the most important issues in bus design: how is the bus reserved by a device that wishes to use it?
°Chaos is avoided by a master-slave arrangement:
  - Only the bus master can control access to the bus: it initiates and controls all bus requests
  - A slave responds to read and write requests
°The simplest system: the processor is the only bus master
  - All bus requests must be controlled by the processor
  - Major drawback: the processor is involved in every transaction
(Figure: bus master and bus slave; the master initiates requests, and data can go either way.)

The Daisy Chain Bus Arbitration Scheme
°Advantage: simple
°Disadvantages:
  - Cannot assure fairness: a low-priority device may be locked out indefinitely (a toy model of this appears below)
  - The propagation of the daisy-chained grant signal also limits the bus speed
(Figure: the arbiter's Grant line is daisy-chained from Device 1 (highest priority) through Device N (lowest priority); Request and Release are shared wired-OR lines.)
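To make the fairness problem concrete, here is a toy simulation of daisy-chain style fixed-priority granting. The device count and request pattern are made up for illustration: as long as the highest-priority device keeps requesting, a lower-priority device never gets the bus.

```python
# Toy model of daisy-chain (fixed-priority) arbitration. Device 0 sits closest
# to the arbiter, so the grant is consumed by the first requesting device it reaches.

def daisy_chain_grant(requests):
    """Return the index of the device that wins the bus this cycle, or None."""
    for device, wants_bus in enumerate(requests):   # walk the chain in priority order
        if wants_bus:
            return device
    return None

grants = []
for cycle in range(8):
    requests = [True, False, True]   # devices 0 and 2 both request every cycle
    grants.append(daisy_chain_grant(requests))

print(grants)   # [0, 0, 0, 0, 0, 0, 0, 0] -- device 2 is locked out indefinitely
```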

Centralized Parallel Arbitration
°Used in essentially all processor-memory buses and in high-speed I/O buses
(Figure: each device has its own Request and Grant lines running to a central bus arbiter.)

Increasing the Bus Bandwidth
°Separate versus multiplexed address and data lines:
  - Address and data can be transmitted in one bus cycle if separate address and data lines are available
  - Cost: (a) more bus lines, (b) increased complexity
°Data bus width:
  - Increasing the width of the data bus lets transfers of multiple words take fewer bus cycles
  - Example: the SPARCstation 20's memory bus is 128 bits wide
  - Cost: more bus lines
°Block transfers (see the bandwidth sketch below):
  - Allow the bus to transfer multiple words in back-to-back bus cycles
  - Only one address needs to be sent at the beginning
  - The bus is not released until the last word is transferred
  - Cost: (a) increased complexity, (b) longer response times for other pending requests
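As a rough illustration of why block transfers raise bandwidth, the sketch below compares one-word transactions with 4-word block transfers on a hypothetical bus. All of the cycle counts, the 100 MHz clock, and the 64-bit width are assumed values chosen for the example, not numbers from the slides.

```python
# Hypothetical bus; every parameter below is an assumption for illustration only.
BUS_CLOCK_HZ    = 100e6   # 100 MHz bus clock
WORD_BYTES      = 8       # 64-bit data bus
ADDR_CYCLES     = 1       # cycles to send the address
MEM_WAIT_CYCLES = 4       # cycles the memory needs before the first data word is ready
DATA_CYCLES     = 1       # cycles to transfer each data word

def bandwidth_mb_per_s(words_per_block):
    """Sustained bandwidth when every transaction moves words_per_block words."""
    cycles = ADDR_CYCLES + MEM_WAIT_CYCLES + words_per_block * DATA_CYCLES
    bytes_moved = words_per_block * WORD_BYTES
    return bytes_moved / (cycles / BUS_CLOCK_HZ) / 1e6

print(f"1-word transfers: {bandwidth_mb_per_s(1):6.1f} MB/s")   # ~133 MB/s
print(f"4-word blocks:    {bandwidth_mb_per_s(4):6.1f} MB/s")   # ~356 MB/s
```

Sending the address and paying the memory wait only once per block is what buys the extra bandwidth; the cost is that the bus is held longer per transaction.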

Increasing the Transaction Rate on a Multimaster Bus
°Overlapped arbitration: perform arbitration for the next transaction during the current transaction
°Bus parking: a master can hold onto the bus and perform multiple transactions as long as no other master makes a request
°Overlapped address / data phases (previous slide): requires one of the above techniques
°Split-phase (or packet-switched) bus: completely separate the address and data phases; arbitrate separately for each; the address phase yields a tag which is matched with the data phase
°"All of the above" in most modern buses

What is DMA (Direct Memory Access)?
°Typical I/O devices must transfer large amounts of data to the processor's memory:
  - A disk must transfer a complete block
  - Large packets arrive from the network
  - Regions of the frame buffer must be moved
°DMA gives an external device the ability to access memory directly: much lower overhead than having the processor request one word at a time
°Issue: cache coherence: what if an I/O device writes data that is currently in the processor cache?
  - The processor may never see the new data!
  - Solutions:
    - Flush the cache on every I/O operation (expensive)
    - Have hardware invalidate cache lines (remember "coherence" cache misses?)

Administrivia
°Get going on Lab 7!
  - Status update due Monday in section
  - Talk with TAs about the state of your project
°Midterm II on Friday, 5:30 – 8:30 in 277 Cory
  - Pizza afterwards
  - Topics: pipelining; caches/memory systems; buses and I/O (disk equation); power; queueing theory?
  - Can bring 1 page of notes: handwritten, double-sided
  - CLOSED BOOK!

I/O System Design Issues
°Key concerns: performance, expandability, resilience in the face of failure
(Figure: processor and cache sit on a memory-I/O bus along with main memory and I/O controllers for a disk, graphics, and a network; devices signal the processor via interrupts.)

I/O Device Examples
Device            Behavior           Partner    Data Rate (KB/sec)
Keyboard          Input              Human      0.01
Mouse             Input              Human      0.02
Line Printer      Output             Human      1.00
Floppy disk       Storage            Machine
Laser Printer     Output             Human
Optical Disk      Storage            Machine
Magnetic Disk     Storage            Machine    5,000.00
Network-LAN       Input or Output    Machine    20 – 1,000.00
Graphics Display  Output             Human      30,000.00

I/O System Performance
°I/O system performance depends on many aspects of the system ("limited by the weakest link in the chain"):
  - The CPU
  - The memory system: internal and external caches, main memory
  - The underlying interconnection (buses)
  - The I/O controller
  - The I/O device
  - The speed of the I/O software (operating system)
  - The efficiency of the software's use of the I/O devices
°Two common performance metrics:
  - Throughput: I/O bandwidth
  - Response time: latency

Simple Producer-Server Model
°Throughput: the number of tasks completed by the server in unit time
  - To get the highest possible throughput, the server should never be idle, so the queue should never be empty
°Response time: begins when a task is placed in the queue and ends when it is completed by the server
  - To minimize response time, the queue should be empty, so the server will be idle
(Figure: Producer -> Queue -> Server)

Throughput versus Response Time
(Figure: response time (ms) versus percentage of maximum throughput (20%, 40%, 60%, 80%, 100%); latency grows rapidly as throughput approaches its maximum.)

Throughput Enhancement
°In general, throughput can be improved by throwing more hardware at the problem, which also reduces load-related latency
°Response time is much harder to reduce: ultimately it is limited by the speed of light (but we're far from it)
(Figure: one producer feeding a queue drained by two servers.)

Technology Trends
°Disk capacity now doubles every 18 months; before 1990 it doubled every 36 months
°Today: processing power doubles every 18 months
°Today: memory size doubles every 18 months (4X every 3 years)
°Today: disk capacity doubles every 18 months
°Disk positioning rate (seek + rotate) doubles every ten years!
°The result is the I/O GAP

Storage Technology Drivers
°Driven by the prevailing computing paradigm
  - 1950s: migration from batch to on-line processing
  - 1990s: migration to ubiquitous computing
    - computers in phones, books, cars, video cameras, …
    - nationwide fiber optic network with wireless tails
°Effects on the storage industry:
  - Embedded storage: smaller, cheaper, more reliable, lower power
  - Data utilities: high capacity, hierarchically managed storage

Disk History
°1973: 1.7 Mbit/sq. in, 140 MBytes (capacity of unit shown)
°1979: 7.7 Mbit/sq. in, 2,300 MBytes
source: New York Times, 2/23/98, page C3, "Makers of disk drives crowd even more data into even smaller spaces"

Disk History
°63 Mbit/sq. in, 60,000 MBytes
°1997: 1450 Mbit/sq. in, 2,300 MBytes
°1997: 3090 Mbit/sq. in, 8,100 MBytes
source: New York Times, 2/23/98, page C3, "Makers of disk drives crowd even more data into even smaller spaces"

MBits per square inch: DRAM as % of Disk over time
(Chart: DRAM vs. disk areal density at three points in time, showing 0.2 v. 1.7 Mbit/sq. in, 9 v. 22 Mbit/sq. in, and DRAM at 470 Mbit/sq. in in the latest year shown.)
source: New York Times, 2/23/98, page C3, "Makers of disk drives crowd even more data into even smaller spaces"

Nano-layered Disk Heads
°The special sensitivity of the disk head comes from the "Giant Magneto-Resistive" (GMR) effect
°IBM is the leader in this technology
  - Same technology as the TMJ-RAM breakthrough described in an earlier class
(Figure: cross-section of the disk head, including the coil for writing.)

Organization of a Hard Magnetic Disk
°Typical numbers (depending on the disk size):
  - 500 to 2,000 tracks per surface
  - 32 to 128 sectors per track
    - A sector is the smallest unit that can be read or written
°Traditionally all tracks have the same number of sectors:
  - Constant bit density would record more sectors on the outer tracks
  - Recently relaxed: constant bit size, with speed varying by track location
(Figure: platters, tracks, sectors.)

Magnetic Disk Characteristics
°Cylinder: all the tracks under the heads at a given arm position, across all surfaces
°Reading or writing data is a three-stage process:
  - Seek time: position the arm over the proper track
  - Rotational latency: wait for the desired sector to rotate under the read/write head
  - Transfer time: transfer a block of bits (a sector) under the read/write head
°Average seek time as reported by the industry:
  - Typically in the range of 8 ms to 12 ms
  - Computed as (sum of the times for all possible seeks) / (total number of possible seeks)
°Due to the locality of disk references, the actual average seek time may be only 25% to 33% of the advertised number
(Figure: sector, track, cylinder, head, platter.)

Typical Numbers for a Magnetic Disk
°Average seek time as reported by the industry:
  - Typically in the range of 8 ms to 12 ms
  - Due to the locality of disk references, it may only be 25% to 33% of the advertised number
°Rotational latency:
  - Most disks rotate at 3,600 to 7,200 RPM
  - Approximately 16 ms to 8 ms per revolution, respectively
  - The average latency to the desired information is half a rotation: 8 ms at 3,600 RPM, 4 ms at 7,200 RPM
°Transfer time is a function of:
  - Transfer size (usually a sector): 1 KB per sector
  - Rotation speed: 3,600 RPM and up
  - Recording density: bits per inch on a track
  - Diameter: typical diameters range from 2.5 to 5.25 inches
  - Typical values: 2 to 40 MB per second
(Figure: sector, track, cylinder, head, platter.)

Disk I/O Performance
°Disk Access Time = Seek time + Rotational Latency + Transfer time + Controller Time + Queueing Delay
°Estimating queue length:
  - Utilization U = Request Rate / Service Rate = λ / µ
  - Mean Queue Length = U / (1 - U) (see the numeric sketch below)
  - As the Request Rate approaches the Service Rate, the Mean Queue Length approaches infinity
(Figure: the processor issues requests at rate λ into a queue ahead of the disk controller and disk, which service them at rate µ.)
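A quick numerical look at the U/(1 - U) behavior; the 50 requests/second service rate below is an arbitrary illustrative value.

```python
# Mean queue length U/(1 - U) blows up as the request rate approaches the service rate.
service_rate = 50.0   # requests the disk can complete per second (illustrative value)

for request_rate in (10, 25, 40, 45, 49, 49.9):
    u = request_rate / service_rate        # utilization
    mean_queue_length = u / (1 - u)
    print(f"U = {u:6.1%}   mean queue length = {mean_queue_length:7.1f}")
```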

Disk Device Terminology
°Disk Latency = Queueing Time + Controller Time + Seek Time + Rotation Time + Transfer Time
°Order-of-magnitude times for 4 KB transfers:
  - Average seek: 8 ms or less
  - Rotate: roughly 4 ms average at 7,200 RPM (about 8 ms at 3,600 RPM)
  - Transfer: roughly 1 ms at a few MB/sec

Example
°512-byte sector, disk rotates at 5,400 RPM, advertised seek is 12 ms, transfer rate is 4 MB/sec, controller overhead is 1 ms, queue idle so no queueing delay
°Disk Access Time = Seek time + Rotational Latency + Transfer time + Controller Time + Queueing Delay
°Disk Access Time = 12 ms + 0.5 rotation / 5,400 RPM + 0.5 KB / 4 MB/s + 1 ms + 0
°Disk Access Time = 12 ms + 0.5 / 90 rotations per second + 0.5 / 4,096 s + 1 ms + 0
°Disk Access Time = 12 ms + 5.5 ms + 0.1 ms + 1 ms + 0 ms
°Disk Access Time = 18.6 ms
°If real seeks are 1/3 of the advertised seek, it drops to 10.6 ms, with the rotational delay now 50% of the total! (The arithmetic is rechecked in the sketch below.)
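A quick sanity check of this arithmetic in code; the inputs are exactly the values stated above, and the tiny difference from 18.6 ms comes from rounding the rotational latency.

```python
# Recompute the disk access time example; all input values come from the slide above.
seek_ms       = 12.0          # advertised average seek
rpm           = 5400
sector_kb     = 0.5           # 512-byte sector
transfer_mb_s = 4.0
controller_ms = 1.0

rotational_ms = 0.5 / (rpm / 60.0) * 1000.0              # half a revolution on average
transfer_ms   = sector_kb / (transfer_mb_s * 1024.0) * 1000.0
access_ms     = seek_ms + rotational_ms + transfer_ms + controller_ms

print(f"rotational latency = {rotational_ms:.2f} ms")    # ~5.56 ms
print(f"transfer time      = {transfer_ms:.2f} ms")      # ~0.12 ms
print(f"disk access time   = {access_ms:.1f} ms")        # ~18.7 ms (the slide rounds to 18.6)
print(f"with seek/3        = {seek_ms / 3 + rotational_ms + transfer_ms + controller_ms:.1f} ms")
```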

Reliability and Availability
°Two terms that are often confused:
  - Reliability: is anything broken?
  - Availability: is the system still available to the user?
°Availability can be improved by adding hardware:
  - Example: adding ECC to memory
°Reliability can only be improved by:
  - Better environmental conditions
  - Building with more reliable components
  - Building with fewer components
  - Improving availability may come at the cost of lower reliability

Simple Producer-Server Model
°Throughput: the number of tasks completed by the server in unit time
  - To get the highest possible throughput, the server should never be idle, so the queue should never be empty
°Response time: begins when a task is placed in the queue and ends when it is completed by the server
  - To minimize response time, the queue should be empty, so the server will be idle
(Figure: Producer -> Queue -> Server)

Disk I/O Performance
°Metrics: response time and throughput
°Response time = Queueing time + Device service time
°Latency goes as T_ser x u/(1 - u), where u is the utilization
(Figure: processor -> queue -> I/O controller and device; the response time (ms) versus throughput (% of total bandwidth) curve rises steeply as throughput approaches 100%.)

Introduction to Queueing Theory
°Queueing theory applies to long-term, steady-state behavior: arrival rate = departure rate
°Little's Law: mean number of tasks in system = arrival rate x mean response time
  - Observed by many; Little was the first to prove it
  - Simple interpretation: you should see the same number of tasks in the queue when entering as when leaving
°Applies to any system in equilibrium, as long as nothing inside the black box is creating or destroying tasks (a small simulation check appears below)
(Figure: arrivals enter and departures leave a "black box" queueing system.)
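A small simulation sketch that uses Little's Law the way the slide describes: it measures the mean response time of a single-server FIFO queue, multiplies by the arrival rate to get the mean number of tasks in the system, and compares against the M/M/1 prediction u/(1 - u). The arrival rate and service time are arbitrary illustrative values.

```python
import random

# Single-server FIFO queue with exponential interarrival and service times (M/M/1).
# Arrival rate and mean service time are arbitrary illustrative values (u = 0.5).
random.seed(0)
arrival_rate = 10.0     # tasks per second
mean_service = 0.05     # seconds per task

n_tasks = 200_000
t = 0.0
arrivals = []
for _ in range(n_tasks):
    t += random.expovariate(arrival_rate)
    arrivals.append(t)

server_free = 0.0
total_time_in_system = 0.0
for arrive in arrivals:
    start = max(arrive, server_free)                    # wait if the server is busy
    finish = start + random.expovariate(1.0 / mean_service)
    server_free = finish
    total_time_in_system += finish - arrive

mean_response = total_time_in_system / n_tasks
observed_rate = n_tasks / arrivals[-1]
print(f"Little's Law estimate of tasks in system: {observed_rate * mean_response:.2f}")
print("M/M/1 prediction u/(1-u) with u = 0.5:   1.00")
```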

A Little Queuing Theory: Notation
°Queueing models assume a state of equilibrium: input rate = output rate
°Notation:
  - λ: average number of arriving customers per second
  - T_ser: average time to service a customer (traditionally µ = 1/T_ser)
  - u: server utilization (0..1): u = λ x T_ser (or u = λ / µ)
  - T_q: average time per customer in the queue
  - T_sys: average time per customer in the system: T_sys = T_q + T_ser
  - L_q: average length of the queue: L_q = λ x T_q
  - L_sys: average length of the system: L_sys = λ x T_sys
°Little's Law: L_sys = λ x T_sys (mean number of customers = arrival rate x mean time in system)
(Figure: Proc -> queue -> IOC/Device; the queue plus the server form the system.)

A Little Queuing Theory: Use of Random Distributions
°The server spends a variable amount of time with customers
  - Weighted mean: m1 = (f1 x T1 + f2 x T2 + ... + fn x Tn)/F = Σ p(T) x T
  - Variance = (f1 x T1^2 + f2 x T2^2 + ... + fn x Tn^2)/F - m1^2 = Σ p(T) x T^2 - m1^2
  - Squared coefficient of variance: C = variance/m1^2 (worked numerically in the sketch below)
    - A unitless measure (compare 100 ms^2 vs. 0.1 s^2)
°Exponential distribution, C = 1: most service times are short relative to the average, a few are long; 90% < 2.3 x average, 63% < average
°Hyperexponential distribution, C > 1: even further from the average; C = 2.0 => 90% < 2.8 x average, 69% < average
(Figure: service-time distributions around the average for the Proc -> queue -> server system.)
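The sketch below computes m1, the variance, and C for a small made-up service-time distribution (the frequencies and times are arbitrary), and then checks that exponentially distributed samples really do come out near C = 1.

```python
import random

def moments(freqs, times):
    """Weighted mean m1, variance, and squared coefficient of variance C."""
    total = sum(freqs)
    m1 = sum(f * t for f, t in zip(freqs, times)) / total
    var = sum(f * t * t for f, t in zip(freqs, times)) / total - m1 ** 2
    return m1, var, var / m1 ** 2

# Made-up distribution: 70% of requests take 5 ms, 20% take 20 ms, 10% take 50 ms.
m1, var, C = moments([0.7, 0.2, 0.1], [5.0, 20.0, 50.0])
print(f"m1 = {m1:.1f} ms, variance = {var:.1f} ms^2, C = {C:.2f}")

# Exponentially distributed service times (mean 10 ms) should give C close to 1.
random.seed(1)
samples = [random.expovariate(1.0 / 10.0) for _ in range(100_000)]
mean = sum(samples) / len(samples)
variance = sum(s * s for s in samples) / len(samples) - mean ** 2
print(f"exponential sample: C = {variance / mean ** 2:.2f}")   # ~1.0
```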

A Little Queuing Theory: Variable Service Time
°Disk response times have C ≈ 1.5 (the majority of seeks are shorter than the average)
°Yet we usually pick C = 1.0 for simplicity
  - Memoryless, exponential distribution
  - Many complex systems are well described by a memoryless distribution!
°Another useful value is the average time a newcomer must wait for the server to complete its current task: m1(z)
  - Not just 1/2 x m1, because that doesn't capture the variance
  - One can derive m1(z) = 1/2 x m1 x (1 + C)
  - No variance: C = 0 => m1(z) = 1/2 x m1
  - Exponential: C = 1 => m1(z) = m1

A Little Queuing Theory: Average Wait Time
°Calculating the average wait time in queue, T_q:
  - If something is at the server, it takes m1(z) on average to complete
  - The chance the server is busy is u, so the average residual delay is u x m1(z)
  - All customers already in line must also complete, each taking T_ser on average
  - T_q = u x m1(z) + L_q x T_ser
  - T_q = u x m1(z) + λ x T_q x T_ser     (Little's Law: L_q = λ x T_q)
  - T_q = u x m1(z) + u x T_q             (definition of utilization: u = λ x T_ser)
  - T_q x (1 - u) = m1(z) x u
  - T_q = m1(z) x u/(1 - u) = T_ser x {1/2 x (1 + C)} x u/(1 - u)
°Notation:
  - λ: average number of arriving customers per second
  - T_ser: average time to service a customer
  - u: server utilization (0..1): u = λ x T_ser
  - T_q: average time per customer in queue
  - L_q: average length of queue: L_q = λ x T_q
  - m1(z): average residual wait time = T_ser x {1/2 x (1 + C)}

A Little Queuing Theory: M/G/1 and M/M/1
°Assumptions so far:
  - System in equilibrium
  - Times between successive arrivals are random
  - The server can start on the next customer immediately after the prior one finishes
  - No limit to the queue; it works First-In-First-Out
  - All customers in line must complete, each taking T_ser on average
°This describes a "memoryless" or Markovian request arrival (M, for C = 1 exponentially random), a General service distribution (no restrictions), and 1 server: the M/G/1 queue
  - T_q = T_ser x {1/2 x (1 + C)} x u/(1 - u)
°When service times also have C = 1, we get the M/M/1 queue (helper functions below):
  - T_q = T_ser x u / (1 - u)
  - T_ser: average time to service a customer
  - u: server utilization (0..1): u = λ x T_ser
  - T_q: average time per customer in queue
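The two formulas are easy to wrap in helper functions; this is just a convenience sketch, and the function names are mine.

```python
def tq_mg1(t_ser, u, c):
    """Average time in queue for an M/G/1 queue (general service distribution)."""
    assert 0 <= u < 1, "the queue is unstable when utilization reaches 1"
    return t_ser * 0.5 * (1 + c) * u / (1 - u)

def tq_mm1(t_ser, u):
    """Average time in queue for an M/M/1 queue (exponential service, C = 1)."""
    return tq_mg1(t_ser, u, c=1.0)

# 20 ms service time at 20% utilization (the example on the next slide): Tq = 5 ms.
print(tq_mm1(0.020, 0.2) * 1000, "ms")
# A deterministic server (C = 0) at the same utilization waits only half as long: 2.5 ms.
print(tq_mg1(0.020, 0.2, c=0.0) * 1000, "ms")
```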

A Little Queuing Theory: An Example
°The processor sends 10 x 8 KB disk I/Os per second; requests and service times are exponentially distributed; average disk service time = 20 ms
  - This number comes from the disk equation: Service time = avg seek + avg rotational delay + transfer time + controller overhead
°On average, how utilized is the disk? What is the number of requests in the queue? What is the average time spent in the queue? What is the average response time for a disk request?
°Working it out (recomputed in code below):
  - λ: average number of arriving customers per second = 10
  - T_ser: average time to service a customer = 20 ms (0.02 s)
  - u: server utilization: u = λ x T_ser = 10/s x 0.02 s = 0.2
  - T_q: average time per customer in queue = T_ser x u / (1 - u) = 20 x 0.2/(1 - 0.2) = 20 x 0.25 = 5 ms (0.005 s)
  - T_sys: average time per customer in system: T_sys = T_q + T_ser = 25 ms
  - L_q: average length of queue: L_q = λ x T_q = 10/s x 0.005 s = 0.05 requests in queue
  - L_sys: average number of tasks in system: L_sys = λ x T_sys = 10/s x 0.025 s = 0.25
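The same example in code, with units kept in seconds internally so nothing is off by a factor of 1000.

```python
# Numbers from the example above: 10 requests/s, 20 ms exponential service time (M/M/1).
lam   = 10.0      # arrival rate, requests per second
t_ser = 0.020     # average service time, seconds

u     = lam * t_ser             # utilization                    -> 0.20
t_q   = t_ser * u / (1 - u)     # average time in queue          -> 0.005 s
t_sys = t_q + t_ser             # average response time          -> 0.025 s
l_q   = lam * t_q               # average requests in the queue  -> 0.05
l_sys = lam * t_sys             # average requests in the system -> 0.25

print(f"u = {u:.2f}, Tq = {t_q * 1000:.1f} ms, Tsys = {t_sys * 1000:.1f} ms, "
      f"Lq = {l_q:.2f}, Lsys = {l_sys:.2f}")
```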

Memory System I/O Performance?
°A pipelined bus with a queue at the memory controller?
  - Time to transfer the request
  - T_queue = Queueing Delay + service time
  - Time to transfer the data
°DRAM has a DETERMINISTIC service time:
  - T_ser = t_RAC + (n - 1) x t_PC + t_precharge
  - T_q = m1(z) x u/(1 - u) = T_ser x {1/2 x (1 + C)} x u/(1 - u), with C = 0 (illustrated below)
(Figure: the processor issues requests into a queue at the memory controller in front of DRAM; what is the service rate?)
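A sketch of the deterministic-service case; the DRAM timing values and burst length below are made-up illustrative numbers, not taken from the slides or from any datasheet.

```python
# Deterministic DRAM service time and the resulting queueing delay (C = 0).
# All timing parameters here are invented for illustration.
t_rac_ns       = 60.0    # access time for the first word of a burst
t_pc_ns        = 25.0    # page-mode cycle time for each additional word
t_precharge_ns = 40.0
n_words        = 4       # words per burst

t_ser_ns = t_rac_ns + (n_words - 1) * t_pc_ns + t_precharge_ns   # 175 ns per request

for u in (0.2, 0.5, 0.8):
    t_q_ns = t_ser_ns * 0.5 * (1 + 0.0) * u / (1 - u)   # C = 0: deterministic service
    print(f"u = {u:.1f}: Tser = {t_ser_ns:.0f} ns, Tq = {t_q_ns:.0f} ns")
```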

Bus Summary
°Buses are an important technique for building large-scale systems
  - Their speed is critically dependent on factors such as length, number of devices, etc.
  - Critically limited by capacitance
  - Tricks: esoteric drive technology such as GTL
°Important terminology:
  - Master: the device that can initiate new transactions
  - Slaves: devices that respond to the master
°Two types of bus timing:
  - Synchronous: the bus includes a clock
  - Asynchronous: no clock, just REQ/ACK strobing
°Direct Memory Access (DMA) allows fast, burst transfers into the processor's memory:
  - The processor's memory acts like a slave
  - Probably requires some form of cache coherence so that DMA'ed memory can be invalidated from the cache

I/O Summary
°I/O performance is limited by the weakest link in the chain between the OS and the device
°Queueing theory is important
  - 100% utilization means very large latency
  - Remember, for an M/M/1 queue (exponential source of requests and service):
    - queue size goes as u/(1 - u)
    - latency goes as T_ser x u/(1 - u)
  - For an M/G/1 queue (more general server, exponential sources):
    - latency goes as m1(z) x u/(1 - u) = T_ser x {1/2 x (1 + C)} x u/(1 - u)
°Three components of disk access time:
  - Seek time: advertised to be 8 to 12 ms; may be lower in real life
  - Rotational latency: 4.1 ms at 7,200 RPM and 8.3 ms at 3,600 RPM
  - Transfer time: 2 to 12 MB per second
°I/O device notifying the operating system:
  - Polling: can waste a lot of processor time
  - I/O interrupt: similar to an exception, except that it is asynchronous
°Delegating I/O responsibility from the CPU: DMA, or even an IOP