I/O Management and Disk Scheduling Chapter 11

I/O Devices
Categories:
For human interaction: printers, terminals, keyboard, mouse
Machine readable: disks, sensors, controllers, actuators …
Communication: modems, network cards, etc.
Differences:
Data rate: several orders of magnitude of difference between transfer rates
Application: e.g. a disk may store files, databases, or virtual pages for the MMU
Complexity of control: e.g. a printer has a simple control interface; a disk is more complex
Unit of transfer: e.g. a stream of bytes for a terminal, blocks for a disk
Data representation: different encoding schemes (e.g. parity conventions)

Data Rates for Some I/O Devices
HDMI v2.0: 2.25 GB/s
10 Gigabit Ethernet: 1.25 GB/s
USB 3.0: 625 MB/s
SSD: 500 MB/s
FireWire 3200: 393 MB/s
Hard disk (HDD): 140 MB/s
Gigabit Ethernet: 125 MB/s
FireWire 800: 98 MB/s
USB 2.0: 60 MB/s
Modem: 144 kB/s
Mouse (serial port): 150 B/s

I/O Control Methods
Programmed I/O:
One byte/word at a time
Uses polling, which wastes CPU cycles
Suitable for special-purpose microprocessor-controlled devices
Interrupt-driven I/O:
Many I/O devices use this approach; a good alternative to polling
Direct Memory Access (DMA):
For block transfers
Minimal CPU participation (only at the beginning and at the end)
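The cost of polling can be made concrete with a small simulation. This is a sketch only: the `Device` class, its timing, and the `cycles_per_byte` parameter are hypothetical, not a real driver interface.

```python
class Device:
    """Simulated device: delivers one byte, becoming ready only every Nth poll."""
    def __init__(self, data, cycles_per_byte=4):
        self.data = list(data)
        self.cycles_per_byte = cycles_per_byte
        self.clock = 0

    def status_ready(self):
        self.clock += 1                       # each poll consumes a CPU cycle
        return self.clock % self.cycles_per_byte == 0 and bool(self.data)

    def read_byte(self):
        return self.data.pop(0)

def programmed_io_read(dev, n):
    """Busy-wait on the status register, transferring one byte at a time."""
    buf, polls = [], 0
    while len(buf) < n:
        polls += 1                            # CPU does no useful work while polling
        if dev.status_ready():
            buf.append(dev.read_byte())
    return bytes(buf), polls

data, polls = programmed_io_read(Device(b"hello"), 5)
# polls is several times the number of bytes moved: the wasted CPU cycles
```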

Direct Memory Access
Takes control of the system bus from the CPU to transfer data to and from memory
Only one bus master at a time, usually the DMA controller, due to tight timing constraints
Cycle stealing is used to transfer data on the system bus: the instruction cycle is suspended so that data can be transferred by the DMA controller in bursts
The CPU can only access the bus between these bursts
No interrupts occur (except at the end of the transfer)
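Cycle stealing can be pictured with a toy simulation; the cycle counts and the `steal_every` parameter below are invented purely for illustration, not taken from any real bus protocol.

```python
def run_bus(words_to_transfer, total_cycles, steal_every=4):
    """Interleave DMA bursts with CPU work on a shared bus."""
    cpu_work = dma_done = 0
    for cycle in range(total_cycles):
        if dma_done < words_to_transfer and cycle % steal_every == 0:
            dma_done += 1     # bus stolen: the CPU's instruction cycle is suspended
        else:
            cpu_work += 1     # CPU proceeds between bursts
    return cpu_work, dma_done

cpu_work, dma_done = run_bus(words_to_transfer=8, total_cycles=40)
# the CPU keeps most of the cycles; DMA quietly steals the rest
```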

Operating System Design Issues
Efficiency:
Most I/O devices are extremely slow compared to main memory
I/O cannot keep up with processor speed
Multiprogramming allows some processes to wait on I/O while another process executes
Swapping is used to bring in additional ready processes
Generality:
Desirable to handle all I/O devices in a uniform manner
Hide the details of device I/O in lower-level routines
Processes and upper levels see devices in general terms such as read, write, open, close, lock, unlock

I/O Buffering
Reasons for buffering:
Without buffering, a process waiting for I/O to complete cannot be swapped out, because the pages involved in the I/O must remain in RAM while it is in progress
Buffering overlaps I/O with processing and increases efficiency
Block-oriented devices: data is stored/transferred in fixed-size blocks (e.g. disks, tapes)
Stream-oriented devices: data is transferred as a stream of bytes (e.g. terminals, printers, communication ports, mouse)

Single Buffer
The OS assigns a system buffer in main memory for an I/O request
Input transfers are made to the buffer; the block is moved to user space when needed
The user process can process one block of data while the next block is read into the buffer
Swapping of the user process is allowed, since input is taking place in system memory, not user memory
T = disk I/O time, M = time to move the data into user space, C = computation time
No buffer: time per block = T + C
Single buffer: time per block = max(C, T) + M (I/O overlaps with computation)
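The timing expressions can be checked with a quick worked example; the values of T, C, and M below are illustrative, not measurements.

```python
# Per-block times, in arbitrary units
T, C, M = 10.0, 6.0, 1.0        # disk I/O, computation, buffer-to-user copy

no_buffer = T + C               # I/O and computation strictly alternate
single_buffer = max(C, T) + M   # next block's I/O overlaps this block's computation
# here: 16.0 vs 11.0 time units per block, a clear win for buffering
```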

Double Buffer
A process can transfer data to or from one buffer while the OS empties or fills the other
T = disk I/O time, C = computation time, M = time to move the data into user space
If (C + M) < T: the I/O device works at full speed
If (C + M) > T: add more I/O buffers to increase efficiency!
A circular buffer is used when more than two buffers are needed and the I/O operation must keep up with the process
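Continuing the worked example: with two buffers, the steady-state per-block time is max(C + M, T), since the CPU's work (C + M) on one buffer overlaps the device's transfer (T) into the other. The expression and values are an illustrative reading of the conditions above.

```python
T, C, M = 10.0, 6.0, 1.0          # same illustrative per-block times

double_buffer = max(C + M, T)     # C + M < T here, so time per block = T:
                                  # the device runs at full speed, never idle
```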

Hard Disk Drive (HDD) Components

HDD Performance Parameters
To read or write, the disk head must be positioned at the right track and sector
Seek time: time to position the head at the desired track
Rotational delay (latency): time for the beginning of the sector to reach the head
Access time = seek time + rotational delay
Data transfer occurs as the sector moves under the head
Typical HDD parameters in 2010:
Seek times: 3-15 ms, varying with distance (avg 8-10 ms, improving at 7-10% per year)
Rotation speeds: up to 15,000 RPM (avg rotational delay 2-5 ms, improving at 7-10% per year)
Data transfer rates: on the order of Gb/s
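The access-time formula is easy to evaluate for a concrete drive. The figures below (a 7,200 RPM drive with 8 ms average seek) are hypothetical but fall in the ranges above; average rotational delay is half a revolution.

```python
rpm = 7200
avg_seek_ms = 8.0

# half a revolution, converted to milliseconds
avg_rotational_delay_ms = 0.5 * 60_000 / rpm     # ~4.17 ms

avg_access_ms = avg_seek_ms + avg_rotational_delay_ms   # ~12.17 ms per random access
```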

Solid State Drive (SSD)
Also known as a solid-state disk or electronic disk
SSDs have no moving mechanical components, which distinguishes them from traditional HDDs
They use electronic interfaces compatible with traditional HDDs
More resistant to physical shock, run silently, and have lower access time and latency
But SSDs are more expensive per unit of storage than HDDs
While SSDs are more reliable than HDDs, SSD failures are often catastrophic, with total data loss, whereas HDDs often give warning, allowing much or all of the data to be recovered

SSD vs. HDD
Attribute | Solid-state drive | Hard disk drive
Start-up time | Almost instantaneous; no mechanical components to prepare | Disk spin-up may take several seconds
Data transfer rate | Up to ~500 MB/s (see data-rate slide) | ~140 MB/s
Noise | No moving parts; silent | Many moving parts; noisy
Temperature control | Can tolerate higher temperatures | >95°F can shorten the life; >131°F reliability is at risk
Cost per capacity | $0.59 per GB (2013) | 5¢ per GB (3.5") and 10¢ (2.5")
Storage capacity | Up to 2 TB (2011) | Up to 4 TB (2011)
Read performance | Does not change based on where data is stored | If data from different areas of the platter is read, response time increases due to the need to seek each fragment
Power consumption | High-performance SSDs generally require a half to a third of the power of HDDs | ~2 W (2.5") to 20 W (3.5")

Disk Scheduling Policies for HDDs
For a single disk there will be a number of pending I/O requests
If requests are serviced in random order, we get the worst possible performance
Access time is the reason for the differences in performance
Scheduling based on the requester:
First-In, First-Out (FIFO): fair to all processes
Priority by process (PRI): short batch jobs and interactive jobs may get higher priority, giving good interactive response time
Last-In, First-Out (LIFO): good for transaction processing systems; the device is given to the most recent user, so there is little arm movement; starvation is possible if a job falls back from the head of the queue

Disk Scheduling (cont.)
Scheduling based on the requested item:
Shortest Service Time First (SSTF): always select the request that requires the least movement of the disk arm from its current position (minimum seek time)
SCAN (elevator algorithm): the arm moves in one direction only, satisfying all outstanding requests until it reaches the last track in that direction; then the direction is reversed
C-SCAN (circular SCAN): restricts scanning to one direction only (like a typewriter); after one sweep, the arm returns to the beginning to start a new scan, which reduces the maximum delay experienced by new requests
N-step-SCAN: segments the disk request queue into subqueues of length N, processed one at a time using SCAN; new requests are added to other queues while the current queue is processed
FSCAN: two queues; one queue is serviced while the other fills with new requests
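The policies above can be compared with a short simulation of total head movement. The request queue, starting position, and 200-track disk below are a textbook-style example chosen for illustration, not taken from the slides.

```python
def fcfs(requests, head):
    """Service requests in arrival order; return total head movement in tracks."""
    total = 0
    for t in requests:
        total, head = total + abs(t - head), t
    return total

def sstf(requests, head):
    """Always service the pending request closest to the current head position."""
    pending, total = list(requests), 0
    while pending:
        t = min(pending, key=lambda x: abs(x - head))
        pending.remove(t)
        total, head = total + abs(t - head), t
    return total

def scan_up(requests, head, max_track=199):
    """Elevator: sweep up to the last track, then reverse for remaining requests."""
    below = [t for t in requests if t < head]
    total = max_track - head            # sweep up, servicing requests on the way
    if below:
        total += max_track - min(below) # reverse and sweep down to the lowest request
    return total

queue, start = [98, 183, 37, 122, 14, 124, 65, 67], 53
moves = {p.__name__: p(queue, start) for p in (fcfs, sstf, scan_up)}
# FCFS: 640 tracks; SSTF: 236; SCAN: 331 — scheduling cuts movement sharply
```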

RAID (Redundant Array of Independent Disks)
The rate of improvement in secondary storage performance has been much less than that of the microprocessor
Idea: use many disks in parallel to increase storage bandwidth and improve reliability
Files are striped across disks; each stripe portion is read/written in parallel
Bandwidth increases with more disks
Redundancy is added to improve reliability
RAID 0: a striped disk array with no redundancy
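Striping can be sketched as a simple address mapping: consecutive logical strips rotate across the disks, so with n disks, n strips can be transferred in parallel. The function name and layout are illustrative.

```python
def raid0_map(logical_strip, n_disks):
    """Map a logical strip number to (disk index, strip index on that disk)."""
    return logical_strip % n_disks, logical_strip // n_disks

# with 4 disks, logical strips 0..3 land on disks 0..3; strip 4 wraps to disk 0
layout = [raid0_map(i, 4) for i in range(6)]
```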

RAID 1 (Mirrored)
Redundancy is achieved by duplicating all the data
Each logical strip is mapped to two separate physical disks
A read request can be serviced by either of the two disks
A write request requires that both strips be updated, but this can be done in parallel (real-time backup)
When a drive fails, data can still be accessed from the mirror disk
Disadvantage: cost is doubled, so mirroring is typically limited to drives storing highly critical data and system software
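Mirroring can be sketched in a few lines: every write goes to both copies, and a read can be served by either, so data survives the loss of one disk. The `Mirror` class is illustrative, with dictionaries standing in for physical disks.

```python
class Mirror:
    def __init__(self):
        self.disks = [{}, {}]          # two physical copies of every strip

    def write(self, strip, data):
        for d in self.disks:           # both copies updated (in parallel on real hardware)
            d[strip] = data

    def read(self, strip, failed=None):
        for i, d in enumerate(self.disks):
            if i != failed:
                return d[strip]        # either surviving copy will do

m = Mirror()
m.write(7, b"critical data")
survives = m.read(7, failed=0)         # disk 0 lost; serve the read from the mirror
```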

RAID 2 (Error Detection/Correction)
The strips are small: a single bit
An error-correcting code is calculated across corresponding bits on each data disk, and the bits of the code are stored in the corresponding bit positions on multiple parity disks
Typically a Hamming code is used, able to correct single-bit errors and detect double-bit errors
Costly; only useful when disk error rates are very high, and since disks are quite reliable, RAID 2 is not used in practice

RAID 3 (Bit-Interleaved Parity)
The strips are small: a single byte or word
A single redundant disk
A simple parity bit is computed for the set of individual bits in the same position on all of the disks
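The parity scheme can be sketched directly: the parity strip is the bitwise XOR of the data strips, and XOR-ing the surviving strips with the parity reconstructs any single failed strip. The byte values are arbitrary.

```python
from functools import reduce

def parity(strips):
    """XOR corresponding bytes across strips: the redundant disk's contents."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*strips))

data = [b"\x0f\x12", b"\xf0\x34", b"\x55\x56"]   # strips on three data disks
p = parity(data)

# disk 1 fails: XOR the survivors with the parity strip to rebuild its contents
recovered = parity([data[0], data[2], p])
```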

Disk Cache
A buffer in main memory for disk sectors, containing a copy of some of the sectors on the disk
Least Recently Used (LRU) policy: the block that has been in the cache the longest with no reference to it is replaced
Implementation:
The cache is organized as a stack of blocks
When a block is referenced, it is moved to the top of the stack
The block at the bottom of the stack is removed when a new block is brought in
Blocks don't actually move around in memory; only pointers to the blocks are moved
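The stack-of-pointers implementation can be sketched with an ordered dictionary standing in for the stack (most recently used at the end); `fetch_from_disk` is a placeholder for a real disk read, and the class is illustrative.

```python
from collections import OrderedDict

class DiskCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()              # recency order: LRU first, MRU last

    def read(self, sector, fetch_from_disk):
        if sector in self.blocks:
            self.blocks.move_to_end(sector)      # hit: move pointer to top of stack
        else:
            if len(self.blocks) >= self.capacity:
                self.blocks.popitem(last=False)  # miss + full: evict bottom (LRU) block
            self.blocks[sector] = fetch_from_disk(sector)
        return self.blocks[sector]

cache = DiskCache(2)
cache.read(1, lambda s: f"data{s}")
cache.read(2, lambda s: f"data{s}")
cache.read(1, lambda s: f"data{s}")   # hit: sector 1 becomes most recently used
cache.read(3, lambda s: f"data{s}")   # evicts sector 2, the least recently used
```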