Chapter 11 I/O Management and Disk Scheduling Eighth Edition By William Stallings Operating Systems: Internals and Design Principles

External devices that engage in I/O with computer systems can be grouped into three categories:
- Human readable: suitable for communicating with the computer user (printers, terminals, video displays, keyboards, mice)
- Machine readable: suitable for communicating with electronic equipment (disk drives, USB keys, sensors, controllers)
- Communication: suitable for communicating with remote devices (modems, digital line drivers)

Devices differ in a number of areas:
- Data rate: there may be differences of several orders of magnitude between data transfer rates
- Application: the use to which a device is put influences the software
- Complexity of control: the effect on the operating system is filtered by the complexity of the I/O module that controls the device
- Unit of transfer: data may be transferred as a stream of bytes or characters, or in larger blocks
- Data representation: different data encoding schemes are used by different devices
- Error conditions: the nature of errors, the way they are reported, their consequences, and the available range of responses differ from one device to another

Three techniques for performing I/O are:
- Programmed I/O: the processor issues an I/O command on behalf of a process; that process then busy-waits for the operation to complete before proceeding (see the sketch after this list)
- Interrupt-driven I/O: the processor issues an I/O command on behalf of a process and continues. If the request is non-blocking, the processor keeps executing instructions from the process that issued the I/O command; if blocking, the next instruction the processor executes is from the OS, which puts the current process in the Blocked state and schedules another process. When the operation completes, the device interrupts and the OS responds.
- Direct memory access (DMA): a DMA module controls the exchange of data between main memory and an I/O module. The processor specifies read or write, the device involved, the memory location, and the number of bytes to transfer; the DMA module then takes over, and an interrupt is issued when the entire block has been transferred.
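
As a rough illustration of the first technique, here is a minimal C sketch of programmed I/O against a hypothetical memory-mapped device. The register addresses and the status-bit layout are invented for illustration and do not correspond to any real hardware:

```c
#include <stdint.h>

/* Hypothetical memory-mapped device registers; addresses and bit
 * layout are invented for illustration, not a real device. */
#define DEV_BASE     0x40001000u
#define REG_STATUS   (*(volatile uint32_t *)(DEV_BASE + 0x0))
#define REG_DATA     (*(volatile uint32_t *)(DEV_BASE + 0x4))
#define STATUS_READY 0x1u   /* device has a byte available */

/* Programmed I/O: the processor busy-waits on the status register,
 * then moves each byte itself. No interrupts, no DMA. */
void pio_read(uint8_t *buf, int n)
{
    for (int i = 0; i < n; i++) {
        while ((REG_STATUS & STATUS_READY) == 0)
            ;                        /* busy wait: the CPU does nothing useful */
        buf[i] = (uint8_t)REG_DATA;  /* the CPU copies the byte itself */
    }
}
```

The busy-wait loop is exactly why programmed I/O wastes processor time: interrupt-driven I/O replaces the loop with a context switch, and DMA removes the per-byte copy as well.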

The evolution of the I/O function:
1. The processor directly controls a peripheral device
2. A controller or I/O module is added
3. Same configuration as step 2, but now interrupts are employed
4. The I/O module is given direct control of memory via DMA
5. The I/O module is enhanced to become a separate processor, with a specialized instruction set tailored for I/O
6. The I/O module has a local memory of its own and is, in fact, a computer in its own right; sometimes called an I/O channel

Two design objectives:
- Efficiency: a major effort in I/O design. Important because I/O operations often form a bottleneck; most I/O devices are extremely slow compared with main memory and the processor. The area that has received the most attention is disk I/O.
- Generality: it is desirable to handle all devices in a uniform manner. This applies both to the way processes view I/O devices and to the way the operating system manages I/O devices and operations. The diversity of devices makes it difficult to achieve true generality, so a hierarchical, modular approach to the design of the I/O function is used.

Good software engineering principles are important:
- Functions of the operating system should be separated according to their complexity, their characteristic time scale, and their level of abstraction
- A layered software architecture can simplify design and development; each layer performs a related subset of the functions required of the operating system
- Layers communicate directly only with their neighbors
- Layers should be defined so that changes in one layer do not require changes in other layers

A buffer is a temporary holding area in memory used to store data being transferred between a program and a device, or between two devices.
- Block-oriented devices store information in blocks that are usually of fixed size; transfers are made one block at a time, and it is possible to reference data by its block number. Disks and USB keys are examples.
- Stream-oriented devices transfer data in and out as a stream of bytes, with no block structure. Terminals, printers, communications ports, and most other devices that are not secondary storage are examples.

- No buffer: the OS directly accesses the device when it needs data for a process; the process must remain resident in memory for the duration of the transfer.
- Single buffer: the operating system assigns a buffer in main memory for an I/O request.

With a single buffer, input and output transfers between an application and a disk are buffered in the kernel:
- For input, a full block is read into the buffer and then transferred to the process in smaller increments
- For output, data is written to the buffer, and when the buffer is full a disk write is scheduled
- Generally provides a speedup compared to the absence of system buffering

- Line-at-a-time operation: appropriate for scroll-mode ("dumb") terminals; user input is one line at a time, with a carriage return signaling the end of a line, and output to the terminal is similarly one line at a time
- Byte-at-a-time operation: used on forms-mode terminals, where each keystroke is significant, and for other peripherals such as sensors and controllers

Double buffer:
- Uses two system buffers instead of one
- A process can transfer data to or from one buffer while the operating system empties or fills the other
- Also known as buffer swapping

Circular buffer:
- Two or more buffers are used
- Each individual buffer is one unit in a circular buffer
- Used when the I/O operation must keep up with the process (see the sketch below)
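
A minimal C sketch of the idea, assuming a single producer (say, a driver filling slots from a device) and a single consumer (the process); a real kernel implementation would additionally need locking or memory barriers between the two:

```c
#include <stddef.h>

/* Circular (ring) buffer of fixed-size slots. Initialize with
 * ring_t r = {0}; sizes here are illustrative assumptions. */
#define RING_SIZE 8                  /* number of slots */

typedef struct {
    char   slot[RING_SIZE][512];     /* 512-byte units, e.g., disk sectors */
    size_t head;                     /* next slot the consumer will read  */
    size_t tail;                     /* next slot the producer will fill  */
} ring_t;

static int ring_full(const ring_t *r)  { return r->tail - r->head == RING_SIZE; }
static int ring_empty(const ring_t *r) { return r->tail == r->head; }

/* Producer: returns a pointer to the next free slot, or NULL if full.
 * (In a concurrent setting the slot must be filled before publishing.) */
char *ring_put(ring_t *r)
{
    if (ring_full(r)) return NULL;
    return r->slot[r->tail++ % RING_SIZE];
}

/* Consumer: returns the oldest filled slot, or NULL if empty. */
char *ring_get(ring_t *r)
{
    if (ring_empty(r)) return NULL;
    return r->slot[r->head++ % RING_SIZE];
}
```

With RING_SIZE set to 2, this degenerates to the double buffering of the previous slide; a larger ring simply gives the producer more headroom before it must stall.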

Buffering other than between application and disk:
- To accommodate devices that have different data-transfer sizes. Example: message passing. On the sender side, the message is subdivided into packets which are sent over the network; on the receiver side, the message is reassembled in a buffer before being passed to the receiver's memory.
- To accommodate a speed difference between the sender and receiver of data. Example: downloading a file from a modem and writing it to disk. The modem is slow, so data is collected in an OS buffer and written to disk when the buffer is full; double buffering lets the modem continue to download while one buffer is being copied to disk.

Buffering is a technique that smooths out differences in data transmission rates. If data is constantly being generated faster than it can be consumed, buffering no longer helps: with enough demand, eventually all buffers become full and their advantage is lost. When there is a variety of I/O and process activity to service, however, buffering can increase the efficiency of the OS and the performance of individual processes.

Hard disk drive performance parameters. The actual details of disk I/O operation depend on the computer system, the operating system, and the nature of the I/O channel and disk controller hardware.

When the disk drive is operating, the disk is rotating at constant speed. To read or write, the head must be positioned at the desired track and at the beginning of the desired sector on that track.
- Track selection involves moving the head in a movable-head system or electronically selecting one head on a fixed-head system
- On a movable-head system, the time it takes to position the head at the track is known as seek time (typical average: 4-15 ms, depending on disk type)
- The time it takes for the beginning of the sector to reach the head is known as rotational delay, or latency (typical average: 2-7 ms)
- Data transfer then proceeds at roughly 1000 Mbps for a hard disk
- The sum of the seek time and the rotational delay is the access time
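
These parameters combine into the standard expression for the average time to read or write b bytes:

```latex
% Average access time = average seek time + rotational delay + transfer time,
% where T_s = average seek time, r = rotation speed (rev/s),
% b = bytes to transfer, N = bytes per track.
T_a = T_s + \frac{1}{2r} + \frac{b}{rN}
```

For instance, assuming a drive spinning at 7200 rpm (r = 120 rev/s) with a 4 ms average seek time, the average rotational delay is 1/(2 x 120) s, about 4.17 ms; reading a single 512-byte sector from a 500-sector track then adds only about 0.017 ms of transfer time, for roughly 8.2 ms in total. Seek and rotation, not transfer, dominate small random accesses.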

When a process executes a disk read:
1. The OS first checks whether the data is already in a memory buffer; if so, it is returned to the process
2. If not, the OS blocks the process, places it on a wait list for the requested device, and schedules the operation
3. The device driver receives the operation and communicates with the I/O controller
4. The controller sends signals to the disk, causing data to be transferred to the controller's buffer and from there (via DMA) to the kernel buffer; the controller then generates an interrupt
5. The interrupt handler responds and contacts the driver, which notifies the I/O subsystem; the I/O subsystem moves the data to the user's memory and moves the process back to the Ready state

Optimizing I/O performance is an important responsibility of the operating system. High-end computers employ I/O channels, special-purpose processors that offload much of this responsibility from the main CPU. Scheduling I/O requests efficiently is an important OS function. Given the following set of I/O requests, what is the best scheduling order, assuming the read/write head is currently at track 100? 55, 58, 39, 18, 90, 150, 160, 38, 184. Considerations: disk utilization, measured by throughput, is the primary consideration, followed by fairness (don't starve requests); sometimes other factors, such as priorities, influence the decision.

Priority (PRI):
- Control of the scheduling is outside the control of the disk management software
- The goal is not to optimize disk utilization but to meet other objectives
- Short batch jobs and interactive jobs are often given higher priority, which provides good interactive response time
- Longer jobs may have to wait an excessively long time
- A poor policy for database systems

First-In, First-Out (FIFO):
- Processes requests in sequential (arrival) order
- Fair to all processes
- Approximates random scheduling in performance if there are many processes competing for the disk
- Example queue: 55, 58, 39, 18, 90, 150, 160, 38, 184

Shortest Service Time First (SSTF):
- Select the disk I/O request that requires the least movement of the disk arm from its current position
- Always choose the minimum seek time
- Example queue: 55, 58, 39, 18, 90, 150, 160, 38, 184

SCAN:
- Also known as the elevator algorithm
- The arm moves in one direction only, satisfying all outstanding requests until it reaches the last track in that direction; then the direction is reversed
- Favors jobs whose requests are for tracks nearest to both the innermost and outermost tracks
- Example queue: 55, 58, 39, 18, 90, 150, 160, 38, 184

C-SCAN (Circular SCAN):
- Restricts scanning to one direction only
- When the last track has been visited in one direction, the arm is returned to the opposite end of the disk and the scan begins again
- Example queue: 55, 58, 39, 18, 90, 150, 160, 38, 184
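
To make the comparison concrete, here is a small, self-contained C sketch that replays the example request queue above (head starting at track 100) under FIFO, SSTF, and SCAN and reports the total head movement:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define N     9
#define START 100   /* current read/write head position */

/* The example request queue from the slides, in arrival order. */
static const int requests[N] = {55, 58, 39, 18, 90, 150, 160, 38, 184};

static int cmp_int(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

/* FIFO: service requests strictly in arrival order. */
static int fifo(void)
{
    int head = START, total = 0;
    for (int i = 0; i < N; i++) {
        total += abs(requests[i] - head);
        head = requests[i];
    }
    return total;
}

/* SSTF: always service the pending request closest to the head. */
static int sstf(void)
{
    int pending[N];
    memcpy(pending, requests, sizeof pending);
    int head = START, total = 0;
    for (int done = 0; done < N; done++) {
        int best = -1;
        for (int i = 0; i < N; i++)
            if (pending[i] >= 0 &&
                (best < 0 || abs(pending[i] - head) < abs(pending[best] - head)))
                best = i;
        total += abs(pending[best] - head);
        head = pending[best];
        pending[best] = -1;           /* mark as served */
    }
    return total;
}

/* SCAN: sweep toward higher-numbered tracks first, then reverse. */
static int scan(void)
{
    int sorted[N];
    memcpy(sorted, requests, sizeof sorted);
    qsort(sorted, N, sizeof sorted[0], cmp_int);
    int head = START, total = 0;
    for (int i = 0; i < N; i++)       /* upward sweep */
        if (sorted[i] >= START) { total += sorted[i] - head; head = sorted[i]; }
    for (int i = N - 1; i >= 0; i--)  /* downward sweep */
        if (sorted[i] < START)  { total += head - sorted[i]; head = sorted[i]; }
    return total;
}

int main(void)
{
    printf("FIFO: %d tracks\n", fifo());   /* 498 */
    printf("SSTF: %d tracks\n", sstf());   /* 248 */
    printf("SCAN: %d tracks\n", scan());   /* 250 */
    return 0;
}
```

Run as written, it reports 498 tracks for FIFO, 248 for SSTF, and 250 for SCAN, i.e., averages of 55.3, 27.5, and 27.8 tracks per seek, consistent with the comparison in Table 11.2 below.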

N-step-SCAN:
- Segments the disk request queue into subqueues of length N
- Subqueues are processed one at a time, using SCAN
- While a queue is being processed, new requests must be added to some other queue
- If fewer than N requests are available at the end of a scan, all of them are processed with the next scan

FSCAN:
- Uses two subqueues
- When a scan begins, all of the requests are in one of the queues, with the other empty
- During the scan, all new requests are put into the other queue, so service of new requests is deferred until all of the old requests have been processed
- The purpose of these SCAN variations is to prevent requests near the edges of the disk from being starved

Table 11.2 Comparison of Disk Scheduling Algorithms

Table 11.3 Disk Scheduling Algorithms

Solid state drives (SSDs) are faster than hard disk drives (HDDs) but are also much more expensive. SSDs (sometimes called solid state disks) don't actually contain a disk, a drive motor, or a read arm; they are completely electronic. They do, however, support traditional block-oriented access, as HDDs do.
Two technologies:
- Flash memory: maintains data persistence without power
- RAM: usable if permanent data retention isn't an issue
Advantages: quiet, lower latency, better access times. Disadvantages: cost, and the fact that writes gradually degrade performance.

Redundant Array of Independent Disks (RAID):
- Consists of seven levels, zero through six
- The design architectures share three characteristics:
  - RAID is a set of physical disk drives viewed by the operating system as a single logical drive
  - Data are distributed across the physical drives of an array in a scheme known as striping
  - Redundant disk capacity is used to store parity information, which guarantees data recoverability in case of a disk failure

The term RAID was originally coined in a paper by a group of researchers at the University of California at Berkeley; the paper outlined various configurations and applications and introduced the definitions of the RAID levels. The strategy employs multiple disk drives and distributes data in such a way as to enable simultaneous access to data from multiple drives, which improves I/O performance and allows easier incremental increases in capacity. The unique contribution is to address effectively the need for redundancy: RAID makes use of stored parity information that enables the recovery of data lost due to a disk failure.

Table 11.4 RAID Levels (N = number of data disks; m proportional to log N)

RAID Level 0:
- Not a true member of the RAID family, because it does not include redundancy; striping improves performance but provides no data protection
- User and system data are distributed across all of the disks in the array
- The logical disk is divided into strips

RAID Level 1:
- Redundancy is achieved by the simple expedient of duplicating (mirroring) all the data
- There is no "write penalty"
- When a drive fails, the data may still be accessed from the second drive
- The principal disadvantage is the cost

RAID Level 2:
- Makes use of a parallel access technique
- Data striping is used
- Typically a Hamming code is used
- An effective choice in an environment in which many disk errors occur

RAID Level 3:
- Requires only a single redundant disk, no matter how large the disk array
- Employs parallel access, with data distributed in small strips
- Can achieve very high data transfer rates

RAID Level 4:
- Makes use of an independent access technique
- A bit-by-bit parity strip is calculated across corresponding strips on each data disk, and the parity bits are stored in the corresponding strip on the parity disk
- Involves a write penalty when an I/O write request of small size is performed

RAID Level 5:
- Similar to RAID 4 but distributes the parity strips across all disks
- A typical allocation is a round-robin scheme
- The loss of any one disk does not result in data loss
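
The parity mechanism behind RAID 4 and RAID 5 is plain XOR across corresponding strips. The following minimal C sketch (array sizes and strip contents are arbitrary illustrations) computes a parity strip and then rebuilds a "failed" strip from the survivors:

```c
#include <stdio.h>
#include <string.h>

#define NDISKS 4   /* 3 data disks + 1 parity (illustrative) */
#define STRIP  8   /* strip size in bytes, tiny for illustration */

/* Parity strip = XOR of the corresponding data strips. */
static void compute_parity(unsigned char d[NDISKS][STRIP])
{
    memset(d[NDISKS - 1], 0, STRIP);
    for (int disk = 0; disk < NDISKS - 1; disk++)
        for (int i = 0; i < STRIP; i++)
            d[NDISKS - 1][i] ^= d[disk][i];
}

/* Rebuild the strip on `failed` by XORing all the other strips:
 * works because x ^ x = 0, so the lost strip "cancels out". */
static void rebuild(unsigned char d[NDISKS][STRIP], int failed)
{
    memset(d[failed], 0, STRIP);
    for (int disk = 0; disk < NDISKS; disk++)
        if (disk != failed)
            for (int i = 0; i < STRIP; i++)
                d[failed][i] ^= d[disk][i];
}

int main(void)
{
    unsigned char strips[NDISKS][STRIP] = {"disk 0!", "disk 1!", "disk 2!"};
    compute_parity(strips);
    memset(strips[1], 0xFF, STRIP);          /* simulate losing disk 1 */
    rebuild(strips, 1);
    printf("recovered: %s\n", (char *)strips[1]);  /* prints "disk 1!" */
    return 0;
}
```

The small-write penalty mentioned for RAID 4 follows directly: updating one data strip also requires reading the old data and parity and rewriting the parity strip.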

RAID Level 6:
- Two different parity calculations are carried out and stored in separate blocks on different disks
- Provides extremely high data availability
- Incurs a substantial write penalty because each write affects two parity blocks

The term cache memory usually applies to a memory that is smaller and faster than main memory and that is interposed between main memory and the processor; it reduces average memory access time by exploiting the principle of locality. A disk cache is a buffer in main memory for disk sectors, containing a copy of some of the sectors on the disk:
- When an I/O request is made for a particular sector, a check is made to determine whether the sector is in the disk cache
- If yes, the request is satisfied via the cache
- If no, the requested sector is read into the disk cache from the disk

Least Recently Used (LRU):
- The most commonly used algorithm for the replacement-strategy design issue
- The block that has been in the cache the longest with no reference to it is replaced
- A stack of pointers references the cache: the most recently referenced block is on the top of the stack, and when a block is referenced or brought into the cache it is placed on the top of the stack (see the sketch below)
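
A minimal C sketch of the stack idea described above, using a small array ordered from most to least recently used; the cache size and the reference trace are illustrative:

```c
#include <stdio.h>

#define CACHE_SLOTS 4

static int stack[CACHE_SLOTS];   /* stack[0] = most recently used */
static int used = 0;

/* Reference a block: move it to the top; on a miss with a full cache,
 * the bottom entry (least recently used) is the one replaced. */
static void reference(int block)
{
    int i;
    for (i = 0; i < used; i++)
        if (stack[i] == block) break;        /* hit: found at depth i */
    if (i == used) {                         /* miss */
        if (used == CACHE_SLOTS) {
            printf("evict block %d\n", stack[used - 1]);
            i = used - 1;                    /* reuse the LRU slot */
        } else {
            used++;
        }
    }
    /* slide everything above position i down one, put block on top */
    for (; i > 0; i--)
        stack[i] = stack[i - 1];
    stack[0] = block;
}

int main(void)
{
    int trace[] = {1, 2, 3, 4, 1, 5};        /* referencing 5 evicts block 2 */
    for (int k = 0; k < 6; k++)
        reference(trace[k]);
    return 0;
}
```

A real disk cache would use a hash table plus a doubly linked list to make each reference O(1), but the replacement decision is the same: whatever sits at the bottom of the stack goes.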

Least Frequently Used (LFU):
- The block that has experienced the fewest references is replaced
- A counter is associated with each block and incremented each time the block is accessed
- When replacement is required, the block with the smallest count is selected

UNIX buffer cache:
- Essentially a disk cache; I/O operations with the disk are handled through the buffer cache
- The data transfer between the buffer cache and the user process space always occurs using DMA: it does not use up any processor cycles, but it does consume bus cycles
- Three lists are maintained:
  - Free list: all slots in the cache that are available for allocation
  - Device list: all buffers currently associated with each disk
  - Driver I/O queue: buffers that are actually undergoing or waiting for I/O on a particular device

Character queues:
- May only be read once; as each character is read, it is effectively destroyed
- Either written by the I/O device and read by the process, or vice versa (a producer/consumer model is used)
- Used by character-oriented devices such as terminals and printers

Unbuffered I/O:
- Simply DMA between the device and the process space
- Always the fastest method for a process to perform I/O
- The process is locked in main memory and cannot be swapped out
- The I/O device is tied up with the process for the duration of the transfer, making it unavailable for other processes

Table 11.5 Device I/O in UNIX

Linux I/O is very similar to that of other UNIX implementations: a special file is associated with each I/O device driver, and block, character, and network devices are recognized. The default disk scheduler in Linux 2.4 is the Linux Elevator. In Linux 2.6, the Elevator algorithm has been augmented by two additional algorithms: the deadline I/O scheduler and the anticipatory I/O scheduler.

Anticipatory I/O scheduler:
- Elevator and deadline scheduling can be counterproductive if there are numerous synchronous read requests
- Superimposed on the deadline scheduler
- When a read request is dispatched, the anticipatory scheduler delays briefly: there is a good chance that the application that issued the last read request will issue another read request to the same region of the disk, and that request will then be serviced immediately
- Otherwise the scheduler resumes using the deadline scheduling algorithm

In Linux 2.4 and later there is a single unified page cache for all traffic between disk and main memory. Benefits:
- Dirty pages can be collected and written out efficiently
- Pages in the page cache are likely to be referenced again due to temporal locality

The Windows I/O manager works with four types of kernel components:
- Cache manager: maps regions of files into kernel virtual memory and then relies on the virtual memory manager to copy pages to and from the files on disk
- File system drivers: send I/O requests to the software drivers that manage the hardware device adapter
- Network drivers: Windows includes integrated networking capabilities and support for remote file systems; the facilities are implemented as software drivers
- Hardware device drivers: the source code of Windows device drivers is portable across different processor types

Windows offers two modes of I/O operation:
- Asynchronous: used whenever possible to optimize application performance; an application initiates an I/O operation and then can continue processing while the I/O request is fulfilled
- Synchronous: the application is blocked until the I/O operation completes
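
A minimal sketch of the asynchronous mode using the Win32 overlapped I/O API; the file name is a placeholder and error handling is pared down to the essentials:

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* FILE_FLAG_OVERLAPPED puts the handle in asynchronous mode. */
    HANDLE h = CreateFileA("data.bin", GENERIC_READ, FILE_SHARE_READ,
                           NULL, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);
    if (h == INVALID_HANDLE_VALUE) return 1;

    char buf[4096];
    OVERLAPPED ov = {0};   /* file offset 0; could also carry an event */
    if (!ReadFile(h, buf, sizeof buf, NULL, &ov) &&
        GetLastError() != ERROR_IO_PENDING)
        return 1;

    /* ... the application keeps computing while the read is in flight ... */

    DWORD got = 0;
    GetOverlappedResult(h, &ov, &got, TRUE);   /* TRUE = wait for completion */
    printf("read %lu bytes\n", (unsigned long)got);
    CloseHandle(h);
    return 0;
}
```

Here completion is detected by polling/waiting with GetOverlappedResult, the simplest of the five signaling techniques listed on the next slide.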

Windows provides five different techniques for signaling I/O completion:
1. Signaling the file object
2. Signaling an event object
3. Asynchronous procedure call
4. I/O completion ports
5. Polling

Windows supports two sorts of RAID configurations:
- Hardware RAID: separate physical disks combined into one or more logical disks by the disk controller or disk storage cabinet hardware
- Software RAID: noncontiguous disk space combined into one or more logical partitions by the fault-tolerant software disk driver, FTDISK

Volume shadow copies:
- An efficient way of making consistent snapshots of volumes so they can be backed up
- Also useful for archiving files on a per-volume basis
- Implemented by a software driver that makes copies of data on the volume before it is overwritten
Volume encryption:
- Windows uses BitLocker to encrypt entire volumes
- More secure than encrypting individual files
- Allows multiple interlocking layers of security

Summary
- I/O devices
- Organization of the I/O function: the evolution of the I/O function, direct memory access
- Operating system design issues: design objectives, logical structure of the I/O function
- I/O buffering: single buffer, double buffer, circular buffer
- Disk scheduling: disk performance parameters, disk scheduling policies
- RAID: RAID levels 0-6
- Disk cache: design and performance considerations
- UNIX SVR4 I/O: buffer cache, character queue, unbuffered I/O, UNIX devices
- Linux I/O: disk scheduling, Linux page cache
- Windows I/O: basic I/O facilities, asynchronous and synchronous I/O, software RAID, volume shadow copies/encryption