Chapter 11 Disk Scheduling


Chapter 11 Disk Scheduling
BYU CS 345, Alex Milenkovich

CS 345 Schedule (Stallings chapters and projects)
- Ch. 1: Computer System Overview; Ch. 2: Operating System Overview (P1: Shell)
- Ch. 3: Process Description and Control; Ch. 4: Threads (P2: Tasking)
- Ch. 5: Concurrency: Mutual Exclusion and Synchronization; Ch. 6: Concurrency: Deadlock and Starvation (P3: Jurassic Park)
- Ch. 7: Memory Management; Ch. 8: Virtual Memory (P4: Virtual Memory)
- Ch. 9: Uniprocessor Scheduling; Ch. 10: Multiprocessor and Real-Time Scheduling (P5: Scheduling)
- Ch. 11: I/O Management and Disk Scheduling; Ch. 12: File Management (P6: FAT)
- Student Presentations

Chapter 11 Learning Objectives
After studying this chapter, you should be able to:
- Summarize key categories of I/O devices on computers.
- Discuss the organization of the I/O function.
- Explain some of the key issues in the design of OS support for I/O.
- Analyze the performance implications of various I/O buffering alternatives.
- Understand the performance issues involved in magnetic disk access.
- Explain the concept of RAID and describe the various levels.
- Understand the performance implications of disk cache.
- Describe the I/O mechanisms in UNIX, Linux, and Windows 7.

Disk Structure
- Disks are addressed as a one-dimensional array of logical sectors, with a logical mapping to physical sectors on the disk
- Sectors are the smallest addressable blocks (usually 512 bytes); clusters are composed of one or more sectors
- Simple, but...
  - Defective sectors are hidden by substituting sectors from elsewhere
  - The number of sectors per track is not constant: there are about 40% more sectors on the outside track
[Figure: a 512-byte sector and logical sector numbers 1, 2, 3, ...]
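The logical-to-physical mapping above can be sketched with the classic cylinder/head/sector calculation. The geometry numbers here are hypothetical: real drives remap defective sectors internally and use zoned recording, which is exactly why more sectors fit on outer tracks.

```python
def lba_to_chs(lba, heads=16, sectors_per_track=63):
    """Map a logical block address to (cylinder, head, sector).

    Assumes an idealized fixed geometry; real drives have more sectors
    on outer tracks and hide defective sectors behind the mapping.
    """
    cylinder = lba // (heads * sectors_per_track)
    head = (lba // sectors_per_track) % heads
    sector = lba % sectors_per_track + 1   # sectors are numbered from 1
    return cylinder, head, sector
```

With this geometry, block 0 maps to (0, 0, 1) and the mapping wraps to the next head after 63 sectors.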

Disk Structure
[Figure]

Disk Speed
[Figure]

Disk Performance
- Much slower than memory
  - Typical disk access time: 4-10 ms (10^-3 s)
  - Typical memory access time: 1-10 ns (10^-9 s)
- I/O bus protocols:
  - IDE - Integrated Drive Electronics
  - EIDE - Enhanced IDE
  - ATA - Advanced Technology Attachment
  - SATA - Serial ATA
  - USB - Universal Serial Bus
  - FC - Fibre Channel
  - SCSI - Small Computer System Interface
  - SAS - Serial Attached SCSI

Effective Transfer Rates
- Performance measures:
  - Seek time: time to move the heads; approximately (# of tracks crossed × c) + startup/settle time
  - Rotational delay: waiting for the correct sector to move under the head; on average half a rotation
    - Hard disk: 5400 rpm → 5.6 ms; 10000 rpm → 3 ms
    - Floppy: 300 rpm → 100 ms
- Effective times:
  - Access time: the sum of seek time and rotational delay
  - Transfer time: the actual time needed to perform the read or write; depends on locality
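The rotational-delay figures above follow directly from the half-rotation rule; a one-liner reproduces them:

```python
def avg_rotational_delay_ms(rpm):
    """Average rotational delay: half a revolution, in milliseconds."""
    ms_per_revolution = 60_000 / rpm   # 60,000 ms per minute
    return ms_per_revolution / 2
```

For example, 5400 rpm gives about 5.6 ms and a 300 rpm floppy gives 100 ms, matching the slide.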

Disk Scheduling
- When a read/write job is requested, the disk may already be busy
- Pending jobs are placed in a disk queue, where they can be scheduled to improve utilization
- Disk scheduling increases the disk's bandwidth (the amount of information that can be transferred in a set amount of time)

Disk Scheduling
- Random
  - Select a random request to do next
  - Worst possible performance, but useful as a baseline for comparisons
- FIFO
  - Service requests in the order they arrive
  - Simple, and fair in that all requests are honored
  - Poor performance if requests are not clustered, especially in multiprogramming systems
- Priority
  - Do operations for high-priority processes first
  - Not intended to optimize disk utilization, but to meet other objectives
  - Users may try to exploit priorities

Disk Scheduling
- LIFO (Last-In, First-Out)
  - Always take the most recent request
  - Hopes to catch a sequence of reads at one time (locality)
  - Runs the risk of starvation
- SSTF (Shortest Service Time First)
  - Select the request closest to the current track next
  - Requires knowledge of the arm position
  - Also risks starvation
- SCAN
  - The arm moves in one direction, servicing all requests en route; when no requests remain ahead, it switches direction
  - Biased against the track just traversed; misses locality in the wrong direction
  - Favors the innermost and outermost tracks, and the latest-arriving jobs

Disk Scheduling
- C-SCAN (circular SCAN)
  - Always scan in one direction; when no requests remain, move to the far end and start over
  - Reduces wait time for requests near the edges
- N-step-SCAN
  - Prevents one track from monopolizing the disk ("arm stickiness")
  - Keep sub-queues of size N; use SCAN within each sub-queue
- FSCAN
  - When one scan begins, only the existing requests are handled; new requests go on a secondary queue
  - When a scan ends, take the queued entries and start a new scan

FCFS Scheduling
- Head starts at cylinder 53; request queue: 98, 183, 37, 122, 14, 124, 65, 67 (cylinders 0-199)
[Figure: head movement servicing requests in arrival order]
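A short sketch makes the FCFS cost concrete: servicing the queue in arrival order just sums the seek distances, which for this request sequence comes to 640 cylinders of head movement.

```python
def fcfs_movement(start, queue):
    """Total head movement (in cylinders) servicing requests in arrival order."""
    total, pos = 0, start
    for cyl in queue:
        total += abs(cyl - pos)   # seek from current position to next request
        pos = cyl
    return total
```

Usage: `fcfs_movement(53, [98, 183, 37, 122, 14, 124, 65, 67])` returns 640.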

SSTF Scheduling (shortest seek time first)
- Head starts at cylinder 53; request queue: 98, 183, 37, 122, 14, 124, 65, 67 (cylinders 0-199)
- A form of Shortest Job First
- Not optimal, and may cause starvation
[Figure: head movement always serving the closest pending request]
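SSTF can be sketched as a greedy loop that repeatedly picks the pending request nearest the current head position; for the same queue it cuts total movement from 640 to 236 cylinders.

```python
def sstf(start, queue):
    """Service order and total head movement for shortest-seek-time-first."""
    pending, pos = list(queue), start
    order, total = [], 0
    while pending:
        # Greedily choose the pending request closest to the head.
        nearest = min(pending, key=lambda cyl: abs(cyl - pos))
        pending.remove(nearest)
        total += abs(nearest - pos)
        pos = nearest
        order.append(nearest)
    return order, total
```

Note the greedy choice is what creates the starvation risk: a distant request can wait indefinitely while nearby requests keep arriving.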

SCAN Scheduling (scan from one side to the other)
- Head starts at cylinder 53; request queue: 98, 183, 37, 122, 14, 124, 65, 67 (cylinders 0-199)
- Sometimes called the elevator algorithm
- Consider the pending requests when the head reverses direction: it has just serviced the requests near the head, so few nearby requests remain
[Figure: head sweeps to one end of the disk, then reverses]
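A sketch of SCAN under one common textbook convention (the head is initially moving toward cylinder 0 and travels all the way to the edge before reversing); with that assumption this queue also costs 236 cylinders.

```python
def scan_movement(start, queue, lo=0):
    """SCAN total head movement, head initially moving toward cylinder `lo`.

    Assumes the arm sweeps to the disk edge, then reverses and services
    the remaining requests on the way back.
    """
    below = sorted((c for c in queue if c <= start), reverse=True)
    above = sorted(c for c in queue if c > start)
    path = below + [lo] + above   # visit the edge even with no request there
    total, pos = 0, start
    for cyl in path:
        total += abs(cyl - pos)
        pos = cyl
    return total
```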

C-SCAN Scheduling (circular SCAN)
- Head starts at cylinder 53; request queue: 98, 183, 37, 122, 14, 124, 65, 67 (cylinders 0-199)
- Designed to provide a more uniform wait time
- Treats the cylinders like a circular list
[Figure: head sweeps in one direction only, then returns to the start]
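C-SCAN can be sketched the same way; one modeling assumption to be explicit about is whether the long return seek from the last cylinder back to the first is counted. This sketch counts it (some texts omit it), giving 382 cylinders for this queue.

```python
def c_scan_movement(start, queue, lo=0, hi=199):
    """C-SCAN total head movement, sweeping upward.

    The return seek from `hi` back to `lo` IS counted here; texts that
    omit it report a smaller total for the same request sequence.
    """
    above = sorted(c for c in queue if c >= start)
    below = sorted(c for c in queue if c < start)
    path = above + [hi, lo] + below   # sweep up, wrap, continue upward
    total, pos = 0, start
    for cyl in path:
        total += abs(cyl - pos)
        pos = cyl
    return total
```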

LOOK and C-LOOK
- Head starts at cylinder 53; request queue: 98, 183, 37, 122, 14, 124, 65, 67 (cylinders 0-199)
- Like SCAN and C-SCAN, but don't go all the way to the end cylinders: reverse (or wrap) at the last request in each direction
[Figure: C-LOOK head movement]
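The LOOK variants are the same sweeps without the trip to the disk edge. Sketches of both, assuming the head initially moves upward and (for C-LOOK) counting the wrap-around jump:

```python
def look_movement(start, queue):
    """LOOK: like SCAN, but reverse at the last request, not the disk edge."""
    above = sorted(c for c in queue if c >= start)
    below = sorted((c for c in queue if c < start), reverse=True)
    total, pos = 0, start
    for cyl in above + below:         # sweep up, then back down
        total += abs(cyl - pos)
        pos = cyl
    return total

def c_look_movement(start, queue):
    """C-LOOK: like C-SCAN, but jump from the highest request straight to
    the lowest pending request; the jump itself is counted here."""
    above = sorted(c for c in queue if c >= start)
    below = sorted(c for c in queue if c < start)
    total, pos = 0, start
    for cyl in above + below:         # sweep up, wrap, continue upward
        total += abs(cyl - pos)
        pos = cyl
    return total
```

On the running example, LOOK gives 299 cylinders and C-LOOK gives 322, both less than their SCAN counterparts because the edge travel is skipped.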

Disk Scheduling Algorithms
Selection according to requestor:
- RSS: random scheduling; for analysis and simulation
- FIFO: first in, first out; fairest of them all
- Priority: priority by process; no disk optimization
- LIFO: last in, first out; maximizes locality and resource utilization
Selection according to requested item:
- SSTF: shortest service time first; high utilization, small queues
- SCAN: back and forth over the disk; better service distribution
- C-SCAN: one way with fast return; lower service variability
- N-step-SCAN: SCAN of N records at a time; service guarantee
- FSCAN: N-step-SCAN with N = queue size at the beginning of the SCAN cycle; load sensitive

Choosing an Algorithm
- Seek time is the only component the scheduler can control
- SSTF is commonly used and intuitively appealing
- SCAN and C-SCAN perform better under heavy load (why? think about starvation issues)
- An optimal schedule can be found, but the computation is expensive
- In lightly used systems, FCFS is fine

Choosing an Algorithm
- Head movement isn't the only consideration: rotational latency matters too
- The file system plays a part as well: it typically generates requests to read or write a larger unit of data (say 64 KB), which reaches the disk as a command to read or write 128 sectors
- Hardware can help too, e.g. caching a whole track at a time
  - The disk first transfers data to a memory chip in the disk controller circuitry
  - The cheapest desktop disks have 2 megabytes of cache memory; for a few dollars more you can get a disk with 8 megabytes

Low-level Formatting
- As manufactured, a disk is just a magnetizable plate
- Low-level (physical) formatting divides the disk into sectors
- Recording codes: MFM, M2FM (missing clocks)
- Each sector has a header, 512 bytes of data, and a trailer
  - The header and trailer carry the sector number, disk controller info, and ECC

Disk Formatting
- Logical formatting writes the file system structure onto the disk
- Includes boot information (when requested) and FAT tables or inodes
- The boot sector contains just enough instructions to start loading the OS from somewhere else
  - It is invoked by the bootstrap program in the computer's ROM

Bad Blocks
- In FAT, when a block gives an unrecoverable ECC error, the sector is marked bad (0xFF7) in the FAT
- Bitmapped allocation just marks the sector as used
- Some systems keep a pool of spare blocks; when a block goes bad, the logical sector is remapped to one of the spares
  - On the same cylinder if possible
  - Sector slipping: push down a group of sectors to free one up
- This doesn't recover the corrupted file, but it does keep the same logical disk structure

Asynchronous I/O
- The application makes a request and proceeds
- Ways to signal completion:
  - Signal the device kernel object: simple to handle, but cannot distinguish multiple requests on the same file
  - Signal an event kernel object: a separate object can be created for each request
  - Alertable I/O: the I/O manager places results in an APC (asynchronous procedure call) queue
  - I/O completion ports: use a pool of threads to handle requests
- RAID
  - Hardware: done by the disk controller
  - Software: the system combines the space

Swap Space
- The area of the disk where processes or memory pages are written when the system needs more physical memory
- Can be either a set-aside area of the disk or part of the regular file system
  - Set-aside area: easier management and better performance, but requires special code
  - Part of the file system: uses regular file functions, but is slower to access and change

Reliability
- Although disks are fairly reliable, they still fail on a regular basis
- Redundant arrays of independent disks (RAID) are more reliable
- The simplest forms keep duplicate copies of each disk: twice the cost, but up to twice as fast when reading
- Another type uses one parity block per 8 disk blocks; the parity can be used to recalculate the information in a bad block

RAID
- Database designers recognize that a hardware component can only be pushed so far
- If the data is on separate disks, we can issue commands in parallel: individual requests or a single large request
- Provides physical redundancy
- The industry standard for multiple-disk database design is RAID (Redundant Array of Independent Disks)
- Key advantage: combines multiple low-cost devices using older technology into an array that offers greater capacity, reliability, speed, or a combination of these, than is affordably available in a single device using the newest technology

RAID Common Characteristics
- Views a set of physical disks as a single logical entity
- Data is distributed across the disks in the array
- Uses redundant disk capacity so the array can respond to a disk failure
- Addresses the need for redundancy
  - Caveat: more disks → more chances of failure

RAID Product Examples
- IBM ESS Model 750
[Figure]

Data Mapping for RAID 0
Array management software maps logical strips 0-15 round-robin across four physical disks:
- Physical Disk 0: strips 0, 4, 8, 12
- Physical Disk 1: strips 1, 5, 9, 13
- Physical Disk 2: strips 2, 6, 10, 14
- Physical Disk 3: strips 3, 7, 11, 15

RAID
- RAID 0
  - Using multiple disks means a single request can be handled in parallel, greatly reducing I/O transfer time
  - May also balance load across disks
  - Does not use redundancy
- RAID 1
  - Mirrors data: read from either disk; writes update both disks in parallel
  - Failure recovery is easy
  - The principal disadvantage is cost

Hamming Codes
- Forward Error Correction (FEC) refers to the receiving station's ability to correct a transmission error
- The transmitting station must append error-correction bits to the data, but the increase in frame length may be modest relative to the cost of retransmission
- Hamming codes provide FEC using a "block parity" mechanism that can be implemented inexpensively (XOR)
- Hamming codes detect single- and double-bit errors, and can correct single-bit errors
- This is accomplished by using more than one parity bit, each computed over a different combination of bits in the data
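A minimal Hamming (7,4) sketch of the idea: three parity bits, each an XOR over a different subset of the four data bits, let the receiver locate and flip a single corrupted bit. (This is the textbook (7,4) layout with parity at positions 1, 2, and 4, shown for illustration; RAID 2 uses the same principle across disks.)

```python
def hamming74_encode(data):
    """Encode 4 data bits as a 7-bit codeword (positions 1..7)."""
    d1, d2, d3, d4 = data
    p1 = d1 ^ d2 ^ d4   # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4   # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]

def hamming74_correct(code):
    """Return (corrected codeword, syndrome); syndrome 0 means no error,
    otherwise it is the 1-based position of the flipped bit."""
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s4
    if syndrome:
        c[syndrome - 1] ^= 1   # flip the bit the syndrome points at
    return c, syndrome
```

Flipping any single bit of a codeword yields a nonzero syndrome naming that bit's position, so the receiver can repair it without retransmission.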

RAID (continued...)
- RAID 2
  - Parallel access: all disks participate in each I/O request
  - Uses Hamming codes to correct single-bit errors and detect double-bit errors
  - Spindles are synchronized; strips are small (a byte or word)
  - Overkill; not used in practice
- RAID 3
  - Implements a single parity strip on a single redundant disk
  - Parity = data1 ⊕ data2 ⊕ data3 ⊕ ...
  - Missing data can be reconstructed from the parity strip and the remaining data strips
  - Capable of high data rates, but only one I/O request at a time
  - After a failure, operates in reduced mode
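The RAID 3 parity relation above can be demonstrated directly: because XOR is its own inverse, XOR-ing the parity strip with the surviving strips regenerates the lost one. The strip contents below are made-up byte strings, purely for illustration.

```python
from functools import reduce

def xor_strips(a, b):
    """Byte-wise XOR of two equal-length strips."""
    return bytes(x ^ y for x, y in zip(a, b))

def parity_strip(data_strips):
    """Parity = data1 XOR data2 XOR data3 XOR ..."""
    return reduce(xor_strips, data_strips)

def rebuild_lost_strip(surviving_strips, parity):
    """Reconstruct the missing data strip from parity and the survivors."""
    return reduce(xor_strips, surviving_strips, parity)
```

This is also why every RAID 3/4 write must touch the parity strip: changing any data strip changes the XOR.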

RAID (continued...)
- RAID 4
  - Independent access technique
  - The parity strip is handled on a block basis and must be updated on each write
  - The parity disk tends to become a bottleneck
- RAID 5
  - Like RAID 4, but distributes the parity strips across all disks

RAID (continued...)
- RAID 6
  - Uses two different data-check methods
  - Can survive a double-disk failure
  - Extremely high data availability, with a substantial write penalty

RAID Level Summary

Level | Category          | Description                               | I/O Request Rate (R/W)  | Data Transfer Rate (R/W) | Typical Application
0     | Striping          | Non-redundant                             | Large strips: excellent | Small strips: excellent  | Applications requiring high performance for non-critical data
1     | Mirroring         | Mirrored                                  | Good/fair               | Fair/fair                | System drives; critical files
2     | Parallel access   | Redundant via Hamming code                | Poor                    | Excellent                | Large I/O request size applications, such as imaging, CAD
3     | Parallel access   | Bit-interleaved parity                    | Poor                    | Excellent                | Large I/O request size applications, such as imaging, CAD
4     | Independent access| Block-interleaved parity                  | Excellent/fair          | Fair/poor                |
5     | Independent access| Block-interleaved distributed parity      | Excellent/fair          | Fair/poor                |
6     | Independent access| Block-interleaved dual distributed parity | Excellent/poor          | Fair/poor                | Applications requiring extremely high availability

Disk Cache
- Similar in concept to a memory cache: use main memory to hold frequently used disk blocks, exploiting the principle of locality
- Can afford more complex algorithms because of the time scales involved
- Handling a request:
  - If the data is in the cache, pass it to the user (copy it, or use shared memory)
  - If not, read it from disk and add it to the cache
- Replacement algorithms:
  - LRU: keep a list and evict the block unused for the longest time; the most commonly used algorithm
  - LFU (least frequently used): accounts for how often a block is used
    - Watch out for a burst of uses followed by idleness; performance improves with frequency-based variants
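The LRU policy described above can be sketched in a few lines. The `read_from_disk` callback is a hypothetical stand-in for the real block-device read; the ordered dictionary's insertion order doubles as the recency list.

```python
from collections import OrderedDict

class BlockCache:
    """Minimal LRU disk-block cache (sketch, not a real block layer)."""

    def __init__(self, capacity, read_from_disk):
        self.capacity = capacity
        self.read_from_disk = read_from_disk   # backing-store callback
        self.blocks = OrderedDict()            # block_no -> data, LRU first

    def read(self, block_no):
        if block_no in self.blocks:
            self.blocks.move_to_end(block_no)  # hit: mark most recently used
            return self.blocks[block_no]
        data = self.read_from_disk(block_no)   # miss: go to the disk
        self.blocks[block_no] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)    # evict least recently used
        return data
```

A frequency-based (LFU-style) variant would replace the recency ordering with per-block use counts, at the cost of the burst-then-idle pathology noted above.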