
1 I/O Management and Disk Scheduling Chapter 11

2 I/O Devices Can be grouped into three categories: 1. Human readable Used to communicate with the computer user E.g.: – Printers – Video display terminals » Display » Keyboard » Mouse

3 I/O Devices 2. Machine readable Used to communicate with electronic equipment E.g.: – Disk and tape drives – Sensors – Controllers – Actuators

4 I/O Devices 3. Communication Used to communicate with remote devices E.g.: – Digital line drivers – Modems

5 I/O Devices There are differences among I/O devices: 1. Data rate There may be differences of several orders of magnitude between data transfer rates, as shown in Figure 11.1 2. Application The use to which a device is put influences the software and policies of the OS and supporting facilities – A disk used to store files requires file management software – A disk used to store virtual memory pages needs special hardware and software to support it


7 I/O Devices – A terminal used by a system administrator may have a higher priority 3. Complexity of control A printer requires a simple control interface compared with a disk, which requires a much more complex interface.

8 I/O Devices 4. Unit of transfer Data may be transferred as a stream of bytes for a terminal OR in larger blocks for a disk 5. Data representation Different encoding schemes are used by different devices 6. Error conditions Devices respond to errors differently

9 Organization of the I/O Function Recall that there are three techniques for performing I/O: 1. Programmed I/O The processor issues an I/O command to the I/O module on behalf of a process The process busy-waits for the operation to complete before proceeding 2. Interrupt-driven I/O The I/O command is issued by the processor The processor continues executing instructions The I/O module sends an interrupt when done

10 Organization of the I/O Function 3. Direct Memory Access (DMA) The DMA module controls the exchange of data between main memory and the I/O device The processor sends a request for the transfer of a block of data to the DMA module and is interrupted only after the entire block has been transferred Table 11.1 indicates the relationship among these three techniques.

11 Organization of the I/O Function

12 Organization of the I/O Function - Evolution of the I/O Function The processor directly controls a peripheral device A controller or I/O module is added The processor uses programmed I/O without interrupts The processor does not need to handle the details of external devices

13 Organization of the I/O Function - Evolution of the I/O Function Controller or I/O module with interrupts The processor does not spend time waiting for an I/O operation to be performed Increasing efficiency Direct Memory Access Blocks of data are moved into memory without involving the processor The processor is involved at the beginning and end only

14 Organization of the I/O Function - Evolution of the I/O Function I/O module is a separate processor The CPU directs the I/O processor to execute an I/O program in main memory The I/O processor fetches and executes these instructions without processor intervention Allows the processor to specify a sequence of I/O activities and to be interrupted only when the entire sequence has been performed I/O processor The I/O module has its own local memory It is a computer in its own right

15 Organization of the I/O Function - Direct Memory Access Figure 11.2 shows a typical DMA block diagram The processor delegates an I/O operation to the DMA module The DMA module transfers data directly to or from memory When complete, the DMA module sends an interrupt signal to the processor

16 Organization of the I/O Function - Direct Memory Access

17 Organization of the I/O Function - Direct Memory Access The DMA mechanism can be configured in many ways. Some possibilities are shown in Figure 11.3.

18 Organization of the I/O Function - Direct Memory Access

19 Organization of the I/O Function - Direct Memory Access

20 Operating System Design Issues – Design Objectives Efficiency Most I/O devices are extremely slow compared to main memory The use of multiprogramming allows some processes to wait on I/O while another process executes Even so, I/O cannot keep up with processor speed Swapping is used to bring in additional Ready processes, but swapping is itself an I/O operation

21 Operating System Design Issues – Design Objectives Generality It is desirable to handle all I/O devices in a uniform manner This applies both to the way processes view I/O devices and to the way the OS manages I/O devices and operations Hide most of the details of device I/O in lower-level routines so that processes and upper levels see devices in general terms such as read, write, open, close, lock, unlock

22 Operating System Design Issues – Logical Structure of the I/O Function The I/O facility leads to the type of organization suggested by Figure 11.4 The details of the organization depend on the type of device and the application The three most important logical structures are shown: Local peripheral device Communication port File system

23 Operating System Design Issues – Logical Structure of the I/O Function Consider the simplest case: a local peripheral device that communicates in a simple fashion, such as a stream of bytes or records For Figure 11.4a, the following layers are involved: Logical I/O Deals with the device as a logical resource and is not concerned with the details of actually controlling the device Concerned with managing general I/O functions on behalf of user processes – Open, close, read, write

24 Operating System Design Issues – Logical Structure of the I/O Function Device I/O The requested operations and data are converted into appropriate sequences of I/O instructions, channel commands and controller orders Buffering techniques may be used to improve utilization Scheduling and control The actual queuing and scheduling of I/O operations occurs here, together with controlling the operations Interacts with the I/O module and the device hardware

25 Operating System Design Issues – Logical Structure of the I/O Function Figure 11.4b is the same as Figure 11.4a The principal difference is that the logical I/O module is replaced by a communication architecture Consists of a number of layers – E.g. TCP/IP

26 Operating System Design Issues – Logical Structure of the I/O Function Figure 11.4c shows a representative structure for managing I/O on a secondary storage device that supports a file system The three layers not previously discussed: Directory management Symbolic file names are converted to identifiers that reference the file directly or indirectly through a file descriptor or index table Concerned with user operations that affect the directory of files – Add, delete

27 Operating System Design Issues – Logical Structure of the I/O Function File system Deals with the logical structure of files and with the operations that can be specified by the user – Open, close, read, write Access rights are managed at this layer Physical organization Translation of virtual addresses to physical addresses using segmentation or paging Allocation of secondary storage space and main storage buffers


29 I/O Buffering Suppose a user process wishes to read blocks of data from a tape one at a time Each block has a length of 512 bytes The data are to be read into a data area within the address space of the user process at virtual addresses 1000 to 1511 The process executes an I/O command to the tape unit and waits for the data to become available There are two problems with this approach: 1. The user program is hung up waiting for the I/O to complete 2. It interferes with swapping decisions by the OS

30 I/O Buffering For problem 2, virtual locations 1000–1511 must remain in main memory during the course of the block transfer; otherwise some data may be lost There is a risk of single-process deadlock: the process is suspended while waiting for the I/O command and is swapped out; the process is then blocked waiting for the I/O event, while the I/O operation is blocked waiting for the process to be swapped in To avoid this problem, a locking facility is used

31 I/O Buffering The same problems apply to an output operation: if a block is being transferred from the user process area directly to an I/O module, the process is blocked during the transfer and may not be swapped out To avoid these overheads and inefficiencies → buffering techniques Perform input transfers in advance of requests being made Perform output transfers some time after the request is made

32 I/O Buffering Reasons for buffering Processes must wait for I/O to complete before proceeding Certain pages must remain in main memory during I/O There are two types of I/O devices: 1. Block oriented 2. Stream oriented

33 I/O Buffering Block-oriented Information is stored in fixed-size blocks Transfers are made a block at a time Used for disks and tapes Stream-oriented Transfers information as a stream of bytes Used for terminals, printers, communication ports, mice and other pointing devices, and most other devices that are not secondary storage

34 I/O Buffering – Single Buffer The operating system assigns a buffer in main memory for an I/O request Block-oriented Input transfers are made to the buffer When the transfer is complete, the process moves the block into user space and requests another block Reading ahead

35 I/O Buffering – Single Buffer Speedup compared to no buffering: the user process can process one block of data while the next block is read in Swapping can occur since input is taking place in system memory, not user memory The operating system keeps track of the assignment of system buffers to user processes For output, when data are being transmitted to a device, they are first copied from user space to the buffer and then written to disk

36 I/O Buffering – Single Buffer Stream-oriented Used a line at a time OR a byte at a time User input from a terminal is one line at a time, with a carriage return signaling the end of the line Output to the terminal is one line at a time The buffer is used to hold a single line The user process is suspended during input, waiting for the arrival of the entire line For output, the user process can place a line of output in the buffer and continue processing

37 I/O Buffering

38 I/O Buffering – Double Buffer Use two system buffers instead of one A process can transfer data to or from one buffer while the operating system empties or fills the other buffer This technique is known as double buffering or buffer swapping
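The benefit of this overlap can be seen with a simplified timing model (a sketch under assumed fixed per-block read and processing times; the function names and figures are hypothetical, not from the text):

```python
def single_buffer_time(n_blocks, t_read, t_proc):
    # No overlap: each block is read in full, then processed.
    return n_blocks * (t_read + t_proc)

def double_buffer_time(n_blocks, t_read, t_proc):
    # With two buffers, each read after the first overlaps the
    # processing of the previous block, so each middle step costs
    # only the slower of the two activities.
    if n_blocks == 0:
        return 0
    return t_read + (n_blocks - 1) * max(t_read, t_proc) + t_proc
```

For example, with 4 blocks at 2 ms per read and 3 ms of processing per block, this model gives 20 ms with a single buffer and 14 ms with double buffering.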

39 I/O Buffering – Double Buffer

40 I/O Buffering – Circular Buffer More than two buffers are used Each individual buffer is one unit in a circular buffer Used when the I/O operation must keep up with the process
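A ring of buffer slots reused in a circle can be sketched as follows (an illustrative toy, not from the chapter; the class and method names are invented):

```python
class CircularBuffer:
    """Fixed number of buffer slots reused in a circle."""

    def __init__(self, capacity):
        self.slots = [None] * capacity
        self.head = 0    # next slot the consumer reads
        self.tail = 0    # next slot the producer fills
        self.count = 0   # slots currently occupied

    def put(self, block):
        if self.count == len(self.slots):
            raise BufferError("all buffers full; producer must wait")
        self.slots[self.tail] = block
        self.tail = (self.tail + 1) % len(self.slots)
        self.count += 1

    def get(self):
        if self.count == 0:
            raise BufferError("all buffers empty; consumer must wait")
        block = self.slots[self.head]
        self.head = (self.head + 1) % len(self.slots)
        self.count -= 1
        return block
```

The extra slots absorb short bursts of demand; when one side is persistently faster, the ring fills (or drains) and that side must wait.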

41 I/O Buffering – Circular Buffer

42 I/O Buffering Buffering is a technique that smooths out peaks in I/O demand However, no amount of buffering will allow an I/O device to keep pace with a process indefinitely Even with multiple buffers, all of the buffers will eventually fill up, and the process will have to wait after processing each chunk of data In a multiprogramming environment, buffering is a tool that can increase the efficiency of the operating system and the performance of individual processes

43 Disk Scheduling – Disk Performance Parameters When the disk drive is operating, the disk rotates at a constant speed To read or write, the disk head must be positioned at the desired track and at the beginning of the desired sector Track selection involves moving the head in a movable-head system or electronically selecting one head in a fixed-head system The time it takes to position the head at the desired track → seek time

44 Disk Scheduling – Disk Performance Parameters Once the track is selected, the disk controller waits until the appropriate sector rotates to line up with the head The time it takes for the beginning of the sector to reach the head → rotational delay The sum of the seek time and the rotational delay → access time: the time it takes to get into position to read or write Once the head is in position, the read or write operation is performed as the sector moves under the head → the data transfer operation

45 Disk Scheduling – Disk Performance Parameters The time required for the transfer → transfer time.
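Putting the three parameters together, a rough average access time can be computed for a hypothetical drive (made-up figures: 4 ms average seek, 7200 rpm, 512-byte sectors, 500 sectors per track; on average the rotational delay is half a revolution):

```python
def avg_access_time_ms(seek_ms, rpm, sector_bytes, track_bytes):
    rotation_ms = 60_000 / rpm            # time for one full revolution
    rotational_delay = rotation_ms / 2    # on average, half a revolution
    # Transferring one sector takes the fraction of a revolution that
    # the sector occupies on the track.
    transfer = rotation_ms * sector_bytes / track_bytes
    return seek_ms + rotational_delay + transfer

t = avg_access_time_ms(4, 7200, 512, 512 * 500)
```

Here the seek (4 ms) and the rotational delay (about 4.17 ms) dominate; transferring a single 512-byte sector takes only about 0.017 ms.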

46 Disk Scheduling – Disk Performance Parameters

47 Disk Scheduling – Policies Seek time is the main reason for differences in performance If sector access requests involve selecting tracks at random, the performance of disk I/O will be as poor as possible To improve performance, the time spent on seeks must be reduced For a single disk there will be a number of I/O requests (reads and writes) from various processes in a queue If requests are selected randomly → poor performance

48 Disk Scheduling - Policies First-in, first-out (FIFO) Processes requests sequentially, in arrival order Fair to all processes Approaches random scheduling in performance if there are many processes
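FIFO's cost is easy to measure as total head movement. The sketch below uses a made-up queue of track requests and starting position (purely illustrative):

```python
def fifo_seek_distance(start, requests):
    # Serve requests strictly in arrival order, summing arm movement.
    total, pos = 0, start
    for track in requests:
        total += abs(track - pos)
        pos = track
    return total

# Hypothetical queue of track requests, head initially at track 100.
queue = [55, 58, 39, 18, 90, 160, 150, 38, 184]
distance = fifo_seek_distance(100, queue)
```

For this queue the arm travels 498 tracks in total; the policies on the following slides try to reduce that number.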

49 Disk Scheduling - Policies Priority The goal is not to optimize disk use but to meet other objectives Short batch jobs may have higher priority Provides good interactive response time

50 Disk Scheduling - Policies Last-in, first-out (LIFO) Good for transaction processing systems The device is given to the most recent user, so there should be little arm movement Possibility of starvation, since a job may never regain the head of the line

51 Disk Scheduling Policies Shortest Service Time First (SSTF) Select the disk I/O request that requires the least movement of the disk arm from its current position Always choose the minimum seek time
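SSTF can be sketched as a greedy choice of the nearest pending track (illustrative code; ties here go to whichever request appears first in the list, a detail the policy leaves open):

```python
def sstf_order(start, requests):
    # Repeatedly pick the pending request closest to the current position.
    pending, pos, order = list(requests), start, []
    while pending:
        nearest = min(pending, key=lambda track: abs(track - pos))
        pending.remove(nearest)
        order.append(nearest)
        pos = nearest
    return order
```

Starting at track 100 with the queue [55, 58, 39, 18, 90, 160, 150, 38, 184], this yields the service order [90, 58, 55, 39, 38, 18, 150, 160, 184].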

52 Disk Scheduling Policies SCAN The arm moves in one direction only, satisfying all outstanding requests until it reaches the last track in that direction The direction is then reversed

53 Disk Scheduling Policies C-SCAN Restricts scanning to one direction only When the last track has been visited in one direction, the arm is returned to the opposite end of the disk and the scan begins again
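SCAN and C-SCAN differ only in what happens at the end of a sweep; both can be sketched over a pending queue (illustrative code; the starting direction is assumed to be toward higher track numbers):

```python
def scan_order(start, requests):
    # Sweep up from the current position, then reverse and sweep down.
    up = sorted(t for t in requests if t >= start)
    down = sorted((t for t in requests if t < start), reverse=True)
    return up + down

def cscan_order(start, requests):
    # Sweep up only; after the last track, return to the far end
    # and continue scanning in the same direction.
    up = sorted(t for t in requests if t >= start)
    wrapped = sorted(t for t in requests if t < start)
    return up + wrapped
```

With the head at track 100 and the queue [55, 58, 39, 18, 90, 160, 150, 38, 184], SCAN serves [150, 160, 184, 90, 58, 55, 39, 38, 18] and C-SCAN serves [150, 160, 184, 18, 38, 39, 55, 58, 90].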

54 Disk Scheduling Policies N-step-SCAN Segments the disk request queue into subqueues of length N Subqueues are processed one at a time, using SCAN New requests are added to another queue while a subqueue is being processed FSCAN Uses two queues While one queue is serviced with SCAN, the other, initially empty, collects new requests


56 RAID Redundant Array of Independent Disks A set of physical disk drives viewed by the operating system as a single logical drive Data are distributed across the physical drives of the array Redundant disk capacity is used to store parity information
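The parity redundancy used by the parity-based RAID levels (3 through 5) is a bytewise XOR across the data disks; a minimal sketch (the helper name is invented, and real arrays work on whole strips rather than tiny byte strings):

```python
def parity_block(blocks):
    # Parity = XOR of the corresponding bytes of every data block.
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

# If one disk fails, XOR-ing the surviving data blocks with the
# parity block reconstructs the lost block, since x ^ x == 0.
```

This is why a single parity disk (or distributed parity) suffices to survive one disk failure.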

57 RAID 0 (non-redundant)

58 RAID 1 (mirrored)

59 RAID 2 (redundancy through Hamming code)

60 RAID 3 (bit-interleaved parity)

61 RAID 4 (block-level parity)

62 RAID 5 (block-level distributed parity)

63 RAID 6 (dual redundancy)

64 Disk Cache A buffer in main memory for disk sectors Contains a copy of some of the sectors on the disk When an I/O request is made for a particular sector, a check is made to determine whether the sector is in the disk cache If yes, the request is satisfied via the cache If no, the requested sector is read into the disk cache from the disk

65 Disk Cache – Design Considerations Several design issues arise: When an I/O request is satisfied from the disk cache, the data in the disk cache must be delivered to the requesting process This can be done either by – Transferring the block of data within main memory from the disk cache to memory assigned to the user process – Using a shared memory capability and passing a pointer to the appropriate slot in the disk cache » Saves time

66 Disk Cache – Design Considerations When a new sector is brought into the disk cache, one of the existing blocks must be replaced Candidate policies: Least Recently Used (LRU) Least Frequently Used (LFU)

67 Disk Cache – Design Considerations Least Recently Used (LRU) Replace the block that has been in the cache the longest with no reference to it The cache consists of a stack of blocks The most recently referenced block is on the top of the stack When a block is referenced or brought into the cache, it is placed on the top of the stack The block on the bottom of the stack is removed when a new block is brought in Blocks don't actually move around in main memory; a stack of pointers is used
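The stack description above can be sketched directly (a toy model with invented names; a real cache manipulates a stack of pointers rather than moving the blocks themselves):

```python
class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.stack = []   # stack[-1] is the most recently used block

    def reference(self, block):
        """Touch a block; return the evicted block, if any."""
        evicted = None
        if block in self.stack:
            self.stack.remove(block)      # re-reference: move to top
        elif len(self.stack) == self.capacity:
            evicted = self.stack.pop(0)   # bottom of stack is the victim
        self.stack.append(block)
        return evicted
```

Referencing blocks a, b, c and then a again leaves a near the top; bringing in a fourth block d evicts b, the block on the bottom of the stack.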

68 Disk Cache – Design Considerations Least Frequently Used (LFU) The block that has experienced the fewest references is replaced A counter is associated with each block The counter is incremented each time the block is accessed The block with the smallest count is selected for replacement Drawback: some blocks may be referenced many times in a short period and then not again, so the reference count is misleading
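LFU reduces to keeping a counter per block and evicting the smallest count (a sketch with invented names; ties here break by insertion order, a policy detail the slide leaves open):

```python
class LFUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.counts = {}   # block -> number of references

    def reference(self, block):
        """Touch a block; return the evicted block, if any."""
        if block in self.counts:
            self.counts[block] += 1
            return None
        evicted = None
        if len(self.counts) == self.capacity:
            # Victim is the block with the fewest references.
            evicted = min(self.counts, key=self.counts.get)
            del self.counts[evicted]
        self.counts[block] = 1
        return evicted
```

The misleading-count problem is visible here: a block referenced in one brief burst keeps a high count and survives long after it stops being useful.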

