2 Goals of I/O Software
* Provide a common, abstract view of all devices to the application programmer (open, read, write).
* Provide as much overlap as possible between the operation of I/O devices and the CPU.
3 I/O – Processor Overlap
Application programmers expect serial execution semantics:
    read(device, "%d", x);
    y = f(x);
We expect that the read will complete before the assignment is executed. To accomplish this, the OS blocks the process until the I/O operation completes.
4 Without Blocking!
    read(device, "%d", x);
    y = f(x);
The read is issued. The read has not completed, but the process continues to execute. Only later does the read complete and the value of x get updated.
[Figure: the READ request travels from the user process through the device-independent layer, the device-dependent layer, and the interrupt handler to the device controller's data, status, and command registers.]
5 In a multi-programming environment, another application could use the CPU while the first application waits for the I/O to complete.
[Figure: app1 requests an I/O operation and blocks; app2 runs on the CPU while the I/O controller services the request; when the I/O completes, app1 resumes and finishes.]
6 Performance
Thread execution time can be broken into:
* Time_compute – the time the thread spends doing computations
* Time_device – the time spent on I/O operations
* Time_overhead – the time spent determining if I/O is complete
So, Time_total = Time_compute + Time_device + Time_overhead
7 Performance: Polling
When the device driver polls:
Time_total = Time_compute + Time_device + Time_overhead
Time_overhead is the period of time between the point where the device completes the operation and the point where the polling loop determines that the operation is complete. This is generally just a few instruction times.
Note that while the device driver polls, no other process can use the CPU. Polling consumes the CPU.
9 Performance: Interrupts
When the device driver uses interrupts:
Time_total = Time_compute + Time_device + Time_overhead
Time_overhead = Time_handler + Time_ready
Time_handler is the time spent in the interrupt handler. Time_ready is the time the process waits for the CPU after its I/O has completed, while another process uses the CPU.
10 For simplicity's sake, assume processes of the following form: each process computes for a long while and then writes its results to a file. We will ignore the time taken to do a context switch.
[Figure: a process computes (Time_compute), requests an I/O operation which the I/O controller services (Time_device), then computes again.]
11 Polling Case
In the polling case, the process starts the I/O operation, and then continually loops, asking the device if it is done.
[Timeline: Proc 1 computes, then polls for the duration of its Time_device; only after Proc 1's I/O completes can Proc 2 compute and then poll during its own I/O. Time_overhead is the small gap between the device finishing and the poll that notices.]
12 Interrupt Case
In the interrupt case, the process starts the I/O operation, and then blocks. When the I/O is done, the OS will get an interrupt.
[Timeline: Proc 2 computes on the CPU while Proc 1's Time_device elapses; Time_overhead is the interrupt handler time plus any time spent waiting to be rescheduled.]
13 Which gives better system throughput?
* Polling
* Interrupts
Which gives better application performance?
* Polling
* Interrupts
If you were developing an operating system, would you choose interrupts or polling?
14 Buffering Issues
Read from the disk directly into user memory. Assume that you are using interrupts. What problems exist in this situation?
[Figure: the data path crosses from the kernel straight into a buffer in user space.]
15 Buffering Issues
Read from the disk directly into user memory, using interrupts. What problems exist in this situation?
The process cannot be completely swapped out of memory: at least the page containing the addresses into which the data is being written must remain in real memory.
16 Buffering Issues
Read from the disk into a kernel buffer. When the buffer is full, transfer it to memory in user space.
17 Buffering Issues
We can now swap the user process out while the I/O completes. What problems exist in this situation?
1. The OS has to carefully keep track of the assignment of system buffers to user processes.
2. There is a performance issue when the user process is not in memory and the OS is ready to transfer its data to the user process. Also, the device must wait while data is being transferred.
3. The swapping logic is complicated when the swapping operation uses the same disk drive for paging that the data is being read from.
18 Buffering Issues
Some of the performance issues can be addressed by double buffering: while one buffer is being transferred to the user process, the device is reading data into a second buffer.
19 Networking may involve many copies.
20 Disk Scheduling
Because disk I/O is so important, it is worth our time to investigate some of the issues involved in disk I/O. One of the biggest issues is disk performance.
21 Seek time is the time required for the read head to move to the track containing the data to be read.
22 Rotational delay, or latency, is the time required for the sector to move under the read head.
23 Performance Parameters
An I/O request passes through: wait for device, wait for channel, seek, rotational delay (latency), data transfer.
Seek time is the time required to move the disk arm to the specified track:
    Ts ≈ (number of tracks crossed × disk constant) + startup time
Rotational delay is the time required for the data on that track to come underneath the read heads; on average it is half a revolution. For a hard drive rotating at 3600 rpm, the average rotational delay will be 8.3 ms.
Transfer time:
    Tt = bytes / (rotation_speed × bytes_on_track)
24 Data Organization vs. Performance
Consider a file where the data is stored as compactly as possible; in this case the file occupies all of the sectors on 8 adjacent tracks (32 sectors × 8 tracks = 256 sectors total).
The time to read the first track will be:
    average seek time   20.0 ms
    rotational delay     8.3 ms
    read 32 sectors     16.7 ms
                        45.0 ms
Assuming that there is essentially no seek time on the remaining tracks, each successive track can be read in 8.3 + 16.7 ms = 25 ms.
Total read time = 45 ms + 7 × 25 ms = 220 ms = 0.22 seconds
25 If the data is randomly distributed across the disk, then for each sector we have:
    average seek time   20.0 ms
    rotational delay     8.3 ms
    read 1 sector        0.5 ms
                        28.8 ms
Total time = 256 sectors × 28.8 ms/sector ≈ 7.37 seconds
Random placement of data can be a problem when multiple processes are accessing the same disk.
26 In the previous example, the biggest factor on performance is?
Seek time!
To improve performance, we need to reduce the average seek time.
27 If requests are scheduled in random order, then we would expect the disk tracks to be visited in a random order.
28 First-Come, First-Served Scheduling
If there are few processes competing for the drive, we can hope for good performance. If there are a large number of processes competing for the drive, then performance approaches the random scheduling case.
29 While at track 15, assume some random set of read requests – tracks 4, 40, 11, 35, 7, and 14.
Head path (FCFS):
    15 to  4    11 steps
     4 to 40    36 steps
    40 to 11    29 steps
    11 to 35    24 steps
    35 to  7    28 steps
     7 to 14     7 steps
               135 steps
30 Shortest Seek Time First
Always select the request that requires the shortest seek time from the current position.
31 Shortest Seek Time First
While at track 15, assume the same set of read requests – tracks 4, 40, 11, 35, 7, and 14.
Head path: 15 → 14 → 11 → 7 → 4 → 35 → 40.
Problem? In a heavily loaded system, incoming requests with a shorter seek time will constantly push requests with long seek times to the end of the queue. This results in what is called "starvation".
32 The Elevator Algorithm (scan-look)
Search for the shortest seek time from the current position in one direction only. Continue in this direction until all requests in that direction have been satisfied, then go in the opposite direction.
In the scan algorithm, the head moves all the way to the first (or last) track with a request before it changes direction.
33 Scan-Look
While at track 15, assume the same set of read requests – tracks 4, 40, 11, 35, 7, and 14. The head is moving towards higher-numbered tracks.
Head path: 15 → 35 → 40, then reversing: 14 → 11 → 7 → 4.
34 Which algorithm would you choose if you were implementing an operating system?
Issues to consider when selecting a disk scheduling algorithm:
* Performance is based on the number and types of requests.
* What scheme is used to allocate unused disk blocks?
* How and where are directories and i-nodes stored?
* How does paging impact disk performance?
* How does disk caching impact performance?
35 Disk Cache
The disk cache holds a number of disk sectors in memory. When an I/O request is made for a particular sector, the disk cache is checked. If the sector is in the cache, it is read from there. Otherwise, the sector is read from disk into the cache.
36 Replacement Strategies
* Least Recently Used – replace the sector that has been in the cache the longest without being referenced.
* Least Frequently Used – replace the sector that has been referenced the least often.
37 RAID: Redundant Array of Independent Disks
* Push performance
* Add reliability
38 RAID Level 0: Striping
The disk management software presents one logical disk; consecutive strips (strip 0, strip 1, strip 2, ...) are spread round-robin across the physical drives, so a stripe is the set of strips at the same position on every drive.
39 RAID Level 1: Mirroring – High Reliability
Each strip of the logical disk is written to two physical drives, so every strip exists as an identical copy on a second drive.
40 RAID Level 3: Parity – High Throughput
Data strips are spread across all but one of the physical drives; for each stripe, the remaining drive holds a parity strip computed from the data strips in that stripe.
42 Suppose that 3 processes, p1, p2, and p3, are attempting to concurrently use a machine with interrupt-driven I/O. Assuming that no two processes can be using the CPU or the physical device at the same time, what is the minimum amount of time required to execute the three processes (ignore context switches)?
    Process   Time_compute   Time_device
      1            10             50
      2            30             10
      3            15             35
43 Answer: 105 time units.
    p1 computes from 0–10, then uses the device from 10–60.
    p2 computes from 10–40, then uses the device from 60–70.
    p3 computes from 40–55, then uses the device from 70–105.
44 Consider the case where the device controller is double buffering I/O: while the process is reading a character from one buffer, the device is writing to the second.
What is the effect on the running time of the process if the process is I/O bound and requests characters faster than the device can provide them?
The process reads from buffer A. It tries to read from buffer B, but the device is still filling it. The process blocks until the data has been stored in buffer B, then wakes up, reads the data, and tries to read buffer A again. Double buffering has not helped performance.
45 Consider the same double-buffering device controller.
What is the effect on the running time of the process if the process is compute bound and requests characters much slower than the device can provide them?
The process reads from buffer A and then computes for a long time. Meanwhile, buffer B is filled. When the process asks for the data, it is already there. The process does not have to wait, and performance improves.
46 Suppose that the read/write head is at track 97, moving toward the highest-numbered track on the disk, track 199. The disk request queue contains read/write requests for blocks on tracks 84, 155, 103, 96, and 197, respectively.
How many tracks must the head step across using a FCFS strategy?
47 How many tracks must the head step across using a FCFS strategy?
    97 to  84     13 steps
    84 to 155     71 steps
   155 to 103     52 steps
   103 to  96      7 steps
    96 to 197    101 steps
                 244 steps
48 Same starting conditions: the head is at track 97, moving toward track 199, with requests queued for tracks 84, 155, 103, 96, and 197.
How many tracks must the head step across using an elevator strategy?
49 How many tracks must the head step across using an elevator strategy?
    97 to 103      6 steps
   103 to 155     52 steps
   155 to 197     42 steps
   197 to 199      2 steps
   199 to  96    103 steps
    96 to  84     12 steps
                 217 steps
50 In our class discussion on directories, it was suggested that directory entries are stored as a linear list. What is the big disadvantage of storing directory entries this way, and how could you address this problem?
Consider what happens when you look up a file: the directory must be searched in a linear way. Storing the entries in a hash table or a sorted structure such as a B-tree would make lookups faster.
51 Which file allocation scheme discussed in class gives the best performance? What are some of the concerns with this approach?
Contiguous allocation gives the best performance. Two big problems are:
* Finding space for a new file (it must all fit in contiguous blocks).
* Allocating space when we don't know how big the file will be, or handling files that grow over time.
52 What is the difference between internal and external fragmentation?
Internal fragmentation occurs when only a portion of a file block is used by a file.
External fragmentation occurs when the free space on a disk is broken into pieces too small to hold a file, even though the total free space might be sufficient.
53 Linked allocation of disk blocks solves many of the problems of contiguous allocation, but it does not work very well for random-access files. Why not?
To access a random block, you must walk through the entire chain of links up to the block you need.
54 Linked allocation of disk blocks has a reliability problem. What is it?
If a link breaks for any reason, the disk blocks after the broken link are inaccessible.