Chapter 13-2: I/O Systems – I/O Hardware (continued)
Silberschatz, Galvin and Gagne ©2005, Operating System Concepts


Slide 1: Chapter 13-2 I/O Systems

Slide 2: Chapter 13-2: I/O Systems – Outline
- I/O Hardware (continued)
- Application I/O Interface
- Chapter 13-3: Kernel I/O Subsystem
- Transforming I/O Requests to Hardware Operations
- Streams
- Performance

Slide 3: Interrupts – How They Work
The CPU hardware simply has a wire (the interrupt-request line) that it senses after executing every instruction. When the CPU detects a signal on the line (set by the device controller), the CPU performs a state save and jumps to an interrupt-handler routine at a fixed memory location down in kernel I/O subsystem space.
Please note: this state save does not necessarily mean a context switch! Current processing is suspended temporarily (a context switch can happen in general – but we will discuss this more ahead…).
The interrupt handler determines the cause of the interrupt, performs the processing, restores the previous CPU state, and executes a return from the interrupt to return the CPU to the execution state prior to the interrupt.
Terminology: the device controller raises an interrupt; the CPU catches the interrupt and dispatches it to the interrupt handler; the handler clears the interrupt after servicing the device. See the process on the next slide.
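The raise/catch/dispatch/clear cycle above can be sketched as a toy simulation in plain Python (all class and method names here are invented for illustration; real hardware does the sensing in silicon, not in code):

```python
# Toy model of the interrupt cycle: the device controller raises an
# interrupt by setting the request line; the CPU senses the line after
# every instruction, saves state, dispatches to the handler, and the
# handler clears the interrupt before normal execution resumes.

class ToyCPU:
    def __init__(self):
        self.interrupt_line = False   # the interrupt-request line
        self.saved_state = None
        self.log = []

    def raise_interrupt(self):        # done by the device controller
        self.interrupt_line = True

    def execute(self, instruction):
        self.log.append(f"exec {instruction}")
        if self.interrupt_line:       # sensed after every instruction
            self.saved_state = instruction     # minimal "state save"
            self.handler()                     # jump to fixed handler
            self.log.append(f"resume after {self.saved_state}")

    def handler(self):
        self.log.append("handler: service device")
        self.interrupt_line = False   # handler clears the interrupt

cpu = ToyCPU()
cpu.execute("add")
cpu.raise_interrupt()
cpu.execute("sub")   # interrupt is caught right after this instruction
cpu.execute("mul")   # no pending interrupt; runs normally
```

Note that the interrupted instruction stream resumes exactly where it left off, which is the point of the state save: no context switch is implied.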

Slide 4: Interrupt-Driven I/O Cycle
This figure is very high level, but: saving state, etc., are part of this process even though they are not shown in step 4; restoring state and setting the CPU back up to resume are implicit in steps 5 and 6.

Slide 5: Interrupt Issues
So the CPU is not polling and potentially wasting time; the CPU instead responds to an asynchronous event. Unfortunately, interrupt handling raises many additional problems and issues that need to be resolved. We will consider many of these.
Here's one issue: without actual polling, how do we determine the appropriate interrupt handler to branch to, without knowing which device raised the interrupt?
Too: all interrupts are not created equal! Some must be serviced immediately – we cannot go on. Others may be deferred and are clearly of lesser importance. How are these determinations made?
We refer to interrupts that we can 'ignore' temporarily as maskable. The others are referred to as non-maskable – they must be serviced now! Example: we are constantly executing critical kernel routines (semaphores, etc. – critical sections of code) that simply cannot be interrupted. We simply need multiple levels of interrupts.

Slide 6: Maskable vs Non-Maskable Interrupts Using Interrupt-Controller Hardware
In hardware, CPUs typically have two interrupt-request lines: one is the non-maskable interrupt; the second is maskable. Non-maskable interrupts must be handled immediately. Maskable interrupts may be turned off by the CPU before it starts to execute a critical section of code that cannot be interrupted.
Device controllers "typically" use maskable interrupts, which means they can be deferred or queued. (Other events that cause interrupts are discussed ahead, many of which are NOT maskable!)
There are many kinds of events that may cause interrupts, and many of these can be grouped into categories. Categorization helps determine what 'kind' of interrupt handler should be transferred to and executed. Thus, in practice, the interrupting source (such as a device) presents a number – an address – which selects a specific interrupt-handling routine from an interrupt vector. This vector contains the memory addresses of specific interrupt handlers.
A given vector entry points to the head of a list containing pointers to interrupt handlers designed to service faults requiring similar service – simply implementing similar interrupt handlers via a linked list. This list, very pared down from a comprehensive list of handlers, can then be searched rather quickly to determine the specific handler to branch to. This is called interrupt chaining. While this does represent some processing, the performance is quite good and is a major improvement over searching a very large list of potential interrupt handlers.
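A minimal sketch of a vector with chained handlers, in Python (the vector numbers and handler names here are invented for the example; real entries hold memory addresses, not function objects):

```python
# Toy interrupt vector: each vector entry points to a short chain (list)
# of handlers for related interrupt sources.  Dispatch walks the chain
# until a handler claims the interrupt -- "interrupt chaining".

HANDLED, NOT_MINE = True, False

def keyboard_handler(source):
    return HANDLED if source == "keyboard" else NOT_MINE

def mouse_handler(source):
    return HANDLED if source == "mouse" else NOT_MINE

def disk_handler(source):
    return HANDLED if source == "disk" else NOT_MINE

# Vector number -> chain of candidate handlers (numbers invented here).
interrupt_vector = {
    33: [keyboard_handler, mouse_handler],   # shared "input device" vector
    46: [disk_handler],
}

def dispatch(vector_number, source):
    """Walk the chain for this vector; return the name of the handler
    that claimed the interrupt, or 'spurious' if none did."""
    for handler in interrupt_vector.get(vector_number, []):
        if handler(source):
            return handler.__name__
    return "spurious"

claimed = dispatch(33, "mouse")   # second handler in the chain claims it
```

The pared-down chain is why this is fast: only the two or three handlers plausibly responsible for that vector are asked, not every handler in the system.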

Slide 7: Intel Pentium Processor Event-Vector Table
Note the interrupt codes for maskable interrupts – potentially there is a large number of them. Note again: these are 'device-generated' interrupts!

Slide 8: Levels of Interrupts
So we can see that we have definite interrupt priority levels. This interrupt mechanism (the interrupt vector table) is essential for the smooth performance of the computing system and allows the CPU to defer handling lower-priority interrupts without masking out all interrupts. Some can simply be dispatched to 'later.'
How are these maskable interrupt priorities determined? At boot time, the OS probes the hardware to see what devices are connected to the buses. Then the appropriate interrupt handlers are installed as part of the system-generation process. The addresses of these handlers, deep down in the kernel I/O subsystem, are linked to by the interrupt vector lists.
During I/O operations, device controllers raise interrupts when they are ready for service, whatever it might be: output completed, input data available, etc. CPUs respond, etc. (more ahead).

Slide 9: Intel Pentium Processor Event-Vector Table
Other problems – these generally occur in user mode. Note how many of them result from application program execution. These are not maskable.

Slide 10: Other Interrupts: Very Important
Exceptions: our interrupt mechanism can be used to handle a variety of other events. If we attempt to
- divide by zero,
- access protected areas, or
- execute invalid operations, etc.,
we are creating an exception from user mode (see previous slide). These types of exceptions all cause interrupts that must be handled now – they are not maskable! These also (usually) cause termination of the process and hence a context switch.
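As a rough analogy in plain Python (the function names are invented; the language runtime stands in for the kernel here), a divide-by-zero is synchronous with the offending instruction and, absent handling, terminates the offender:

```python
# Analogy: a divide-by-zero is a synchronous exception -- it fires at the
# exact instruction that caused it, cannot be deferred, and the "kernel"
# (here, the caller's except clause) typically terminates the process.

def faulty_program():
    return 1 / 0          # the faulting "instruction"

def run_with_exception_handling(program):
    try:
        return ("completed", program())
    except ZeroDivisionError:
        # the fault handler catches it and terminates the offender
        return ("terminated", None)

outcome = run_with_exception_handling(faulty_program)
```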

Slide 11: Another Example of Classes of Interrupts - 2
Another class of exceptions: consider using this interrupt mechanism to service page faults in a virtual memory system. We have these all the time in virtual systems! Here, the current process is suspended while the page-fault handler in the kernel resolves the issue (a full context switch!). In this case, the interrupt
- suspends the current process,
- jumps to the page-fault handler routine down in the kernel,
- saves the state of the process,
- moves the process to a wait queue,
- performs page-cache management,
- schedules an I/O operation to fetch the page,
- schedules another process to resume execution, and then
- returns from the interrupt.
Here we (typically) have full context switching – the CPU does not (normally) resume processing of the interrupted routine while the interrupt handler deals with the problem; the CPU is switched to another waiting process!
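The sequence above can be sketched as a toy Python simulation (page numbers, frame names, and the event strings are invented; real handlers manipulate MMU structures, not dictionaries):

```python
# Toy page-fault sequence: an access to an unmapped page suspends the
# "process", the fault handler schedules I/O to fetch the page, and the
# original access is then retried and succeeds.

page_table = {0: "frame-A"}                       # resident pages only
backing_store = {0: "frame-A", 1: "frame-B", 2: "frame-C"}
events = []

def page_fault_handler(page):
    events.append(f"suspend process; fault on page {page}")
    events.append("schedule I/O to fetch page")   # process sits on a wait queue
    page_table[page] = backing_store[page]        # page brought into memory
    events.append("process back on ready queue")

def access(page):
    if page not in page_table:                    # the MMU raises a page fault
        page_fault_handler(page)
    events.append(f"access page {page} -> {page_table[page]}")
    return page_table[page]

frame = access(2)   # faults, gets resolved, then completes
```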

Slide 12: Other Interrupts: Very Important - 3
Software interrupts – traps. These are software generated, but 'interrupt-like.' 'Interrupts' are also used in accommodating many system calls. We usually use library calls to issue system calls, which result in specific requests for service, such as a read or write operation. These library routines check the arguments provided by the application (such as reading from a specific file, by name), build a data structure to convey this info to the kernel (all the parameters associated with the read, to support the requested activity), and execute a special software trap instruction.
Note how this is handled differently from some other service requests: this approach identifies the specific kernel service requested. The system-call routines are found down in the kernel I/O subsystem, 'below' the interrupt handlers (so to speak). They
- save the state of the user code,
- cause a switch to supervisor mode (so the CPU can execute special privileged instructions – all system calls involve privileged instructions), and
- dispatch to the kernel routine that implements the requested service.
Please note that these usually result in a context switch.
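The steps above can be sketched as a toy Python model (the syscall numbers, the `trap` function, and the fake file table are all invented for the sketch; a real trap is a privileged hardware instruction, not a function call):

```python
# Toy system-call path: a user-side library wrapper checks arguments,
# builds a request block, and "traps" with a syscall number; the
# kernel-side dispatcher uses the number to pick the service routine.

SYS_READ = 0                          # invented syscall numbers
SYS_WRITE = 1

fake_files = {"motd": "hello"}        # stands in for the kernel's file table

def sys_read(request):                # kernel-side service routine
    return fake_files[request["filename"]][: request["nbytes"]]

syscall_table = {SYS_READ: sys_read}  # the kernel's dispatch table

def trap(number, request):
    """Stand-in for the trap instruction: 'switch to supervisor mode'
    and dispatch to the routine that implements the requested service."""
    return syscall_table[number](request)

def read(filename, nbytes):
    """User-side library wrapper: validate args, build the block, trap."""
    if filename not in fake_files:
        raise FileNotFoundError(filename)
    request = {"filename": filename, "nbytes": nbytes}  # the data structure
    return trap(SYS_READ, request)

data = read("motd", 4)
```

The key design point the sketch preserves: the application never names a kernel routine directly; the small integer plus the argument block is the entire interface across the trap.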

Slide 13: A Bit More on Traps (3 – continued)
As it turns out, traps are usually given a relatively low service priority compared to that assigned to device interrupts – and clearly for good reasons. Traps such as those described require less urgency: they typically involve servicing a program (an application, let's say, though perhaps a kernel module). They don't have the urgency that an interrupt from a device controller might have. We constantly try to keep the I/O subsystem running to its fullest; we cannot afford to lose data (before, perhaps, a FIFO queue associated with a device overflows, resulting in data loss). Traps (software interrupts) can usually be deferred, if necessary, often with no major degradation of service or performance.

Slide 14: Still More … Speeding Up I/O
Interrupts may also be used to manage the flow of control within kernel modules. Say we need to read a disk. One step is to copy data from kernel space to the user buffer. Copying is time consuming but not urgent – it should not block other high-priority interrupt handling. Simply, it is not "as urgent" as servicing a device-generated interrupt.
A pair of interrupt handlers: to speed things up for input/output in general, and to speed up the effective execution of the interrupt routines, we may start the next pending I/O for a disk drive before 'this' one is totally completed. To do this, we use a pair of interrupt handlers in the kernel code that completes a disk read. The high-priority handler records the I/O status, clears the device interrupt (which means the next I/O can start!), starts the next pending I/O, and then raises a low-priority interrupt to complete the remaining work. In this way, the next disk I/O can take place and is not delayed. Then, when the CPU 'gets a chance,' the low-priority interrupt is dispatched and completes the user-level I/O by copying data from kernel buffers to the application's address space. The interrupt handler can then call the scheduler to place the application back into the ready queue (a context switch).
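The split-handler idea can be sketched in Python (names and log strings invented; in Linux, for instance, this shows up as the top-half/bottom-half split, with softirqs or tasklets as the deferred part):

```python
# Toy split handler: the high-priority (first-level) handler does only
# the urgent work and queues the slow copy as deferred low-priority
# work, so the next disk I/O can start immediately.

from collections import deque

deferred_work = deque()   # copy jobs queued by the first-level handler
app_buffer = []           # stands in for the application's address space
log = []

def high_priority_handler(kernel_buffer):
    log.append("record I/O status; clear device interrupt")
    log.append("start next pending disk I/O")
    deferred_work.append(kernel_buffer)   # raise the low-priority "interrupt"

def low_priority_handler():
    while deferred_work:                  # runs when the CPU gets a chance
        app_buffer.extend(deferred_work.popleft())  # copy kernel -> app space
    log.append("copy done; put application back on ready queue")

high_priority_handler([10, 20, 30])   # a disk read completes
low_priority_handler()                # deferred copy runs later
```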

Slide 15: Summary
In summary, interrupts are used throughout modern operating systems to handle asynchronous events and to trap to supervisor-mode routines (system calls) in the kernel. The implementation of an interrupt priority scheme is clearly a must: some asynchronous events need servicing now; others are of lesser importance. To do this, modern computers use a system of interrupt priorities. Device controllers, hardware faults, and system calls all raise interrupts to trigger kernel routines. Because interrupts are used so heavily for time-sensitive processing, efficient interrupt handling is required for good system performance.
I've spent a good deal of time on this – most of it directly from the book, but not all. The variations on exactly how interrupts (hardware), traps (software interrupts), and the various exceptions that can arise are handled are very involved. It is important to understand these things – what causes a context switch and what does not; which events are maskable (and why) and which are not; and more.
Let's now look at a special processor that really helps – but with a cost, of course.

Slide 16: Direct Memory Access (DMA)
For large data transfers, such as those coming from a disk, it is far too inefficient to watch status bits and use controller registers for a one-byte-at-a-time data transfer – whether we poll (programmed I/O / busy waiting) or wait for the CPU to be interrupted (interrupt-based I/O). To improve overall performance, especially for larger data transfers, we can use a special processor called a Direct Memory Access (DMA) controller. (DMA controllers are standard components in PCs.)
To get this started, the host writes a DMA command block into memory. A DMA command block contains a pointer to the source of the transfer (where the data is coming from), a pointer to the destination of the transfer (where it is going), and a byte count to be transferred (how much). The CPU then passes the address of this command block to the DMA controller. In practice, the DMA controller operates the memory bus directly, placing addresses on the bus to take care of the transfer with no help from the CPU.
The beauty of direct memory access is that the entire transfer takes place with no CPU assistance. Only when the transfer is complete does the CPU get involved again.
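A minimal sketch of the command block and the controller's role, in Python (the field names and the `bytearray` memory are invented stand-ins; a real block holds bus addresses):

```python
# Toy DMA command block and transfer: the CPU only fills in the block
# and hands its address to the controller; the controller itself moves
# the bytes, with no per-byte CPU involvement.

from dataclasses import dataclass

memory = bytearray(16)             # stand-in for primary memory
disk = bytes(range(100, 116))      # stand-in for the source device

@dataclass
class DMACommandBlock:
    source: int        # pointer to the source (where data comes from)
    destination: int   # pointer to the destination (where it goes)
    count: int         # byte count to be transferred (how much)

def dma_controller_run(block):
    """The controller drives the memory bus directly for each byte."""
    for i in range(block.count):
        memory[block.destination + i] = disk[block.source + i]
    return "interrupt CPU: transfer complete"

block = DMACommandBlock(source=0, destination=4, count=8)  # CPU writes the block
status = dma_controller_run(block)                         # CPU hands it off
```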

Slide 17: Handshaking between DMA Controller and Device Controller
Handshaking uses a pair of wires: a DMA-request line and a DMA-acknowledge line. The device controller places a signal on the DMA-request line when a word of data is available for transfer from a device (via the device controller's registers) to, say, primary memory. The signal causes the DMA controller to seize the memory bus, place the desired address on the memory-address wires, and place a signal on the DMA-acknowledge wire. The device controller receives this DMA acknowledge, transfers the word of data to memory, and removes the DMA-request signal. (See the figure two slides down.) When the entire transfer is completed, the DMA controller interrupts the CPU.
Cycle stealing: when the DMA controller seizes the memory bus to transfer data, the CPU cannot access main memory at that instant, though it can still access data items in its primary and secondary caches. Note: I/O activity is given priority! This phenomenon is called cycle stealing. Overall, this offloading of data transfer to a DMA controller significantly improves total system performance.
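The word-by-word request/acknowledge handshake can be sketched as a loop in Python (the variable names and the stolen-cycle counter are invented for illustration; the real signals are wire levels, not booleans):

```python
# Toy word-by-word handshake: the device controller asserts DMA-request,
# the DMA controller seizes the bus (the CPU stalls for that cycle --
# "cycle stealing"), acknowledges, and the word moves to memory.

words = [7, 8, 9]        # data waiting in the device controller's registers
memory = []              # stand-in for primary memory
stolen_cycles = 0        # cycles during which the CPU cannot use the bus

for word in words:
    dma_request = True              # device controller: a word is ready
    if dma_request:
        stolen_cycles += 1          # DMA controller seizes the memory bus
        dma_acknowledge = True      # address on the bus, ack signalled
        memory.append(word)         # device controller transfers the word
        dma_request = False         # and removes the request signal

transfer_complete = (len(memory) == len(words))  # then interrupt the CPU
```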

Slide 18: Direct Memory Access – Main Notions
- Used to avoid programmed I/O for large data movement
- Requires a hardware DMA controller
- Bypasses the CPU to transfer data directly between an I/O device and memory
By definition, DMA is a memory-to-device communication method that bypasses the CPU, so it is not constrained by the CPU-to-device communication methods discussed earlier.

Slide 19: Six-Step Process to Perform a DMA Transfer
Data coming into (being read into) memory. Here, again, the figure is pretty general: seizing the memory bus is not articulated, and lots of little activities are missing – but okay.

Slide 20: I/O Hardware Summary
I/O is complex, especially when we consider the operations at the hardware level. Main concepts:
- A bus
- A controller (and controller registers)
- An I/O port (and its registers)
- The handshaking relationship between the host and a device controller
- The execution of this handshaking in a polling loop or via interrupts
- The offloading of this work to a DMA controller for large transfers
You need to understand the handshaking between a device controller and a host. But all devices have their own variations, such as control-bit definitions, protocols for interacting with the host, and more. As such, we would like to have some kind of standard I/O interface!

Slide 21: End of Chapter 13-2

