COSC121: Computer Systems. Exceptions and Interrupts

1 COSC121: Computer Systems. Exceptions and Interrupts
Jeremy Bolton, PhD Assistant Teaching Professor Constructed using materials: - Patt and Patel Introduction to Computing Systems (2nd) - Patterson and Hennessy Computer Organization and Design (4th) **A special thanks to Eric Roberts and Mary Jane Irwin

2 Notes Read PH.4.9 and PH.6

3 Outline Interrupts Exceptions

4 This week … our journey takes us …
[Figure: where COSC 121 sits in the layered view of a computer system] Layers, top to bottom: Application (Browser); Operating System (Win, Linux) - see COSC 255: Operating Systems; Compiler; Assembler; Drivers (software); Instruction Set Architecture; Processor, Memory, I/O system (hardware); Datapath & Control; Digital Design; Circuit Design; transistors - see COSC 120: Computer Hardware.

5 Review: Major Components of a Computer
[Figure: the classic components - the processor (control and datapath), memory, and devices (input and output)] Important metrics for an I/O system: performance, expandability, dependability, cost, size, weight, and security.

6 A Typical I/O System
[Figure: the processor and cache sit on the Memory-I/O Bus together with main memory and several I/O controllers (graphics, disk, network); interrupts flow from the controllers back to the processor] The I/O devices are connected to the computer via I/O controllers that sit on the memory-I/O bus. Notice that the I/O system is inherently more irregular than the processor and the memory system because of all the different devices (disk, graphics, network) that can attach to it. So when one designs an I/O system, performance is still an important consideration, although performance is now measured in terms of access latency or throughput. But besides raw performance, one also has to think about expandability, diversity of devices, and resilience in the face of failure. For example, one has to ask questions such as: (a) Expandability: is there an easy way to connect another disk to the system? (b) If this I/O controller (network) fails, is it going to affect the rest of the network? I/O systems place much greater emphasis on dependability and cost, while processors and memory focus on performance and cost.

7 Input and Output Devices
I/O devices are incredibly diverse with respect to: Behavior - input, output, or storage; Partner - human or machine; Data rate - the peak rate at which data can be transferred between the I/O device and the main memory or processor.
Device / Behavior / Partner / Data rate (Mb/s): Keyboard / input / human / 0.0001; Mouse / input / human / 0.0038; Laser printer / output / human / 3.2000; Magnetic disk / storage / machine; Graphics display / output / human; Network/LAN / input or output / machine. Data rates span roughly eight orders of magnitude.
Here are some examples of the various I/O devices you are probably familiar with. Notice that most I/O devices that have a human as their partner usually have relatively low peak data rates, because humans are in general slow relative to the computer system. The exceptions are the laser printer and the graphics display. The laser printer requires a high data rate because it takes a lot of bits to describe the high-resolution image you would like to print. The graphics display requires a high data rate because all the color objects we see in the real world and take for granted are very hard to replicate on a graphics display. Data rates span eight orders of magnitude and are always quoted in base 10, so that 10 Mb/sec = 10,000,000 bits/sec.

8 I/O Performance Measures
I/O bandwidth (throughput) – amount of information that can be input (output) and communicated across an interconnect (e.g., a bus) to the processor/memory (I/O device) per unit time How much data can we move through the system in a certain time? How many I/O operations can we do per unit time? I/O response time (latency) – the total elapsed time to accomplish an input or output operation An especially important performance metric in real-time systems Many applications require both high throughput and short response times Desktop and embedded systems are more focused on response time and diversity of I/O devices Server systems are more focused on throughput and expandability of I/O devices.

9 I/O System Interconnect Issues
A bus is a shared communication link (a single set of wires used to connect multiple subsystems) that needs to support a range of devices with widely varying latencies and data transfer rates Advantages Versatile – new devices can be added easily and can be moved between computer systems that use the same bus standard Low cost – a single set of wires is shared in multiple ways Disadvantages Creates a communication bottleneck – bus bandwidth limits the maximum I/O throughput The maximum bus speed is largely limited by The length of the bus The number of devices on the bus The two major advantages of the bus organization are versatility and low cost. By versatility, we mean new devices can easily be added. Furthermore, if a device is designed according to an industry bus standard, it can be moved between computer systems that use the same bus standard. The bus organization is a low-cost solution because a single set of wires is shared in multiple ways. The major disadvantage of the bus organization is that it creates a communication bottleneck. When I/O must pass through a single bus, the bandwidth of that bus can limit the maximum I/O throughput. The maximum bus speed is also largely limited by: (a) the length of the bus, (b) the number of I/O devices on the bus, and (c) the need to support a wide range of devices with widely varying latencies and data transfer rates.

10 Types of Buses Processor-memory bus (“Front Side Bus”, proprietary)
Short and high speed Matched to the memory system to maximize the memory- processor bandwidth Optimized for cache block transfers I/O bus (industry standard, e.g., SCSI, USB, Firewire) Usually is lengthy and slower Needs to accommodate a wide range of I/O devices Use either the processor-memory bus or a backplane bus to connect to memory Backplane bus (industry standard, e.g., PCIexpress) Used as an intermediary bus connecting I/O busses to the processor-memory bus Buses are traditionally classified as one of 3 types: processor-memory buses, I/O buses, or backplane buses. The processor-memory bus is usually design-specific while the I/O and backplane buses are often standard buses. In general, processor-memory buses are short and high speed. They try to match the memory system in order to maximize the memory-to-processor bandwidth and are connected directly to the processor. An I/O bus usually is lengthy and slower because it has to match a wide range of I/O devices, and it usually connects to the processor-memory bus or backplane bus. The backplane bus receives its name because it was often built into the backplane of the computer--it is an interconnection structure within the chassis. It is designed to allow processors, memory, and I/O devices to coexist on a single bus, so it has the cost advantage of having only one single bus for all components.

12 I/O Transactions An I/O transaction is a sequence of operations over the interconnect that includes a request and may include a response, either of which may carry data. A transaction is initiated by a single request and may take many individual bus operations. An I/O transaction typically includes two parts: sending the address and receiving or sending the data. Bus transactions are defined by what they do to memory: a read transaction reads data from memory (to either the processor or an I/O device), an output operation when the data goes to an I/O device; a write transaction writes data to the memory (from either the processor or an I/O device), an input operation when the data comes from an I/O device.

12 Synchronous and Asynchronous Buses
Synchronous bus (e.g., processor-memory buses) Includes a clock in the control lines and has a fixed protocol for communication that is relative to the clock Advantage: involves very little logic and can run very fast Disadvantages: Every device communicating on the bus must use same clock rate To avoid clock skew, they cannot be long if they are fast Asynchronous bus (e.g., I/O buses) It is not clocked, so requires a handshaking protocol and additional control lines (ReadReq, Ack, DataRdy) Advantages: Can accommodate a wide range of devices and device speeds Can be lengthened without worrying about clock skew or synchronization problems Disadvantage: slow(er) There are substantial differences between the design requirements for the I/O buses and processor-memory buses and the backplane buses. Consequently, there are two different schemes for communication on the bus: synchronous and asynchronous. A synchronous bus includes a clock in the control lines and a fixed protocol for communication that is relative to the clock. Since the protocol is fixed and everything happens with respect to the clock, it involves very little logic and can run very fast. Most processor-memory buses fall into this category. Synchronous buses have two major disadvantages: (1) every device on the bus must run at the same clock rate, and (2) if they are fast, they must be short to avoid clock skew problems. By definition, an asynchronous bus is not clocked, so it can accommodate a wide range of devices at different clock rates and can be lengthened without worrying about clock skew. The drawback is that it can be slow and more complex because a handshaking protocol is needed to coordinate the transmission of data between the sender and receiver.

13 Asynchronous Bus Handshaking Protocol
Output (read) data from memory to an I/O device. Control lines: ReadReq - indicates a read request for memory; the address is put on the data lines at the same time. DataRdy - indicates that the data word is now ready on the data lines; in an output transaction the memory asserts it, in an input transaction the I/O device asserts it. Ack - acknowledges the ReadReq or DataRdy signal of the other party. ReadReq and DataRdy stay asserted until the other party indicates that the control lines have been seen and the data lines have been read; that indication is made by asserting the Ack line - this is the handshaking.
The steps of the protocol:
1. The I/O device signals a request by raising ReadReq and putting the addr on the data lines.
2. Memory sees ReadReq, reads the addr from the data lines, and raises Ack.
3. The I/O device sees Ack and releases the ReadReq and data lines.
4. Memory sees ReadReq go low and drops Ack.
5. When memory has the data ready, it places it on the data lines and raises DataRdy.
6. The I/O device sees DataRdy, reads the data from the data lines, and raises Ack.
7. Memory sees Ack, releases the data lines, and drops DataRdy; the I/O device then sees DataRdy go low and drops Ack.
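To make the handshake concrete, here is a minimal C sketch (not from the slides) that walks one read transaction through the steps above; bus_t and the helper names are invented, and the sequential code stands in for what is really two independent state machines.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Shared "wires" between the I/O device and memory (hypothetical model). */
typedef struct {
    bool     read_req;   /* asserted by the I/O device to request a read   */
    bool     data_rdy;   /* asserted by memory when data is on the lines   */
    bool     ack;        /* handshake acknowledgement, used by both sides  */
    uint32_t data_lines; /* carries the address first, then the data       */
} bus_t;

static uint32_t fake_memory[16] = { [5] = 0xDEADBEEF };

/* One full asynchronous read transaction, written sequentially for clarity.
 * Each numbered comment matches a step of the handshaking protocol above. */
static uint32_t async_read(bus_t *bus, uint32_t addr) {
    /* 1. Device raises ReadReq and drives the address onto the data lines. */
    bus->data_lines = addr;
    bus->read_req   = true;

    /* 2. Memory sees ReadReq, latches the address, and raises Ack.         */
    uint32_t latched_addr = bus->data_lines;
    bus->ack = true;

    /* 3. Device sees Ack, releases ReadReq and the data lines.             */
    bus->read_req = false;

    /* 4. Memory sees ReadReq go low and drops Ack.                         */
    bus->ack = false;

    /* 5. When the data is ready, memory drives it and raises DataRdy.      */
    bus->data_lines = fake_memory[latched_addr % 16];
    bus->data_rdy   = true;

    /* 6. Device sees DataRdy, reads the data, and raises Ack.              */
    uint32_t value = bus->data_lines;
    bus->ack = true;

    /* 7. Memory sees Ack, releases the data lines, and drops DataRdy;
     *    the device then sees DataRdy go low and drops Ack.                */
    bus->data_rdy = false;
    bus->ack      = false;

    return value;
}

int main(void) {
    bus_t bus = {0};
    printf("read -> 0x%08X\n", async_read(&bus, 5)); /* prints 0xDEADBEEF */
    return 0;
}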

14 A Typical I/O System (Intel Xeon 5300)
[Figure: two Intel Xeon 5300 processors share a Front Side Bus (1333 MHz, 10.5 GB/sec) to the Memory Controller Hub (north bridge, 5000P), which connects to main memory DIMMs over FB DDR2 667 (5.3 GB/sec), to a PCIe 8x link (2 GB/sec), and via ESI (2 GB/sec) to the I/O Controller Hub (south bridge, Enterprise South Bridge 2); the south bridge fans out to CD/DVD and disk over Serial ATA (300 MB/sec), keyboard and mouse over LPC (1 MB/sec), USB ports over USB 2.0 (60 MB/sec), a PCIe 4x link (1 GB/sec), a PCI-X bus, and Parallel ATA (100 MB/sec)] The north bridge is essentially a DMA controller connecting the processor to memory, possibly a graphics card, and the south bridge. (The north bridge is starting to move on-chip, e.g., in the AMD Opteron, thus reducing chip count and the latency to memory and graphics by skipping a chip crossing.) The south bridge connects the north bridge to a large set of I/O buses. ESI is the Enclosure Services Interface, a protocol used in SCSI enclosures.

15 Example: The Pentium 4’s Buses
[Figure, from old notes (2003), kept in hide mode - but a nice picture of what used to be: the Pentium 4's Memory Controller Hub ("Northbridge") sits on the System Bus ("Front Side Bus"), 64b x 800 MHz (6.4 GB/s), 533 MHz, or 400 MHz; it connects to the graphics output, to DDR SDRAM main memory, to Gbit ethernet, and over the Hub Bus (8b x 266 MHz) to the I/O Controller Hub ("Southbridge"), which provides 2 serial ATAs (150 MB/s), 2 parallel ATAs (100 MB/s), 8 USBs, and PCI (33 MHz)] The northbridge chip is basically a DMA controller (to the memory, the graphics output device (monitor), and the southbridge chip). The southbridge connects the northbridge to a cornucopia of I/O buses.

16 Interfacing I/O Devices to the Processor, Memory, and OS
The operating system acts as the interface between the I/O hardware and the program requesting I/O, since multiple programs using the processor share the I/O system, I/O systems usually use interrupts which are handled by the OS, and low-level control of an I/O device is complex and detailed. Thus the OS must handle interrupts generated by I/O devices and supply routines for low-level I/O device operations, provide equitable access to the shared I/O resources, protect those I/O devices/activities to which a user program doesn't have access, and schedule I/O requests to enhance system throughput. The OS must be able to give commands to the I/O devices, the I/O device must be able to notify the OS about its status, and data must be able to be transferred between the memory and the I/O device.

17 Communication of I/O Devices and Processor
How the processor directs the I/O devices: Special I/O instructions - must specify both the device and the command. Memory-mapped I/O - portions of the high-order memory address space are assigned to each I/O device; reads and writes to those memory addresses are interpreted as commands to the I/O devices; loads/stores to the I/O address space can only be done by the OS. How I/O devices communicate with the processor: Polling - the processor periodically checks the status of an I/O device (through the OS) to determine its need for service; the processor is totally in control, but does all the work, and can waste a lot of processor time due to speed differences. Interrupt-driven I/O - the I/O device issues an interrupt to indicate that it needs attention. If special I/O instructions are used, the OS will use the I/O instruction to specify both the device number and the command word. The processor then executes the special I/O instruction by passing the device number to the I/O device (in most cases) via a set of control lines on the bus and at the same time sends the command to the I/O device using the bus's data lines. Special I/O instructions are not used that widely. Most processors use memory-mapped I/O, where portions of the address space are assigned to the I/O device. Reads and writes to this special address space are interpreted by the memory controller as I/O commands, and the memory controller will do the right thing to communicate with the I/O device. If a polling check takes 100 cycles and the processor has a 500 MHz clock, a mouse polled 30 times per second takes only 0.0006% of the processor, a floppy running at 50 KB/s takes only 0.5%, but a disk running at 2 MB/sec takes 10%.
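A minimal C sketch of the memory-mapped, polling style of access described above. The device register layout and the STATUS_READY bit are invented for illustration, and a simulated device stands in for real hardware so the example actually runs.

#include <stdint.h>
#include <stdio.h>

/* A simulated memory-mapped device: in a real system, `status` and `data`
 * would be hardware registers at fixed addresses assigned to the device,
 * accessed through a volatile pointer. The layout here is invented.       */
typedef struct {
    uint32_t status;            /* bit 0 = data ready                      */
    uint32_t data;              /* the value the device wants to deliver   */
} device_regs_t;

#define STATUS_READY 0x1u

/* Stand-in for the hardware: after some status reads, data shows up.      */
static void simulate_device(device_regs_t *dev, int *countdown) {
    if (--(*countdown) == 0) {
        dev->data   = 'A';
        dev->status |= STATUS_READY;
    }
}

/* Polling: the processor does all the work, repeatedly checking the
 * status register until the device signals that data is ready.            */
static uint32_t poll_read(device_regs_t *dev, int *countdown) {
    while ((dev->status & STATUS_READY) == 0) {
        simulate_device(dev, countdown);   /* in hardware, time just passes */
    }
    dev->status &= ~STATUS_READY;          /* consume the ready bit         */
    return dev->data;
}

int main(void) {
    device_regs_t dev = {0};
    int countdown = 1000;                  /* cycles "wasted" while polling */
    printf("got '%c' after polling\n", (char)poll_read(&dev, &countdown));
    return 0;
}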

18 Interrupt-Driven I/O An I/O interrupt is asynchronous with respect to instruction execution: it is not associated with any instruction, so it doesn't prevent any instruction from completing, and you can pick your own convenient point to handle the interrupt. With I/O interrupts we need a way to identify the device generating the interrupt, and interrupts can have different urgencies (so we need a way to prioritize them). Advantages of using interrupts: relieves the processor from having to continuously poll for an I/O event; user program progress is only suspended during the actual transfer of I/O data to/from user memory space. Disadvantage: special hardware is needed to indicate the I/O device causing the interrupt, to save the necessary information prior to servicing the interrupt, and to resume normal processing after servicing the interrupt. An I/O interrupt is asynchronous with respect to the instruction execution, while exceptions such as overflow or page faults are always associated with a certain instruction. Also, for an exception, the only information that needs to be conveyed is the fact that an exceptional condition has occurred, but for an interrupt there is more information to be conveyed. Let me elaborate on each of these two points. Unlike an exception, which is always associated with an instruction, an interrupt is not associated with any instruction. The user program is just doing its thing when an I/O interrupt occurs. So an I/O interrupt does not prevent any instruction from completing, and you can pick your own convenient point to take the interrupt. As far as conveying more information is concerned, the interrupt detection hardware must somehow let the OS know who is causing the interrupt. Furthermore, interrupt requests need to be prioritized.

19 Interrupt Priority Levels
Priority levels can be used to tell the OS the order in which the interrupts should be serviced.
MIPS Status register (Interrupt mask in bits 15-8, User mode in bit 4, Exception level in bit 1, Interrupt enable in bit 0): determines who can interrupt the processor (if Interrupt enable is 0, nothing can interrupt).
MIPS Cause register (Branch delay in bit 31, Pending interrupts in bits 15-8, Exception codes in bits 6-2): to enable a Pending interrupt, the corresponding bit in the Interrupt mask must be 1. Once an interrupt occurs, the OS can find the reason in the Exception codes field.
STATUS REGISTER details: Interrupt mask = 1 bit for each of 6 hardware and 2 software exception levels (1 enables exceptions at that level, 0 disables them). User mode = 0 if running in kernel mode when the exception occurred, 1 if running in user mode (fixed at 1 in SPIM). Exception level = set to 1 (disabling further exceptions) when an exception occurs; should be reset by the exception handler when done. Interrupt enable = 1 if exceptions are enabled, 0 if disabled.
CAUSE REGISTER details: Exception codes (reasons for the interrupt): 0 (INT) = external interrupt (I/O device request); 4 (AdEL) = address error trap (load or instruction fetch); 5 (AdES) = address error trap (store); 6 (IBE) = bus error on instruction fetch trap; 7 (DBE) = bus error on data load or store trap; 8 (Sys) = syscall trap; 9 (Bp) = breakpoint trap; 10 (RI) = reserved (or undefined) instruction trap; 12 (Ov) = arithmetic overflow trap. Pending interrupt bits are set if an exception occurs but is not yet serviced, so the hardware can handle more than one exception occurring at the same time, or record exception requests while exceptions are disabled.
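A small C sketch that decodes the Status and Cause fields using the bit positions described above (a SPIM-style layout; treat the exact positions as an assumption of this sketch) and computes which pending interrupts are actually serviceable.

#include <stdint.h>
#include <stdio.h>

/* Field layout follows the Status/Cause description above.               */
#define STATUS_IE        (1u << 0)        /* interrupt enable             */
#define STATUS_EXL       (1u << 1)        /* exception level              */
#define STATUS_UM        (1u << 4)        /* user mode                    */
#define STATUS_IM_MASK   (0xFFu << 8)     /* interrupt mask, bits 15-8    */

#define CAUSE_EXC_MASK   (0x1Fu << 2)     /* exception code, bits 6-2     */
#define CAUSE_IP_MASK    (0xFFu << 8)     /* pending interrupts, 15-8     */
#define CAUSE_BD         (1u << 31)       /* branch delay                 */

static void decode(uint32_t status, uint32_t cause) {
    unsigned exc_code = (cause & CAUSE_EXC_MASK) >> 2;
    unsigned pending  = (cause & CAUSE_IP_MASK)  >> 8;
    unsigned mask     = (status & STATUS_IM_MASK) >> 8;

    printf("interrupts %s, %s mode, exception code %u\n",
           (status & STATUS_IE) ? "enabled" : "disabled",
           (status & STATUS_UM) ? "user" : "kernel",
           exc_code);
    /* only pending interrupts whose mask bit is 1 can actually be taken   */
    printf("serviceable pending interrupts: 0x%02X\n", pending & mask);
}

int main(void) {
    /* example: interrupts enabled, user mode, mask 0x05, pending 0x03,
     * exception code 0 (external interrupt)                               */
    decode(STATUS_IE | STATUS_UM | (0x05u << 8), (0x03u << 8));
    return 0;
}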

20 Interrupt Handling Steps
Logically AND the Pending interrupts field and the Interrupt mask field to see which enabled interrupts could be the culprit (make copies of both the Status and Cause registers first).
Select the highest priority of these interrupts (leftmost is highest).
Save the Interrupt mask field, then change the Interrupt mask field to disable all interrupts of equal or lower priority.
Save the processor state prior to "handling" the interrupt (see the LC-3 example later).
Set the Interrupt enable bit (to allow higher-priority interrupts).
Call the appropriate interrupt handler routine.
Before returning from the interrupt, set the Interrupt enable bit back to 0 and restore the Interrupt mask field.
Interrupt priority levels (IPLs) assigned by the OS to each process can be raised and lowered via changes to the Status register's Interrupt mask field.
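A C sketch of the selection and masking steps above: AND Pending with Mask, pick the leftmost (highest-priority) request, temporarily disable equal and lower priorities, and call the handler. The eight-level handler table and the register values are invented for illustration.

#include <stdint.h>
#include <stdio.h>

typedef void (*handler_fn)(void);

static void kbd_handler(void)  { puts("keyboard serviced"); }
static void disk_handler(void) { puts("disk serviced");     }

/* one handler per interrupt level; the highest level is the leftmost bit */
static handler_fn handlers[8] = {
    [7] = disk_handler, [1] = kbd_handler
};

static void dispatch(uint32_t pending, uint32_t mask) {
    uint32_t serviceable = pending & mask;      /* AND Pending with Mask    */
    if (serviceable == 0)
        return;                                 /* nothing enabled pending  */

    /* leftmost set bit = highest priority                                  */
    int level = 7;
    while (level >= 0 && !(serviceable & (1u << level)))
        level--;

    uint32_t saved_mask = mask;                 /* save the mask            */
    mask &= ~((1u << (level + 1)) - 1);         /* disable <= this priority */

    /* ... save processor state, set the interrupt-enable bit, then:        */
    handlers[level]();                          /* call the handler routine */

    mask = saved_mask;  /* restore the mask on return (local copy here)     */
    (void)mask;
}

int main(void) {
    dispatch(0x82u /* disk + keyboard pending */, 0xFFu /* all enabled */);
    return 0;
}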

21 Direct Memory Access (DMA)
For high-bandwidth devices (like disks) interrupt-driven I/O would consume a lot of processor cycles With DMA, the DMA controller has the ability to transfer large blocks of data directly to/from the memory without involving the processor The processor initiates the DMA transfer by supplying the I/O device address, the operation to be performed, the memory address destination/source, the number of bytes to transfer The DMA controller manages the entire transfer (possibly thousands of bytes in length), arbitrating for the bus When the DMA transfer is complete, the DMA controller interrupts the processor to let it know that the transfer is complete There may be multiple DMA devices in one system Processor and DMA controllers contend for bus cycles and for memory The processor will be delayed when the memory is busy doing a DMA transfer. By using caches, the processor can avoid having to access the memory most of the time, leaving most of the memory bandwidth free for use by the I/O devices. The real issue is cache coherence. What if the I/O device writes data that is currently in the processor cache (the processor may never see the new data) or reads data that is not up-to-date (because of a write-back cache)? Solutions are to flush the cache on every I/O operation (expensive) or have the hardware invalidate the cache lines.
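A sketch of how the processor might program such a transfer. The DMA controller's register layout here is entirely hypothetical; the point is only that the processor supplies the device, direction, memory address, and byte count, then gets out of the way until the completion interrupt.

#include <stdint.h>
#include <stddef.h>

/* A hypothetical DMA controller's registers, invented for illustration.   */
typedef struct {
    volatile uint32_t device_addr;   /* which I/O device to talk to        */
    volatile uint32_t mem_addr;      /* source/destination in main memory  */
    volatile uint32_t byte_count;    /* size of the block to move          */
    volatile uint32_t control;       /* bit 0 = start, bit 1 = read/write  */
    volatile uint32_t status;        /* bit 0 = done (also raises an IRQ)  */
} dma_regs_t;

#define DMA_START      0x1u
#define DMA_DIR_READ   0x2u          /* device -> memory                   */
#define DMA_DONE       0x1u

/* The processor's only job: program the transfer and return. The DMA
 * controller arbitrates for the bus and moves the data without further
 * processor involvement; completion is reported by an interrupt.          */
static void dma_start_read(dma_regs_t *dma, uint32_t dev,
                           void *dst, size_t nbytes) {
    dma->device_addr = dev;
    dma->mem_addr    = (uint32_t)(uintptr_t)dst;
    dma->byte_count  = (uint32_t)nbytes;
    dma->control     = DMA_START | DMA_DIR_READ;
    /* no polling loop here: the "transfer complete" interrupt handler
     * runs when dma->status reports DMA_DONE                              */
}

int main(void) {
    static dma_regs_t fake_dma;      /* stand-in for real mapped registers */
    static uint8_t buffer[4096];
    dma_start_read(&fake_dma, /*dev=*/3, buffer, sizeof buffer);
    return 0;
}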

22 The DMA Stale Data Problem
In systems with caches, there can be two copies of a data item, one in the cache and one in the main memory For a DMA input (from disk to memory) – the processor will be using stale data if that location is also in the cache For a DMA output (from memory to disk) and a write- back cache – the I/O device will receive stale data if the data is in the cache and has not yet been written back to the memory The coherency problem can be solved by Routing all I/O activity through the cache – expensive and a large negative performance impact Having the OS invalidate all the entries in the cache for an I/O input or force write-backs for an I/O output (called a cache flush) Providing hardware to selectively invalidate cache entries – i.e., need a snooping cache controller Cache flushing – requires some small amount of hardware support; because flushing need only happen on DMA block accesses, it will be relatively infrequent.
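A sketch of the OS-level discipline described above, with stubbed-out cache-maintenance and DMA hooks; the names cache_writeback_range, cache_invalidate_range, and the DMA helpers are invented (real systems expose platform-specific equivalents), and the stubs just print so the example is self-contained.

#include <stddef.h>
#include <stdio.h>

/* Hypothetical platform hooks, stubbed out for this sketch.                */
static void cache_writeback_range(void *a, size_t n)  { printf("writeback %zu bytes\n", n);  (void)a; }
static void cache_invalidate_range(void *a, size_t n) { printf("invalidate %zu bytes\n", n); (void)a; }
static void dma_to_device(void *a, size_t n)          { printf("DMA out %zu bytes\n", n);    (void)a; }
static void dma_from_device(void *a, size_t n)        { printf("DMA in %zu bytes\n", n);     (void)a; }

/* DMA output (memory -> disk) with a write-back cache: the newest data may
 * still be dirty in the cache, so force a write-back before the transfer.  */
static void dma_write_block(void *buf, size_t len) {
    cache_writeback_range(buf, len);
    dma_to_device(buf, len);
}

/* DMA input (disk -> memory): any cached copy of the buffer is now stale,
 * so invalidate those lines before the processor reads the buffer again.   */
static void dma_read_block(void *buf, size_t len) {
    dma_from_device(buf, len);
    cache_invalidate_range(buf, len);
}

int main(void) {
    static char buf[512];
    dma_write_block(buf, sizeof buf);
    dma_read_block(buf, sizeof buf);
    return 0;
}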

23 I/O System Performance
Designing an I/O system to meet a set of bandwidth and/or latency constraints means: finding the weakest link in the I/O system - the component that constrains the design (the processor and memory system? the underlying interconnection, i.e., the bus? the I/O controllers? the I/O devices themselves?); (re)configuring the weakest link to meet the bandwidth and/or latency requirements; and determining requirements for the rest of the components and (re)configuring them to support this latency and/or bandwidth. Performance of I/O systems depends on the rate at which the system transfers data. The transfer rate depends on the clock rate and is usually quoted in GB/sec. In I/O systems, GBs are measured using base 10 - i.e., 1 GB = 10^9 = 1,000,000,000 bytes.

24 I/O System Performance Example
A disk workload consisting of 64 KB reads and writes, where the user program executes 200,000 instructions per disk I/O operation, and a processor that sustains 3 billion instr/s and averages 100,000 OS instructions to handle a disk I/O operation. The maximum disk I/O rate (# I/O's/sec) of the processor is ____ (left blank in the class handout version).
A memory-I/O bus that sustains a transfer rate of 1000 MB/s. Each disk I/O reads/writes 64 KB, so the maximum I/O rate of the bus is ____.
SCSI disk I/O controllers with a DMA transfer rate of 320 MB/s that can accommodate up to 7 disks per controller; disk drives with a read/write bandwidth of 75 MB/s and an average seek plus rotational latency of 6 ms. What is the maximum sustainable I/O rate, and what is the number of disks and SCSI controllers required to achieve that rate?

25 I/O System Performance Example
A disk workload consisting of 64 KB reads and writes, where the user program executes 200,000 instructions per disk I/O operation, and a processor that sustains 3 billion instr/s and averages 100,000 OS instructions per disk I/O operation. The maximum disk I/O rate (# I/O's/s) of the processor is
Instr execution rate / Instr per I/O = (3 x 10^9) / ((200 + 100) x 10^3) = 10,000 I/O's/s
A memory-I/O bus that sustains a transfer rate of 1000 MB/s. Each disk I/O reads/writes 64 KB, so the maximum I/O rate of the bus is
Bus bandwidth / Bytes per I/O = (1000 x 10^6) / (64 x 10^3) = 15,625 I/O's/s
SCSI disk I/O controllers with a DMA transfer rate of 320 MB/s that can accommodate up to 7 disks per controller; disk drives with a read/write bandwidth of 75 MB/s and an average seek plus rotational latency of 6 ms. What is the maximum sustainable I/O rate, and what is the number of disks and SCSI controllers required to achieve that rate?

26 Disk I/O System Example
[Figure: the processor (limited to 10,000 I/O's/s) and cache sit on the Memory-I/O Bus (limited to 15,625 I/O's/s) along with main memory; SCSI I/O controllers (320 MB/s each, up to 7 disks per controller) hang off the bus, and each disk sustains 75 MB/s]

27 I/O System Performance Example, Con’t
So the processor is the bottleneck, not the bus. Disk drives with a read/write bandwidth of 75 MB/s and an average seek plus rotational latency of 6 ms:
Disk I/O read/write time = seek + rotational time + transfer time = 6 ms + 64 KB/(75 MB/s) = 6.9 ms
Thus each disk can complete 1000 ms/6.9 ms or about 146 I/O's per second. To saturate the processor requires 10,000 I/O's per second, or 10,000/146 = 69 disks.
To calculate the number of SCSI disk controllers, we need to know the average transfer rate per disk to ensure we can put the maximum of 7 disks per SCSI controller and that a disk controller won't saturate the memory-I/O bus during a DMA transfer. Note that only one disk controller will be using the bus at a time, so you only need to be concerned about how much load ONE disk I/O controller is putting on the memory-I/O bus when doing a DMA transfer (i.e., you don't have to worry about the processor bandwidth here).
Disk transfer rate = (transfer size)/(transfer time) = 64 KB/6.9 ms = 9.56 MB/s
Thus 7 disks won't saturate either the SCSI controller (with a maximum transfer rate of 320 MB/s) or the memory-I/O bus (1000 MB/s). This means we will need 69/7 or 10 SCSI controllers.
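To double-check the arithmetic on the last three slides, here is a short C program that redoes the calculation with the given numbers (base-10 units, per the earlier convention); it is only a verification aid, not part of the original slides.

#include <stdio.h>

int main(void) {
    /* givens from the example */
    double instr_rate    = 3e9;        /* instructions per second          */
    double instr_per_io  = 200e3 + 100e3;
    double bus_bw        = 1000e6;     /* bytes per second                 */
    double io_size       = 64e3;       /* bytes per disk I/O               */
    double disk_bw       = 75e6;       /* bytes per second                 */
    double seek_rot      = 6e-3;       /* seconds                          */
    double disks_per_ctl = 7;

    double cpu_io_rate  = instr_rate / instr_per_io;        /* 10,000 I/O/s */
    double bus_io_rate  = bus_bw / io_size;                 /* 15,625 I/O/s */
    double io_time      = seek_rot + io_size / disk_bw;     /* ~6.9 ms      */
    double disk_io_rate = 1.0 / io_time;                    /* ~146 I/O/s   */

    double disks = cpu_io_rate / disk_io_rate;              /* ~69 disks    */
    double ctls  = disks / disks_per_ctl;                   /* ~10 ctlrs    */

    printf("processor limit: %.0f I/O/s, bus limit: %.0f I/O/s\n",
           cpu_io_rate, bus_io_rate);
    printf("per-disk: %.1f ms per I/O -> %.0f I/O/s\n",
           io_time * 1e3, disk_io_rate);
    printf("need about %.0f disks and %.0f SCSI controllers\n", disks, ctls);
    return 0;
}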

28 Exceptions: Example MIPS
Jeremy Bolton, PhD Assistant Teaching Professor Constructed using materials: - Patt and Patel Introduction to Computing Systems (2nd) - Patterson and Hennessy Computer Organization and Design (4th) **A special thanks to Eric Roberts and Mary Jane Irwin

29 Dealing with Exceptions
Exceptions (and interrupts) are just another form of control hazard. Exceptions and interrupts arise from: R-type arithmetic overflow; trying to execute an undefined instruction; an I/O device request; an OS service request (e.g., a page fault, TLB exception); a hardware malfunction. The pipeline has to stop executing the offending instruction in midstream, let all prior instructions complete, flush all following instructions, set a register to show the cause of the exception, save the address of the offending instruction, and then jump to a prearranged address (the address of the exception handler code). The software (OS) looks at the cause of the exception and "handles" it. For undefined instructions, hardware failure, or arithmetic overflow, the OS normally kills the program and returns an indication of the reason to the user. For an I/O device request or an OS service call, the OS saves the state of the program, performs the desired task and, at some point in the future, restores the program to continue execution.

30 Two Types of “Exceptions”
Interrupts – asynchronous to program execution; caused by external events; may be handled between instructions, so the instructions currently active in the pipeline can complete before passing control to the OS interrupt handler; simply suspend and resume the user program. Exceptions – synchronous to program execution; caused by internal events; the condition must be remedied by the trap handler for that instruction, so the pipeline must stop the offending instruction midstream and pass control to the OS trap handler; the offending instruction may be retried (or simulated by the OS) and the program may continue, or it may be aborted.

31 Where in the Pipeline Exceptions Occur
Stage(s) and synchrony of each exception type: Arithmetic overflow - EX - synchronous (yes); Undefined instruction - ID - synchronous (yes); TLB or page fault - IF, MEM - synchronous (yes); I/O service request - any stage - asynchronous (no); Hardware malfunction - any stage - asynchronous (no).
Note that an I/O service request is not associated with any executing instruction, so it can be handled when the hardware decides (to some limit), i.e., at the most convenient place to handle the exception. Beware that multiple exceptions can occur simultaneously in a single clock cycle.

32 Multiple Simultaneous Exceptions
[Pipeline diagram: five instructions in flight, one per stage, in the same clock cycle]
Inst 0 suffers a D$ page fault (in MEM), Inst 1 an arithmetic overflow (in EX), Inst 2 an undefined instruction (in ID), Inst 3 no exception, and Inst 4 an I$ page fault (in IF).
Hardware sorts the exceptions so that the earliest instruction is the one interrupted first.

33 Additions to MIPS to Handle Exceptions (Fig 6.42)
Cause register (records exceptions) – hardware to record in Cause the exceptions and a signal to control writes to it (CauseWrite). EPC register (records the addresses of the offending instructions) – hardware to record in EPC the address of the offending instruction and a signal to control writes to it (EPCWrite); exception software must match the exception to the instruction. A way to load the PC with the address of the exception handler: expand the PC input mux with a new input hardwired to the exception handler address (e.g., 8000 0180hex for arithmetic overflow). A way to flush the offending instruction and the ones that follow it.
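A conceptual sketch, in C rather than hardware, of what these additions do when an exception is detected: record the cause, capture the offending PC in EPC, flush the younger instructions, and load the PC with the handler address (8000 0180hex, the MIPS convention used in P&H). The structure and field names are invented for illustration.

#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

/* A toy model of the exception actions; all names are invented and this is
 * hardware behavior expressed as code only for clarity.                    */
#define HANDLER_ADDR 0x80000180u     /* prearranged exception handler PC    */

typedef struct {
    uint32_t pc, cause, epc;
    bool     if_flush, id_flush, ex_flush;   /* squash younger instructions */
} cpu_state_t;

enum cause_code { EXC_UNDEF_INSTR = 10, EXC_OVERFLOW = 12 };  /* per Cause codes */

static void raise_exception(cpu_state_t *cpu, uint32_t offending_pc,
                            enum cause_code code) {
    cpu->cause = (uint32_t)code;       /* record why the exception happened */
    cpu->epc   = offending_pc;         /* save address of offending instr   */
    cpu->if_flush = cpu->id_flush = cpu->ex_flush = true;  /* flush pipeline */
    cpu->pc    = HANDLER_ADDR;         /* jump to the exception handler      */
}

int main(void) {
    cpu_state_t cpu = { .pc = 0x00400020u };
    raise_exception(&cpu, cpu.pc, EXC_OVERFLOW);
    printf("PC=0x%08X EPC=0x%08X cause=%u\n", cpu.pc, cpu.epc, cpu.cause);
    return 0;
}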

34 Datapath with Controls for Exceptions
[Figure 4.66: the pipelined datapath and control extended for exceptions - the Cause and EPC registers are added, the PC-source mux gains an input hardwired to the exception handler address, and IF.Flush, ID.Flush, and EX.Flush signals (alongside the hazard and forwarding units) squash the offending instruction and the ones that follow it]

35 Exceptions: Example LC3
Jeremy Bolton, PhD Assistant Teaching Professor Constructed using materials: - Patt and Patel Introduction to Computing Systems (2nd) - Patterson and Hennessy Computer Organization and Design (4th) **A special thanks to Eric Roberts and Mary Jane Irwin

36 Interrupt-Driven I/O
An external device signals that it needs to be serviced. The processor saves state and starts the service routine. When finished, the processor restores state and resumes the program. How do we save the current state of the computation so the interrupt can be resolved? An interrupt is an unscripted subroutine call, triggered by an external event.

37 Processor State: Example LC3
What state is needed to completely capture the state of a running process? Processor Status Register Privilege [15], Priority Level [10:8], Condition Codes [2:0] Program Counter Pointer to next instruction to be executed. Registers All temporary state of the process that’s not stored in memory.

38 Where to Save Processor State?
Can't use registers: the programmer doesn't know when an interrupt might occur, so she can't prepare by saving critical registers, and when resuming we need to restore state exactly as it was. Memory allocated by the service routine? The state must be saved before the routine is invoked, so we wouldn't know where to put it. Also, interrupts may be nested - that is, an interrupt service routine might itself get interrupted! Use a stack! The location of the stack is "hard-wired". Push state to save, pop to restore.

39 Supervisor Stack A special region of memory used as the stack for interrupt service routines. Initial Supervisor Stack Pointer (SSP) stored in Saved.SSP. Another register for storing User Stack Pointer (USP): Saved.USP. Want to use R6 as stack pointer. So that our PUSH/POP routines still work. When switching from User mode to Supervisor mode (as result of interrupt), save R6 to Saved.USP.

40 Invoking the Service Routine – The Details
If Priv = 1 (user), Saved.USP = R6, then R6 = Saved.SSP. Push PSR and PC to Supervisor Stack. Set PSR[15] = 0 (supervisor mode). Set PSR[10:8] = priority of interrupt being serviced. Set PSR[2:0] = 0. Set MAR = x01vv, where vv = 8-bit interrupt vector provided by interrupting device (e.g., keyboard = x80). Load memory location (M[x01vv]) into MDR. Set PC = MDR; now first instruction of ISR will be fetched. Note: This all happens between the STORE RESULT of the last user instruction and the FETCH of the first ISR instruction.
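The entry sequence above, written as a minimal C simulation so the stack switching and vectoring are explicit. The lc3_t structure, the memory array, and the example values (ISR at x6200, keyboard vector x80 from the slide, initial SSP of x3000) are illustrative assumptions, not LC-3 microcode.

#include <stdint.h>
#include <stdio.h>

/* Minimal model of the LC-3 state touched by interrupt entry.              */
#define PSR_PRIV   (1u << 15)          /* 1 = user mode, 0 = supervisor     */
static uint16_t mem[0x10000];          /* 64K words of memory               */

typedef struct {
    uint16_t pc, psr, r6;              /* R6 doubles as the stack pointer   */
    uint16_t saved_ssp, saved_usp;
} lc3_t;

/* Steps from the slide: switch stacks, push PSR and PC, enter supervisor
 * mode at the interrupt's priority, and vector through x01vv.              */
static void take_interrupt(lc3_t *cpu, uint8_t vec, unsigned priority) {
    if (cpu->psr & PSR_PRIV) {                 /* user mode? swap stacks    */
        cpu->saved_usp = cpu->r6;
        cpu->r6 = cpu->saved_ssp;
    }
    mem[--cpu->r6] = cpu->psr;                 /* push PSR                  */
    mem[--cpu->r6] = cpu->pc;                  /* push PC                   */

    cpu->psr &= (uint16_t)~PSR_PRIV;           /* PSR[15] = 0: supervisor   */
    cpu->psr = (uint16_t)((cpu->psr & ~0x0700u) | ((priority & 7u) << 8));
    cpu->psr &= (uint16_t)~0x0007u;            /* clear condition codes     */

    cpu->pc = mem[0x0100 + vec];               /* PC = M[x01vv]             */
}

int main(void) {
    lc3_t cpu = { .pc = 0x3007, .psr = PSR_PRIV, .r6 = 0xFE00,
                  .saved_ssp = 0x3000 };
    mem[0x0180] = 0x6200;                      /* vector x80 -> ISR at x6200 */
    take_interrupt(&cpu, 0x80, 4);
    printf("PC=x%04X R6=x%04X PSR=x%04X\n", cpu.pc, cpu.r6, cpu.psr);
    return 0;
}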

41 Returning from Interrupt
Special instruction – RTI – that restores state. Pop PC from supervisor stack. (PC = M[R6]; R6 = R6 + 1) Pop PSR from supervisor stack. (PSR = M[R6]; R6 = R6 + 1) If PSR[15] = 1, R6 = Saved.USP. (If going back to user mode, need to restore User Stack Pointer.) RTI is a privileged instruction. Can only be executed in Supervisor Mode. If executed in User Mode, causes an exception. (More about that later.)
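And the matching return path, again as a self-contained C sketch with the same simplifications; the stack contents set up in main mimic the state left by the entry sequence (saved PC x3007 and a user-mode PSR, values taken from the example slides that follow).

#include <stdint.h>
#include <stdio.h>

#define PSR_PRIV (1u << 15)            /* 1 = user mode, per the slides     */
static uint16_t mem[0x10000];

typedef struct {
    uint16_t pc, psr, r6, saved_usp;
} lc3_t;

/* RTI, following the slide: pop PC, pop PSR, and if we are returning to
 * user mode, switch R6 back to the saved user stack pointer.               */
static int rti(lc3_t *cpu) {
    if (cpu->psr & PSR_PRIV)
        return -1;                     /* privileged: exception in user mode */
    cpu->pc  = mem[cpu->r6++];         /* pop PC                             */
    cpu->psr = mem[cpu->r6++];         /* pop PSR                            */
    if (cpu->psr & PSR_PRIV)
        cpu->r6 = cpu->saved_usp;      /* back to the user stack             */
    return 0;
}

int main(void) {
    /* supervisor stack as left by interrupt entry: PC on top, PSR below     */
    lc3_t cpu = { .psr = 0x0400, .r6 = 0x2FFE, .saved_usp = 0xFE00 };
    mem[0x2FFE] = 0x3007;              /* saved PC of the interrupted program */
    mem[0x2FFF] = 0x8000 | 0x0002;     /* saved PSR: user mode, cc = Z        */
    rti(&cpu);
    printf("PC=x%04X PSR=x%04X R6=x%04X\n", cpu.pc, cpu.psr, cpu.r6);
    return 0;
}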

42 Example (1)
[Figure: Program A with the ADD at x3006; the supervisor stack is empty and Saved.SSP holds its base; PC = x3006] Executing the ADD at location x3006 when Device B interrupts.

43 Example (2)
[Figure: Program A and the ISR for Device B (starting at x6200, ending with an RTI at x6210); the supervisor stack now holds x3007 and the PSR for A, with R6 pointing at the top; PC = x6200] Saved.USP = R6; R6 = Saved.SSP. Push PSR and PC onto the stack, then transfer to the Device B service routine (at x6200).

44 Example (3)
[Figure: same stack contents (x3007 and the PSR for A); the ISR for Device B is running, with the AND at x6202; PC = x6203] Executing the AND at x6202 when Device C interrupts.

45 Example (4)
[Figure: the supervisor stack now holds, from the top, x6203 and the PSR for B, then x3007 and the PSR for A, with R6 at the top; the ISR for Device C starts at x6300 and ends with an RTI at x6315; PC = x6300] Push PSR and PC onto the stack, then transfer to the Device C service routine (at x6300).

46 Example (5)
[Figure: x6203 and the PSR for B have been popped; R6 is back at the entries holding x3007 and the PSR for A; PC = x6203] Execute the RTI at x6315; pop PC and PSR from the stack.

47 Example (6)
[Figure: the supervisor stack is empty again and Saved.SSP records its top; PC = x3007] Execute the RTI at x6210; pop PC and PSR from the stack. Restore R6. Continue Program A as if nothing had happened.

48 Exception: Internal Interrupt
When something unexpected happens inside the processor, it may cause an exception. Examples: a privileged operation (e.g., RTI in user mode), executing an illegal opcode, divide by zero, accessing an illegal address (e.g., protected system memory). An exception is handled just like an interrupt: the vector is determined internally by the type of exception, and the priority is the same as the running program's.
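A small sketch of the dispatch described above. The vector numbers (x00 for a privilege violation, x01 for an illegal opcode) follow the usual LC-3 convention from Patt and Patel, but treat them as an assumption of this sketch rather than something stated on the slide.

#include <stdio.h>

/* Exception causes named on the slide; the vector numbers are assumed.     */
enum exception_cause {
    EXC_PRIVILEGE_VIOLATION,   /* e.g., RTI executed in user mode           */
    EXC_ILLEGAL_OPCODE
};

static unsigned char exception_vector(enum exception_cause cause) {
    switch (cause) {
    case EXC_PRIVILEGE_VIOLATION: return 0x00;
    case EXC_ILLEGAL_OPCODE:      return 0x01;
    }
    return 0x00;
}

int main(void) {
    /* the exception is then handled just like an interrupt: push PSR and PC
     * on the supervisor stack and jump through M[x0100 + vector], except
     * that the priority stays the same as the running program's            */
    enum exception_cause cause = EXC_ILLEGAL_OPCODE;
    printf("exception vectors through x01%02X\n", exception_vector(cause));
    return 0;
}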

