Presentation on theme: "Computer System Organization Computer-system operation – One or more CPUs, device controllers connect through common bus providing access to shared memory."— Presentation transcript:

1 Computer System Organization
Computer-system operation
– One or more CPUs and device controllers connect through a common bus providing access to shared memory
– Concurrent execution of CPUs and devices competing for memory cycles

2 Computer System Operation
I/O devices and the CPU can execute concurrently
Each device controller is in charge of a particular device type
Each device controller has a local buffer
The CPU moves data between main memory and the local buffers
I/O proper happens between the device and the controller's local buffer
The device controller informs the CPU that it has finished its operation by causing an interrupt

3 Computer Startup and Execution
The bootstrap program is loaded at power-up or reboot
– Typically stored in ROM or EEPROM, generally known as firmware
– Initializes all aspects of the system
– Loads the operating-system kernel and starts its execution
The kernel then runs and waits for an event to occur
– An interrupt, from either hardware or software
Hardware can send a trigger on the bus at any time
Software triggers an interrupt by making a system call
– An interrupt stops the current execution and transfers control to a fixed location
– The interrupt service routine executes, then execution resumes where it was interrupted
– There is usually a service routine for each device/function
» The interrupt vector dispatches each interrupt to the appropriate routine
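The interrupt-vector dispatch described above can be sketched as a table indexed by interrupt number, each entry holding the service routine to run. This is a hypothetical illustration (the handler names and return strings are invented), not any particular hardware's layout:

```python
# Hypothetical interrupt service routines for two devices
def timer_isr():
    return "timer handled"

def network_isr():
    return "network packet handled"

# Interrupt vector: interrupt number -> interrupt service routine
interrupt_vector = {
    0: timer_isr,
    1: network_isr,
}

def dispatch(irq_number):
    """Look up the ISR in the vector and run it, as the hardware would
    after saving the interrupted PC and switching to the fixed location."""
    isr = interrupt_vector.get(irq_number)
    if isr is None:
        raise RuntimeError(f"no handler for interrupt {irq_number}")
    return isr()

print(dispatch(1))  # → network packet handled
```

Real hardware stores addresses of routines in a fixed memory region rather than a dictionary, but the lookup-then-jump structure is the same.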

4 Interrupt Timeline

5 Interrupts
Interrupts are invoked with interrupt lines from devices
The interrupt controller chooses which interrupt request to honor
– The mask enables/disables individual interrupts
– The priority encoder picks the highest-priority enabled interrupt
– Software interrupts are set/cleared by software
The CPU can disable all interrupts with an internal flag
[Diagram: interrupt controller with network, timer, and software interrupt lines, interrupt mask control, priority encoder, and the CPU's interrupt-disable flag]
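The mask-plus-priority-encoder selection can be sketched with bit vectors. This is a minimal illustration under one assumed convention (line 0 = highest priority); real controllers differ in numbering and features:

```python
def select_interrupt(pending, mask):
    """pending, mask: bit vectors where bit i set means line i is
    raised / enabled. Returns the line number to honor, or None."""
    enabled = pending & mask           # the mask disables unwanted lines
    if enabled == 0:
        return None                    # nothing enabled is pending
    # Priority encoder: lowest set bit wins (line 0 = highest priority)
    return (enabled & -enabled).bit_length() - 1

# Lines 1 and 3 are pending; the mask enables only lines 2 and 3,
# so line 3 is the one honored
print(select_interrupt(0b1010, 0b1100))  # → 3
```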

6 Example: Network Interrupt
User program executing:
  add  $r1,$r2,$r3
  subi $r4,$r1,#4
  slli $r4,$r4,#2
External interrupt arrives: PC saved, all interrupts disabled, CPU enters supervisor mode
Interrupt handler transfers the network packet from hardware to kernel buffers:
  lw  $r2,0($r4)
  lw  $r3,4($r4)
  add $r2,$r2,$r3
  sw  8($r4),$r2
PC restored, CPU returns to user mode, and the user program resumes

7 Common Functions of Interrupts
An interrupt transfers control to the interrupt service routine, generally through the interrupt vector, which contains the addresses of all the service routines
The interrupt architecture must save the address of the interrupted instruction
Incoming interrupts are disabled while another interrupt is being processed, to prevent a lost interrupt
A trap is a software-generated interrupt caused either by an error or by a user request
An operating system is interrupt-driven

8 Storage Structure
Programs must be in main memory (RAM) to execute
Von Neumann architecture
[Flowchart: START → fetch next instruction from memory into the IR → increment the PC → decode and execute the instruction in the IR → if STOP, halt; otherwise repeat]
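The fetch-increment-decode-execute cycle above can be sketched for a toy machine. The instruction set (LOAD/ADD/STOP) and accumulator are invented for illustration; only the cycle structure mirrors the slide:

```python
def run(memory):
    """Von Neumann cycle: fetch into IR, increment PC, decode, execute."""
    pc, acc = 0, 0
    while True:
        ir = memory[pc]        # fetch next instruction from memory to IR
        pc += 1                # increment PC
        op, arg = ir           # decode the instruction in IR
        if op == "LOAD":       # execute
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "STOP":
            return acc         # STOP? YES: halt

program = [("LOAD", 5), ("ADD", 3), ("STOP", 0)]
print(run(program))  # → 8
```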

9 Storage Structure
Ideally, we want programs and data to reside in main memory permanently
– Main memory is usually too small
– Main memory is volatile: it loses its contents on power loss
Secondary storage holds large quantities of data permanently
– The magnetic disk is the most common secondary-storage device
– Actually, there is a hierarchy of storage varying by speed, cost, size, and volatility

10 Storage-Device Hierarchy

11 Storage Hierarchy
Storage systems are organized in a hierarchy according to speed, cost, and volatility
A program in execution (i.e., a process) generates a stream of memory addresses
[Flowchart: START → fetch next instruction from memory into the IR → increment the PC → decode and execute the instruction in the IR → if STOP, halt; otherwise repeat]

12 Storage Hierarchy
What if the next instruction or datum is not in (main) memory?
– Problem: memory can be a bottleneck for processor performance
– Solution: rely on a hierarchy of faster memories to bridge the gap

13 Caching
An important principle, performed at many levels in a computer (in hardware, the operating system, and software)
Information in use is temporarily copied from slower to faster storage
The faster storage (cache) is checked first to determine whether the information is there
– If it is, the information is used directly from the cache (fast)
– If not, the data is copied into the cache and used there
What is the cache for the disk (i.e., secondary storage)?
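The check-the-cache-first rule can be sketched in a few lines. The addresses and values here are invented placeholders; the point is the hit/miss structure:

```python
cache = {}                                       # fast storage: address -> value
backing_store = {0x10: "hello", 0x20: "world"}   # slow storage

def read(address):
    if address in cache:            # cache checked first
        return cache[address], "hit"   # hit: use the fast copy directly
    value = backing_store[address]  # miss: go to the slower storage
    cache[address] = value          # copy into the cache for next time
    return value, "miss"

print(read(0x10))  # first access: ('hello', 'miss')
print(read(0x10))  # second access: ('hello', 'hit')
```

The same structure appears at every level of the hierarchy: CPU cache in front of RAM, and RAM acting as the cache for the disk.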

14 Caching Analogy
You are going to do some research on a particular topic. Thus, you go to the library and look for a shelf that contains books on that topic
You pick up a book from the shelf, find a chair, sit down, and start reading

15 Caching Analogy
You find a reference to another book on the same topic that you are also interested in reading. Thus, you stand up, go back to the same shelf, leave the first book there, and pick up the other book
Then you go back to the chair and start reading the second book
Later on, you realize that you want to read the first book again (or another related book). Thus, you repeat the same process (i.e., go to the shelf to find it)

16 Caching Analogy
Suppose that instead of taking just one book from the shelf, you take 10 books on the same topic. Then you find a table with a chair, put the 10 books on the table, sit there, and start reading one of the books
If you need another related book, there is a good chance that it is on your table, so you don't have to go to the shelf to get it. Also, you can leave the first book on the table, since there is a good chance that you will need it again later

17 Caching Analogy
The table is a cache for what?
If the book that you need is on the table, you have a cache hit
If the book that you need is not on the table, you have a cache miss
The cache is smaller than the storage being cached
– Cache management is an important design problem
– Cache size and replacement policy

18 Caching
Temporal locality (locality in time)
– Recently accessed items tend to be accessed again in the near future
– Keep the most recently accessed data closer to the processor
– In the analogy?
Spatial locality (locality in space)
– Accesses are clustered in the address space
– Move blocks consisting of contiguous words to the faster levels
– In the analogy?
– Why are "gotos" not good?

19 Caching
We know that, statistically, only a small part of the entire memory space is being accessed at any given time, and values in that subset are accessed repeatedly
These locality properties allow us to use a small amount of very fast memory to effectively accelerate the majority of memory accesses
The result is a memory system that can hold a large amount of information (in a large, low-cost memory) yet provide nearly the same access speed as if all of the memory were very fast and expensive

20 Caching
Suppose a memory reference generated by the CPU causes a cache miss (i.e., the corresponding value is not in the cache)
– In the analogy, you don't have the book that you need on the table
The address is sent to main memory to fetch the desired word and load it into the cache. However, if the cache is already full, you must replace one of the values in the cache. What do you think would be a good policy for choosing the value to be replaced?
– In the analogy, the table is full; which book do you remove from the table to make room for the new one?

21 Caching
Given that we want to keep values that we will need again soon, what about getting rid of the one that won't be needed for the longest time?
Choosing which value to replace is called the replacement policy
We will cover this in detail in Chapter 9
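Evicting the value that won't be needed for the longest time requires knowing the future, so practical caches approximate it; a common approximation is LRU (least recently used): evict the value that has gone unused the longest. A minimal sketch with an invented capacity-3 cache:

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()   # insertion order: oldest entry first

    def access(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)      # hit: mark as most recently used
            return "hit"
        if len(self.data) >= self.capacity:
            self.data.popitem(last=False)   # miss + full: evict the LRU entry
        self.data[key] = value
        return "miss"

cache = LRUCache(3)
for block in ["A", "B", "C", "A", "D"]:     # "D" evicts "B", the LRU block
    cache.access(block, block)
print(list(cache.data))  # → ['C', 'A', 'D']
```

In the analogy: when the table is full, you return the book you haven't touched for the longest time.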

22 In general …

23 Hit: the data appears in some block in the faster level
– Hit rate: the fraction of memory accesses found in the faster level
– Hit time: time to access the faster level, which consists of the memory access time + the time to determine hit/miss
Miss: the data needs to be retrieved from a block in the slower level
– Miss rate: 1 – (hit rate)
– Miss penalty: time to replace a block in the faster level + time to deliver the block to the processor
Hit time << miss penalty
We will cover memory management in Chapters 8 and 9
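These definitions combine into the standard average memory access time (AMAT) formula: AMAT = hit time + miss rate × miss penalty. The numbers below (1 ns hit time, 5% miss rate, 100 ns miss penalty) are assumed purely for illustration:

```python
def amat(hit_time, miss_rate, miss_penalty):
    """Average memory access time: every access pays the hit time,
    and the fraction that misses additionally pays the miss penalty."""
    return hit_time + miss_rate * miss_penalty

print(amat(1.0, 0.05, 100.0))  # average access time in ns
```

Note how the slow level dominates even at a 5% miss rate, which is why hit time << miss penalty makes a high hit rate so important.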

24 I/O Structure
Storage is one of many types of I/O devices
Each device is connected to a controller
– Maintains local buffer storage and a set of special-purpose registers
– Responsible for moving data between the peripheral devices that it controls and its local buffer storage
– There is a device driver for each device controller
It understands the device controller and presents a uniform interface to the device for the rest of the operating system

25 Direct Memory Access
The device controller transfers a block of data directly to/from main memory
It interrupts the CPU when the block transfer is completed
Only one interrupt per block is generated, rather than one interrupt per byte

26 Computer System Architecture
A computer system can be organized in a number of different ways, which we can categorize roughly by the number of general-purpose processors it has
Single-processor systems
– Range from PDAs to mainframes
– Almost all have special-purpose processors
PCs contain a microprocessor in the keyboard to convert keystrokes into codes to be sent to the CPU
These are not considered multiprocessor systems

27 Computer System Architecture
Multiprocessor systems
– Increased throughput
What is the speed-up ratio with N processors?
– Economy of scale
One system with N processors or N single-processor systems: which one is cheaper?
– Increased reliability
Some systems are fault tolerant
– Asymmetric multiprocessing
Each processor is assigned a specific task
A master processor controls the system
– Symmetric multiprocessing (SMP) is most common
There is no master-slave relationship
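One common way to reason about the speed-up-ratio question above is Amdahl's law: if a fraction s of the work is inherently serial, then N processors give a speedup of 1 / (s + (1 − s) / N), so the ratio is always less than N. A quick sketch with assumed example numbers:

```python
def speedup(serial_fraction, n_processors):
    """Amdahl's law: the serial fraction limits the achievable speedup."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

# With 10% serial work, the speedup stays well below the processor count
print(round(speedup(0.10, 4), 2))     # → 3.08
print(round(speedup(0.10, 1000), 2))  # → 9.91, capped near 1/0.10 = 10
```

This is why doubling the processors never doubles the throughput in practice: overheads such as contention and the serial portions of the workload eat part of the gain.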
