CS 300 – Lecture 24 Intro to Computer Architecture / Assembly Language The LAST Lecture!


1 CS 300 – Lecture 24 Intro to Computer Architecture / Assembly Language The LAST Lecture!

2 Final Exam Tuesday, 8am. The exam will be comprehensive. I'll post a worksheet (practice exam) Thursday evening that covers topics since the last exam.

3 Homework The last homework is due Friday. "darcs pull" is your friend – minor bugs have been found (and fixed) in the assembler. Extra credit: if you want it, come see me after class. All EC stuff is due Friday of finals week.

4 Speeding Up An Instruction
Typically, an instruction is executed in stages:
* Fetch (brings the instruction in from cache / memory)
* Decode (figure out what the instruction will do)
* Execute (the actual operation, like add or multiply)
* Store (place results in registers / memory)
This varies a lot from processor to processor but the idea is always the same – break up execution into smaller chunks that overlap.

5 Other Speedup Strategies
* Vector instructions: explicit parallelism in the instruction set to feed sequences of data to a functional unit (Cray-1 and successors)
* Multiple instructions at once: pack instruction words with lots of independent operations that execute at the same time (VLIW)
* Replicated CPU, one instruction stream (SIMD)
* Multicore machines (MIMD) – various coupling strategies

6 Pipeline Hazards
Structural: lack of computational resources to perform operations in parallel
Data: dependencies among instructions (write-read)
Control: conditional branching prevents you from knowing which instruction comes next

7 Memory Hazards
* It's "obvious" which registers an instruction uses
* It's much harder to figure out which memory an instruction touches and how memory accesses interact
What can we do?
* Reorder reads without worry
* Reads and writes can't be switched unless we know that they don't interfere
* Separate "memory banks" (instruction / data) reduce hazards

8 Compiler / Programmer Help
The compiler and programmer have access to information that the CPU doesn't. We can often determine whether aliasing is possible – aliasing is what makes it hard to reorganize memory accesses. A CPU isn't able to prove that two pointers must point to different things, for example.

9 Delayed Branching
One interesting feature of the MIPS was the delayed branch: the instruction after a branch is always executed. The idea is to keep at least one useful instruction in the pipeline while the processor works out where the branch goes. You usually can't see this – the assembler fills branch delay slots for you behind your back.

10 User-Level Pipelining
One of the big ideas in software is to pipeline at the application level:
* Anticipate future I/O
* Use non-blocking reads (threading)
* Understand whether writers may alter information that is prefetched
Note that web browsers pipeline web page display!

11 Pipeline Metaphors
The idea of keeping lots of workers busy at once is pervasive. Managers have to deal with similar issues when scheduling work:
* Not all workers can perform the same task
* Workflow requires that things be passed from one worker to another
* Speculative work is often done (like a restaurant preparing food that may or may not be ordered)

12 Examples The following slides are stolen from Mary Jane Irwin (www.cse.psu.edu/~mji) and from www.cse.psu.edu/~cg431

13 Single Cycle vs. Multiple Cycle Timing
[Timing diagram: the single-cycle implementation gives lw and sw one long clock cycle each (sw wastes part of its cycle), while the multicycle implementation runs lw through IFetch/Dec/Exec/Mem/WB and sw and R-type through fewer short cycles, spanning cycles 1–10.]
The multicycle clock is slower than 1/5th of the single-cycle clock due to stage register overhead.

14 How Can We Make It Even Faster?
Split the multiple instruction cycle into smaller and smaller steps
– There is a point of diminishing returns where as much time is spent loading the state registers as doing the work
Start fetching and executing the next instruction before the current one has completed
– Pipelining – (all?) modern processors are pipelined for performance

15 A Pipelined MIPS Processor
Start the next instruction before the current one has completed
– improves throughput – the total amount of work done in a given time
– instruction latency (execution time, delay time, response time – the time from the start of an instruction to its completion) is not reduced
[Pipeline diagram: lw, sw, and an R-type instruction each pass through IFetch/Dec/Exec/Mem/WB, starting one cycle apart across cycles 1–8.]
The clock cycle (pipeline stage time) is limited by the slowest stage; for some instructions, some stages are wasted cycles.

16 Single Cycle, Multiple Cycle, vs. Pipeline
[Timing diagram comparing all three on lw, sw, and an R-type instruction: one long, partly wasted cycle per instruction (single cycle), separate short cycles per stage (multiple cycle), and overlapped IFetch/Dec/Exec/Mem/WB stages (pipelined).]

17 Pipelining the MIPS ISA
What makes it easy
– all instructions are the same length (32 bits), so we can fetch in the 1st stage and decode in the 2nd stage
– few instruction formats (three) with symmetry across formats, so we can begin reading the register file in the 2nd stage
– memory operations can occur only in loads and stores, so we can use the execute stage to calculate memory addresses
– each MIPS instruction writes at most one result (i.e., changes the machine state) and does so near the end of the pipeline (MEM and WB)
What makes it hard
– structural hazards: what if we had only one memory?
– control hazards: what about branches?
– data hazards: what if an instruction's input operands depend on the output of a previous instruction?

18 Can Pipelining Get Us Into Trouble?
Yes: pipeline hazards
– structural hazards: two different instructions attempt to use the same resource at the same time
– data hazards: attempt to use data before it is ready; an instruction's source operand(s) are produced by a prior instruction still in the pipeline
– control hazards: attempt to make a decision about program control flow before the condition has been evaluated and the new PC target address calculated (branch instructions)
We can always resolve hazards by waiting
– pipeline control must detect the hazard
– and take action to resolve it

19 A Single Memory Would Be a Structural Hazard
[Pipeline diagram: lw followed by four more instructions (Inst 1–4), overlapped through the Mem/Reg/ALU stages; in one cycle, lw is reading data from memory while a later instruction is reading its instruction from the same memory.]
Fix with separate instruction and data memories (I$ and D$).

20 Register Usage Can Cause Data Hazards
[Pipeline diagram for the sequence: add $1, / sub $4,$1,$5 / and $6,$1,$7 / xor $4,$1,$5 / or $8,$1,$9 – the instructions after the add all read $1 before the add has written it back.]
Dependencies backward in time cause hazards: a read-before-write data hazard.

21 To Know for the Final
* How caching, virtual memory, and pipelines work in broad terms
* What problems are being solved
* Basic approaches to these problems
* What is done in software and hardware

