Computer Architecture

1 Computer Architecture
Princess Sumaya University for Technology Computer Architecture Dr. Esam Al_Qaralleh

2 Data path

3 The Processor: Data Path and Control
Two types of functional units: elements that operate on data values (combinational) elements that contain state (state elements)

4 Single Cycle Implementation
Typical execution: read the contents of some state elements, send values through some combinational logic, and write results to one or more state elements Use a clock signal for synchronization Edge-triggered methodology

5 A portion of the datapath used for fetching instructions

6 The datapath for R-type instructions

7 The datapath for load and store instructions

8 The datapath for branch instructions

9 Complete Data Path

10 Control
Selecting the operations to perform (ALU, read/write, etc.) Controlling the flow of data (multiplexor inputs) Information comes from the 32 bits of the instruction Example: lw $1, 100($2) The value of the control signals depends on: what instruction is being executed which step is being performed
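The idea that the control signals are a function of the instruction can be sketched as a simple lookup. The signal names (RegDst, ALUSrc, MemtoReg, ...) follow the usual textbook convention for the single-cycle MIPS datapath; treat the exact encoding as illustrative rather than definitive.

```python
# Sketch of the main control unit for the single-cycle MIPS datapath.
# Signal names follow the common textbook convention; None marks a
# "don't care" (X) value.

CONTROL = {
    # class:  (RegDst, ALUSrc, MemtoReg, RegWrite, MemRead, MemWrite, Branch, ALUOp)
    "r-type": (1,    0, 0,    1, 0, 0, 0, "10"),
    "lw":     (0,    1, 1,    1, 1, 0, 0, "00"),
    "sw":     (None, 1, None, 0, 0, 1, 0, "00"),
    "beq":    (None, 0, None, 0, 0, 0, 1, "01"),
}

def control_signals(instr_class):
    """Return the control-signal tuple for an instruction class."""
    return CONTROL[instr_class]

# For lw $1, 100($2): ALUSrc=1 selects the sign-extended immediate,
# MemRead=1 reads data memory, MemtoReg=1 routes it to the register file.
print(control_signals("lw"))
```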

11 Data Path with Control

12 Single Cycle Implementation
Calculate cycle time assuming negligible delays except: memory (2ns), ALU and adders (2ns), register file access (1ns)
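The cycle-time calculation above can be worked through directly: sum the unit delays along each instruction's path and take the maximum, since the single-cycle clock must accommodate the slowest instruction.

```python
# Critical-path sketch for the single-cycle datapath, using the delays
# from the slide: memory 2 ns, ALU/adders 2 ns, register file access 1 ns.
MEM, ALU, REG = 2, 2, 1

paths = {
    # instruction fetch is a memory access in every case
    "R-type": MEM + REG + ALU + REG,        # fetch, reg read, ALU, reg write
    "lw":     MEM + REG + ALU + MEM + REG,  # ... plus a data memory read
    "sw":     MEM + REG + ALU + MEM,        # data memory write, no reg write
    "beq":    MEM + REG + ALU,              # compare in the ALU
}

cycle_time = max(paths.values())  # the clock must fit the slowest instruction
print(paths, "cycle time =", cycle_time, "ns")  # lw dominates at 8 ns
```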

13 Multicycle Approach
Single-cycle problems: what if we had a more complicated instruction like floating point? wasteful of area One solution: use a “smaller” cycle time have different instructions take different numbers of cycles a “multicycle” datapath

14 Multicycle Approach
Break up the instructions into steps, each step takes a cycle balance the amount of work to be done restrict each cycle to use only one major functional unit At the end of a cycle store values for use in later cycles (easiest thing to do) introduce additional “internal” registers

15 Multicycle Approach Implementation

16 Five Execution Steps
Instruction Fetch Instruction Decode and Register Fetch Execution, Memory Address Computation, or Branch Completion Memory Access or R-type Instruction Completion Write-back Step Instructions take from 3 to 5 cycles!

17 Five Execution Steps
Instruction fetch (all instructions): IR = Mem[PC]; PC = PC + 4
Instruction decode / register fetch (all instructions): A = Reg[IR[25-21]]; B = Reg[IR[20-16]]; ALUOut = PC + (sign-extend(IR[15-0]) << 2)
Execution, address computation, branch/jump completion: R-type: ALUOut = A op B; Memory reference: ALUOut = A + sign-extend(IR[15-0]); Branch: if (A == B) then PC = ALUOut; Jump: PC = PC[31-28] || (IR[25-0] << 2)
Memory access or R-type completion: R-type: Reg[IR[15-11]] = ALUOut; Load: MDR = Mem[ALUOut]; Store: Mem[ALUOut] = B
Memory read completion: Load: Reg[IR[20-16]] = MDR
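Because branches and jumps finish after step 3, stores after step 4, and loads only after step 5, the instruction classes take different numbers of cycles. A minimal sketch of that bookkeeping:

```python
# Sketch: which of the five steps each instruction class passes through
# in the multicycle design, and hence how many cycles each one takes.
STEPS = {
    "R-type": ["fetch", "decode", "execute", "R-type completion"],
    "lw":     ["fetch", "decode", "address", "memory read", "write-back"],
    "sw":     ["fetch", "decode", "address", "memory write"],
    "beq":    ["fetch", "decode", "branch completion"],
    "j":      ["fetch", "decode", "jump completion"],
}

cycles = {instr: len(steps) for instr, steps in STEPS.items()}
print(cycles)  # lw takes 5 cycles, beq and j only 3
```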

18 Pipelining

19 Pipelining is Natural!
Pipelining provides a method for executing multiple instructions at the same time. Laundry example: Ann, Brian, Cathy, Dave each have one load of clothes to wash, dry, and fold Washer takes 30 minutes Dryer takes 40 minutes “Folder” takes 20 minutes

20 Sequential Laundry
Sequential laundry takes 6 hours for 4 loads. If they learned pipelining, how long would laundry take?

21 Pipelined Laundry: Start Work ASAP
Pipelined laundry takes 3.5 hours for 4 loads.
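The two laundry totals can be checked with a few lines of arithmetic. The key point is that once the pipe is full, the slowest stage (the 40-minute dryer) paces every subsequent load.

```python
# Timing sketch for the laundry example: washer 30 min, dryer 40 min,
# folder 20 min, four loads.
wash, dry, fold, loads = 30, 40, 20, 4

sequential = loads * (wash + dry + fold)  # one load at a time

# Pipelined: the first wash fills the pipe, then the dryer (the slowest
# stage) paces every load, and the last fold finishes afterwards.
pipelined = wash + loads * dry + fold

print(sequential / 60, "hours sequential")  # 6.0
print(pipelined / 60, "hours pipelined")    # 3.5
```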

22 Pipelining Lessons
Pipelining doesn’t help the latency of a single task; it helps the throughput of the entire workload The pipeline rate is limited by the slowest pipeline stage Multiple tasks operate simultaneously using different resources Potential speedup = number of pipe stages Unbalanced lengths of pipe stages reduce speedup Time to “fill” the pipeline and time to “drain” it reduce speedup Stall for dependences

23 Basic DataPath
What do we need to add to actually split the datapath into stages?

24 Pipeline DataPath

25 The Five Stages of the Load Instruction
Ifetch: fetch the instruction from the instruction memory Reg/Dec: register fetch and instruction decode Exec: calculate the memory address Mem: read the data from the data memory Wr: write the data back to the register file Each of these five steps takes one clock cycle to complete, and in pipeline terminology each step is referred to as one stage of the pipeline.

26 Pipelined Execution
On a pipelined processor, multiple instructions are in various stages at the same time. Assume each instruction takes five cycles: IFetch, Dcd, Exec, Mem, WB.

27 Single Cycle, Multiple Cycle, vs. Pipeline
Here are the timing diagrams showing the differences between the single-cycle, multiple-cycle, and pipelined implementations. For example, in the pipelined implementation, we can finish executing the Load, Store, and R-type instruction sequence in seven cycles. In the multiple-clock-cycle implementation, however, we cannot start executing the store until Cycle 6 because we must wait for the load instruction to complete. Similarly, we cannot start the execution of the R-type instruction until the store instruction has completed its execution in Cycle 9. In the single-cycle implementation, the cycle time is set to accommodate the longest instruction, the load. Consequently, the cycle time for the single-cycle implementation can be five times longer than for the multiple-cycle implementation. Perhaps more importantly, since the cycle time has to be long enough for the load instruction, it is too long for the store instruction, so the last part of the cycle is wasted.

28 Graphically Representing Pipelines
Can help with answering questions like: How many cycles does it take to execute this code? What is the ALU doing during cycle 4? Are two instructions trying to use the same resource at the same time?

29 Why Pipeline? Because the resources are there!

30 Pipelining MIPS Execution

31 Observations
Use separate instruction and data memories to reduce memory conflicts The memory system must deliver 5 times the bandwidth if the pipelined processor has a clock cycle equal to the unpipelined processor’s The register file is used in two stages: one read in ID and one write in WB, so 2 reads and 1 write in every clock cycle Maintaining the PC: the IF stage prepares the PC for the next instruction, and the ID stage computes the branch target address The problem: a branch does not change the PC until the ID stage

32 Why Pipeline?
Suppose 100 instructions are executed The single-cycle machine has a cycle time of 45 ns The multicycle and pipelined machines have cycle times of 10 ns The multicycle machine has a CPI of 4.6 Single-cycle machine: 45 ns/cycle x 1 CPI x 100 inst = 4500 ns Multicycle machine: 10 ns/cycle x 4.6 CPI x 100 inst = 4600 ns Ideal pipelined machine: 10 ns/cycle x (1 CPI x 100 inst + 4 cycles drain) = 1040 ns Ideal pipelined vs. single-cycle speedup: 4500 ns / 1040 ns = 4.33 What has not yet been considered?
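The slide's arithmetic is easy to reproduce and check:

```python
# Reproducing the slide's comparison for 100 instructions.
n = 100
single = 45 * 1 * n        # 45 ns/cycle, CPI 1
multi  = 10 * 4.6 * n      # 10 ns/cycle, CPI 4.6
pipe   = 10 * (1 * n + 4)  # CPI 1, plus 4 cycles to drain the pipe

print(single, multi, pipe)      # 4500, 4600, and 1040 ns respectively
print(round(single / pipe, 2))  # 4.33x ideal speedup over single cycle
```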

33 Compare Performance
Compare single-cycle, multicycle, and pipelined control using SPECint2000. Single-cycle: memory access = 200 ps, ALU = 100 ps, register file read or write = 50 ps, so the cycle is 600 ps Multicycle: with 25% loads, 10% stores, 11% branches, 2% jumps, and 52% ALU instructions, CPI = 4.12; the clock cycle = 200 ps (the longest functional unit) Pipelined: loads take 1 clock cycle when there is no load-use dependence and 2 when there is, averaging 1.5 per load; stores and ALU instructions take 1 clock cycle; branches take 1 when predicted correctly and 2 when not, averaging 1.25; jumps take 2 CPI = 1.5x25% + 1x10% + 1x52% + 1.25x11% + 2x2% = 1.17 Average instruction time: single-cycle = 600 ps, multicycle = 4.12 x 200 = 824 ps, pipelined = 1.17 x 200 = 234 ps The 200 ps memory access is the bottleneck. How can we improve it?
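Both CPI figures follow from the same weighted sum over the instruction mix; a few lines verify them. The per-class cycle counts for the multicycle machine (loads 5, stores 4, ALU 4, branches 3, jumps 3) follow the five-step breakdown earlier in the deck.

```python
# Reproducing the SPECint2000 comparison from the slide.
mix = {"load": 0.25, "store": 0.10, "branch": 0.11, "jump": 0.02, "alu": 0.52}

multicycle = {"load": 5,   "store": 4, "branch": 3,    "jump": 3, "alu": 4}
pipelined  = {"load": 1.5, "store": 1, "branch": 1.25, "jump": 2, "alu": 1}

cpi_multi = sum(mix[i] * multicycle[i] for i in mix)
cpi_pipe  = sum(mix[i] * pipelined[i] for i in mix)

# Average instruction time: single-cycle fixed at 600 ps; the other two
# run at 200 ps/cycle (the memory access, the longest functional unit).
print(round(cpi_multi, 2), round(cpi_pipe, 2))  # roughly 4.12 and 1.17
print(600, cpi_multi * 200, cpi_pipe * 200)     # ~600, ~824, ~234 ps
```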

34 Can pipelining get us into trouble?
Yes: pipeline hazards Structural hazards: attempt to use the same resource in two different ways at the same time E.g., two instructions try to read the same memory at the same time Data hazards: attempt to use an item before it is ready an instruction depends on the result of a prior instruction still in the pipeline, e.g., add r1, r2, r3 followed by sub r4, r2, r1 Control hazards: attempt to make a decision before the condition is evaluated branch instructions, e.g., beq r1, loop We can always resolve hazards by waiting: the pipeline control must detect the hazard and take action (or delay action) to resolve it

35 Single Memory is a Structural Hazard
With a single memory, the Mem stage of the load and the instruction fetch of a later instruction use the memory in the same cycle. Detection is easy in this case! (right-half highlight means read, left-half write)

36 Structural Hazards Limit Performance
Example: if there are 1.3 memory accesses per instruction and only one memory access per cycle, then average CPI ≥ 1.3; otherwise the memory is more than 100% utilized Solution 1: use separate instruction and data memories Solution 2: allow the memory to read and write more than one word per cycle Solution 3: stall
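The CPI bound follows from simple utilization arithmetic: a unit used u times per instruction with one use per cycle forces CPI ≥ u.

```python
# If 1.3 memory accesses per instruction must share one memory port,
# the memory is busy 1.3 cycles per instruction on average, so the
# CPI cannot drop below 1.3 no matter how deep the pipeline is.
accesses_per_instr = 1.3
ports = 1
min_cpi = max(1.0, accesses_per_instr / ports)
print(min_cpi)  # 1.3 - the single memory caps performance
```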

37 Control Hazard Solutions
Stall: wait until the decision is clear It’s possible to move the decision up to the 2nd stage by adding hardware to check the registers as they are read Impact: 2 clock cycles per branch instruction => slow

38 Control Hazard Solutions
Predict: guess one direction, then back up if wrong Predict not taken Impact: 1 clock cycle per branch instruction if right, 2 if wrong (right about 50% of the time) A more dynamic scheme: keep a history per branch (right about 90% of the time)
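The expected branch cost under each scheme is a one-line expectation: 1 cycle when the guess is right, 2 when it is wrong.

```python
# Expected cycles per branch under prediction: 1 if right, 2 if wrong.
def avg_branch_cycles(accuracy):
    return accuracy * 1 + (1 - accuracy) * 2

print(round(avg_branch_cycles(0.5), 2))  # 1.5 with a 50% static guess
print(round(avg_branch_cycles(0.9), 2))  # 1.1 with a 90% history predictor
```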

39 Control Hazard Solutions
Redefine branch behavior (the branch takes place after the next instruction): “delayed branch” Impact: 1 clock cycle per branch instruction if we can find an instruction to put in the “slot” (about 50% of the time) As processors launch more instructions per clock cycle, the delay slot becomes less useful

40 Data Hazard

41 Data Hazard: Forwarding
Use temporary results; don’t wait for them to be written to the register file register file forwarding to handle a read and write of the same register in the same cycle ALU forwarding
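The forwarding decision can be sketched as a priority check: if the instruction in MEM or WB is about to write the register that the instruction in EX reads, take the value from the pipeline register instead of the register file. The function and argument names here are illustrative, not part of any standard interface.

```python
# Minimal sketch of the ALU-input forwarding check in a 5-stage pipeline.
def forward_source(ex_src, mem_dest, mem_writes, wb_dest, wb_writes):
    """Return where the ALU operand for the EX stage should come from."""
    if mem_writes and mem_dest == ex_src and mem_dest != 0:
        return "EX/MEM"   # newest value, forwarded from the ALU output
    if wb_writes and wb_dest == ex_src and wb_dest != 0:
        return "MEM/WB"   # value being written back this cycle
    return "regfile"      # no hazard: read the register file normally

# add r1, r2, r3 followed by sub r4, r2, r1: r1 is forwarded from EX/MEM.
print(forward_source(ex_src=1, mem_dest=1, mem_writes=True,
                     wb_dest=0, wb_writes=False))
```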

42 Can’t Always Forward
Load word can still cause a hazard: an instruction tries to read a register immediately after a load instruction that writes the same register. Thus, we need a hazard detection unit to “stall” the instruction that follows the load.
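The load-use check itself is small: stall when the instruction in ID reads a register that a load currently in EX has not yet fetched from memory. Names are illustrative.

```python
# Sketch of the load-use hazard detection check.
def must_stall(ex_is_load, ex_dest, id_src1, id_src2):
    """True if the instruction in ID must stall one cycle behind a load in EX."""
    return ex_is_load and ex_dest != 0 and ex_dest in (id_src1, id_src2)

# lw $2, 0($1) followed by add $3, $2, $4: a one-cycle stall is needed.
print(must_stall(ex_is_load=True, ex_dest=2, id_src1=2, id_src2=4))  # True
```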

43 Stalling
We can stall the pipeline by keeping an instruction in the same stage.

44 Designing the Instruction Set for Pipelining
All MIPS instructions are the same length, which makes it much easier to fetch instructions in the first stage and decode them in the second In IA-32, instructions vary from 1 to 17 bytes, so pipelining is considerably more challenging; all recent IA-32 implementations translate instructions into micro-operations, which are then pipelined MIPS has only a few instruction formats, with the source register fields in the same place This symmetry means the second stage can read the register file at the same time that the hardware is decoding the type of instruction Without this symmetry we would need to split stage 2

45 Designing the Instruction Set for Pipelining
Memory operands appear only in loads and stores in MIPS We can use the execute stage to calculate the memory address and then access memory in the following stage If we could operate on an operand in memory, stages 3 and 4 would expand to an address stage, a memory stage, and then an execute stage Operands must be aligned in memory, so a single data transfer never requires two data memory accesses and can be done in a single pipeline stage

46 Summary: Pipelining
What makes it easy? all instructions are the same length just a few instruction formats memory operands appear only in loads and stores What makes it hard? structural hazards: suppose we had only one memory control hazards: need to worry about branch instructions data hazards: an instruction depends on a previous instruction We’ll talk about modern processors and what really makes it hard: exception handling trying to improve performance with out-of-order execution, etc.

47 Summary
Pipelining is a fundamental concept: multiple steps using distinct resources Utilize the capabilities of the datapath by pipelining instruction processing start the next instruction while working on the current one throughput is limited by the length of the longest stage (plus fill/flush time) detect and resolve hazards
