Chapter 12: Pipelining Strategies, Performance, and Hazards


Example Register Organizations

Pentium 4 Organization

PowerPC Register Organization

Simple Instruction Cycle Model: Where is the time spent here?

Faster Processing. Faster processing can be achieved through: a faster cycle time, dividing the instruction cycle into more states, and implementing parallelism.

Prefetch. Consider the instruction cycle as two steps: fetch instruction, then execute instruction (execution often does not access main memory). Can the computer fetch the next instruction during execution of the current instruction? This is called instruction prefetch. What are the implications of prefetch?

A Two-Stage Instruction Pipeline. What additional hardware is required for prefetch?

Improved Performance with Prefetch. Speed improves, but it does not double. Why? Fetch is usually shorter than execution, and any jump or branch means that the prefetched instructions are not the required instructions. Could we prefetch more than one instruction? Could we add more stages to further improve performance?
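As a rough illustration (the timing numbers below are assumptions, not from the slides), overlapping a short fetch with a longer execute hides the fetch but leaves the execute time untouched:

```python
# Illustrative only: assumed stage times, not taken from the slides.
FETCH_NS = 20   # assumed fetch time per instruction
EXEC_NS = 60    # assumed execute time per instruction
N = 100         # instructions in the stream

sequential = N * (FETCH_NS + EXEC_NS)   # no prefetch: 8000 ns
prefetched = FETCH_NS + N * EXEC_NS     # fetch hidden behind execute: 6020 ns
print(f"speedup = {sequential / prefetched:.2f}")   # ~1.33x, well short of 2x
```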

Instruction Cycle with Indirect Addressing. What is the benefit of this organization?

Five-State Instruction Cycle: fetch instruction, decode instruction, fetch operands (calculate address and get data), execute (process data), write result (calculate address and store data).

Instruction Cycle State Diagram

Pipelining. Break the instruction cycle into a sequence of steps: Fetch Instruction (FI), Decode Instruction (DI), Calculate Operand addresses (CO), Fetch Operands (FO), Execute Instruction (EI), Write Operand (WO), and Check for Interrupt (CI). Consider it as an "assembly line" of operations: we can begin the next instruction's sequence before the previous one has finished. For example, we can fetch the next instruction while the present one is being decoded. This is pipelining.

Pipeline "stations". Let's define a possible set of pipeline stations: Fetch Instruction (FI), Decode Instruction (DI), Calculate Operand Addresses (CO), Fetch Operands (FO), Execute Instruction (EI), Write Operand (WO).

Possible Timing Diagram for Instruction Pipeline Operation. Limitations: the cycle time is set by the slowest stage, some instructions do not need every stage, and there is overhead in transferring work between stages.
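A minimal sketch of that timing diagram, assuming one cycle per stage and no hazards (stage names as defined above):

```python
# Minimal sketch: ideal six-stage pipeline, one cycle per stage, no hazards.
STAGES = ["FI", "DI", "CO", "FO", "EI", "WO"]

def timing_chart(num_instructions):
    """For each instruction, map cycle number -> stage occupied that cycle."""
    rows = []
    for i in range(num_instructions):
        rows.append({i + 1 + s: STAGES[s] for s in range(len(STAGES))})
    return rows

total_cycles = len(STAGES) + 4 - 1   # 6 + (n - 1) cycles for n = 4
for n, row in enumerate(timing_chart(4), start=1):
    line = "".join(f"{row.get(c, '--'):>4}" for c in range(1, total_cycles + 1))
    print(f"I{n}:{line}")
# Four instructions complete in 9 cycles instead of 4 * 6 = 24.
```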

The Impact of a Conditional Branch on Instruction Pipeline Operation. Instruction 3 is a conditional branch to instruction 15:

Alternative Pipeline View. Instruction 3 is a conditional branch to instruction 15:

Pipeline Flowchart for Branches

Speedup Factors with Instruction Pipelining
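The usual analysis behind such curves: with k stages of equal duration and n instructions (and no hazards), a pipelined run takes k + (n - 1) cycles versus n * k cycles unpipelined, so the speedup is n*k / (k + n - 1), approaching k only for long runs. A quick check of the numbers (sketch, ideal conditions assumed):

```python
def pipeline_speedup(k, n):
    """Ideal speedup: n*k cycles unpipelined vs. k + (n - 1) cycles pipelined."""
    return (n * k) / (k + n - 1)

for n in (1, 10, 100, 1000):
    print(f"{n:5d} instructions: speedup {pipeline_speedup(6, n):.2f}")
# Speedup approaches the number of stages (here 6) only for long instruction runs.
```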

Pipeline Hazards. Types of pipeline hazards: structural (or resource), data, and control.

Structural Hazards. Structural hazards occur when two instructions in the pipeline need the same hardware resource at the same time: memory, a functional unit, etc.

Example: Resource Hazard. The fetch of I3 has to stall while I1 accesses memory for its operand.

Data Hazards. Data hazards occur when instructions conflict in their access to a memory location or a register.

Types of Data Hazards:
Read After Write (RAW), a true dependency: a hazard occurs if the read happens before the write is complete.
Write After Read (WAR), an anti-dependency: a hazard occurs if the write happens before the read.
Write After Write (WAW), an output dependency: a hazard occurs if the two writes complete in the reverse of the intended order.

Example: RAW Data Hazard. The second instruction must stall until the first instruction has written EAX before fetching it as an operand. Is there a way of stalling one cycle instead of two?
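One common answer (not stated on the slide) is operand forwarding, also called bypassing: the ALU result is routed straight from the pipeline latch to the dependent instruction instead of waiting for write-back. A minimal sketch, with hypothetical names:

```python
# Sketch of a bypass (forwarding) check; field names are illustrative.
# If an earlier instruction has produced a value that is not yet written back,
# read it from the pipeline latch instead of the register file.

def read_operand(reg, register_file, in_flight):
    """in_flight: list of (dest_reg, value) results awaiting write-back."""
    for dest, value in reversed(in_flight):   # newest producer wins
        if dest == reg:
            return value                      # forwarded: most of the stall disappears
    return register_file[reg]                 # no producer in flight

regs = {"EAX": 0, "ECX": 3}
pending = [("EAX", 10)]                       # result of an earlier ADD, not yet written back
print(read_operand("EAX", regs, pending))     # 10, taken from the bypass path
```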

The Other Data Hazards.
Write After Read (WAR), an anti-dependency: a hazard occurs if the write happens before the read. Example?
Write After Write (WAW), an output dependency: a hazard occurs if the two writes complete in the reverse of the intended order.
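A minimal sketch of how the three cases can be identified from each instruction's read and write sets (the register names are illustrative):

```python
# Classify the dependency between two instructions in program order i1 -> i2,
# given the registers each one reads and writes. Illustrative register names.

def classify(i1_reads, i1_writes, i2_reads, i2_writes):
    hazards = []
    if i1_writes & i2_reads:
        hazards.append("RAW (true dependency)")
    if i1_reads & i2_writes:
        hazards.append("WAR (anti-dependency)")
    if i1_writes & i2_writes:
        hazards.append("WAW (output dependency)")
    return hazards or ["none"]

# i1: ADD EAX, EBX   (reads EAX, EBX; writes EAX)
# i2: SUB ECX, EAX   (reads ECX, EAX; writes ECX)
print(classify({"EAX", "EBX"}, {"EAX"}, {"ECX", "EAX"}, {"ECX"}))
# ['RAW (true dependency)']
```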

Control Hazards. Control hazards occur when a wrong fetch decision forces a new instruction fetch and the pipeline to be flushed. Solutions include: multiple pipeline streams, prefetching the branch target, using a loop buffer, branch prediction, delayed branch, reordering of instructions, multiple copies of registers, and getting the branch target early.

Multiple Streams. Have two pipelines and prefetch each branch path into a separate pipeline, then use the appropriate one. Challenges: this leads to bus and register contention, and multiple branches require yet more pipelines.

Prefetch Branch Target. The target of the branch is prefetched in addition to the instructions following the branch, and is kept until the branch is executed.

Using a Loop Buffer. Keep a small, fast memory holding the most recently fetched n instructions (perhaps already decoded). Loops that execute repeatedly are likely to fit entirely within it.
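A minimal sketch of the idea, with an assumed capacity and interface:

```python
# Sketch of a loop buffer: a tiny store of recently fetched instructions,
# checked before going to memory. Capacity and interface are assumptions.
from collections import OrderedDict

class LoopBuffer:
    def __init__(self, capacity=256):
        self.slots = OrderedDict()            # address -> instruction word
        self.capacity = capacity

    def fetch(self, address, memory):
        if address in self.slots:             # hit: loop body already captured
            return self.slots[address]
        word = memory[address]                # miss: fetch from memory as usual
        self.slots[address] = word
        if len(self.slots) > self.capacity:   # evict the oldest entry
            self.slots.popitem(last=False)
        return word

# A tight loop re-fetching the same addresses hits the buffer after one pass.
memory = {addr: f"insn@{addr}" for addr in range(0, 64, 4)}
buf = LoopBuffer()
for _ in range(2):
    for addr in range(0, 16, 4):
        buf.fetch(addr, memory)
```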

Loop Buffer

Branch Prediction. Approaches: predict branch never taken, predict branch always taken, predict by opcode, use a taken/not-taken switch, maintain a branch history table, or get help from the compiler.

Predict Branch Taken / Not Taken. Predict never taken: assume the jump will not happen and always fetch the next sequential instruction. Predict always taken: assume the jump will happen and always fetch the target instruction. Which is better? Consider possible page faults when prefetching the target.
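A quick way to compare the two static policies is to score each against a recorded outcome trace; the trace below is invented for illustration:

```python
# Compare "always taken" vs "never taken" on a made-up branch outcome trace.
# True = taken, False = not taken; e.g. a loop branch taken 9 times out of 10.
trace = [True] * 9 + [False]

always_taken = sum(outcome for outcome in trace) / len(trace)
never_taken = sum(not outcome for outcome in trace) / len(trace)
print(f"always taken: {always_taken:.0%}, never taken: {never_taken:.0%}")
# always taken: 90%, never taken: 10% -- loop-closing branches favor "taken".
```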

Branch Prediction by Opcode / Switch. Predict by opcode: some instructions are more likely to result in a jump than others; this strategy can achieve up to 75% success. Taken/not-taken switch: prediction based on previous history; good for loops, and perhaps a good match for programmer style.

Branch Prediction Flowchart

Branch Prediction State Diagram
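The two-bit saturating counter behind such a four-state diagram is straightforward to sketch; the table size and indexing below are assumptions:

```python
# Sketch of a 2-bit saturating-counter predictor, one counter per branch
# address slot. Table size and indexing are illustrative assumptions.
class TwoBitPredictor:
    def __init__(self, entries=1024):
        self.entries = entries
        self.counters = [2] * entries        # 0..3; start at "weakly taken"

    def _index(self, branch_address):
        return branch_address % self.entries

    def predict(self, branch_address):
        return self.counters[self._index(branch_address)] >= 2   # True = taken

    def update(self, branch_address, taken):
        i = self._index(branch_address)
        if taken:
            self.counters[i] = min(3, self.counters[i] + 1)
        else:
            self.counters[i] = max(0, self.counters[i] - 1)

p = TwoBitPredictor()
for outcome in [True, True, False, True]:    # one "not taken" blip in a loop
    print(p.predict(0x400), outcome)
    p.update(0x400, outcome)
# The single misprediction does not flip the prediction: two consecutive
# mispredictions are needed, which is why this scheme works well for loops.
```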

Maintain a Branch Table. Perhaps maintain a cache-like table with three fields per entry: the address of the branch, the history of its branching behavior, and the branch target.

Branch History Table

Delayed Branch. In a delayed branch, the branch is moved ahead of independent instructions that precede it; those instructions, which now follow the branch, can be executed while the branch target is being determined. What would it take to actually do this?

Instruction Reordering. A judicious reordering of instructions can eliminate (or hide) data hazards. How can this be implemented?
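A purely illustrative compile-time tactic: find an instruction with no register dependencies on a RAW pair and hoist it in between, so the bubble is filled with useful work (memory dependencies are ignored in this toy sketch):

```python
# Toy sketch: hoist one independent instruction between a RAW producer and its
# consumer. Only register dependencies are modelled; names are illustrative.

def independent(a, b):
    """True if instructions a and b have no register dependency either way."""
    return not (a["writes"] & (b["reads"] | b["writes"]) or
                a["reads"] & b["writes"])

def fill_gap(program):
    prog = list(program)
    for i in range(len(prog) - 1):
        producer, consumer = prog[i], prog[i + 1]
        if producer["writes"] & consumer["reads"]:            # adjacent RAW pair
            for j in range(i + 2, len(prog)):
                candidate = prog[j]
                # Safe only if the candidate is independent of everything it jumps over.
                if all(independent(candidate, other) for other in prog[i:j]):
                    prog.insert(i + 1, prog.pop(j))
                    break
    return prog

program = [
    {"op": "ADD EAX, EBX", "reads": {"EAX", "EBX"}, "writes": {"EAX"}},
    {"op": "SUB ECX, EAX", "reads": {"ECX", "EAX"}, "writes": {"ECX"}},
    {"op": "MOV EDX, 1",   "reads": set(),          "writes": {"EDX"}},
]
print([ins["op"] for ins in fill_gap(program)])
# ['ADD EAX, EBX', 'MOV EDX, 1', 'SUB ECX, EAX']
```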

Multiple Copies of Registers. Having multiple copies of registers (perhaps as many as one set per stage) can eliminate many data hazards. How would you implement this?
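In modern processors the closely related mechanism is register renaming: each new write gets its own physical copy, which removes WAR and WAW hazards entirely. A minimal sketch (the names and structure are assumptions, not the slide's design):

```python
# Sketch of register renaming: each architectural destination gets a fresh
# physical register, removing WAR/WAW conflicts. Illustrative names and sizes.
class RenameTable:
    def __init__(self):
        self.mapping = {}        # architectural register -> physical register
        self.next_physical = 0

    def source(self, arch_reg):
        """Physical register currently holding arch_reg's value."""
        return self.mapping.setdefault(arch_reg, self._fresh())

    def destination(self, arch_reg):
        """Allocate a new physical register for a write to arch_reg."""
        self.mapping[arch_reg] = self._fresh()
        return self.mapping[arch_reg]

    def _fresh(self):
        name = f"p{self.next_physical}"
        self.next_physical += 1
        return name

rt = RenameTable()
# ADD EAX, EBX ; MOV EBX, 1  -- the WAR hazard on EBX vanishes after renaming.
print(rt.source("EAX"), rt.source("EBX"), "->", rt.destination("EAX"))
print("->", rt.destination("EBX"))
```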

Get Branch Target Early. The branch target is often available before the end of the pipeline; for example, a JMP has it available as soon as the source-operand stage is completed. There is no need to wait until the write-back stage completes to begin fetching the next instruction. What would it take to implement this?

Example: Intel 80486 Pipelining.
Fetch: from cache or external memory; instructions are placed in one of two 16-byte prefetch buffers; a buffer is refilled with new data as soon as the old data is consumed; on average 5 instructions are fetched per load; operates independently of the other stages to keep the buffers full.
Decode stage 1 (D1): extracts opcode and address-mode information from at most the first 3 bytes of the instruction; can direct the D2 stage to get the rest of the instruction.
Decode stage 2 (D2): expands the opcode into control signals; computes complex addressing modes.
Execute (EX): ALU operations, cache access, register update.
Write back (WB): updates registers and flags; results destined for memory are sent to the cache and the bus-interface write buffers.

80486 Instruction Pipeline Examples