Chapter 8. Pipelining.

Overview Pipelining is widely used in modern processors. Pipelining improves system performance in terms of throughput. Pipelined organization requires sophisticated compilation techniques.

Making the Execution of Programs Faster Use faster circuit technology to build the processor and the main memory. Arrange the hardware so that more than one operation can be performed at the same time. With the second approach, the number of operations performed per second increases even though the elapsed time needed to perform any one operation is unchanged.

Basic Concepts

Traditional Pipeline Concept: Laundry Example. Ann, Brian, Cathy, and Dave each have one load of clothes to wash, dry, and fold. The washer takes 30 minutes, the dryer takes 40 minutes, and the “folder” takes 20 minutes.

Traditional Pipeline Concept: Sequential laundry takes 6 hours for 4 loads. Each load goes through the 30-minute wash, 40-minute dry, and 20-minute fold one load at a time, filling the evening from 6 PM to midnight. If they learned pipelining, how long would laundry take?

Traditional Pipeline Concept: Pipelined laundry takes only 3.5 hours for 4 loads. The loads overlap in time: as soon as one load moves from the washer to the dryer, the next load enters the washer.

Traditional Pipeline Concept: Pipelining doesn’t help the latency of a single task; it helps the throughput of the entire workload. The pipeline rate is limited by the slowest pipeline stage. Multiple tasks operate simultaneously using different resources. Potential speedup = number of pipe stages. Unbalanced lengths of pipe stages reduce the speedup, as do the time to “fill” the pipeline and the time to “drain” it. Stalls occur for dependences.
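
Working the laundry numbers through (this arithmetic is added for clarity, using the 30/40/20-minute stage times from the slides):

T_{\text{sequential}} = 4 \times (30 + 40 + 20) = 360 \text{ min} = 6 \text{ hours}, \qquad T_{\text{pipelined}} = 30 + 4 \times 40 + 20 = 210 \text{ min} = 3.5 \text{ hours}.

Once the pipeline is full, one load finishes every 40 minutes, because the 40-minute dryer is the slowest stage and sets the rate.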

Use the Idea of Pipelining in a Computer: Fetch + Execute. Figure 8.1. Basic idea of instruction pipelining: (a) sequential execution – each instruction Ii is fetched (Fi) and then executed (Ei) before the next one starts; (b) hardware organization – an instruction fetch unit and an execution unit separated by interstage buffer B1; (c) pipelined execution – the fetch of I2 overlaps the execution of I1, so one instruction completes in every clock cycle once the pipeline is full.

Use the Idea of Pipelining in a Computer: a four-stage pipeline, Fetch + Decode + Execute + Write (textbook page 457).
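
To make the four-stage flow concrete, here is a minimal sketch (added for illustration, not part of the slides) that prints the cycle-by-cycle stage occupancy of an ideal 4-stage pipeline, assuming one cycle per stage and no hazards:

    # Ideal 4-stage pipeline diagram: Fetch, Decode, Execute, Write.
    STAGES = ["F", "D", "E", "W"]

    def pipeline_diagram(num_instructions):
        # Instruction i enters the pipeline in cycle i and advances one stage per cycle.
        total_cycles = num_instructions + len(STAGES) - 1
        rows = []
        for i in range(num_instructions):
            row = ["--"] * total_cycles
            for s, stage in enumerate(STAGES):
                row[i + s] = stage + str(i + 1)
            rows.append(row)
        return rows, total_cycles

    rows, total = pipeline_diagram(4)
    print("total cycles:", total)   # 4 + 4 - 1 = 7
    for row in rows:
        print(" ".join(row))

Running it shows F1 D1 E1 W1 on the first row, F2 D2 E2 W2 shifted one cycle to the right on the second row, and so on; four instructions finish in 7 cycles instead of the 16 a purely sequential machine would need.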

Role of Cache Memory Each pipeline stage is expected to complete in one clock cycle. The clock period should be long enough to let the slowest pipeline stage complete; faster stages can only wait for the slowest one. Since main memory is very slow compared to the processor, if each instruction had to be fetched from main memory, the pipeline would be almost useless. Fortunately, we have caches.

Pipeline Performance The potential increase in performance resulting from pipelining is proportional to the number of pipeline stages. However, this increase would be achieved only if all pipeline stages require the same time to complete, and there is no interruption throughout program execution. Unfortunately, this is not true.
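
As a rough way to quantify this (a standard back-of-the-envelope estimate added here, not stated on the slide): for a program of n instructions on a k-stage pipeline with clock period \tau, assuming one cycle per stage and no stalls,

T_{\text{sequential}} = n k \tau, \qquad T_{\text{pipelined}} = (k + n - 1)\,\tau, \qquad \text{Speedup} = \frac{n k}{k + n - 1} \xrightarrow{\,n \to \infty\,} k,

so the best case approaches a k-fold improvement, which is what “proportional to the number of pipeline stages” means in practice.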

Pipeline Performance

Pipeline Performance The previous pipeline is said to have been stalled for two clock cycles. Any condition that causes a pipeline to stall is called a hazard. Data hazard – any condition in which either the source or the destination operands of an instruction are not available at the time expected in the pipeline. So some operation has to be delayed, and the pipeline stalls. Instruction (control) hazard – a delay in the availability of an instruction causes the pipeline to stall. Structural hazard – the situation when two instructions require the use of a given hardware resource at the same time.

Pipeline Performance – instruction hazard: idle periods in the pipeline are called stalls, or bubbles.

Pipeline Performance – structural hazard example: Load X(R1), R2.

Pipeline Performance Again, pipelining does not result in individual instructions being executed faster; rather, it is the throughput that increases. Throughput is measured by the rate at which instruction execution is completed. A pipeline stall causes degradation in pipeline performance. We need to identify all hazards that may cause the pipeline to stall and find ways to minimize their impact.

Quiz Four instructions; I2 takes two clock cycles in the Execute stage. Draw the figure for the 4-stage pipeline and work out the total number of cycles needed for all four instructions to complete.

Data Hazards

Data Hazards We must ensure that the results obtained when instructions are executed in a pipelined processor are identical to those obtained when the same instructions are executed sequentially. Hazard occurs: A ← 3 + A followed by B ← 4 × A (the second operation needs the new value of A, so it cannot start until the first completes). No hazard: A ← 5 × C and B ← 20 + C (both depend only on C). When two operations depend on each other, they must be executed sequentially in the correct order. Another example: Mul R2, R3, R4 followed by Add R5, R4, R6 – the Add reads R4, which the Mul writes.

Data Hazards – Figure 8.6. Pipeline stalled by data dependency between D2 and W1.

Operand Forwarding Instead of reading it from the register file, the second instruction can get the data directly from the output of the ALU once the previous instruction has computed it. A special arrangement needs to be made to “forward” the output of the ALU to the input of the ALU.
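
A minimal sketch of the forwarding decision (added for illustration; the dictionary-based register file and interstage-buffer fields are assumptions, not the textbook's exact datapath, and destination-last operand order is assumed):

    # Sketch: pick an ALU input either from the register file or, if the previous
    # instruction is about to write that same register, from the ALU result still
    # sitting in the interstage buffer (the forwarding/bypass path).
    def alu_operand(src_reg, regfile, ex_buffer):
        if ex_buffer is not None and ex_buffer["dest"] == src_reg:
            return ex_buffer["value"]   # forwarded value, no stall needed
        return regfile[src_reg]         # normal register-file read

    regfile = {"R2": 5, "R3": 7, "R4": 0, "R5": 2, "R6": 0}
    # Mul R2, R3, R4 (R4 <- R2 * R3) has just left the ALU; R4 is not yet written back.
    ex_buffer = {"dest": "R4", "value": regfile["R2"] * regfile["R3"]}
    # Add R5, R4, R6 (R6 <- R5 + R4) reads R4 through the forwarding path:
    print(regfile["R5"] + alu_operand("R4", regfile, ex_buffer))   # 37, no stall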

Handling Data Hazards in Software Let the compiler detect and handle the hazard: I1: Mul R2, R3, R4 NOP I2: Add R5, R4, R6 The compiler can reorder the instructions to perform some useful work during the NOP slots.
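
A minimal sketch of what such a compiler pass might look like (added for illustration; the text-based instruction format, destination-last operand order, and the one-slot gap are assumptions chosen to match the Mul/Add example above):

    # Sketch: insert NOPs so that no instruction reads a register written by any of
    # the `gap` immediately preceding issue slots (destination is the last operand).
    def insert_nops(program, gap=1):
        out = []
        for instr in program:
            parts = instr.replace(",", " ").split()
            srcs = parts[1:-1]   # source registers of the current instruction
            while any(prev != "NOP" and prev.replace(",", " ").split()[-1] in srcs
                      for prev in out[-gap:]):
                out.append("NOP")
            out.append(instr)
        return out

    print(insert_nops(["Mul R2, R3, R4", "Add R5, R4, R6"]))
    # ['Mul R2, R3, R4', 'NOP', 'Add R5, R4, R6']

A real compiler would then try to move an independent instruction into that NOP slot instead of leaving it empty.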

Side Effects The previous example is explicit and easily detected. Sometimes an instruction changes the contents of a register other than the one named as the destination. When a location other than one explicitly named in an instruction as a destination operand is affected, the instruction is said to have a side effect. (Example?) Example: condition code flags: Add R1, R3 followed by AddWithCarry R2, R4 – the second instruction implicitly depends on the carry flag set by the first. Instructions designed for execution on pipelined hardware should have few side effects.

Instruction Hazards

Overview Whenever the stream of instructions supplied by the instruction fetch unit is interrupted, the pipeline stalls. Two main causes: a cache miss and a branch.

Unconditional Branches

Branch Timing - Branch penalty - Reducing the penalty
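
One rough way to express the cost (a back-of-the-envelope estimate added here; b and p are assumed quantities, not figures from the slides): if a fraction b of executed instructions are branches and each one incurs a penalty of p extra cycles, then

\text{average cycles per instruction} \approx 1 + b \cdot p,

so reducing the penalty p directly shrinks the second term.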

Instruction Queue and Prefetching. Figure 8.10. Use of an instruction queue in the hardware organization of Figure 8.2b: the instruction fetch unit (F: fetch instruction) places instructions into an instruction queue, from which they proceed through D (dispatch/decode instruction), E (execute instruction), and W (write results).

Conditional Branches A conditional branch instruction introduces the added hazard caused by the dependency of the branch condition on the result of a preceding instruction. The decision to branch cannot be made until the execution of that instruction has been completed. Branch instructions represent about 20% of the dynamic instruction count of most programs.
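
Plugging that 20% figure into the estimate above with an assumed penalty of 2 cycles (the penalty value is illustrative only):

1 + 0.2 \times 2 = 1.4 \text{ cycles per instruction on average},

i.e. roughly a 40% slowdown relative to the ideal one-instruction-per-cycle pipeline, which is why handling conditional branches well matters so much.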