Pipelined Implementation, Part I


– 2 – Overview
General Principles of Pipelining
- Goal
- Difficulties
Creating a Pipelined Y86 Processor
- Rearranging SEQ
- Inserting pipeline registers
- Problems with data and control hazards

– 3 – Suggested Reading: Sections 4.3.5, 4.4, 4.5

– 4 – SEQ Hardware (Review)
- Stages occur in sequence
- One operation is in process at a time
(Figure 4.21, p. 293)

– 5 – SEQ+ Hardware
Still a sequential implementation
- Reorder the PC stage to put it at the beginning
PC Stage
- Task is to select the PC for the current instruction
- Based on results computed by the previous instruction
Processor State
- PC is no longer stored in a register
- But the PC can be determined from other stored information

– 6 – Problems with SEQ and SEQ+
Too slow
- Too many tasks must finish within one clock cycle
- Signals need a long time to propagate through all of the stages
- The clock must run slowly enough to allow this
Does not make good use of hardware units
- Each unit is active for only part of the total clock cycle

– 7 – Real-World Pipelines: Car Washes
Idea
- Divide the process into independent stages
- Move objects through the stages in sequence
- At any given time, multiple objects are being processed
(Diagram: sequential vs. parallel vs. pipelined car washes)

– 8 – Computational Example
System
- Computation requires a total of 300 picoseconds
- An additional 20 picoseconds is needed to save the result in a register
- Must have a clock cycle of at least 320 ps
(Figure 4.32, p. 310: combinational logic (300 ps) followed by a register (20 ps); delay = 320 ps, throughput = 3.12 GOPS)

– 9 – 3-Way Pipelined Version
System
- Divide the combinational logic into 3 blocks of 100 ps each
- Can begin a new operation as soon as the previous one passes through stage A
- Begin a new operation every 120 ps
- Overall latency increases to 360 ps from start to finish
(Figure 4.33(a), p. 310: three 100 ps logic blocks, each followed by a 20 ps register; delay = 360 ps, throughput = 8.33 GOPS)
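The numbers on this slide and the previous one can be reproduced with a short, self-contained C program. This is only an illustrative sketch using the delays given on the slides (300 ps of total logic, 20 ps per register); the formulas, not the code, come from the text.

    #include <stdio.h>

    /* Clock period = slowest stage's logic delay + register load delay. */
    static double clock_ps(const double *stage_ps, int nstages, double reg_ps)
    {
        double slowest = 0.0;
        for (int i = 0; i < nstages; i++)
            if (stage_ps[i] > slowest)
                slowest = stage_ps[i];
        return slowest + reg_ps;
    }

    int main(void)
    {
        double reg_ps = 20.0;
        double unpiped[] = { 300.0 };               /* one big stage     */
        double piped[]   = { 100.0, 100.0, 100.0 }; /* 3 balanced stages */

        double c1 = clock_ps(unpiped, 1, reg_ps);
        double c3 = clock_ps(piped, 3, reg_ps);

        /* Throughput in GOPS = 1000 / (clock period in ps). */
        printf("unpipelined: clock = %.0f ps, latency = %.0f ps, %.2f GOPS\n",
               c1, 1 * c1, 1000.0 / c1);
        printf("3-stage:     clock = %.0f ps, latency = %.0f ps, %.2f GOPS\n",
               c3, 3 * c3, 1000.0 / c3);
        return 0;
    }

Because the period is the slowest stage plus the register delay, the same helper also explains the nonuniform-delay example a few slides later.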

– 10 – Pipeline Diagrams
Unpipelined
- Cannot start a new operation until the previous one completes
3-Way Pipelined
- Up to 3 operations in process simultaneously
(Figure 4.33(b), p. 310: pipeline diagrams showing OP1-OP3 occupying stages A, B, C over time)

– 11 – Operating a Pipeline
(Figure 4.35, p. 312: snapshots of the 3-stage pipeline at times 239, 241, 300, and 359 ps, showing OP1-OP3 moving through stages A, B, and C as the clock rises)

– 12 – Limitations: Nonuniform Delays
- Throughput is limited by the slowest stage
- Other stages sit idle for much of the time
- It is challenging to partition a system into balanced stages
(Figure 4.36, p. 313: stages of 50, 150, and 100 ps, each followed by a 20 ps register; delay = 510 ps, throughput = 5.88 GOPS)

– 13 – Limitations: Register Overhead
- As we try to deepen the pipeline, the overhead of loading the pipeline registers becomes more significant
- Percentage of the clock cycle spent loading a register:
  - 1-stage pipeline: 6.25%
  - 3-stage pipeline: 16.67%
  - 6-stage pipeline: 28.57%
- The high speeds of modern processor designs are obtained through very deep pipelining
(Figure 4.37, p. 315: six 50 ps logic blocks, each followed by a 20 ps register; delay = 420 ps, throughput = 14.29 GOPS)
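A similar sketch, again assuming the slide's numbers (300 ps of total logic split into balanced stages, 20 ps per pipeline register), shows how register overhead and throughput change with pipeline depth.

    #include <stdio.h>

    int main(void)
    {
        double total_logic_ps = 300.0;
        double reg_ps = 20.0;
        int depths[] = { 1, 3, 6 };

        for (int i = 0; i < 3; i++) {
            int k = depths[i];
            double stage_ps = total_logic_ps / k;   /* balanced stages   */
            double clock = stage_ps + reg_ps;       /* period per stage  */
            printf("%d-stage: clock = %5.1f ps, register overhead = %5.2f%%, "
                   "throughput = %5.2f GOPS\n",
                   k, clock, 100.0 * reg_ps / clock, 1000.0 / clock);
        }
        return 0;
    }

Running it reproduces the percentages above: the deeper the pipeline, the larger the fixed 20 ps register delay looms relative to each shrinking logic stage.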

– 14 – Data Dependencies
System
- Each operation depends on the result of the preceding one
(Figure 4.38(a),(b), p. 316: unpipelined system with a feedback path; OP1-OP3 must execute strictly in sequence)

– 15 – Data Hazards
- The result does not feed back around in time for the next operation
- Pipelining has changed the behavior of the system
(Figure 4.38(c), p. 316: 3-stage pipelined version of the system with feedback)

– 16 – Data Dependencies in Processors
- The result of one instruction is used as an operand of another: a read-after-write (RAW) dependency
- Very common in actual programs
- Must make sure our pipeline handles these properly
  - Get correct results
  - Minimize the performance impact
Example (p. 315):
  irmovl $50, %eax
  addl %eax, %ebx
  mrmovl 100(%ebx), %edx

– 17 – Control Dependence
Example (p. 316):
  loop: subl %edx, %ebx
        jne targ
        irmovl $10, %edx
        jmp loop
  targ: halt
The jne instruction creates a control dependency: which instruction will be executed next?

– 18 – Adding Pipeline Registers
(Figure 4.39, p. 318; Figure 4.30, p. 306)

– 19 – Pipeline Stages
Fetch
- Select the current PC
- Read the instruction
- Compute the incremented PC
Decode
- Read program registers
Execute
- Operate the ALU
Memory
- Read or write data memory
Write Back
- Update the register file
(Figure 4.39, p. 318)

– 20 – PIPE- Hardware
- Pipeline registers hold intermediate values from instruction execution
Forward (Upward) Paths
- Values are passed from one stage to the next
- Cannot jump past stages; e.g., valC passes through decode
(Figure 4.41, p. 320)

– 21 – Feedback Paths
Predicted PC
- Guess the value of the next PC
Branch information
- Jump taken / not taken
- Fall-through or target address
Return point
- Read from memory
Register updates
- To the register file write ports

– 22 – Predicting the PC
- Start fetching a new instruction after the current one has completed the fetch stage
  - Not enough time to reliably determine the next instruction
- Guess which instruction will follow
  - Recover if the prediction was incorrect
(Figure 4.56, p. 338)

– 23 – Our Prediction Strategy
Instructions that don't transfer control
- Predict the next PC to be valP
- Always reliable
Call and unconditional jumps
- Predict the next PC to be valC (the destination)
- Always reliable
Conditional jumps
- Predict the next PC to be valC (the destination)
- Only correct if the branch is taken; typically right about 60% of the time
Return instruction
- Don't try to predict
(p. 322; a sketch of this rule follows below)
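The prediction rule can be summarized in a few lines of C. This is a hedged sketch, not the book's HCL: the icode enumeration, the predict_pc name, and the boolean return convention are illustrative choices.

    #include <stdbool.h>
    #include <stdio.h>

    enum icode { IJXX, ICALL, IRET, IOTHER };   /* simplified subset of icodes */

    /* Returns true and sets *pred_pc when a prediction is made; returns false
     * for ret, whose return address is unknown until it reaches write-back. */
    static bool predict_pc(enum icode ic, unsigned valC, unsigned valP,
                           unsigned *pred_pc)
    {
        if (ic == IRET)
            return false;              /* don't predict */
        if (ic == IJXX || ic == ICALL)
            *pred_pc = valC;           /* jump target / call destination */
        else
            *pred_pc = valP;           /* fall through to the next instruction */
        return true;
    }

    int main(void)
    {
        unsigned pc;
        if (predict_pc(ICALL, 0x100, 0x018, &pc))
            printf("predicted PC = 0x%x\n", pc);   /* prints 0x100 */
        return 0;
    }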

– 24 – Recovering from PC Misprediction
Mispredicted jump
- Will see the branch flag once the instruction reaches the memory stage
- Can get the fall-through PC from valA
Return instruction
- Will get the return PC when ret reaches the write-back stage
(Figure 4.56, p. 338)

– 25 – Select PC
  int f_PC = [
      # Mispredicted branch: fetch at incremented PC
      M_icode == IJXX && !M_Bch : M_valA;
      # Completion of RET instruction
      W_icode == IRET : W_valM;
      # Default: use predicted value of PC
      1 : F_predPC;
  ];

  int new_F_predPC = [
      f_icode in { IJXX, ICALL } : f_valC;
      1 : f_valP;
  ];
(p. 338 and Figure 4.56)

– 26 – Pipeline Demonstration
  irmovl $1,%eax   # I1
  irmovl $2,%ecx   # I2
  irmovl $3,%edx   # I3
  irmovl $4,%ebx   # I4
  halt             # I5
Each instruction steps through F, D, E, M, W one cycle behind its predecessor; in cycle 5, I1 is in write-back, I2 in memory, I3 in execute, I4 in decode, and I5 in fetch. A small program that prints this diagram follows below.
(Figure 4.40, p. 319)
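The diagram on this slide can be generated by a small C program. It is only a sketch of the ideal, stall-free case, where instruction i occupies stage (cycle - i).

    #include <stdio.h>

    int main(void)
    {
        const char *stage_name = "FDEMW";       /* Fetch .. Write-back */
        const int ninstr = 5, nstages = 5;

        for (int cycle = 1; cycle <= ninstr + nstages - 1; cycle++) {
            printf("cycle %2d:", cycle);
            for (int i = 0; i < ninstr; i++) {
                int s = cycle - 1 - i;          /* stage index for instr i */
                if (s >= 0 && s < nstages)
                    printf("  I%d:%c", i + 1, stage_name[s]);
            }
            printf("\n");
        }
        return 0;
    }

Its output for cycle 5 is "I1:W  I2:M  I3:E  I4:D  I5:F", matching the slide.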

– 27 – Data Dependencies: 3 Nops
(Figure 4.42, p. 324)

– 28 – Data Dependencies: 2 Nops
(Figure 4.43, p. 325)

– 29 – Data Dependencies: 1 Nop
(Figure 4.44, p. 326)

– 30 – Data Dependencies: No Nop
  0x000: irmovl $10,%edx
  0x006: irmovl $3,%eax
  0x00c: addl %edx,%eax
  0x00e: halt             # demo-h0.ys
In cycle 4, addl reads %edx and %eax from the register file and gets the old values (0): the new value of %edx (10) is still in the memory stage and the new value of %eax (3) is still being computed in the execute stage, so the result is wrong.
(Figure 4.45, p. 327)

– 31 – Classes of Data Hazards
Hazards can potentially occur when one instruction updates part of the program state that is read by a later instruction.
Program state:
- Program registers: the hazards already identified
- Condition codes: written in execute by one instruction and read in execute by a later instruction, which always comes afterward, so no hazards can arise
- Program counter: conflicts between updating and reading the PC cause control hazards
- Memory: both written and read in the memory stage; without self-modifying code, no hazards arise

– 32 – PIPE- Summary
Concept
- Break instruction execution into 5 stages
- Run instructions through in pipelined mode
Limitations
- Can't handle dependencies between instructions when instructions follow too closely
- Data dependency: one instruction writes a register, a later one reads it
- Control dependency: an instruction sets the PC in a way the pipeline did not predict correctly (mispredicted branch, return)
Fixing the Pipeline
- We'll do that next time (Section 4.5.5)

– 33 – Data Dependencies: 2 Nops
(Figure 4.43, p. 325)

– 34 – Stalling for Data Dependencies
- If an instruction follows too closely after one that writes a register, slow it down
- Hold the instruction in decode
- Dynamically inject a nop into the execute stage
(Pipeline diagram for demo-h2.ys: irmovl $10,%edx; irmovl $3,%eax; two nops; addl %edx,%eax; halt. The addl is held in decode for one extra cycle while a bubble moves through execute.)

– 35 – Stall Condition
Source registers
- srcA and srcB of the current instruction in the decode stage
Destination registers
- dstE and dstM fields of the instructions in the execute, memory, and write-back stages
Condition
- srcA == dstE or srcA == dstM
- srcB == dstE or srcB == dstM
Special case
- Don't stall for register ID 8, which indicates the absence of a register operand
(Diagram: PIPE- hardware with the relevant srcA/srcB and dstE/dstM signals highlighted; a C sketch of this stall test follows below.)
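The stall test for this stall-only design (no forwarding yet) can be written as a small C predicate. This is a sketch: the need_stall name and the three-entry arrays for the E, M, and W stages are illustrative, but the comparison rule and the special case for register ID 8 follow the slide.

    #include <stdbool.h>
    #include <stdio.h>

    #define RNONE 8   /* "no register" ID, as on the slide */

    static bool matches(int src, int dst)
    {
        return src != RNONE && dst != RNONE && src == dst;
    }

    /* dstE/dstM hold the destination registers of the instructions
     * currently in the execute, memory, and write-back stages. */
    static bool need_stall(int srcA, int srcB,
                           const int dstE[3], const int dstM[3])
    {
        for (int i = 0; i < 3; i++) {
            if (matches(srcA, dstE[i]) || matches(srcA, dstM[i]) ||
                matches(srcB, dstE[i]) || matches(srcB, dstM[i]))
                return true;
        }
        return false;
    }

    int main(void)
    {
        /* addl %edx,%eax in decode while irmovl $3,%eax is still in execute */
        int dstE[3] = { 0 /* %eax in E */, RNONE, RNONE };
        int dstM[3] = { RNONE, RNONE, RNONE };
        printf("stall = %d\n",
               need_stall(2 /* %edx */, 0 /* %eax */, dstE, dstM));
        return 0;
    }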

– 36 – Detecting Stall Condition
(Figure 4.46, p. 328: in cycle 6 of demo-h2.ys, the addl in decode has srcA = %edx and srcB = %eax while the irmovl in write-back has W_dstE = %eax and W_valE = 3, so the addl must stall for one cycle.)

– 37 – Stalling X3
(Figure 4.48, p. 329: for demo-h0.ys, the addl must stall for three cycles. Its source registers %edx and %eax match destination registers of the irmovl instructions while they are in the execute stage (cycle 4), the memory stage (cycle 5), and the write-back stage (cycle 6), so three bubbles are injected before addl can proceed.)

– 38 – What Happens When Stalling?
- The stalling instruction is held back in the decode stage
- The following instruction stays in the fetch stage
- Bubbles are injected into the execute stage
  - Like dynamically generated nops
  - They move through the later stages
(Figure 4.48, p. 329: cycle-by-cycle view of demo-h0.ys; addl %edx,%eax is held in decode and halt in fetch during cycles 4-6 while bubbles flow through execute, memory, and write-back.)

– 39 – Implementing Stalling
Pipeline control
- Combinational logic detects the stall condition
- It sets mode signals that determine how the pipeline registers should update
(Figure 4.68, p. 351)

– 40 – Pipeline Register Modes
- Normal (stall = 0, bubble = 0): on the rising clock, the register loads its input, so the output becomes the new value y
- Stall (stall = 1, bubble = 0): on the rising clock, the register keeps its current value x and ignores the input
- Bubble (stall = 0, bubble = 1): on the rising clock, the register loads a nop state
(Figure 4.65, p. 348; a small sketch of this behavior follows below.)
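The three modes can be modeled with a few lines of C. The preg structure and the assumption that a nop has icode 1 are illustrative stand-ins; only the stall/bubble behavior itself comes from the slide.

    #include <stdio.h>

    enum reg_mode { MODE_NORMAL, MODE_STALL, MODE_BUBBLE };

    struct preg { int icode; };                 /* placeholder pipeline state */
    static const struct preg NOP_STATE = { 1 }; /* assume icode 1 is nop      */

    /* Models what the register outputs after the next rising clock edge. */
    static struct preg next_output(struct preg current, struct preg input,
                                   enum reg_mode mode)
    {
        switch (mode) {
        case MODE_STALL:  return current;    /* hold the old value          */
        case MODE_BUBBLE: return NOP_STATE;  /* inject a nop                */
        default:          return input;      /* normal operation: load in   */
        }
    }

    int main(void)
    {
        struct preg cur = { 6 }, in = { 2 };
        printf("normal -> icode %d\n", next_output(cur, in, MODE_NORMAL).icode);
        printf("stall  -> icode %d\n", next_output(cur, in, MODE_STALL).icode);
        printf("bubble -> icode %d\n", next_output(cur, in, MODE_BUBBLE).icode);
        return 0;
    }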

– 41 – Data Forwarding
Naïve pipeline
- A register isn't written until completion of the write-back stage
- Source operands are read from the register file in the decode stage
  - The value needs to be in the register file at the start of the stage
- Performance is not good
Observation
- The value is generated in the execute or memory stage
Trick
- Pass the value directly from the generating instruction to the decode stage
- It needs to be available by the end of the decode stage
(Section 4.5.6)

– 42 – Data Forwarding Example
- irmovl is in the write-back stage
- The destination value is in the W pipeline register
- Forward it as valB for the decode stage
(Figure 4.49, p. 331)

– 43 – Bypass Paths
Decode stage
- Forwarding logic selects valA and valB
- Normally from the register file
- Forwarding: get valA or valB from a later pipeline stage
Forwarding sources
- Execute: valE
- Memory: valE, valM
- Write back: valE, valM

– 44 – Data Forwarding Example #2
Register %edx
- Generated by the ALU during the previous cycle
- Forward from the memory stage as valA
Register %eax
- Value just generated by the ALU
- Forward from the execute stage as valB
(Figure 4.51, p. 332)

– 45 – Implementing Forwarding
- Add additional feedback paths from the E, M, and W pipeline registers into the decode stage
- Create logic blocks to select from multiple sources for valA and valB in the decode stage
(Figure 4.53, p. 334)

– 46 – Implementing Forwarding
  ## What should be the A value?
  int new_E_valA = [
      # Use incremented PC
      D_icode in { ICALL, IJXX } : D_valP;
      # Forward valE from execute
      d_srcA == E_dstE : e_valE;
      # Forward valM from memory
      d_srcA == M_dstM : m_valM;
      # Forward valE from memory
      d_srcA == M_dstE : M_valE;
      # Forward valM from write back
      d_srcA == W_dstM : W_valM;
      # Forward valE from write back
      d_srcA == W_dstE : W_valE;
      # Use value read from register file
      1 : d_rvalA;
  ];
(Figure 4.53, p. 334; p. 340)

– 47 – Limitation of Forwarding
Load-use dependency
- The value is needed by the end of the decode stage in cycle 7
- The value is not read from memory until the memory stage in cycle 8
(Figure 4.54, p. 335)

– 48 – Avoiding Load/Use Hazard
- Stall the using instruction for one cycle
- It can then pick up the loaded value by forwarding from the memory stage
(Figure 4.55, p. 336)

– 49 – Detecting Load/Use Hazard
Condition: Load/Use Hazard
Trigger:   E_icode in { IMRMOVL, IPOPL } && E_dstM in { d_srcA, d_srcB }
(Figure 4.64, p. 347)

– 50 – Control for Load/Use Hazard
- Stall the instructions in the fetch and decode stages
- Inject a bubble into the execute stage
Condition          F      D      E       M       W
Load/Use Hazard    stall  stall  bubble  normal  normal
(Figure 4.66, p. 348; p. 345. A combined C sketch of the detection and control settings follows below.)
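Putting the last two slides together, a hedged C sketch of the hazard test and the resulting control actions might look as follows. The icode constants follow the usual Y86 encodings, and the pipe_ctl structure and function names are invented for illustration.

    #include <stdbool.h>
    #include <stdio.h>

    #define IMRMOVL 5
    #define IPOPL   11
    #define RNONE   8

    struct pipe_ctl { bool F_stall, D_stall, E_bubble; };

    static bool load_use_hazard(int E_icode, int E_dstM, int d_srcA, int d_srcB)
    {
        /* The RNONE guard is defensive; mrmovl and popl always set dstM. */
        return (E_icode == IMRMOVL || E_icode == IPOPL) &&
               E_dstM != RNONE &&
               (E_dstM == d_srcA || E_dstM == d_srcB);
    }

    static struct pipe_ctl control(int E_icode, int E_dstM,
                                   int d_srcA, int d_srcB)
    {
        struct pipe_ctl c = { false, false, false };
        if (load_use_hazard(E_icode, E_dstM, d_srcA, d_srcB)) {
            c.F_stall  = true;   /* keep refetching the same instruction    */
            c.D_stall  = true;   /* hold the using instruction in decode    */
            c.E_bubble = true;   /* send a nop into execute instead         */
        }
        return c;
    }

    int main(void)
    {
        /* mrmovl ...,%eax in execute, addl %eax,%ebx in decode */
        struct pipe_ctl c = control(IMRMOVL, 0 /* %eax */, 0 /* %eax */, 3);
        printf("F_stall=%d D_stall=%d E_bubble=%d\n",
               c.F_stall, c.D_stall, c.E_bubble);
        return 0;
    }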