
CSE 490/590 Computer Architecture, Spring 2011
Multithreading I
Steve Ko
Computer Science and Engineering, University at Buffalo

Last time…
- Superscalar suffers from the sequential nature of the ISA
- VLIW instructions consist of multiple operations
- Techniques such as loop unrolling, software pipelining, and trace scheduling give an opportunity to extract the ILP necessary for VLIW

Rotating Register Files
Problems:
- Scheduled loops require lots of registers
- Lots of duplicated code in the prologue and epilogue
Solution:
- Allocate a new set of registers for each loop iteration

Rotating Register File
[Figure: rotating register file with physical registers P0–P7; the base RRB=3 is added to the logical specifier R1 to select a physical register]
- The Rotating Register Base (RRB) register points to the base of the current register set
- Its value is added to the logical register specifier to give the physical register number
- Usually, the register file is split into rotating and non-rotating registers
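To make the mapping concrete, here is a minimal C sketch of the address arithmetic described above. The register count, the function names, and the direction in which the base advances are illustrative choices, not details of any particular machine.

```c
#include <stdio.h>

#define NUM_ROTATING 64   /* size of the rotating portion (illustrative) */

/* Rotating Register Base: advanced by the loop-closing branch each
 * iteration, so the same logical specifier names a fresh physical
 * register in every iteration. */
static unsigned rrb = 0;

/* Map a logical register specifier to a physical register number. */
unsigned map_register(unsigned logical)
{
    return (logical + rrb) % NUM_ROTATING;
}

/* Called by the loop-closing branch: advance the base for the next iteration. */
void advance_rrb(void)
{
    rrb = (rrb + 1) % NUM_ROTATING;
}

int main(void)
{
    /* Logical f1 maps to a different physical register each iteration. */
    for (int iter = 0; iter < 4; iter++) {
        printf("iteration %d: f1 -> P%u\n", iter, map_register(1));
        advance_rrb();
    }
    return 0;
}
```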

Rotating Register File (Previous Loop Example)
Loop body (logical registers):
    ld f1, ()    fadd f5, f4, ...    sd f9, ()    bloop
- Three-cycle load latency is encoded as a difference of 3 in the register specifier numbers (f4 - f1 = 3)
- Four-cycle fadd latency is encoded as a difference of 4 in the register specifier numbers (f9 - f5 = 4)
Successive iterations (physical registers):
    RRB=8: ld P9, ()    fadd P13, P12,    sd P17, ()    bloop
    RRB=7: ld P8, ()    fadd P12, P11,    sd P16, ()    bloop
    RRB=6: ld P7, ()    fadd P11, P10,    sd P15, ()    bloop
    RRB=5: ld P6, ()    fadd P10, P9,     sd P14, ()    bloop
    RRB=4: ld P5, ()    fadd P9, P8,      sd P13, ()    bloop
    RRB=3: ld P4, ()    fadd P8, P7,      sd P12, ()    bloop
    RRB=2: ld P3, ()    fadd P7, P6,      sd P11, ()    bloop
    RRB=1: ld P2, ()    fadd P6, P5,      sd P10, ()    bloop

Cydra-5: Memory Latency Register (MLR)
Problem: loads have variable latency
Solution: let software choose the desired memory latency
- The compiler schedules code for the maximum load-use distance
- Software sets the MLR to the latency that matches the code schedule
- Hardware ensures that loads take exactly MLR cycles to return values into the processor pipeline
  – Hardware buffers loads that return early
  – Hardware stalls the processor if loads return late
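The following is a hedged sketch, in C, of the timing rule the slide describes: a load's value is released exactly MLR cycles after issue, with early data held back and late data stalling the pipeline. The Load struct, the cycle numbers, and the deliver() interface are invented for illustration and are not Cydra-5's actual hardware interface.

```c
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    int issue_cycle;     /* cycle the load was issued       */
    int memory_latency;  /* cycles the memory actually took */
} Load;

/* Returns true if the load's value is delivered at cycle `now`; sets *stall
 * when the scheduled slot has arrived but memory has not responded yet. */
bool deliver(const Load *ld, int now, int mlr, bool *stall)
{
    int ready_cycle   = ld->issue_cycle + ld->memory_latency; /* data arrives   */
    int deliver_cycle = ld->issue_cycle + mlr;                /* scheduled slot */

    *stall = false;
    if (now < deliver_cycle)
        return false;   /* too early: hardware keeps the value buffered */
    if (now >= ready_cycle)
        return true;    /* delivered on the scheduled slot              */
    *stall = true;      /* late: stall until memory responds            */
    return false;
}

int main(void)
{
    Load fast = { .issue_cycle = 0, .memory_latency = 3 };
    Load slow = { .issue_cycle = 0, .memory_latency = 9 };
    int mlr = 6;         /* latency the compiler scheduled for */
    bool stall;
    printf("fast load, cycle 6: delivered=%d stall=%d\n",
           deliver(&fast, 6, mlr, &stall), stall);
    printf("slow load, cycle 6: delivered=%d stall=%d\n",
           deliver(&slow, 6, mlr, &stall), stall);
    return 0;
}
```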

Multithreading
- It is difficult to continue extracting instruction-level parallelism (ILP) or data-level parallelism (DLP) from a single sequential thread of control
- Many workloads can make use of thread-level parallelism (TLP)
  – TLP from multiprogramming (run independent sequential jobs)
  – TLP from multithreaded applications (run one job faster using parallel threads)
- Multithreading uses TLP to improve utilization of a single processor

Pipeline Hazards
Each instruction may depend on the previous one:
    LW   r1, 0(r2)
    LW   r5, 12(r1)
    ADDI r5, r5, #12
    SW   12(r1), r5
[Pipeline diagram, cycles t0–t14: on a non-bypassed 5-stage pipe (F D X M W), each dependent instruction stalls in Decode (and the one behind it in Fetch) until its producer writes back, inserting several bubbles]
What is usually done to cope with this?

Multithreading
How can we guarantee no dependencies between instructions in a pipeline?
- One way is to interleave the execution of instructions from different program threads on the same pipeline
Interleave 4 threads, T1–T4, on a non-bypassed 5-stage pipe:
    T1: LW   r1, 0(r2)
    T2: ADD  r7, r1, r4
    T3: XORI r5, r4, #12
    T4: SW   0(r7), r5
    T1: LW   r5, 12(r1)
[Pipeline diagram, cycles t0–t9: each instruction enters F D X M W one cycle after the previous one, with no stalls]
The prior instruction in a thread always completes write-back before the next instruction in the same thread reads the register file.
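A small simulation can make the interleave visible. This is only a sketch of fixed round-robin fetch with four threads on a 5-stage F/D/X/M/W pipe, matching the diagram above; the loop bounds and print format are arbitrary.

```c
#include <stdio.h>

#define NUM_THREADS 4
#define PIPE_DEPTH  5   /* F D X M W */

/* Fixed round-robin interleave: cycle t fetches from thread (t % NUM_THREADS).
 * With four threads on this five-stage, non-bypassed pipe, a thread's prior
 * instruction has written back before its next instruction reads registers. */
int main(void)
{
    const char *stages = "FDXMW";
    for (int t = 0; t < 10; t++) {
        int thread = t % NUM_THREADS;
        printf("cycle %2d: fetch from T%d  (", t, thread + 1);
        /* show which pipeline stage each in-flight instruction occupies */
        for (int s = 0; s < PIPE_DEPTH && s <= t; s++)
            printf(" T%d:%c", ((t - s) % NUM_THREADS) + 1, stages[s]);
        printf(" )\n");
    }
    return 0;
}
```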

CDC 6600 Peripheral Processors (Cray, 1964)
- First multithreaded hardware
- 10 "virtual" I/O processors
- Fixed interleave on a simple pipeline
- Pipeline has a 100 ns cycle time
- Each virtual processor executes one instruction every 1000 ns
- Accumulator-based instruction set to reduce processor state

Simple Multithreaded Pipeline
[Figure: 5-stage pipeline with one PC per thread, a thread-select mux choosing which PC feeds the I-cache, per-thread register files, and the thread select carried down the pipeline to the D-cache stage]
- Have to carry the thread select down the pipeline to ensure the correct state bits are read/written at each pipe stage
- Appears to software (including the OS) as multiple, albeit slower, CPUs

Multithreading Costs
- Each thread requires its own user state
  – PC
  – GPRs
- Each thread also needs its own system state
  – virtual memory page table base register
  – exception handling registers
- Other overheads:
  – Additional cache/TLB conflicts from competing threads (or add larger cache/TLB capacity)
  – More OS overhead to schedule more threads (where do all these threads come from?)
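As a rough illustration of the replicated state listed above, here is a sketch of a per-thread context record. The field names and the choice of 64-bit registers are assumptions made for the example, not any specific ISA's layout.

```c
#include <stdint.h>
#include <stdio.h>

#define NUM_GPRS    32
#define NUM_THREADS 4

/* State that must be replicated per hardware thread. */
typedef struct {
    /* user state */
    uint64_t pc;
    uint64_t gpr[NUM_GPRS];
    /* system state */
    uint64_t page_table_base;    /* per-thread virtual memory root */
    uint64_t exception_pc;       /* exception handling registers   */
    uint64_t exception_cause;
} ThreadContext;

/* A multithreaded core holds one context per hardware thread; caches and
 * TLBs stay shared, which is where the extra conflict misses come from. */
static ThreadContext contexts[NUM_THREADS];

int main(void)
{
    printf("per-thread state: %zu bytes x %d threads\n",
           sizeof(ThreadContext), NUM_THREADS);
    (void)contexts;
    return 0;
}
```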

Thread Scheduling Policies
- Fixed interleave (CDC 6600 PPUs, 1964)
  – Each of N threads executes one instruction every N cycles
  – If a thread is not ready to go in its slot, insert a pipeline bubble
- Software-controlled interleave (TI ASC PPUs, 1971)
  – OS allocates S pipeline slots amongst N threads
  – Hardware performs fixed interleave over the S slots, executing whichever thread is in each slot
- Hardware-controlled thread scheduling (HEP, 1982)
  – Hardware keeps track of which threads are ready to go
  – Picks the next thread to execute based on a hardware priority scheme
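The sketch below contrasts the first and last policies in C. The ready bits, the priority order (lowest thread ID first), and the fixed four-thread configuration are illustrative assumptions; real designs such as HEP track readiness with dedicated hardware state.

```c
#include <stdbool.h>
#include <stdio.h>

#define NUM_THREADS 4

/* Fixed interleave: thread (cycle % N) owns the slot; if it is not ready,
 * the slot becomes a pipeline bubble (returned as -1). */
int fixed_interleave(int cycle, const bool ready[NUM_THREADS])
{
    int t = cycle % NUM_THREADS;
    return ready[t] ? t : -1;
}

/* Hardware-controlled scheduling: pick the highest-priority ready thread;
 * no bubble unless nobody is ready. */
int hw_scheduled(const bool ready[NUM_THREADS])
{
    for (int t = 0; t < NUM_THREADS; t++)
        if (ready[t])
            return t;
    return -1;
}

int main(void)
{
    /* T0 and T3 are stalled (e.g., waiting on memory); readiness would
     * change cycle by cycle in real hardware. */
    bool ready[NUM_THREADS] = { false, true, true, false };
    for (int cycle = 0; cycle < 4; cycle++)
        printf("cycle %d: fixed interleave -> %d, hw scheduled -> %d\n",
               cycle, fixed_interleave(cycle, ready), hw_scheduled(ready));
    return 0;
}
```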

CSE 490/590 Administrivia
- HW2 & midterm solutions are out
- Quiz 2 (next Friday, 4/8): covers material from after the midterm through next Monday

Denelcor HEP (Burton Smith, 1982)
First commercial machine to use hardware threading in the main CPU:
- 120 threads per processor
- 10 MHz clock rate
- Up to 8 processors
- Precursor to the Tera MTA (Multithreaded Architecture)

Tera MTA (1990–)
- Up to 256 processors
- Up to 128 active threads per processor
- Processors and memory modules populate a sparse 3D torus interconnection fabric
- Flat, shared main memory
  – No data cache
  – Sustains one main memory access per cycle per processor
- GaAs logic in the prototype, 260 MHz
  – Second version in CMOS, MTA-2, 50 W/processor
  – New version, XMT, fits into an AMD Opteron socket and runs at 500 MHz

MTA Pipeline
[Figure: MTA pipeline — instruction fetch and issue pool feed the execution pipeline (stages including M, A, C, W); memory operations travel through the interconnection network to the memory pipeline, with write and retry pools for returning operations]
- Every cycle, one VLIW instruction from one active thread is launched into the pipeline
- The instruction pipeline is 21 cycles long
- Memory operations incur ~150 cycles of latency
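A back-of-the-envelope calculation, using the slide's ~150-cycle figure and the simplifying assumption that each thread keeps only one operation in flight, shows why the MTA provisions on the order of a hundred hardware threads per processor:

```c
#include <stdio.h>

int main(void)
{
    /* If each thread has at most one operation outstanding and a memory
     * reference takes ~150 cycles, then roughly 150 ready threads are
     * needed to launch an instruction every single cycle.  The MTA's 128
     * threads per processor get close to that bound. */
    int memory_latency  = 150;  /* approximate cycles, from the slide */
    int threads_per_cpu = 128;  /* MTA hardware threads per processor */

    printf("threads needed to fully hide a %d-cycle reference: ~%d\n",
           memory_latency, memory_latency);
    printf("threads provided per processor: %d\n", threads_per_cpu);
    return 0;
}
```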

Coarse-Grain Multithreading
- The Tera MTA was designed for supercomputing applications with large data sets and low locality
  – No data cache
  – Many parallel threads needed to hide the large memory latency
- Other applications are more cache friendly
  – Few pipeline bubbles if the cache mostly hits
  – Just add a few threads to hide occasional cache-miss latencies
  – Swap threads on cache misses
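Here is a minimal sketch of the switch-on-miss policy for two threads. The Core struct, the event handlers, and the two-thread round-robin choice are invented for illustration; a real implementation would also drain or flush the pipeline when it switches.

```c
#include <stdbool.h>
#include <stdio.h>

#define NUM_THREADS 2

/* Coarse-grain multithreading: keep running one thread until it takes a
 * long-latency event (here, a cache miss), then switch to the other. */
typedef struct {
    int  current;               /* thread currently owning the pipeline */
    bool blocked[NUM_THREADS];  /* waiting on an outstanding miss       */
} Core;

void on_cache_miss(Core *c)
{
    c->blocked[c->current] = true;
    int other = (c->current + 1) % NUM_THREADS;
    if (!c->blocked[other])     /* switch only if the other thread can run */
        c->current = other;
}

void on_miss_complete(Core *c, int thread)
{
    c->blocked[thread] = false;
}

int main(void)
{
    Core c = { .current = 0 };
    on_cache_miss(&c);               /* T0 misses -> run T1 */
    printf("running thread %d\n", c.current);
    on_miss_complete(&c, 0);
    on_cache_miss(&c);               /* T1 misses -> back to T0 */
    printf("running thread %d\n", c.current);
    return 0;
}
```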

MIT Alewife (1990)
- Modified SPARC chips
  – register windows hold different thread contexts
- Up to four threads per node
- Thread switch on local cache miss

IBM PowerPC RS64-IV (2000)
- Commercial coarse-grain multithreading CPU
- Based on PowerPC with a quad-issue, in-order, five-stage pipeline
- Each physical CPU supports two virtual CPUs
- On an L2 cache miss, the pipeline is flushed and execution switches to the second thread
  – The short pipeline minimizes the flush penalty (4 cycles), small compared to memory access latency
  – Flushing the pipeline also simplifies exception handling

Simultaneous Multithreading (SMT) for OoO Superscalars
- The techniques presented so far have all been "vertical" multithreading, where each pipeline stage works on one thread at a time
- SMT uses the fine-grain control already present inside an OoO superscalar to allow instructions from multiple threads to enter execution in the same clock cycle
- This gives better utilization of machine resources
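The contrast with vertical multithreading can be sketched as a per-cycle count of used issue slots. The two-thread, 4-wide configuration and the greedy fill policy are assumptions made for the example; real SMT issue logic selects among individual ready micro-ops in the scheduler rather than whole-thread counts.

```c
#include <stdio.h>

#define NUM_THREADS 2
#define ISSUE_WIDTH 4

/* ready[t] = how many instructions thread t could issue this cycle. */

/* Vertical multithreading: one thread owns all issue slots in a cycle. */
int vertical_issue(const int ready[NUM_THREADS], int cycle)
{
    int t = cycle % NUM_THREADS;
    return ready[t] < ISSUE_WIDTH ? ready[t] : ISSUE_WIDTH;
}

/* SMT: fill the issue slots from whichever threads have ready instructions. */
int smt_issue(const int ready[NUM_THREADS])
{
    int issued = 0;
    for (int t = 0; t < NUM_THREADS && issued < ISSUE_WIDTH; t++) {
        int take = ready[t];
        if (take > ISSUE_WIDTH - issued)
            take = ISSUE_WIDTH - issued;
        issued += take;
    }
    return issued;
}

int main(void)
{
    int ready[NUM_THREADS] = { 1, 3 };   /* T0 mostly stalled, T1 has ILP */
    printf("vertical (T0's cycle): %d of %d slots used\n",
           vertical_issue(ready, 0), ISSUE_WIDTH);
    printf("SMT:                   %d of %d slots used\n",
           smt_issue(ready), ISSUE_WIDTH);
    return 0;
}
```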

For most apps, most execution units lie idle in an OoO superscalar
- From: Tullsen, Eggers, and Levy, "Simultaneous Multithreading: Maximizing On-chip Parallelism", ISCA 1995
- Data shown for an 8-way superscalar

Acknowledgements
These slides contain material developed and copyrighted by:
- Krste Asanovic (MIT/UCB)
- David Patterson (UCB)
And also by:
- Arvind (MIT)
- Joel Emer (Intel/MIT)
- James Hoe (CMU)
- John Kubiatowicz (UCB)
MIT material derived from course 6.823; UCB material derived from course CS252.