Computer Architecture: Multithreading (I) Prof. Onur Mutlu Carnegie Mellon University

A Note on This Lecture
These slides are partly from the Fall 2012 Parallel Computer Architecture course, Lecture 9: Multithreading.
Video of that lecture: Xmh4cDkC3s1VBB7-njlgiG5d&index=9

Readings: Multithreading
Required
 Spracklen and Abraham, “Chip Multithreading: Opportunities and Challenges,” HPCA Industrial Session, 2005.
 Kalla et al., “IBM Power5 Chip: A Dual-Core Multithreaded Processor,” IEEE Micro 2004.
 Tullsen et al., “Exploiting Choice: Instruction Fetch and Issue on an Implementable Simultaneous Multithreading Processor,” ISCA 1996.
 Eyerman and Eeckhout, “A Memory-Level Parallelism Aware Fetch Policy for SMT Processors,” HPCA 2007.
Recommended
 Hirata et al., “An Elementary Processor Architecture with Simultaneous Instruction Issuing from Multiple Threads,” ISCA 1992.
 Smith, “A Pipelined, Shared Resource MIMD Computer,” ICPP 1978.
 Gabor et al., “Fairness and Throughput in Switch on Event Multithreading,” MICRO 2006.
 Agarwal et al., “APRIL: A Processor Architecture for Multiprocessing,” ISCA 1990.

Multithreading (Outline)
Multiple hardware contexts
Purpose
Initial incarnations
 CDC 6600
 HEP
 Tera
Levels of multithreading
 Fine-grained (cycle-by-cycle)
 Coarse-grained (multitasking, switch-on-event)
 Simultaneous
Uses: traditional + creative (now that we have multiple contexts, why do we not do …)

Multithreading: Basics
Thread
 Instruction stream with state (registers and memory)
 Register state is also called “thread context”
Threads could be part of the same process (program) or from different programs
 Threads in the same program share the same address space (shared memory model)
Traditionally, the processor keeps track of the context of a single thread
Multitasking: when a new thread needs to be executed, the old thread’s context in hardware is written back to memory and the new thread’s context is loaded
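To make the multitasking model concrete, here is a minimal sketch (not from the lecture; all names are illustrative) of saving one thread’s context and loading another’s:

```python
# Minimal sketch of software multitasking on a single hardware
# context: save the old thread's context to memory, load the new
# thread's context. Illustrative only -- not a real OS/ISA interface.
from dataclasses import dataclass, field

@dataclass
class ThreadContext:
    pc: int = 0                                           # program counter
    regs: list = field(default_factory=lambda: [0] * 32)  # register state

saved_contexts = {}            # "memory": contexts of threads not running
hw_context = ThreadContext()   # the single hardware context
running_tid = 0

def switch_to(new_tid: int):
    """Multitasking switch: write back the old context, load the new one."""
    global hw_context, running_tid
    saved_contexts[running_tid] = hw_context
    hw_context = saved_contexts.pop(new_tid, ThreadContext())
    running_tid = new_tid
```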

Hardware Multithreading
General idea: have multiple thread contexts in a single processor
 When the hardware executes from those hardware contexts determines the granularity of multithreading
Why?
 To tolerate latency (initial motivation)
  Latency of memory operations, dependent instructions, branch resolution
  By utilizing processing resources more efficiently
 To improve system throughput
  By exploiting thread-level parallelism
  By improving superscalar/OoO processor utilization
 To reduce context switch penalty

Initial Motivations
Tolerate latency
 When one thread encounters a long-latency operation, the processor can execute a useful operation from another thread
CDC 6600 peripheral processors
 I/O latency: 10 cycles
 10 I/O threads can be active to cover the latency
 Pipeline with 100ns cycle time, memory with 1000ns latency
 Idea: each I/O “processor” executes one instruction every 10 cycles on the same pipeline
 Thornton, “Design of a Computer: The Control Data 6600,” 1970.
 Thornton, “Parallel Operation in the Control Data 6600,” AFIPS 1964.
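The arithmetic behind the CDC 6600 figures above, as a quick sketch:

```python
# Sanity check of the CDC 6600 numbers on this slide.
cycle_time_ns = 100       # pipeline cycle time
memory_latency_ns = 1000  # memory/I/O latency

# With this many threads, some thread can issue every cycle
# while the others wait on their long-latency operations:
threads_needed = memory_latency_ns // cycle_time_ns
print(threads_needed)     # -> 10, matching the 10 I/O threads
```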

Hardware Multithreading
Benefit
+ Latency tolerance
+ Better hardware utilization (when?)
+ Reduced context switch penalty
Cost
- Requires multiple thread contexts to be implemented in hardware (area, power, latency cost)
- Usually reduced single-thread performance
- Resource sharing, contention
- Switching penalty (can be reduced with additional hardware)

Types of Multithreading
Fine-grained
 Cycle by cycle
Coarse-grained
 Switch on event (e.g., cache miss)
 Switch on quantum/timeout
Simultaneous
 Instructions from multiple threads executed concurrently in the same cycle

Fine-grained Multithreading
Idea: switch to another thread every cycle such that no two instructions from the same thread are in the pipeline concurrently
Improves pipeline utilization by taking advantage of multiple threads
Alternative way of looking at it: tolerates the control and data dependency latencies by overlapping the latency with useful work from other threads
 Thornton, “Parallel Operation in the Control Data 6600,” AFIPS 1964.
 Smith, “A Pipelined, Shared Resource MIMD Computer,” ICPP 1978.
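A toy cycle-by-cycle model of this idea (an illustrative sketch, not CDC 6600 or HEP microcode) with round-robin thread selection:

```python
# Fine-grained multithreading sketch: issue from a different thread
# every cycle (round-robin). With at least as many threads as pipeline
# stages, no two in-flight instructions come from the same thread.
threads = {0: ["A0", "A1"], 1: ["B0", "B1"], 2: ["C0", "C1"]}
num_threads = len(threads)

for cycle in range(6):
    tid = cycle % num_threads          # cycle-by-cycle thread switch
    if threads[tid]:
        inst = threads[tid].pop(0)
        print(f"cycle {cycle}: thread {tid} issues {inst}")
```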

Fine-grained Multithreading
CDC 6600’s peripheral processing unit is fine-grained multithreaded
 Processor executes a different I/O thread every cycle
 An operation from the same thread is executed every 10 cycles
Denelcor HEP
 Smith, “A Pipelined, Shared Resource MIMD Computer,” ICPP 1978.
 120 threads/processor
  50 user, 70 OS functions
 Available queue vs. unavailable (waiting) queue
 Each thread can have only 1 instruction in the processor pipeline; each thread is independent
 To each thread, the processor looks like a sequential machine
 Throughput vs. single-thread speed

Fine-grained Multithreading in HEP
Cycle time: 100ns
8 stages → 800ns to complete an instruction (assuming no memory access)

Fine-grained Multithreading
Advantages
+ No need for dependency checking between instructions (only one instruction in pipeline from a single thread)
+ No need for branch prediction logic
+ Otherwise-bubble cycles used for executing useful instructions from different threads
+ Improved system throughput, latency tolerance, utilization
Disadvantages
- Extra hardware complexity: multiple hardware contexts, thread selection logic
- Reduced single-thread performance (one instruction fetched every N cycles)
- Resource contention between threads in caches and memory
- Dependency checking logic between threads remains (load/store)

Multithreaded Pipeline Example
(Slide from Joel Emer)

Sun Niagara Multithreaded Pipeline

Tera MTA Fine-grained Multithreading
256 processors, each with a 21-cycle pipeline
128 active threads
A thread can issue instructions every 21 cycles
 Then, why 128 threads?
Memory latency: approximately 150 cycles
 No data cache
 Threads can be blocked waiting for memory
 More threads → better ability to tolerate memory latency
Thread state per processor
 128 x 32 general-purpose registers
 128 x 1 thread status registers
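A back-of-the-envelope reading of these numbers (a sketch using only the figures on the slide):

```python
# Why 21 threads are not enough on the Tera MTA.
pipeline_depth = 21    # a thread can issue once every 21 cycles
memory_latency = 150   # approximate memory latency; no data cache

threads_to_hide_pipeline = pipeline_depth   # 21
threads_to_hide_memory = memory_latency     # ~150, if every thread
                                            # is waiting on a load
print(threads_to_hide_pipeline, threads_to_hide_memory)
# 128 active threads per processor is close to the memory-bound
# figure, so most memory waiting can be overlapped with useful work.
```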

Tera MTA Pipeline
Threads move to/from different pools as an instruction executes
 More accurately, thread IDs are kept in each pool

Coarse-grained Multithreading
Idea: when a thread is stalled due to some event, switch to a different hardware context
 Switch-on-event multithreading
Possible stall events
 Cache misses
 Synchronization events (e.g., load an empty location)
 FP operations
HEP and Tera combine fine-grained MT and coarse-grained MT
 Thread waiting for memory becomes blocked (un-selectable)
Agarwal et al., “APRIL: A Processor Architecture for Multiprocessing,” ISCA 1990.
 Explicit switch on event
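Before looking at APRIL, here is a small sketch of a generic switch-on-event policy (illustrative only; the event names and latencies are assumptions, not any specific machine):

```python
# Coarse-grained multithreading sketch: run one thread until it hits
# a long-latency event, then switch to another ready hardware context.
import collections

ready = collections.deque([1, 2])  # runnable hardware contexts
running = 0                        # context currently in the pipeline
blocked = {}                       # tid -> cycles until ready again

def on_event(event: str):
    """Current thread stalls (e.g., cache miss): switch contexts."""
    global running
    latency = {"cache_miss": 100, "sync_fault": 50}[event]  # assumed
    blocked[running] = latency
    running = ready.popleft()      # the coarse-grained switch

def tick():
    """One cycle: count down blocked threads, requeue finished ones."""
    for tid in list(blocked):
        blocked[tid] -= 1
        if blocked[tid] == 0:
            del blocked[tid]
            ready.append(tid)
```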

Coarse-grained Multithreading in APRIL
Agarwal et al., “APRIL: A Processor Architecture for Multiprocessing,” ISCA 1990.
Multiple hardware thread contexts
 Called “task frames”
Thread switch on
 Cache miss
 Network access
 Synchronization fault
How?
 Empty processor pipeline, change frame pointer (PC)

Fine-grained vs. Coarse-grained MT
Fine-grained advantages
+ Simpler to implement; can eliminate dependency checking and branch prediction logic completely
+ Switching need not have any performance overhead (i.e., dead cycles)
+ Coarse-grained requires a pipeline flush or a lot of hardware to save pipeline state
  Higher performance overhead with deep pipelines and large windows
Disadvantages
- Low single-thread performance: each thread gets 1/Nth of the bandwidth of the pipeline

IBM RS64-IV
4-way superscalar, in-order, 5-stage pipeline
Two hardware contexts
On an L2 cache miss
 Flush pipeline
 Switch to the other thread
Considerations
 Memory latency vs. thread switch overhead
 Short pipeline and in-order execution (small instruction window) reduce the overhead of switching
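A rough break-even sketch for this trade-off (the L2 miss latency here is an assumed number, not IBM's):

```python
# Switch-on-miss pays off when the miss latency the other thread can
# cover exceeds the cost of flushing the short in-order pipeline.
pipeline_flush_cycles = 5   # 5-stage pipeline (from the slide)
l2_miss_latency = 200       # assumed memory latency in cycles

cycles_recovered = l2_miss_latency - pipeline_flush_cycles
print(cycles_recovered)     # large and positive: switching is worth it
```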

Intel Montecito
McNairy and Bhatia, “Montecito: A Dual-Core, Dual-Thread Itanium Processor,” IEEE Micro 2005.
Thread switch on
 L3 cache miss/data return
 Timeout – for fairness
 Switch hint instruction
 ALAT invalidation – synchronization fault
 Transition to low-power mode
<2% area overhead due to CGMT

Fairness in Coarse-grained Multithreading
Resource sharing in space and time always causes fairness considerations
 Fairness: how much progress each thread makes
In CGMT, the time allocated to each thread affects both fairness and system throughput
 When do we switch?
 For how long do we switch?
 When do we switch back?
 How does the hardware scheduler interact with the software scheduler for fairness?
 What is the switching overhead vs. benefit?
 Where do we store the contexts?

We did not cover the following slides in lecture. These are for your preparation for the next lecture.

Fairness in Coarse-grained Multithreading
Gabor et al., “Fairness and Throughput in Switch on Event Multithreading,” MICRO 2006.
How can you solve the problem below?

Fairness vs. Throughput
Switch not only on a miss, but also on data return
Problem: switching has performance overhead
 Pipeline and window flush
 Reduced locality and increased resource contention (frequent switches increase resource contention and reduce locality)
One possible solution
 Estimate the slowdown of each thread compared to when it is run alone
 Enforce switching when slowdowns become significantly unbalanced
 Gabor et al., “Fairness and Throughput in Switch on Event Multithreading,” MICRO 2006.
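One way to picture the slowdown-balancing idea (an illustrative sketch, not Gabor et al.'s exact mechanism; the IPC numbers are made up):

```python
# Estimate each thread's slowdown versus running alone; force a
# switch when slowdowns become significantly unbalanced.
alone_ipc = {"t0": 1.8, "t1": 1.2}    # assumed standalone IPCs
shared_ipc = {"t0": 1.5, "t1": 0.4}   # assumed IPCs when sharing

def slowdown(tid: str) -> float:
    return alone_ipc[tid] / shared_ipc[tid]

def should_force_switch(threshold: float = 1.5) -> bool:
    """True if the most-slowed thread lags far behind the least."""
    s = [slowdown(t) for t in alone_ipc]
    return max(s) / min(s) > threshold

print(should_force_switch())  # True: t1 (3.0x) lags t0 (1.2x)
```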

Thread Switching Urgency in Montecito
Thread urgency levels: 0–7
 Nominal level 5: thread is making active progress
 After timeout: set to 7
 After external interrupt: set to 6
Reduce urgency level for each blocking operation
 L3 miss
Switch if the urgency of the foreground thread is lower than that of the background thread
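The urgency rule on this slide, restated as a sketch (simplified; not Intel's exact logic):

```python
# Montecito-style urgency (0-7, nominal 5), per the slide.
fg_urgency, bg_urgency = 5, 5   # foreground / background threads

def on_timeout():
    global bg_urgency
    bg_urgency = 7              # timed-out thread becomes most urgent

def on_external_interrupt():
    global bg_urgency
    bg_urgency = 6

def on_blocking_op():           # e.g., L3 miss on the foreground thread
    global fg_urgency
    fg_urgency = max(0, fg_urgency - 1)

def should_switch() -> bool:
    return fg_urgency < bg_urgency
```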