Managing GPU Concurrency in Heterogeneous Architectures
Shared Resources: Network, LLC, Memory

Presentation transcript:

Managing GPU Concurrency in Heterogeneous Architectures
Shared Resources: Network, LLC, Memory

Our Proposal
The warp scheduler controls GPU thread-level parallelism (TLP).

CPU-centric strategy: improves CPU performance (✓) but degrades GPU performance (×).
CPU-GPU balanced strategy: improves both CPU and GPU performance (✓ ✓), controlling the trade-off.

CPU-centric strategy: memory congestion hurts CPU performance. IF memory congestion is high, THEN reduce GPU TLP.
Results summary: +24% CPU, -11% GPU.

CPU-GPU balanced strategy: reducing GPU TLP also reduces GPU latency tolerance. IF GPU latency tolerance is sufficient, THEN reduce GPU TLP.
Results summary: +7% for both CPU and GPU.
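The two strategies above can be sketched as simple feedback rules. This is a minimal illustrative sketch, not the paper's actual mechanism: all function names, thresholds, and warp limits here are hypothetical placeholders, and the real hardware operates on sampled congestion/latency-tolerance metrics rather than normalized floats.

```python
# Hypothetical sketch of the two TLP-throttling heuristics.
# Thresholds and warp counts are illustrative, not the paper's parameters.

def cpu_centric_tlp(active_warps, congestion,
                    high=0.7, low=0.3, min_warps=1, max_warps=48):
    """CPU-centric: when memory congestion is high, reduce GPU TLP
    so the GPU injects less memory traffic, helping CPU performance."""
    if congestion > high:
        return max(min_warps, active_warps - 1)   # throttle GPU TLP
    if congestion < low:
        return min(max_warps, active_warps + 1)   # room to add TLP
    return active_warps

def balanced_tlp(active_warps, congestion, latency_tolerance,
                 high=0.7, tol=0.5, min_warps=1, max_warps=48):
    """CPU-GPU balanced: throttle only while the GPU retains enough
    latency tolerance to hide memory latency with fewer warps."""
    if congestion > high and latency_tolerance > tol:
        return max(min_warps, active_warps - 1)   # safe to throttle
    if congestion <= high and latency_tolerance < tol:
        return min(max_warps, active_warps + 1)   # restore GPU TLP
    return active_warps
```

The balanced variant differs from the CPU-centric one only in its extra guard: it refuses to throttle once the GPU's latency tolerance drops too low, which is why it trades the CPU-centric strategy's large but one-sided gain for a modest improvement on both sides.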

Managing GPU Concurrency in Heterogeneous Architectures
Onur Kayıran¹, Nachiappan CN¹, Adwait Jog¹, Rachata Ausavarungnirun², Mahmut T. Kandemir¹, Gabriel H. Loh³, Onur Mutlu², Chita R. Das¹
¹Penn State  ²Carnegie Mellon  ³AMD Research
Today: Session 1B, Main, 3 pm