1 Utility-Based Partitioning: A Low-Overhead, High-Performance, Runtime Mechanism to Partition Shared Caches Written by Moinuddin K. Qureshi and Yale N. Patt


1 Utility-Based Partitioning: A Low-Overhead, High-Performance, Runtime Mechanism to Partition Shared Caches Written by Moinuddin K. Qureshi and Yale N. Patt Presented by Rubao Lee 10/16/2008 International Symposium on Microarchitecture (MICRO) 2006

2 Introduction
[Diagram: several cores, each with private I-cache and D-cache, connected to one shared cache]
Applications compete for the shared cache
Partitioning policies are critical for high performance
Two options: Set Partitioning and Way Partitioning

3 Way Partitioning
[Diagram: the shared cache drawn as a grid of Set 1..X by Way 1..Y; way partitioning divides the ways (columns) of every set among the cores]

4 Paper Overview
Utility-Based Partitioning: A Low-Overhead, High-Performance, Runtime Mechanism to Partition Shared Caches
[Diagram: Information → Decision → Goal, annotated with questions: What information? How/Who/When is it gathered? Accuracy/Overhead? What decision? How/Who/When is it made? Lasting effect? Metrics? Evaluation?]

5 Existing Cache Partitioning Policies  Equal (half-and-half)  Performance isolation  No adaptation  LRU  Demand based  Demand is not benefit!  For example, a streaming application can access a large number of unique cache blocks without reuse. Partition the cache based on how much the app is likely to benefit from the cache rather than its demand for the cache.

6 Utility reflects benefit
Applications have different memory access patterns:
(1) Some benefit significantly from more cache;
(2) Some do not;
(3) Some only need a fixed amount of cache.
An important concept: an application's Utility of Cache Resource

7 Utility Curves of SPEC benchmarks
Utility: U(a,b) = Misses with a ways − Misses with b ways
[Plots: misses per 1000 instructions vs. number of ways from a 16-way 1MB L2, showing three behaviors: Low Utility, High Utility, and Saturating Utility]

8 Motivating Example
[Plot: misses per 1000 instructions (MPKI) vs. number of ways from a 16-way 1MB L2 for equake and vpr, comparing the LRU split with the utility-based (UTIL) split]
Improve performance by giving more cache to the application that benefits more from cache

9 Outline  Introduction and Motivation  Utility-Based Cache Partitioning  Evaluation  Scalable Partitioning Algorithm  Related Work and Summary

10 Framework for UCP
Three components:
 Utility Monitors (UMON) per core
 Partitioning Algorithm (PA)
 Replacement support to enforce partitions
[Diagram: Core1 and Core2 with private I$ and D$ over a shared L2 cache; a UMON per core feeds the PA; the L2 connects to main memory]

11 How to monitor Utility
Monitoring the utility information of an app requires a mechanism that tracks the number of misses for all possible numbers of ways. For a 16-way cache, we can use 16 tag directories, each having the same number of sets as the shared cache but a different number of ways, from 1 to 16. (Note: no data lines)

12 LRU as a stack algorithm
We just need one tag directory:
1. A 4-way set-associative cache shown in (a)
2. Each set has four counters, one for each recency position from MRU to LRU
3. On a hit, the counter corresponding to the hit position is incremented
4. The counters represent the number of misses saved by each recency position
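The stack-algorithm property on this slide can be sketched in a few lines of Python. This is a hypothetical single-set model, not the paper's hardware; the 4-way size and the access trace are illustrative.

```python
# Simulate one 4-way LRU set and count, per recency position,
# how many hits that position provides.
def simulate_lru_set(trace, ways=4):
    stack = []                 # index 0 = MRU, last index = LRU
    hits = [0] * ways          # one hit counter per recency position
    misses = 0
    for addr in trace:
        if addr in stack:
            pos = stack.index(addr)
            hits[pos] += 1     # a hit at position p is saved by any cache with > p ways
            stack.remove(addr)
        else:
            misses += 1
            if len(stack) == ways:
                stack.pop()    # evict the LRU entry
        stack.insert(0, addr)  # promote/insert at MRU
    return hits, misses

# Because LRU is a stack algorithm, misses with w ways equal the misses
# of the full-width cache plus the hits at positions w..ways-1.
hits, misses = simulate_lru_set([1, 2, 3, 1, 4, 2, 5, 1], ways=4)

def misses_with(w):
    return misses + sum(hits[w:])
```

One pass over the trace thus yields the miss counts for every possible number of ways, which is exactly why a single tag directory with position counters suffices.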

13 UMON-local vs UMON-global
[Diagram: tag entries in the Auxiliary Tag Directory (ATD), each set with one hit counter per recency position from MRU to LRU; UMON-local keeps counters per set, UMON-global shares one group of counters across all sets]
With UMON-global, the capability of cache partitioning on a per-set basis is lost!

14 Utility Monitors (UMON)
 For each core, simulate the LRU policy using an ATD
 Hit counters in the ATD count hits per recency position
 LRU is a stack algorithm: hit counts → utility
 E.g. hits(2 ways) = H0 + H1
[Diagram: Main Tag Directory (MTD) and ATD over the same sets, with counters H0 (MRU) … H15 (LRU)]

15 Dynamic Set Sampling (DSS)
 Extra tags incur hardware and power overhead
 DSS reduces overhead [Qureshi+ ISCA'06]
 32 sets are sufficient (analytical bounds)
 Storage < 2kB per UMON
[Diagram: the ATD samples only a few sets of the MTD instead of shadowing all of them]

16 Partitioning algorithm
 Evaluate all possible partitions and select the best
 With a ways to core1 and (16−a) ways to core2:
 Hits_core1 = H0 + H1 + … + H(a−1) — from UMON1
 Hits_core2 = H0 + H1 + … + H(16−a−1) — from UMON2
 Select the a that maximizes (Hits_core1 + Hits_core2)
 Partitioning is done once every 5 million cycles
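The EvalAll step above can be sketched directly from the hit counters. The counter values below are made up for illustration, not taken from the paper.

```python
# Try every split of 16 ways between two cores and keep the one that
# maximizes total hits, using each UMON's per-position hit counters.
def best_partition(h1, h2, ways=16):
    best_a, best_total = 1, -1
    for a in range(1, ways):                    # each core gets at least one way
        total = sum(h1[:a]) + sum(h2[:ways - a])
        if total > best_total:
            best_a, best_total = a, total
    return best_a, best_total

# Illustrative counters: core 1 saturates quickly, core 2 keeps benefiting.
h1 = [50, 30, 5, 2] + [0] * 12
h2 = [40, 35, 30, 25, 20, 15, 10, 8, 6, 4, 3, 2, 1, 1, 1, 1]
a, total_hits = best_partition(h1, h2)
```

For two cores this is only about as many evaluations as there are ways, which is why the exhaustive search is cheap here (scalability for more cores is the topic of slide 27 onward).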

17 Way Partitioning
Way partitioning support: [Suh+ HPCA'02, Iyer ICS'04]
1. Each line has core-id bits
2. On a miss, count ways_occupied in the set by the miss-causing app
3. If ways_occupied < ways_given, the victim is the LRU line from another app; otherwise the victim is the LRU line from the miss-causing app
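A minimal sketch of this victim-selection rule, assuming a hypothetical `Line` record that carries a core-id and an LRU age (larger age = closer to LRU); the set contents below are illustrative.

```python
from dataclasses import dataclass

@dataclass
class Line:
    core_id: int
    lru_age: int            # larger = older (closer to the LRU end)

def pick_victim(set_lines, miss_core, ways_given):
    ways_occupied = sum(1 for l in set_lines if l.core_id == miss_core)
    if ways_occupied < ways_given:
        # Under quota: take the LRU line belonging to some other core.
        candidates = [l for l in set_lines if l.core_id != miss_core]
    else:
        # At/over quota: recycle the miss-causing core's own LRU line.
        candidates = [l for l in set_lines if l.core_id == miss_core]
    return max(candidates, key=lambda l: l.lru_age)

s = [Line(0, 3), Line(0, 1), Line(1, 2), Line(1, 0)]
v = pick_victim(s, miss_core=1, ways_given=3)   # core 1 holds 2 < 3 ways
```

Here core 1 is under its 3-way quota, so the victim is core 0's LRU line; with `ways_given=2` the same call would instead evict core 1's own LRU line.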

18 Outline  Introduction and Motivation  Utility-Based Cache Partitioning  Evaluation  Scalable Partitioning Algorithm  Related Work and Summary

19 Methodology
Configuration:
 Two cores: 8-wide, 128-entry window, private L1s
 L2: shared, unified, 1MB, 16-way, LRU-based
 Memory: 400 cycles, 32 banks
Benchmarks: two-threaded workloads divided into 5 categories; 20 workloads used (four from each type)
Weighted speedup for the baseline

20 Metrics
Three metrics for performance:
1. Weighted Speedup (default metric)
 perf = IPC1/SingleIPC1 + IPC2/SingleIPC2
 correlates with reduction in execution time
2. Throughput
 perf = IPC1 + IPC2
 can be unfair to a low-IPC application
3. Hmean-fairness
 perf = hmean(IPC1/SingleIPC1, IPC2/SingleIPC2)
 balances fairness and performance
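The three metrics can be written out directly; the IPC values below are illustrative, not results from the paper.

```python
# Weighted speedup: sum of per-app slowdowns relative to running alone.
def weighted_speedup(ipc, single_ipc):
    return sum(i / s for i, s in zip(ipc, single_ipc))

# Raw throughput: total instructions per cycle across apps.
def throughput(ipc):
    return sum(ipc)

# Harmonic mean of the per-app speedups: balances fairness and performance.
def hmean_fairness(ipc, single_ipc):
    speedups = [i / s for i, s in zip(ipc, single_ipc)]
    return len(speedups) / sum(1 / x for x in speedups)

ipc, single = [1.2, 0.4], [1.5, 0.8]   # shared-run IPC vs. alone-run IPC
```

Note how throughput would reward starving the low-IPC app, while the harmonic mean penalizes any app that is slowed down disproportionately.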

21 Results for weighted speedup UCP improves average weighted speedup by 11%

22 Results for throughput UCP improves average throughput by 17%

23 Results for hmean-fairness UCP improves average hmean-fairness by 11%

24 Effect of Number of Sampled Sets
Dynamic Set Sampling (DSS) reduces overhead, not benefits
[Plot: results with 8 sets, 16 sets, 32 sets, and all sets]

25 Hardware Overhead of UCP L2 Cache: 1MB, 16-way, 64B cache lines

26 Outline  Introduction and Motivation  Utility-Based Cache Partitioning  Evaluation  Scalable Partitioning Algorithm  Related Work and Summary

27 Scalability issues
 Time complexity of partitioning is low for two cores (number of possible partitions ≈ number of ways)
 Possible partitions increase exponentially with cores
 For a 32-way cache, possible partitions:
 4 cores → 6545
 8 cores → 15.4 million
 The problem is NP-hard → need a scalable partitioning algorithm

28 Greedy Algorithm [Stone+ ToC '92]
 GA allocates 1 block to the app that has the max utility for one block; repeat till all blocks are allocated
 Optimal partitioning when utility curves are convex
 Pathological behavior for non-convex curves
[Plot: misses per 100 instructions vs. number of ways from a 32-way 2MB L2]

29 Greedy Algorithm [Stone+ ToC ’92]

30 Problem with Greedy Algorithm
In each iteration, the utility of 1 more block is:
 U(A) = 10 misses
 U(B) = 0 misses
Problem: GA considers the benefit only from the immediate next block, so it fails to exploit large gains that lie further ahead.
[Plot: misses vs. blocks assigned]
All blocks are assigned to A, even though B achieves a larger miss reduction with only a few blocks
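The pathology can be reproduced with a small sketch of the greedy allocator. The utility curves below mirror the numbers on slides 30 and 33 (A saves 10 misses per block; B saves 80 misses, but only once it has 3 blocks) and are illustrative, not from the paper's evaluation.

```python
# Greedy allocation: in each round, give one block to the app whose
# *next single* block saves the most misses.
def greedy(curves, total_blocks):
    alloc = [0] * len(curves)
    for _ in range(total_blocks):
        # Utility of the next single block for each app (0 once exhausted).
        gains = [curve[alloc[i]] if alloc[i] < len(curve) else 0
                 for i, curve in enumerate(curves)]
        winner = gains.index(max(gains))
        alloc[winner] += 1
    return alloc

# A: every block saves 10 misses. B: blocks 1 and 2 save nothing alone,
# block 3 unlocks a saving of 80 misses.
curves = [[10] * 8, [0, 0, 80, 0, 0, 0, 0, 0]]
alloc = greedy(curves, 8)   # greedy never "sees" B's delayed payoff
```

Every round B's next block appears worthless, so greedy hands all 8 blocks to A (saving 80 misses) instead of the A=5, B=3 split (saving 130).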

31 Lookahead Algorithm
 Marginal Utility (MU) = utility per unit of cache resource: MU(a,b) = U(a,b)/(b−a)
 GA considers the MU of 1 block; LA considers the MU of all possible allocations
 Select the app that has the max MU and allocate it as many blocks as required to reach that max MU
 Repeat till all blocks are assigned

32 Lookahead Algorithm

33 Lookahead Algorithm (example)
[Plot: misses vs. blocks assigned]
Iteration 1: MU(A) = 10 / 1 block, MU(B) = 80 / 3 blocks → B gets 3 blocks
Next five iterations: MU(A) = 10 / 1 block, MU(B) = 0 → A gets 1 block each time
Result: A gets 5 blocks and B gets 3 blocks (optimal)
Time complexity ≈ ways²/2 (512 ops for 32 ways)
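The Lookahead loop can be sketched on the same example. The curve values mirror the slides; this is an illustrative reconstruction, not the paper's exact pseudocode.

```python
# Lookahead allocation: each round, scan every app and every possible
# number of extra blocks, pick the (app, extra) pair with the maximum
# marginal utility, and grant that many blocks at once.
def lookahead(curves, total_blocks):
    alloc = [0] * len(curves)
    remaining = total_blocks
    while remaining > 0:
        best = (-1.0, 0, 1)            # (marginal utility, app, blocks to take)
        for i, curve in enumerate(curves):
            saved = 0
            for extra in range(1, remaining + 1):
                if alloc[i] + extra > len(curve):
                    break
                saved += curve[alloc[i] + extra - 1]
                mu = saved / extra     # misses saved per extra block
                if mu > best[0]:
                    best = (mu, i, extra)
        _, app, take = best
        alloc[app] += take
        remaining -= take
    return alloc

curves = [[10] * 8, [0, 0, 80, 0, 0, 0, 0, 0]]   # A and B from the example
alloc = lookahead(curves, 8)
```

In iteration 1, B's 3-block allocation has MU 80/3 ≈ 26.7 > 10, so B gets 3 blocks at once; A then collects the remaining 5 one at a time, matching the optimal split on the slide.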

34 Results for partitioning algorithms
Four cores sharing a 2MB 32-way L2
Mix1 (gap-applu-apsi-gzp), Mix2 (swm-glg-mesa-prl), Mix3 (mcf-applu-art-vrtx), Mix4 (mcf-art-eqk-wupw)
[Plot: LRU vs. UCP(Greedy) vs. UCP(Lookahead) vs. UCP(EvalAll) for each mix]
LA performs similar to EvalAll, with low time complexity

35 Outline  Introduction and Motivation  Utility-Based Cache Partitioning  Evaluation  Scalable Partitioning Algorithm  Related Work and Summary

36 Related work
[Chart: performance gain vs. storage overhead]
 Zhou+ [ASPLOS'04]: Perf += 11%, Storage += 64kB/core
 Suh+ [HPCA'02]: Perf += 4%, Storage += 32B/core
 UCP: Perf += 11%, Storage += 2kB/core
UCP is both high-performance and low-overhead

37 Summary
 CMPs and shared caches are common
 Partition shared caches based on utility, not demand
 UMON estimates utility at runtime with low overhead
 UCP improves performance:
 o Weighted speedup by 11%
 o Throughput by 17%
 o Hmean-fairness by 11%
 The Lookahead algorithm is scalable to many cores sharing a highly associative cache

38 Questions

39 DSS Bounds with Analytical Model
Us = sampled mean (num ways allocated by DSS)
Ug = global mean (num ways allocated using all sets)
P = P(Us within 1 way of Ug)
By Chebyshev's inequality: P ≥ 1 − variance/n, where n = number of sampled sets
In general, variance ≤ 3
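The bound can be checked numerically, assuming the slide's worst-case variance of 3.

```python
# Chebyshev-style lower bound from the slide:
# P(Us within 1 way of Ug) >= 1 - variance / n, clamped at 0.
def dss_bound(n, variance=3.0):
    return max(0.0, 1.0 - variance / n)

# With 32 sampled sets the bound is about 0.91, i.e. the sampled
# allocation lands within one way of the all-sets allocation with
# probability at least ~91%.
p32 = dss_bound(32)
```

This is why 32 sampled sets are called "sufficient" on slide 15: the guaranteed probability is already high, and more sets give diminishing returns.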

40 Galgel – concave utility
[Plots: utility curves for galgel, twolf, and parser]