Thread Cluster Memory Scheduling: Exploiting Differences in Memory Access Behavior. Yoongu Kim, Michael Papamichael, Onur Mutlu, Mor Harchol-Balter.


1 Thread Cluster Memory Scheduling: Exploiting Differences in Memory Access Behavior
Yoongu Kim, Michael Papamichael, Onur Mutlu, Mor Harchol-Balter

2 Motivation
Memory is a shared resource; threads’ requests contend for memory
– Degradation in single-thread performance
– Can even lead to starvation
How to schedule memory requests to increase both system throughput and fairness?
[Figure: multiple cores contending for shared memory]

3 Previous Scheduling Algorithms are Biased
[Figure: fairness vs. throughput plot (axes: better system throughput, better fairness) showing a system-throughput-biased region, a fairness-biased region, and an ideal point]
No previous memory scheduling algorithm provides both the best fairness and system throughput

4 Why do Previous Algorithms Fail?
Throughput-biased approach: prioritize less memory-intensive threads
– Good for throughput, but the deprioritized thread starves → unfairness
Fairness-biased approach: threads take turns accessing memory
– No thread starves, but the less intensive thread is not prioritized → reduced throughput
A single policy for all threads is insufficient
[Figure: threads A, B, C ordered by memory intensity under each approach]

5 Insight: Achieving the Best of Both Worlds
For throughput: prioritize memory-non-intensive threads
For fairness:
– Unfairness is caused by memory-intensive threads being prioritized over each other → shuffle threads
– Memory-intensive threads have different vulnerability to interference → shuffle asymmetrically

6 Outline
– Motivation & Insights
– Overview
– Algorithm
– Bringing it All Together
– Evaluation
– Conclusion

7 Overview: Thread Cluster Memory Scheduling
1. Group threads into two clusters
2. Prioritize the non-intensive cluster
3. Apply different policies to each cluster
[Figure: threads in the system split into a prioritized non-intensive (memory-non-intensive) cluster → throughput, and an intensive (memory-intensive) cluster → fairness]

8 Outline
– Motivation & Insights
– Overview
– Algorithm
– Bringing it All Together
– Evaluation
– Conclusion

9 TCM Outline
1. Clustering

10 Clustering Threads
Step 1: Sort threads by MPKI (misses per kilo-instruction)
Step 2: Memory bandwidth usage αT divides the clusters
– T = total memory bandwidth usage; α = ClusterThreshold (< 10%)
– Threads are added to the non-intensive cluster, least intensive first, until their combined bandwidth usage reaches αT; the rest form the intensive cluster
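The clustering step above can be sketched as follows (a minimal illustration; the `cluster_threads` helper and its tuple layout are assumptions, not the paper's code):

```python
def cluster_threads(threads, cluster_threshold=0.10):
    """threads: list of (thread_id, mpki, bw_usage); returns (non_intensive, intensive)."""
    total_bw = sum(bw for _, _, bw in threads)
    budget = cluster_threshold * total_bw  # alpha * T from the slide
    non_intensive, intensive = [], []
    used, filling = 0.0, True
    # Walk threads from least to most memory-intensive (lowest MPKI first).
    for tid, mpki, bw in sorted(threads, key=lambda t: t[1]):
        if filling and used + bw <= budget:
            non_intensive.append(tid)
            used += bw
        else:
            filling = False  # once the threshold is crossed, the rest are intensive
            intensive.append(tid)
    return non_intensive, intensive
```

With α = 10%, two light threads that together use 10% of the bandwidth land in the non-intensive cluster and everything heavier lands in the intensive cluster.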

11 TCM Outline
1. Clustering
2. Between Clusters

12 Prioritization Between Clusters
Prioritize the non-intensive cluster (non-intensive > intensive in priority)
Increases system throughput
– Non-intensive threads have greater potential for making progress
Does not degrade fairness
– Non-intensive threads are “light” and rarely interfere with intensive threads

13 TCM Outline
1. Clustering
2. Between Clusters
3. Non-Intensive Cluster (throughput)

14 Non-Intensive Cluster
Prioritize threads according to MPKI (lowest MPKI → highest priority)
Increases system throughput
– The least intensive thread has the greatest potential for making progress in the processor

15 TCM Outline
1. Clustering
2. Between Clusters
3. Non-Intensive Cluster (throughput)
4. Intensive Cluster (fairness)

16 Intensive Cluster
Periodically shuffle the priority of threads
– Increases fairness: each thread periodically becomes the most prioritized
Is treating all threads equally good enough?
– BUT: equal turns ≠ same slowdown

17 Case Study: A Tale of Two Threads
Two intensive threads contending: 1. random-access, 2. streaming
Which is slowed down more easily?
[Figure: slowdowns when each thread is prioritized: the prioritized thread sees 1x while the other is slowed 7x or 11x]
The random-access thread is more easily slowed down

18 Why are Threads Different?
random-access: requests go to all banks in parallel → high bank-level parallelism; if one request gets stuck, that parallelism is lost → vulnerable to interference
streaming: all requests go to the same activated row → high row-buffer locality
[Figure: four memory banks servicing the two threads’ requests]

19 TCM Outline
1. Clustering
2. Between Clusters
3. Non-Intensive Cluster (throughput)
4. Intensive Cluster (fairness)

20 Niceness
How to quantify the difference between threads?
– High bank-level parallelism → vulnerable to interference → niceness +
– High row-buffer locality → causes interference → niceness −
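A simple way to turn the two metrics into one niceness value is to rank threads by each metric and take the difference, following the +/− annotations on the slide (the `niceness_ranks` helper and this rank-difference formulation are illustrative assumptions):

```python
def niceness_ranks(blp, rbl):
    """blp, rbl: dicts mapping thread_id -> measured bank-level parallelism /
    row-buffer locality over the last quantum. Returns thread_id -> niceness."""
    def rank(metric):
        # rank 1 = lowest value, rank len(metric) = highest value
        ordered = sorted(metric, key=metric.get)
        return {tid: i + 1 for i, tid in enumerate(ordered)}
    blp_rank, rbl_rank = rank(blp), rank(rbl)
    # High BLP raises niceness (vulnerable); high RBL lowers it (interferes).
    return {tid: blp_rank[tid] - rbl_rank[tid] for tid in blp}
```

Under this scheme a random-access thread (high BLP, low RBL) comes out nicer than a streaming thread (low BLP, high RBL), matching the case study.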

21 Shuffling: Round-Robin vs. Niceness-Aware
1. Round-Robin shuffling
– Every ShuffleInterval, the priority order rotates (most prioritized at top; threads A–D ordered from nicest to least nice)
– GOOD: each thread is prioritized once
– What can go wrong?

22 Shuffling: Round-Robin vs. Niceness-Aware
1. Round-Robin shuffling (continued)
– GOOD: each thread is prioritized once
– BAD: nice threads receive lots of interference
[Figure: priority order rotating over successive ShuffleIntervals, with nice threads often stuck below the least nice thread]

23 Shuffling: Round-Robin vs. Niceness-Aware
2. Niceness-Aware shuffling
– GOOD: each thread is prioritized once

24 Shuffling: Round-Robin vs. Niceness-Aware
2. Niceness-Aware shuffling (continued)
– GOOD: each thread is prioritized once
– GOOD: the least nice thread stays mostly deprioritized
[Figure: priority order over successive ShuffleIntervals; the least nice thread spends most intervals at the bottom]
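A simplified stand-in for niceness-aware shuffling (not the paper's exact insertion shuffle) captures both GOOD properties above: every thread is prioritized once per cycle, but the least nice thread gets only that single turn at the top and is otherwise kept at the bottom. The function name and interface are assumptions for illustration:

```python
def niceness_aware_order(threads_by_niceness, interval):
    """threads_by_niceness: thread ids sorted nicest-first.
    interval: the current shuffle-interval count.
    Returns the priority order for this interval, most prioritized first."""
    n = len(threads_by_niceness)
    nice, least_nice = threads_by_niceness[:-1], threads_by_niceness[-1]
    if interval % n == n - 1:
        # The least nice thread gets its single turn at the top.
        return [least_nice] + nice
    # Otherwise rotate only the nicer threads; the least nice stays at the bottom.
    k = interval % (n - 1)
    return nice[k:] + nice[:k] + [least_nice]
```

Over four intervals with threads A–D (A nicest), every thread reaches the top exactly once, while D is deprioritized in three of the four intervals.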

25 TCM Outline
1. Clustering
2. Between Clusters
3. Non-Intensive Cluster (throughput)
4. Intensive Cluster (fairness)

26 Outline
– Motivation & Insights
– Overview
– Algorithm
– Bringing it All Together
– Evaluation
– Conclusion

27 Quantum-Based Operation
During the previous quantum (~1M cycles): monitor thread behavior
1. Memory intensity
2. Bank-level parallelism
3. Row-buffer locality
At the beginning of the current quantum:
– Perform clustering
– Compute niceness of intensive threads
Within a quantum, priorities are shuffled every ShuffleInterval (~1K cycles)
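The per-thread state monitored during a quantum might look like the sketch below (the class, field names, and the MPKI / row-buffer-locality formulas are illustrative assumptions, not the controller's actual bookkeeping):

```python
class ThreadStats:
    """Per-thread counters accumulated during one quantum (~1M cycles)
    and reset at the quantum boundary after clustering and niceness are computed."""
    def __init__(self):
        self.misses = 0            # last-level cache misses -> memory intensity
        self.instructions = 0      # retired instructions
        self.row_hits = 0          # requests hitting an open row
        self.row_accesses = 0      # total row accesses -> row-buffer locality

    def mpki(self):
        # Misses per kilo-instruction, the intensity metric used for clustering.
        return 1000.0 * self.misses / max(self.instructions, 1)

    def row_buffer_locality(self):
        # Fraction of accesses that hit an already-open row.
        return self.row_hits / max(self.row_accesses, 1)
```

At each quantum boundary the scheduler would read these counters, recluster threads, compute niceness for the intensive cluster, and zero the stats for the next quantum.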

28 TCM Scheduling Algorithm
1. Highest-rank: requests from higher-ranked threads are prioritized
– Non-intensive cluster > intensive cluster
– Non-intensive cluster: lower intensity → higher rank
– Intensive cluster: rank shuffling
2. Row-hit: row-buffer hit requests are prioritized
3. Oldest: older requests are prioritized
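The three rules compose naturally into a sort key that a scheduler could apply to its request queue; in this sketch the `request_priority` helper and the request/rank layout are assumptions for illustration:

```python
def request_priority(req, thread_rank):
    """req: dict with 'thread', 'row_hit' (bool), and 'arrival' (cycle number).
    thread_rank: thread_id -> rank, where a higher rank means higher priority.
    Smaller tuples sort first, i.e. are scheduled first."""
    return (-thread_rank[req["thread"]],  # 1. Highest-rank: higher-ranked thread first
            not req["row_hit"],           # 2. Row-hit: row-buffer hits first
            req["arrival"])               # 3. Oldest: earlier arrivals first

# A request queue can then be served in TCM order with:
#   queue.sort(key=lambda r: request_priority(r, ranks))
```

Because tuple comparison is lexicographic, rule 2 only breaks ties within a thread rank, and rule 3 only breaks ties among requests with the same rank and row-hit status, matching the rule ordering on the slide.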

29 Implementation Costs
Required storage at the memory controller (24 cores):
– MPKI: ~0.2 kbit
– Bank-level parallelism: ~0.6 kbit
– Row-buffer locality: ~2.9 kbit
– Total: < 4 kbits
No computation is on the critical path

30 Outline
– Motivation & Insights
– Overview
– Algorithm
– Bringing it All Together
– Evaluation
– Conclusion

31 Metrics & Methodology
Metrics: system throughput and unfairness
Methodology:
– Core model: 4 GHz processor, 128-entry instruction window, 512 KB/core L2 cache
– Memory model: DDR2
– 96 multiprogrammed SPEC CPU2006 workloads

32 Previous Work
– FR-FCFS [Rixner et al., ISCA 2000]: prioritizes row-buffer hits; thread-oblivious → low throughput & low fairness
– STFM [Mutlu et al., MICRO 2007]: equalizes thread slowdowns; non-intensive threads not prioritized → low throughput
– PAR-BS [Mutlu et al., ISCA 2008]: prioritizes the oldest batch of requests while preserving bank-level parallelism; non-intensive threads not always prioritized → low throughput
– ATLAS [Kim et al., HPCA 2010]: prioritizes threads with less memory service; the most intensive thread starves → low fairness

33 Results: Fairness vs. Throughput
[Figure: fairness vs. throughput plot, averaged over 96 workloads, with TCM's improvements over prior schedulers annotated (5%, 39%, 8%, 5%)]
TCM provides the best fairness and system throughput

34 Results: Fairness-Throughput Tradeoff
When a configuration parameter is varied (for TCM, adjusting ClusterThreshold):
[Figure: tradeoff curves for STFM, PAR-BS, ATLAS, and TCM on fairness vs. throughput axes]
TCM allows a robust fairness-throughput tradeoff

35 Operating System Support
ClusterThreshold is a tunable knob
– The OS can trade off between fairness and throughput
Enforcing thread weights
– The OS assigns weights to threads
– TCM enforces thread weights within each cluster

36 Outline
– Motivation & Insights
– Overview
– Algorithm
– Bringing it All Together
– Evaluation
– Conclusion

37 Conclusion
No previous memory scheduling algorithm provides both high system throughput and fairness
– Problem: they use a single policy for all threads
TCM groups threads into two clusters
1. Prioritize the non-intensive cluster → throughput
2. Shuffle priorities within the intensive cluster → fairness
3. Shuffling should favor nice threads → fairness
TCM provides the best system throughput and fairness

38 THANK YOU

39 Thread Cluster Memory Scheduling: Exploiting Differences in Memory Access Behavior
Yoongu Kim, Michael Papamichael, Onur Mutlu, Mor Harchol-Balter

40 Thread Weight Support
Even if the heaviest-weighted thread happens to be the most intensive thread, it is not prioritized over the least intensive thread

41 Harmonic Speedup
[Figure: fairness vs. throughput results using harmonic speedup as the throughput metric]

42 Shuffling Algorithm Comparison
Niceness-Aware shuffling:
– Lower average of maximum slowdown
– Lower variance of maximum slowdown
Round-Robin vs. Niceness-Aware:
– E(maximum slowdown): 5.58 vs. 4.84
– VAR(maximum slowdown): 1.61 vs. 0.85

43 Sensitivity Results
ShuffleInterval (cycles): 500 / 600 / 700 / 800
– System throughput: 14.2 / 14.3 / 14.2 / 14.7
– Maximum slowdown: 6.0 / 5.4 / 5.9 / 5.5
Number of cores: 4 / 8 / 16 / 24 / 32
– System throughput (vs. ATLAS): 0% / 3% / 2% / 1%
– Maximum slowdown (vs. ATLAS): -4% / -30% / -29% / -30% / -41%

