
1 Improving Cache Performance by Combining Cost-Sensitivity and Locality Principles in Cache Replacement Algorithms. Rami Sheikh, North Carolina State University; Mazen Kharbutli, Jordan Univ. of Science and Technology. ICCD 2010, Amsterdam, the Netherlands.

2 Outline: Motivation and Contribution, Related Work, LACS Storage Organization, LACS Implementation, Evaluation Environment, Evaluation, Conclusion.

3 Motivation: The processor-memory performance gap makes L2 cache performance crucial. Traditionally, L2 cache replacement algorithms focus on improving the hit rate, but cache misses have different costs, so it is better to take the cost of a miss into consideration. The processor's ability to (partially) hide the L2 cache miss latency differs between misses; it depends on the dependency chain, miss bursts, etc.

4 Motivation: Issued Instructions per Miss histogram (figure).

5 Contributions: A novel, effective, yet simple cost estimation method, based on the number of instructions the processor manages to issue during the miss latency, a reflection of the processor's ability to hide the miss latency. A small number of instructions issued during the miss indicates a high-cost miss/block; a large number indicates a low-cost miss/block, as sketched below.
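To make the metric concrete, the following C++ sketch classifies a miss from the issued-instruction count. The threshold value and the function name are illustrative assumptions, not taken from the paper.

```cpp
#include <cstdint>

// Hypothetical threshold on issued instructions; not a value from the paper.
constexpr uint32_t kIssueThreshold = 64;

// Classify a miss from the number of instructions the processor managed
// to issue while the miss was outstanding: few issued instructions mean
// the latency could not be hidden, so the miss (and its block) is high cost.
inline bool isHighCostMiss(uint32_t instructionsIssuedDuringMiss) {
    return instructionsIssuedDuringMiss < kIssueThreshold;
}
```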

6 Contributions: LACS, a Locality-Aware Cost-Sensitive Cache Replacement Algorithm. It integrates our novel cost estimation method with a locality algorithm (e.g., LRU) and attempts to reserve high-cost blocks in the cache while their locality is still high; on a cache miss, a low-cost block is chosen for eviction. It delivers excellent performance improvement at feasible cost: 15% on average and up to 85%. It is effective in uniprocessors and CMPs and for different cache configurations.

7 Outline: Motivation and Contribution, Related Work, LACS Storage Organization, LACS Implementation, Evaluation Environment, Evaluation, Conclusion.

8 Related Work: Cache replacement algorithms traditionally attempt to reduce the cache miss rate: Belady's OPT algorithm [Belady 1966], dead-block predictors [Kharbutli 2008, etc.], and OPT emulators [Rajan 2007]. However, cache misses are not uniform and have different costs [Srinivasan 1998, Puzak 2008], motivating a new class of replacement algorithms in which the miss cost can be latency, power consumption, penalty, etc.

9 Related Work: Jeong and Dubois [1999, 2003, 2006], in the context of CC-NUMA multiprocessors, treat a miss mapping to remote memory as more costly than one mapping to local memory; LACS instead estimates cost from the processor's ability to tolerate the miss latency, not from the miss latency value itself. Jeong et al. [2008], in the context of uniprocessors, predict the next access: load (high cost) or store (low cost), so all load misses are treated equally; LACS does not treat load misses equally (they have different costs), and a store miss may have a high cost.

10 Related Work: Srinivasan et al. [2001] preserve critical blocks in a special critical cache, with criticality estimated from the load's dependence chain, but achieve no significant improvement under realistic configurations. LACS does not track the dependence chain; it uses a simpler cost heuristic and achieves considerable performance improvement under realistic configurations.

11 Related Work: Qureshi et al. [2006] base cost on memory-level parallelism (MLP): cache misses that occur in isolation are high cost, while concurrent misses are low cost. Their scheme suffers from pathological cases and is therefore integrated with a tournament predictor that chooses between it and LRU (SBAR). LACS does not slow down any of the 20 benchmarks in our study and outperforms MLP-SBAR.

12 Outline: Motivation and Contribution, Related Work, LACS Storage Organization, LACS Implementation, Evaluation Environment, Evaluation, Conclusion.

13 LACS Storage Organization (block diagram of processor, L1 $, and L2 $): hardware additions are a 32-bit Issued Instruction Counter (IIC), a 32-bit IIR per MSHR entry, and a prediction table. Each prediction-table entry holds a 6-bit hashed tag, a 5-bit cost, and a 1-bit confidence (8K sets x 4 ways x 1.5 bytes/entry = 48 KB). Total storage overhead: 48 KB, i.e., 9.4% of a 512 KB cache or 4.7% of a 1 MB cache. A sketch of one entry follows below.
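A minimal sketch of one prediction-table entry, using the field widths from the slide; the bit-field layout and names are illustrative assumptions, and the constants simply reproduce the slide's storage arithmetic.

```cpp
#include <cstddef>
#include <cstdint>

// One prediction-table entry: 12 bits of payload, accounted for as
// 1.5 bytes/entry in the slide's storage estimate.
struct PredictionEntry {
    uint16_t hashedTag : 6;  // hashed partial tag identifying the block
    uint16_t cost      : 5;  // quantized cost (issued-instruction count)
    uint16_t conf      : 1;  // confidence that the stored cost is stable
};

// Storage arithmetic from the slide: 8K sets x 4 ways x 1.5 bytes = 48 KB,
// i.e., 9.4% of a 512 KB cache and 4.7% of a 1 MB cache.
constexpr std::size_t kSets = 8 * 1024;
constexpr std::size_t kWays = 4;
constexpr double kTableBytes = kSets * kWays * 1.5;  // 49152 bytes = 48 KB
```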

14 Outline: Motivation and Contribution, Related Work, LACS Storage Organization, LACS Implementation, Evaluation Environment, Evaluation, Conclusion.

15 LACS Implementation: On an L2 cache miss on block B in set S: (1) copy IIC into IIR, (2) find a victim, (3) when the miss returns, update B's info. Step (1): MSHR[B].IIR = IIC (see the sketch below).
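A minimal sketch of step (1), assuming a global issued-instruction counter (IIC) and an MSHR entry that carries a per-miss IIR field; the structure and function names are illustrative.

```cpp
#include <cstdint>

// Global issued-instruction counter maintained by the core (32 bits).
uint32_t IIC = 0;

// Illustrative MSHR entry; only the field relevant to LACS is shown.
struct MshrEntry {
    uint64_t blockAddr;  // address of the missing block B
    uint32_t IIR;        // snapshot of IIC taken when the miss was issued
};

// Step (1): on an L2 miss to block B, record the current IIC in the
// MSHR entry allocated for that miss (MSHR[B].IIR = IIC on the slide).
void recordMissStart(MshrEntry& mshrEntryForB) {
    mshrEntryForB.IIR = IIC;
}
```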

16 LACS Implementation: Step (2), finding a victim: identify all low-cost blocks in set S; if there is at least one, choose a victim randomly from among them; otherwise, the LRU block is the victim. Block X is a low-cost block if X.cost > threshold and X.conf == 1 (the stored cost is the issued-instruction count, so a large value means a low-cost block). A sketch follows below.
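A sketch of the victim-selection step. The threshold test, random tie-break, and LRU fallback follow the slide; the per-way metadata structure, field names, and threshold value are assumptions.

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

// Illustrative per-way metadata; field names are assumptions.
struct WayInfo {
    uint32_t cost;     // issued-instruction count recorded for the block
    bool     conf;     // confidence bit for the stored cost
    uint32_t lruRank;  // 0 = MRU, larger = older
};

// Hypothetical threshold: many issued instructions => low-cost block.
constexpr uint32_t kCostThreshold = 64;

// Step (2): pick a victim way in set S. Prefer a random low-cost block
// (cost > threshold and conf set); otherwise fall back to the LRU block.
int selectVictim(const std::vector<WayInfo>& set) {
    std::vector<int> lowCostWays;
    int lruWay = 0;
    for (int w = 0; w < static_cast<int>(set.size()); ++w) {
        if (set[w].cost > kCostThreshold && set[w].conf)
            lowCostWays.push_back(w);
        if (set[w].lruRank > set[lruWay].lruRank)
            lruWay = w;
    }
    if (!lowCostWays.empty())
        return lowCostWays[std::rand() % lowCostWays.size()];
    return lruWay;
}
```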

17 LACS Implementation: Step (3), when the miss returns: calculate B's new cost, newCost = IIC - MSHR[B].IIR, then update B's table info: if (newCost ≈ B.cost) B.conf = 1, else B.conf = 0; finally, B.cost = newCost.
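A sketch of step (3), using the same hypothetical threshold as the earlier snippets. Interpreting "newCost ≈ B.cost" as "the new and stored costs classify the block the same way" is an assumption, not something stated on the slide.

```cpp
#include <cstdint>

constexpr uint32_t kCostThreshold = 64;  // same hypothetical threshold as above

// Step (3): when the miss for block B returns, recompute its cost and
// update its prediction-table entry. 'iic' is the current issued-instruction
// counter and 'iirAtMiss' the snapshot taken in step (1).
void updateCostOnMissReturn(uint32_t iic, uint32_t iirAtMiss,
                            uint32_t& storedCost, bool& conf) {
    uint32_t newCost = iic - iirAtMiss;  // instructions issued during the miss

    // Confidence: set when the new cost agrees with the stored one; here
    // "agrees" is taken to mean both classify the block the same way,
    // which is one plausible reading of the slide's comparison.
    bool wasLowCost = (storedCost > kCostThreshold);
    bool isLowCost  = (newCost > kCostThreshold);
    conf = (wasLowCost == isLowCost);

    storedCost = newCost;
}
```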

18 Outline: Motivation and Contribution, Related Work, LACS Storage Organization, LACS Implementation, Evaluation Environment, Evaluation, Conclusion.

19 Evaluation Environment: Evaluation uses SESC, a detailed, cycle-accurate, execution-driven simulator. 20 of the 26 SPEC2000 benchmarks are used with reference input sets; 2 billion instructions are simulated after skipping the first 2 billion. The benchmarks are divided into two groups (GrpA, GrpB), where GrpA contains the L2 cache performance-constrained benchmarks: ammp, applu, art, equake, gcc, mcf, mgrid, swim, twolf, and vpr. Baseline L2 cache: 512 KB, 8-way, WB, LRU.

20 Outline: Motivation and Contribution, Related Work, LACS Storage Organization, LACS Implementation, Evaluation Environment, Evaluation, Conclusion.

21 Evaluation: Performance improvement and L2 cache miss rates (figures).

22 Evaluation: Fraction of LRU blocks reserved by LACS that get re-used: ammp 94%, applu 22%, art 51%, equake 15%, gcc 89%, mcf 1%, mgrid 33%, swim 11%, twolf 21%, vpr 22%. Low-cost blocks in the cache: <20%. OPT-evicted blocks that were low-cost: 40% to 98%. There is a strong correlation between the blocks evicted by OPT and their cost.

23 Evaluation: Performance improvement in a CMP architecture (figure).

24 Evaluation: Sensitivity to cache parameters (minimum / average / maximum speedup):
256 KB, 8-way: 0% / 3% / 9%
512 KB, 8-way: 0% / 15% / 85%
1 MB, 8-way: -3% / 8% / 47%
2 MB, 8-way: -3% / 19% / 195%
512 KB, 4-way: 0% / 12% / 69%
512 KB, 16-way: -1% / 17% / 101%

25 Outline: Motivation and Contribution, Related Work, LACS Storage Organization, LACS Implementation, Evaluation Environment, Evaluation, Conclusion.

26 Conclusion: LACS's distinguishing features: novelty (a new metric for measuring cost-sensitivity); the combination of two principles (locality and cost-sensitivity); and performance improvements at feasible cost: 15% average speedup on the L2 cache performance-constrained benchmarks, effective in uniprocessor and CMP architectures, and effective for different cache configurations.

27 Thank You! Questions?

