1 ARC: A self-tuning, low overhead Replacement Cache Nimrod Megiddo and Dharmendra S. Modha Presented by Gim Tcaesvk

2 Cache and auxiliary memory Cache is expensive; the replacement policy is the only algorithm of interest.

3 Cache management problem Maximize the hit rate. Minimize the overhead: computation and space.

4 Belady's MIN Replaces the page whose next reference is furthest in the future. Optimal in every case; provides an upper bound on the hit ratio. But who knows the future?
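As an illustration (not part of the original slide), a minimal Python sketch of an offline MIN simulation; it assumes the entire reference string is known in advance, which is exactly what makes the policy unrealizable online:

def min_hit_ratio(refs, cache_size):
    """Belady's MIN: on a miss, evict the page whose next use is furthest away."""
    cache, hits = set(), 0
    for i, page in enumerate(refs):
        if page in cache:
            hits += 1
            continue
        if len(cache) >= cache_size:
            def next_use(p):
                # Position of the next reference to p, or infinity if never used again.
                for j in range(i + 1, len(refs)):
                    if refs[j] == p:
                        return j
                return float('inf')
            cache.remove(max(cache, key=next_use))
        cache.add(page)
    return hits / len(refs)

print(min_hit_ratio([1, 2, 3, 1, 2, 4, 1, 2], cache_size=3))  # upper bound for this trace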

5 LRU (Least Recently Used) Replaces the least recently used page. Optimum policy for SDD (Stack Depth Distribution). Captures recency but not frequency
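A minimal LRU sketch (illustrative, not from the slides), using Python's OrderedDict so that both the hit path and the eviction are O(1):

from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()              # least recently used page first

    def access(self, page):
        hit = page in self.pages
        if hit:
            self.pages.move_to_end(page)        # refresh recency on a hit
        else:
            if len(self.pages) >= self.capacity:
                self.pages.popitem(last=False)  # evict the least recently used page
            self.pages[page] = True
        return hit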

6 LFU (Least Frequently Used) Replaces the least frequently used page. Optimum policy for IRM (Independent Reference Model). Captures frequency but not recency
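And an analogous LFU sketch (illustrative); for brevity it scans for the minimum count on eviction, whereas a real implementation would use a priority queue, which is where the logarithmic complexity mentioned on the next slide comes from:

class LFUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.counts = {}                        # page -> reference count

    def access(self, page):
        if page in self.counts:
            self.counts[page] += 1
            return True
        if len(self.counts) >= self.capacity:
            victim = min(self.counts, key=self.counts.get)  # least frequently used page
            del self.counts[victim]
        self.counts[page] = 1
        return False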

7 Drawbacks of LFU Logarithmic complexity in cache size. A page with a high historical reference count is hardly ever paged out, even when it is no longer useful.

8 LRU-2 Replaces the page whose second-most-recent reference is furthest in the past. Optimal online policy among those that know at most the 2 most recent references to each page, under the IRM.

9 Limitations of LRU-2 Logarithmic complexity, due to the priority queue. A tunable parameter: CIP (Correlated Information Period).

10 2Q (modified LRU-2) Uses simple LRU lists instead of the priority queue. Two parameters, K_in and K_out: K_in = Correlated Information Period, K_out = Retained Information Period.
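A rough sketch of simplified 2Q (my own illustration; the queue names A1in, A1out, Am follow the 2Q paper's terminology, and the capacity split is a crude assumption): a FIFO probation queue for pages seen once, a ghost queue of recently evicted keys, and a main LRU queue for re-referenced pages.

from collections import OrderedDict, deque

class TwoQCache:
    def __init__(self, capacity, k_in, k_out):
        self.capacity, self.k_in, self.k_out = capacity, k_in, k_out
        self.a1_in = deque()      # FIFO of pages seen once recently (data cached)
        self.a1_out = deque()     # FIFO of keys evicted from a1_in (ghosts, no data)
        self.am = OrderedDict()   # LRU of pages re-referenced after leaving a1_in

    def access(self, page):
        if page in self.am:                   # hit in the main LRU queue
            self.am.move_to_end(page)
            return True
        if page in self.a1_in:                # hit in the probation FIFO: order unchanged
            return True
        if page in self.a1_out:               # ghost hit: the page earned a place in Am
            self.a1_out.remove(page)
            self._insert_am(page)
            return False                      # data was not in the cache, so still a miss
        self.a1_in.append(page)               # cold miss: admit into the probation FIFO
        if len(self.a1_in) > self.k_in:
            self.a1_out.append(self.a1_in.popleft())   # keep only the evicted key
            if len(self.a1_out) > self.k_out:
                self.a1_out.popleft()
        return False

    def _insert_am(self, page):
        am_capacity = max(self.capacity - self.k_in, 1)   # crude split; assumes k_in < capacity
        if len(self.am) >= am_capacity:
            self.am.popitem(last=False)       # evict the LRU page of Am
        self.am[page] = True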

11 LIRS (Low Inter-reference Recency Set) Maintains 2 LRU stacks of different sizes. L_lirs holds pages seen at least twice recently; L_hirs holds pages seen only once recently. Works well for IRM, but not for SDD.

12 FBR (Frequency-Based Replacement) Divides the LRU list into 3 sections: new, middle, and old. Reference counts of pages in the new section are not incremented. Replaces the page in the old section with the smallest reference count.

13 LRFU (Least Recently/Frequently Used) Exponential smoothing of recency and frequency into a single value. The decay parameter λ is hard to tune.
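The smoothing rule itself (as described in the ARC paper's survey of LRFU; the function below is just an illustration): each page keeps a combined recency/frequency value that decays by 2^(-λ) every step and is bumped by 1 when the page is referenced. As λ approaches 0 the policy behaves like LFU, and as it approaches 1 it behaves like LRU, which is why the choice of λ matters so much.

def lrfu_update(crf, referenced, lam):
    """One-step update of a page's combined recency/frequency value (decay factor 2**-lam)."""
    decayed = (2 ** -lam) * crf
    return 1.0 + decayed if referenced else decayed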

14 MQ (Multi-Queue) Uses m LRU queues Q_0, Q_1, ..., Q_(m-1). Q_i contains pages that have been seen at least 2^i but fewer than 2^(i+1) times recently.
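An illustrative helper (names are mine) mapping a page's recent reference count to its MQ queue index, following the [2^i, 2^(i+1)) rule on the slide:

import math

def mq_queue_index(ref_count, m):
    """Pages referenced between 2**i and 2**(i+1) - 1 times belong in Q_i (capped at Q_(m-1))."""
    return min(int(math.log2(ref_count)), m - 1) if ref_count >= 1 else 0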

15 Introducing a Class of Replacement Policies

16 DBL(2c): DouBLe Maintains 2 variable-sized LRU lists that together hold up to 2c pages.

17 DBL(2c) flow (flowchart). The most recent c pages will always be contained.
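A minimal sketch of DBL(2c)'s bookkeeping (my reconstruction from the paper's description, not from the slide): L1 holds pages seen exactly once recently, L2 holds pages seen at least twice recently, and together they never exceed 2c entries.

from collections import OrderedDict

class DBL:
    """DBL(2c): two variable-sized LRU lists tracking at most 2c pages in total."""
    def __init__(self, c):
        self.c = c
        self.l1 = OrderedDict()   # pages seen exactly once recently (LRU first)
        self.l2 = OrderedDict()   # pages seen at least twice recently (LRU first)

    def access(self, page):
        if page in self.l1:                           # second recent request: promote to L2
            del self.l1[page]
            self.l2[page] = True
        elif page in self.l2:                         # repeat request: refresh recency in L2
            self.l2.move_to_end(page)
        else:                                         # new page: make room, insert at MRU of L1
            if len(self.l1) == self.c:
                self.l1.popitem(last=False)           # L1 is full: drop its LRU page
            elif len(self.l1) + len(self.l2) == 2 * self.c:
                self.l2.popitem(last=False)           # total of 2c reached: drop LRU page of L2
            self.l1[page] = True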

18 Π(c): A new class of policies

19 FRC_p (Fixed Replacement Cache) FRC_p(c) is tuned by a single parameter p, 0 ≤ p ≤ c: it attempts to keep p pages in T1 and c − p pages in T2.

20 FRC_p(c) flow (flowchart of the replace/delete decisions; the diagram is not reproduced in this transcript).
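As a rough illustration of the decision the flowchart encodes (a sketch based on the paper's description, with my own variable names): FRC_p demotes a page from T1 to its ghost list B1 when T1 exceeds its target size p, and otherwise demotes from T2 to B2.

def frc_replace(t1, t2, b1, b2, p):
    """One replacement step; t1/t2 hold cached pages (LRU at index 0), b1/b2 are ghost lists.
    Assumes the cache is full, i.e. len(t1) + len(t2) == c."""
    if t1 and (len(t1) > p or not t2):
        b1.append(t1.pop(0))      # T1 is over its target p (or T2 is empty): evict from T1
    else:
        b2.append(t2.pop(0))      # otherwise evict the LRU page of T2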

21 ARC (Adaptive Replacement Cache) US Patent Assignee Name: IBM Corp.

22 ARC (Adaptive Replacement Cache) Adaptation parameter p ∈ [0, c]. For a given (fixed) value of p, it behaves exactly like FRC_p. But it learns: ARC continuously adapts and tunes p.

23 Learning We should increase the target size of T1 (raise p) on a hit in the ghost list B1, and increase the target size of T2 (lower p) on a hit in B2.
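A sketch of that adaptation step (the rule from the ARC paper, written with my own names and with integer arithmetic for simplicity; the paper keeps p real-valued): the increment is scaled by the relative sizes of the ghost lists, so a hit in the smaller ghost list moves p faster.

def adapt_p(p, c, hit_in_b1, len_b1, len_b2):
    """Adjust the target size p of T1 after a hit in one of the ghost lists B1/B2."""
    if hit_in_b1:
        delta = max(len_b2 // len_b1, 1) if len_b1 else 1
        return min(p + delta, c)      # B1 hit: T1 was too small, grow its target
    else:
        delta = max(len_b1 // len_b2, 1) if len_b2 else 1
        return max(p - delta, 0)      # B2 hit: T2 was too small, shrink T1's target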

24 ARC flow (flowchart of the replace/delete decisions, analogous to the FRC_p(c) flow; the diagram is not reproduced in this transcript).

25 Scan-Resistant A long sequence of 1-time-only reads will pass through L1. Fewer hits will be encountered in B1 than in B2, and hits in B2 make T2 grow by learning, so frequently used pages stay cached.

26 ARC example (diagram stepping through the lists T1, B1, B2, T2 as the adaptation parameter p changes; not reproduced in this transcript).

27 Experiment Various traces used in this paper.

28 Experiment: Hit ratio of OLTP ARC outperforms the other online algorithms and performs as well as the algorithms with offline-tuned parameters.

29 Experiment: Hit ratio (cont.) MQ outperforms LRU, while ARC outperforms all.

30 Experiment: Hit ratio (cont.) Comparison across various workloads shows that ARC outperforms the other policies.

31 Experiment: Cache size of P1 ARC always performs better than LRU.

32 Experiment: parameter p p dances to the tune being played by the workload.

33 Summary ARC is online and self-tuning; is robust for a wide range of workloads; has low overhead; is scan-resistant; outperforms LRU.

