ARC: A self-tuning, low overhead Replacement Cache


1 ARC: A self-tuning, low overhead Replacement Cache
Nimrod Megiddo and Dharmendra S. Modha Presented by Gim Tcaesvk

2 Cache
Cache memory is expensive. The replacement policy is the only algorithm of interest in cache management.

3 Cache management problem
Maximize the hit rate. Minimize the overhead: computation and space.

4 Belady’s MIN
Replaces the page whose next reference is furthest in the future. Optimal for every request sequence; provides an upper bound on the hit ratio. But who knows the future?

5 LRU (Least Recently Used)
Replaces the least recently used page. Optimal policy for SDD (Stack Depth Distribution). Captures recency but not frequency.
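Not from the slides: a minimal Python sketch of the LRU policy described above, using `OrderedDict` as the recency list (the class name and `access` interface are illustrative choices, not from the paper).

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: on a miss with a full cache, evict the
    least recently used page."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()  # keys ordered from LRU to MRU

    def access(self, page):
        """Return True on a hit, False on a miss (the page is then loaded)."""
        if page in self.pages:
            self.pages.move_to_end(page)    # refresh recency
            return True
        if len(self.pages) >= self.capacity:
            self.pages.popitem(last=False)  # evict the LRU page
        self.pages[page] = None
        return False
```

Both the hit path and the eviction are O(1), which is part of why LRU is so widely used.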

6 LFU (Least Frequently Used)
Replaces the least frequently used page. Optimal policy for IRM (Independent Reference Model). Captures frequency but not recency.
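Not from the slides: a companion Python sketch of LFU. A real implementation would keep a priority queue (hence the logarithmic complexity noted on the next slide); here a linear scan for the minimum count keeps the example short.

```python
class LFUCache:
    """Minimal LFU cache: on a miss with a full cache, evict the page
    with the smallest reference count (linear scan for clarity)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.count = {}  # page -> reference count

    def access(self, page):
        """Return True on a hit, False on a miss (the page is then loaded)."""
        if page in self.count:
            self.count[page] += 1
            return True
        if len(self.count) >= self.capacity:
            victim = min(self.count, key=self.count.get)  # least frequently used
            del self.count[victim]
        self.count[page] = 1
        return False
```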

7 Drawbacks of LFU
Logarithmic complexity in cache size. A page that was heavily used in the past is hardly ever paged out, even when it is no longer useful.

8 LRU-2
Replaces the page whose second-most-recent reference is furthest in the past. Optimal for IRM among online policies that know at most the 2 most recent references to each page.

9 Limitations of LRU-2
Logarithmic complexity, due to the priority queue. A tunable parameter: CIP (Correlated Information Period).

10 2Q (modified LRU-2)
Uses simple LRU lists instead of the priority queue. Two parameters, Kin and Kout: Kin = Correlated Information Period, Kout = Retained Information Period.

11 LIRS (Low Inter-reference Recency Set)
Maintains 2 LRU stacks of different sizes. Llirs holds pages seen at least twice recently; Lhirs holds pages seen only once recently. Works well for IRM, but not for SDD.

12 FBR (Frequency-Based Replacement)
Divides the LRU list into 3 sections: new, middle, and old. Reference counts of pages in the new section are not incremented. Replaces the page in the old section with the smallest reference count.

13 LRFU (Least Recently/Frequently Used)
Combines recency and frequency via exponential smoothing, but λ is hard to tune.
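Not from the slides: the LRFU paper's exponential smoothing keeps a Combined Recency and Frequency (CRF) value per page, updated on each reference with the weighting F(x) = (1/2)^(λx). A one-function sketch (the function name is illustrative):

```python
def crf_update(old_crf, delta, lam):
    """LRFU Combined Recency and Frequency update.
    With weighting F(x) = (1/2)**(lam * x), a new reference after
    `delta` time units gives:
        C_new = F(0) + F(delta) * C_old = 1 + 0.5**(lam * delta) * C_old
    lam near 0 behaves like LFU (pure counting); lam near 1 behaves
    like LRU (old references decay away almost immediately)."""
    return 1.0 + 0.5 ** (lam * delta) * old_crf
```

This makes the slide's complaint concrete: a single real-valued λ interpolates the whole LRU/LFU spectrum, and the right point on that spectrum is workload-dependent.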

14 MQ (Multi-Queue)
Uses m LRU queues Q0, Q1, …, Qm-1. Qi contains pages that have been referenced at least 2^i but fewer than 2^(i+1) times recently.
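Not from the slides: the queue-placement rule above amounts to taking the floor of log2 of the reference count, capped at the last queue. A tiny Python sketch (the function name is illustrative):

```python
def mq_queue_index(ref_count, m):
    """MQ placement: a page with `ref_count` recent references belongs
    in Q_i, where 2**i <= ref_count < 2**(i+1), capped at Q_{m-1}.
    For a positive integer, floor(log2(n)) == n.bit_length() - 1."""
    return min(ref_count.bit_length() - 1, m - 1)
```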

15 Introducing a Class of Replacement Policies

16 DBL(2c): DouBLe
Maintains 2 variable-sized LRU lists that together hold at most 2c pages.

17 DBL(2c) flow
On a hit, the page moves to the MRU position of L2. On a miss: if |L1| = c, the LRU page of L1 is deleted; otherwise, if |L1| + |L2| = 2c, the LRU page of L2 is deleted; the new page enters at the MRU position of L1.
The c most recent pages will always be contained.
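Not from the slides: a minimal Python sketch of the DBL(2c) flow as described in the paper, with L1 holding pages seen once recently and L2 pages seen at least twice (the class and method names are illustrative):

```python
from collections import deque

class DBL:
    """DBL(2c): two LRU lists. L1 holds pages seen exactly once
    recently, L2 pages seen at least twice; |L1| <= c and
    |L1| + |L2| <= 2c. Deques are ordered LRU (left) to MRU (right)."""
    def __init__(self, c):
        self.c = c
        self.L1 = deque()
        self.L2 = deque()

    def access(self, page):
        """Return True on a hit, False on a miss."""
        if page in self.L1:            # second recent reference: promote
            self.L1.remove(page)
            self.L2.append(page)
            return True
        if page in self.L2:            # repeat reference: refresh in L2
            self.L2.remove(page)
            self.L2.append(page)
            return True
        # miss: make room per the DBL replacement rule
        if len(self.L1) == self.c:
            self.L1.popleft()          # evict the LRU page of L1
        elif len(self.L1) + len(self.L2) == 2 * self.c:
            self.L2.popleft()          # evict the LRU page of L2
        self.L1.append(page)
        return False
```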

18 : A new class of policies

19 FRCp (Fixed Replacement Cache)
We define a parameter p with 0 ≤ p ≤ c; FRCp(c) keeps this tuning parameter p fixed.

20 FRCp(c) flow
(Flowchart of the FRCp(c) Delete/Replace decisions.)
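Not from the slides: the Replace step in the flowchart above can be sketched in Python. This simplified version omits the B2 tie-break that the full ARC algorithm adds; lists are plain Python lists ordered LRU-first (index 0 = LRU), and pages here are assumed nonempty where popped.

```python
def replace(T1, T2, B1, B2, p):
    """REPLACE step (sketch): if T1 holds more than its target of p
    pages, evict its LRU page into the history (ghost) list B1;
    otherwise evict T2's LRU page into B2."""
    if T1 and len(T1) > p:
        B1.append(T1.pop(0))  # evict LRU of T1, remember it in B1
    else:
        B2.append(T2.pop(0))  # evict LRU of T2, remember it in B2
```

The ghost lists B1/B2 keep only page identities, not data, which is what lets the cache observe "would-have-been hits" later.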

21 ARC (Adaptive Replacement Cache)
US Patent; Assignee Name: IBM Corp.

22 ARC (Adaptive RC)
Adaptation parameter p ∈ [0, c]. For a given (fixed) value of p, ARC behaves exactly like FRCp. But it learns: ARC continuously adapts and tunes p.

23 Learning
We should increase p on a hit in the ghost list B1 (T1 was too small; favor recency) and decrease p on a hit in B2 (T2 was too small; favor frequency).
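Not from the slides: the paper's learning rule, sketched as a Python function (the function name is illustrative). The step size is the ratio of the ghost lists' lengths, floored at 1, so the shorter ghost list moves p faster when it gets a hit.

```python
def adapt_p(p, c, hit_in_B1, len_B1, len_B2):
    """ARC adaptation: a hit in ghost list B1 grows the target p for T1
    (toward recency); a hit in B2 shrinks it (toward frequency).
    p always stays within [0, c]."""
    if hit_in_B1:
        delta = max(len_B2 / len_B1, 1) if len_B1 else 1
        return min(p + delta, c)
    else:
        delta = max(len_B1 / len_B2, 1) if len_B2 else 1
        return max(p - delta, 0)
```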

24 ARC flow
(Flowchart of the ARC Delete/Replace decisions: as in FRCp(c), but p is adapted on ghost-list hits.)

25 Scan-Resistant
A long sequence of one-time-only reads passes through L1 without disturbing L2. Fewer hits are encountered in B1 than in B2, so by learning, T2 grows and frequently used pages are protected.

26 ARC example
(Worked example: pages flowing through T1, T2, B1, and B2 while p adapts from 2 to 3.)

27 Experiment
Various traces were used in this paper.

28 Experiment: Hit ratio of OLTP
ARC outperforms algorithms with online parameters, and performs as well as algorithms whose parameters were tuned offline.

29 Experiment: Hit ratio (cont.)
MQ outperforms LRU, while ARC outperforms all.

30 Experiment: Hit ratio (cont.)
Comparison across various workloads shows that ARC outperforms the others.

31 Experiment: Cache size of P1
ARC performs better than LRU at every cache size.

32 Experiment: parameter p
“Dances to the tune being played by workload.”

33 Summary
ARC is online and self-tuning, is robust across a wide range of workloads, has low overhead, is scan-resistant, and outperforms LRU.

