Presentation on theme: "ARC: A SELF-TUNING, LOW OVERHEAD REPLACEMENT CACHE" — Presentation transcript:
1. ARC: A SELF-TUNING, LOW OVERHEAD REPLACEMENT CACHE
Nimrod Megiddo and Dharmendra S. Modha, IBM Almaden Research Center
2. Introduction (I)
Caching is widely used in:
- storage systems
- databases
- web servers
- processors
- file systems
- disk drives
- RAID controllers
- operating systems
3. Introduction (II)
ARC is a new cache replacement policy:
- Scan-resistant: better than LRU
- Self-tuning: avoids the tuning problem of many recent cache replacement policies
- Tested on numerous workloads
4. Our Model (I)
[Diagram: pages are fetched on demand from secondary storage into the cache/main memory; the replacement policy decides which pages are evicted back to secondary storage.]
5. Our Model (II)
- The cache stores uniformly sized items (pages)
- Pages are fetched into the cache on demand
- Cache evictions are decided by the cache replacement policy
- Performance metrics include:
  - Hit rate (= 1 - miss rate)
  - Overhead of the policy
6. Previous Work (I)
- Offline optimal (MIN): replaces the page with the greatest forward distance
  - Requires knowledge of the future
  - Provides an upper bound on achievable performance
- Recency (LRU): most widely used policy
- Frequency (LFU): optimal under the independent reference model
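To make the recency policy concrete, here is a minimal LRU cache sketch (my own illustration, not code from the paper; the capacity of 3 in the usage example below is an arbitrary choice):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: evicts the least recently used page on a miss."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()  # keys ordered from LRU (front) to MRU (back)

    def access(self, page):
        """Reference a page; return the evicted page, or None."""
        if page in self.pages:
            self.pages.move_to_end(page)  # hit: page becomes most recent
            return None
        victim = None
        if len(self.pages) >= self.capacity:
            victim, _ = self.pages.popitem(last=False)  # evict LRU page
        self.pages[page] = True
        return victim
```

For example, with capacity 3, accessing A, B, C and then D evicts A, the least recently used page.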
7. Previous Work (II)
- LRU-2: replaces the page with the least recent penultimate reference
  - Better hit ratio than LRU
  - Needs to maintain a priority queue (corrected in the 2Q policy)
  - Must still decide how long a page that has been accessed only once should be kept in the cache; the 2Q policy has the same problem
8. Example
Last two references to pages A and B, in time order:

  ... A ... B ... B ... A  -->  time

- LRU expels B, because A was accessed after this last reference to B.
- LRU-2 expels A, because B was accessed twice after this next-to-last reference to A.
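The slide's example can be checked with a small sketch that computes each policy's victim from the reference trace (the helper name and trace encoding are my own, not from the paper):

```python
def pick_victims(trace, candidates):
    """Return (LRU victim, LRU-2 victim) among `candidates` after `trace`."""
    last = {}         # page -> time of its last reference
    penultimate = {}  # page -> time of its next-to-last reference
    for t, page in enumerate(trace):
        if page in last:
            penultimate[page] = last[page]
        last[page] = t
    # LRU evicts the page with the oldest last reference;
    # LRU-2 evicts the page with the oldest penultimate reference.
    lru_victim = min(candidates, key=lambda p: last[p])
    lru2_victim = min(candidates, key=lambda p: penultimate[p])
    return lru_victim, lru2_victim
```

On the trace A, B, B, A this returns ("B", "A"): LRU picks B, LRU-2 picks A, matching the slide.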
9. Previous Work (III)
- Low Inter-reference Recency Set (LIRS)
- Frequency-Based Replacement (FBR)
- Least Recently/Frequently Used (LRFU): subsumes LRU and LFU
- All require a tuning parameter
- Automatic LRFU (ALRFU): an adaptive version of LRFU, but it still requires a tuning parameter
10. ARC (I)
- Maintains two LRU lists:
  - L1: pages that have been referenced only once
  - L2: pages that have been referenced at least twice
- Each list has the same length c as the cache
- The cache contains the tops of both lists, T1 and T2
- The bottoms, B1 and B2, are not in the cache
11. ARC (II)
[Diagram: list L1 splits into top T1 (in cache) and bottom B1; list L2 splits into top T2 (in cache) and bottom B2. |T1| + |T2| = c. B1 and B2 are "ghost caches" (not in memory): they track page identities only, not page contents.]
12. ARC (III)
- ARC attempts to maintain a target size target_T1 for list T1
- When the cache is full, ARC expels:
  - the LRU page from T1 if |T1| >= target_T1
  - the LRU page from T2 otherwise
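The eviction rule above can be sketched as follows (a simplified sketch of the paper's replacement step, not the full ARC algorithm; the deque-based list representation is my own choice, with the front of each deque as its LRU end):

```python
from collections import deque

def replace(T1, T2, B1, B2, target_T1):
    """Evict one page from a full cache; its identity moves to a ghost list.

    Assumes at least one of T1/T2 is non-empty (the cache is full).
    """
    if T1 and len(T1) >= target_T1:
        victim = T1.popleft()  # LRU page of T1
        B1.append(victim)      # remember its identity in ghost list B1
    else:
        victim = T2.popleft()  # LRU page of T2
        B2.append(victim)      # remember its identity in ghost list B2
    return victim
```

For example, with T1 = [a, b], T2 = [c, d], and target_T1 = 1, the victim is a (from T1); if target_T1 were 3, the victim would be c (from T2).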
13. ARC (IV)
- If the missing page was in bottom B1 of L1, ARC increases target_T1:
  target_T1 = min(target_T1 + max(|B2|/|B1|, 1), c)
- If the missing page was in bottom B2 of L2, ARC decreases target_T1:
  target_T1 = max(target_T1 - max(|B1|/|B2|, 1), 0)
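A direct transcription of the slide's adaptation formulas (the function and parameter names are my own; c is the cache size in pages):

```python
def adapt(target_T1, hit_in_B1, len_B1, len_B2, c):
    """Update target_T1 after a miss that hit a ghost list."""
    if hit_in_B1:
        # A hit in B1 suggests T1 was too small: grow the target,
        # clamped to the cache size c.
        delta = max(len_B2 / len_B1, 1)
        return min(target_T1 + delta, c)
    else:
        # A hit in B2 suggests T2 was too small: shrink the target,
        # clamped to 0.
        delta = max(len_B1 / len_B2, 1)
        return max(target_T1 - delta, 0)
```

The max(..., 1) term guarantees each adjustment moves the target by at least one page, and the ratio makes the step larger when the other ghost list dominates, so the policy adapts faster to a pronounced workload shift.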
14. ARC (V)
- The overall result is that two heuristics compete with each other
- Each heuristic is rewarded any time it can show that adding more pages to its top list would have avoided a cache miss
- Note that ARC has no tunable parameter: it cannot be tuned wrong!
15. Experimental Results
Tested over 23 traces:
- ARC always outperforms LRU
- It performs as well as more sophisticated policies, even when they are specifically tuned for the workload
- The sole exception is 2Q; ARC still outperforms 2Q when 2Q has no advance knowledge of the workload characteristics