ARC: A SELF-TUNING, LOW OVERHEAD REPLACEMENT CACHE


ARC: A SELF-TUNING, LOW OVERHEAD REPLACEMENT CACHE Nimrod Megiddo Dharmendra S. Modha IBM Almaden Research Center

Introduction (I)
Caching is widely used in storage systems, databases, web servers, processors, file systems, disk drives, RAID controllers, and operating systems.

Introduction (II)
ARC is a new cache replacement policy:
- Scan-resistant: better than LRU
- Self-tuning: avoids the tuning-parameter problem of many recent cache replacement policies
- Tested on numerous workloads

Our Model (I)
[Diagram: pages are fetched on demand from secondary storage into the cache (main memory); a replacement policy decides which cached pages are evicted back out.]

Our Model (II)
- The cache stores uniformly sized items (pages)
- Pages are fetched into the cache on demand
- Evictions are decided by the cache replacement policy
- Performance metrics include:
  - hit rate (= 1 - miss rate)
  - overhead of the policy

Previous Work (I)
- Offline optimal (MIN): replaces the page with the greatest forward distance, i.e., whose next use lies farthest in the future
  - Requires knowledge of the future
  - Provides an upper bound on the achievable hit rate
- Recency (LRU): the most widely used policy
- Frequency (LFU): optimal under the independent reference model
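As a concrete reference point for the policies above, LRU fits in a few lines (a minimal Python sketch for illustration, not code from the paper):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: evicts the least recently used page."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()  # keys kept in recency order, MRU last

    def access(self, page):
        """Record an access; return True on a hit, False on a miss."""
        if page in self.pages:
            self.pages.move_to_end(page)  # refresh recency on a hit
            return True
        if len(self.pages) >= self.capacity:
            self.pages.popitem(last=False)  # evict the LRU page
        self.pages[page] = True
        return False
```

A long sequential scan through such a cache flushes every resident page, which is exactly the weakness ARC's scan resistance addresses.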

Previous Work (II)
- LRU-2: replaces the page with the least recent penultimate reference
  - Better hit ratio than LRU
  - Needs to maintain a priority queue (an overhead corrected by the 2Q policy)
  - Must still decide how long a page that has been accessed only once should be kept in the cache; 2Q has the same problem
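The LRU-2 rule can be sketched naively (a Python illustration only: eviction is a linear scan rather than the priority queue the slide mentions, and pages seen only once are given a penultimate time of minus infinity, so they are evicted first; that arbitrary choice is precisely the open question noted above):

```python
class LRU2Cache:
    """Naive LRU-2: evict the page with the least recent penultimate reference."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.hist = {}   # page -> (penultimate_time, last_time)
        self.clock = 0

    def access(self, page):
        """Record an access; return True on a hit, False on a miss."""
        self.clock += 1
        hit = page in self.hist
        if not hit and len(self.hist) >= self.capacity:
            # Evict the page with the smallest penultimate timestamp
            # (linear scan for clarity; a real implementation uses a heap).
            victim = min(self.hist, key=lambda p: self.hist[p][0])
            del self.hist[victim]
        prev_last = self.hist[page][1] if hit else float("-inf")
        self.hist[page] = (prev_last, self.clock)
        return hit
```

On the trace A B B A from the example slide, a subsequent miss evicts A (penultimate time 1) rather than B (penultimate time 2), matching LRU-2's behavior there.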

Example
Last two references to pages A and B (time runs left to right):
... A ... B ... B ... A
- LRU expels B, because A was accessed after this last reference to B
- LRU-2 expels A, because B was accessed twice after this next-to-last reference to A

Previous Work (III)
- Low Inter-reference Recency Set (LIRS)
- Frequency-Based Replacement (FBR)
- Least Recently/Frequently Used (LRFU): subsumes LRU and LFU
All require a tuning parameter.
- Automatic LRFU (ALRFU): an adaptive version of LRFU that still requires a tuning parameter

ARC (I)
- Maintains two LRU lists:
  - L1: pages that have been referenced only once
  - L2: pages that have been referenced at least twice
- Each list has the same length c as the cache
- The cache contains the tops of both lists, T1 and T2
- The bottoms, B1 and B2, are not in the cache

ARC (II)
[Diagram: L1 = T1 + B1 and L2 = T2 + B2; T1 and T2 are resident in the cache, with |T1| + |T2| = c; B1 and B2 are "ghost caches" that record only page identities and are not in memory.]

ARC (III)
- ARC attempts to maintain a target size target_T1 for list T1
- When the cache is full, ARC expels:
  - the LRU page of T1 if |T1| >= target_T1
  - the LRU page of T2 otherwise
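The eviction rule can be sketched as a standalone helper (an illustration, not the paper's pseudocode: `t1`, `t2`, `b1`, `b2` are assumed to be plain lists with the LRU page at the front, and the paper's full REPLACE routine has an extra special case for misses that hit in B2, which this sketch omits):

```python
def replace(t1, t2, b1, b2, target_t1):
    """Evict the LRU page from T1 or T2, demoting it to the matching ghost list."""
    if t1 and len(t1) >= target_t1:
        b1.append(t1.pop(0))   # demote LRU page of T1 into ghost list B1
    else:
        b2.append(t2.pop(0))   # demote LRU page of T2 into ghost list B2
```

The ghost lists keep only page identities, so a later miss on a demoted page can still be detected and used as feedback.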

ARC (IV)
- If the missing page was in bottom B1 of L1, ARC increases target_T1:
  target_T1 = min(target_T1 + max(|B2|/|B1|, 1), c)
- If the missing page was in bottom B2 of L2, ARC decreases target_T1:
  target_T1 = max(target_T1 - max(|B1|/|B2|, 1), 0)
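The adaptation step translates directly into code (a sketch assuming list sizes are passed in as plain integers; the division-by-zero guards are an assumption of this illustration, since the formulas only apply when a ghost hit occurred):

```python
def adapt(target_t1, b1_len, b2_len, cache_size, hit_in_b1):
    """Move target_T1 toward the ghost list that just saw a hit."""
    if hit_in_b1:
        # Hit in B1: T1 was too small, so grow its target, capped at c.
        delta = max(b2_len / b1_len, 1) if b1_len else 1
        return min(target_t1 + delta, cache_size)
    else:
        # Hit in B2: T2 was too small, so shrink T1's target, floored at 0.
        delta = max(b1_len / b2_len, 1) if b2_len else 1
        return max(target_t1 - delta, 0)
```

The ratio in the step size makes the smaller ghost list's hits count for more, so the target moves quickly when the workload shifts.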

ARC (V)
The overall result is that two heuristics compete with each other: each heuristic is rewarded whenever it can show that adding more pages to its top list would have avoided a cache miss.
Note that ARC has no tunable parameter: you cannot get it wrong!

Experimental Results
Tested on 23 traces:
- ARC always outperforms LRU
- It performs as well as more sophisticated policies, even when those are specifically tuned for the workload
- The sole exception is 2Q; ARC still outperforms 2Q when 2Q has no advance knowledge of the workload characteristics