Improving Proxy Cache Performance: Analysis of Three Replacement Policies
John Dilley and Martin Arlitt. IEEE Internet Computing, Volume 3, Issue 6, Nov.-Dec. 1999.


Outline
- Introduction
- Related work
- New replacement policies
- Simulation
- Conclusion

Introduction
The goals of a web cache:
- Reduce the bandwidth demand on the external network.
- Reduce the average time it takes for a web page to load.

Possible locations of a web cache:
- Client side
- Proxy side
- Web server side

Cache replacement policy
Why replacement is needed:
- A cache server has a fixed amount of storage.
- When this storage space fills, the cache must choose a set of objects to evict.

Cache replacement policy (cont.)
The goal of a cache replacement policy:
- To make the best use of available resources, including disk and memory space and network bandwidth.

Web proxy workload characterization
Workload characteristics of interest:
- The patterns in the number of objects referenced and the relationships among accesses.

Web proxy workload characterization (cont.)
The workload chosen for simulation:
- Traces of actual live execution.
Drawback of such traces:
- A trace does not capture changing behavior or the behavior of a different user set.

Measures of efficiency
- Hit rate: the most popular measure.
- Byte hit rate: of particular interest when bandwidth is limited and expensive (both rates are sketched below).
- CPU and I/O system utilization.
- Object retrieval latency (or page load time).
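A minimal sketch of how the two rate measures can be computed from a request trace; the trace format and field names are assumptions for illustration, not taken from the paper:

```python
# Hypothetical trace entries: (object_url, object_size_in_bytes, served_from_cache)
trace = [
    ("/index.html",  10_000, True),
    ("/logo.png",     4_000, False),
    ("/video.mp4",  900_000, True),
]

hits        = sum(1 for _, _, hit in trace if hit)
hit_bytes   = sum(size for _, size, hit in trace if hit)
total       = len(trace)
total_bytes = sum(size for _, size, _ in trace)

hit_rate      = hits / total                # fraction of requests served from cache
byte_hit_rate = hit_bytes / total_bytes     # fraction of bytes served from cache
print(f"hit rate = {hit_rate:.2%}, byte hit rate = {byte_hit_rate:.2%}")
```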

An optimal cache replacement policy
- Would know each document's future popularity and choose the most advantageous way to use the cache's finite space.

Heuristics to approximate optimal cache replacement:
- Caching a few documents with high download latency might actually reduce average latency.
- To maximize object hit rate, it is better to keep many small popular objects.

Related work
Replacement algorithms:
- Least Recently Used (LRU)
  - Works well when there is high temporal locality in the workload.
  - Typically implemented with a doubly linked list (a sketch follows below).
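A minimal LRU sketch, assuming objects are keyed by URL and capacity is measured in bytes; Python's OrderedDict stands in for the explicit doubly linked list, since it offers the same move-to-front and evict-from-tail operations:

```python
from collections import OrderedDict

class LRUCache:
    """Evict the least recently used object once the byte capacity is exceeded."""

    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.store = OrderedDict()            # url -> size; order = recency (oldest first)

    def access(self, url, size):
        """Record a request; return True on a cache hit, False on a miss."""
        if url in self.store:
            self.store.move_to_end(url)       # hit: mark as most recently used
            return True
        self.store[url] = size                # miss: insert as most recently used
        self.used += size
        while self.used > self.capacity:      # evict from the least recently used end
            _, evicted_size = self.store.popitem(last=False)
            self.used -= evicted_size
        return False
```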

Replacement algorithms (cont.)
- Least Frequently Used (LFU)
  - Evicts the objects with the lowest reference count.
  - Typically implemented with a heap (a sketch follows below).
  - Its drawback is cache pollution: once-popular objects can linger in the cache long after they stop being referenced.
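A sketch of heap-based LFU bookkeeping, with the reference count as the only key; counting capacity in objects and skipping stale heap entries lazily are implementation choices made here for brevity, not something the paper prescribes:

```python
import heapq

class LFUCache:
    """Evict the object with the lowest reference count."""

    def __init__(self, capacity_objects):
        self.capacity = capacity_objects
        self.count = {}          # url -> current reference count
        self.heap = []           # (count_at_push, url); may contain stale entries

    def access(self, url):
        """Record a request; return True on a cache hit, False on a miss."""
        hit = url in self.count
        self.count[url] = self.count.get(url, 0) + 1
        heapq.heappush(self.heap, (self.count[url], url))
        while len(self.count) > self.capacity:
            cnt, victim = heapq.heappop(self.heap)
            if self.count.get(victim) == cnt:    # skip stale entries
                del self.count[victim]           # evict the least frequently used object
        return hit
```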

Replacement algorithms (cont.)
- LFU-Aging (LFUA)
  - Considers both an object's access frequency and its age in the cache.
  - Avoids the cache pollution problem (an aging sketch follows below).
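A minimal sketch of one common aging step; the halving rule and the threshold parameter are assumptions about the general technique, not this paper's exact formulation. When the average reference count exceeds a configured maximum, all counts are reduced so that formerly popular but now cold objects can eventually be evicted:

```python
def age_counts(count, avg_count_max=10):
    """Halve every reference count once the average count exceeds avg_count_max."""
    if count and sum(count.values()) / len(count) > avg_count_max:
        for url in count:
            count[url] //= 2     # old popularity decays, bounding cache pollution
```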

Replacement algorithms (cont.)
- Greedy Dual-Size (GDS)
  - Takes object size and a cost function for retrieving objects into account (the usual priority formula is given below).
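As background (the usual formulation due to Cao and Irani, not spelled out on this slide): each object p is assigned the value H(p) = L + cost(p) / size(p), where L is an inflation value that starts at 0 and is raised to the H value of each evicted object. The object with the smallest H is evicted, so large objects and cheap-to-fetch objects are the first to go, while the rising L keeps long-resident objects from holding space forever.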

New replacement policies
LFU with Dynamic Aging (LFUDA)
- Adds a cache age factor to the reference count whenever a new object is added or an existing object is referenced (a worked example follows below).
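A small worked example of the aging effect (the numbers are illustrative): suppose the cache age factor is 3 and an object has been referenced 5 times, giving it a key of 5 + 3 = 8. If an object is then evicted with key 4, the cache age rises to 4, so a brand-new object with a single reference immediately receives key 1 + 4 = 5. Objects whose keys were assigned while the cache age was still small therefore lose out to newly popular objects, which is the aging behavior plain LFU lacks.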

New replacement policies (cont.)
GDS-Frequency (GDSF)
- Considers reference frequency in addition to object size.
- Assigns a key to each object (sketched below):
  Key = (reference count of the object / object size) + cache age
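A minimal GDSF sketch following the key on this slide; the function and variable names are illustrative, and raising the cache age to the evicted object's key mirrors the GreedyDual-Size inflation rule:

```python
cache_age = 0.0          # raised to the key of each evicted object
refs, keys = {}, {}

def on_access(url, size_bytes):
    """Update the GDSF key when an object is added or referenced."""
    refs[url] = refs.get(url, 0) + 1
    keys[url] = refs[url] / size_bytes + cache_age   # small, popular objects score high

def evict_one():
    """Evict the object with the smallest key and advance the cache age."""
    global cache_age
    victim = min(keys, key=keys.get)
    cache_age = keys.pop(victim)
    refs.pop(victim, None)
    return victim
```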

Simulation
Two types of cache replacement implementation:
- Heap-based, such as GDSF, LFUDA, and LRU_H
  - Uses a heap as its data structure.
  - Lets memory usage grow to a high watermark before examining any cache metadata (see the watermark sketch after this list).

Two types of cache replacement implementation (cont.):
- List-based, such as LRU_L
  - Uses a linked list as its data structure.
  - Examines object metadata more often in order to implement the LRU reference age.
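A sketch of the watermark-driven eviction loop referred to above; the watermark fractions are assumed values, and evict_one_object stands for any of the policies sketched earlier:

```python
HIGH_WATERMARK = 0.95    # start evicting when the cache is 95% full (assumed value)
LOW_WATERMARK  = 0.90    # stop once usage falls back below 90% (assumed value)

def maybe_evict(cache_used, cache_capacity, evict_one_object):
    """Touch cache metadata only after usage crosses the high watermark."""
    if cache_used / cache_capacity < HIGH_WATERMARK:
        return cache_used
    while cache_used / cache_capacity > LOW_WATERMARK:
        cache_used -= evict_one_object()   # returns the number of bytes freed
    return cache_used
```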

Simulation results: hit rate (figure)

Simulation results: byte hit rate (figure)

Simulation results: CPU utilization (figure)

Findings
- LRU_L has higher and more variable response times than LRU_H because of the extra time spent computing the reference age and choosing the object to evict.
- That extra time seems to translate into a higher hit rate for LRU_L.
- GDSF shows consistent improvement over LRU.

Conclusion
- More sophisticated policies (heap-based, for example) can be implemented without degrading the cache's performance.
- The bottleneck is I/O (network and disk), not CPU utilization.
- To achieve the greatest latency reduction, optimize the cache hit rate.

- When bandwidth is expensive, focus on improving the byte hit rate.
- The impact of consistency validation of cached objects becomes more significant.