1 Caching and Prefetching for Web Content Distribution. Presented by: Harpreet Singh and Sidong Zeng. ECE 7995, Fall 2007

2 Contents: Introduction. Overview: Proxy Caching Systems. Caching Challenges and Solutions. Cache Replacement and Prefetching. Consistency Management. Cache Cooperation. Further Enhancements. Conclusion.

3 Introduction: The WWW is the Internet's most widely used tool for information access, but users now often experience long access latency due to network congestion. Caching and prefetching techniques play an important role in solving this problem.

4 Proxy Caching System: A proxy is generally deployed at the network edge, as an enterprise network gateway or firewall. The proxy processes internal client requests either locally or by forwarding them to the remote server. Because the proxy is shared by internal clients with similar interests, it is natural for it to cache commonly requested objects.

5 Proxy Caching System: When the proxy cannot satisfy a request from its cache, a cache miss occurs and the request is forwarded to the origin server.

6 Proxy Caching System: When the proxy holds a valid copy of the requested object, a cache hit occurs and the proxy serves the object directly.
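
A minimal sketch of the hit/miss path described on slides 5 and 6, assuming a simple in-memory dictionary cache and a hypothetical fetch_from_origin() helper standing in for the forwarded request:

```python
cache = {}  # url -> cached object body

def fetch_from_origin(url):
    # Placeholder: a real proxy would issue an HTTP request to the remote server here.
    return "<object for %s>" % url

def handle_request(url):
    if url in cache:                 # cache hit: serve the object locally
        return cache[url]
    body = fetch_from_origin(url)    # cache miss: forward the request to the origin server
    cache[url] = body                # keep a copy for future requests
    return body

print(handle_request("http://example.com/index.html"))  # miss, then cached
print(handle_request("http://example.com/index.html"))  # hit
```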

7 Caching Challenges: Cache replacement and prefetching. Consistency management. Cooperative management.

8 Issues in Web Caching: Size: a proxy cache must be capable of handling numerous concurrent user requests. Heterogeneity: differences in hardware and software configurations, connection bandwidth, and access behaviors make cache management difficult. Loose coupling: proxy cache consumers (Web browsers) and suppliers (servers) are loosely coupled, which makes managing consistency and cooperation among proxy caches particularly difficult.

9 Cache Replacement: With insufficient disk space, a proxy must decide which existing objects to purge when a new object arrives. Replacement algorithms such as LRU are used, although LRU offers limited room for improvement.
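
A minimal LRU replacement sketch, assuming a fixed-capacity in-memory cache; Python's OrderedDict is used purely for illustration:

```python
from collections import OrderedDict

class LRUCache:
    """Evicts the least recently used object when capacity is exceeded."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.objects = OrderedDict()   # url -> body, least recently used first

    def get(self, url):
        if url not in self.objects:
            return None                          # cache miss
        self.objects.move_to_end(url)            # mark as most recently used
        return self.objects[url]

    def put(self, url, body):
        if url in self.objects:
            self.objects.move_to_end(url)
        self.objects[url] = body
        if len(self.objects) > self.capacity:
            self.objects.popitem(last=False)     # purge the least recently used object
```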

10 Prefetching Policies: Three types of prefetching policies: mixed access pattern, per-client access pattern, and object structural information.

11 Mixed Access Pattern: This policy uses aggregate access patterns from different clients but does not consider which client made each request. An example is the Top-10 proposal, which uses popularity-based prediction. This scheme determines how many objects to prefetch, and from which servers, using two parameters.

13 Mixed Access Pattern (cont.): M is the number of times the client must have contacted a server before it can prefetch from it, and N is the maximum number of objects the client can prefetch from a server. If the number of objects fetched in the previous measurement period, L, reaches the threshold M, the client prefetches the K most popular objects from the server, where K = min{N, L}.
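
A sketch of this parameterized decision, assuming per-server popularity counts have already been collected; M, N, and L follow the definitions above, and the function name is illustrative:

```python
def objects_to_prefetch(L, popularity, M, N):
    """Top-10 style prefetch decision for one server.

    L          -- number of objects fetched in the previous measurement period
    popularity -- list of (object_url, request_count) pairs for this server
    M          -- threshold L must reach before prefetching is allowed
    N          -- maximum number of objects to prefetch from one server
    """
    if L < M:
        return []                                # too little recent activity to prefetch
    k = min(N, L)                                # K = min{N, L}
    ranked = sorted(popularity, key=lambda p: p[1], reverse=True)
    return [url for url, _ in ranked[:k]]        # the K most popular objects

# Example: 8 objects fetched last period, threshold M=5, cap N=10
print(objects_to_prefetch(8, [("/a", 40), ("/b", 25), ("/c", 10)], M=5, N=10))
```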

14 Per-Client Access Pattern: This policy first analyzes access patterns on a per-client basis, then uses the aggregated access patterns for prediction. Which objects each client accesses, and when, is analyzed, and prefetching predictions are made from that history.

15 Per-Client Access Pattern: Markov modeling is a common analysis tool here: the policy builds a Markov graph from access histories and uses the graph to make prefetching predictions. Each node represents a set of Web objects; if the same client accesses two nodes (A and B) in order within a certain period of time, the policy draws a directed edge from A to B and assigns it a weight equal to the transition probability from A to B.

16 Per-Client Access Pattern: example Markov graph (figure).

17 In the example graph, the probability of accessing B after A is 0.3, and the probability of accessing C after A is 0.7. To make a prefetching prediction, a search algorithm traverses the graph starting from the current object set and computes the access likelihood for its successors; the prefetching algorithm then decides how many successors to preload, depending on factors such as access likelihood and the bandwidth available for prefetching.
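
A sketch of this graph-based prediction, assuming the Markov graph is stored as a weighted adjacency map; the likelihood threshold and bandwidth budget are illustrative tuning parameters, not values from the slides:

```python
# Markov graph from slides 16-17: nodes are object sets, edge weights are
# transition probabilities observed in per-client access histories.
graph = {
    "A": {"B": 0.3, "C": 0.7},
    "B": {},
    "C": {},
}

def predict_prefetch(current, graph, threshold=0.5, bandwidth_budget=2):
    """Return the successors of `current` worth preloading.

    threshold        -- minimum access likelihood (assumed tuning parameter)
    bandwidth_budget -- cap on preloaded objects, standing in for available bandwidth
    """
    successors = graph.get(current, {})
    likely = [(node, p) for node, p in successors.items() if p >= threshold]
    likely.sort(key=lambda item: item[1], reverse=True)   # most likely first
    return [node for node, _ in likely[:bandwidth_budget]]

print(predict_prefetch("A", graph))   # -> ['C'], since P(C|A) = 0.7 passes the threshold
```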

18 Object Structural Information: This scheme exploits the local information contained in objects themselves. Hyperlinks, for example, are good indicators of future accesses because users tend to access objects by clicking on links rather than typing new URLs. This approach can also be combined with access-pattern-based policies to further improve prediction efficiency and accuracy.
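
A sketch of using structural information: extracting the hyperlinks of a cached HTML object as prefetch candidates, using Python's standard html.parser for illustration:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects href targets, which slide 18 treats as likely next accesses."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

page = '<html><body><a href="/news.html">News</a> <a href="/sports.html">Sports</a></body></html>'
extractor = LinkExtractor()
extractor.feed(page)
print(extractor.links)   # candidate objects to prefetch: ['/news.html', '/sports.html']
```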

19 Consistency Management: If the origin server updates an object after a proxy caches it, the cached copy becomes stale. A consistency algorithm should ensure consistency between the cached copy and the original object.

20 Consistency Algorithms: Consistency algorithms can be classified as strong consistency or weak consistency. If t is the delay between the proxy and the server, a strong consistency algorithm returns an object that is outdated by at most t.

21 Enforcing Strong Consistency: Server-driven invalidation: the server must invalidate the proxies' copies before it can update the objects; this requires extra space to maintain the state of all cached objects. Client-driven validation: the proxy validates the freshness of the cached copy with the server on every cache-hit access; this generates numerous unnecessary messages. A hybrid approach has been developed to balance the space required to maintain state against the message volume that validations require.
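
A sketch of client-driven validation using standard HTTP conditional requests (If-None-Match / If-Modified-Since); the cache-entry structure is an assumption for illustration:

```python
import urllib.request
import urllib.error

def validate_with_server(url, entry):
    """Revalidate a cached copy on a cache hit (client-driven validation).

    entry is assumed to hold the validators saved when the object was cached:
    {'etag': ..., 'last_modified': ..., 'body': ...}
    """
    request = urllib.request.Request(url)
    if entry.get("etag"):
        request.add_header("If-None-Match", entry["etag"])
    if entry.get("last_modified"):
        request.add_header("If-Modified-Since", entry["last_modified"])
    try:
        with urllib.request.urlopen(request) as response:
            entry["body"] = response.read()                    # object changed: refresh the copy
            entry["etag"] = response.headers.get("ETag")
            entry["last_modified"] = response.headers.get("Last-Modified")
    except urllib.error.HTTPError as err:
        if err.code != 304:                                    # 304 Not Modified: cached copy is still fresh
            raise
    return entry["body"]
```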

22 Weak Consistency: Generally supported by validation, in which proxies verify the validity of their cached objects with the origin server, through either TTL-based validation or proactive polling.
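
A sketch of the TTL-based case, assuming each cache entry records when it was fetched and a time-to-live assigned by the proxy; until the TTL expires, the copy is used without contacting the server:

```python
import time

def is_fresh(entry, now=None):
    """TTL-based validation: use the cached copy until its time-to-live expires,
    then revalidate it with the origin server.

    entry is assumed to look like {'fetched_at': epoch_seconds, 'ttl': seconds, ...}.
    """
    now = time.time() if now is None else now
    return (now - entry["fetched_at"]) < entry["ttl"]

entry = {"fetched_at": time.time() - 120, "ttl": 300}
print(is_fresh(entry))   # True: the copy is 2 minutes old, its TTL is 5 minutes
```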

23 Cache Cooperation: A standalone proxy has disadvantages: it is a single point of failure and a performance bottleneck. Caching proxies can therefore collaborate with one another in serving requests. There are three kinds of architectures for cooperative caching proxies: hierarchical cache architecture, distributed cache architecture, and hybrid architecture.

24 Cache Cooperation: Hierarchical caches. A limitation is hierarchy depth: most operational hierarchies have only three levels: institutional, regional, and national.
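
A sketch of lookup through such a hierarchy, assuming each level is a simple dictionary cache and a hypothetical fetch_from_origin() helper handles the final fallback:

```python
def hierarchical_lookup(url, caches, fetch_from_origin):
    """Resolve a request through institutional -> regional -> national caches.

    caches is an ordered list of dict caches, lowest (institutional) level first.
    """
    for level, cache in enumerate(caches):
        if url in cache:                       # hit at this level of the hierarchy
            body = cache[url]
            break
    else:
        body = fetch_from_origin(url)          # miss at every level: go to the origin server
        level = len(caches)
    for cache in caches[:level]:               # copy the object into the levels that missed
        cache[url] = body
    return body
```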

25 Distributed Cache Architecture: All participating proxy caches are peers.
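
One common way peer proxies divide responsibility is hash-based routing (in the spirit of protocols such as CARP); this is an illustrative assumption, not a scheme named on the slide:

```python
import hashlib

PEERS = ["proxy-a.example.net", "proxy-b.example.net", "proxy-c.example.net"]

def owning_peer(url, peers=PEERS):
    """Map a URL to the peer proxy responsible for caching it."""
    digest = hashlib.md5(url.encode("utf-8")).hexdigest()
    return peers[int(digest, 16) % len(peers)]   # stable choice as long as the peer set is fixed

print(owning_peer("http://example.com/index.html"))
```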

26 Hybrid Architecture: Combines the advantages of hierarchical and distributed caching.

27 Recent Research: Caching dynamic content. Caching streaming objects. Security and integrity issues.

28 Caching Dynamic Content: Dynamic content contributes up to 40 percent of total Web traffic. To improve performance, developers have deployed reverse caches near the origin server to support dynamic content caching.

29 Caching Streaming Objects: Streaming objects, such as music or video clips, represent a significant portion of Web traffic. They have three distinctive features: huge size, intensive bandwidth use, and high interactivity. One solution is partial caching.
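
A sketch of one form of partial caching (prefix caching), assuming the proxy stores only a fixed-size prefix of each stream and a hypothetical fetch_range() helper retrieves bytes from the origin server:

```python
PREFIX_BYTES = 1024 * 1024          # cache only the first 1 MB of each streaming object

prefix_cache = {}                   # url -> cached prefix bytes

def serve_stream(url, fetch_range):
    """Serve a streaming object while caching only its prefix.

    fetch_range(url, start, end) is a hypothetical helper returning the bytes
    of the object in [start, end) from the origin server (end=None means to the end).
    """
    if url not in prefix_cache:
        prefix_cache[url] = fetch_range(url, 0, PREFIX_BYTES)
    yield prefix_cache[url]                        # prefix served immediately from the proxy
    yield fetch_range(url, PREFIX_BYTES, None)     # remainder fetched from the origin while playing
```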

30 Security and Integrity: A standalone proxy is difficult to protect from various attacks. Establishing a trust model among participants is challenging for cooperative proxies. An intermediate proxy also interferes with SSL's end-to-end functionality.

31 Conclusion: Proxy caching effectively reduces the network resources that Web services consume while minimizing user access latency. However, deploying Web caching proxies over the Internet is complicated and difficult.

