Storage Allocation in Prefetching Techniques of Web Caches. D. Zeng, F. Wang, S. Ram. Appeared in Proceedings of the ACM Conference on Electronic Commerce (EC'03).


1 Storage Allocation in Prefetching Techniques of Web Caches. D. Zeng, F. Wang, S. Ram. Appeared in Proceedings of the ACM Conference on Electronic Commerce (EC'03), San Diego, June 9-12, 2003. Presented by Laura D. Goadrich.

2 The Web
A large-scale distributed information system where data objects are published and accessed by users.
Problems caused by the demand for increased Web capacity:
- Network traffic congestion
- Web server overload
Solution: Web caching.

3 Web caching
Benefits:
- Improves Web performance (reduces access latency)
- Increases Web capacity
- Alleviates traffic congestion (reduces network bandwidth consumption)
- Reduces the number of client requests reaching servers (workload)
- May improve the failure tolerance and robustness of the Web (cached copies of Web objects remain available when networks are unreachable)
Prefetching: anticipate users' future needs.
This research focuses on cache-related storage capacity decisions: storage capacity limits the number of prefetched Web objects, so the question is how to allocate cache storage in prefetching. The authors state this focus has not been researched before.

4 Ideas
Current research: predicts user Web accesses without considering the cache storage limit.
This research: optimization-based models that
- Maximize hit rate
- Maximize byte hit rate
- Minimize access latency
(the first two are the primary goals of Web caching)
Benefit of this research: guides the operation of a prefetching system.

5 Web prefetching techniques
Client-initiated policies: user A is likely to access URL U2 right after URL U1; patterns learned via Markov models.
Server-initiated policies: anticipate future requests based on server logs and proactively send the corresponding Web objects to participating cache servers or client browsers (e.g., the Top-n algorithm).
Hybrid policies: combine user access patterns from clients with general statistics from servers to improve prediction quality.
Shortcoming of all these policies: they do not decide which Web objects to prefetch given limited storage capacity.

6 Assumptions/Notation
C        maximum amount of storage space available to store prefetched Web objects
i        a URL of potential interest
P_i      predicted probability with which URL i will be visited
(i, P_i) prediction of users' future accesses
N        set of all URLs of potential interest
S_i      size of the Web object referred to by i (S_i < C)

7 Hit Rate (HR) Model
Let x_i = 1 if the object referred to by URL i is prefetched, 0 otherwise:
(1) maximize Σ_{i∈N} P_i · x_i
(2) subject to Σ_{i∈N} S_i · x_i ≤ C
(3) x_i ∈ {0, 1} for all i ∈ N

8 Byte Hit Rate (BHR) Model
(4) maximize Σ_{i∈N} P_i · S_i · x_i
(2) subject to Σ_{i∈N} S_i · x_i ≤ C
(3) x_i ∈ {0, 1} for all i ∈ N

9 Access Latency (AL) Model
(7) maximize Σ_{i∈N} P_i · (α_i + β_i · S_i) · x_i   (expected latency saved by prefetching)
(2), (3) as in the HR model
α_i  # of seconds to establish the network connection between the client machine and the Web server hosting i
β_i  # of seconds per byte to transmit i over the network
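Given these definitions, the expected latency saved by prefetching a single object can be sketched as follows (a minimal sketch; the sample numbers are hypothetical, not from the paper):

```python
def expected_latency_saving(p, alpha, beta, size):
    """Expected seconds saved by prefetching object i: with probability p the
    user requests it, avoiding connection setup (alpha seconds) plus
    transfer time (beta seconds per byte times the object size)."""
    return p * (alpha + beta * size)

# Hypothetical object: 60% visit probability, 2 s setup, 1 ms/byte, 1,000-byte object
saving = expected_latency_saving(0.6, 2.0, 0.001, 1000)  # 0.6 * (2 + 1) = 1.8 s
```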

10 Transforming HR, BHR & AL into the Knapsack problem
Benefits of the knapsack formulation:
- Well studied; the "easiest" NP-hard problem
- Can be solved optimally by a pseudo-polynomial algorithm based on dynamic programming
- A fully polynomial-time approximation scheme exists
The paper focuses on a greedy algorithm (due to paper length limits).
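The pseudo-polynomial dynamic program mentioned above can be sketched for the HR model, where each URL is a knapsack item with value P_i and weight S_i; function and variable names are illustrative, not from the paper:

```python
def knapsack_dp(values, sizes, capacity):
    """Exact 0/1 knapsack via dynamic programming over capacity.

    Runs in O(n * capacity) time, which is pseudo-polynomial because it
    depends on the numeric value of the capacity, not its bit length.
    """
    best = [0.0] * (capacity + 1)  # best[c] = max total value using at most c bytes
    for v, s in zip(values, sizes):
        # iterate capacity backwards so each item is used at most once
        for c in range(capacity, s - 1, -1):
            best[c] = max(best[c], best[c - s] + v)
    return best[capacity]

# Three hypothetical objects: probabilities .6/.5/.4, sizes 4/3/2, capacity 5
opt = knapsack_dp([0.6, 0.5, 0.4], [4, 3, 2], 5)  # picks sizes 3+2 for value 0.9
```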

11 Greedy Algorithm
1. Sort all URLs into a sequence of non-increasing value-to-size ratio (e.g., P_i/S_i for the HR model).
2. Determine a threshold k: the largest index such that the first k objects together fit within C.
3. Prefetch the Web objects referred to by the first k URLs.
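The three steps above can be sketched as follows (a minimal sketch; the P_i/S_i density ordering assumes the HR objective, and all names are illustrative):

```python
def greedy_prefetch(preds, capacity):
    """preds: list of (url, p, s) predictions; returns URLs chosen to prefetch."""
    # Step 1: sort by value density p/s, highest first (HR objective assumed)
    ranked = sorted(preds, key=lambda t: t[1] / t[2], reverse=True)
    chosen, used = [], 0
    for url, p, s in ranked:
        if used + s > capacity:  # Step 2: threshold k reached at the first misfit
            break
        chosen.append(url)       # Step 3: prefetch the first k objects
        used += s
    return chosen

# Hypothetical predictions (url, probability, size) with capacity 6
preds = [("a", 0.9, 5), ("b", 0.8, 4), ("c", 0.3, 1)]
picked = greedy_prefetch(preds, 6)  # densities: c=0.30, b=0.20, a=0.18
```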

12 Other Allocation Policies Tested
Optimal policy using CPLEX. Disadvantages:
- Complex
- Increased implementation time
- Difficult to implement
Top-n:
- Developed for Web usage prediction
- Can be used to regulate storage allocation by setting n appropriately
- Equivalent to Greedy BHR, since it relies only on P_i
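For comparison, Top-n can be sketched as below (names illustrative). Note why the equivalence on this slide holds: a greedy on the BHR objective ranks by density (P_i · S_i)/S_i = P_i, which is exactly Top-n's ordering by probability alone:

```python
def top_n(preds, n):
    """preds: list of (url, p, s); prefetch the n most probable URLs,
    ignoring object sizes entirely."""
    ranked = sorted(preds, key=lambda t: t[1], reverse=True)
    return [url for url, p, s in ranked[:n]]

# Hypothetical predictions: Top-2 takes the two highest-probability URLs
preds = [("a", 0.9, 5), ("b", 0.8, 4), ("c", 0.3, 1)]
picked = top_n(preds, 2)
```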

13 Simulations
             Small        Large
|N|          50           200
represents   text         multimedia
C = 100,000
α/β = 5,000 (slow) or 30,000 (fast)
LN(μ, σ) = lognormal distribution with mean e^μ and shape σ (used for object sizes)
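A sketch of how a synthetic workload like the one above might be generated. Only the lognormal object sizes come from the slide's table; the prediction-probability model (normalized random weights) and all names are assumptions, since the slides do not specify how predictions were drawn:

```python
import random

def make_workload(n, mu=10, sigma=1.0, seed=0):
    """Generate n (url_id, probability, size) predictions with lognormally
    distributed object sizes and probabilities normalized to sum to 1."""
    rng = random.Random(seed)          # fixed seed for a reproducible sketch
    sizes = [rng.lognormvariate(mu, sigma) for _ in range(n)]
    weights = [rng.random() for _ in range(n)]
    total = sum(weights)
    probs = [w / total for w in weights]
    return list(zip(range(n), probs, sizes))

workload = make_workload(50)  # the paper's "Small" setting uses |N| = 50
```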

14 Performance Comparison
Experimental conditions vary a = |N| ∈ {50, 200}, the object-size distribution LN(10, .05) or LN(10, 1), and the network speed b = α/β ∈ {5,000, 30,000}. The column groups are Hit Rate, Byte Hit Rate, and % Savings in Access Latency, each reporting Opt, G-HR, and Top-n; values are listed in column order as recovered:
a=50,  LN(10, .05), b=5000:  .47 .45 .44 .45 .44 .45 .44
a=50,  LN(10, .05), b=30000: .47 .45 .44 .45 .44 .46 .44
a=50,  LN(10, 1),   b=5000:  .47 .44 .35 .32 .28 .33 .29
a=50,  LN(10, 1),   b=30000: .47 .44 .35 .32 .28 .38 .34 .32
a=200, LN(10, .05), b=5000:  .36 .34 .33 .34 .33 .34 .33
a=200, LN(10, .05), b=30000: .36 .34 .33 .34 .33 .35 .34 .33
a=200, LN(10, 1),   b=5000:  .36 .34 .25 .20 .17 .22 .18
a=200, LN(10, 1),   b=30000: .36 .34 .25 .20 .17 .27 .23 .21

15 Results
- The greedy algorithms and Top-n in general achieve reasonable performance.
- The greedy algorithms outperform Top-n with respect to hit rate and access latency.
- There is a relatively large performance gap between the optimal approach and the fast heuristics when Web objects vary greatly in size.
- This suggests the need for more sophisticated allocation policies, such as a dynamic programming-based approach.

16 Contributions
Focus: stresses the importance of effective storage allocation in prefetching.
Paper contributions:
1. Provides new formulations of the prefetching storage-allocation problem.
2. Creates computationally efficient allocation policies by solving the storage allocation as a knapsack problem.
3. The models lead to a more precise understanding of the applicability and effectiveness of the Top-n policy.

17 Future Work
Trace-based simulation:
- Use actual Web access logs
- A more realistic environment
Modeling:
- Integrate the allocation models with caching storage-management models, e.g., cache replacement

18 Changes / Recommendations
- Do not rename the same constraints across models
- Cite more resources (currently 5 articles and 2 books)
- Discuss feasible solve times for the optimal approach
- Test or hypothesize implementation strategies for a real application

