
Web Cache Behavior The Laboratory of Computer Communication and Networking Submitted by: Lena Vardit Liraz




1 Web Cache Behavior The Laboratory of Computer Communication and Networking Submitted by: Lena Salman (sihaya@t2), Vardit Zadik (svardit@t2), Liraz Perlmooter (slirazp@t2)

2 Outline
 Introduction – web caching motivation
 Project flow design
 Project modules:
   ProWGen – producing the requests
   Network topology
   WebCache tool
   NS simulation part
 Statistics and graphs of the simulation results
 Evaluation of the cache behavior and the different algorithms

3 Motivation  The World-Wide Web has grown tremendously in the past few years to become the most prevalent source of traffic on the Internet today.  One solution that could help relieve these problems of congestion and overloaded web servers is web caching.

4 Motivation (2)  A web proxy cache sits between Web servers and clients, and stores frequently accessed web objects.  Having received a request from a client, the proxy attempts to fulfill the request from among the files stored in the proxy’s cache.  If the requested file is found (a cache hit), the proxy can immediately respond to the client’s request. If the requested file is not found (a cache miss), the proxy then attempts to retrieve the file from its original location.  Once the proxy gets the file from the original server, it can satisfy the request made by the client.

5 Web Caching Illustration (diagram: client – proxy server – Internet – servers A-E)

6 Motivation (3)  When the cache is full, replacement decisions must be made regarding which file to evict from the proxy.  The pruning algorithm is mainly cache-management dependent, and plays a major role in reducing both latency and network traffic on the Internet.

7 Motivation (4)  The cache concept helps the end user, the service provider and the content provider by reducing the server load, alleviating network congestion, reducing bandwidth consumption and reducing network latency.

8 What is a Web proxy cache?  An intermediary between Web clients (browsers) and Web servers.  Stores local copies of popular documents.  Forwards requests to servers only if needed.

9 The project purpose: Simulate the web cache behavior of a proxy, and examine the hit rate and the cost of different cache-pruning algorithms. Simulate a network, and run the simulator to estimate the time it takes to serve the misses.

10 Project Flow:
1. ProWGen – generates the requests
2. ProWGen parsing – creates a database of requests
3. WebCache tool – simulates cache behavior (LRU/LFU/HYB/FIFO)
4. NS simulator – runs the miss requests on the network (10, 50, 100 servers)
5. Statistics and conclusions from the results

11 ProWGen:

12 ProWGen Part
 ProWGen uses mathematical models to capture the salient characteristics of web proxy workloads, as defined in previous studies of web proxy servers.
 The main purpose of ProWGen is to generate synthetic workloads for evaluating proxy caching techniques. This approach reduces the complexity of the models, as well as the time and space required for generating and storing the synthetic workload.
 The following parameters can be changed in ProWGen:
1. One-time referencing – set to 50% of the files
2. File popularity – 0.75 (medium distribution)
3. File size distribution – 1.4 (lighter tail index)
4. Correlation – between file popularity and file size; we used normal correlation
5. Temporal locality – used static configuration, which seems to have more temporal locality

13 Network Topology (diagram; PROXY at the center)
 10% of the network – closest servers, 3-4 hops to the proxy
 20% of the network – medium servers, 5-7 hops to the proxy
 70% of the network – “the rest of the world”, 10-15 hops to the proxy

14 Division of files to servers:
 10,000 file requests relate to servers from group 1 (10% – the closest servers)
 20,000 file requests relate to servers from group 2 (20% – the medium-distance servers)
 70,000 file requests relate to group 3 (70% – “the rest of the internet”)
 Division is done with the help of a hash function, so that the division won’t be influenced by the order of the files in the ProWGen input.
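The order-independent assignment above can be sketched as follows. The slides only say a hash function is used; the concrete hash (a simple modulo on the numeric file name) and the function name are our assumptions:

```cpp
// Hypothetical sketch: map a numeric file name (as produced by ProWGen)
// to one of the three server groups. Because the mapping depends only on
// the file name, it is unaffected by the order of requests in the input.
// The 10/20/70 split follows the slide; the modulo-100 hash is assumed.
int fileToGroup(long fileName) {
    long h = fileName % 100;      // order-independent hash of the id
    if (h < 10) return 1;         // 10% - closest servers (3-4 hops)
    if (h < 30) return 2;         // 20% - medium servers  (5-7 hops)
    return 3;                     // 70% - "the rest of the internet"
}
```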

15 ProWGen Output:
 The output of ProWGen is a list of requests, one (file_name, file_size) pair per line:
9209 403
14722 11359
9209 403
34733 6544
4106 4041
24315 3653
5444 1220
29904 5266
8838 1485
18058 1570
33151 3577
24416 15669
6701 4075
18026 7172

16 WebCache TOOL:

17 Databases (diagram)
Requests – File: size, name, server, …
Servers – Server: latency, bandwidth
WebCache – list of files in the cache, Cache_size
Algorithms: LRU, FIFO, LFU, HYB

18 Database We have 3 classes:  class File  class Server  class Cache

19 class File
This class contains the file information:
double name – name of the file
double size – size of the file
int server – any value between 0 and num_serv is valid
double prio – the priority of the file
int nref – number of references to the document since it last entered the cache
We used this DB for the list we receive from ProWGen, and for the list of files that are in the cache.

20 class Server
This class contains the server information:
double lat – the latency to this server
int band – the bandwidth to this server
We use this DB to contain the list of servers.

21 class Cache
This class contains the cache information:
list<File> FileList – a list of files that are in the cache
int CacheFreeSpace – the remaining space in the cache
We use this class to simulate the cache itself.
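The three classes on the preceding slides could look roughly like this in C++. Field names follow the slides; everything else (public access, plain aggregates) is an assumption about the tool's actual code:

```cpp
#include <list>

// Minimal sketch of the three database classes described on the slides.
class File {
public:
    double name;   // numeric file name, as produced by ProWGen
    double size;   // file size in bytes
    int    server; // owning server, any value in 0 .. num_serv-1
    double prio;   // replacement priority (used by LFU/HYB)
    int    nref;   // references since the file last entered the cache
};

class Server {
public:
    double lat;    // latency to this server
    int    band;   // bandwidth to this server
};

class Cache {
public:
    std::list<File> FileList; // files currently in the cache
    int CacheFreeSpace;       // remaining room in the cache
};
```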

22 Evaluating the cache behavior We used the following modules:  Prowgen  cacheLRU  cacheLFU  cacheHYB  cacheFIFO

23 cacheLRU
An implementation of the LRU algorithm using an STL list. The main idea:
If the file is in the cache – HIT:
1. Move the file to the beginning of the list.
Otherwise – MISS:
1. “Make room” for the requested file (by deleting the last files in the list),
2. print a request to the ns file for the requested file, and then
3. insert the file into the cache DB (at the beginning of the list).
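The steps above can be sketched with an STL list, as the slide describes. The Entry layout, class name, and capacity handling are assumptions, and the "print a request to the ns file" step is reduced to a comment:

```cpp
#include <list>

// Minimal LRU sketch: front of the list = most recently used,
// tail = eviction candidate.
struct Entry { double name; double size; };

class LruCache {
    std::list<Entry> files;
    double freeSpace;
public:
    explicit LruCache(double capacity) : freeSpace(capacity) {}

    // Returns true on a hit, false on a miss.
    bool request(double name, double size) {
        for (std::list<Entry>::iterator it = files.begin(); it != files.end(); ++it) {
            if (it->name == name) {                     // HIT:
                files.splice(files.begin(), files, it); // 1. move to front
                return true;
            }
        }
        // MISS: 1. "make room" by deleting the last files in the list
        while (freeSpace < size && !files.empty()) {
            freeSpace += files.back().size;
            files.pop_back();
        }
        // 2. (here the real tool prints a fetch request to the ns file)
        Entry e = {name, size};
        files.push_front(e);                            // 3. insert at front
        freeSpace -= size;
        return false;
    }
};
```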

24 cacheLRU  Replaces the least recently used page, with the assumption that a page that has not been referenced for the longest time is not likely to be referenced in the future.  Each newly fetched page is put at the head of the list  The tail page is deleted when storage is exceeded  Performs better than LFU in practice  Used in today’s caches (e.g., Squid Web Proxy Cache)

25 cacheLFU
An implementation of the LFU algorithm using an STL list. The main idea:
If the file is in the cache – HIT:
1. Update the file’s priority (increment by 1).
2. Update the file’s place in the list according to its priority.
Otherwise – MISS:
1. “Make room” for the requested file (by deleting the last files in the list),
2. print a request to the ns file for the requested file,
3. initialize the file’s priority to 1, and then
4. insert the file into the cache DB according to its priority.
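The LFU steps above can be sketched the same way: each file carries a priority (its reference count), and the list is kept sorted so the least frequently used files sit at the tail, ready for eviction. Names and structure are assumptions; the ns output step is again only a comment:

```cpp
#include <list>

// Minimal LFU sketch: list sorted by priority, highest first.
struct LfuEntry { double name; double size; int prio; };

class LfuCache {
    std::list<LfuEntry> files;
    double freeSpace;

    void insertByPrio(const LfuEntry& e) {
        std::list<LfuEntry>::iterator it = files.begin();
        while (it != files.end() && it->prio >= e.prio) ++it;
        files.insert(it, e);
    }
public:
    explicit LfuCache(double capacity) : freeSpace(capacity) {}

    bool request(double name, double size) {
        for (std::list<LfuEntry>::iterator it = files.begin(); it != files.end(); ++it) {
            if (it->name == name) {       // HIT:
                LfuEntry e = *it;
                e.prio += 1;              // 1. increment the priority
                files.erase(it);
                insertByPrio(e);          // 2. re-place by priority
                return true;
            }
        }
        while (freeSpace < size && !files.empty()) { // MISS: evict from tail
            freeSpace += files.back().size;
            files.pop_back();
        }
        // (here the real tool prints a fetch request to the ns file)
        LfuEntry e = {name, size, 1};     // 3. priority starts at 1
        insertByPrio(e);                  // 4. insert by priority
        freeSpace -= size;
        return false;
    }
};
```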

26 cacheLFU  Replaces the least frequently used page, with the assumption that the page that has been used least often is not likely to be referenced again in the future.  Optimal replacement policy if all pages have the same size and page popularity does not change.  In practice it has disadvantages: slow to react to popularity changes; needs to keep statistics (a counter) for every page; does not consider page size.

27 cacheHYB
An implementation of the HYB algorithm using an STL list. The main idea:
If the file is in the cache – HIT:
1. Update the file’s priority according to the algorithm.
2. Update the file’s place in the list according to its priority.
Otherwise – MISS:
1. “Make room” for the requested file (by deleting the last files in the list),
2. print a request to the ns file for the requested file,
3. update the file’s priority according to the algorithm, and then
4. insert the file into the cache DB according to its priority.

28 cacheHYB  The three factors which Hybrid takes into account are size, transfer time, and number of references.  The Hybrid algorithm offers the best combination of guaranteed performance for frequently used objects and overall cache size.  Drawback: needs to keep statistics (a counter and other values) for every page.

29 cacheHYB
 WB = 8KB and WN = 0.9 for the HYB algorithm (100 servers)
 WB = 1 and WN = 1 for the HYB algorithm (50 servers)
 HYB selects for replacement the document with the lowest value of the following expression:
Weight = (Ref^WN × (latency + WB/bandwidth)) / FileSize
 Therefore, a file is not likely to be removed if the expression above is large, which occurs if the file has been referenced frequently and if the document size is small.

30 cacheHYB  The constant WB, whose units are bytes, is used for setting the relative importance of the connection time versus the connection bandwidth.  The constant WN, which is dimensionless, is used for setting the relative importance of nref versus size. As WN → 0, the emphasis is placed upon size.  If WN = 0, nref is not taken into account at all.  If WN > 1, the emphasis is placed more on nref than on size.
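The weight expression from the previous slide can be computed directly. The function name and the sample parameter values are illustrative assumptions; the formula itself is the one the slides give:

```cpp
#include <cmath>

// Weight = (nref^WN * (latency + WB/bandwidth)) / size, as on the slide.
// The file with the LOWEST weight is selected for replacement.
double hybWeight(int nref, double latency, double bandwidth,
                 double size, double WB, double WN) {
    return (std::pow((double)nref, WN) * (latency + WB / bandwidth)) / size;
}
```

With WN = 0 the nref term becomes 1 and only size and transfer time matter, matching the slide's remark.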

31 cacheFIFO
An implementation of the FIFO algorithm using an STL list. The main idea:
If the file is in the cache – HIT: do nothing.
Otherwise – MISS:
1. “Make room” for the requested file (by deleting the last files in the list),
2. print a request to the ns file for the requested file, and then
3. insert the file into the cache DB (at the beginning of the list).

32 cacheFIFO  Replaces the page that has been cached for the longest time, with the assumption that old cached pages will not be referenced again in the future – regardless of how frequently the page is requested, the size of the page, or the cost to bring it back.  Since it does not take the frequency of requests into consideration, this policy results in the same popular page being brought into the cache over and over again.

33 NS implementation:

34 Network Topology (diagram; PROXY at the center)
 10% of the network – closest servers, 3-4 hops to the proxy, 10 MB
 20% of the network – medium servers, 5-7 hops to the proxy, 2 MB
 70% of the network – “the rest of the world”, 10-15 hops to the proxy, 2 MB

35 Network Simulation:
 Latency between hops – 10 ms
 Latency to each server is decided by the group it belongs to (10%, 20%, 70%); inside the group it is distributed uniformly:
Group 1 – 3-4 hops × 10 ms
Group 2 – 5-7 hops × 10 ms
Group 3 – 10-15 hops × 10 ms
 The algorithm responsible for the distribution uses a counter and a modulo calculation.
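The counter-and-modulo spread the slide mentions might look like this; the slide gives only the hop ranges and the 10 ms per hop, so the exact counter scheme below is an assumption:

```cpp
// Hypothetical sketch of the per-group latency assignment: 10 ms per hop,
// hop counts cycled uniformly inside each group with a counter and modulo.
int counter1 = 0, counter2 = 0, counter3 = 0;

double serverLatencyMs(int group) {
    const double hopMs = 10.0;                        // 10 ms per hop
    switch (group) {
    case 1:  return (3 + (counter1++ % 2)) * hopMs;   // 3-4 hops
    case 2:  return (5 + (counter2++ % 3)) * hopMs;   // 5-7 hops
    default: return (10 + (counter3++ % 6)) * hopMs;  // 10-15 hops
    }
}
```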

36 Network Simulation:  Bandwidth is also decided, depending on the group the server belongs to: Group 1 – 10 MB (closest servers); Groups 2 & 3 – 2 MB

37 Connection Implementation:
 TCP agents are created:
Agent/TCP/Newreno for the server – implements the TCP New Reno protocol
Agent/TCPSink for the proxy (the receiver)
 On top of the agents: an FTP/Application was attached to the TCP agent.

38 Requests:  When a miss occurs in the CacheTool part, a fetch request from the relevant server is written to the NS input file.  These requests are issued at varying times; at the scheduled time, the server starts sending the file.

39 Request Times:  Request times are distributed exponentially using the random generator implemented in ns: average – 0.5, seed – 0 (default).  Each file request is treated within this time, counted from the previous request.
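The slides use ns's built-in exponential generator; a comparable sketch with the C++ standard library (a substitution, not the ns API) schedules each request an exponentially distributed gap after the previous one, with mean 0.5 s as on the slide:

```cpp
#include <random>

// Returns the time of the next request: the previous time plus an
// exponentially distributed gap with mean 0.5 seconds.
double nextRequestTime(double prevTime, std::mt19937& rng) {
    std::exponential_distribution<double> gap(1.0 / 0.5); // rate = 1/mean
    return prevTime + gap(rng);
}
```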

40 Requests:  When a miss is indicated by the cache-managing algorithm, it places a request to the simulator to fetch the file.  NS decides at which time to fetch this file (at the randomly decided request time).  When the file request is completed (all the ACKs have been received), the done procedure is called.

41 done procedure:  The done procedure: is called every time a send command is finished; updates the timer for this request – counts how long the request took; writes this time to the statistics file.

42 done procedure:  The done procedure: is called every time a send command is finished; updates the timer for this request – counts how long the request took; the duration of a request is counted as the difference between the beginning time and the end time (when done is called); writes this time to the statistics file.

43 Screenshots:

44 (screenshot)

45 (screenshot)

46 Statistics and Evaluation:

47 Statistics:
 3 types of networks: 10, 50, 100 servers
 LFU, LRU, HYB, FIFO algorithms: hit count; byte-hit count
 4,000 requests in the middle are run over the simulator; the total time for the misses among these requests is counted.

48 Conclusions:  Cache sizes from 1MB to 256MB are tested for the different algorithms.

49 Performance metrics:

50 Hit Ratio  The cache hit ratio is the number of requests satisfied by the cache divided by the total number of requests from users.  The higher the hit ratio, the better the replacement policy, because fewer requests are forwarded to the web server, thus reducing network traffic.

51 (graph)

52 (graph)

53 (graph)

54 Conclusions  LRU, FIFO and HYB seem to get close results  HYB seems to be a little lower, maybe since it takes into account the number of references, which doesn’t seem to be an efficient idea  LFU is the worst algorithm  At 256MB, all the algorithms seem to reach the same results, since the cache size is big enough to contain a reasonable number of files, for an optimal number of misses.

55 Byte-Hit Ratio  The ratio of total bytes satisfied by the cache divided by the total bytes transferred to the user.  A higher byte-hit ratio means a lower volume of data flowing between the proxy and the web server, thus reducing network traffic.
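The two ratios defined on these slides (hit ratio and byte-hit ratio) reduce to simple tallies collected during a run. The struct and function names below are illustrative assumptions, not the tool's actual code:

```cpp
// Tallies collected while replaying the request stream.
struct Tally {
    long hits;         // requests satisfied by the cache
    long requests;     // total requests from users
    double hitBytes;   // bytes served from the cache
    double reqBytes;   // total bytes requested
};

double hitRatio(const Tally& t)     { return (double)t.hits / (double)t.requests; }
double byteHitRatio(const Tally& t) { return t.hitBytes / t.reqBytes; }
```

A policy that preferentially keeps small files (like HYB) can score a high hit ratio while its byte-hit ratio stays low, which is exactly the gap the conclusions discuss.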

56 (graph)

57 (graph)

58 (graph)

59 Conclusions  LRU and FIFO seem to get good results again  HYB seems to get lower results – since it prefers to evict bigger files, it obviously achieves a lower byte-hit rate; it “pays” more for each miss.  LFU seems to be worse than LRU and FIFO  Again at 256MB, all the algorithms achieve similar results.

60 NS Latency  The simulated time that it takes to fetch the files from the internet.  The lower the latency, the better the algorithm is at lowering network traffic, thereby taking load off the internet.  Here we are reducing both latency and traffic.

61 (graph)

62 (graph)

63 Conclusions  LFU, FIFO and LRU seem to get very close results on the files that ran on the simulator.  HYB seems to get worse results. This might be because of parameters not suited to the generated workload, or perhaps because of HYB’s preference for small files, which makes fetching the larger files take more time.

64  LFU does not achieve good results in either Hit Ratio or Byte-Hit Ratio. This implies that the assumption that users will request the same frequently requested documents over and over again is not a very good one.  As for FIFO’s performance, the results were surprisingly good, taking into account its simple and unsophisticated implementation.

65 Conclusions  HYB achieves a good hit rate, but achieves neither a good byte-hit rate nor a low latency time.  Since HYB prefers files which are highly referenced and relatively small, the byte-hit ratio is not expected to be high.  As for network latency, it should be dependent on the network, and more parameters should be tested.

