
1 Proxy Caching for Peer-to-Peer Live Streaming The International Journal of Computer Networks, 2010 Ke Xu, Ming Zhang, Mingjiang Ye Dept. of Computer Science, Tsinghua University, Beijing. Jiangchuan Liu Dept. of Computer Science, Simon Fraser University, Canada Zhijing Qin School of Software and Microelectronics, Peking University, Beijing Speaker: Yi-Ting Chen

2 Outline
Introduction
Proxy Caching for P2P Live Streaming: A General View
Method Overview
 – Data Request Analysis and Data Request Synthesis
 – Sliding Window (SLW) Caching Algorithm
Evaluation and Discussion
Conclusions

3 Introduction
P2P has been widely used in applications such as file sharing [1][2], voice over IP (VoIP) [3], live streaming, and video-on-demand (VoD) [4].
To mitigate the traffic load, caching data of interest closer to end-users has been frequently suggested in the literature.
Studies [12][17] show that P2P traffic is highly redundant and that caching can eliminate as much as 50–60% of the traffic.
[12] R.J. Dunn, Effectiveness of caching on a peer-to-peer workload, Master's Thesis, University of Washington, Seattle, 2002.
[17] N. Leibowitz, A. Bergman, R. Ben-Shaul, A. Shavit, Are file swapping networks cacheable? Characterizing P2P traffic, in: Proceedings of the 7th International WWW Caching Workshop, 2002.

4 Introduction
The caches are generally deployed at the gateways of institutions, referred to as proxy caching.
Recent works have also examined caching for P2P file sharing [6][10].
The key issues:
 – Object popularity.
 – Temporal and spatial locality.
[6] O. Saleh, M. Hefeeda, Modeling and caching of peer-to-peer traffic, in: ICNP '06: Proceedings of the 2006 IEEE International Conference on Network Protocols, IEEE Computer Society, Washington, DC, USA, 2006, pp. 249–258.
[10] A. Wierzbicki, N. Leibowitz, M. Ripeanu, R. Wozniak, Cache replacement policies revisited: the case of P2P traffic, in: Proceedings of the 2004 IEEE International Symposium on Cluster Computing and the Grid, 2004, pp. 182–189.

5 Generic Cache Architecture for P2P Traffic
Objective: to serve as many data requests locally as possible.
The gateway intercepts P2P downloading requests and redirects them to the P2P cache server.
The server fetches data from remote peers only when the data cannot be found in its local cache.
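The intercept-and-redirect flow can be sketched as follows. This is a minimal illustration of the architecture described above, not code from the paper; `fetch_from_remote_peers` is a hypothetical placeholder for the actual peer download protocol.

```python
class P2PProxyCache:
    """Minimal sketch of the generic P2P proxy cache: serve a request
    locally on a hit, fetch from remote peers (and cache) on a miss."""

    def __init__(self):
        self.store = {}   # (channel_id, seq_no) -> data piece
        self.hits = 0
        self.misses = 0

    def fetch_from_remote_peers(self, key):
        # Placeholder for the real remote fetch; returns a dummy payload.
        return b"piece:%d:%d" % key

    def serve(self, channel_id, seq_no):
        key = (channel_id, seq_no)
        if key in self.store:     # served locally: saves inter-ISP traffic
            self.hits += 1
            return self.store[key]
        self.misses += 1
        piece = self.fetch_from_remote_peers(key)
        self.store[key] = piece   # cache for later requests
        return piece
```

The hit/miss counters make it easy to report the hit rate, which is the metric the evaluation slides later use to compare caching policies.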

6 P2P Proxy Cache Working Mechanism 6

7 Distinct Features of P2P Live Streaming 7

8 Method Overview
1. Analyze the real data requests of PPLive [4] and identify their key characteristics.
2. Find that the request times of the same data piece from different peers exhibit a generalized extreme value (GEV) distribution.
3. Develop a data request generator that can closely synthesize P2P live streaming traffic.
4. Propose a novel sliding window (SLW) caching algorithm.

9 Data Request Analysis and Data Request Synthesis
 – Stable requesting rate.
 – Request group.

10 Request Group Size
Most groups are of size 32 (about 25%) or 48 (about 75%).

11 Request Group Size

12 The Interval Between Adjacent Requests of the Same Subgroup
More than 97% of the intervals fall between 0 and 0.0005 s.

13 Request Lag Distribution Among Peers
Playback Lag: some peers watch frames in a channel minutes behind other peers.
Request Lag: some peers fetch a data piece in a channel seconds behind other peers in the same network.
 – Indicates the lifetime of a data piece in live streaming.
Lag Length: the interval between the first and last requests of a data piece.
Active data piece: a data piece that is already released and not yet obsolete (still being requested).
 – These active data pieces are continuous, called the Active Requesting Window.

14 Studying the Request Lag
We divide the data requests captured at five hosts into smaller segments of 500 data requests each, obtaining 4140 request segments.
Each request segment represents the requests of a single user, so the data captured at five clients yields the requests of 4140 peers.

15 Studying the Request Lag
We retrieve the time stamp of data request number 300 from each segment and calculate its lag relative to the earliest request of the segment.
[Figure: distribution of request lags; Lag Length of 15 s marked]

16 Probability Density Function (PDF):
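The density itself did not survive the transcript. For reference, the standard GEV probability density (the family the paper fits to the request lags, with location $\mu$, scale $\sigma$, and shape $\xi$; the fitted parameter values are in the paper and not reproduced here) is:

```latex
f(x;\mu,\sigma,\xi) = \frac{1}{\sigma}\, t(x)^{\xi+1}\, e^{-t(x)},
\qquad
t(x) = \left(1 + \xi\,\frac{x-\mu}{\sigma}\right)^{-1/\xi}
\quad (\xi \neq 0).
```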

17 Caching Design
Playback Rate: 352 kbps.
[Table: playback rate, typical data piece size, and cache configuration]

18 Channel Popularity
Channel popularity varies as users join the overlay network or abort their connections.
Cache space should be allocated according to channel popularity.

19 Studying Channel Popularity
We measure the online user numbers of 4403 channels in total at 21:30 on 2008-11-07 with the help of PPLive.
The relationship is expressed as:

20 Data Request Generator
Step 1: Determine the online user number for each channel.

21 Data Request Generator
Step 2: Generate lag data.
 – The request lags obey the GEV distribution.
 – Generate request lags for each channel.
 – Assign a lag to each user as the initial request time stamp.

22 Data Request Generator
Step 3: Determine the request interval.
 – 1 s between groups and 0.5 s between subgroups.
 – To match real traces better, some randomness is added (e.g., a random offset in [-0.05, +0.05] s).
 – The request interval is recalculated when generating every new request.
Step 4: Determine the subgroup size.
 – Only two group sizes: 32 with probability 0.25 and 48 with probability 0.75.
 – Subgroup 1 or Subgroup 2 is determined according to Fig. 4e and f in a probabilistic fashion.

23 Data Request Generator
Step 5: Generate a request entry (time stamp, channel ID, sequence number) for each user.

24 Data Request Generator
Step 6: Merge all the requests, keyed by time stamp, to get the output.
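The six steps above can be sketched end-to-end. This is a hedged illustration, not the authors' generator: the GEV parameters `MU`, `SIGMA`, `XI` are placeholders (the fitted values are in the paper), subgroups are collapsed into whole groups (the sub-0.5 ms intra-group intervals are omitted), and the per-channel user counts of Step 1 are assumed given.

```python
import math
import random

# Hypothetical GEV parameters for illustration only.
MU, SIGMA, XI = 60.0, 20.0, 0.1

def sample_gev(mu=MU, sigma=SIGMA, xi=XI):
    """Inverse-transform sample from the GEV distribution (xi != 0):
    X = mu + sigma * ((-ln U)^(-xi) - 1) / xi, U ~ Uniform(0, 1)."""
    u = random.random()
    return mu + sigma * ((-math.log(u)) ** (-xi) - 1.0) / xi

def generate_requests(users_per_channel, pieces_per_user=4):
    """users_per_channel: {channel_id: online user count} (Step 1, given)."""
    requests = []
    for channel, n_users in users_per_channel.items():
        for _user in range(n_users):
            t = sample_gev()          # Step 2: initial lag as first time stamp
            seq = 0
            for _ in range(pieces_per_user):
                # Step 4: group size 32 with prob. 0.25, else 48
                group = 32 if random.random() < 0.25 else 48
                for _ in range(group):
                    # Step 5: one (time stamp, channel, sequence) entry;
                    # requests within a group share a time stamp here.
                    requests.append((t, channel, seq))
                    seq += 1
                # Step 3: ~1 s between groups, with +/-0.05 s jitter
                t += 1.0 + random.uniform(-0.05, 0.05)
    requests.sort()                   # Step 6: merge by time stamp
    return requests
```

Each call yields a time-ordered stream of synthetic requests that can be replayed against a cache implementation, which is how the evaluation slides use the generator.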

25 Sliding Window (SLW) Caching Algorithm
 – R: requesting rate.
 – L: lag length.


27 Advantages of the Caching Algorithm
Compared with the typical LRU (Least Recently Used) algorithm, SLW also exploits spatial locality by maintaining a continuous caching window.
The overhead of cache management is low: the time complexity of cache replacement is O(1), and the space complexity is O(chN), where chN is the number of channels.
The periodic adjustment of channel cache sizes also has O(chN) time and space complexity.
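As a hedged sketch of how such a window can keep a contiguous range of pieces cached while evicting in O(1), one channel's window might look like the following; the paper's exact pseudocode is not reproduced on these slides, and sizing the window as roughly R × L pieces (requesting rate times lag length) is an assumption based on the parameters named above.

```python
from collections import deque

class SLWChannelCache:
    """Sketch of a sliding-window cache for one channel: keep a contiguous
    run of sequence numbers covering the active requesting window."""

    def __init__(self, R, L):
        self.window_size = int(R * L)  # assumed window: rate x lag length
        self.pieces = deque()          # contiguous seq. numbers, oldest first

    def on_request(self, seq):
        # Spatial locality: any piece inside the cached range is a hit.
        hit = bool(self.pieces) and self.pieces[0] <= seq <= self.pieces[-1]
        if not hit and (not self.pieces or seq > self.pieces[-1]):
            # Slide the window forward to cover the newest piece,
            # keeping the cached range contiguous.
            start = self.pieces[-1] + 1 if self.pieces else seq
            for s in range(start, seq + 1):
                self.pieces.append(s)
            while len(self.pieces) > self.window_size:
                self.pieces.popleft()  # O(1) eviction at the window's tail
        return hit
```

Requests older than the window (lagging peers beyond L) simply miss without disturbing the window, which is what distinguishes this scheme from a recency-only policy such as LRU.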

28 Experimental Setup
We use the synthetic data requests produced by the generator to evaluate the caching algorithms.

29 Performance Result – Hit Rate

30 Performance Result – Hit Rate
[6] O. Saleh, M. Hefeeda, Modeling and caching of peer-to-peer traffic, in: ICNP '06: Proceedings of the 2006 IEEE International Conference on Network Protocols, IEEE Computer Society, Washington, DC, USA, 2006, pp. 249–258.

31 Performance Result – Hit Rate
GD (GreedyDual-Size) [31]: assigns a weight to each newly cached data piece; similar to LRU.
[31] N. Young, The k-server dual and loose competitiveness for paging, Algorithmica 11 (1994) 525–541.

32 Performance Gain of SLW over LRU
SLW gains nearly a 50% improvement.

33 Conclusions
We studied the characteristics of data requests in P2P live streaming and modeled the lag distribution.
We designed a data request generator to produce synthetic traffic for P2P live streaming applications.
Furthermore, we proposed a novel caching algorithm for P2P live streaming applications: SLW.
The SLW algorithm exploits both the temporal and spatial locality of data requests.
The algorithm achieves the best performance among the online caching policies, including LRU, LFU, and FIFO.

34 Thanks for listening!

