
1 Supporting VCR Functions in P2P VoD Services Using Ring-Assisted Overlays
Bin Cheng, Hai Jin, Xiaofei Liao
Cluster and Grid Computing Lab
Services Computing Technology and System Lab
Huazhong University of Science and Technology, Wuhan, China

2 Motivation
VoD is attractive, but costly
–YouTube serves about 20 million views a day, roughly 200TB per day
–$1~2 million per month paid for the required bandwidth
P2P is successful in file sharing and live streaming
–BitTorrent
–AnySee, CoolStreaming
Can we provide VoD services based on similar P2P technologies?
–Yes, but it is challenging!

3 Challenges
Characteristics of VoD
–Frequent joining/leaving: a user can watch any video at any time; many sessions are short, over 40% under 10 minutes
–Random seeks: not uncommon in Internet VoD, particularly for long videos; about 60% of sessions have VCR operations, with roughly 5 seeks on average
All of these make the overlay of a P2P VoD system harder to organize

4 Related work
Tree-based overlay
–P2Cast, P2VoD: load imbalance; the root node has to process more queries
Mesh-based overlay
–CoolStreaming: randomly organized; inefficient queries for seeks
Hybrid overlay [Tree + Mesh]
–Tree-assisted gossip: complicated algorithms; overlay consistency is not easily maintained
What do we do? RINDY: a ring-assisted overlay for P2P VoD

5 Outline
Basic Architecture
RINDY Overlay
Overlay Maintenance
–how to construct?
–how to maintain?
–how to support random seeks?
Evaluation
–overhead
–comparison with P2VoD
Conclusions

6 Basic Architecture
Tracker
–a well-known rendezvous point, treated as a normal peer with the fixed position zero
–bootstraps newly joined peers
Source Servers
–store all videos, each divided into a series of timeslots of one second of content
–provide media data for peers that cannot find it from other peers
Peers
–each peer maintains a sliding buffer window holding the most recently received timeslots
–two peers can share data only if their buffer windows overlap (see the sketch below)
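The sharing condition is simple to state in code. A minimal sketch, assuming (for illustration only) a window of w seconds centered on each peer's playing position; the function name is ours, not from the paper:

```python
def windows_overlap(cur_a: float, cur_b: float, w: float = 300.0) -> bool:
    """Two peers can exchange timeslots only if their sliding buffer
    windows overlap. Assumes a w-second window centered on each
    playing position (an illustrative simplification)."""
    return abs(cur_a - cur_b) < w
```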

7 RINDY Overlay (1)
Structure of rings in RINDY
–Each peer keeps a set of concentric, non-overlapping logical rings with power-law radii to organize its neighborhood
rings are laid out by relative playing position: d_j = cur_j - cur_i
the radius of the i-th ring is a*2^i, where a is the buffer window size (a sketch follows this slide)
–gossip-ring: the innermost ring, keeping near-neighbors at relatively small distances
improves sharing between peers with close playing positions and overlapped buffer windows
–skip-rings: the outer rings, sampling far-neighbors at different distances
accelerate lookup operations and reduce the load on the tracker server
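Because ring radii grow as a*2^i, a neighbor's ring follows directly from its relative playing distance. A hedged Python sketch; the boundary convention and the clamping to B rings are our assumptions, not taken from the paper:

```python
import math

def ring_index(cur_self: float, cur_peer: float,
               a: float = 300.0, B: int = 5) -> int:
    """Map a neighbor to a ring by its relative playing distance
    d = |cur_peer - cur_self|. Ring 0 (the gossip-ring) holds peers
    with d <= a; ring i (a skip-ring) holds a*2**(i-1) < d <= a*2**i.
    Exact boundary handling is an illustrative assumption."""
    d = abs(cur_peer - cur_self)
    if d <= a:
        return 0                         # gossip-ring: near-neighbors
    i = math.ceil(math.log2(d / a))      # smallest i with d <= a * 2**i
    return min(i, B - 1)                 # clamp to the outermost skip-ring
```

For example, with a = 300 s, a peer 170 s away lands on the gossip-ring, while one 700 s away lands on skip-ring 2 (600 s < 700 s <= 1200 s).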

8 Overlay Structure (2)
An example
–buffer window size = 5 minutes
–time length of the current movie = 45 minutes
[Figure: concentric rings drawn over the 45-minute movie timeline; neighbor positions such as 11:40, 15:30, 16:20, and 18:20 fall on the gossip-ring or on skip-rings according to their distance from the current playing position]

9 Overlay Maintenance (1)
How to construct?
[Figure: overlay construction; a newly joining peer bootstraps through the tracker]

10 Overlay Maintenance (2)
How to maintain?
–scoped gossip over rings to discover new close peers
–Method: each peer periodically announces its position to its near-neighbors; gossip messages are only disseminated between close peers
–A gossip message is discarded in three cases: 1) out of scope (no next hop, or the next hop falls outside the gossip ring); 2) TTL = 0; 3) loop back (see the sketch below)
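The three discard rules translate into a short forwarding check. A minimal sketch; the message fields (position, ttl, visited) are assumed names, not from the paper:

```python
def should_forward(msg: dict, my_id: str, my_pos: float,
                   scope: float = 300.0) -> bool:
    """Scoped gossip: a position announcement keeps propagating only
    while it stays near the local playing position, has TTL left,
    and has not looped back to a peer it already visited."""
    if abs(msg["position"] - my_pos) > scope:
        return False                     # case 1: out of scope
    if msg["ttl"] <= 0:
        return False                     # case 2: TTL = 0
    if my_id in msg["visited"]:
        return False                     # case 3: loop back
    msg["ttl"] -= 1                      # consume one hop
    msg["visited"].add(my_id)            # remember this peer for loop detection
    return True
```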

11 Overlay Maintenance (2)
Multiple layers
–Member → Neighbor (gossip over rings) → Partner (sharing data)
Promote
–better candidates are promoted to the upper layer
–to accommodate the dynamics of the overlay topology and improve the quality of streaming
Check (see the sketch below)
–a partner is dropped if no data has been shared for a pre-defined time
–a neighbor is deleted if it has sent no update for a pre-defined time
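The two check rules can be sketched as a periodic sweep, using the T_p and T_m periods from the parameter list on slide 14; the per-entry bookkeeping (last_data, last_update) is an assumption:

```python
import time

def sweep_layers(partners: dict, neighbors: dict,
                 t_p: float = 180.0, t_m: float = 300.0) -> tuple[dict, dict]:
    """Drop partners that have shared no data within t_p seconds and
    neighbors that have sent no gossip update within t_m seconds."""
    now = time.time()
    partners = {p: s for p, s in partners.items()
                if now - s["last_data"] < t_p}
    neighbors = {n: s for n, s in neighbors.items()
                 if now - s["last_update"] < t_m}
    return partners, neighbors
```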

12 Overlay Maintenance (3)
How to look up new partners?
–Step 1: send a query to look up peers close to the destination position d
–Step 2: at each hop, forward the query to the known neighbor closest to d, until |P - d| < W (the buffer window size)
–Step 3: the reply includes new near-neighbors and far-neighbors, which form the new neighborhood (a sketch follows)
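One hedged way to express the greedy forwarding rule, with the rings flattened to (peer_id, position) pairs; the data layout is an assumption:

```python
def route_seek(d: float, my_pos: float, rings: list, w: float = 300.0):
    """One hop of a seek lookup: stop when the local buffer window
    already covers the destination d (|P - d| < W); otherwise forward
    the query to the known ring neighbor closest to d."""
    if abs(my_pos - d) < w:
        return ("reply", None)           # step 2's stop condition reached
    peer_id, _ = min(rings, key=lambda pn: abs(pn[1] - d))
    return ("forward", peer_id)          # greedy hop toward d
```

Because skip-ring radii double outward, each hop can roughly halve the remaining distance to d, which is what keeps seek latency low.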

13 Simulation Setup (1)
Topology generated by GT-ITM
–1000 peer nodes based on the transit-stub model
–3 transit domains, each with 5 transit nodes
–each transit node is connected to 6 stub domains, each with 12 stub nodes
Bandwidth
–100Mbps between two transit nodes
–10Mbps between a transit node and a stub node
–heterogeneous upload bandwidths for stub nodes: 4Mbps, 1Mbps, and 512Kbps
Movie
–one movie with a 60 KB/s streaming rate and 60 minutes of content

14 Simulation Setup (2)
Parameter list (value and description)
–w: 300 seconds, buffer window size
–B: 5, maximum ring number
–a: 300, radius coefficient of rings
–TTL: 5, maximum hop number for gossip messages
–t: 30 seconds, gossip period
–m: 15, number of near-neighbors in each gossip-ring
–k: 1, number of front far-neighbors in each skip-ring
–l: 1, number of back far-neighbors in each skip-ring
–n: 10, number of partners
–T_p: 3 minutes, checking period of partners in the partner list
–T_m: 5 minutes, checking period of members in the mCache
–T: 3600 seconds, length of a sample movie
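For reference, the same parameters expressed as a simulation configuration; a sketch only, mirroring the values above:

```python
# Default RINDY simulation parameters (values from the list above).
PARAMS = {
    "w":   300,   # buffer window size (seconds)
    "B":   5,     # maximum ring number
    "a":   300,   # radius coefficient of rings
    "TTL": 5,     # maximum hop number for gossip messages
    "t":   30,    # gossip period (seconds)
    "m":   15,    # near-neighbors in each gossip-ring
    "k":   1,     # front far-neighbors in each skip-ring
    "l":   1,     # back far-neighbors in each skip-ring
    "n":   10,    # number of partners
    "T_p": 180,   # partner checking period (seconds)
    "T_m": 300,   # mCache member checking period (seconds)
    "T":   3600,  # length of the sample movie (seconds)
}
```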

15 Simulation Results: Control Overhead (1)
–the overhead increases very slowly after the total number of peers reaches 400
–each peer only needs to process about 5 messages per second

16 Simulation Results: Control Overhead (2)
–10 percent of peers randomly join, leave the network, or perform VCR operations
–once the total number of peers reaches about 400, the average control overhead needed to accommodate overlay changes stays at a constant level
–the control overhead caused by peer dynamics is therefore independent of the size of the network

17 Simulation Results: Control Overhead (3)
–compared with normal gossip, scoped gossip over rings reduces control overhead by 15-66%

18 Simulation Results: Server Stress (1)
–examines the stress on the source server under different arrival rates
–a lower arrival rate leads to higher server stress
–for arrival rates between 0.1 and 10, server stress remains unchanged; beyond 10, it rises noticeably as the arrival rate increases

19 Simulation Results: Server Stress (2)
–examines the stress on the source server under different buffer window sizes
–for a low arrival rate, increasing the buffer window size significantly improves performance
–for popular channels, a larger buffer window cannot bring more benefits

20 RINDY vs. P2VoD (1): Server Stress (3)
–for P2VoD, server stress increases as the number of nodes grows, due to inefficient upload bandwidth utilization

21 RINDY vs. P2VoD (2): Quality of Streaming
–evaluated by the average timeslot missing rate (TMR)
–some joined peers are stopped randomly at a rate of 2 peers per minute
–RINDY achieves better reliability by using the gossip protocol over rings and by retrieving data packets from multiple partners

22 RINDY vs. P2VoD (3): Latency
–in P2VoD, a peer takes more time to join the overlay, and the joining latency increases with the number of nodes
–in contrast to P2VoD, RINDY significantly decreases the latency of random seeks

23 RINDY vs. P2VoD (4): Load Balance
–RINDY obtains better load balance among peers during VCR operations
–main reasons: in a tree-based overlay, all lookup operations start from the root node; in RINDY, all lookups start from the local rings

24 Conclusions
RINDY, a novel ring-assisted overlay, has the potential to support large-scale P2P VoD services; it efficiently supports VCR functions, particularly random seeks.
By keeping all neighbors in concentric rings with power-law radii, each peer can quickly locate new partners.
Scoped gossip is used to discover new close peers and helps RINDY reduce control overhead.
The weak consistency that gossip provides makes RINDY more reliable than tree-based overlays.
Simulation results show that RINDY outperforms P2VoD in terms of load balance, join latency, and seek latency.

25 http://www.gridcast.cn Thank you! Q&A The End

