
1 P2P Media streaming Rutuja Raghoji Ramya Tridandapani Malini Karunagaran

2 P2P with respect to media streaming • Users contribute content they have already consumed back to other users of the application. • Locates considerable intelligence at the edge of the network. • Distributes the resources required for content delivery throughout the Internet. • An efficient delivery model at scale, but with no guarantee of a minimum level of resource availability.

3 Live Streaming Metrics • Startup delay • Source-to-end delay • Playback continuity • These metrics are directly related to user satisfaction.

4 CoolStreaming/DONet: A Data-Driven Overlay Network for Efficient Live Media Streaming • An efficient, robust, and resilient system that is easy to implement. • Nodes periodically exchange data availability information. • Each node retrieves unavailable data from its partners and supplies available data to them. • A public Internet-based DONet implementation, called CoolStreaming v0.9. • Evaluated over PlanetLab: 30,000 distinct users, with a peak load of 4,000 simultaneous users. CoolStreaming

5 Need for media streaming • Multimedia applications have become more popular, e.g., net TV and news broadcasts. • Increased network traffic. • IP multicast is the most efficient delivery vehicle, but issues arise due to a lack of incentives: • to install multicast routers; • to carry multicast traffic. • Solution: application-level overlay networks.

6 IP Multicast • Generally advocates a tree-based structure. • Works well with dedicated infrastructure. • A tree mismatches an application-level overlay with dynamic nodes. • Vulnerable to failure; partially solved by structures such as mesh and forest. • Multicast systems fall into two categories: • proxy-assisted • peer-to-peer based.

7 Overlay construction algorithms • DONet uses the peer-to-peer paradigm. • Tree-based protocols and extensions use two types of distribution trees: • centralized, e.g., CoopNet; • distributed, e.g., SpreadIt, ZIGZAG. • Gossip-based protocol, used by DONet for membership management: • a newly generated message is sent to randomly selected nodes; • this is repeated until all nodes get the message. • Gossip suffers from significant redundancy, which may be severe for streaming applications.

8 Data-centric Approach • Adopted for the following reasons: • migrating to the application layer allows greater flexibility; • no prescribed roles for the nodes; • the availability of data guides flow directions, not a restrictive overlay structure; • suitable for overlays with highly dynamic nodes. • DONet targets live streaming with semi-synchronized nodes. • Design issues: • How are partnerships formed? • How is data availability information encoded and exchanged? • How is video data supplied and retrieved among partners?

9 Design and Optimization of DONet

10 Design and Optimization of DONet (contd.) • Terms and definitions: • Membership manager: maintains a partial view of other overlay nodes. • Partnership manager: establishes and maintains partnerships with partner nodes. • Scheduler: schedules the transmission of video data. • Source node (origin node): always a supplier. • Any other node can be a receiver, a supplier, or both, depending on segment availability information.

11 Node Join and Membership Management • Each node maintains a membership cache (mCache) containing a partial list of identifiers for the active nodes. • In the join algorithm, a newly joining node contacts the origin node, which redirects the request to a deputy node that supplies the list of partner candidates. • The Scalable Gossip Membership protocol (SCAM) distributes membership messages periodically. • Two events also trigger an mCache update: • forwarding a membership message through gossip; • serving as a deputy, which includes the entry in the candidate list.
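The gossip-based membership exchange above can be sketched as follows; the class layout, the duplicate-suppression via sequence numbers, and the eviction policy are illustrative assumptions, not the CoolStreaming implementation:

```python
import random

class Node:
    """Minimal sketch of DONet-style gossip membership (illustrative only)."""

    def __init__(self, node_id, mcache_size=30):
        self.node_id = node_id
        self.mcache = {}              # partial view: node id -> last seen seq number
        self.mcache_size = mcache_size

    def on_membership_message(self, origin_id, seq_num, fanout=3):
        """Record the originator, then pick random mCache entries to forward to."""
        if self.mcache.get(origin_id, -1) >= seq_num:
            return []                 # already seen this message: stop gossiping
        self.mcache[origin_id] = seq_num
        while len(self.mcache) > self.mcache_size:
            self.mcache.pop(next(iter(self.mcache)))   # evict the oldest entry
        targets = [n for n in self.mcache if n != origin_id]
        return random.sample(targets, min(fanout, len(targets)))
```

Forwarding stops once every node has seen the message, which is what bounds (but does not eliminate) the redundancy mentioned on slide 7.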

12 Partnership in DONet

13 Buffer Map Representation and Exchange • A video stream is divided into segments of equal length. • A Buffer Map (BM) represents the availability of segments in the buffer of a node. • Each node exchanges its BM with its partners before scheduling segment transfers. • The playback progress of the nodes is semi-synchronized. • A sliding window of 120 segments is used, each segment holding 1 second of video. • The BM thus contains 120 bits, one per segment, with value 1 indicating that the segment is available and 0 otherwise.
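The 120-bit BM and the request set it lets partners compute can be sketched as follows; the function names and the integer-bitmap encoding are illustrative:

```python
WINDOW = 120  # sliding window: 120 segments, 1 second of video each

def make_bm(buffered_segments, window_start):
    """Encode availability as an integer bitmap: bit i set means the
    segment window_start + i is in the buffer."""
    bm = 0
    for seg in buffered_segments:
        offset = seg - window_start
        if 0 <= offset < WINDOW:
            bm |= 1 << offset
    return bm

def segments_to_request(my_bm, partner_bm, window_start):
    """Segments a partner holds that we are missing: candidates to fetch."""
    wanted = partner_bm & ~my_bm
    return [window_start + i for i in range(WINDOW) if (wanted >> i) & 1]
```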

14 Scheduling Algorithm • For a homogeneous and static network, a simple round-robin scheduler works. • A dynamic network needs a more intelligent scheduler! • Two constraints: • the playback deadline for each segment; • heterogeneous streaming bandwidth from the partners. • This is a variation of the parallel machine scheduling problem, which is known to be NP-hard! • Heuristic: segments are ordered by their number of potential suppliers, and for each segment the supplier with the highest bandwidth and enough available time is selected. • Execution time of the implementation: 15 ms per execution. • The TCP-Friendly Rate Control (TFRC) protocol is adopted. • The origin node advertises its BM if needed.
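A minimal sketch of the heuristic: handle the segments with the fewest potential suppliers first, then pick the highest-bandwidth supplier for each. The real scheduler also verifies the supplier has enough time before the playback deadline; that check is omitted here for brevity, and all names are illustrative:

```python
def schedule(needed, suppliers_of, bandwidth):
    """needed: iterable of segment ids; suppliers_of: segment -> set of partners
    holding it; bandwidth: partner -> estimated bandwidth.
    Returns a mapping segment -> chosen partner."""
    # Rarest first: a segment with few suppliers leaves less freedom later.
    order = sorted(needed, key=lambda s: len(suppliers_of.get(s, ())))
    assignment = {}
    for seg in order:
        candidates = suppliers_of.get(seg, set())
        if candidates:
            assignment[seg] = max(candidates, key=lambda p: bandwidth[p])
    return assignment
```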

15 Failure Recovery and Partnership Refinement • A departure can be detected after an idle period of TFRC or BM exchange. • Recovery: reschedule using the BMs of the remaining partners; the probability of concurrent departures is small. • Two cases are handled to enhance resilience: • graceful departure; • node failure. • Each node also periodically establishes new partnerships, which helps to: • maintain a stable number of partners; • explore partners with better quality. • Each node i calculates a score for partner j as max{S(i,j), S(j,i)} and rejects the partner with the lowest score.
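The score-based refinement can be sketched as follows, with the S(i,j) values supplied as pre-computed segment counters; the data layout and names are assumptions for illustration:

```python
def refine_partners(seg_from, seg_to, max_partners):
    """seg_from[j]: segments we retrieved from partner j; seg_to[j]: segments
    we supplied to j. Score each j as the max of the two counters, keep the
    top max_partners, and return (kept, dropped)."""
    scores = {j: max(seg_from.get(j, 0), seg_to.get(j, 0))
              for j in set(seg_from) | set(seg_to)}
    ranked = sorted(scores, key=lambda j: scores[j], reverse=True)
    return ranked[:max_partners], ranked[max_partners:]
```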

16 PlanetLab-based Performance Evaluation • PlanetLab is a collection of machines distributed over the globe: 435 machines hosted by 203 sites, spanning over 25 countries.

17 Comparison with Tree-based Overlay

18 Comparison with Tree-based Overlay (contd.)

19 Conclusion • Two interesting facts from the results: • the current Internet has enough available bandwidth to support TV-quality streaming (≥ 450 Kbps); • the larger the data-driven overlay, the better the streaming quality it delivers. • Drawbacks: • copyright issues with content; • nodes behind a NAT gateway are often restricted to serve as receivers only (30% of users are behind NAT, though with TCP connections 95% of the nodes can become relaying nodes); • supports VBR encoding only.

20 ZigZag - Introduction • What is ZIGZAG? • A peer-to-peer protocol for single-source media streaming to multiple receivers. • Uses a tree structure. • What problem is it solving? • The protocol maintains the stream under the effects of network dynamics and unpredictable client behavior, i.e., on a failure it recovers quickly and gracefully. • It shortens the delay from the source to the receivers. • It limits the required overhead on the receivers. • This basic tree structure is fairly common, but ZIGZAG connects the peers differently. ZIGZAG

21 ZigZag – Design Objectives • A peer-to-peer technique for single-source media streaming. • The end-to-end delay from source to client should be low. • The node degree should be small. • Adapts to receivers that freely join and leave. • Minimizes the amount of control overhead.

22 ZigZag Protocol • Administrative organization: • represents the logical relationships among peers. • Multicast tree: • specifies which peer data is received from; • built based on C-rules, which help limit the degree (number of outbound links) of a peer. • Control protocol: • specifies the exchange of state information; • policies for adjusting the tree; • maintains the robustness of the tree.

23 Administrative Organization • A multi-layer hierarchy of clusters: • peers are partitioned into clusters of size [k, 3k]; • the roles "head" and "associate head" are assigned to certain peers. • Properties: • H, the number of layers, is bounded by [log_{3k} N, log_k N + 1]; • the maximum number of members in a cluster is 3k, which prevents a cluster from becoming undersized when a client leaves after a split.
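The layer bound can be checked numerically; the population size in the usage note below is an arbitrary example, not a figure from the paper:

```python
import math

def layer_bounds(num_peers, k):
    """Bounds [log_{3k} N, log_k N + 1] on the number of layers H when
    every cluster holds between k and 3k peers."""
    return math.log(num_peers, 3 * k), math.log(num_peers, k) + 1
```

For example, with N = 2000 peers and k = 5, H lies between roughly 2.8 and 5.7 layers, so the hierarchy stays shallow even for a large population.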

24 Administrative Organization • Peers are organized into a tree of clusters. • Each cluster has at least 4 nodes (except the top cluster). • One node in each cluster is the head of the cluster. • The head of a cluster becomes a member of the parent cluster.

25 Relationship among clusters and peers

26 Terms • Subordinate: the non-head peers of a cluster headed by a peer X are called "subordinates" of X. • Foreign head: a non-head (or server) clustermate of a peer X at layer j > 0 is called a "foreign head" of the layer-(j-1) subordinates of X. • Foreign subordinate: the layer-(j-1) subordinates of X are called "foreign subordinates" of any layer-j clustermate of X. • Foreign cluster: the layer-(j-1) cluster of X is called a "foreign cluster" of any layer-j clustermate of X.

27 Multicast tree • Built based on the administrative organization. • C-rules specify the actual data flow from the source to any peer: • at any layer below its highest, a peer has neither an outgoing nor an incoming link; • the non-head members of a cluster must receive the content directly from its associate head; in other words, this associate head links to every other non-head member; • the associate head of a cluster, except for the server, must get the content directly from a foreign head of the cluster. • Some nodes therefore stream data to more than one peer. • Assumption: the uplink capacity of a peer is enough for streaming content to multiple peers.

28 Multicast tree

29 Multicast tree - Properties • The workload is shared among the clients. • The worst-case node degree is 6k-3. • The end-to-end delay is small: the maximum height of the tree is 2 log_k N + 1. • Using an "associate head" for delivering media: • lowers the number of outbound links per head; • makes bottlenecks less likely to appear at the higher levels.

30 Control protocol • Goals: • minimize the number of peers that need to be contacted: information is exchanged only with the parent, children, and clustermates; • exchange as few states as possible. • Each node periodically communicates with its clustermates, children, and parent on the tree about: • which nodes it is communicating with; • which clusters have room for another node; • which nodes have paths to the lowest level of the tree. • The control overhead for an average member is a constant. The worst-case node has to communicate with more nodes, which is acceptable since the information exchanged is just soft-state refreshes.

31 Client Join/Departure • Basic principle: maintain the C-rules so that the nice degree and end-to-end delay properties are preserved. • Direct solution: reconstruct the administrative organization and multicast tree; costly in terms of the exchange of state information. • The proposed join/departure algorithms instead: • limit the number of nodes contacted during a join to O(k log_k N); • limit the number of peers that need to reconnect to 6k-2.

32 Client Join • A peer P sends a join request to the server. • The request is pushed down the multicast tree until a leaf node is found that has room in its cluster. • Clusters are checked periodically; if one grows too big, it is split. • Procedure at a node X: • If X is a layer-0 associate head: add P to the only cluster of X and make P a new child of X. • Else if Addable(X): select a child Y such that Addable(Y) and D(Y)+d(Y,P) is minimal, and forward the join request to Y. • Else: select a child Y such that Reachable(Y) and D(Y)+d(Y,P) is minimal, and forward the join request to Y.
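The routing of a join request can be sketched as a recursive descent. `TreeNode`, the `addable` flag, `depth_delay`, and the delay map are illustrative stand-ins for Addable(X), D(Y), and d(Y, P), and the Reachable(Y) fallback is simplified to "any child":

```python
class TreeNode:
    def __init__(self, name, depth_delay=0.0, children=None, addable=True):
        self.name = name
        self.depth_delay = depth_delay   # stand-in for D(Y)
        self.children = children or []
        self.addable = addable           # stand-in for Addable(Y): subtree has room

def route_join(node, delay_to_p):
    """Push the join request down, at each step picking the child Y that
    minimizes D(Y) + d(Y, P); a childless node accepts the peer."""
    if not node.children:
        return node.name
    candidates = [c for c in node.children if c.addable] or node.children
    best = min(candidates, key=lambda c: c.depth_delay + delay_to_p[c.name])
    return route_join(best, delay_to_p)
```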

33 Client Departure • When a peer departs from the tree: • the parent, subordinates, and children of the node are notified; • a non-head clustermate is selected and asked to take over forwarding the data; • if many departures make a cluster undersized, two clusters at the same layer are merged. • Tasks on the departure of a client X: • the parent removes its link to X; • the children of X need a new parent; • each layer-i cluster X belongs to needs a new head; • a layer-j cluster may require a new associate head.

34 Client Departure • If X's highest layer is layer 0: • if X is not the associate head, no extra work is needed; • if X is the associate head, the head of the cluster chooses another member to take up its responsibilities. • If X's highest layer is some j > 0: • X is then a head at layers 0 through j-1; • a non-head peer X' at layer 0 is randomly chosen to take over the head responsibility; • the heads of X's child clusters (Y) choose a new parent with minimum degree.

35 Performance Optimization • The performance optimization procedure distributes the service load fairly among the peers without violating the multicast tree rules. • The administrative organization and multicast tree are checked periodically: • a node with many children may switch parenthood of some of them to a clustermate (balancing based on the number of nodes); • switching may also be based on capacity or bandwidth (balancing based on bandwidth). • Refinement handling: • degree-based switch: balances degree by transferring service load; • capacity-based switch: balances the peer busyness among all non-head peers of a cluster.

36 Performance Analysis • Performance is examined under various scenarios on a network simulator with 3,240 nodes, 2,000 clients, and 5-15 nodes per cluster. • Scenario 1: no failures: • join overhead: on average, 2.4% of the population was contacted to add a new node; • if the server has too many children, a new client must ask most of them; • performance improved when a cluster split occurred. • Scenario 2: failures possible: • 200 of the 2,000 nodes fail sequentially; • mostly fewer than 20 reconnections were needed (2% of the population). • Scenario 3: ZIGZAG vs. NICE.

37 Scenario 1: No failure • Join and split overhead: • (a) the server has too many children and a new client has to ask all of them, resulting in many contacts; • (b) the split procedure takes place after detecting that a cluster at the second-highest layer is oversized, resulting in very few children.

38

39 Scenario 2: Failure Possible • The recovery overhead is always bounded by a constant, regardless of the client population size. • The merge overhead is always small, regardless of the client population size.

40 Scenario 3: ZIGZAG vs. NICE

41 Comparison with other P2P systems • SpreadIt: a single distribution tree; vulnerable to disruptions; long blocking time. • CoopNet: multiple distribution trees; heavy control overhead on the source. • Narada: multi-sender, multi-receiver streaming; works on small P2P networks.

42 Advantages • Oddities in the overhead of streaming multimedia over the Internet will not be felt as much by the end user. • Efficient methods for dropping and adding nodes. • Uses two structures: • one to maintain administration (the administrative organization); • one for data flow (the multicast tree). • Short end-to-end delay, thanks to the tree structure. • Low administration overhead: nodes "talk" to each other on a regular basis.

43 Disadvantages • Potentially a lot of overhead per node, especially for head nodes. • Needs to run the optimization procedure regularly to remain effective, which may itself cause large overhead.

44 Conclusion • A P2P media streaming scheme in which the maximum degree and the end-to-end delay are small. • A client join/leave algorithm is proposed, aimed at reducing the control overhead. • Simulation results suggest that 5,000 peers can be supported at a maximum degree of 15.

45 AnySee • An Internet-based peer-to-peer live streaming system. • Released in the summer of 2004 on CERNET in China. • Over 60,000 users have enjoyed live videos, including TV programs, movies, and academic conferences.

46 AnySee • What is it trying to achieve? • Improving global resource utilization. • Distributing traffic evenly over all physical links. • Assigning resources based on locality and delay. • Guaranteeing streaming service quality by using the nearest peers. • How is it different from the previously discussed systems? • It adopts inter-overlay optimization.

47 Intra-Overlay Optimization • Resources cannot join multiple overlays. • S1 -> A -> D = 3 + 5 = 8: not optimal globally! • S1 -> S2 -> D = 2 + 2 = 4: globally optimal!

48 Inter-Overlay Optimization • Resources can join multiple overlays. • Path S1->A->D (8) is replaced by S1->S2->D (4). • Path S2->G->H (9) is replaced by S2->G->C->G (7).
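The gain from inter-overlay optimization can be reproduced with the per-hop delays of slide 47's first example (S1->A->D vs. S1->S2->D); the dictionary encoding is just for illustration:

```python
def path_cost(hop_delay, path):
    """Sum the per-hop delays along a path given as a list of node names."""
    return sum(hop_delay[(a, b)] for a, b in zip(path, path[1:]))

hop_delay = {("S1", "A"): 3, ("A", "D"): 5,    # route inside D's own overlay
             ("S1", "S2"): 2, ("S2", "D"): 2}  # route through another overlay
```

With resources confined to one overlay, D is stuck with the cost-8 path; once peers may serve multiple overlays, the cost-4 path through S2 becomes usable.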

49 Inter-Overlay Optimization Key Challenges • Efficient neighbor discovery • Resource assignment • Overlay construction • Optimization

50 AnySee – System Overview

51 Mesh-based Overlay Manager • All peers belonging to different streaming overlays join one substrate mesh-based overlay. • Every peer has a unique identifier. • The key goal is to match this overlay to the underlying physical topology, using a location-aware topology-matching technique. • Uses ping and pong messages and flooding techniques to form the mesh.

52 AnySee – System Overview Single Overlay Manager • Intra-overlay optimization schemes similar to Narada, DONet, etc. • Deals with the join/leave operations of peers.

53 AnySee – System Overview Inter-Overlay Optimization Manager • Each peer maintains an active streaming path set and a backup streaming path set. • The inter-overlay optimization algorithm is invoked when the number of backup streaming paths falls below a threshold. • A path from the backup set is promoted when an active streaming path is cut off due to poor streaming QoS.

54 AnySee – System Overview Inter-Overlay Optimization Manager • Two main tasks: • backup streaming path set management; • active streaming path set management. • Backup streaming path set management is done using the Reverse Tracing algorithm.

55 AnySee – System Overview Key Node Manager • Allocates and manages resources. • Implements the admission control policy. Buffer Manager • Responsible for receiving valid media data from the multiple providers in the active streaming path set and keeping media playback continuous. • A small buffer space is enough, because inter-overlay optimization yields a shorter startup delay.

56 System Evaluation • Two topologies are considered: • Physical topology: a realistic topology with Internet characteristics, generated with BRITE. • Logical topology: an overlay P2P topology built on top of the physical topology, obtained with a Gnutella-based crawler.

57 System Evaluation – Simulation Parameters • S: number of streaming overlays • M: number of neighbors • N: size of one overlay • r: streaming playback rate • C: total number of bandwidth connections

58 System Evaluation – Performance Metrics • Resource utilization: • the ratio of used connections to all connections. • Continuity index: • the number of segments that arrive before their playback deadline over the total number of segments; • represents playback continuity.
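Both metrics reduce to simple ratios; a minimal sketch, with the argument layout (per-segment arrival times and deadlines) assumed for illustration:

```python
def resource_utilization(used_connections, total_connections):
    """Fraction of all connections that actually carry streaming traffic."""
    return used_connections / total_connections

def continuity_index(arrival_time, deadline):
    """Fraction of segments arriving at or before their playback deadline;
    segments that never arrive still count in the denominator."""
    on_time = sum(1 for seg, t in arrival_time.items() if t <= deadline[seg])
    return on_time / len(deadline)
```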

59 System Evaluation • Continuity index vs. streaming rate when N=400, S=12, and the initial buffer size is 40 seconds.

60 System Evaluation • Resource utilization: overlay size vs. the number of streaming overlays when M=12 and r=300 Kbps.

61 System Evaluation • Continuity index under dynamic environments when M=5, N=400, r=300 Kbps, and the initial buffer size is 40 seconds.

62 System Evaluation • Resource utilization under dynamic environments when M=5, N=400, and r=300 Kbps.

63 Conclusion • Efficient and scalable live-streaming overlay construction has become a hot topic recently. • Important metrics to consider in any live streaming application: • startup delay • source-to-end delay • playback continuity.

64 Conclusion • The previously studied intra-overlay optimization techniques have drawbacks: • low resource utilization • high startup delay • high source-to-end delay • inefficient resource assignment in global P2P networks.

65 Conclusion • AnySee uses an inter-overlay optimization technique. • AnySee peers can construct efficient paths using peers in different overlays, rather than merely selecting better paths within the same overlay. • Experimental results show that AnySee outperforms existing intra-overlay live streaming schemes such as CoolStreaming.

66 CDN vs. P2P
CDN: • excellent quality to end users when the workload is within provisioning limits; • constrained by the specifics of the existing servers and bandwidth; • not highly scalable; • higher operation costs.
P2P: • high scalability; • low maintenance cost; • low stream quality with undesirable disruptions; • unfairness in the face of heterogeneous peer resources.

67 LiveSky • A hybrid CDN-P2P system for live video streaming. • Incorporates the best of both technologies: each mutually offsets the other's deficiencies. LiveSky

68 LiveSky – System Architecture

69 References • D. A. Tran, "ZIGZAG: An Efficient Peer-to-Peer Scheme for Media Streaming," INFOCOM 2003. • X. Zhang, "CoolStreaming/DONet: A Data-Driven Overlay Network for Peer-to-Peer Live Media Streaming," INFOCOM 2005. • X. Liao, H. Jin, Y. Liu, L. M. Ni, and D. Deng, "AnySee: Peer-to-Peer Live Streaming," INFOCOM 2006. • H. Yin et al., "Design and Deployment of a Hybrid CDN-P2P System for Live Video Streaming: Experiences with LiveSky," ACM Multimedia 2009. • D. A. Tran, K. A. Hua, and T. T. Do, "A Peer-to-Peer Architecture for Media Streaming."

70 Thank You

