P2P Media Streaming
Rutuja Raghoji, Ramya Tridandapani, Malini Karunagaran

P2P with respect to media streaming
- Users contribute content they have already consumed back to other users of the application.
- Considerable intelligence is located at the edge of the network.
- The resources required for content delivery are distributed throughout the Internet.
- An efficient delivery model at scale, but with no guarantee of a minimum level of resource availability.

Live Streaming Metrics
- Startup delay
- Source-to-end delay
- Playback continuity
The above metrics are directly related to user satisfaction.

CoolStreaming/DONet: A Data-Driven Overlay Network for Efficient Live Media Streaming
- An efficient, robust, and resilient system that is easy to implement.
- Partners periodically exchange data-availability information.
- A node retrieves unavailable data from its partners and supplies available data to them.
- A public Internet-based DONet implementation, CoolStreaming v0.9.
- Evaluated over PlanetLab: 30,000 distinct users, with a peak load of 4,000 simultaneous users.

Need for media streaming (CoolStreaming)
- Multimedia applications have become more popular, e.g. Internet TV and news broadcasts, which has increased network traffic.
- IP multicast is the most efficient delivery vehicle, but issues arise from a lack of incentives:
  - to install multicast routers
  - to carry multicast traffic
- Solution: application-level overlay networks.

IP Multicast (CoolStreaming)
- Generally advocates a tree-based structure.
- Works well with dedicated infrastructure, but a tree mismatches an application-level overlay with dynamic nodes.
- Vulnerable to failures; partially addressed by structures such as mesh and forest.
- Multicast systems fall into two categories:
  - proxy-assisted
  - peer-to-peer based

Overlay construction algorithms (CoolStreaming)
- DONet uses the peer-to-peer paradigm.
- Tree-based protocols and extensions use two types of distribution trees:
  - centralized, e.g. CoopNet
  - distributed, e.g. SpreadIt, ZIGZAG
- Gossip-based protocols, used by DONet for membership management:
  - A newly generated message is sent to randomly selected nodes.
  - This is repeated until all nodes have received the message.
  - Gossip suffers from significant redundancy, which may be severe for streaming applications.

Data-centric Approach (CoolStreaming)
- Adopted for the following reasons:
  - Migrating to the application layer allows greater flexibility.
  - Nodes have no prescribed roles.
  - Data availability guides the flow directions; flows are not restricted by an overlay structure.
  - Suitable for overlays with highly dynamic nodes.
- DONet targets live streaming with semi-synchronized nodes.
- Design issues:
  - How are partnerships formed?
  - How is data-availability information encoded and exchanged?
  - How is video data supplied and retrieved among partners?

Design and Optimization of DONet

Design and Optimization of DONet (contd.)
Terms and definitions:
- Membership manager: maintains a partial view of the other overlay nodes.
- Partnership manager: establishes and maintains partnerships with partner nodes.
- Scheduler: schedules the transmission of video data.
- Source (origin) node: always a supplier.
- Any other node can be a receiver, a supplier, or both, depending on segment-availability information.

Node Join and Membership Management
- Each node maintains a membership cache (mCache) containing a partial list of identifiers of the active nodes.
- In the joining algorithm, a newly joined node contacts the origin node, which redirects the request to a deputy node that supplies the initial partner list.
- The Scalable Gossip Membership protocol (SCAM) distributes membership messages periodically.
- Two further events trigger an mCache update:
  - forwarding a membership message through gossip
  - serving as a deputy, which adds an entry to the candidate list
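A minimal sketch (not the DONet code; the data layout, timestamps, and list sizes are assumptions) of how an mCache-based join could work: the origin keeps a partial view refreshed by gossip, redirects a newcomer to a randomly chosen deputy, and the deputy answers with a candidate partner list.

```python
import random
import time

class Node:
    """Hypothetical DONet-style node keeping a partial view (mCache) of active peers."""
    def __init__(self, node_id):
        self.node_id = node_id
        self.mcache = {}   # peer_id -> last time a membership message was seen

    def on_gossip(self, peer_id):
        # Forwarding or receiving a gossiped membership message refreshes the mCache entry.
        self.mcache[peer_id] = time.time()

    def pick_deputy(self):
        # The origin redirects a join request to a randomly chosen deputy from its mCache.
        return random.choice(list(self.mcache)) if self.mcache else None

    def candidate_list(self, count=4):
        # A deputy answers a join request with a subset of its partial view
        # as initial partner candidates for the newcomer.
        return random.sample(list(self.mcache), min(count, len(self.mcache)))

origin = Node("origin")
for pid in ("A", "B", "C", "D", "E"):
    origin.on_gossip(pid)
print("redirected to deputy:", origin.pick_deputy())
print("partner candidates:", origin.candidate_list())
```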

Partnership in DONet

Buffer Map Representation and Exchange
- A video stream is divided into segments of equal length; a Buffer Map (BM) represents the availability of segments in a node's buffer.
- Each node exchanges its BM with its partners before scheduling segment transfers.
- Playback progress of the nodes is semi-synchronized.
- A sliding window of 120 segments is used, each holding 1 second of video.
- A BM therefore contains 120 bits, one per segment, with value 1 indicating the segment is available and 0 otherwise.
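A small sketch of the buffer map exchange under these assumptions (120 one-second segments per window, as on the slide; the list-of-bits representation and helper names are illustrative):

```python
WINDOW = 120  # sliding window of 120 one-second segments (from the slide)

def make_buffer_map(window_start, available_segments):
    """Return a list of 120 bits: bit i is 1 if segment window_start+i is buffered."""
    return [1 if (window_start + i) in available_segments else 0 for i in range(WINDOW)]

def missing_segments(window_start, my_bm, partner_bm):
    """Segments the partner has but we do not, i.e. candidates to request."""
    return [window_start + i
            for i in range(WINDOW)
            if partner_bm[i] == 1 and my_bm[i] == 0]

# Example: we hold segments 1000-1049, a partner holds 1000-1099.
mine = make_buffer_map(1000, set(range(1000, 1050)))
partner = make_buffer_map(1000, set(range(1000, 1100)))
print(missing_segments(1000, mine, partner)[:5])   # [1050, 1051, 1052, 1053, 1054]
```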

Scheduling Algorithm
- For a homogeneous, static network, a simple round-robin scheduler works; a dynamic network needs a more intelligent scheduler.
- Two constraints: the playback deadline of each segment and the heterogeneous streaming bandwidth of the partners.
- The problem is a variation of parallel machine scheduling, which is NP-hard, so a heuristic is used:
  - Segments are ordered by their number of potential suppliers, and for each segment the supplier with the highest bandwidth and enough available time is selected.
- Execution time of the implementation: about 15 ms per run.
- Transfers use the TCP-Friendly Rate Control (TFRC) protocol.
- The origin node advertises its BM if needed.
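A sketch of the heuristic as described above: segments are processed in order of how many partners can supply them, and each is assigned to the supplier with the highest bandwidth that can still meet the playback deadline. The transfer-time estimate and parameter names are assumptions, and a fuller version would also account for the load already assigned to each supplier in the same round.

```python
def schedule(missing, partner_bms, bandwidth, deadline, now, segment_bits):
    """Assign each missing segment to one supplier (a sketch of the slide's heuristic).

    missing      : list of segment ids we still need
    partner_bms  : {partner_id: set of segment ids that partner holds}
    bandwidth    : {partner_id: estimated bits/sec from that partner}
    deadline     : {segment id: playback deadline in seconds}
    now          : current time in seconds
    segment_bits : size of one segment in bits
    """
    def supplier_count(seg):
        # Fewest potential suppliers first: scarce segments are placed first.
        return sum(1 for bm in partner_bms.values() if seg in bm)

    assignment = {}
    for seg in sorted(missing, key=supplier_count):
        candidates = [p for p, bm in partner_bms.items() if seg in bm]
        # Keep only suppliers that can finish before the segment's playback deadline.
        feasible = [p for p in candidates
                    if now + segment_bits / bandwidth[p] <= deadline[seg]]
        if feasible:
            assignment[seg] = max(feasible, key=lambda p: bandwidth[p])
    return assignment

print(schedule(missing=[7, 8],
               partner_bms={"A": {7, 8}, "B": {8}},
               bandwidth={"A": 600_000, "B": 450_000},
               deadline={7: 2.0, 8: 3.0},
               now=0.0,
               segment_bits=450_000))   # {7: 'A', 8: 'A'}
```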

Failure Recovery and Partnership Refinement
- A departure is detected after an idle period on the TFRC connection or in BM exchanges.
- The affected node reschedules using the BM information; the probability of concurrent departures is small.
- Two operations enhance resilience: handling graceful departures and handling node failures.
- Each node periodically establishes new partnerships, which helps to:
  - maintain a stable number of partners
  - let nodes explore partners of better quality
- Each node i calculates a score for partner j as max{S_i,j, S_j,i} and rejects the partner with the lowest score.
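A sketch of the refinement step; S_i,j is the score from the slide (its exact definition is not given here, so the segment counts used below are stand-ins):

```python
def refine_partners(partners, supplied_to, received_from, max_partners):
    """Drop the lowest-scoring partner when we exceed the target partner count.

    supplied_to[j]   : segments node i recently sent to partner j (a stand-in for S_j,i)
    received_from[j] : segments node i recently received from partner j (a stand-in for S_i,j)
    """
    if len(partners) <= max_partners:
        return partners
    def score(j):
        return max(received_from.get(j, 0), supplied_to.get(j, 0))
    worst = min(partners, key=score)
    return [p for p in partners if p != worst]

print(refine_partners(["A", "B", "C"],
                      supplied_to={"A": 40, "B": 5},
                      received_from={"A": 10, "B": 2, "C": 1},
                      max_partners=2))
# ['A', 'B'] -- C has the lowest max(received, supplied) score and is dropped
```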

PlanetLab-based Performance Evaluation
- PlanetLab is a collection of machines distributed over the globe: 435 machines hosted by 203 sites, spanning over 25 countries.

Comparison with Tree-based Overlay

Comparison with Tree-based Overlay (contd.)

Conclusion (CoolStreaming)
- Two interesting facts from the results:
  - The current Internet has enough available bandwidth to support TV-quality streaming (≥ 450 Kbps).
  - The larger the data-driven overlay, the better the streaming quality it delivers.
- Drawbacks:
  - Copyright issues with the content.
  - Nodes behind a NAT gateway are often restricted to serving as receivers only; about 30% of users are behind NAT.
  - With TCP connections, 95% of the nodes can become relaying nodes.
  - Supports VBR encoding only.

ZigZag - Introduction
- What is ZIGZAG?
  - A peer-to-peer protocol for single-source media streaming to multiple receivers.
  - Uses a tree structure.
- What problem does it solve?
  - It can maintain the stream under network dynamics and unpredictable client behavior: on failure, recovery is quick and graceful.
  - It shortens the delay from the source to the receivers.
  - It limits the required overhead on the receivers.
- This basic structure is fairly common, but ZIGZAG connects the peers differently.

ZigZag – Design Objectives
- A peer-to-peer technique for single-source media streaming.
- The end-to-end delay from source to client should be low.
- The node degree should be small.
- Adapt to receivers freely joining and leaving.
- Minimize the amount of control overhead.

ZigZag Protocol
- Administrative organization: represents the logical relationships among peers.
- Multicast tree: specifies which peer data is received from; built based on the C-rules, which help limit the degree (number of outbound links) of a peer.
- Control protocol: specifies the exchange of state information, the policies for adjusting the tree, and how the robustness of the tree is maintained.

Administrative Organization
- A multi-layer hierarchy of clusters: peers are partitioned into clusters of size [k, 3k], and the roles of "head" and "associate head" are assigned to certain peers.
- Properties:
  - H, the number of layers, is bounded by [log_{3k} N, log_k N + 1].
  - The maximum number of members in a cluster is 3k, which prevents a cluster from becoming undersized when a client leaves after a split.

Administrative Organization (contd.)
- Peers are organized into a tree of clusters.
- Each cluster has at least 4 nodes (except the top one).
- One of those nodes is the head of the cluster.
- The head node of a cluster becomes a member of the parent cluster.
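A sketch of building such a hierarchy bottom-up: partition peers into clusters of at least k members, promote one head per cluster into the layer above, and repeat until a single top cluster remains. The simple chunking used here is an assumption; the real protocol forms and maintains clusters incrementally.

```python
def build_hierarchy(peers, k):
    """Return a list of layers; each layer is a list of clusters (lists of peer ids)."""
    layers = []
    members = list(peers)
    while True:
        # Chunk into clusters of size k; fold a too-small trailing chunk into its neighbor
        # so every cluster stays within [k, 3k].
        clusters = [members[i:i + k] for i in range(0, len(members), k)]
        if len(clusters) > 1 and len(clusters[-1]) < k:
            clusters[-2].extend(clusters.pop())
        layers.append(clusters)
        if len(clusters) == 1:          # single top cluster: done
            break
        # The head of each cluster (first member here) also joins the layer above.
        members = [c[0] for c in clusters]
    return layers

layers = build_hierarchy([f"p{i}" for i in range(40)], k=4)
print([len(layer) for layer in layers])   # clusters per layer: [10, 2, 1]
```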

Relationship among clusters and peers

Terms
- Subordinate: the non-head peers of a cluster headed by a peer X are called subordinates of X.
- Foreign head: a non-head (or server) clustermate of a peer X at layer j > 0 is called a foreign head of the layer-(j-1) subordinates of X.
- Foreign subordinate: the layer-(j-1) subordinates of X are called foreign subordinates of any layer-j clustermate of X.
- Foreign cluster: the layer-(j-1) cluster of X is called a foreign cluster of any layer-j clustermate of X.

Multicast tree
- Built based on the administrative organization; the C-rules specify the actual data flow from the source to any peer:
  - A peer, when not at its highest layer, has neither an outgoing link nor an incoming link.
  - Non-head members of a cluster must receive the content directly from the cluster's associate head; in other words, this associate head links to every other non-head member.
  - The associate head of a cluster, except for the server, must get the content directly from a foreign head of the cluster.
- Some nodes therefore stream data to more than one peer.
- Assumption: the uplink capacity of a peer is enough for streaming the content to multiple peers.
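A sketch that derives the streaming links of one cluster from the C-rules above; the dictionary layout and the choice of the first eligible foreign head are assumptions of this illustration:

```python
def data_links(cluster, parent_cluster):
    """Return (child, parent) streaming links for one cluster under the C-rules.

    cluster        : dict with keys 'members', 'head', 'associate_head'
    parent_cluster : the layer above, containing cluster['head']; None for the top cluster
    """
    links = []
    # Rule: every non-head member receives the stream directly from the associate head.
    for m in cluster['members']:
        if m not in (cluster['head'], cluster['associate_head']):
            links.append((m, cluster['associate_head']))
    # Rule: the associate head (unless it is the server) receives from a foreign head,
    # i.e. a non-head clustermate of this cluster's head one layer up.
    if parent_cluster is not None:
        foreign_heads = [m for m in parent_cluster['members']
                         if m not in (parent_cluster['head'], cluster['head'])]
        if foreign_heads:
            links.append((cluster['associate_head'], foreign_heads[0]))
    return links

parent = {'members': ['X', 'F1', 'F2'], 'head': 'F2', 'associate_head': 'F1'}
child  = {'members': ['X', 'a', 'b', 'c'], 'head': 'X', 'associate_head': 'a'}
print(data_links(child, parent))   # [('b', 'a'), ('c', 'a'), ('a', 'F1')]
```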

Multicast tree (contd.)

Multicast tree - Properties
- The workload is shared among the clients.
- The worst-case node degree is 6k - 3.
- The end-to-end delay is small: the maximum height of the tree is 2 log_k N + 1.
- Using the associate head for delivering media keeps the number of outbound links lower, so bottlenecks are less likely to appear at higher levels.
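A quick numeric check of these bounds, plugging in values consistent with the evaluation slides later in the deck (clusters of 5 to 15 nodes, i.e. k = 5, and roughly 2,000 clients); the inputs are assumptions, only the formulas come from the slide:

```python
import math

k, N = 5, 2000                       # cluster parameter and client population (assumed)
worst_degree = 6 * k - 3             # slide: worst-case node degree is 6k - 3
max_height = 2 * math.log(N, k) + 1  # slide: maximum tree height is 2*log_k(N) + 1
print(worst_degree)                  # 27
print(round(max_height, 1))          # ~10.4, i.e. roughly 10-11 overlay hops in the worst case
```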

Control protocol
- Goals:
  - Minimize the number of peers that need to be contacted: a node only exchanges information with its parent, children, and clustermates.
  - Exchange as little state as possible.
- Each node in a cluster periodically communicates with its clustermates, children, and parent on the tree, reporting:
  - which nodes it is communicating with
  - which clusters have room for another node
  - which nodes have paths to the lowest level of the tree
- The control overhead for an average member is a constant. The worst-case node has to communicate with more nodes, which is acceptable since the information exchanged consists only of soft-state refreshes.

Client Join/Departure
- Basic principle: maintain the C-rules so that the nice degree and end-to-end delay properties are preserved.
- Direct solution: reconstruct the administrative organization and multicast tree; this is costly in terms of state-information exchange.
- Proposed join/departure algorithms:
  - limit the number of nodes contacted during a join to O(k log_k N)
  - limit the number of peers that need to reconnect to 6k - 2

Client Join
- When a peer sends a join request to the server, the request is pushed down the multicast tree until a leaf node with room in its cluster is found.
- Clusters are checked periodically; if a cluster grows too big, it is split.
- Procedure (for the node X currently holding the join request from new peer P):

    if X is a layer-0 associate head:
        add P to the only cluster of X
        make P a new child of X
    else:
        if Addable(X):
            select a child Y such that Addable(Y) and D(Y) + d(Y, P) is minimal
        else:
            select a child Y such that Reachable(Y) and D(Y) + d(Y, P) is minimal
        forward the join request to Y
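The same walk-down rendered as a runnable sketch; the node class, the Addable/Reachable predicates, and the delay bookkeeping are simplified stand-ins for the paper's definitions:

```python
class JoinNode:
    """Minimal node stand-in for the join walk; the predicates are assumptions."""
    def __init__(self, name, delay=0.0, layer0_assoc_head=False, children=None):
        self.name = name
        self.delay = delay                       # D(Y): delay from the source to this node
        self.layer0_assoc_head = layer0_assoc_head
        self.children = children or []
        self.cluster = []

    def addable(self):
        # True if a new peer could still be placed somewhere under this node (simplified).
        return True

    def reachable(self):
        return True

def route_join(x, p, distance):
    """Walk the join request for new peer p down from x, following the slide's procedure."""
    while not x.layer0_assoc_head:
        candidates = [y for y in x.children if y.addable()] or \
                     [y for y in x.children if y.reachable()]
        # Greedily follow the child minimizing D(Y) + d(Y, P), as on the slide.
        x = min(candidates, key=lambda y: y.delay + distance(y, p))
    x.cluster.append(p)      # P joins X's layer-0 cluster and becomes a child of X
    return x

# Usage: two layer-0 associate heads under the server; the one with the smaller
# D(Y) + d(Y, P) wins.
a = JoinNode("A", delay=2.0, layer0_assoc_head=True)
b = JoinNode("B", delay=1.0, layer0_assoc_head=True)
server = JoinNode("server", children=[a, b])
print(route_join(server, "P", lambda y, p: 3.0 if y.name == "A" else 2.5).name)  # "B"
```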

Client Departure
- When a peer departs from the tree:
  - Its parent, subordinates, and children are notified.
  - A non-head clustermate is selected and asked to take over forwarding the data.
  - If many departures cause a cluster to become undersized, two clusters at the same layer are merged.
- Tasks to perform when a client X departs:
  - The parent removes its link to X.
  - The children of X need a new parent.
  - Each layer-i cluster X belongs to needs a new head.
  - A layer-j cluster may require a new associate head.

Client Departure (contd.)
- If X's highest layer is layer 0:
  - If X is not the associate head, no extra work is needed.
  - If X is the associate head, the head of the cluster chooses another member to take over its responsibilities.
- If X's highest layer is j (non-zero), X is the head at layers 0 through j-1:
  - A non-head peer X' at layer 0 is randomly chosen to take over the head responsibility.
  - Each head Y among the children of X chooses a new parent with minimum degree.
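A sketch of this case analysis; the cluster dictionaries and the way roles are recorded are assumptions, and a real recovery would also reconnect the children of the departed peer:

```python
import random

def handle_departure(x, clusters_of_x):
    """Who takes over when peer x leaves (a sketch of the slide's cases).

    clusters_of_x: x's clusters from layer 0 upward; x heads every one except the highest.
    Each cluster is a dict {'members': [...], 'head': ..., 'associate_head': ...}.
    Returns the replacement peer, or None if no handover is needed.
    """
    layer0 = clusters_of_x[0]
    others = [m for m in layer0['members'] if m not in (x, layer0['head'])]
    if x == layer0['head']:
        # x headed layers 0..j-1: a random layer-0 non-head clustermate inherits the head roles.
        replacement = random.choice(others)
        for i, cluster in enumerate(clusters_of_x):
            if i == 0:
                cluster['members'].remove(x)   # the replacement is already a layer-0 member
            else:
                cluster['members'] = [replacement if m == x else m for m in cluster['members']]
            if i < len(clusters_of_x) - 1:
                cluster['head'] = replacement
        return replacement
    if x == layer0['associate_head']:
        # The head picks another member to take over the associate head's forwarding duties.
        replacement = random.choice(others)
        layer0['associate_head'] = replacement
        layer0['members'].remove(x)
        return replacement
    layer0['members'].remove(x)     # plain layer-0 member: no extra work needed
    return None
```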

Performance Optimization
- The performance optimization procedure distributes the service load fairly among the peers without violating the multicast tree rules.
- The administrative organization and multicast tree are checked periodically:
  - If a node has many children, it may switch parenthood of some of them to a clustermate (balancing based on the number of nodes).
  - Switching can also be based on capacity or bandwidth (balancing based on bandwidth).
- Refinement handling:
  - Degree-based switch: balances degree by transferring service load.
  - Capacity-based switch: balances peer busyness among all non-head peers of a cluster.
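A sketch of the degree-based switch: when one clustermate serves far more children than another, children are handed over until the gap closes. The threshold and the one-child-at-a-time policy are assumptions of this sketch.

```python
def degree_based_switch(node_children, threshold=2):
    """Balance child counts among clustermates by moving children from the busiest
    node to the least busy one while the gap exceeds a threshold.

    node_children : {node_id: list of child ids}
    """
    busiest = max(node_children, key=lambda n: len(node_children[n]))
    lightest = min(node_children, key=lambda n: len(node_children[n]))
    while len(node_children[busiest]) - len(node_children[lightest]) > threshold:
        child = node_children[busiest].pop()
        node_children[lightest].append(child)   # switch parenthood of one child
    return node_children

print(degree_based_switch({'X': ['a', 'b', 'c', 'd', 'e'], 'Y': ['f']}))
# {'X': ['a', 'b', 'c', 'd'], 'Y': ['f', 'e']}
```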

Performance Analysis
- Performance was examined under different scenarios on a network simulator with 3,240 nodes and 2,000 clients, with 5 to 15 nodes per cluster.
- Scenario 1 - no failures:
  - Overhead of adding new nodes: on average 2.4% of the population was contacted.
  - If a node has too many children, a joining client must ask most of them.
  - Performance improved when a cluster split occurred.
- Scenario 2 - failures possible:
  - Every 200 nodes (out of 2,000) fail sequentially.
  - Mostly fewer than 20 reconnections (2% of the population) were needed.
- Scenario 3: ZIGZAG vs. NICE.

Scenario 1: No failures
Join and split overhead:
a. The server has too many children and a new client has to ask all of them, resulting in many contacts.
b. The split procedure takes place after detecting that a cluster at the second-highest layer is oversized, which results in very few children.

Scenario 2: Failures possible
- Recovery overhead is always bounded by a constant, regardless of the client population size.
- Merge overhead is always small, regardless of the client population size.

Scenario 3: ZIGZAG vs. NICE

Comparison with other P2P systems
- SpreadIt: single distribution tree; vulnerable to disruptions; long blocking time.
- CoopNet: multiple distribution trees; heavy control overhead on the source.
- Narada: multi-sender, multi-receiver streaming; works on small P2P networks.

Advantages
- Irregularities in the overhead of streaming multimedia over the Internet are not felt as much by the end user.
- Efficient methods for nodes dropping and adding.
- Uses two trees: one to maintain administration and one for data flow.
- Short end-to-end delay, thanks to the tree structure.
- Low administration overhead: nodes talk to each other on a regular basis.

Disadvantages
- Potentially a lot of overhead per node, especially for head nodes.
- Needs to run the optimization at random intervals to be effective, which may cause large overhead.

Conclusion (ZIGZAG)
- A P2P media streaming scheme in which the maximum degree and end-to-end delay are small.
- A client join/leave algorithm is proposed, aimed at reducing the control overhead.
- Simulation results suggest that 5,000 peers can be supported at a maximum degree of 15.

AnySee
- An Internet-based peer-to-peer live streaming system.
- Released in the summer of 2004 on CERNET in China.
- Over 60,000 users have enjoyed live videos, including TV programs, movies, and academic conferences.

AnySee
- What is it trying to achieve?
  - Improving global resource utilization.
  - Distributing traffic evenly across all physical links.
  - Assigning resources based on locality and delay.
  - Guaranteeing streaming service quality by using the nearest peers.
- How is it different from the previously discussed systems?
  - It adopts inter-overlay optimization.

Intra-Overlay Optimization
- Resources cannot join multiple overlays.
- Example: the path S1 -> A -> D has delay 3 + 5 = 8, which is not globally optimal; the path S1 -> S2 -> D has total delay 4 and is globally optimal, but it crosses overlays.

Inter-Overlay Optimization
- Resources can join multiple overlays.
- The path S1 -> A -> D (delay 8) is replaced by S1 -> S2 -> D (delay 4).
- The path S2 -> G -> H (delay 9) is replaced by S2 -> G -> C -> H (delay 7).

Inter-Overlay Optimization - Key Challenges
- Efficient neighbor discovery
- Resource assignment
- Overlay construction
- Optimization

AnySee – System Overview

Mesh-based Overlay Manager
- All peers belonging to different streaming overlays join one substrate, mesh-based overlay.
- Every peer has a unique identifier.
- The key goal is to match this overlay to the underlying physical topology, which is done using a location-aware topology-matching technique.
- Ping and pong messages and flooding techniques are used to form the mesh.

AnySee – System Overview: Single Overlay Manager
- Intra-overlay optimization schemes similar to Narada, DONet, etc.
- Deals with the join/leave operations of peers.

AnySee – System Overview: Inter-Overlay Optimization Manager
- Each peer maintains an active streaming path set and a backup streaming path set.
- The inter-overlay optimization algorithm is called when the number of backup streaming paths falls below a threshold.
- A path from the backup streaming path set is chosen when an active streaming path is cut off due to poor streaming QoS.

AnySee – System Overview: Inter-Overlay Optimization Manager (contd.)
- Two main tasks: backup streaming path set management and active streaming path set management.
- Backup streaming path set management is done using a reverse tracing algorithm.
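A sketch of one maintenance round for these two tasks; the threshold test and QoS check follow the slides, while the reverse-tracing discovery step is represented by a placeholder callable (an assumption, not the paper's algorithm):

```python
def maintain_paths(active, backup, threshold, find_backup_paths, qos_ok):
    """One maintenance round for AnySee-style path sets (a sketch).

    active, backup    : lists of path objects
    find_backup_paths : callable standing in for the reverse-tracing discovery step (assumed)
    qos_ok            : callable reporting whether a path still meets streaming QoS
    """
    # Refill the backup set when it drops below the threshold.
    if len(backup) < threshold:
        backup.extend(find_backup_paths(threshold - len(backup)))
    # Replace active paths whose streaming QoS has degraded with backup paths.
    for i, path in enumerate(active):
        if not qos_ok(path) and backup:
            active[i] = backup.pop(0)
    return active, backup
```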

AnySee – System Overview: Key Node Manager and Buffer Manager
- Key node manager: allocates and manages resources; applies an admission control policy.
- Buffer manager: responsible for receiving valid media data from the multiple providers in the active streaming path set and keeping media playback continuous.
- A small buffer space is enough, because inter-overlay optimization gives a shorter startup delay.

System Evaluation
Two topologies are considered for system evaluation:
- Physical topology: a realistic topology with Internet characteristics, generated with BRITE.
- Logical topology: an overlay P2P topology built on top of the physical topology, obtained with a Gnutella-based crawler.

System Evaluation – Simulation Parameters
- S: number of streaming overlays
- M: number of neighbors
- N: size of one overlay
- r: streaming playback rate
- C: number of total bandwidth connections

System Evaluation – Performance Metrics
- Resource utilization: the ratio of used connections to all connections.
- Continuity index: the number of segments that arrive before their playback deadlines over the total number of segments; represents playback continuity.
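The two metrics written out as small helpers; how arrivals and deadlines are recorded is an assumption of this sketch:

```python
def resource_utilization(used_connections, total_connections):
    """Ratio of used connections to all available connections."""
    return used_connections / total_connections

def continuity_index(arrival_times, deadlines):
    """Fraction of segments arriving before their playback deadlines.

    arrival_times : {segment id: arrival time, or None if never received}
    deadlines     : {segment id: playback deadline}
    """
    on_time = sum(1 for seg, t in arrival_times.items()
                  if t is not None and t <= deadlines[seg])
    return on_time / len(deadlines)

print(continuity_index({1: 0.5, 2: 2.0, 3: None}, {1: 1.0, 2: 1.5, 3: 2.0}))  # 0.333...
```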

System Evaluation
Continuity index vs. streaming rate when N = 400, S = 12, and the initial buffer size is 40 seconds.

System Evaluation
Resource utilization: overlay size vs. the number of streaming overlays when M = 12 and r = 300 Kbps.

System Evaluation
Continuity index under dynamic environments when M = 5, N = 400, r = 300 Kbps, and the initial buffer size is 40 seconds.

System Evaluation
Resource utilization under dynamic environments when M = 5, N = 400, and r = 300 Kbps.

Conclusion
- Efficient and scalable live-streaming overlay construction has become a hot topic recently.
- Important metrics to consider in any live streaming application are:
  - startup delay
  - source-to-end delay
  - playback continuity

Conclusion (contd.)
- The previously studied intra-overlay optimization techniques have drawbacks:
  - low resource utilization
  - high startup delay
  - high source-to-end delay
  - inefficient resource assignment in global P2P networks

Conclusion (contd.)
- AnySee uses an inter-overlay optimization technique.
- AnySee peers can construct efficient paths using peers in different overlays, rather than only selecting better paths within the same overlay.
- Experimental results show that AnySee outperforms existing intra-overlay live streaming schemes such as CoolStreaming.

CDN vs P2P
CDN:
- Excellent quality for end users when the workload is within provisioning limits.
- Constrained by the specifics of the existing servers and bandwidth.
- Not highly scalable.
- Higher operation costs.
P2P:
- High scalability.
- Low maintenance cost.
- Low stream quality with undesirable disruptions.
- Unfairness in the face of heterogeneous peer resources.

LiveSky
- A hybrid CDN-P2P system for live video streaming.
- Incorporates the best of both technologies; each mutually offsets the other's deficiencies.

LiveSky – System Architecture

References
- D. A. Tran, K. A. Hua, and T. T. Do, "ZIGZAG: An Efficient Peer-to-Peer Scheme for Media Streaming", IEEE INFOCOM 2003.
- X. Zhang, J. Liu, B. Li, and T.-S. P. Yum, "CoolStreaming/DONet: A Data-Driven Overlay Network for Peer-to-Peer Live Media Streaming", IEEE INFOCOM 2005.
- X. Liao, H. Jin, Y. Liu, L. M. Ni, and D. Deng, "AnySee: Peer-to-Peer Live Streaming", IEEE INFOCOM 2006.
- H. Yin et al., "Design and Deployment of a Hybrid CDN-P2P System for Live Video Streaming: Experiences with LiveSky", ACM Multimedia 2009.
- D. A. Tran, K. A. Hua, and T. T. Do, "A Peer-to-Peer Architecture for Media Streaming", IEEE Journal on Selected Areas in Communications, 2004.

Thank You