
1 PPCast: A Peer-to-Peer Based Video Broadcast Solution. Presented by Shi Lu, Feb. 28, 2006

2 Outline: Introduction (Motivation, Related work, Challenges, Our contributions); The p2p streaming framework (Overview, Peer control overlay, Data transfer protocol, Peer local optimization, Topology optimization); The streaming framework: design and implementation (Receiving peer software components, The look-up service, Deployment); Experiments (Empirical experience in CUHK, Experiments on PlanetLab); Conclusion

3 Motivation: Video over the Internet is pervasive today. New challenge: on-line TV broadcasting fails on a traditional client-server architecture • at 500 Kbps per stream, the theoretical limit for a 100 Mbps server is 200 concurrent users. Is there an efficient way to support video broadcasting to a large group of network users?
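That server limit is just the server's outbound capacity divided by the per-viewer stream rate:

\max\ \text{concurrent users} = \frac{100\ \text{Mbps}}{500\ \text{Kbps}} = \frac{100{,}000\ \text{Kbps}}{500\ \text{Kbps}} = 200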

4 Why Client/Server Fails: The traditional client/server solution is not scalable. Three bottlenecks: • Server load: the server's bandwidth is the major bottleneck • Edge capacity: one connection per client, and that connection may degrade • End-to-end bandwidth. Can this scale?

5 Related Work: Peer-to-peer file sharing systems • BitTorrent, eMule, DC++ • Peers collaborate with each other • Little or no need for dedicated resources. Why are they not suitable for video broadcasting? • They have no in-bound rate requirement • They have no real-time requirement

6 Related Work: Content Distribution Networks (CDNs) • Install many dedicated servers at the edge of the Internet • Requests are directed to the best server • Very high cost of purchasing servers. Tree-based overlays • CoopNet, NICE • Rigid structure, not robust to node failures and network condition changes. Other mesh-based systems • CoolStreaming, PPLive, PPStream

7 Challenges: Bandwidth • In-bound data bandwidth must be no less than the video rate • In-bound bandwidth should not fluctuate widely. Network dynamics • Network bandwidth and latency may change • Peer nodes may join and leave at any time • Peer nodes may fail or shut down. Real-time requirement • Every media packet must be fetched before its playback deadline

8 Goals: For each peer: • Provide satisfactory in-bound bandwidth • Assign its traffic in a balanced and fair manner. For the whole network: • Keep it in one piece while peers fail or leave • Keep the shape of the overlay from degrading • Keep the radius small

9 Outline: Introduction (Motivation, Related work, Challenges, Our contributions); The p2p streaming framework (Overview, Peer control overlay, Data transfer protocol, Peer local optimization, Topology optimization); The streaming framework: design and implementation (Source peer, Receiving peer software components, The look-up service, Deployment); Experiments (Empirical experience in CUHK, Experiments on PlanetLab); Conclusion

10 P2P-based Solution: Overview • Collaboration between client peers • The source server's load is alleviated • Better scalability • Better reliability

11 Infrastructure: Video source • Windows Media Encoder, RealProducer, etc. • Source peer: takes content from the video streaming server and feeds it to the p2p network; wraps packets and adds sequence numbers. Look-up service (tracker) • Tracks the peers viewing each channel • Helps new peers join the broadcast. Participant peers (organized into a random graph) • Schedule and dispatch the video packets • Adapt to network conditions • Optimize performance • Feed the local video player
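As a minimal sketch of the source peer's wrapping step (the 16-byte header layout and all names here are illustrative assumptions, not the actual PPCast wire format), each media block is prefixed with a sequence number before it enters the p2p network:

```java
import java.nio.ByteBuffer;

// Hypothetical wire format: 8-byte sequence number + 8-byte timestamp + payload.
public final class SourcePacketizer {
    private long nextSeq = 0;

    // Wrap one raw media block from the encoder into a p2p data packet.
    public byte[] wrap(byte[] mediaBlock) {
        ByteBuffer buf = ByteBuffer.allocate(16 + mediaBlock.length);
        buf.putLong(nextSeq++);                  // sequence number for scheduling
        buf.putLong(System.currentTimeMillis()); // timestamp for deadline estimation
        buf.put(mediaBlock);                     // opaque media payload
        return buf.array();
    }
}
```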

12 Peer Join: • Obtain a peer list • Establish connections, with neighborhood selection based on random IP matching, history information, peer depth, and peer performance • Register its own service

13 The Connection Pool: "No one is reliable" • A peer tries its best for better performance • A peer maintains a pool of active connections to other peers • A peer keeps trying new peers while its target incoming bandwidth is not reached • After the target is reached, it keeps probing new connections, but at a slower pace • It updates its peer list • Other peers may also establish new connections to it

14 Connection Pool Maintenance: For each connection, define a connection utility based on • Recent bandwidth (in/out) • Recent latency • Peer depth (distance to the source) • The peer's recent progress. When there are more connections than allowed (or the out-rate bound is exceeded), drop several bad connections; one plausible scoring rule is sketched below.
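A minimal sketch of such a utility score and pruning rule, assuming a weighted linear combination (the weights and the exact formula are assumptions; the slides only list the inputs):

```java
import java.util.Comparator;
import java.util.List;

// One entry in the connection pool, with the measurements the slides list.
record PeerConn(String peerId, double recentKbps, double recentLatencyMs,
                int depth, double recentProgress) {

    // Higher is better: reward bandwidth and progress, penalize latency and depth.
    double utility() {
        return recentKbps + 50.0 * recentProgress
             - 0.5 * recentLatencyMs - 20.0 * depth;
    }
}

final class PoolPruner {
    // Drop the worst connections until the pool is back under maxConns.
    static void prune(List<PeerConn> pool, int maxConns) {
        pool.sort(Comparator.comparingDouble(PeerConn::utility).reversed());
        while (pool.size() > maxConns) {
            pool.remove(pool.size() - 1); // lowest-utility connection
        }
    }
}
```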

15 The Peer Control Overlay: Random-graph shape • Evolves over time • Radius: will not degrade, since every peer tries to minimize its own depth • Integrity: will not be broken into pieces. Data transfer • The data transfer path is derived from the control overlay • Each data packet is transferred along a tree • The tree is determined just-in-time from the control overlay

16 Data Transfer Protocol: Receiver-driven • While exchanging data, peers also exchange data availability information • The receiver decides which block to fetch from which neighbor. Driven by data distribution information • A peer knows where its missing data is • A peer issues data requests to its neighbors. Peer synchronization • Keeps content fetching progress in step

17 Data Transfer Protocol: the Data Status Bitmap (DSB) • Describes data packet availability (1 = available, 0 = absent) • Foreign data status: each peer holds a connection to each of its neighbors; the neighbors' DSBs are embedded in the connection state and are frequently updated. A sketch of such a bitmap follows.
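A minimal sketch of a DSB over a sliding window of sequence numbers, built on java.util.BitSet (the windowing scheme is an assumption; the slides only define the 1/0 semantics):

```java
import java.util.BitSet;

// Availability bitmap over a sliding window [windowStart, windowStart + size).
final class DataStatusBitmap {
    private final BitSet bits;
    private final int size;
    private long windowStart = 0;

    DataStatusBitmap(int size) { this.bits = new BitSet(size); this.size = size; }

    void markAvailable(long seq) {                    // set bit to 1 = available
        if (seq >= windowStart && seq < windowStart + size)
            bits.set((int) (seq - windowStart));
    }

    boolean has(long seq) {                           // 0 = absent / out of window
        return seq >= windowStart && seq < windowStart + size
            && bits.get((int) (seq - windowStart));
    }

    // Slide the window forward as old packets pass their playback deadline.
    void advanceTo(long newStart) {
        int shift = (int) Math.min(newStart - windowStart, size);
        if (shift <= 0) return;
        BitSet kept = bits.get(shift, size);          // re-indexed survivors
        bits.clear();
        bits.or(kept);
        windowStart = newStart;
    }
}
```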

18 Data Transfer Protocol: data transfer load scheduling. Several factors: • Latency • Bandwidth • Data availability. Connection status (busy, standby) • Data request issued (-> busy) • Data arrived (-> standby). Scheduling happens on data arrival: • Pick a recently available packet • Estimate the latency • Request it only if its playback deadline leaves room for the expected latency

19 Data Transfer Protocol: packet local status • Critical packet: a data packet that is still unavailable and whose remaining time before its playback deadline is less than a threshold t. When the in-bound bandwidth is OK • Check for critical packets • Issue their requests over the fastest link. When the in-bound bandwidth is not OK • Do not check for critical packets

20 Data Transfer Protocol: connection status transitions • Data request issued: standby -> busy • Data arrived: busy -> standby. When data arrives: • Measure the latency of the previous packet • Compute a weighted delay estimate • Find a packet whose playback deadline is OK for that delay, preferring critical packets • Issue the request (-> busy). This loop is sketched below.
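A minimal sketch of the on-arrival scheduling step under these rules. The EWMA smoothing factor, the critical threshold, and the Neighbor/Playback interfaces are illustrative assumptions:

```java
import java.util.function.LongPredicate;

// Hypothetical interfaces for the pieces the scheduler touches.
interface Neighbor {
    boolean has(long seq);    // neighbor's DSB reports this packet as available
    void request(long seq);   // issue a data request (standby -> busy)
}

interface Playback {
    // Return some missing sequence number whose absolute playback deadline
    // (in ms) satisfies the predicate, or null if none exists.
    Long findMissing(LongPredicate deadlineTest);
}

final class RequestScheduler {
    private static final double ALPHA = 0.3;        // assumed EWMA weight
    private static final long CRITICAL_MS = 2_000;  // assumed threshold t

    private double smoothedDelayMs = 500;           // weighted delay estimate

    // Called when a requested packet arrives (connection goes busy -> standby).
    void onDataArrival(long measuredLatencyMs, Neighbor conn, Playback playback) {
        smoothedDelayMs = ALPHA * measuredLatencyMs + (1 - ALPHA) * smoothedDelayMs;
        long now = System.currentTimeMillis();

        // Prefer critical packets: still missing and close to their deadline.
        Long seq = playback.findMissing(d -> d - now < CRITICAL_MS);
        if (seq == null) {
            // Otherwise any missing packet whose deadline tolerates the delay.
            seq = playback.findMissing(d -> d - now > smoothedDelayMs);
        }
        if (seq != null && conn.has(seq)) {
            conn.request(seq);                      // standby -> busy
        }
    }
}
```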

21 Data Transfer Protocol: multicast trees • A data packet never passes through the same peer twice • For each data packet, a multicast tree is in effect constructed • The tree is built just-in-time, to adapt to the transient properties of the network links • Different data packets may travel along different trees

22 Neighborhood Management: Performance monitoring • Measured once per interval (e.g. every 10 sec) • Avg. in-bound data rate • Avg. out-bound data rate • Avg. data packet latency. These feed a neighborhood goodness measure. The neighborhood size Nb is kept between • a lower bound on Nb • an upper bound on Nb

23 Peer Control Protocol: driven by data and performance. While the in-bound data rate is not sufficient, or the neighbor count is below the lower bound • Establish new connections. While the neighbor count is above the upper bound • Discard the worst connection • The benefit is two-fold: both sides release a bad connection. This maintenance loop is sketched below.
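A minimal sketch of that maintenance rule, reusing the PeerConn/PoolPruner sketch from slide 14 (the bounds and target rate are illustrative assumptions):

```java
import java.util.List;

// One round of the peer-control maintenance loop (run periodically).
final class NeighborhoodMaintainer {
    static final int NB_LOWER = 4, NB_UPPER = 12;  // assumed bounds on Nb
    static final double TARGET_KBPS = 450;         // the stream's data rate

    static void maintain(List<PeerConn> pool, double inboundKbps,
                         Runnable connectToNewPeer) {
        // Grow: not enough in-bound rate, or too few neighbors.
        if (inboundKbps < TARGET_KBPS || pool.size() < NB_LOWER) {
            connectToNewPeer.run();                // e.g. pick from the tracker list
        }
        // Shrink: too many neighbors; drop the worst so both sides benefit.
        if (pool.size() > NB_UPPER) {
            PoolPruner.prune(pool, NB_UPPER);
        }
    }
}
```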

24 Local Media Player Support: Playback starts when the incoming rate is satisfactory and enough data is buffered (governed by the media buffer size). When a packet is not ready in time • Playback breaks briefly and continues • Some quality is lost • How gracefully this degrades is up to the video codec. Data smoothness • For one peer, over an interval • The ratio of packets that are fetched before their deadlines

25 Overlay Integrity: Peers may leave or fail • Normal leave: notify neighbors and the look-up service • Abnormal leave: others detect it via time-out. If one neighbor quits, a peer can still get content from the other connections in its pool • It establishes new connections if the incoming bandwidth falls below expectation • The media buffer provides slack in the meantime

26 Overlay Integrity: Peers may leave or fail, but the overlay shall not be broken into pieces; its connectivity must be maintained. Solution: peers try to connect to neighbors with lower depth, where depth is the distance to the source peer (which has depth 0):

depth_i = \min_{j \in \text{neighbor}(i)} depth_j + 1

Since each peer tries to lower its depth, the probability of (articulation) critical points occurring becomes small.
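As a sketch, the local depth-update rule is a one-liner over the neighbor pool (class and method names are assumptions):

```java
import java.util.List;

final class DepthRule {
    static final int SOURCE_DEPTH = 0;  // the source peer anchors the recursion

    // depth_i = (min over neighbors j of depth_j) + 1
    static int updateDepth(List<Integer> neighborDepths) {
        return neighborDepths.stream()
                .mapToInt(Integer::intValue)
                .min()
                .orElse(Integer.MAX_VALUE - 1)  // no neighbors: effectively infinite
                + 1;
    }
}
```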

27 Overlay Integrity: Playback progress difference • A higher depth difference implies a higher playback progress difference • The attempt to reduce local peer depth therefore also reduces that progress difference

28 The Behind-NAT Problem: Some peers run behind NAT (Network Address Translation), so other peers may not be able to actively connect to them. Port mapping works but must be set up by the router admin. Behind-NAT peers therefore need some restrictions: • Can only connect to peers whose depth is higher • Must actively connect to other peers • Should actively help others once their own performance is OK. Open question: direct communication between two behind-NAT peers?

29 Outline: Introduction (Motivation, Related work, Challenges, Our contributions); The p2p streaming framework (Overview, Peer control overlay, Data transfer protocol, Peer local optimization, Topology optimization); The streaming framework: design and implementation (Receiving peer software components, The look-up service, Deployment); Experiments (Empirical experience in CUHK, Experiments on PlanetLab); Conclusion

30 Receiving peer software components

31 The Look-up Service: A centralized server that • Registers new video channels • Registers new peers • Receives peer reports from time to time • Provides a peer list to newly joining peers
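A minimal sketch of the tracker's in-memory state, assuming a map from channel id to its current viewers (all names and the peer-list sampling policy are assumptions):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// In-memory tracker state: channel id -> set of peer addresses ("host:port").
final class LookupService {
    private final Map<String, Set<String>> channels = new ConcurrentHashMap<>();

    void registerChannel(String channelId) {
        channels.putIfAbsent(channelId, ConcurrentHashMap.newKeySet());
    }

    void registerPeer(String channelId, String peerAddr) {
        channels.computeIfAbsent(channelId, k -> ConcurrentHashMap.newKeySet())
                .add(peerAddr);
    }

    void removePeer(String channelId, String peerAddr) {  // on leave or time-out
        Set<String> peers = channels.get(channelId);
        if (peers != null) peers.remove(peerAddr);
    }

    // Hand a bounded random sample of current viewers to a joining peer.
    List<String> peerListFor(String channelId, int maxPeers) {
        List<String> all =
                new ArrayList<>(channels.getOrDefault(channelId, Set.of()));
        Collections.shuffle(all);
        return all.subList(0, Math.min(maxPeers, all.size()));
    }
}
```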

32 Deployment: LAN deployment • Pure Java-based software • Video broadcast source (Windows Media Encoder) • Client software on each participant machine • Look-up service • Source peer. Experiments • Successfully deployed in the CSE department LAN • PlanetLab experiments

33 Benefits: Low cost • No extra hardware expenditure • A pure software solution. Scalable: can in theory support an unbounded number of users. Reliable and resilient to user join/leave • Multiple connections • Intelligent content scheduling between peers • Quick adaptation to changing network conditions. In short, a solution for low-cost, large-scale live broadcast

34 Limitations: Upstream capability • Peers need some upstream capacity to support each other • ADSL is a problem (limited upload power) • LAN users are fine • Mitigation: a higher ratio of LAN users. Backbone stress • The traffic may stress backbone links • Example: with the source in China and 10,000 peers in the US, the backbone would need to carry 10,000 China-to-US connections • Mitigation: traffic localization, i.e. one stream from China to the US, with US peers supporting each other

35 Outline: Introduction (Motivation, Related work, Challenges, Our contributions); The p2p streaming framework (Overview, Peer control overlay, Data transfer protocol, Peer local optimization, Topology optimization); The streaming framework: design and implementation (Receiving peer software components, The look-up service, Deployment); Experiments (Empirical experience in CUHK, Experiments on PlanetLab); Conclusion

36 Experiments: PlanetLab • 300+ nodes deployed worldwide. Performance tests • 450 Kbps streaming • Data packets: 32 KB each • Static performance: start-up latency, data smoothness • Dynamic experiments: peer sojourn time is exponentially distributed (all peers are unstable) • Overlay sizes: 50, 80, 120, 150, 180, ...

37 Experiments: Data smoothness • The ratio of data packets that are fetched before their playback deadlines • Determines the quality of the video playing on each client peer. Influencing factors • Peer buffer size • Peer stability
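Written out, the data smoothness of a peer over a measurement interval is:

\text{smoothness} = \frac{\#\{\text{packets fetched before their playback deadline}\}}{\#\{\text{packets due in the interval}\}}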

38 Static performance: the impact of overlay size and peer buffer size

39 Dynamic performance: the impact of peer stability. Peer sojourn times are exponentially distributed; the larger the mean sojourn time, the more stable the peers are.

40 Observations: • More users, better performance • Resilient in dynamic network environments, robust to peer join/leave • More stable peers, better performance • A bigger buffer improves performance

41 Conclusion: In this presentation, we have • Introduced the challenges in p2p streaming systems • Defined several goals that a p2p system must meet to support real-time video broadcast • Proposed a solution for a large-scale p2p video streaming service • Discussed the design, implementation, and experimental evaluation of the system. Future work • Distributed look-up service • Content copyright protection (identity authentication, data encryption) • Overlay topology control (traffic localization) • A possible VOD system based on this infrastructure • Dealing with firewalls

42 Q & A. Thank you!

