An Empirical Study of Flash Crowd Dynamics in a P2P-based Live Video Streaming System
Bo Li, Gabriel Y. Keung, Susu Xie, Fangming Liu, Ye Sun, and Hao Yin


An Empirical Study of Flash Crowd Dynamics in a P2P-based Live Video Streaming System
Bo Li, Gabriel Y. Keung, Susu Xie, Fangming Liu, Ye Sun, and Hao Yin
Hong Kong University of Science & Technology
Dec 2, 2008, IEEE GLOBECOM, New Orleans

Overview: Internet Video Streaming
Enables video distribution from any place to anywhere in the world, in any format.

Cont.
Recently, significant deployment of Peer-to-Peer (P2P) technology for Internet live video streaming:
 Protocol designs: Overcast, CoopNet, SplitStream, Bullet, etc.
 Real deployments: ESM, CoolStreaming, PPLive, etc.
 Key advantages:
  - Requires minimal support from the infrastructure
  - Greater demand also generates more resources: each peer not only downloads the video content but also uploads it to other participants
  - Easy to deploy
  - Good scalability

Challenges
 Real-time constraints: requiring timely and sustained streaming delivery to all participating peers
 Performance-demanding: involving bandwidth requirements of hundreds of kilobits per second, and even more for higher-quality video
 Large-scale and extreme peer dynamics: tens of thousands of users simultaneously participating in the streaming, with peers joining and leaving at will, especially under flash crowd

Motivation
Flash crowd
 A large increase in the number of users joining the streaming in a short period of time (e.g., during the initial few minutes of a live broadcast program)
 Difficult to quickly accommodate new peers within a stringent time constraint without significantly impacting the video streaming quality of existing and newly arrived peers
 Different from file sharing
Challenge: large-scale & extreme peer dynamics
 Current P2P live streaming systems still suffer from potentially long startup delays & unstable streaming quality
 Especially under realistic, challenging scenarios such as flash crowd

Focus
 Little prior study on the detailed dynamics of P2P live streaming systems during flash crowd and its impacts
  - E.g., Hei et al.'s measurement of PPLive examined the dynamics of the user population during the annual Spring Festival Gala on Chinese New Year
 How to capture the various effects of flash crowd in P2P live streaming systems?
 What are the impacts of flash crowd on user experience & behavior and on system scale? What are the rationales behind them?

Outline
 System Architecture
 Measurement Methodology
 Important Results
  - Short Sessions under Flash Crowd
  - User Retry Behavior under Flash Crowd
  - System Scalability under Flash Crowd
 Summary

Some Facts about the CoolStreaming System
CoolStreaming = Cooperative Overlay Streaming
 First released in 2004
 Roxbeam Inc. received a USD 30M investment; currently delivered through Yahoo BB, the largest video streaming portal in Japan

Downloads: 2,000,000
Average online users: 20,000
Peak-time online users: 150,000
Google entries (keyword: Coolstreaming): 400,000

CoolStreaming System Architecture
(Diagram: Stream Manager, Partner Manager, Member Manager; buffer maps (BM) and segments exchanged between peers)
 Membership manager: maintains a partial view of the overlay via gossip
 Partnership manager: establishes & maintains TCP connections (partnerships) with other nodes; exchanges data availability via Buffer Maps (BM)
 Stream manager: provides stream data to the local player; decides where and how to retrieve stream data; hybrid push & pull

Mesh-based (Data-driven) Approaches
 No explicit structures are constructed or maintained (e.g., Coolstreaming, PPLive)
 Data flow is guided by the availability of data:
  - The video stream is divided into segments of uniform length; the availability of segments in a peer's buffer is represented by a buffer map (BM)
  - Peers periodically exchange data-availability information with a set of partners (a partial view of the overlay) and retrieve currently unavailable data from each other
  - A segment scheduling algorithm determines which segments are to be fetched from which partners (see the sketch below)
 Overhead & delay: peers need to explore content availability with one another, usually via a gossip protocol
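The segment scheduler named above is not detailed on this slide. Below is a minimal, hypothetical sketch of a rarest-first pull scheduler over buffer maps; all names are invented for illustration, and the actual DONet/CoolStreaming heuristic also weighs partner bandwidth and playback deadlines.

```python
# Hypothetical sketch of a pull-based segment scheduler over buffer maps.
def schedule(missing, partner_bms):
    """Assign each missing segment to a partner, rarest-first.

    missing     -- set of segment ids this peer still needs
    partner_bms -- dict: partner id -> set of segment ids it holds
    """
    # Count how many partners can supply each missing segment.
    suppliers = {
        seg: [p for p, bm in partner_bms.items() if seg in bm]
        for seg in missing
    }
    plan = {}
    # Fetch the rarest segments first: segments with few suppliers are the
    # most likely to become unavailable if those suppliers leave.
    for seg in sorted(missing, key=lambda s: len(suppliers[s])):
        if suppliers[seg]:                 # skip segments nobody can supply
            plan[seg] = suppliers[seg][0]  # naive pick; real schedulers
                                           # prefer high-bandwidth partners
    return plan

print(schedule({1, 2, 3}, {"p1": {1, 2}, "p2": {2, 3}}))
```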

Measurement Methodology
Each user periodically reports its activities & internal status to the log server
 3 types of status report:
  - QoS report: % of video data missing the playback deadline
  - Traffic report
  - Partner report
 4 events in each session:
  - Join event
  - Start-subscription event
  - Media-player-ready event: the peer has received sufficient data to start playing
  - Leave event
 Reports are sent over HTTP, with the peer log compacted into the parameter part of the URL string (see the sketch below)
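As a rough illustration of the reporting path just described (peer status compacted into the URL and sent over HTTP), here is a hedged sketch; the endpoint and field names are assumptions, since the actual CoolStreaming log format is not given here.

```python
# Hypothetical sketch of a peer status report encoded as URL parameters.
from urllib.parse import urlencode

def report_url(server, peer_id, event, missed_ratio, partner_count):
    params = {
        "id": peer_id,                 # peer identifier
        "ev": event,                   # join / subscribe / ready / leave
        "qos": f"{missed_ratio:.3f}",  # fraction of data missing its deadline
        "pn": partner_count,           # partner count (partner report)
    }
    return f"http://{server}/log?{urlencode(params)}"

print(report_url("logserver.example.com", "peer42", "ready", 0.012, 4))
```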

Log & Data Collection
Real-world traces obtained from a live event broadcast on Yahoo! Japan using the CoolStreaming system:
 A sports channel on Sept. 27, 2006 (24 hours)
 Live baseball game broadcast at 18:00
 Stream bit-rate of 768 Kbps
 24 dedicated servers with 100 Mbps connections

How to capture flash crowd effects?
Two key measures:
 Short session distribution
  - Counts sessions that either fail to start viewing a program or whose service is disrupted during flash crowd
  - Session duration is the time interval between a user joining and leaving the system
 User retry behavior
  - To cope with the service disruptions often observed during flash crowd, each peer can re-connect (retry) to the program

Short Sessions under Flash Crowd
 Filter out normal sessions (i.e., users who successfully join the program)
 Focus on short sessions with duration <= 120 sec and <= 240 sec
 The number of short sessions increases significantly at around 18:00, when the flash crowd occurs with a large number of peers joining the live broadcast program (a simple counting sketch follows)
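A minimal sketch of how the short-session count per time bin could be computed from such traces, assuming each session is recorded as a (join_ts, leave_ts) pair in seconds; the one-minute binning is an arbitrary choice.

```python
# Count short sessions (duration <= threshold) per minute of join time.
from collections import Counter

def short_sessions_per_minute(sessions, threshold=120):
    counts = Counter()
    for join_ts, leave_ts in sessions:
        if leave_ts - join_ts <= threshold:   # a "short" session
            counts[int(join_ts // 60)] += 1   # bucket by join minute
    return counts

toy_trace = [(0, 50), (10, 400), (70, 130)]   # seconds
print(short_sessions_per_minute(toy_trace))   # Counter({0: 1, 1: 1})
```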

Strong Correlation Between the Number of Short Sessions and Peer Joining Rate

What are the rationales behind these observations?
Relevant factors:
 User client connection faults
 Insufficient uploading capacity from at least one of the parents
 Poor sustainable bandwidth at the beginning of the stream subscription
 Long waiting time (timeout) for accumulating sufficient video content in the playback buffer
Newly arriving peers do not yet have adequate content to share with others; initially they can only consume the uploading capacity of existing peers.
With only partial knowledge (gossip), the delay in gathering enough upload bandwidth among peers and the heavy resource competition could be the fundamental bottleneck.

Approximate User Impatient Time
 In the face of poor playback continuity, users either reconnect or opt to leave
 Compare the total downloaded bytes of a session with the expected total playback bytes for the session duration
 Extract sessions with insufficient downloaded bytes (see the sketch below)
 The average user impatient time is between 60 s and 120 s
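A small sketch of that comparison, flagging sessions whose total download falls well short of what continuous playback at the 768 Kbps stream rate would require over the session duration; the 0.5 starvation factor is an assumption, not a value from the measurement.

```python
# Flag sessions with insufficient downloaded bytes for smooth playback.
def is_starved(downloaded_bytes, duration_s, bitrate_bps=768_000, factor=0.5):
    expected = bitrate_bps / 8 * duration_s   # bytes needed for full playback
    return downloaded_bytes < factor * expected

# A 90 s session that downloaded only 2 MB of a 768 Kbps stream:
print(is_starved(2_000_000, 90))   # True -> likely an impatient leave
```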

User Retry Behavior under Flash Crowd
 Retry rate: the number of peers that opt to re-join the overlay with the same IP address and port, per unit time (a counting sketch follows)
 Users may have to try many times before successfully starting a video session
 Again shows that flash crowd has a significant impact on the initial joining phase
 User perspective: playback can be restored
 System perspective: retries amplify the join rate
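The retry-rate measure can be sketched directly, assuming join events arrive as (timestamp, ip, port) tuples; a join from an endpoint that has already been seen is counted as a retry.

```python
# Count re-joins from the same (IP, port) endpoint per time bin.
from collections import Counter

def retry_rate(join_events, bin_s=60):
    seen = set()
    retries = Counter()
    for ts, ip, port in sorted(join_events):
        if (ip, port) in seen:              # this endpoint joined before
            retries[int(ts // bin_s)] += 1  # count it as a retry
        seen.add((ip, port))
    return retries

events = [(5, "1.2.3.4", 5000), (80, "1.2.3.4", 5000), (90, "5.6.7.8", 6000)]
print(retry_rate(events))                   # Counter({1: 1})
```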

System Scalability under Flash Crowd
 Two rates compared: successfully joined vs. media player ready (i.e., received sufficient data to start playing)
 The gap between them illustrates the "catch-up process"
 The media-player-ready rate picks up when the flash crowd occurs and increases steadily; however, the ratio between the two rates stays <= 0.67
 This implies that the system can accommodate a sudden surge of user arrivals (flash crowd), but only up to some maximum limit

Media Player Ready Time under Different Time Periods
Considerably longer during periods when the peer join rate is higher

Scale-Time Relationship
System perspective:
 Though newly arriving peers may bring enough aggregate resources, those resources cannot be utilized immediately
 It takes time for the system to exploit them, i.e., newly arriving peers (with a partial view of the overlay) need to find & consume existing resources to obtain adequate content for startup before they can contribute to others
User perspective:
 Causes long startup delays & disrupted streaming (hence short sessions, retries, and impatience)
Future work: how does system scale relate to the amount of initial buffering? (a longer initial buffer means a longer startup delay; a shorter one hurts continuity)

Summary
Based on real-world measurements, we capture flash crowd effects:
 The system can scale up to a limit during the flash crowd
 Strong correlation between the number of short sessions and the joining rate
 User behavior during flash crowd is best captured by the number of short sessions, retries, and the impatient time
 Relevant rationales behind these findings

Future work
 Modeling to quantify and analyze flash crowd effects
 Correlation among initial system capacity, user joining rate/startup delay, and system scale?
  - Intuitively, a larger initial system size can tolerate a higher joining rate
  - Challenge: how to formulate the factors and performance gaps that stem from partial knowledge (gossip)?

Based on the above study, and perhaps more importantly for practical systems: how can servers help alleviate the flash crowd problem, i.e., shorten users' startup delays and boost system scaling?
 Commercial systems have utilized self-deployed servers or CDNs:
  - CoolStreaming on Yahoo! Japan used 24 servers in different regions, allowing users to join a program within seconds
  - PPLive utilizes CDN services
 On the measurement side: examine what real-world systems do and experience
 On the technical side: derive the relationship between the expected number of viewers and the amount of server provisioning, along with their joining behaviors; further, how should servers be geographically distributed?

References "Inside the New Coolstreaming: Principles, Measurements and Performance Implications,"  B. Li, S. Xie, Y. Qu, Y. Keung, C. Lin, J. Liu, and X. Zhang,  in Proc. of IEEE INFOCOM, Apr "Coolstreaming: Design, Theory and Practice,"  Susu Xie, Bo Li, Gabriel Y. Keung, and Xinyan Zhang,  in IEEE Transactions on Multimedia, 9(8): , December 2007 "An Empirical Study of the Coolstreaming+ System,"  Bo Li, Susu Xie, Gabriel Y. Keung, Jiangchuan Liu, Ion Stoica, Hui Zhang, and Xinyan Zhang,  in IEEE Journal on Selected Areas in Communications, 25(9):1-13, December 2007

Q&A Thanks !

Additional Info & Results

Comparison with the First Release
 The initial system adopted a simple pull-based scheme:
  - Content availability information exchanged via buffer maps
  - Per-block overhead
  - Longer delay in retrieving the video content
 The new system implements a hybrid pull-and-push mechanism:
  - Blocks are pushed by a parent node to a child node, except for the first block
  - Lower overhead associated with each video block transmission
  - Reduces the initial delay and increases video playback quality
 A multiple sub-stream scheme is implemented, enabling multi-source and multi-path delivery of video streams
 The gossip protocol was enhanced to handle the push function
 Buffer management and scheduling schemes were re-designed to handle the dissemination of multiple sub-streams

Gossip-based Dissemination
 Gossip protocol (as used in BitTorrent):
  - Iteration: nodes send messages to random sets of nodes; each node does likewise in every round; messages gradually flood the whole overlay
  - Pros: simple, robust to random failures, decentralized
  - Cons: latency trade-off
 In Coolstreaming, gossip carries updated membership content and supports multiple sub-streams (a toy round is sketched below)
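A toy sketch of the iteration described above: in each round, every informed node forwards to a random subset (the fanout) of the peers it knows. Purely illustrative; the membership protocol actually used by CoolStreaming differs in detail.

```python
# One gossip round: informed nodes contact `fanout` random neighbors.
import random

def gossip_round(informed, membership, fanout=3):
    newly = set()
    for node in informed:
        neighbors = membership[node]
        targets = random.sample(neighbors, min(fanout, len(neighbors)))
        newly.update(targets)
    return informed | newly

# Toy overlay of 8 fully connected nodes; node 0 starts with the message.
members = {n: [m for m in range(8) if m != n] for n in range(8)}
state = {0}
rounds = 0
while len(state) < 8:          # messages gradually flood the whole overlay
    state = gossip_round(state, members)
    rounds += 1
print(f"everyone informed after {rounds} round(s)")
```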

Multiple Sub-streams
 The video stream is divided into blocks; each block is assigned a sequence number
 (Figure: an example of stream decomposition; see the sketch below)
 Adopts the gossip concept from P2P file-sharing applications
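The decomposition example suggests a round-robin mapping of sequence-numbered blocks to sub-streams, which the following sketch assumes (block i goes to sub-stream i mod K); the exact assignment rule is not spelled out on the slide.

```python
# Assumed round-robin decomposition of a block stream into K sub-streams.
K = 2                                      # number of sub-streams

def substream_of(seq_no, k=K):
    return seq_no % k

blocks = range(8)
for s in range(K):
    print(f"sub-stream {s}:", [b for b in blocks if substream_of(b) == s])
# sub-stream 0: [0, 2, 4, 6]
# sub-stream 1: [1, 3, 5, 7]
```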

Buffering
 Synchronization buffer: each received block is first put into the synchronization buffer of its corresponding sub-stream; blocks with continuous sequence numbers are then combined
 Cache buffer: combined blocks are stored in the cache buffer (a sketch of the two stages follows)
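A compact sketch of the two stages just described: per-sub-stream synchronization buffers absorb out-of-order arrivals, and blocks move into the cache buffer once their sequence numbers are continuous. The data structures and the round-robin block-to-sub-stream mapping are assumptions.

```python
# Two-stage buffering: syn buffers (per sub-stream) feed a combined cache.
import heapq

class Buffering:
    def __init__(self, n_substreams=2):
        self.syn = [[] for _ in range(n_substreams)]  # one min-heap per sub-stream
        self.cache = []                               # combined, in-order blocks
        self.next_seq = 0                             # next block to combine

    def receive(self, seq_no):
        heapq.heappush(self.syn[seq_no % len(self.syn)], seq_no)
        self._combine()

    def _combine(self):
        # Move blocks to the cache while the next expected sequence number
        # is available at the head of its sub-stream's syn buffer.
        while True:
            heap = self.syn[self.next_seq % len(self.syn)]
            if not heap or heap[0] != self.next_seq:
                return
            self.cache.append(heapq.heappop(heap))
            self.next_seq += 1

buf = Buffering()
for seq in [1, 0, 3, 2]:                   # blocks arrive out of order
    buf.receive(seq)
print(buf.cache)                           # [0, 1, 2, 3]
```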

Comparison with the First Release (II)

Comparison with the First Release (III)

Parent-Children and Partnership
 Partners are connected via TCP connections
 Parents deliver video streams to their children over TCP

System Dynamics

Peer Join and Adaptation (example)
 Stream bit-rate normalized to one; two sub-streams
 The weight of a node is its outgoing bandwidth
 Node E is a newly arrived peer

Peer Adaptation

Peer Adaptation in Coolstreaming
 Inequality (1) is used to monitor the buffer status of the received sub-streams at node A
  - If this inequality does not hold, at least one sub-stream is delayed beyond the threshold value Ts
 Inequality (2) is used to monitor the buffer status at the parents of node A
  - If this inequality does not hold, the parent node lags considerably behind at least one of the partners (which is currently not a parent of node A) in the number of blocks received
 (A plausible reconstruction of both inequalities is given below)
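Inequalities (1) and (2) are referenced but not shown on this slide. A plausible reconstruction from the descriptions above, under assumed notation (H^i_X is the sequence number of the latest block node X holds on sub-stream i, H̄_X its maximum over sub-streams, and Ts, Tp are threshold values), is:

```latex
% Reconstruction under assumptions; the exact form in the Coolstreaming
% papers may differ.
% (1) Sub-stream balance at node A: no received sub-stream may lag the
%     most advanced one by more than T_s blocks.
\[
  \max_i H^i_A \;-\; \min_i H^i_A \;\le\; T_s \qquad (1)
\]
% (2) Parent quality: a parent p of A must not lag the best-informed
%     partner q (currently not a parent of A) by more than T_p blocks.
\[
  \max_{q \in \mathrm{partners}(A)} \bar{H}_q \;-\; \bar{H}_p \;\le\; T_p \qquad (2)
\]
```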

User Types Distribution

Contribution Index

Conceptual Overlay Topology
 Source node: O
 Super-peers: {A, B, C, D}
 Moderate-peers: {a}
 Casual-peers: {b, c, d}

Event Distributions

Media Player Ready Time under Different Time Periods

Session Distribution