Scalable Video Distribution Techniques
Laurentiu Barza
PLANETE project presentation: Sophia Antipolis, 12 October 2000
Motivation
User behaviour:
–skewed access: Zipf rule (20/80)
–desire rapid access
–may be willing to sacrifice access time and some interactivity for a lower cost service
Goal: a scalable service that provides almost « true VoD » at a much lower cost
Outline
Basic schemes:
Server-Push Broadcast
- Baseline
- DeBey
- Pyramid & Skyscraper
- Tailor-Made
Client-Pull with Multicast
- Batching
Baseline Broadcast Scheme
Continuous multicast of hot videos
–M videos, K channels
–assign K/M channels to each video
–schedule staggered video start times
« pay-per-view » model
[Figure: Baseline Broadcast — length of movie, 3 channels/movie]
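With K/M channels per video and evenly staggered start times, the worst-case start-up latency follows directly. A minimal sketch (the function name and example numbers are illustrative, not from the slides):

```python
# Worst-case start-up latency under Baseline Broadcast: each video gets
# K/M channels, starts are staggered evenly over the movie length L,
# so a client waits at most L / (K/M) = L*M/K.

def baseline_max_wait(video_length_min: float, k_channels: int, m_videos: int) -> float:
    """Maximum time a client waits for the next scheduled start."""
    channels_per_video = k_channels / m_videos
    return video_length_min / channels_per_video

# e.g. a 90-minute movie, 30 channels shared by 10 movies -> 3 channels/movie
print(baseline_max_wait(90, 30, 10))  # 30.0 minutes
```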
DeBey Broadcast
Split a video into N equal-sized segments
Segment “m” is transmitted ONCE every “m” time slots
–reduces the mean transmission rate
–peak transmission rate is very high
[Figure: DeBey Broadcast — segment schedule across channels over time slots t0…t4]
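The mean/peak rate trade-off can be seen by counting how many segments coincide in each slot. A sketch under the stated rule (segment m is sent whenever the slot index is a multiple of m); the function name is illustrative:

```python
# DeBey schedule sketch: in time slot t (1-based), segment m is sent
# whenever t % m == 0. The mean per-slot load approaches the harmonic
# number H_N, while peak slots (e.g. t = LCM of many m) carry far more.

def debey_slot_load(n_segments: int, n_slots: int):
    """Number of segments transmitted in each of the first n_slots slots."""
    loads = []
    for t in range(1, n_slots + 1):
        loads.append(sum(1 for m in range(1, n_segments + 1) if t % m == 0))
    return loads

loads = debey_slot_load(8, 24)
print(max(loads), round(sum(loads) / len(loads), 2))  # peak 6, mean ~2.67
```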
Pyramid Broadcasting
Split the video into N segments of lengths L1, L2, …, Ln
L = L1 + L2 + … + Ln
Segment sizes grow geometrically: L(i+1) = α · L(i), with α > 1
–lower max access time than the baseline scheme
–the client has to listen to 2 channels simultaneously
–significant receiver buffering: up to 70% of the video length
[Figures: Pyramid segmentation vs. Skyscraper segmentation]
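The geometric segmentation above can be sketched as follows; the first segment's length is scaled so the lengths sum to the movie length L (the function name and the example α are illustrative):

```python
# Geometric segment lengths for Pyramid Broadcasting:
# L(i+1) = alpha * L(i), scaled so that the lengths sum to total_len.

def pyramid_segments(total_len: float, n: int, alpha: float):
    """Return n segment lengths growing by factor alpha, summing to total_len."""
    geometric = [alpha ** i for i in range(n)]
    scale = total_len / sum(geometric)
    return [scale * g for g in geometric]

# e.g. a 90-minute movie in 5 segments with alpha = 2.5:
# the first segment is short (fast access), the last dominates.
print([round(s, 2) for s in pyramid_segments(90.0, 5, 2.5)])
```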
Skyscraper Broadcasting
Use a relative segment-size progression: 1, 2, 2, 5, 5, 12, 12, 25, 25, 52, 52, …
–requires less buffering than the pyramid scheme
–requires strict synchronization among the multicast channels
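A sketch of a generator for the width series above; the recurrence is an assumption reverse-engineered from the listed values, so treat it as illustrative rather than the scheme's official definition:

```python
# Skyscraper width series (1, 2, 2, 5, 5, 12, 12, 25, 25, 52, 52, ...).
# Assumed recurrence: odd indices repeat the previous width; indices
# divisible by 4 use 2*w+1; the remaining even indices use 2*w+2.

def skyscraper_widths(n: int):
    w = []
    for i in range(1, n + 1):
        if i == 1:
            w.append(1)
        elif i in (2, 3):
            w.append(2)
        elif i % 2 == 1:          # odd index: repeat the previous width
            w.append(w[-1])
        elif i % 4 == 0:
            w.append(2 * w[-1] + 1)
        else:                     # i % 4 == 2
            w.append(2 * w[-1] + 2)
    return w

print(skyscraper_widths(11))  # [1, 2, 2, 5, 5, 12, 12, 25, 25, 52, 52]
```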
Tailor-Made Approach
Covers all possible design dimensions:
–server transmission rate
–start-up latency
–peak client recording rate
–peak client storage requirements
A modification of DeBey:
–all the segments have the same length but are transmitted continuously
[Figures: Tailor-Made approach — channel schedule over time; second panel shows a client joining]
Partial Conclusion
Characteristics of the proposed schemes:
–server-push approaches
–designed for hot videos
–vary in the way they segment a video
–trade off server transmission rate, client I/O bandwidth, and client storage and recording requirements
–all have non-zero start-up latency
Client-Pull: Batching
Delay a request for a video until a certain number of requests for that video have arrived before the video is delivered
–batching is only effective for popular videos
–reduces server and network resource requirements
–start-up latency can be very high, depending on:
 –the popularity of the requested video
 –the number of requests required to schedule a video
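The latency cost of batching is easy to see in a toy example: the first request in a batch waits for the whole batch to fill. A sketch with a fixed batch threshold (function name and numbers are illustrative):

```python
# Batching sketch: the video starts once batch_size requests have
# accumulated, so earlier requests in a batch wait longer.

def batching_waits(arrival_times, batch_size):
    """Per-client start-up latency; each batch starts when its last member arrives."""
    waits = []
    for i in range(0, len(arrival_times), batch_size):
        batch = arrival_times[i:i + batch_size]
        start = batch[-1]          # service begins at the last arrival
        waits.extend(start - t for t in batch)
    return waits

print(batching_waits([0, 1, 4, 6, 7, 12], 3))  # [4, 3, 0, 6, 5, 0]
```

Note how the first client of the second batch waits 6 time units: batching latency grows as arrivals spread out, which is why it only pays off for popular videos.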
Controlled Multicast
Controlled Multicast = Batching + Optimal Patching
Define a patch threshold that trades off the size of the patches against the frequency with which new multicast channels are initiated
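The threshold decision can be sketched as below: a late arrival joins the ongoing multicast and receives only the missed portion as a unicast patch, unless the multicast is too old, in which case a fresh one starts. Function and threshold names are assumptions for illustration:

```python
# Patch-threshold decision sketch for controlled multicast: if the ongoing
# multicast started less than `threshold` ago, serve a patch covering the
# missed portion; otherwise initiate a new multicast channel.

def serve_request(now: float, last_multicast_start: float, threshold: float):
    elapsed = now - last_multicast_start
    if elapsed < threshold:
        # client joins the ongoing multicast; patch length = missed portion
        return ("patch", elapsed)
    return ("new_multicast", 0.0)

print(serve_request(now=5.0, last_multicast_start=2.0, threshold=10.0))
print(serve_request(now=20.0, last_multicast_start=2.0, threshold=10.0))
```

A small threshold means many multicast channels but tiny patches; a large one means few channels but long unicast patches — exactly the trade-off the threshold controls.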
Catching
Server broadcasts videos via dedicated multicast channels
Client:
–immediately joins the appropriate multicast channel
–requests the missing first part of the video from the server
Server sends the first part to the client via a dedicated unicast channel
Multicast with Caching (Mcache)
Server multicasts the body of a video using:
–object channels: multicast the body of the video
–patch channels: multicast the parts of the video right after the prefix
Client initiates two parallel requests:
–the prefix from the cache
–the video body from the server
Server calculates the schedule and informs the client which channel to join, and when
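The client side of this flow can be sketched as follows. All names here (`prefix_cache`, `FakeServer`, `schedule`) are hypothetical stand-ins, and a real client would issue the two requests in parallel rather than sequentially as in this toy version:

```python
# Mcache client-flow sketch: fetch the cached prefix and ask the server
# for the body schedule (which channel to join, and when). Names are
# illustrative; a real client issues the two requests concurrently.

class FakeServer:
    """Toy stand-in for the server's scheduling logic."""
    def schedule(self, video_id):
        # toy schedule: a channel named after the video, join immediately
        return (f"mc-{video_id}", 0.0)

def mcache_client(prefix_cache, server, video_id):
    prefix = prefix_cache[video_id]               # request 1: prefix from the cache
    channel, join_at = server.schedule(video_id)  # request 2: schedule from the server
    return {"prefix": prefix, "channel": channel, "join_at": join_at}

print(mcache_client({"v1": b"prefix-bytes"}, FakeServer(), "v1"))
```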
Conclusion
Various schemes for scalable video distribution:
–concern only hot, popular videos
–emulate the native Video on Demand service while requiring far fewer resources at the server
–Server-Push vs. Client-Pull models
–zero-latency vs. non-zero-latency schemes
[Figure: Mcache delivery — prefix (cached), body (server), patch]