
1 Video Delivery Techniques

2 Server Channels
Videos are delivered to clients as a continuous stream. Server bandwidth determines the number of video streams that can be supported simultaneously. Server bandwidth can be organized and managed as a collection of logical channels, which can be scheduled to deliver the various videos.

3 Using a Dedicated Channel
[Figure: the video server delivers a dedicated stream to each client. Too expensive!]

4 Video on Demand Quiz
1. Video-on-demand technology has many applications: electronic commerce, digital libraries, distance learning, news on demand, entertainment, all of these applications.
2. Broadcast can be used to substantially reduce the demand on server bandwidth? True / False
3. Broadcast cannot deliver videos on demand? True / False

5 Push Technologies
Broadcast technologies can deliver videos on demand. The requirement on server bandwidth is independent of the number of users the system is designed to support. If your answer to Question 3 was True, you are wrong: broadcast is less expensive and more scalable!

6 Simple Periodic Broadcast
Staggered Broadcast Protocol: a new stream of each video is started every broadcast interval. The worst-case service latency is the broadcast interval.

7 Simple Periodic Broadcast
A new stream of each video is started every broadcast interval. The worst-case service latency is the broadcast interval. Advantage: the bandwidth requirement is proportional to the number of videos (not the number of users). Can we do better?
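The staggered schedule can be sketched in a few lines; the video length and channel count below are illustrative, not from the slides.

```python
def staggered_schedule(video_minutes: float, channels_per_video: int):
    """Return the stagger interval (also the worst-case service latency)
    and the stream start offsets within one broadcast period."""
    interval = video_minutes / channels_per_video
    offsets = [i * interval for i in range(channels_per_video)]
    return interval, offsets
```

For example, a 120-minute video on 4 dedicated channels starts a new stream every 30 minutes, so no client waits longer than 30 minutes.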

8 Limitation of Simple Periodic Broadcast
Access latency improves only linearly with increases in server bandwidth. Substantial improvement can be achieved if we allow the client to preload data.

9 Pyramid Broadcasting – Segmentation [Viswanathan95]
Each data segment D_i is made α times the size of D_(i-1), for all i. α = B/(M·K), where B is the system bandwidth, M is the number of videos, and K is the number of server channels. The optimal value is α = e ≈ 2.72 (Euler's number).
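As a sketch (not from the slides), the geometric segmentation can be computed directly; the video length and default α = e here are illustrative.

```python
def pyramid_segments(video_len: float, k: int, alpha: float = 2.718281828459045):
    """Sizes of the k segments of one video under Pyramid Broadcasting:
    D_i = alpha * D_(i-1), with the sizes summing to the full video length."""
    d1 = video_len * (alpha - 1.0) / (alpha ** k - 1.0)
    return [d1 * alpha ** i for i in range(k)]
```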

10 Pyramid Broadcasting – Download & Playback Strategy
Server bandwidth is evenly divided among the channels, each much faster than the playback rate. The client software has two loaders:
–Begin downloading the first data segment at its first occurrence, and start consuming it concurrently.
–Download the next data segment at the earliest possible time after beginning to consume the current data segment.

11 Disadvantages of Pyramid Broadcasting
–The channel bandwidth is substantially larger than the playback rate.
–Huge storage space is required to buffer the preloaded data.
–It requires substantial client bandwidth, which is typically the most expensive component of a VOD system.

12 Permutation-Based Pyramid Broadcasting (PPB) [Aggarwal96]
PPB further partitions each logical channel of the PB scheme into P subchannels. A replica of each video fragment is broadcast on P different subchannels with a uniform phase delay.
[Figure: a channel C_i carries videos V_1 and V_2 on subchannels; the client begins downloading, pauses to allow the playback to catch up, then resumes downloading.]

13 Advantages and Disadvantages of PPB
–The requirement on client bandwidth is substantially less than in PB.
–The storage requirement is also reduced significantly (to about 50% of the video size).
–Synchronization is difficult to implement, since the client needs to tune in at an appropriate point within a broadcast.

14 Skyscraper Broadcasting [Hua97]
Each video is fragmented into K segments, each repeatedly broadcast on a dedicated channel at the playback rate. The sizes of the K segments follow the pattern [1, 2, 2, 5, 5, 12, 12, 25, 25, …, W, W, …, W]. The sizes of the larger segments are constrained to W (the width of the skyscraper). The segments alternate between an odd group and an even group.

15 Generating Function
The broadcast series is generated using the following recursive function:
f(n) = 1, if n = 1;
f(n) = 2, if n = 2 or 3;
f(n) = 2·f(n-1) + 1, if n mod 4 = 0;
f(n) = f(n-1), if n mod 4 = 1;
f(n) = 2·f(n-1) + 2, if n mod 4 = 2;
f(n) = f(n-1), if n mod 4 = 3.
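The recursive series is easy to check in code; this sketch also applies the W cap described on the previous slide.

```python
def skyscraper_series(k: int, width: int):
    """First k terms of the Skyscraper broadcast series, with segment
    sizes constrained to `width` (the width of the skyscraper)."""
    f = []
    for n in range(1, k + 1):
        if n == 1:
            v = 1
        elif n in (2, 3):
            v = 2
        elif n % 4 == 0:
            v = 2 * f[-1] + 1
        elif n % 4 == 2:
            v = 2 * f[-1] + 2
        else:  # n mod 4 is 1 or 3: repeat the previous size
            v = f[-1]
        f.append(v)
    return [min(v, width) for v in f]
```

For instance, `skyscraper_series(9, 52)` reproduces the series [1, 2, 2, 5, 5, 12, 12, 25, 25] from the previous slide.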

16 Skyscraper Broadcasting – Playback Procedure
The Odd Loader and the Even Loader download the odd groups and the even groups, respectively. The W-segments are downloaded sequentially using only one loader. As the loaders fill the buffer, the Video Player consumes the data in the buffer.

17 Advantages of Skyscraper Broadcasting
Since the first segment is very short, service latency is excellent. Since the W-segments are downloaded sequentially, the buffer requirement is minimal. [Figure: buffer requirement of Pyramid vs. Skyscraper.]

18 SB Example
The blue clients share the 2nd and 3rd fragments, and the 6th, 7th, and 8th fragments, with the red clients.

19 Another Approach

20 CCA Broadcasting
–The server broadcasts each segment at the playback rate.
–Clients use c loaders.
–Each loader downloads its streams sequentially; e.g., the i-th loader is responsible for segments i, i+c, i+2c, i+3c, …
–Only one loader is used to download all the equal-size W-segments sequentially.
[Figure: C = 3 (clients have three loaders).]
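The loader-to-segment assignment can be sketched as follows; numbering segments from 1 is an assumption.

```python
def cca_loader_plan(num_segments: int, c: int):
    """Map each of the c client loaders to the segments it downloads
    sequentially: loader i fetches segments i, i+c, i+2c, ..."""
    return {i: list(range(i, num_segments + 1, c)) for i in range(1, c + 1)}
```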

21 Advantages of CCA
It has the advantages of Skyscraper Broadcasting. It can leverage client bandwidth to improve performance.

22 Cautious Harmonic Broadcasting (Segmentation Design)
A video is partitioned into n equally-sized segments. The first channel repeatedly broadcasts the first segment S_1 at the playback rate. The second channel alternately broadcasts S_2 and S_3, repeatedly, at the playback rate. Each of the remaining segments S_i is repeatedly broadcast on its own dedicated channel at 1/(i-1) times the playback rate.
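The total server bandwidth implied by this segmentation can be sketched as below; expressing it in multiples of the playback rate is a convention of this sketch, not of the slides.

```python
def chb_server_bandwidth(n: int) -> float:
    """Server bandwidth for Cautious Harmonic Broadcasting with n equal
    segments, in multiples of the playback rate: one full-rate channel
    for S1, one for the alternating S2/S3, and 1/(i-1) for each S_i, i >= 4."""
    return 2.0 + sum(1.0 / (i - 1) for i in range(4, n + 1))
```

The harmonic growth of this sum is why many segments (hence low latency) cost so little extra server bandwidth.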

23 Cautious Harmonic Broadcasting (Playback Strategy)
The client can start the playback as soon as it can download the first segment. Once the client starts receiving the first segment, it also starts receiving every other segment.

24 Cautious Harmonic Broadcasting
Advantage: better than SB in terms of service latency.
Disadvantage: requires about three times more receiving bandwidth than SB.
Implementation problem: the client must receive data from many channels simultaneously (e.g., 240 channels are required for a 2-hour video if the desired latency is 30 seconds). No practical storage subsystem can move its read heads fast enough to multiplex among so many concurrent streams.

25 Pagoda Broadcasting – Download and Playback Strategy
Each channel broadcasts data at the playback rate. The client receives data from all channels simultaneously and starts the playback as soon as it can download the first segment.

26 Pagoda Broadcasting – Advantage & Disadvantage
Advantage: the required server bandwidth is low compared to Skyscraper Broadcasting.
Disadvantage: the required client bandwidth is many times higher than in Skyscraper Broadcasting. Achieving a maximum delay of 138 seconds for a 2-hour video requires each client to have a bandwidth five times the playback rate, e.g., approximately 20 Mbps for MPEG-2. The system cost is significantly higher.

27 New Pagoda Broadcasting [Paris99]
New Pagoda Broadcasting improves on the original Pagoda Broadcasting, but the required client bandwidth remains very high. Example: achieving a maximum delay of 110 seconds for a 2-hour video requires each client to have a bandwidth five times the playback rate (approximately 20 Mbps for MPEG-2). The system cost is very high.

28 Limitations of Periodic Broadcast
–Periodic broadcast is only good for very popular videos.
–It is not suitable for a changing workload.
–It can only offer near-on-demand services.

29 Batching
–FCFS
–MQL (Maximum Queue Length first)
–MFQ (Maximum Factored Queue length)
Still only near VoD! Can multicast provide true VoD?

30 Current Hybrid Approaches
FCFS-n: First Come First Served for unpopular videos; n channels are reserved for popular videos.
MQL-n: Maximum Queue Length policy for unpopular videos; n channels are reserved for popular videos.
Performance is limited.

31 New Hybrid Approach
Periodic Broadcast + Scheduled Multicast: the Skyscraper Broadcasting scheme (SB) combined with Largest Aggregated Waiting Time First (LAW).

32 LAW (Largest Aggregated Waiting Time First)
MFQ tends toward MQL, losing fairness: it schedules based on the factored queue lengths q_1/f_1, q_2/f_2, q_3/f_3, q_4/f_4, …
Whenever a stream becomes available, schedule the video with the maximum value of S_i:
S_i = c·m - (a_i1 + a_i2 + … + a_im),
where c is the current time, m is the total number of requests for video i, and a_ij is the arrival time of the jth request for video i. (S_i is the sum of each request's waiting time in the queue.)
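A minimal sketch of the LAW selection rule; representing each queue as a list of pending arrival times is an assumption of this sketch.

```python
def law_select(queues: dict, now: float):
    """Return the video with the largest aggregated waiting time
    S_i = now * m - sum(arrival times of its m pending requests)."""
    return max(queues, key=lambda v: now * len(queues[v]) - sum(queues[v]))
```

With five older requests for v1 and four recent requests for v2, LAW prefers the queue whose requests have waited longest in total.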

33 LAW (Example)
By MFQ, q_1·t_1 = 5·22 = 110 and q_2·t_2 = 4·28 = 112, so video 2 is selected by MFQ, even though the average waiting times are 12 and 8 time units, respectively.
By LAW, at current time 128, S_1 = 128·5 - (sum of the five arrival times) = 60 and S_2 = 128·4 - (sum of the four arrival times) = 32, so video 1 is selected.
[Figure: requests R_11 … R_15 for video 1 and R_21 … R_24 for video 2, with the time of the last multicast and the current time marked.]

34 AHA (Adaptive Hybrid Approach)
Popularity is re-evaluated periodically. A popular video is assumed to require K logical channels. If a video is currently being broadcast by SB, go to Case 1; otherwise, go to Case 2.
Case 1: If the video is still popular, continue the SB broadcast. If not, terminate the SB broadcast after all the dependent playbacks end, mark the waiting queue as an LAW queue, and return the channels to the channel pool.
Case 2: If the video is popular and K channels are available, initiate an SB broadcast. Otherwise, mark the waiting queue as an LAW queue.

35 Performance Model
100 videos (120 min. each). Client behavior follows:
–the Zipf distribution (z = 0.1 ~ 0.9) for the choice of videos,
–the Poisson distribution for arrival times,
–popularity changing gradually every 5 min. for the dynamic environment,
–for waiting time, 5 min. (s = 1 min.).
Performance metrics: defection rate, average access latency, fairness, and throughput.

36 LAW vs. MFQ

37 AHA vs. MFQ-SB-n

38 Challenges – Conflicting Goals
Low latency: requests must be served immediately.
High efficiency: each multicast must still be able to serve a large number of clients.

39 Some Solutions
Application level:
–Piggybacking
–Patching
–Chaining
Network level:
–Caching Multicast Protocol (Range Multicast)

40 Piggybacking [Golubchik96]
Slow down an earlier service and speed up the new one to merge them into one stream.
–Limited efficiency due to the long catch-up delay.
–Implementation is complicated.
[Figure: with new arrivals and departures, one stream is slowed by 5% and another sped up by 5% until they merge.]

41 Patching
[Figure: a regular multicast delivering video A.]

42 Proposed Technique: Patching
[Figure: client B arrives at time t after the regular multicast of video A began; its Video Player plays a patching stream while its buffer caches the regular multicast from the skew point onward.]

43 Proposed Technique: Patching
The skew point is absorbed by the client buffer. [Figure: at time 2t, client B's Video Player has switched to the buffered data of the regular multicast.]

44 Client Design
[Figure: the video server delivers a regular multicast and a patching multicast. Client A's data loader (L_r) feeds the regular stream directly to the Video Player; clients B and C use two loaders (L_r and L_p) and a buffer to combine the regular and patching streams.]

45 Server Design
The server must decide when to schedule a regular stream or a patching stream. [Figure: a multicast group over time; requests A and E receive regular (r) streams, while B, C, D, F, and G receive patching (p) streams.]

46 Two Simple Approaches
If no regular stream for the same video exists, a new regular stream is scheduled. Otherwise, two policies can be used to make the decision: Greedy Patching and Grace Patching.

47 Greedy Patching
A patching stream is always scheduled. [Figure: clients A, B, and C over the video length; the shared data are limited by the buffer size.]

48 Grace Patching
If the client buffer is large enough to absorb the skew, a patching stream is scheduled; otherwise, a new regular stream is scheduled. [Figure: clients B and C share the data of client A's regular stream, within the buffer size, over the video length.]
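The two policies reduce to a small decision function; the parameter names below are illustrative.

```python
def schedule_stream(policy: str, skew: float, client_buffer: float,
                    regular_exists: bool) -> str:
    """Decide between a 'regular' and a 'patching' stream for a new
    request. `skew` is the time since the last regular stream started;
    `client_buffer` is the buffer capacity in the same time units."""
    if not regular_exists:
        return "regular"
    if policy == "greedy":
        return "patching"  # always patch onto the running regular stream
    # grace: patch only if the client buffer can absorb the skew
    return "patching" if skew <= client_buffer else "regular"
```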

49 Local Distribution Technologies
[Figure: video servers on an ATM or SONET backbone network, connected through switches to local distribution networks and clients.]
–ADSL (Asymmetric Digital Subscriber Line): currently 8 Mbps in one direction, and eventually speeds as high as 50 Mbps.
–HFC (Hybrid Fiber Coax): current coax cables are replaced by 750 MHz coax cable to achieve a total of 2 Gbps.

50 Performance Study
Compared with conventional batching; Maximum Factored Queue (MFQ) is used. Two scenarios are studied:
–No defection: average latency.
–Defection allowed: average latency, defection rate, and unfairness.

51 Simulation Parameters (Default / Range)
–Request rate (requests/min): … / …
–Client buffer (min of data): … / …
–Server bandwidth (streams): … / …,800
–Video length (minutes): 90 / N/A
–Number of videos: 100 / N/A
–Video access skew factor: 0.7 / N/A
–Number of requests: 200,000 / N/A

52 Effect of Server Bandwidth

53 Effect of Client Buffer

54 Effect of Request Rate

55 Optimal Patching
What is the optimal patching window? [Figure: a multicast group over time; requests A and E receive regular (r) streams; B, C, D, F, and G, arriving within the patching window, receive patching (p) streams.]

56 Optimal Patching Window
D is the mean total amount of data transmitted by a multicast group. Minimize the server bandwidth requirement, D/W, under various W values. [Figure: client A's regular stream over the video length, with the buffer size and the window W marked.]

57 Optimal Patching Window
–Compute D, the mean amount of data transmitted for each multicast group.
–Determine Δ, the average time duration of a multicast group.
–The server bandwidth requirement is D/Δ, which is a function of the patching period.
–Find the patching period that minimizes the bandwidth requirement.
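A simplified analytical sketch, assuming Poisson request arrivals at rate λ (a model chosen for this sketch, not given on the slides): the regular stream carries the whole video, a request arriving t after it needs a patch of length t, and a group's duration is the window plus the mean wait for the next request.

```python
def bandwidth_per_group(w: float, video_len: float, lam: float) -> float:
    """Mean data of a multicast group divided by its mean duration:
    D = video_len + lam * w^2 / 2, duration = w + 1/lam."""
    return (video_len + lam * w * w / 2.0) / (w + 1.0 / lam)

def optimal_window(video_len: float, lam: float) -> float:
    """Grid-search the patching window minimizing bandwidth_per_group."""
    grid = [i * video_len / 10000.0 for i in range(1, 10001)]
    return min(grid, key=lambda w: bandwidth_per_group(w, video_len, lam))
```

Under this model, setting the derivative to zero gives the closed form W* = (sqrt(1 + 2·λ·L) - 1)/λ, which the grid search reproduces.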

58 Candidates for the Optimal Patching Window

59 Concluding Remarks
–Unlike conventional multicast, requests can be served immediately under Patching.
–Patching makes multicast more efficient by dynamically expanding the multicast tree.
–Most streams deliver only the first few minutes of video data.
–Patching is very simple and requires no specialized hardware.

60 Patching on the Internet
Problem: the current Internet does not support multicast.
A solution: deploy an overlay of software routers on the Internet; multicast is implemented on this overlay using only IP unicast.

61 Content Routing
Each router forwards its Find messages to other routers in a round-robin manner.

62 Removal of an Overlay Node
Inform the child nodes to reconnect to their grandparent.

63 Failure of the Parent Node
–Data stop coming from the parent.
–Reconnect to the server.

64 Slow Incoming Stream
Reconnect upward to the grandparent.

65 Downward Reconnection
When a reconnection reaches the server, future reconnections of this link go downward. Downward reconnection is done through a sibling node selected in a round-robin manner. When a downward reconnection reaches a leaf node, future reconnections of this link go upward again.

66 Limitation of Patching
The performance of Patching is limited by the server bandwidth. Can we scale the application beyond the physical limitation of the server?

67 Chaining
–Uses a hierarchy of multicasts: clients multicast data to other clients downstream.
–The demand on server bandwidth is substantially reduced.

68 Chaining
–Highly scalable and efficient.
–But implementation is a challenge.
[Figure: the video server streams to client A; each client caches data on disk while rendering to the screen and forwards the stream to the next client (A → B → C).]

69 Scheduling Multicasts
Conventional multicast:
–I state: the video has no pending requests.
–Q state: the video has at least one pending request.
Chaining:
–C state: until the first frame is dropped from the multicast tree, the tree continues to grow and the video stays in the C state.

70 Enhancement
E state: when resources become available, service begins for all the pending requests except the youngest one. As long as new requests continue to arrive, the video remains in the E state. If the arrival of requests stops for an extended period of time, the video transits into the C state after initiating the service for the last pending request. This strategy returns to the I state much less frequently and is less demanding on the server bandwidth.

71 Advantages of Chaining
Requests do not have to wait for the next multicast.
–Better service latency.
Clients can receive data from the expanding multicast hierarchy instead of the server.
–Less demanding on server bandwidth.
Every client that uses the service contributes its resources to the distributed environment.
–Scalable.

72 Is Chaining Expensive?
Each receiving end must have caching space; 56 Mbytes can cache five minutes of MPEG-1 video. The additional cost can easily pay for itself in a short time.

73 Limitations of Chaining
–It only works in a collaborating environment, i.e., the receiving nodes are on all the time.
–It conserves server bandwidth, but not network bandwidth.

74 Another Challenge
Can a multicast deliver the entire video to all the receivers, who may subscribe to the multicast at different times? If we can achieve this capability, we do not need to multicast as frequently.

75 Range Multicast [Hua02]
–Deploy an overlay of software routers on the Internet.
–Video data are transmitted to clients through these software routers.
–Each router caches a prefix of the video streams passing through it.
–This buffer may be used to provide the entire video content to subsequent clients arriving within a buffer-size period.

76 Range Multicast Group
Caching Multicast Protocol (CMP): four clients join the same server stream at different times without delay, and each client sees the entire video. Buffer size: each router can cache 10 time units of video data. Assumption: no transmission delay.

77 Multicast Range
All members of a conventional multicast group share the same play point at all times; they must join at the multicast time. Members of a range multicast group can have a range of different play points; they can join at their own time. Example: the multicast range at time 11 is [0, 11].

78 Network Cache Management
Initially, a cache chunk is free. When a free chunk is dispatched for a new stream, the chunk becomes busy. A busy chunk becomes hot if its content matches a new service request.
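The chunk life cycle can be sketched as a tiny state machine; the "release" transitions back to free are an assumption, since the slide does not say when chunks are reclaimed.

```python
def next_state(state: str, event: str) -> str:
    """CMP cache-chunk states: free -> busy when dispatched for a new
    stream; busy -> hot when the cached content matches a new request."""
    transitions = {
        ("free", "dispatch"): "busy",
        ("busy", "match"): "hot",
        ("busy", "release"): "free",  # assumed reclamation path
        ("hot", "release"): "free",   # assumed reclamation path
    }
    # unknown (state, event) pairs leave the state unchanged
    return transitions.get((state, event), state)
```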

79 CMP vs. Chaining
Assumption: each router has one chunk of storage space capable of caching 10 time units of video. [Figure: side-by-side comparison of Chaining and CMP.]

80 CMP vs. Proxy Servers
–Proxy servers are placed at the edge of the network to serve local users; CMP routers are located throughout the network for all users to share.
–Proxy servers are managed autonomously; the CMP router caches are seen collectively as a single unit.

81 CMP vs. Proxy Servers
–Proxy servers: popular data are heavily duplicated if we cache long videos, so caching long videos is not advisable, and much of the data must still be obtained from the server.
–CMP: routers cache only a small leading portion of each video passing through, and the majority of the data are obtained from the network.

82 VCR-Like Interactivity
Continuous interactive functions: fast forward, fast rewind, pause.
Discontinuous interactive functions: jump forward, jump backward.
Useful for many VoD applications.

83 VCR Interaction Using the Client Buffer
[Figure: the client buffer holds a moving window of the video stream.]

84 Interaction Using Batching [Almeroth96]
–Requests arriving during a time slot form a multicast group.
–Jump operations can be realized by switching to an appropriate multicast group.
–Use an emergency stream if a destination multicast group does not exist.

85 Continuous Interactivity under Batching
Pause: stop the display; return to normal play as in Jump.
Fast forward: fast-forward through the video frames in the buffer; when the buffer is exhausted, return to normal play as in Jump.
Fast rewind: same as fast forward, but in the reverse direction.

86 SAM (Split and Merge) Protocol [Liao97]
Uses two types of streams: S streams for normal multicast and I streams for interactivity. When a user initiates an interactive operation:
–Use an I channel to interact with the video.
–When done, use the I channel as a patching stream to join an existing multicast.
–Return the I channel.
Advantage: unrestricted fast forward and rewind. Disadvantage: I streams require substantial bandwidth.

87 Resuming Normal Play in SAM
Use the I stream to download segments 6 and 7 and render them onto the screen. At the same time, join the target multicast and cache the data, starting from segment 8, in a local buffer.

88 Interaction with Broadcast Video
The interactive techniques developed for Batching can also be used for Staggered Broadcast. However, Staggered Broadcast does not perform well.

89 Client Centric Approach (CCA)
–The server broadcasts each segment at the playback rate.
–Clients use c loaders.
–Each loader downloads its streams sequentially; e.g., the i-th loader is responsible for segments i, i+c, i+2c, i+3c, …
–Only one loader is used to download all the equal-size W-segments sequentially.
[Figure: C = 3 (clients have three loaders).]

90 CCA is Good for Interactivity
Segments in the same group are downloaded at the same time.
–Facilitates fast forward.
The last segment of a group is the same size as the first segment of the next group.
–Ensures smooth continuous playback after interactivity.

91 Broadcast-based Interactive Technique (BIT) [Hua02]

92 BIT
Two buffers:
–Normal Buffer
–Interactive Buffer
When the Interactive Buffer is exhausted, the client must resume normal play.

93 BIT – Resume-Play Operation
Three segments are being downloaded simultaneously. The actual destination point is chosen from among the frames at the broadcast points to ensure continuous playback.

94 BIT – User Behavior Model
m_x: duration of action x; P_x: probability of issuing action x; P_i: probability of issuing an interaction; m_i: duration of the interaction.
m_ff = m_fr = m_pause = m_jf = m_jb; P_pause = P_ff = P_fb = P_jf = P_jb = P_i/5.
dr = m_i/m_p: the interaction ratio.

95 Performance Metrics
Percentage of unsuccessful actions: an interaction fails if the buffer fails to accommodate the operation, e.g., a long-duration fast forward pushes the play point off the Interactive Buffer.
Average percentage of completion: measures the degree of incompleteness, e.g., if a 20-second fast forward is forced to resume normal play after 15 seconds, the percentage of completion is 15/20, or 75%.

96 BIT – Simulation Results

97 Supporting Client Heterogeneity
–Multi-resolution encoding
–Bandwidth adaptors
–HeRO Broadcasting

98 Multi-resolution Encoding
Encode the video data as a series of layers. Each user can individually mould its service to fit its capacity: a user keeps adding layers until it is congested, then drops the highest layer. Drawback: compromises the display quality.

99 Bandwidth Adaptors
Advantage: all clients enjoy the same display quality.

100 Requirements for an Adaptor
An adaptor dynamically transforms a given broadcast into another, less demanding one. The segmentation scheme must allow easy transformation of one broadcast into another; the CCA segmentation technique has this property.

101 Two Segmentation Examples

102 Adaptation (1)
The adaptor downloads from all broadcast channels simultaneously.

103 Adaptation (2)
Each sender routine retrieves data chunks from the buffer and broadcasts them downstream. For each chunk, the sender routine calls deleteChunk to decide whether the chunk can be deleted from the buffer.

104 Buffer Management
insertChunk implements an As-Late-As-Possible policy: if another occurrence of this chunk will be available from the server before it is needed, ignore this one; otherwise, buffer it.
deleteChunk implements an As-Soon-As-Possible policy: determine the next time the chunk will need to be broadcast downstream; if this moment comes before the chunk is next available from the server, keep it in storage; otherwise, delete it.
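Both policies amount to comparing two timestamps; a sketch with illustrative parameter names:

```python
def should_cache(next_from_server: float, needed_downstream_at: float) -> bool:
    """insertChunk, As-Late-As-Possible: buffer the chunk only if the
    server will NOT offer it again before the adaptor must send it."""
    return next_from_server > needed_downstream_at

def should_keep(needed_downstream_at: float, next_from_server: float) -> bool:
    """deleteChunk, As-Soon-As-Possible: keep the chunk only if it must
    be sent downstream before it is next available from the server."""
    return needed_downstream_at < next_from_server
```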

105 The Adaptor Buffer
Computation is not intensive: it is performed only for the first chunk of each segment. If this initial chunk is marked for caching, so is the rest of the segment; the same goes for deletion.

106 The Start-up Delay
The start-up delay is the broadcast period of the first segment on the server.

107 HeRO – Heterogeneous Receiver-Oriented Broadcasting
–Allows receivers of various communication capabilities to share the same periodic broadcast.
–All receivers enjoy the same video quality.
–Bandwidth adaptors are not used.

108 HeRO – Data Segmentation
The size of the i-th segment is 2^(i-1) times the size of the first segment.

109 HeRO – Download Strategy
The number of channels needed depends on the time slot in which the service request arrives. Loader i downloads segments i, i+C, i+2C, i+3C, etc., sequentially, where C is the number of loaders available. [Figure: the global period of the broadcast schedule.]
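HeRO's segmentation and loader schedule can be sketched together; segment sizes are expressed in units of the first segment.

```python
def hero_segments(k: int):
    """HeRO data segmentation: the i-th segment is 2**(i-1) units."""
    return [2 ** i for i in range(k)]

def hero_loader_plan(k: int, c: int):
    """Loader i downloads segments i, i+c, i+2c, ... sequentially."""
    return {i: list(range(i, k + 1, c)) for i in range(1, c + 1)}
```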

110 HeRO – Regular Channels
The first user can download from six channels simultaneously. [Figure: request 1 arriving at the start of the global period.]

111 HeRO – Regular Channels
The second user can download from two channels simultaneously. [Figure: request 2 arriving later in the global period.]

112 Worst Case for Clients with 2 Loaders
The worst-case latency is 11 time units. The worst cases appear because the broadcast periods coincide at the end of the global period. [Figure: request 2 arriving where the broadcast periods coincide; latency 11 time units.]

113 Worst Case for Clients with 3 Loaders
The worst-case latency is 5 time units. The worst cases appear because the broadcast periods coincide at the end of the global period. [Figure: a request arriving where the broadcast periods coincide; latency 5 time units.]

114 Observations on the Worst Cases
For a client with a given bandwidth, the time slots at which it can start the video are not uniformly distributed over the global period. The non-uniformity varies over the global period depending on the degree of coincidence among the broadcast periods of the various segments.

115 Observations on the Worst Cases (cont.)
The worst non-uniformity occurs at the end of each global period, when the broadcast periods of all segments coincide. The non-uniformity causes long service delays for clients with less bandwidth. We need to minimize this coincidence to improve the worst case.

116 Adding One More Channel
We broadcast the last segment on one more channel, but with a time shift of half its size. We now offer more possibilities to download the last segment and, above all, eliminate every coincidence with the previous segments. [Figure: the regular group plus a shifted channel.]

117 Shifted Channels
To reduce the service latency for less capable clients, HeRO broadcasts the longest segments on a second channel with a phase offset of half their size.

118 HeRO – Experimental Results
Under a homogeneous environment, HeRO is very competitive in service latency compared with the best protocols to date, and it is the most efficient protocol at saving client buffer space. HeRO is the first periodic broadcast technique designed to address heterogeneity in receiver bandwidth; less capable clients enjoy the same playback quality.

119 2-Phase Service Model (2PSM)
Browsing videos in a low-bandwidth environment.

120 Search Model
–Use similarity matching (e.g., keyword search) to look for candidate videos.
–Preview some of the candidates to identify the desired video.
–Apply VCR-style functions to search for the video segments.

121 Conventional Approach
1. Download S_0.
2. Download S_1 while playing S_0.
3. Download S_2 while playing S_1.
…
Advantage: reduces wait time. Disadvantage: unsuitable for video libraries.

122 Search Techniques
–Use extra preview files to support the preview function: this requires more storage space, and downloading the preview file adds delay to the service.
–Use separate fast-forward and fast-reverse files to provide the VCR-style operations: this requires more storage space, and the server can become a bottleneck.

123 Challenges
How to download the preview frames for FREE?
–No additional delay
–No additional storage requirement
How to support VCR operations without VCR files?
–No overhead for the server
–No additional storage requirement

124 2PSM – Preview Phase

125 2PSM – Playback Phase

126 Remarks
1. It requires no extra files to provide the preview feature.
2. Downloading the preview frames is free.
3. It requires no extra files to support the VCR functionality.
4. Each client manages its own VCR-style interaction; the server is not involved.
