
1 Optimization of Data Caching and Streaming Media
Kristin Martin, November 24, 2008

2 Problem - Streaming Media
- Rising demand for streaming media over the Internet
- Useful for:
  - Live video broadcasts
  - Distance education
  - Corporate telecasts
  - Video on demand
  - Interactive gaming

3 Problem - Streaming Media
- Requires high network bandwidth
- Delivery of data is time-sensitive
- Long sessions
- Clients have access to higher-bandwidth connections
  - They want higher-quality videos
- The bottleneck is shifting upstream:
  - Backbone network
  - Server

4 Three Solutions
- Unicast Suffix Batching (SBatch)
- Shared Running Buffers (SRB)
- Optimal Prefix Caching and Data Sharing (OPC-DS)

5 Unicast Suffix Batching (SBatch)
- A batching scheme that takes advantage of the video prefix cached at the proxy to provide instantaneous playback to clients.
- Designed for environments where the proxy-client path is only unicast-capable.
- Part of a proxy-assisted delivery scheme that utilizes three reactive (on-demand) transmission schemes:
  - Batching (SBatch)
  - Patching (UPatch)
  - Stream Merging (MMerge)

6 SBatch Overview
- The proxy intercepts client requests.
- If a prefix of the video is present locally, it streams the prefix directly to the client.
- If the entire video is not stored on the proxy, the proxy contacts the server for the suffix of the stream and relays it to the client.

7 Optimal Proxy Cache Allocation
- How long a prefix of each video should be stored at the proxy?
- The allocation can be found with dynamic programming over the videos and the units of the proxy cache (sketched below).
- The saving in transmission cost when caching a prefix of video i, compared with no caching at the proxy, is C_i(0) - C_i(v_i).
- The goal is to maximize total savings while keeping within the size of the cache: maximize Σ_i [C_i(0) - C_i(m_i·u)] subject to Σ_i m_i·u·b_i ≤ S
- Notation:
  - m_i = units of prefix cached for video i
  - v_i = m_i·u = length of the cached prefix of video i
  - C_i(v_i) = transmission cost of video i when a prefix of length v_i is cached
  - u = caching grain (unit size)
  - b_i = mean bandwidth of video i
  - S = size of the proxy cache
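
A minimal sketch of this allocation as a multiple-choice knapsack solved by dynamic programming. The assumptions are not on the slide: the proxy cache is discretized into fixed-size blocks, and cost(i, v) is a hypothetical callback returning C_i(v), the transmission cost of video i when a v-second prefix is cached.

```python
import math

def allocate_prefixes(videos, cache_blocks, block_size, u, cost):
    """videos[i] = {'bandwidth': b_i in bytes/sec, 'length': seconds}.
    Returns (max_total_saving, units) with units[i] = m_i prefix units."""
    n = len(videos)
    best = [0.0] * (cache_blocks + 1)      # best[c]: max saving using <= c blocks
    pick = [[0] * (cache_blocks + 1) for _ in range(n)]

    for i, vid in enumerate(videos):
        max_units = int(vid['length'] // u)
        prev = best[:]                     # table for videos 0..i-1
        for c in range(cache_blocks + 1):
            for m in range(1, max_units + 1):
                blocks = math.ceil(m * u * vid['bandwidth'] / block_size)
                if blocks > c:
                    break                  # longer prefixes only take more space
                saving = cost(i, 0) - cost(i, m * u)
                if prev[c - blocks] + saving > best[c]:
                    best[c] = prev[c - blocks] + saving
                    pick[i][c] = m

    # Backtrack to recover how many prefix units each video was given.
    units, c = [0] * n, cache_blocks
    for i in range(n - 1, -1, -1):
        m = pick[i][c]
        units[i] = m
        if m:
            c -= math.ceil(m * u * videos[i]['bandwidth'] / block_size)
    return best[cache_blocks], units
```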

8 Unicast Suffix Batching (SBatch)
- Transmission of the suffix from the server to the proxy is done as late as possible.
- The first frame of the suffix is scheduled to arrive at time v_i (the length of the prefix).
- Additional requests arriving during (0, v_i] share the single incoming suffix (sketched below).
  - This saves additional suffix transmissions from the server.
- This allows multiple requests to be batched together without playback startup delay.
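
A minimal sketch of the batching window implied above: requests are grouped by the arrival time of the first request in each batch, and every request within the prefix length v_i of that first request shares its suffix transmission. The function and its timing model are illustrative assumptions.

```python
def sbatch_schedule(arrivals, prefix_len):
    """Group request arrival times (seconds) into batches that share one
    suffix transmission. Returns (suffix_start_time, batch) pairs, where the
    suffix starts arriving prefix_len seconds after the batch's first request
    (fetched as late as possible)."""
    batches = []
    for t in sorted(arrivals):
        if batches and t <= batches[-1][1][0] + prefix_len:
            batches[-1][1].append(t)   # arrives within (0, v_i]: join the batch
        else:
            batches.append((t + prefix_len, [t]))
    return batches

# Example: prefix_len = 60 s, arrivals at t = 0, 20, 50, 130
# -> [(60, [0, 20, 50]), (190, [130])]: one shared suffix per batch.
```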

9 Unicast Patching with Prefix Caching (UPatch)
- Patches the suffix of a video.
- If an additional request falls within a threshold G_i of the beginning of an existing suffix transmission, the proxy schedules a patch from the server (sketched below).
- Otherwise, a new transmission of the entire suffix is started.
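
A minimal sketch of this decision rule; the parameter names and the returned (action, seconds) pair are illustrative assumptions.

```python
def upatch_decision(request_time, last_suffix_start, threshold, prefix_len, video_len):
    """Decide whether a request can patch onto an ongoing suffix transmission
    (threshold = G_i) or needs a fresh transmission of the whole suffix."""
    if last_suffix_start is not None:
        gap = request_time - last_suffix_start
        if 0 <= gap <= threshold:
            # Patch only the portion of the suffix already transmitted;
            # the rest is shared with the ongoing suffix stream.
            return ("patch", gap)
    # Past the threshold (or no suffix in flight): transmit the entire suffix.
    return ("new_suffix", video_len - prefix_len)
```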

10 Multicast Patching with Prefix Caching (MPatch)
- MPatch is a patching scheme that exploits the prefix cached at the proxy.
- A multicast transmission scheme to use when the proxy-client path is multicast-capable.
- The suffix video stream is delivered to clients from the proxy via multicast.
- Separate unicast channels are used to obtain missing data.

11 Problems with SBatch
- Doesn't take advantage of a multicast-capable server-proxy connection, if available.
- Doesn't take advantage of history or patterns in user requests.
- Requires that users be able to receive multiple streams simultaneously.

12 Shared Running Buffers (SRB)
- Utilizes a server-proxy-client system.
- The proxy can choose to cache an object (buffer) so that subsequent requests can be served without contacting the server.
- Buffer states:
  - Construction State
  - Running State
  - Idle State

13 Construction State
- The initial buffer is allocated upon the arrival of a request.
- The size of the buffer may still be adjusted.
- At the end of the Construction State, the buffer is frozen and the size cannot be adjusted.

14 Running State
- After the buffer exits the Construction State, its size freezes and it serves as a running window over the streaming session.

15 Idle State
- The buffer enters the Idle State when the streaming session ends.
- At this point it can be reclaimed by a new request, or its resources can be reallocated if needed for another buffer.

16 SRB Algorithm
- For a new incoming request:
  - If the latest running buffer of the requested object still caches the prefix of that object at the proxy, the request is served from the existing running buffers of that object.
  - If the prefix is not cached, a new running buffer is created (sketched below).
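
A minimal sketch of this dispatch rule. The buffer fields (obj_id, start_time, earliest_offset, clients) and the create_buffer callback are illustrative assumptions, not names from the SRB paper.

```python
def srb_dispatch(obj_id, now, running_buffers, create_buffer):
    """Serve a new request from existing running buffers when the latest one
    still holds the object's prefix; otherwise allocate a new buffer."""
    candidates = [b for b in running_buffers if b.obj_id == obj_id]
    latest = max(candidates, key=lambda b: b.start_time, default=None)
    if latest is not None and latest.earliest_offset == 0:
        # The latest running buffer still caches the prefix: attach the client
        # and serve it from the existing running buffers, no server contact.
        latest.clients.append(now)
        return latest
    # Prefix no longer buffered (or no buffer exists): create a new running
    # buffer for this object, starting in the Construction state.
    return create_buffer(obj_id, now)
```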

17 Buffer Lifecycle
- When the buffer is in the Construction State:
  - If there has been only one request for the object so far, the buffer enters the Idle State immediately after the Construction State.
  - If the interval between the current request and the previous request (the waiting time) is greater than the average request arrival interval, the buffer shrinks to the extent that the most recent request can still be served from it.
  - If the waiting time is less than or equal to the average request arrival interval, the buffer stays in the Construction State and continues to grow.
  - This ensures that the most popular videos will be served directly from the proxy (the decision rule is sketched below).
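
One way to read the Construction-state rule above, as a sketch: the shrink arithmetic (buffer size measured in seconds of media, window spanning the attached requests), the field names, and the handling of the triggering request are assumptions; the single-request case is handled separately on the slide.

```python
CONSTRUCTION, RUNNING = "construction", "running"

def on_request_during_construction(buf, now, avg_interval, create_buffer):
    """buf.arrivals: arrival times of requests already attached to this buffer."""
    waiting_time = now - buf.arrivals[-1]
    if waiting_time > avg_interval:
        # Requests have slowed down: stop growing, shrink the window so the
        # most recently attached request can still be served from it, and
        # freeze the buffer into the Running state.
        buf.size = buf.arrivals[-1] - buf.arrivals[0]
        buf.state = RUNNING
        # The triggering request is then handled as a fresh request
        # (new buffer, possibly sharing data with existing running buffers).
        return create_buffer(now)
    # Requests keep arriving quickly: attach the request and keep growing so
    # that popular objects end up served entirely from the proxy.
    buf.arrivals.append(now)
    buf.size = now - buf.arrivals[0]
    return buf
```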

18 Buffer Lifecycle
- When the buffer is currently in the Running State:
  - It no longer stores the beginning of the object.
  - New requests can no longer be served completely from the running buffer.
  - A new buffer is allocated and the request is served from this new buffer along with any previous running buffers.
  - Other than the initial buffer, subsequent buffers do not have to run to the end of an object.

19 Buffer Lifecycle
- When a buffer runs to its end, it enters the Idle State.
- At this point, it can either be reclaimed by an incoming request for that object, or its resources can be released for use by another buffer if needed.
- Buffers are replaced according to their popularity in order to make maximum use of the cache.

20 Problems with Shared Running Buffers
- Prefixes are only stored at the proxy when the same object has been cached recently.
  - Does not take full advantage of prefix caching.
- Requires clients to be able to receive multiple streams simultaneously.

21 Optimal Prefix Caching and Data Sharing (OPC-DS)
- The appropriate sizes of the prefix cache and the interval cache are calculated according to the current request distribution.
- More popular videos will have a larger prefix (perhaps the whole video) cached at the proxy.
- Frequently used data is cached at proxies that are close to clients.

22 OPC-DS Process
- A portion of the proxy cache space is used for video prefixes.
- The suffix is sent from the server to the proxy and forwarded to the client.
- If an additional request for the same object arrives while the previous request is still receiving the prefix, an interval cache is created for the suffix.
- If the prefix of an object is not cached at the proxy, the entire object is requested from the server (the request path is sketched below).
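
A minimal sketch of this request path. The container names (prefix_cache, active_prefix_sessions, interval_cache) and the server.fetch_suffix / server.fetch_full calls are illustrative assumptions, not an API from the paper.

```python
def opc_ds_handle(obj_id, now, prefix_cache, active_prefix_sessions,
                  interval_cache, server):
    """Serve a request from the cached prefix when available, sharing the
    suffix through an interval cache for closely spaced requests."""
    prefix_len = prefix_cache.get(obj_id, 0)
    if prefix_len > 0:
        earlier = active_prefix_sessions.get(obj_id)
        if earlier is not None and now - earlier < prefix_len:
            # A previous request is still playing out of the prefix: share its
            # suffix via an interval cache instead of fetching it again.
            interval_cache.setdefault(obj_id, []).append(now)
        else:
            # Fetch only the suffix; the prefix streams from the proxy at once.
            server.fetch_suffix(obj_id, start=prefix_len)
        active_prefix_sessions[obj_id] = now
        return "prefix_hit"
    # No prefix cached: request the entire object from the server; a portion
    # may be kept in the interval cache for requests that follow closely.
    server.fetch_full(obj_id)
    return "prefix_miss"
```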

23 OPC-DS Prefix Caching
- Similar to SBatch, OPC-DS uses a dynamic programming algorithm to determine the optimal prefix size to cache at the proxy.
- However, it also considers the request rate (popularity) of each video when deciding how many resources to allocate to each prefix (see the weighting sketch below).
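
To illustrate the difference from the allocation on slide 7, the same knapsack DP can be reused with each video's saving weighted by its request rate; request_rate (requests per unit time) is an assumed input, not a symbol from the slides.

```python
def weighted_saving(cost, i, m, u, request_rate):
    """Expected saving per unit time for caching an m-unit prefix of video i:
    arrival rate of requests for the video times the per-request saving."""
    return request_rate[i] * (cost(i, 0) - cost(i, m * u))
```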

24 Video With Cached Prefix
- Users 1, 2, and 3 arrive close enough together to use the same cached copy of the suffix within the proxy.
- To fulfill user 4's request, the interval time is too large, so the proxy must re-request the data from the server.

25 Video Without Cached Prefix
- No prefix of the video is available on the proxy.
- The entire video is requested from the server.
- The proxy stores a portion of this video in the interval cache, according to the size and resources available.
- When requests for the same video arrive within a close enough interval, they can be fulfilled by the proxy with only one transmission from the server.

26 OPC-DS Benefits
- When requests are highly concentrated on a group of popular videos, it is guaranteed to cache the suffixes of those videos in the proxy.
- When requests are dispersed, it will share data using the interval cache for about half of the requests.
- The client does not need to be able to receive multiple streams simultaneously or cache a large amount of data.

27 Comparisons: Average Client Channel Requirement
- Average number of channels required, plotted against the ratio of proxy cache size to server storage capacity.

28 Comparisons: Average Client Storage Requirement
- Average percentage of the object a client is expected to cache locally, plotted against the ratio of proxy cache size to server storage capacity.

29 References
- B. Wang, S. Sen, M. Adler, and D. Towsley, "Optimal proxy cache allocation for efficient streaming media distribution," in Proc. IEEE INFOCOM 2002, vol. 3, pp. 1726-1735, 2002. URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=1019426&isnumber=21923
- S. Chen, B. Shen, Y. Yan, S. Basu, and X. Zhang, "SRB: Shared Running Buffers in Proxy to Exploit Memory Locality of Multiple Streaming Media Sessions," in Proc. 24th IEEE International Conference on Distributed Computing Systems (ICDCS'04), pp. 787-794, 2004.
- K. Li, C. Xu, Y. Zhang, and Z. Wu, "Optimal Prefix Caching and Data Sharing Strategy," in Proc. IEEE International Conference on Multimedia and Expo (ICME 2008), pp. 465-468, June 2008. URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=4607472&isnumber=4607348

