
1 Loopback: Exploiting Collaborative Caches for Large-Scale Streaming Ewa Kusmierek, Yingfei Dong, Member, IEEE, and David H. C. Du, Fellow, IEEE

2 Outline  Abstract  Related work  Client collaboration with loopback  Loopback analytical model  Local repair mechanism enhancing reliability  Conclusion and future work

3 Abstract (1/2)  Two-level streaming architecture: a content delivery network (CDN) delivers video from the central server to proxy servers, and each proxy server delivers video with the help of client caches.  Design features: the loopback approach and a local repair scheme.

4 Abstract (2/2)  Objectives: Reduce the required network bandwidth. Reduce the load on the central server. Reduce the cache space required at a proxy. Address the client failure problem.

5 Outline  Abstract  Related work  Client collaboration with loopback  Loopback analytical model  Local repair mechanism enhancing reliability  Conclusion and future work

6 Related work  P2Cast: A session is formed by clients arriving close together in time; the clients are organized into an application-level forwarding tree rooted at the server.

7 Related work  CDN-P2P hybrid architecture: Data are divided into fractions, so a client may receive the video stream from multiple peers, but each client needs to cache an entire video.

8 Outline  Abstract  Related work  Client collaboration with loopback  Loopback analytical model  Local repair mechanism enhancing reliability  Conclusion and future work

9 Basic assumptions for a client  Each client dynamically caches a portion of a video; storage space is limited.  A client delivers only one stream at a time, only during its own video playback and for a short period after the playback ends.  A client may fail or choose to leave while delivering video data to its peers.

10 Basic assumptions for a proxy  Storage space is limited.  Bandwidth is limited.  The prefix of a video is cached by the proxy server.

11 Forwarding Ring (1/3)  Clients arriving close to each other in time form a forwarding ring: the first client receives data from the proxy, and the last client returns data to the proxy.  The first client receives the video prefix from the proxy and the remaining portion of the video from the central server.
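A minimal sketch of how a loop's first client is served, assuming the proxy holds a prefix covering a fraction alpha of the video (the function and variable names below are illustrative assumptions, not the paper's code):

```python
# Illustrative split of a new loop's first stream between proxy and server.
# The prefix fraction "alpha" mirrors the proxy storage assumption on slide 10;
# function and variable names are assumptions made for this sketch.

def first_client_sources(video_length_s, alpha):
    """Return the time ranges served by the proxy (cached prefix) and the
    central server (remaining portion) to a loop's first client."""
    prefix_len = alpha * video_length_s
    return {
        "from_proxy": (0.0, prefix_len),              # cached prefix
        "from_server": (prefix_len, video_length_s),  # remaining portion
    }

# Example: a one-hour video with a 10% prefix cached at the proxy
print(first_client_sources(video_length_s=3600, alpha=0.1))
# {'from_proxy': (0.0, 360.0), 'from_server': (360.0, 3600)}
```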

12 Forwarding Ring (2/3)  If the next client joins in time: the cached frames are streamed to the newcomer, and frames that have already been transmitted are removed from the buffer.  If the next request does not arrive in time: the oldest frames are passed back to the proxy and evicted from the buffer, and the late newcomer starts a new loop.

13 Forwarding Ring (3/3)  The proxy does not keep a copy of a frame after transmitting it to a client.  If demand is high: there are a few long loops containing many clients, the entire video may be cached by the clients, and the proxy only needs to forward one stream to each loop and receive one stream from each loop.
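The loop-formation rule on slides 11-13 can be summarized in a small simulation. This is a minimal sketch under the assumption that each client contributes a fixed cache window of b seconds; the class and method names (Proxy, Loop, join) are made up for illustration and are not the authors' implementation.

```python
# Minimal sketch of loop formation (illustrative only; names and the
# fixed per-client buffer "b" are assumptions, not the paper's code).

class Loop:
    def __init__(self, start_time):
        self.clients = [start_time]      # arrival times of clients in this loop

    def last_arrival(self):
        return self.clients[-1]

class Proxy:
    def __init__(self, buffer_seconds):
        self.b = buffer_seconds          # cache window each client contributes
        self.loops = []                  # active forwarding rings

    def join(self, arrival_time):
        """Attach a new client to the newest loop if it arrives in time,
        otherwise start a new loop (the late-newcomer case)."""
        if self.loops and arrival_time - self.loops[-1].last_arrival() <= self.b:
            # In time: the predecessor still holds the needed frames and
            # streams them to the newcomer; frames already forwarded are evicted.
            self.loops[-1].clients.append(arrival_time)
        else:
            # Not in time: the oldest frames loop back to the proxy and the
            # newcomer becomes the first client of a new ring.
            self.loops.append(Loop(arrival_time))
        return len(self.loops)

# Example: clients arriving at these times with a 10-second client buffer
proxy = Proxy(buffer_seconds=10)
for t in [0, 4, 9, 30, 33, 70]:
    proxy.join(t)
print(len(proxy.loops), [loop.clients for loop in proxy.loops])
# -> 3 [[0, 4, 9], [30, 33], [70]]
```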

14

15 Outline  Abstract  Related work  Client collaboration with loopback  Loopback analytical model  Local repair mechanism enhancing reliability  Conclusion and future work

16 Loopback analytical model  Analyze the resource usage at the proxy and the central-server load due to a single video under a given client arrival process.  Notation: b: buffer size at each client; t_i: arrival time of the i'th client; α: storage space of the proxy as a fraction of the video size (0 < α < 1).
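Slides 17-21 present the analytical results; the equations themselves are not reproduced in this transcript. As a rough illustration of the quantities the model tracks, the sketch below estimates the aggregate client buffer space, the proxy stream count, and the central-server stream count from a list of arrival times, assuming each client caches b seconds of video, each loop costs the proxy one outgoing and one incoming stream (slide 13), and the first client of each loop pulls the remaining portion from the central server (slide 11). These approximations are assumptions for illustration, not the paper's closed-form results.

```python
# Back-of-the-envelope estimates for the quantities in slides 16-21.
# Illustrative approximations only (assumed here, not the paper's results).

def loop_count(arrivals, b):
    """Number of loops formed by a sorted list of arrival times,
    given a per-client cache window of b seconds."""
    loops = 0
    last = None
    for t in arrivals:
        if last is None or t - last > b:
            loops += 1                    # a late newcomer starts a new loop
        last = t
    return loops

def estimate(arrivals, b):
    n_loops = loop_count(arrivals, b)
    return {
        "aggregate_client_buffer_s": len(arrivals) * b,  # space pooled by clients
        "proxy_streams": 2 * n_loops,                    # one out + one in per loop
        "server_streams": n_loops,                       # one suffix stream per loop
    }

print(estimate([0, 4, 9, 30, 33, 70], b=10))
# {'aggregate_client_buffer_s': 60, 'proxy_streams': 6, 'server_streams': 3}
```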

17 Aggregate Loop Buffer Space

18 Data available locally

19 Proxy Buffer Space Utilization

20 Proxy I/O bandwidth usage

21 Central server load

22 Outline  Abstract  Related work  Client collaboration with loopback  Loopback analytical model  Local repair mechanism enhancing reliability  Conclusion and future work

23  Client failure effect: Lost data has to be obtained from the central server, incurring delays. A failure may affect succeeding clients in a loop. The higher the demand, the larger the influence of a failure on performance.  This issue is addressed with redundant caching schemes, which significantly reduce the server load and shorten the repair delay caused by retransmitting the missing data.
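The complete-local and partial-local repair schemes are defined on the following slides; their details are not reproduced here. The sketch below only illustrates the general idea of redundant caching as assumed for this example: if a surviving predecessor still holds the lost window, the successor repairs locally, otherwise it falls back to the central server. The function and parameter names are illustrative, not the paper's.

```python
# Illustrative sketch of failure repair in a forwarding ring (assumed
# behaviour; the paper's complete-local and partial-local schemes differ
# in how much redundant data each client keeps).

def repair_source(ring, failed_index, redundancy=1):
    """Return where the successor of a failed client can recover data from.

    ring:          list of client ids, in forwarding order
    failed_index:  position of the failed client in the ring
    redundancy:    how many predecessors' windows each client also caches
    """
    # With redundant caching, a surviving predecessor within the redundancy
    # range still holds the lost frames, so the repair stays local to the ring.
    for back in range(1, redundancy + 1):
        candidate = failed_index - back
        if candidate >= 0:
            return ring[candidate]       # local repair from a live predecessor
    # Otherwise the missing data must come from the central server,
    # which adds server load and delay for the succeeding clients.
    return "central-server"

ring = ["c0", "c1", "c2", "c3"]
print(repair_source(ring, failed_index=2))   # -> 'c1' (local repair)
print(repair_source(ring, failed_index=0))   # -> 'central-server'
```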

24 Complete-local and partial-local repair

25 Additional loads saved by local repairs

26

27 Outline  Abstract  Related work  Client collaboration with loopback  Loopback analytical model  Loopback performance for multiple videos  Local repair mechanism enhancing reliability  Conclusion and future work

28 Conclusion  Loopback is a mechanism for exploiting client collaboration in a two-level video streaming architecture.  It improves resource usage: server network bandwidth and I/O bandwidth, proxy network bandwidth and I/O bandwidth, and proxy storage space.  The effect of client failures was analyzed and local repair approaches were developed.

29 Future work  Allow a varying amount of resources to be committed by each client.  Each client can specify how much disk space may be used.  Based on its network bandwidth, each client can decide how many clients it wants to serve, and for how long.

30 (Figure: central server, proxy, and peers)

