1 Decentralized Coded Caching Attains Order-Optimal Memory-Rate Tradeoff Mohammad Ali Maddah-Ali Bell Labs, Alcatel-Lucent joint work with Urs Niesen Allerton October 2013

2 Video on Demand
- High temporal traffic variability
- Caching (prefetching) can help to smooth the traffic

3 Caching (Prefetching)
- Placement phase: the server populates the caches; demands are not yet known
- Delivery phase: requests are revealed, and the server delivers the content

4 Problem Setting
N files; K users, each with a cache of size M, connected to the server over a shared link.
- Placement: each user caches an arbitrary function of the files (linear, nonlinear, ...)
- Delivery: requests are revealed to the server, which sends a function of the files
Question: What is the smallest worst-case rate R(M) needed in the delivery phase? How should we choose (1) the caching functions and (2) the delivery functions?

5 Coded Caching
N files, K users, cache size M.
- Uncoded caching: caches are used to deliver content locally, so the local cache size matters.
- Coded caching [Maddah-Ali, Niesen 2012]: the main gain in caching is global; the global cache size matters, even though the caches are isolated.

6 Centralized Coded Caching (Approximately Optimal) [Maddah-Ali, Niesen, 2012]
N=3 files (A, B, C), K=3 users, cache size M=2. Each file is split into three parts of size 1/3, indexed by the 2-subsets of users: A12, A13, A23, and similarly for B and C. User k caches every part whose index contains k. If the three users request A, B, and C respectively, the single coded message A23 ⊕ B13 ⊕ C12 serves all three at once: a multicasting opportunity between three users with different demands.
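The XOR trick on this slide can be made concrete with a small runnable sketch. The toy byte strings and names like "A23" (meaning part {2,3} of file A) are my own encoding of the slide's figure, not from the talk:

```python
# N=3 files (A, B, C), K=3 users, cache size M=2. Each file is split into
# three equal parts indexed by the 2-subsets {1,2}, {1,3}, {2,3}; user k
# caches every part whose index contains k. One XOR serves all three
# distinct demands at once.
from itertools import combinations
from functools import reduce

def xor(*chunks):
    """Bytewise XOR of equal-length byte strings."""
    return bytes(reduce(lambda a, b: a ^ b, t) for t in zip(*chunks))

# Toy 4-byte file parts; "A23" is part {2,3} of file A.
parts = {f + "".join(map(str, s)): bytes([ord(f) + sum(s) + i for i in range(4)])
         for f in "ABC" for s in combinations((1, 2, 3), 2)}

# Placement: user k caches every part whose subset index contains k.
cache = {k: {name: v for name, v in parts.items() if str(k) in name}
         for k in (1, 2, 3)}

# Demands: user 1 wants A, user 2 wants B, user 3 wants C. Each user is
# missing exactly the part indexed by the other two users.
tx = xor(parts["A23"], parts["B13"], parts["C12"])  # single multicast message

# User 1 cancels B13 and C12 (both in its cache) to recover its missing A23.
recovered = xor(tx, cache[1]["B13"], cache[1]["C12"])
assert recovered == parts["A23"]
```

The same cancellation works symmetrically for users 2 and 3, which is why one transmission of size 1/3 replaces three separate unicasts.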

7 Centralized Coded Caching
N=3 files, K=3 users, cache size M=2. Centralized caching needs the number and identity of the users in advance. In practice this is not the case:
- Users may turn off
- Users may be asynchronous
- The topology may be time-varying (wireless)
Question: Can we achieve a similar gain without such knowledge?

8 Decentralized Proposed Scheme
N=3 files, K=3 users, cache size M=2.
- Prefetching: each user caches 2/3 of the bits of each file, chosen randomly, uniformly, and independently.
- Delivery: greedy linear (XOR) encoding of the missing parts.
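A minimal simulation sketch (my own illustration, not from the slides) of this random placement: every user independently caches each bit with probability M/N = 2/3, and the fraction of bits held by exactly a given subset of users concentrates around its expectation, which is what the greedy delivery analysis relies on:

```python
# Decentralized prefetching: each of K users caches each bit of a file
# independently with probability q = M/N. The fraction of bits cached by
# exactly a subset S of users concentrates around q**|S| * (1-q)**(K-|S|).
import random

K, F = 3, 200_000            # 3 users, 200k bits per file
q = 2 / 3                    # M/N = 2/3, as in the slide's example

random.seed(0)
# caches[k][i] == True iff user k stored bit i of the file.
caches = [[random.random() < q for _ in range(F)] for _ in range(K)]

# Empirical fraction of bits cached by exactly users {1, 2} (and not user 3).
exact_12 = sum(c0 and c1 and not c2
               for c0, c1, c2 in zip(*caches)) / F
expected = q * q * (1 - q)   # (2/3)^2 * (1/3) ≈ 0.148
print(round(exact_12, 3), round(expected, 3))
```

These subset-indexed bit groups play the role that the deterministic parts A12, A13, A23 played in the centralized scheme, which is why the same XOR delivery applies.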

9 Decentralized Caching

10 Centralized Prefetching: Decentralized Prefetching:
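The placement formulas on this slide were rendered as images and did not survive extraction. A hedged reconstruction from the two cited papers (verify against the originals):

```latex
% Centralized placement (with t = KM/N an integer): split each file W_n into
% \binom{K}{t} subfiles W_{n,S}, one per t-subset S of users; user k stores
% W_{n,S} for every S with k \in S.
%
% Decentralized placement: each user stores MF/N bits of each file, chosen
% uniformly at random, independently across users. A given bit is then
% cached by exactly the users in a subset S with probability
\[
  \Pr\bigl[\text{bit cached exactly by } S\bigr]
    = \left(\frac{M}{N}\right)^{|S|}
      \left(1-\frac{M}{N}\right)^{K-|S|}.
\]
```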

11 Comparison
N files, K users, cache size M. Schemes compared: uncoded, coded centralized [Maddah-Ali, Niesen, 2012], and coded decentralized.
- Local cache gain: proportional to the local cache size; offers only a minor gain.
- Global cache gain: proportional to the global cache size; offers a gain on the order of the number of users.
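The rate expressions behind this comparison also appeared as images. A small numeric sketch, using the rate formulas as stated in the cited Maddah-Ali–Niesen papers (to the best of my recollection; verify against the originals, and assume M ≤ N):

```python
# Worst-case delivery rates for N files, K users, cache size M:
#   uncoded:       R = K (1 - M/N)
#   centralized:   R = K (1 - M/N) / (1 + K M / N)
#   decentralized: R = (N/M) (1 - M/N) (1 - (1 - M/N)**K)

def rate_uncoded(N, K, M):
    return K * (1 - M / N)

def rate_centralized(N, K, M):
    return K * (1 - M / N) / (1 + K * M / N)

def rate_decentralized(N, K, M):
    return (N / M) * (1 - M / N) * (1 - (1 - M / N) ** K)

N, K, M = 100, 100, 20
print(rate_uncoded(N, K, M))        # ≈ 80   — grows linearly with K
print(rate_centralized(N, K, M))    # ≈ 3.8  — bounded by N/M - 1
print(rate_decentralized(N, K, M))  # ≈ 4.0  — within a constant of centralized
```

The numbers illustrate the slide's point: the local gain only scales the uncoded rate by (1 - M/N), while the global gain divides it by roughly 1 + KM/N, a factor on the order of the number of users.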

12 Can We Do Better?
Theorem: By an information-theoretic bound, the proposed scheme is optimal to within a constant factor in rate. The constant gap is uniform in the problem parameters, so there are no significant gains beyond the local and global ones.

13 Asynchronous Delivery
(Figure: each of three requested files is divided into Segments 1–3; delivery proceeds segment by segment, which accommodates asynchronous requests.)

14 Conclusion
We can achieve within a constant factor of the optimal caching performance through:
- Decentralized and uncoded prefetching
- Greedy and linearly coded delivery
This is a significant improvement over uncoded caching schemes: a reduction in rate up to the order of the number of users.
Papers available on arXiv:
- Maddah-Ali and Niesen, "Fundamental Limits of Caching" (Sept. 2012)
- Maddah-Ali and Niesen, "Decentralized Coded Caching Attains Order-Optimal Memory-Rate Tradeoff" (Jan. 2013)
- Niesen and Maddah-Ali, "Coded Caching with Nonuniform Demands" (Aug. 2013)

