
Slide 1: Benefit-based Data Caching in Ad Hoc Networks
Bin Tang, Himanshu Gupta, and Samir Das
Computer Science Department, Stony Brook University

Slide 2: Outline
- Problem Addressed and Motivation
- Problem Formulation
- Related Work
- Centralized Greedy Algorithm
- Distributed Implementation
- Performance Evaluation
- Conclusions

Slide 3: Problem Addressed
In a general ad hoc network with limited memory at each node, where should data items be cached so that the total access (communication) cost is minimized?

Slide 4: Motivation
- Ad hoc networks are resource constrained: limited bandwidth, battery energy, and memory
- Caching can reduce access (communication) cost, and thus save bandwidth and energy

Slide 5: Problem Formulation
Given:
- Network graph G(V, E)
- Multiple data items
- Access frequencies (for each node and data item)
- Memory constraint at each node
Select the data items to cache at each node, under the memory constraint, so as to minimize:
total access cost = ∑(over nodes) ∑(over data items) (distance from the node to the nearest cache of that item) × (access frequency)
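For reference, the objective can be written compactly as follows. This is a sketch; the notation (a_ij for access frequency, s_j for item size, M_i for memory, ν(i, j) for the node holding item j nearest to i) is ours, not from the slides.

```latex
\begin{align*}
\min_{C_1,\dots,C_{|V|}} \;\; & \sum_{i \in V} \sum_{j \in D} a_{ij}\, d\bigl(i,\, \nu(i,j)\bigr) \\
\text{subject to} \;\; & \sum_{j \in C_i} s_j \le M_i \quad \forall i \in V
\end{align*}
```

Here C_i is the set of items cached at node i, D the set of data items, and d(·,·) the hop distance in G.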

Slide 6: Related Work
- Related to the facility-location and k-median problems, which have no memory constraint
- Baev and Rajaraman: 20.5-approximation algorithm for uniform-size data items
- For non-uniform sizes, there is no polynomial-time approximation unless P = NP
- We circumvent the intractability by approximating the "benefit" instead of the access cost

Slide 7: Related Work (continued)
- Two major empirical works on distributed caching: Hara [Infocom'99]; Yin and Cao [Infocom'04] (we compare our work with the latter)
- Our work is the first to present a distributed caching scheme based on an approximation algorithm

Slide 8: Algorithms
- Centralized Greedy Algorithm (CGA): delivers a solution whose "benefit" is at least 1/2 of the optimal benefit
- Distributed Greedy Algorithm (DGA): purely localized

Slide 9: Centralized Greedy Algorithm (CGA)
Benefit of caching a data item at a node = the reduction in total access cost, i.e., (total access cost before caching) − (total access cost after caching).
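In symbols (a sketch using the notation introduced above, not verbatim from the slides), the benefit of caching item j at node i, given a current placement C, is:

```latex
B(i, j \mid C)
= \mathrm{cost}(C) - \mathrm{cost}\bigl(C \cup \{(i,j)\}\bigr)
= \sum_{k \in V} a_{kj} \Bigl[ d\bigl(k, \nu_C(k,j)\bigr) - \min\bigl\{ d\bigl(k, \nu_C(k,j)\bigr),\; d(k,i) \bigr\} \Bigr]
```

where ν_C(k, j) is the nearest copy of item j (server or cache) from node k under placement C.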

Slide 10: Centralized Greedy Algorithm (CGA)
- CGA iteratively selects the most beneficial (data item, node to cache at) pair, i.e., at each stage it picks the pair with the maximum benefit (see the sketch below).
- Theorem: CGA is (1/2)-approximate for uniform-size data items, and (1/4)-approximate for non-uniform-size data items.
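A minimal Python sketch of the greedy selection loop, under our own naming assumptions (`freq`, `size`, `memory`, `dist`, `servers` are illustrative inputs, and the benefit is recomputed naively rather than incrementally as an optimized implementation would):

```python
import itertools

def centralized_greedy(nodes, items, freq, size, memory, dist, servers):
    """Greedy cache placement: repeatedly cache the (node, item) pair with
    the largest benefit until no pair fits in memory or yields a gain.
    freq[i][j]  : access frequency of node i for item j
    size[j]     : size of item j
    memory[i]   : remaining memory at node i
    dist[i][k]  : hop distance between nodes i and k
    servers[j]  : node that originally hosts item j
    """
    cache = {i: set() for i in nodes}                 # current placement
    nearest = {(i, j): dist[i][servers[j]] for i in nodes for j in items}

    def benefit(i, j):
        # Reduction in total access cost if node i caches item j now.
        return sum(freq[k][j] * max(0, nearest[(k, j)] - dist[k][i])
                   for k in nodes)

    while True:
        candidates = [(i, j) for i, j in itertools.product(nodes, items)
                      if j not in cache[i] and size[j] <= memory[i]]
        if not candidates:
            break
        i, j = max(candidates, key=lambda p: benefit(*p))
        if benefit(i, j) <= 0:
            break
        cache[i].add(j)
        memory[i] -= size[j]
        for k in nodes:                               # update nearest-copy distances
            nearest[(k, j)] = min(nearest[(k, j)], dist[k][i])
    return cache
```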

Slide 11: CGA Approximation, Proof Sketch
- G': a modified copy of G in which each node has twice the memory it has in G, and caches the data items selected by both CGA and the optimal solution.
- B(Optimal in G) ≤ B(Greedy + Optimal in G')
  = B(Greedy) + B(Optimal w.r.t. Greedy)
  ≤ B(Greedy) + B(Greedy)   [by the greedy choice]
  = 2 × B(Greedy)

Slide 12: Distributed Greedy Algorithm (DGA)
- Each node caches the most beneficial data items, where the benefit is estimated from "local traffic."
- Local traffic at a node includes: its own data requests; requests for the data items it holds; and requests it forwards to other nodes.
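A rough sketch of how a node might tally local benefit from the traffic it sees. This is our own illustration under stated assumptions; the exact accounting in the paper (e.g., how forwarded requests are weighted) may differ.

```python
from collections import defaultdict

class LocalBenefitEstimator:
    """Estimate the benefit of caching each item at this node from the
    'local traffic' the node observes (own, served, and forwarded requests)."""

    def __init__(self, dist_to_nearest_cache):
        # dist_to_nearest_cache(item) -> hop distance to the item's nearest copy
        self.dist_to_nearest_cache = dist_to_nearest_cache
        self.local_freq = defaultdict(int)   # item -> locally observed requests

    def observe_request(self, item):
        # Called for the node's own requests, requests it serves locally,
        # and requests it forwards toward other nodes.
        self.local_freq[item] += 1

    def local_benefit(self, item):
        # Each locally observed request would save roughly the hops to the
        # current nearest cache if the item were cached at this node.
        return self.local_freq[item] * self.dist_to_nearest_cache(item)
```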

Slide 13: DGA: Nearest-Cache Table
Why do we need it?
- To forward requests to the nearest cache
- For the local benefit calculation
What is it?
- Each node keeps the ID of the nearest cache for each data item, with entries of the form (data item, nearest cache)
- The table is maintained on top of the routing table
Maintenance: next slides
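A minimal sketch of what such a table might look like in code; class and method names (`NearestCacheTable`, `dist_to`) are our own, not from the paper.

```python
class NearestCacheTable:
    """Per-node table mapping each data item to the nearest node caching it.
    Kept alongside (not inside) the routing table."""

    def __init__(self, servers, dist_to):
        # Initially the nearest "cache" of every item is its server.
        # dist_to(n) returns this node's hop distance to node n (from routing).
        self.dist_to = dist_to
        self.nearest = dict(servers)          # item -> nearest cache node

    def nearest_cache(self, item):
        return self.nearest[item]

    def distance_to_nearest(self, item):
        # Used both to forward requests and to compute local benefit.
        return self.dist_to(self.nearest[item])
```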

Slide 14: Maintenance of the Nearest-Cache Table
When node i caches data item D_j:
- i broadcasts (i, D_j) to its neighbors
- i notifies the server of D_j, which keeps a list of caches
On receiving (i, D_j):
- if i is nearer than the current nearest cache of D_j, update the entry and forward the message
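A sketch of the receive-side handler for a cache-addition announcement, continuing the hypothetical `NearestCacheTable` above (`forward` stands in for rebroadcasting to neighbors):

```python
def on_cache_add(table, item, new_cache, forward):
    """Handle an (i, D_j) announcement that node `new_cache` now caches `item`."""
    if table.dist_to(new_cache) < table.dist_to(table.nearest[item]):
        table.nearest[item] = new_cache      # adopt the closer cache
        forward((new_cache, item))           # propagate only because our entry changed
```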

Slide 15: Maintenance of the Nearest-Cache Table (II)
When node i deletes D_j:
- i gets the list of caches C_j from the server of D_j
- i broadcasts (i, D_j, C_j) to its neighbors
On receiving (i, D_j, C_j):
- if i is the current nearest cache for D_j, update the entry using C_j and forward the message
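The matching deletion handler, again as an illustrative sketch with our own names; we assume `cache_list` still contains at least the server of the item.

```python
def on_cache_delete(table, item, deleted_cache, cache_list, forward):
    """Handle an (i, D_j, C_j) announcement that `deleted_cache` dropped `item`.
    `cache_list` is the server-supplied list of remaining copies of `item`."""
    if table.nearest[item] == deleted_cache:
        # Re-point the entry to the closest remaining copy (cache or server).
        table.nearest[item] = min(cache_list, key=table.dist_to)
        forward((deleted_cache, item, cache_list))
```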

Slide 16: Maintenance of the Nearest-Cache Table (III)
Further details pertain to:
- Mobility
- Second-nearest-cache entries (needed for the benefit calculation when deleting a cached item)
- Benefit thresholds

Slide 17: Performance Evaluation
- CGA vs. DGA comparison
- DGA vs. HybridCache comparison

Slide 18: CGA vs. DGA
Summary of simulation results: DGA performs quite close to CGA over a wide range of parameter values.

Slide 19: Varying Number of Data Items and Memory Capacity
(Figure; transmission radius = 5, number of nodes = 500)

Slide 20: DGA vs. Yin and Cao's Work
Yin and Cao [Infocom'04]:
- CacheData: caches passing-by data items
- CachePath: caches the path to the nearest cache
- HybridCache: caches the data if its size is small enough, otherwise caches the path to the data
- The only other purely distributed cache-placement scheme with a memory constraint

Slide 21: DGA vs. HybridCache [YC 2004]
Simulation setup:
- ns-2, with DSDV as the routing protocol
- Random waypoint model; 100 nodes moving at speeds within (0, 20 m/s) in a 2000 m x 500 m area
- Transmission radius = 250 m, bandwidth = 2 Mbps
Performance metrics:
- Average query delay
- Query success ratio
- Total number of messages

Slide 22: Server Model
- 1000 data items, divided between two servers
- Data item sizes: [100, 1500] bytes
Data access models:
- Random: each node accesses 200 data items chosen randomly from the 1000
- Spatial: (details skipped)
Naïve caching algorithm (baseline): caches any passing-by data item and uses LRU for cache replacement

Slide 23: Varying Query Generation Time on the Random Access Pattern
(Figure)

Slide 24: Summary of Simulation Results
- Both HybridCache and DGA outperform the naïve approach
- DGA outperforms HybridCache on all metrics, especially for frequent queries and small cache sizes
- Under high mobility, DGA has slightly worse average delay but a much better query success ratio

Slide 25: Conclusions
- Data caching problem for multiple items under memory constraint
- Centralized approximation algorithm
- Localized distributed implementation
- First work to present a distributed caching scheme based on an approximation algorithm

Slide 26: Questions?

Slide 27: Varying Network Size and Transmission Radius
(Figure; number of data items = 1000, memory capacity per node = 20 units)

Slide 28: Correctness of the Maintenance
- Nearest-cache table is correct: for a node k whose nearest-cache entry must change in response to a new cache i, every intermediate node between k and i also needs to change its entry, so the forwarded update reaches k.
- Second-nearest-cache entry is correct: for a cache node k whose second-nearest cache should change to i in response to the new cache i, there exist two distinct neighboring nodes i_1, i_2 such that the nearest-cache node of i_1 is k and the nearest-cache node of i_2 is i.


Slide 31: An Example
(Figure: example network with nodes A–F)

