Slide 1: Benefit-based Data Caching in Ad Hoc Networks
Bin Tang, Himanshu Gupta, and Samir Das, Department of Computer Science, Stony Brook University

Slide 2: Outline
- Motivation
- Problem Statement
- Algorithm and Protocol Design
- Performance Evaluation
- Conclusions and Future Work

Slide 3: Motivation
- Ad hoc networks are resource constrained: bandwidth is scarce, and battery energy and memory are limited.
- Caching can save access/communication cost, and thus energy and bandwidth.
- Our work is the first to present a distributed caching implementation based on an approximation algorithm.

Slide 4: Problem Statement
Given:
- an ad hoc network graph G(V, E)
- multiple data items P, each stored at its server node
- the access frequency of each node for each data item
- a memory constraint at each node
Goal: select cache nodes so as to minimize the total access cost
  ∑_{i ∈ V} ∑_{j ∈ P} (access frequency of i for j) × (distance from i to the nearest cache of j)
subject to the memory constraints. (The access-cost objective is sketched in code below.)
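To make the objective concrete, here is a minimal sketch (not from the paper) of the access-cost computation on an adjacency-list graph, assuming hop-count distances obtained by BFS; the names access_freq, servers, and caches are illustrative placeholders.

```python
# Minimal sketch of the total-access-cost objective, assuming hop-count
# distances on an undirected adjacency-list graph.  Names are illustrative.
from collections import deque

def hop_distances(adj, sources):
    """Multi-source BFS: distance from every node to its nearest source."""
    dist = {v: float("inf") for v in adj}
    queue = deque()
    for s in sources:
        dist[s] = 0
        queue.append(s)
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if dist[v] == float("inf"):
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def total_access_cost(adj, access_freq, servers, caches):
    """access_freq[i][j]: access frequency of node i for item j.
    servers[j]: server node of item j; caches[j]: set of cache nodes of j.
    Cost = sum_i sum_j access_freq[i][j] * dist(i, nearest copy of j)."""
    cost = 0
    for j, server in servers.items():
        dist = hop_distances(adj, {server} | caches.get(j, set()))
        for i in adj:
            cost += access_freq[i].get(j, 0) * dist[i]
    return cost
```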

Slide 5: Algorithm Design Outline
- Centralized Greedy Algorithm (CGA): delivers a solution whose benefit is at least 1/2 of the optimal benefit (for uniform-size data items).
- Distributed Greedy Algorithm (DGA): purely localized.

Slide 6: Centralized Greedy Algorithm (CGA)
- Benefit of caching a data item at a node: the reduction in total access cost.
- CGA iteratively caches data items into the memory pages of nodes, choosing at each step the placement that maximizes the benefit (a brute-force sketch follows below).
- Theorem: CGA delivers a solution whose total benefit is at least 1/2 of the optimal benefit for uniform-size data items, and at least 1/4 for non-uniform-size data items.
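For illustration only, a brute-force sketch of this greedy loop for uniform-size items (one memory page per cached item), reusing total_access_cost from the previous sketch; a real implementation would compute benefits incrementally rather than recomputing the full cost at each step.

```python
# Brute-force CGA sketch for uniform-size items: repeatedly cache the
# (node, item) pair with the largest positive benefit until no page helps.
# `memory[i]` is the number of free pages at node i (an assumed input).
def cga(adj, access_freq, servers, memory):
    caches = {j: set() for j in servers}
    free = dict(memory)
    current = total_access_cost(adj, access_freq, servers, caches)
    while True:
        best = None  # (benefit, node, item)
        for i in adj:
            if free[i] == 0:
                continue
            for j in servers:
                if i in caches[j] or i == servers[j]:
                    continue
                caches[j].add(i)
                benefit = current - total_access_cost(adj, access_freq, servers, caches)
                caches[j].remove(i)
                if benefit > 0 and (best is None or benefit > best[0]):
                    best = (benefit, i, j)
        if best is None:
            return caches
        benefit, i, j = best
        caches[j].add(i)
        free[i] -= 1
        current -= benefit
```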

Slide 7: Proof Sketch
- L: greedy solution; C: total benefit of the greedy solution.
- L': optimal solution; O: total benefit of the optimal solution.
- G': modified network in which each node has twice the memory capacity it has in G and holds the data items selected by both CGA and the optimal solution.
- O': benefit of G' = sum of the benefits of adding L and then L', in that order.
- O ≤ O' = C + ∑ (benefit of L' w.r.t. L) ≤ C + C = 2C, since each item of L' added after L contributes no more benefit than the greedy item chosen for the same memory page (restated in symbols below).
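Restated in symbols, under my reading of the sketch, with B(S | T) denoting the benefit of adding placement S on top of an existing placement T (so C = B(L | ∅) and O = B(L' | ∅)):

```latex
% Restatement of the bound; B(S \mid T) = benefit of adding placement S
% on top of an existing placement T, so C = B(L \mid \emptyset).
\[
  O \;\le\; O' \;=\; C + B(L' \mid L) \;\le\; C + C \;=\; 2C .
\]
% The middle inequality holds because each item of L' added after L yields
% no more benefit than the greedy item chosen for the same memory page.
```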

Slide 8: Distributed Greedy Algorithm (DGA)
- Each node maintains a nearest-cache table: the nearest cache node for each data item.
- If a node caches a data item, it also maintains the second-nearest cache for that item.
- Maintenance of the nearest-cache and second-nearest-cache entries, and its correctness (distance values are assumed to be available from the underlying routing protocol).
- Localized caching policy.

Slide 9: Maintenance of the Nearest-Cache Table
When node i caches data item Dj:
- i notifies the server of Dj (the server maintains a cache list Cj for Dj) and broadcasts (i, Dj) to its neighbors.
- On receiving (i, Dj): if i is nearer than the current nearest cache of Dj, update the table entry and re-broadcast to neighbors; otherwise forward the message toward the current nearest cache of Dj.
When node i deletes Dj:
- i obtains Cj from the server of Dj and broadcasts (i, Dj, Cj) to its neighbors.
- On receiving (i, Dj, Cj): if i is the current nearest cache of Dj, update the entry using Cj and re-broadcast; otherwise forward the message toward the current nearest cache of Dj.
(A schematic sketch of the add-cache handling follows below.)
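A schematic sketch of how a node might process the add-cache message described above; the class, the table layout, and the broadcast/forward callbacks are assumptions rather than the paper's API, and the non-update branch follows one reading of the slide (forwarding toward the node's current nearest cache so that second-nearest-cache entries can also be updated).

```python
# Schematic sketch of add-cache handling; helpers and message format are
# hypothetical stand-ins for the real routing-layer calls.
class NearestCacheTable:
    def __init__(self, node_id, distance):
        self.node_id = node_id
        self.distance = distance      # distance(a, b), from the routing protocol
        self.nearest_cache = {}       # data item -> nearest cache node

    def on_add_cache(self, new_cache, item, broadcast, forward):
        """Handle an AddCache message: node `new_cache` now caches `item`."""
        current = self.nearest_cache.get(item)
        if current is None or self.distance(self.node_id, new_cache) < self.distance(self.node_id, current):
            # The new cache is closer: adopt it and keep flooding the update.
            self.nearest_cache[item] = new_cache
            broadcast(("ADD", new_cache, item))
        else:
            # No table change here; forward toward the current nearest cache
            # (one reading of the slide's "else" rule).
            forward(current, ("ADD", new_cache, item))
```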

Slide 10: Mobility
- Servers periodically broadcast their cache lists.

Slide 11: Localized Caching Policy
- Observe local traffic and calculate the local benefit of caching or removing a data item.
- Cache the most "beneficial" data items.
- Use local benefit per unit of data item size for cache replacement.
- Use a benefit threshold to suppress traffic (an illustrative sketch follows below).
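An illustrative sketch of such a policy: cache a passing item only when its locally observed benefit per unit size clears a threshold and exceeds the benefit density of whatever it would evict. All names here are assumptions, not the paper's interface.

```python
# Illustrative replacement test: items expose .size and .local_benefit,
# where local_benefit is estimated from locally observed traffic.
def should_cache(candidate, cached_items, free_space, threshold):
    """Return (decision, victims): whether to cache `candidate` and which
    currently cached items to evict to make room for it."""
    density = candidate.local_benefit / candidate.size
    if density < threshold:                          # suppress low-value churn
        return False, []
    victims, reclaimed = [], free_space
    # Consider evicting the least beneficial-per-byte items first.
    for item in sorted(cached_items, key=lambda it: it.local_benefit / it.size):
        if reclaimed >= candidate.size:
            break
        if item.local_benefit / item.size >= density:
            return False, []                         # eviction would cost more than we gain
        victims.append(item)
        reclaimed += item.size
    if reclaimed < candidate.size:
        return False, []
    return True, victims
```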

Slide 12: Performance Evaluation
- CGA vs. DGA comparison
- DGA vs. HybridCache comparison

Slide 13: "Supporting Cooperative Caching in Ad Hoc Networks" (Yin and Cao, INFOCOM'04)
- CacheData: caches passing-by data items.
- CachePath: caches the path to the nearest cache.
- HybridCache: caches the data item if it is small enough, otherwise caches the path to the data.
- The only prior work offering a purely distributed cache placement algorithm under a memory constraint.

Slide 14: CGA vs. DGA
- Random networks of 100 to 500 nodes in a 30 x 30 region.
- Parameters: topology-related (number of nodes, transmission radius); application-related (number of data items, number of clients); problem constraint (memory capacity).
- Summary of simulation results: CGA performs slightly better by exploiting global information; DGA performs quite close to CGA; the performance difference decreases with increasing memory capacity.

Slide 15: Varying Number of Data Items and Memory Capacity (transmission radius = 5, number of nodes = 500)

Slide 16: Varying Network Size and Transmission Radius (number of data items = 1000, each node's memory capacity = 20 units)

Slide 17: DGA vs. HybridCache
Simulation setup:
- ns-2, with DSDV as the routing protocol
- 2000 m x 500 m area
- Random waypoint mobility model; 100 nodes moving at speeds within (0, 20 m/s)
- Transmission range = 250 m, bandwidth = 2 Mbps
Simulation metrics:
- Average query delay
- Query success ratio
- Total number of messages

Slide 18: Server and Client Models
Server model:
- Two servers, 1000 data items: even-id items on one server, odd-id items on the other.
- Data size: [100, 1500] bytes.
Client model:
- A single stream of read-only queries.
- Data access models: a spatial access pattern (access frequency depends on geographic location) and a random pattern (each node accesses 200 data items chosen randomly from the 1000).
Naïve caching baseline: caches any passing-by item if there is free space, and uses LRU for cache replacement.

Slide 19: (figures only; no transcript text)

Slide 20: Summary of Simulation Results
- Both HybridCache and DGA outperform the Naïve approach.
- DGA outperforms HybridCache on all metrics.
- For frequent queries and small cache sizes, DGA has much better average query delay and query success ratio.
- Under high mobility, DGA has slightly worse average delay but a much better query success ratio.

Slide 21: Conclusions and Future Work
- Data caching problem under a memory constraint.
- Provable approximation algorithm.
- Feasible distributed implementation.
- Future work: reduce the nearest-cache table size; handle node failures; benefit? ... a game-theoretical analysis?

Slide 22: Questions?

Slide 23: Correctness of the Maintenance
The nearest-cache table is correct:
- For a node k whose nearest-cache table needs to change in response to a new cache i, every intermediate node between k and i also needs to change its table.
The second-nearest cache is correct:
- For a cache node k whose second-nearest cache should change to i in response to the new cache i, there exist two distinct neighboring nodes i1 and i2 such that the nearest-cache node of i1 is k and the nearest-cache node of i2 is i.


