Beneficial Caching in Mobile Ad Hoc Networks
Bin Tang, Samir Das, Himanshu Gupta
Computer Science Department, Stony Brook University

1 Beneficial Caching in Mobile Ad Hoc Networks Bin Tang, Samir Das, Himanshu Gupta Computer Science Department Stony Brook University

2 Outline
- Introduction: why caching in ad hoc networks?
- Formulation of the cache placement problem under memory constraints
- Beneficial caching
  - Centralized greedy algorithm with a provable bound
  - Distributed caching algorithm
    - Cache routing protocol
    - Distributed caching policy
- Simulation and analysis
  - Comparison of the centralized and distributed algorithms
  - Comparison of the distributed algorithm with the latest existing work (Yin & Cao, INFOCOM '04)
- Conclusion and future work

3 Motivation of Caching in MANETs
- MANET: a multi-hop wireless network consisting of mobile nodes without any infrastructure support
  - Each node is both a host and a router
  - Applications: rescue work, battlefields, outdoor assemblies, ...
- Scarce bandwidth and limited battery power/memory
  - Wireless communication is a significant drain on the battery
- Our goal: develop a communication-efficient caching technique under memory limitations

4 Problem Formulation of the Cache Placement Problem under Memory Constraints

5 General ad hoc network graph G(V, E)
- p data items D_1, D_2, ..., D_p; each D_i is originally stored at a source node S_i
- Each node i has a memory capacity of m_i pages
- Node i requests D_j with access frequency a_ij; the distance between nodes i and j is d_ij
- Definition: the variable A_ijk indicates that the j-th memory page of node i is selected for caching D_k
- Our goal: minimize the total access cost, i.e. the sum over all nodes i and items k of a_ik times the distance from i to the nearest copy of D_k
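The objective above can be sketched in code. This is an illustrative sketch, not from the slides: the data structures (frequency matrix, precomputed shortest-path distances, per-item cache sets) are assumptions chosen to make the cost function concrete.

```python
def total_access_cost(access_freq, dist, caches, sources):
    """Total access cost of a cache placement.

    access_freq[i][k]: frequency with which node i requests item k (a_ik)
    dist[i][j]: shortest-path distance between nodes i and j (d_ij)
    caches[k]: set of nodes currently caching item k
    sources[k]: source node of item k (always holds a copy)
    """
    cost = 0.0
    for i in range(len(access_freq)):
        for k, freq in enumerate(access_freq[i]):
            if freq == 0:
                continue
            # A request by i for item k is served by the nearest copy.
            holders = caches[k] | {sources[k]}
            cost += freq * min(dist[i][h] for h in holders)
    return cost
```

For example, on a two-node network where node 1 requests item 0 (stored at node 0) with frequency 3, caching a copy at node 1 drops its access cost to zero.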

6 Centralized Greedy Algorithm
Benefit of a variable: let Γ denote the set of variables already selected by the greedy algorithm at some stage. The benefit of A_ijk with respect to Γ is the reduction in total access cost obtained by adding A_ijk to the selection Γ.

7 Theorem: Algorithm 1 returns a solution Γ whose benefit is at least half of the optimal benefit.
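A hedged sketch of the greedy selection described above: repeatedly add the caching decision with the largest benefit (drop in total access cost) until no node has free memory or no positive-benefit choice remains. The slides do not give the algorithm's code; this sketch assumes unit-size items (one page per cached copy), which is an illustrative simplification.

```python
def greedy_placement(access_freq, dist, memory, sources):
    """Greedily place caches to minimize total access cost.

    memory[i]: number of free memory pages at node i.
    Returns caches[k]: set of nodes caching item k.
    Assumes each cached item occupies exactly one page.
    """
    num_nodes, num_items = len(access_freq), len(sources)
    caches = [set() for _ in range(num_items)]
    free = list(memory)

    def cost():
        total = 0.0
        for i in range(num_nodes):
            for k in range(num_items):
                f = access_freq[i][k]
                if f:
                    holders = caches[k] | {sources[k]}
                    total += f * min(dist[i][h] for h in holders)
        return total

    current = cost()
    while True:
        best = None
        for i in range(num_nodes):
            if free[i] == 0:
                continue
            for k in range(num_items):
                if i in caches[k] or i == sources[k]:
                    continue
                # Tentatively cache item k at node i and measure the benefit.
                caches[k].add(i)
                benefit = current - cost()
                caches[k].remove(i)
                if benefit > 0 and (best is None or benefit > best[0]):
                    best = (benefit, i, k)
        if best is None:
            return caches
        benefit, i, k = best
        caches[k].add(i)
        free[i] -= 1
        current -= benefit
```

Re-evaluating the full cost for every candidate keeps the sketch simple; an efficient implementation would update benefits incrementally.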

8 Distributed Algorithm
Cache routing protocol
- Cache routing table entry at node i: (D_j, H_j, N_j, d_j)
  - N_j is the closest node to i that stores a copy of D_j
  - H_j is the next hop on the shortest path to N_j
  - d_j is the weighted length of the shortest path to N_j
- Special cases:
  - If i is the source node of D_j, we assume D_j will never be removed
  - If i has cached D_j, then N_j is the nearest node (excluding i) that has a copy of D_j
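One way to maintain such a table is a distance-vector-style update on receiving a neighbor's advertisement; the slides do not fix the exact update rule, so the structure below is an assumption for illustration.

```python
def update_route(table, item, neighbor, link_cost, advertised):
    """Process a neighbor's advertisement for one data item.

    table: item -> (next_hop H_j, nearest cache N_j, distance d_j)
    advertised: (nearest cache known to the neighbor, its distance)
    Returns True if the entry changed (and should be re-advertised).
    """
    adv_node, adv_dist = advertised
    new_dist = link_cost + adv_dist
    entry = table.get(item)
    if entry is None or new_dist < entry[2]:
        # Found a closer cached copy: route through this neighbor.
        table[item] = (neighbor, adv_node, new_dist)
        return True
    return False
```

A node would apply this rule per item per advertisement, re-advertising only when an entry improves, so updates converge like any distance-vector protocol.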

9 Distributed caching policy
- Node i observes its local traffic and calculates the benefit B_ij of caching (or removing) a data item D_j:
  B_ij = Σ_{k observed locally} a_kj · d_j
- Node i then caches the m_i most beneficial data items
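The local decision can be sketched as follows. The dictionary shapes are assumptions: `observed[j]` maps each requesting node k to its locally observed frequency a_kj, and `route_dist[j]` is d_j from the cache routing table.

```python
def choose_cache(observed, route_dist, capacity):
    """Return the `capacity` most beneficial items for this node to cache.

    Benefit of item j: (sum of locally observed request frequencies a_kj)
    times d_j, the distance to the nearest other copy of the item.
    """
    benefits = {}
    for item, freqs in observed.items():
        benefits[item] = sum(freqs.values()) * route_dist[item]
    ranked = sorted(benefits, key=benefits.get, reverse=True)
    return set(ranked[:capacity])
```

Note that a rarely requested but far-away item can beat a popular nearby one, since the benefit weighs frequency by distance saved.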

10 Performance Evaluation
Comparison of the centralized and distributed algorithms
Parameters:
- Number of nodes in the network
- Transmission radius T_r
- Number of data items
- Number of clients accessing each data item
- Memory capacity of each node
Result: the distributed and centralized algorithms perform quite closely.

11 [Figures: varying the number of nodes, number of data items, memory capacity, and transmission radius]

12 [Figure: varying the number of clients]

13 Comparison of beneficial caching and cooperative caching (Yin & Cao, INFOCOM '04)
Experiment setup:
- ns-2 implementation
- Underlying routing protocol: DSDV
- 2000 m x 500 m area
- Random waypoint model in which 100 nodes move at speeds within (0, 20 m/s)
- T_r = 250 m, bandwidth = 2 Mbps
Experiment metrics:
- Average delay
- Message overhead
- Packet delivery ratio (PDR)

14 Server model:
- Two servers: server0 and server1 (to be consistent with Cao's paper)
- 100 data items: even-id data items in server0, odd-id data items in server1
- Data sizes uniformly distributed between 100 bytes and 1500 bytes
Client model:
- Each node generates a single stream of read-only queries
- Query generation times follow an exponential distribution with some mean value (if the requested data does not return to the requesting node before the next query is sent out, it is counted as a packet loss)
- Each node accesses 20 data items chosen uniformly out of the 100 data items
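The client workload above can be sketched as a simple generator. The mean interval, seed, and function shape are illustrative assumptions, not details from the slides.

```python
import random

def query_stream(mean_interval, item_ids, num_queries, seed=0):
    """Yield (time, item) query events for one client node.

    Inter-query times are exponentially distributed with the given mean;
    each node queries a fixed uniform sample of 20 items.
    """
    rng = random.Random(seed)
    items = rng.sample(item_ids, 20)  # the 20 items this node accesses
    t = 0.0
    for _ in range(num_queries):
        t += rng.expovariate(1.0 / mean_interval)
        yield (t, rng.choice(items))
```

In a simulation, a query would be counted as lost if its response has not arrived by the next event time, matching the loss rule described above.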

15 Beneficial caching:
- Each node maintains a cache routing table, each entry of which indicates the closest cache of a data item; the table is maintained by flooding
- Each node observes the data requests passing by and records how many times it sees each item
- When some threshold number of data requests is reached (100 in our experiments), each node calculates the benefit of caching
- Cache replacement is based on the benefit
Cooperative caching (Yin & Cao, INFOCOM '04):
- Cache data – the data packet is cached if its size is smaller than some threshold value
- Cache path – otherwise, the id of the requestor is cached
- The requestor always caches the data packet; LRU is the cache replacement policy

16

17 Experiment Analysis
In a static network:
- Ours performs much better in average delay (3 times better); when traffic gets very heavy (query generation time < 5 s), ours is 4-5 times better
- Better PDR performance (100% vs. 98% in heavy traffic)
- Worse message overhead when traffic is light
In a mobile network (max speed 20 m/s):
- Our delay performance is slightly better
- Better PDR (87% vs. 75% for most of the range)
- Worse message overhead (5 times worse)

18

19 Conclusions
- We propose and design a benefit-based caching paradigm for wireless ad hoc networks.
- A centralized algorithm for static networks is given, with a provable bound under each node's memory constraint.
- The distributed version performs very close to the centralized one.
- Compared with the latest published work in a mobile environment, our scheme performs better in some ranges of parameters.

20 Ongoing and future work
- We are currently working on mobility-based caching techniques
- Reduce the overhead of our scheme

