
A Scalable Location Service for Geographic Ad Hoc Routing
Jinyang Li, John Jannotti, Douglas S. J. De Couto, David R. Karger, Robert Morris


1 A Scalable Location Service for Geographic Ad Hoc Routing
Jinyang Li, John Jannotti, Douglas S. J. De Couto, David R. Karger, Robert Morris
Presented by Maulik Shah (shah.23@wright.edu)

2 Comparison between Fixed and Ad-Hoc Networks
Fixed networks
- Disadvantages:
  1. Large up-front investment.
  2. Users may be so sparse or so dense that the investment is not economical.
- Advantages of the static nature:
  1. Each node pre-computes routes through the topology.
  2. Fixed networks embed routing hints in node addresses.

3 Comparison between Fixed and Ad-Hoc Networks
Ad-hoc networks
- No prior investment is required.
- Network nodes agree to relay each other's packets toward their ultimate destinations.
- The authors describe Grid, which combines a cooperative infrastructure with location information to implement routing.
- Grid uses geographic forwarding.

4 Overview
- Motivation for Grid: scalable routing for large ad hoc networks (metropolitan area, 1000s of nodes).
- Protocol scalability: the number of packets each node has to forward and the amount of state kept at each node should grow slowly with the size of the network.
- The failure of one node should not affect the reachability of many other nodes.

5 Current Routing Strategies
- Proactive topology distribution (table-driven routing, e.g. DSDV): reacts slowly to mobility in large networks.
- On-demand flooded queries (source-driven routing, e.g. DSR): too much protocol overhead in large networks.

6 Flooding Causes Too Much Packet Overhead in Big Networks
[Figure: average packets transmitted per node per second vs. number of nodes]
Flooding-based on-demand routing works best in small networks.

7 Geographic Forwarding Scales Well
[Figure: nodes A through G, with C's radio range shown]
- A addresses a packet to G's latitude and longitude.
- C only needs to know its immediate neighbors to forward packets toward G.
- Geographic forwarding needs a location service.
- Forwarding can fail at a "hole": a region where no neighbor is closer to the destination.
- Assume each node knows its own geographic location.
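The greedy forwarding rule above can be sketched as follows. This is a minimal sketch, not the paper's implementation: the function name and the neighbor-table representation are illustrative, and it ignores the "hole" case where no neighbor is closer than the current node.

```python
import math

def geographic_forward(neighbors, dest):
    """Greedy geographic forwarding: among the current node's radio
    neighbors, pick the one geographically closest to the destination.
    neighbors: dict mapping node id -> (x, y) position; dest: (x, y).
    """
    def dist_to_dest(pos):
        return math.hypot(pos[0] - dest[0], pos[1] - dest[1])
    # Choose the neighbor whose position minimizes distance to dest.
    return min(neighbors, key=lambda n: dist_to_dest(neighbors[n]))
```

Because each hop only consults its local neighbor table, per-node state stays constant as the network grows, which is the scalability property the slide highlights.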

8 Possible Designs for a Location Service
- Flood to get a node's location (LAR): excessive flooding messages.
- Central static location server:
  - not fault tolerant
  - too much load on the central server
  - the server may be far away even from nodes that are near each other, or unreachable after a network partition.
- Every node acts as a location server for a few others: good for spreading load and tolerating failures.

9 Desirable Properties of a Distributed Location Service
- Spread load evenly over all nodes.
- Degrade gracefully as nodes fail.
- Queries for nearby nodes stay local.
- Per-node storage and communication costs grow slowly as the network size grows.

10 GLS's Spatial Hierarchy
[Figure: nested grid of level-0 through level-3 squares]
All nodes agree on the global origin of the grid hierarchy.
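The hierarchy above can be sketched as a simple index computation. This is illustrative, not from the slides: the function name and coordinate convention are assumptions, though the 250 m level-0 square side matches the simulation setup later in the talk, and the doubling of side length per level follows from each level-k square containing four level-(k-1) squares.

```python
def square_id(x, y, level, d=250.0):
    """Return the (column, row) index of the level-`level` square
    containing point (x, y).  Level-0 squares are d x d; each higher
    level unions 4 squares of the level below, so the side length
    doubles at every level.  All nodes share the same grid origin,
    so every node computes identical square boundaries."""
    side = d * (2 ** level)
    return (int(x // side), int(y // side))
```

Because the origin is global, two nodes can independently agree on which level-k square any coordinate falls in, with no communication.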

11 Three Servers Per Node Per Level
[Figure: node n with its location servers s in the sibling level-0, level-1, and level-2 squares]
s is n's successor in that square. (The successor is the node with the "least ID greater than" n.)
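The successor rule on this slide can be sketched directly. A minimal sketch, assuming the standard circular interpretation of "least ID greater than n": if no ID in the square is greater, the successor wraps around to the smallest ID.

```python
def successor(n, ids):
    """n's successor among the node IDs `ids` in a square: the least
    ID strictly greater than n, wrapping around to the smallest ID
    when none is greater (IDs form a circular space)."""
    greater = [i for i in ids if i > n]
    return min(greater) if greater else min(ids)
```

Node n recruits its successor in each of the three sibling squares at every level, which is how each node ends up with three location servers per level.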

12 Queries Search for the Destination's Successors
Each query step visits n's successor at each level.
[Figure: location query path from x through servers s1, s2, s3 to n]
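One step of the query walk above can be sketched as follows. This is a sketch under an assumption: it treats "closest to the target in ID space" as the least known ID greater than or equal to the target, wrapping circularly, which mirrors the successor definition; the function name and table representation are illustrative.

```python
def query_step(target, known_ids):
    """One GLS query step: from the IDs the current node knows
    (its location table entries plus itself), forward the query to
    the node whose ID is 'closest' to target -- the least known ID
    >= target, wrapping around to the smallest known ID.  Each hop
    lands on a node at least as close to target in ID space, so
    repeated steps converge on target itself."""
    at_least = [i for i in known_ids if i >= target]
    return min(at_least) if at_least else min(known_ids)
```

Repeating this step moves the query through successively lower-level squares until it reaches a node in the target's own level-0 square.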

13 GLS Update (level 0)
[Figure: level-0 grid with node IDs and location table contents]
Base case: each node in a level-0 square "knows" about all other nodes in the same square.
Invariant (for all levels): for a node n in a square, n's successor in each sibling square "knows" about n.

14 GLS Update (level 1)
[Figure: node 2 sends location updates to its successors in the sibling level-0 squares]
Invariant (for all levels): for a node n in a square, n's successor in each sibling square "knows" about n.

15 GLS Update (level 1)
[Figure: location tables now record node 2's location (entries such as "11, 2" and "23, 2")]
Invariant (for all levels): for a node n in a square, n's successor in each sibling square "knows" about n.

16 GLS Update (level 2)
[Figure: a level-2 location update, with location table contents]
Invariant (for all levels): for a node n in a square, n's successor in each sibling square "knows" about n.

17 GLS Query
[Figure: location table contents; a query from node 23 for node 1 follows the successor chain]

18 Challenges for GLS in a Mobile Network
- Out-of-date location information in servers.
- Tradeoff between maintaining accurate location data and minimizing periodic location update messages:
  - adapt the location update rate to node speed
  - leave forwarding pointers until updates catch up.
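The update-rate adaptation above can be illustrated with a distance trigger. This is a hypothetical sketch, not the slide's (or paper's) exact scheme: the base distance `d0` and the doubling constant are illustrative assumptions. The idea it shows is that high-level servers, which are far away and consulted only for coarse routing, tolerate staler data and are refreshed far less often than nearby level-0 servers.

```python
def update_threshold(level, d0=100.0):
    """Hypothetical distance trigger for GLS location updates: a node
    refreshes its level-`level` servers only after moving roughly
    2**(level - 1) * d0 meters.  d0 is an illustrative base distance,
    not a value taken from the slides.  Doubling the threshold per
    level ties update cost to node speed: a fast node sends frequent
    local updates but still updates distant servers rarely."""
    return (2 ** (level - 1)) * d0
```

Forwarding pointers cover the window between a move and the corresponding update: the old square briefly redirects queries to the node's new square until the servers catch up.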

19 Simulation Environment
- Simulations use ns with CMU's wireless extensions (IEEE 802.11).
- Mobility model: random waypoint with speeds of 0-10 m/s (up to 22 mph).
- The area of the square universe grows with the number of nodes, to achieve spatial reuse of the spectrum.
- GLS level-0 squares are 250 m x 250 m.
- 300 seconds per simulation.

20 GLS Finds Nodes in Big Mobile Networks
[Figure: query success rate vs. number of nodes]
- Failed queries are not retransmitted in this simulation.
- Queries fail because of out-of-date information for destination nodes or intermediate servers.
- Biggest network simulated: 600 nodes, 2900 m x 2900 m (4-level grid hierarchy).

21 GLS Protocol Overhead Grows Slowly
[Figure: average packets transmitted per node per second vs. number of nodes]
Protocol packets include GLS updates and GLS queries/replies.

22 Average Location Table Size Is Small
[Figure: average location table size vs. number of nodes]
Average location table size grows extremely slowly with the size of the network.

23 GLS Is Fault Tolerant
[Figure: query success rate vs. fraction of failed nodes]
Query performance measured immediately after a number of nodes crash simultaneously (200-node networks).

24 Performance Comparison between Grid and DSR
DSR (Dynamic Source Routing):
- The source floods a route request to find the destination.
- The query reply includes a source route to the destination.
- The source uses that source route to send data packets.
Simulation scenario:
- 2 Mbps radio bandwidth.
- CBR sources, four 128-byte packets per second for 20 seconds.
- 50% of nodes initiate traffic over the 300-second life of the simulation.

25 Fraction of Data Packets Delivered
[Figure: fraction of successfully delivered data vs. number of nodes, for DSR and Grid]
Geographic forwarding is less fragile than source routing.

26 Protocol Packet Overhead
[Figure: average packets transmitted per node per second vs. number of nodes, for DSR and Grid]
DSR is prone to congestion in big networks:
- Sources must re-flood queries to fix broken source routes.
- These flooded queries cause congestion.
Grid's queries cause less network load:
- Queries are unicast, not flooded.
- Unroutable packets are discarded at the source when a query fails.

27 Conclusion
- GLS enables routing using geographic forwarding.
- GLS preserves the scalability of geographic forwarding.
- Current work: implementation of Grid in Linux.

