
1 A Scalable Location Service for Geographic Ad Hoc Routing
Jinyang Li, John Jannotti, Douglas S. J. De Couto, David R. Karger, Robert Morris
MIT Laboratory for Computer Science

2 Overview
- Motivation for Grid: scalable routing for large ad hoc networks
  - metropolitan area, 1000s of nodes
- Protocol scalability: the number of packets each node has to forward and the amount of state kept at each node grow slowly with the size of the network.

3 Current Routing Strategies
- Traditional scalable Internet routing
  - address aggregation hampers mobility
- Pro-active topology distribution (e.g. DSDV)
  - reacts slowly to mobility in large networks
- On-demand flooded queries (e.g. DSR)
  - too much protocol overhead in large networks

4 Flooding causes too much packet overhead in big networks
[Figure: avg. packets transmitted per node per second vs. number of nodes]
- Flooding-based on-demand routing works best in small networks.
- Can we route without global topology knowledge?

5 Geographic Forwarding Scales Well
[Figure: nodes A through G, with C’s radio range shown]
- Assume each node knows its geographic location.
- A addresses a packet to G’s latitude and longitude.
- C only needs to know its immediate neighbors to forward packets towards G.
- Geographic forwarding needs a location service! (A sketch of the per-hop forwarding step follows.)
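
To show how simple the per-hop decision is, here is a minimal sketch of greedy geographic forwarding in Python. It illustrates the general technique rather than the Grid implementation; the function names and the dead-end behavior are assumptions.

```python
# A minimal sketch of greedy geographic forwarding: each node forwards
# a packet to whichever radio neighbor is geographically closest to the
# destination's coordinates.
import math

def distance(a, b):
    """Euclidean distance between two (x, y) positions."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def next_hop(my_pos, neighbor_positions, dest_pos):
    """Pick the neighbor that makes the most geographic progress.

    neighbor_positions maps neighbor id -> (x, y). Returns a neighbor
    id, or None if no neighbor is closer to the destination than we
    are (a dead end that a full protocol would have to recover from).
    """
    best, best_dist = None, distance(my_pos, dest_pos)
    for nbr, pos in neighbor_positions.items():
        d = distance(pos, dest_pos)
        if d < best_dist:
            best, best_dist = nbr, d
    return best
```

Note that the forwarding state is just the positions of immediate neighbors, which is why this approach scales: no per-destination routing tables are needed.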

6 Possible Designs for a Location Service
- Flood to get a node’s location (LAR, DREAM).
  - excessive flooding messages
- Central static location server.
  - not fault tolerant
  - too much load on the central server and nearby nodes
  - the server might be far away for nearby nodes, or inaccessible due to network partition
- Every node acts as server for a few others.
  - good for spreading load and tolerating failures

7 Desirable Properties of a Distributed Location Service
- Spread load evenly over all nodes.
- Degrade gracefully as nodes fail.
- Queries for nearby nodes stay local.
- Per-node storage and communication costs grow slowly as the network size grows.

8 GLS’s spatial hierarchy
[Figure: nested grid of level-0, level-1, level-2, and level-3 squares]
- All nodes agree on the global origin of the grid hierarchy. (A sketch of naming squares in this hierarchy follows.)
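
Because the origin and level-0 square size are global constants, every node can independently compute which square of each level contains any position. The sketch below assumes the 250 m level-0 squares used later in the talk; the function names are illustrative, not from the Grid code.

```python
# Naming squares in the GLS grid hierarchy: four level-k squares tile
# each level-(k+1) square, so the side length doubles per level.
LEVEL0_SIDE = 250.0  # meters; the talk's simulations use 250 m x 250 m

def square_at_level(pos, level, origin=(0.0, 0.0), side=LEVEL0_SIDE):
    """Return (level, col, row): integer grid coordinates of the
    level-`level` square containing pos. All nodes compute the same
    answer because origin and side are globally agreed constants."""
    s = side * (2 ** level)          # side length of a level-k square
    col = int((pos[0] - origin[0]) // s)
    row = int((pos[1] - origin[1]) // s)
    return (level, col, row)

def sibling_squares(pos, level, origin=(0.0, 0.0), side=LEVEL0_SIDE):
    """The three level-k siblings: the other level-k squares inside
    the same level-(k+1) square as pos's own level-k square."""
    _, col, row = square_at_level(pos, level, origin, side)
    base_col, base_row = (col // 2) * 2, (row // 2) * 2
    return [(level, c, r)
            for c in (base_col, base_col + 1)
            for r in (base_row, base_row + 1)
            if (c, r) != (col, row)]
```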

9 3 Servers Per Node Per Level
[Figure: node n and its servers s in the sibling level-0, level-1, and level-2 squares]
- s is n’s successor in that square. (The successor is the node with the “least ID greater than” n.)
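
A short sketch of the successor rule, assuming (as in the GLS paper) that the ID space is treated as circular, so the rule wraps around when no node in the square has a larger ID:

```python
def successor(n_id, ids_in_square):
    """n's successor in a square: the node with the least ID greater
    than n_id, wrapping around to the smallest ID in the square if
    none is greater (the ID space is circular)."""
    candidates = [i for i in ids_in_square if i != n_id]
    if not candidates:
        return None
    greater = [i for i in candidates if i > n_id]
    return min(greater) if greater else min(candidates)
```

For example, with IDs {2, 5, 17, 23, 26} in a square, node 17's successor is 23, while node 26's successor wraps around to 2.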

10 Queries Search for Destination’s Successors
[Figure: a location query path from x through servers s1, s2, s3 to n]
- Each query step: visit n’s successor at each level.
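
A conceptual sketch of one query step, reconstructed from the successor rule above rather than from the wire protocol: each node that handles the query forwards it to the best successor candidate it knows about, so the query lands in ever-smaller squares around the destination.

```python
def query_step(dest_id, location_table):
    """One forwarding decision for a location query.

    location_table maps known node id -> last known position. If the
    destination itself is known, the query is done; otherwise forward
    toward the known node with the least ID greater than dest_id,
    wrapping around the circular ID space.
    """
    if dest_id in location_table:
        return dest_id, location_table[dest_id]
    if not location_table:
        return None
    ids = list(location_table)
    greater = [i for i in ids if i > dest_id]
    nxt = min(greater) if greater else min(ids)
    return nxt, location_table[nxt]
```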

11 GLS Update (level 0)
[Figure: node IDs scattered across the grid, with each node’s location table contents shown]
- Base case: each node in a level-0 square “knows” about all other nodes in the same square.
- Invariant (for all levels): for node n in a square, n’s successor in each sibling square “knows” about n.

12 GLS Update (level 1)
[Figure: node 2 sends location updates to its successors in the sibling level-1 squares; location table contents shown]
- Invariant (for all levels): for node n in a square, n’s successor in each sibling square “knows” about n.

13 GLS Update (level 1), continued
[Figure: nodes 11 and 23 have added node 2 to their location tables]
- Invariant (for all levels): for node n in a square, n’s successor in each sibling square “knows” about n.

14 GLS Update (level 2)
[Figure: node 2 sends location updates to its successors in the sibling level-2 squares; location table contents shown]
- Invariant (for all levels): for node n in a square, n’s successor in each sibling square “knows” about n. (A sketch tying the update levels together follows.)
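
Putting the last three slides together, here is a high-level sketch of how a node recruits its location servers. The helpers passed in (nodes_in_square, sibling_squares, successor) are the illustrative functions sketched earlier, not API from the Grid implementation.

```python
def choose_location_servers(n_id, n_pos, top_level,
                            nodes_in_square, sibling_squares, successor):
    """Return {square: server_id}: n's location servers at every level.

    For a universe that is a single level-`top_level` square, n recruits
    its successor in each of the three sibling squares at levels
    0 .. top_level - 1: three servers per node per level, as in slide 9.
    """
    servers = {}
    for level in range(top_level):
        for sq in sibling_squares(n_pos, level):
            s = successor(n_id, nodes_in_square(sq))
            if s is not None:
                servers[sq] = s   # n sends its location update to s
    return servers
```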

15 GLS Query
[Figure: a query from node 23 for node 1 follows successor entries through the location tables]

16 Challenges for GLS in a Mobile Network
- Out-of-date location information in servers.
- Tradeoff between maintaining accurate location data and minimizing periodic location update messages:
  - Adapt the location update rate to node speed.
  - Update distant servers less frequently than nearby servers (see the sketch after this list).
  - Leave forwarding pointers until updates catch up.
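
A hedged sketch of a distance-triggered update policy: a node refreshes its level-k servers only after moving some fraction of a level-k square's side since its last level-k update, so distant (high-level) servers are updated less often and fast nodes update more often. The threshold fraction below is illustrative, not the exact constant from the GLS paper.

```python
import math

LEVEL0_SIDE = 250.0      # meters, as in the simulations
UPDATE_FRACTION = 0.25   # assumed: update after moving 1/4 of a square side

def levels_needing_update(pos, last_update_pos, top_level):
    """last_update_pos maps level -> position at the time of that
    level's most recent update. Returns the levels due for a refresh;
    higher levels have larger thresholds, hence fewer updates."""
    due = []
    for level in range(top_level):
        threshold = UPDATE_FRACTION * LEVEL0_SIDE * (2 ** level)
        prev = last_update_pos[level]
        moved = math.hypot(pos[0] - prev[0], pos[1] - prev[1])
        if moved >= threshold:
            due.append(level)
    return due
```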

17 Performance Analysis
- How well does GLS cope with mobility?
- How scalable is GLS?
- How well does GLS handle node failures?
- How local are the queries for nearby nodes?

18 Simulation Environment
- Simulations using ns with CMU’s wireless extensions (IEEE 802.11).
- Mobility model:
  - random waypoint with speed 0-10 m/s (22 mph)
- Area of the square universe grows with the number of nodes in the network.
  - achieves spatial reuse of the spectrum
- GLS level-0 square is 250 m x 250 m.
- 300 seconds per simulation.

19 GLS Finds Nodes in Big Mobile Networks
[Figure: query success rate vs. number of nodes]
- Failed queries are not retransmitted in this simulation.
- Queries fail because of out-of-date information for destination nodes or intermediate servers.
- Biggest network simulated: 600 nodes, 2900 m x 2900 m (4-level grid hierarchy).

20 GLS Protocol Overhead Grows Slowly
[Figure: avg. packets transmitted per node per second vs. number of nodes]
- Protocol packets include GLS updates and GLS queries/replies.

21 Average Location Table Size is Small
[Figure: avg. location table size vs. number of nodes]
- Average location table size grows extremely slowly with the size of the network.

22 Non-uniform Location Table Size
[Figure: the simulated universe occupies only one corner of the complete level-3 Grid hierarchy, so node 3 ends up as the location server for all other nodes (1, 2, 6, 7, 8, 11, 16, 18, 21, 23, 25)]
- Possible solution: dynamically adjust square boundaries.

23 GLS is Fault Tolerant
[Figure: query success rate vs. fraction of failed nodes]
- Measured query performance immediately after a number of nodes crash simultaneously (200-node networks).

24 Query Path Length is Proportional to the Distance Between Source and Destination
[Figure: ratio of query path length to source/destination distance vs. hop count between source and destination]

25 Performance Comparison between Grid and DSR
- DSR (Dynamic Source Routing):
  - The source floods a route request to find the destination.
  - The query reply includes a source route to the destination.
  - The source uses the source route to send data packets.
- Simulation scenario:
  - 2 Mbps radio bandwidth
  - CBR sources, four 128-byte packets/second for 20 seconds
  - 50% of nodes initiate traffic over the 300-second life of the simulation

26 Fraction of Data Packets Delivered
[Figure: fraction of successfully delivered data vs. number of nodes, for DSR and Grid]
- Geographic forwarding is less fragile than source routing.
- Why does DSR have trouble with more than 300 nodes?

27 Protocol Packet Overhead
[Figure: avg. packets transmitted per node per second vs. number of nodes, for DSR and Grid]
- DSR is prone to congestion in big networks:
  - Sources must re-flood queries to fix broken source routes.
  - These queries cause congestion.
- Grid’s queries cause less network load:
  - Queries are unicast, not flooded.
  - Un-routable packets are discarded at the source when a query fails.

28 Conclusion
- GLS enables routing using geographic forwarding.
- GLS preserves the scalability of geographic forwarding.
- Current work:
  - Implementation of Grid in Linux: http://pdos.lcs.mit.edu/grid

