Scalable Location Service for Geographic Ad Hoc Routing


1 Scalable Location Service for Geographic Ad Hoc Routing
Acknowledgments to Robert Morris for slides

2 Ad-Hoc Nets: The Dream
[Figure: nodes, including Robert and Liuba, forwarding packets.] Nodes forward each other's packets. No infrastructure; easy to deploy; fault tolerant. Short hops are good for power and spectrum. Can it be made to work?

3 Ad-Hoc Nets: The Reality
[Figure: avg. packets transmitted per node per second vs. number of nodes.] On-demand routing handles mobility well and has been shown to work in small networks. We simulated DSR, one of the best-known on-demand ad hoc routing protocols, to see how it scales. The number of packets each node must forward grows rapidly as the network gets bigger; once the network exceeds about 300 nodes, so many packets are being transmitted that many of them are dropped. Flooding-based on-demand routing works best in small nets. Can we route without global topology knowledge?

4 Geographic Forwarding Scales Well
Assume each node knows its geographic location. [Figure: nodes A through G, with C's radio range shown.] In DSR, nodes do not know their geographic locations; if they could learn them (e.g., via GPS), could we do better? Yes. In radio networks, nodes that are geographically close also tend to be close in the routing path, and geographic forwarding exploits this to route packets. A addresses a packet to G's latitude and longitude; C needs to know only its immediate neighbors to forward the packet toward G. Geographic forwarding is clearly scalable, but how does A learn G's current location in the first place? Geographic forwarding needs a location service!
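The greedy next-hop rule can be sketched as follows. The node names, coordinates, and radio neighborhoods here are hypothetical, not taken from the slide's figure:

```python
import math

# Hypothetical node positions (latitude/longitude flattened to x, y).
positions = {
    "A": (0, 0), "B": (1, 2), "C": (2, 1),
    "D": (3, 3), "F": (4, 2), "G": (6, 2),
}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def greedy_next_hop(node, dest, neighbors):
    """Forward to the neighbor geographically closest to the destination.

    Returns None when no neighbor is closer than the current node
    (a "hole"; see the hole-avoidance slide at the end of the deck).
    """
    best = min(neighbors, key=lambda n: dist(positions[n], positions[dest]))
    if dist(positions[best], positions[dest]) < dist(positions[node], positions[dest]):
        return best
    return None

# A forwards a packet addressed to G's coordinates via its radio neighbors.
print(greedy_next_hop("A", "G", ["B", "C"]))  # -> C (closer to G than B is)
```

Each hop needs only neighbor positions and the destination's coordinates, which is why the scheme scales: no global topology is required.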

5 Possible Designs for a Location Service
Flood to get a node's location (LAR, DREAM): excessive flooding messages. Central static location server: not fault tolerant; too much load on the central server and nearby nodes; the server might be far away or partitioned. Every node acts as a server for a few others: good for spreading load and tolerating failures. (Note that a location server chosen by hashing a node's ID could also be far away or inaccessible.)

6 Desirable Properties of a Distributed Location Service
Spread load evenly over all nodes. Degrade gracefully as nodes fail. Queries for nearby nodes stay local. Per-node storage and communication costs grow slowly as the network size grows. The Grid Location Service (GLS) is a location service protocol that satisfies all of these properties.

7 Grid Location Service (GLS) Overview
[Figure: nodes A through L; J queries for D's location.] Each node has a few servers that know its location. 1. Node D sends location updates to its servers (B, H, K). 2. Node J sends a query for D to one of D's nearby servers. The interesting point is how D chooses its servers in a way that other nodes can predict.

8 Grid Node Identifiers
Each Grid node has a unique identifier. Identifiers are numbers, perhaps a hash of the node's IP address. Identifier X is the "successor" of Y if X is the smallest identifier greater than Y.
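The successor rule can be written directly. Treating the ID space as circular (wrapping to the smallest ID when nothing is greater) is an assumption consistent with how GLS uses successors:

```python
def successor(target_id, ids):
    """Return the "successor" of target_id among ids: the smallest
    identifier strictly greater than target_id, wrapping around to
    the smallest identifier overall if none is greater."""
    greater = [i for i in ids if i > target_id]
    return min(greater) if greater else min(ids)

ids = [5, 17, 26, 31, 90]
print(successor(17, ids))  # -> 26
print(successor(90, ids))  # -> 5 (wraps around the circular ID space)
```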

9 GLS’s spatial hierarchy
[Figure: nested level-0 through level-3 squares; four level-k squares form one level-(k+1) square.] All nodes agree on the global origin of the grid hierarchy.
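Because all nodes agree on the origin, any node can compute which level-k square contains a given point. This sketch assumes the origin at (0, 0) and the 250 m level-0 squares used later in the simulations:

```python
def square_at_level(x, y, level, level0_side=250.0):
    """Identify the level-k square containing point (x, y).

    Each level-k square is a 2^k x 2^k block of level-0 squares,
    anchored at the agreed-upon global origin (0, 0).
    """
    side = level0_side * (2 ** level)
    return (int(x // side), int(y // side))

# Two nearby nodes in different level-0 squares share a level-1 square:
print(square_at_level(300, 100, 0))  # -> (1, 0)
print(square_at_level(100, 100, 0))  # -> (0, 0)
print(square_at_level(300, 100, 1))  # -> (0, 0)
print(square_at_level(100, 100, 1))  # -> (0, 0)
```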

10 3 Servers Per Node Per Level
[Figure: the sibling level-0, level-1, and level-2 squares surrounding node n.] This shows the distribution of location servers for a particular node n, which sends location updates to its servers. n has a location server in each of the 3 sibling squares at every level. Many of n's servers lie in nearby areas, which helps location queries from nearby nodes hit nearby location servers. The scheme is also fault tolerant: because each node has more than one location server, n does not become unreachable if a single server fails. The number of location servers per node scales as the log of the area of the network. In each square, the server s is n's successor in that square (the node with the least ID greater than n's).

11 Queries Search for Destination’s Successors
[Figure: location query path from x via servers s2 and s1 to n.] Each query step visits n's successor in a surrounding square. For x to learn node n's current location, x sends out a location query for n; the query visits the node with the closest ID (the successor) in squares of increasing level. Because n has been sending updates to exactly those successors, the query eventually reaches a node that n has previously updated. Robert's comments: consistent hashing means x and n agree on who n's servers are; the use of squares allows n to move (within limits) without changing servers; the use of IDs allows servers to move (within limits) and still be useful.

12 GLS Update (level 0)
Invariant (for all levels): for node n in a square, n's successor in each sibling square "knows" about n. Base case: each node in a level-0 square "knows" about all other nodes in the same square. [Figure: level-0 squares of numbered nodes with their location table contents.]

13 GLS Update (level 1)
Invariant (for all levels): for node n in a square, n's successor in each sibling square "knows" about n. [Figure: level-1 location updates being sent; location table contents shown per node.]

14 GLS Update (level 1)
Invariant (for all levels): for node n in a square, n's successor in each sibling square "knows" about n. [Figure: location table contents after the level-1 updates.]

15 GLS Update (level 2)
Invariant (for all levels): for node n in a square, n's successor in each sibling square "knows" about n. [Figure: level-2 location updates; location table contents shown per node.]
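Putting the invariant together, a node's location servers at each level are its successors in the three sibling squares. This is a sketch with hypothetical node IDs and positions, assuming a grid origin of (0, 0), 250 m level-0 squares, and a circular ID space:

```python
def square(pos, level, side0=250.0):
    side = side0 * (2 ** level)
    return (int(pos[0] // side), int(pos[1] // side))

def sibling_squares(pos, level, side0=250.0):
    """The 3 level-`level` squares that share n's level-(level+1)
    square but are not n's own level-`level` square."""
    own = square(pos, level, side0)
    parent = square(pos, level + 1, side0)
    return [(parent[0] * 2 + dx, parent[1] * 2 + dy)
            for dx in range(2) for dy in range(2)
            if (parent[0] * 2 + dx, parent[1] * 2 + dy) != own]

def successor_in(square_id, level, n_id, nodes, side0=250.0):
    """n's successor among the nodes inside the given square (circular)."""
    ids = [i for i, pos in nodes.items()
           if square(pos, level, side0) == square_id]
    greater = [i for i in ids if i > n_id]
    if greater:
        return min(greater)
    return min(ids) if ids else None   # empty square -> no server there

def location_servers(n_id, nodes, levels=2, side0=250.0):
    """All servers n updates: one per sibling square per level."""
    servers = []
    for level in range(levels):
        for sq in sibling_squares(nodes[n_id], level, side0):
            s = successor_in(sq, level, n_id, nodes, side0)
            if s is not None:
                servers.append(s)
    return servers

# Hypothetical node IDs and positions in a 1000 m x 1000 m region:
nodes = {17: (100, 100), 5: (300, 100), 26: (100, 300), 90: (300, 300),
         21: (600, 100), 2: (600, 600)}
print(location_servers(17, nodes, levels=2))
```

Any other node can rerun the same deterministic computation, which is why queries can find a server without n announcing who its servers are.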

16 GLS Query
[Figure: a query from node 23 for node 1, routed via the nodes' location table contents.]

17 Challenges for GLS in a Mobile Network
Slow updates risk out-of-date information: packets are dropped because we can't find the destination. Aggressive updates risk congestion: update packets leave no bandwidth for data.

18 Performance Analysis How well does GLS cope with mobility?
How scalable is GLS?

19 Simulation Environment
Simulations use ns-2 with the IEEE 802.11 MAC. Random waypoint mobility: 0 to 10 m/s (22 mph). The area grows with the number of nodes, to achieve spatial reuse of the spectrum. Each GLS level-0 square is 250 m x 250 m. 300 seconds per simulation.

20 GLS Finds Nodes in Big Mobile Networks
[Figure: query success rate vs. number of nodes.] Biggest network simulated: 600 nodes over 2900 m x 2900 m (a 4-level grid hierarchy). Failed queries are not retransmitted in this simulation. Queries fail because of out-of-date information for destination nodes or intermediate servers.

21 GLS Protocol Overhead Grows Slowly
[Figure: avg. protocol packets transmitted per node per second vs. number of nodes.] Protocol packets include GLS updates, GLS queries/replies, and HELLO messages, measured as packets forwarded as well as initiated; the y-axis shows the average number of packets each node sends or forwards per second.

22 Fraction of Data Packets Delivered
[Figure: fraction of data successfully delivered vs. number of nodes, for DSR and Grid.] Why does DSR fail to deliver most packets once the network exceeds 400 nodes? DSR's route queries use too much bandwidth beyond about 300 nodes, and geographic forwarding is less fragile than source routing.

23 Grid Deployment
Deployed 16 nodes with partial software: 12 fixed relays and 4 Compaq iPaq handhelds, running Linux, with radios. Aiming for campus-wide deployment.

24 Grid Summary Grid routing protocols are
Self-configuring. Easy to deploy. Scalable.

25 Rumor Routing in Sensor Networks
David Braginsky and Deborah Estrin LECS – UCLA Acknowledgments to Sugata Hazarika for the slides

26 Sensor Data Routing
Several sensor-based applications exist. Some require frequent querying, or are unsure where events occur: a pull mechanism (e.g., querying map sensors). Others require frequent event broadcast: a push mechanism (e.g., earthquake monitoring).

27 Issues
Query flooding: expensive for high query/event ratios; allows optimal reverse-path setup; a gossiping scheme can reduce overhead. Event flooding: expensive for low query/event ratios. Both provide shortest-delay paths, but can have high energy cost.

28 Rumor Routing
Designed for query/event ratios between those served by query flooding and event flooding. Motivation: sometimes a non-optimal route is satisfactory. Advantages: tunable best-effort delivery, tunable across a range of query/event ratios. Disadvantages: optimal parameters depend heavily on topology, and delivery is not guaranteed.

29 Rumor Routing

30 Basis for Algorithm
Observation: two lines in a bounded rectangle have a 69% chance of intersecting. So create a set of straight-line gradients from the event, then send the query along a random straight line from the source; if the two paths cross, the query finds the event. [Figure: paths from event and source crossing.] Thought: can this bound be proved for a random walk? What is the bound if the line is not really straight?
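The intersection claim can be sanity-checked by simulation. This Monte Carlo sketch assumes one particular sampling model (a line through a uniform random interior point at a uniform random angle); the paper's model may differ, so the estimate need not match 69% exactly:

```python
import random, math

def random_line():
    # A line through a uniform random interior point of the unit
    # square, at a uniform random angle (an assumed sampling model).
    x, y = random.random(), random.random()
    a = random.uniform(0, math.pi)
    return (x, y, math.cos(a), math.sin(a))

def lines_intersect_in_unit_square(l1, l2):
    x1, y1, dx1, dy1 = l1
    x2, y2, dx2, dy2 = l2
    det = dx1 * dy2 - dy1 * dx2
    if abs(det) < 1e-12:          # (nearly) parallel: treat as no crossing
        return False
    # Solve p1 + t*d1 == p2 + s*d2 for t, then test the crossing point.
    t = ((x2 - x1) * dy2 - (y2 - y1) * dx2) / det
    ix, iy = x1 + t * dx1, y1 + t * dy1
    return 0.0 <= ix <= 1.0 and 0.0 <= iy <= 1.0

random.seed(1)
trials = 50_000
hits = sum(lines_intersect_in_unit_square(random_line(), random_line())
           for _ in range(trials))
p = hits / trials
print(f"estimated intersection probability: {p:.3f}")
```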

31 Creating Paths
Nodes that observe an event send out agents, which leave routing information toward the event as state in the nodes they pass through. Agents attempt to travel in a straight line. If an agent crosses a path to another event, it begins to build paths to both. Agents also optimize paths if they find shorter ones.

32 Algorithm Basics
All nodes maintain a neighbor list. Nodes also maintain an event table; when a node observes an event, the event is added with distance 0. Agents are packets that carry event information across the network and aggregate events as they go.

33 Agents

34 Agent Path
The agent tries to travel "somewhat" straight. It maintains a list of recently seen nodes; when it arrives at a node, it adds that node's neighbors to the list. For its next hop it tries to find a node not in the recently-seen list, which avoids loops.
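The recently-seen rule might be sketched like this; the graph, list length, and fallback behavior are hypothetical choices, not taken from the paper:

```python
import random

def next_hop(neighbors, recently_seen):
    """Pick the agent's next hop, preferring nodes not recently seen.

    Choosing a neighbor outside the recently-seen list biases the walk
    away from where it has been, keeping it roughly straight and
    avoiding short loops.
    """
    fresh = [n for n in neighbors if n not in recently_seen]
    candidates = fresh if fresh else neighbors   # fall back if boxed in
    return random.choice(candidates)

def visit(node, neighbors_of, recently_seen, max_seen=20):
    """On arrival at `node`, record it and its neighbors as seen."""
    recently_seen.extend(neighbors_of[node])
    recently_seen.append(node)
    del recently_seen[:-max_seen]                # keep the list bounded

random.seed(0)
neighbors_of = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
seen = []
node = 0
for _ in range(5):
    visit(node, neighbors_of, seen, max_seen=6)
    node = next_hop(neighbors_of[node], seen)
print("walk ends at node", node)
```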

35 Forwarding Queries
Forwarding rules: if a node has seen the query before, the query is sent to a random neighbor; if a node has a route to the event, it forwards the query to the neighbor along that route; otherwise, it forwards the query to a random neighbor chosen by the straightening algorithm.
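The three rules can be sketched as a single decision function; the table structures are assumptions, and the straightening step is simplified here to a plain random choice:

```python
import random

def forward_query(node, query_id, event, seen_queries, route_table, neighbors):
    """Pick the next hop for a query, per the three forwarding rules.

    Hypothetical structures: `route_table[node]` maps event -> next hop
    toward that event; `seen_queries[node]` is the set of query IDs the
    node has already handled.
    """
    if query_id in seen_queries[node]:
        return random.choice(neighbors[node])    # rule 1: already seen
    seen_queries[node].add(query_id)
    if event in route_table[node]:
        return route_table[node][event]          # rule 2: known route
    return random.choice(neighbors[node])        # rule 3: random walk
                                                 # (straightening omitted)

random.seed(0)
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
route_table = {0: {}, 1: {"fire": 2}, 2: {}}
seen = {n: set() for n in neighbors}
hop = forward_query(1, "q7", "fire", seen, route_table, neighbors)
print("node 1 forwards to", hop)  # -> 2, via its stored route to "fire"
```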

36 Simulation Results
Assuming that undelivered queries are flooded, a wide range of parameters allows energy savings over either of the naive alternatives. Optimal parameters depend on network topology and on the query/event distribution and frequency. The algorithm was very sensitive to event distribution.

37 Fault Tolerance
After agents propagated paths to events, some nodes were disabled. Delivery probability degraded linearly up to 20% node failure, then dropped sharply. Both random and clustered failures were simulated, with similar results.

38 Some Thoughts
The effect of event distribution on the results is not clear. The straightening algorithm used is essentially just a random walk; can something better be done? How to tune the parameters for different network sizes and node densities is also unclear: there are no guidelines for parameter tuning, only simulation results in one particular environment.

39 Questions?

40 Let's Talk about Projects
Email me a project description. Deadline: Thursday, Feb 22.

41 Groups
Brian, Ashwin, Roman: sensor network on maps. Pradeep, Thilee: model-based suppression. Ola, Soji, Tom: DTN. Michael, Kunal: ?? TingYu, Gary, Yuanchi: intrusion detection in SN. Ian, William: DTN gossip. Wayne, Tray: beam overlap not harmful. Deepak, Karthik, Boyeum: ?? Tong: DTN ?? Shawn, Simrat: ??

42 Thanks!

43 Can We Analyze It?
The inherent concept looks powerful. Though not presented this way, the algorithm is an example of gossip routing: there is gossip of events and gossip of queries, both with gossip probability 1/(number of neighbors). (Would changing that probability help?) It may be possible to find the probability that the two gossips intersect, which might lead to a set of techniques for parameter estimation, or to an optimal setting.

44 Other Similar Algorithms
Content-based pub/sub: both the subscription and the notification meet inside the network; can we borrow some ideas from wired networks? DHTs: DHTs can also be used to locate events, but the underlying routing is the problem; a DHT over DSR or AODV may not be suitable.

45 Future Work
Network dynamics. Realistic environments. Non-localized events. Asynchronous events. Self-tuning algorithm dynamics.

46 In Progress: Power-Saving Backbone
Most nodes “sleep,” waking every second to poll for packets. “W” nodes stay awake to form low-delay backbone. Works well with high node densities. Algorithm: wake only to connect two neighbors.

47 In Progress: Geographic “Hole” Avoidance
F D G C ? H A B Node C has no neighbor closer than it to H. There’s a “hole” between C and H. Use right-hand rule to traverse perimeter? Pick a random intermediate point?
