
RelayCast: Scalable Multicast Routing in Delay Tolerant Networks



Presentation on theme: "RelayCast: Scalable Multicast Routing in Delay Tolerant Networks"— Presentation transcript:

1 RelayCast: Scalable Multicast Routing in Delay Tolerant Networks
Uichin Lee, Soon Young Oh, Kang-Won Lee*, Mario Gerla

2 DTN Multicast Routing
Delay tolerant networking: suitable for non-interactive, delay-tolerant apps; covers environments ranging from connected wireless nets to wireless mobile nets with disruptions (delay tolerant networks). Goal: provide reliable data multicast even with disruptions. DTN multicast routing methods: tree/mesh (+ mobility), ferry/mule, epidemic dissemination. DTN multicast questions: throughput/delay/buffer bounds? Focus: dissemination, which upper-bounds all the other cases.
(Figures: tree/mesh structure with disrupted nodes pulling data through mobility; ferry/mule delivery; epidemic dissemination; Bluetooth-based pocket switched networking connecting a remote village to the city via access points.)
Delay tolerant networking is suitable for non-interactive, delay-tolerant apps. It can be supported in environments ranging from connected wireless nets to wireless mobile nets with disruptions, which are generally referred to as DTNs. In this talk, we consider a particular type of delay tolerant network: a wireless ad hoc network with sparse connectivity. This is useful for delay-tolerant applications such as reading e-mails or downloading files while commuting. One of the key services of a DTN is reliable multicast. For instance, in a tactical wireless network, we want to reliably disseminate situation-awareness data to a group of soldiers even with disruptions. Various methods have been proposed to realize this. If the network is intermittently connected, we can maintain a routing structure like a tree or mesh, just as in conventional multicast routing, and disrupted users can pull data from the structure through mobility. If there is a ferry or data mule, a special type of node with predictable mobility patterns, we can use it to deliver data to remote users. Finally, we can purely exploit mobility to deliver data, as in epidemic dissemination. The fundamental questions we would like to answer are: what are the throughput and delay bounds of delay tolerant multicast, and how do they compare with those of ad hoc wireless networks? Moreover, DTN uses carry-and-forward data delivery, so buffer size is very important; we would like to know the buffer requirement to achieve a given throughput bound. In this talk, we focus on dissemination to answer these questions, because dissemination gives the upper bounds for all the other cases.

3 DTN Model
Pair-wise inter-contact time: the interval between two contact points of a node pair. Common assumption: exponential inter-contact time (random direction, random waypoint, etc.); real-world traces also have "exponential" tails [Karagiannis07]. Exponential inter-contact time → inter-contact rate λ ~ speed x radio range [Groenevelt05]. Assumption: n nodes in a 1x1 unit area, radio range O(1/√n) and speed O(1/√n) → meeting rate λ = O(1/n).
(Figure: contact points between two nodes i and j on a time line; Tic(k) denotes the k-th pair-wise inter-contact time.)
For analysis, we model a DTN using the pair-wise inter-contact time, the time interval between two contact points. Consider two nodes: they encounter each other at time T1, then again at time T2, so the inter-contact time is T2-T1, and so on. The common assumption is that the inter-contact time follows an exponential distribution. This holds for most random mobility models such as random direction, and researchers recently found that most real-world traces also have exponential tails. If the inter-contact time is exponential, Groenevelt showed that the inter-contact rate can be represented as the product of speed and radio range, so we can model an arbitrary DTN with a single parameter λ. In our analysis, we assume that n nodes are deployed in a 1x1 unit square and that radio range and speed are both O(1/√n), so the meeting rate is λ = O(1/n).
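The inter-contact model is easy to sketch in code. Below is a minimal Python illustration (not from the paper; the function names and the constant c are my own) that derives λ from the stated speed x radio-range scaling and samples exponential pair-wise inter-contact times.

```python
import numpy as np

def intercontact_rate(n, c=1.0):
    """Pair-wise meeting rate lambda ~ speed * radio range.

    Under the slide's scaling (speed = O(1/sqrt(n)), range = O(1/sqrt(n))
    in a 1x1 unit area), lambda = O(1/n); c absorbs the constant factor.
    """
    speed = 1.0 / np.sqrt(n)
    radio_range = 1.0 / np.sqrt(n)
    return c * speed * radio_range              # = c / n

def sample_intercontact_times(lam, k, rng=np.random.default_rng(0)):
    """Draw k pair-wise inter-contact times T_ic ~ Exp(lam)."""
    return rng.exponential(scale=1.0 / lam, size=k)

n = 100
lam = intercontact_rate(n)                      # ~ 1/n = 0.01
gaps = sample_intercontact_times(lam, 10_000)
print(lam, gaps.mean())                         # empirical mean ~ 1/lam = n
```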

4 2-Hop Relay: DTN Unicast Routing
Each source has a random destination (n source-destination pairs). 2-hop relay protocol: 1. the source sends a packet to a relay node; 2. the relay node delivers the packet to the corresponding receiver. (2-hop relay by Grossglauser and Tse.)
(Figure: Source → Relay → Destination; the source is also mobile.)
Let's first take a look at 2-hop relay, proposed by Grossglauser and Tse. In 2-hop relay, each source has a random destination, and there are n source-destination pairs. A source pushes packets to relay nodes, and the relay nodes deliver the packets to the destination through mobility.
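As a rough illustration of the two forwarding steps (a toy sketch, not the authors' code), the following Monte Carlo assumes i.i.d. exponential pair-wise meeting times with rate lam, as in the DTN model slide, and traces one packet through both phases of 2-hop relay.

```python
import numpy as np

rng = np.random.default_rng(1)

def two_hop_delay(n_relays, lam, rng=rng):
    """Delay of one packet under 2-hop relay with exponential meeting times.

    Phase 1: the source hands the packet to whichever relay it meets first,
             i.e. the minimum of n_relays Exp(lam) clocks = Exp(n_relays*lam).
    Phase 2: that relay carries the packet until it meets the destination,
             a fresh Exp(lam) inter-contact time (memorylessness).
    """
    t_src_to_relay = rng.exponential(1.0 / (n_relays * lam))
    t_relay_to_dest = rng.exponential(1.0 / lam)
    return t_src_to_relay + t_relay_to_dest

n, lam = 100, 0.01                      # lam = 1/n, as in the DTN model slide
delays = [two_hop_delay(n, lam) for _ in range(20_000)]
print(np.mean(delays))                  # dominated by phase 2: roughly 1/lam = n
```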

5 2-Hop Relay: Throughput/Delay
Throughput is determined by the aggregate meeting rates [src ↔ relay nodes] and [dest ↔ relay nodes]. 2-hop relay throughput: Θ(nλ). G&T's result is the special case Θ(nλ) = Θ(1) for λ = 1/n (i.e., speed = radio range = 1/√n). 2-hop relay delay: Θ(1/λ), the average time for a relay to meet a destination (exponentially distributed). E.g., for λ = 1/n the average delay is Θ(n) (Neely & Modiano).
(Figure: ant analogy, food = source, ants = relays; aggregate meeting rate nλ on both the source-to-relay and relay-to-destination sides; average delay = average inter-contact time.)
Using our model, we can easily find the throughput and delay. Looking at the ants, they can carry bread crumbs almost at the rate they encounter the food source. Likewise, throughput is mainly determined by the aggregate meeting rate between the source and the relay nodes, and between the destination and the relay nodes. In both cases the aggregate meeting rate is nλ, so the throughput of 2-hop relay is nλ. G&T's result is the special case where the meeting rate is 1/n. The average delay is simply the average time for a relay node to encounter the destination node, which is the average meeting time 1/λ.
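A quick numeric check of these two expressions under the assumption λ = 1/n (the G&T special case); the loop and values below are illustrative only, not results from the paper.

```python
# Rough check of 2-hop relay scaling when lam = 1/n (G&T's case).
for n in (100, 400, 1600):
    lam = 1.0 / n
    throughput = n * lam        # aggregate meeting rate: Theta(n*lam) = Theta(1)
    delay = 1.0 / lam           # avg relay-to-destination meeting time: Theta(n)
    print(f"n={n}: throughput~{throughput}, delay~{delay}")
```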

6 RelayCast: DTN Multicast Routing
2-hop relay based multicast: 1. the source sends a packet to a relay node; 2. the relay node delivers the packet to ALL multicast receivers.
(Figure: Source → Relay → Destinations D1, D2, D3; source and destinations are also mobile.)
We extend 2-hop relay to support DTN multicast. Because each source has multiple destinations, in RelayCast a relay node delivers a packet to all the multicast receivers. An incoming packet is replicated into each destination queue at the relay, and each copy is delivered independently.

7 RelayCast: Throughput Analysis
RelayCast throughput: Θ(nλ/nx). There are ns sources, each associated with nd random destinations, so multiple sources may choose the same node as a destination. Average number of competing sources per receiver: nx = ns*nd/n.
(Figure: each of four sources chooses 3 random destinations; a relay node in RelayCast keeps per-destination queues, and the relay bandwidth λ toward a destination is shared by the nx competing sources, i.e., λ/nx each.)
Now let's calculate the throughput of RelayCast. Because each source has nd random destinations, the key observation is that multiple sources may choose the same node as a destination. For instance, S1 chooses D1 to D3, and S2 chooses D1, D2, and D4; both chose D1, so they compete with each other when sending packets to D1 through the relay nodes. In general, there are ns sources and each source chooses a given node as a target with probability nd/n, so on average ns*nd/n sources choose the same target; we define this average number of competing sources as nx. Looking more closely at a relay node, the competing sources share the relay bandwidth: the relay can deliver packets to D1 at rate λ, the meeting rate, but because there are nx competing sources, the relay bandwidth toward a destination is split equally among them. Therefore, the RelayCast throughput is the aggregate meeting rate nλ divided by nx.
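A small sketch of this formula, assuming the slide's definitions (nx = ns·nd/n); the function and parameter names are mine, for illustration only.

```python
def relaycast_throughput(n, ns, nd, lam):
    """Per-source RelayCast throughput Theta(n*lam / nx), with nx = ns*nd/n.

    nx is the average number of sources competing for the relay bandwidth
    toward any one receiver, so the aggregate meeting rate n*lam toward a
    destination is split nx ways.
    """
    nx = ns * nd / n
    return n * lam / nx

n, lam = 1000, 1.0 / 1000
print(relaycast_throughput(n, ns=n, nd=10, lam=lam))   # ns = n: Theta(1/nd) = 0.1
```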

8 RelayCast: Delay Analysis
A relay node delivers a packet to ALL destinations. With nx competing sources per destination, each source's rate is split to λ/nx. RelayCast avg. delay: Θ((nx/λ)(log nd + γ)), where γ is the Euler constant.
(Figure: Markov chain for packet delivery, where the state is the number of remaining destinations, with transition rates nd·λ/nx, (nd-1)·λ/nx, ..., λ/nx; the relay keeps nx sub-queues toward each destination, each served at rate λ/nx. Destinations are also mobile.)
In RelayCast, a relay node delivers a packet to all destinations, so the delay is the time for a relay node to deliver the packet to all destinations. For a given destination, there are nx sources competing for the relay bandwidth λ, so there are nx sub-queues and the service rate of each sub-queue is λ/nx; in other words, the pair-wise contact rate is effectively reduced to λ/nx. A relay node must deliver a packet to all nd destinations. To find the delay, we can draw a Markov chain in which each state is the number of destinations that do not yet have the packet. With nd such destinations remaining, the transition rate is nd·λ/nx; with nd-1 remaining, it becomes (nd-1)·λ/nx, and so on. The average delay is the average time to reach state 0, which is the sum of the means of these exponential random variables:
Avg. delay = Σ_{k=1..nd} nx/(kλ) = (nx/λ) Σ_{k=1..nd} 1/k ≈ (nx/λ)(log nd + γ)
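The Markov-chain argument can be checked with a short Monte Carlo, sketched below under the assumption that each transition time is exponential with the rates shown; the closed form is the harmonic-sum expression from the slide. Names and parameter values are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def relaycast_delay_mc(nd, lam, nx, trials=5000, rng=rng):
    """Simulate the delivery chain: with k receivers still missing the packet,
    the next delivery happens after an Exp(k * lam/nx) holding time."""
    rate = lam / nx
    holding = rng.exponential(1.0 / (np.arange(1, nd + 1) * rate),
                              size=(trials, nd))
    return holding.sum(axis=1).mean()

def relaycast_delay_formula(nd, lam, nx):
    """Closed form from the slide: (nx/lam) * sum_{k=1}^{nd} 1/k
    ~ (nx/lam) * (log nd + Euler gamma)."""
    return (nx / lam) * sum(1.0 / k for k in range(1, nd + 1))

nd, lam, nx = 20, 0.01, 5
print(relaycast_delay_mc(nd, lam, nx))       # Monte Carlo estimate
print(relaycast_delay_formula(nd, lam, nx))  # 500 * H(20) ~ 1799
```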

9 RelayCast: Buffer Requirement
Little’s law: buffer = (rate) x (delay) Buffer per source = Θ(nnd) Avg. sub-queue length: λ/nx*nx/λ = Θ(1) by Little’s law Each src has nd dest: packet is replicated to nd copies Per src buffer at a relay = Θ(nd)  n relays: buffer = Θ(nnd) Buffer upper bound per source = Θ(n2) To D1 D1 S1 S2 S3 λ/nx nx competing srcs to D1 nx sub-queues to D1 To nd dest nx srcs to D1 λ/nx λ λ D1 Buffer requirement of RelayCast can be calculated using Little’s law. Buffer size is equal to rate times delay. Again, the relay node has this queue structure. For any destination, we see that there are nx sub-queues. Now for source S1, by little’s law, the average sub-queue length to D1 is simply λ/nx*nx/λ that is theta(1). Each incoming packet has to replicated nd times, we see that per source buffer at a relay node is Θ(nd). There are n such relay nodes, and buffer per source is Θ(n*nd). From this, we also see that the buffer upper bound per source is Θ(n2). λ/nx λ To D1 λ nx srcs to D2 D2 λ/nx λ To D2 λ nx srcs to D3 D3 To D3 Relay Node

10 Comparison with Previous Results
Assumptions: n fixed; r = √(log n / n) for G&K, r = 1/√n for 2-hop relay. Throughput scaling with ns = Θ(n): nx = ns·nd/n = nd, so RelayCast throughput = Θ(1/nd). This is better than conventional multi-hop multicast (with r = √(log n / n)).
(Figure: per-node throughput with ns = Θ(n) versus the number of multicast receivers nd per source, comparing Gupta & Kumar 2001; Grossglauser & Tse 2001 (delay tolerant networks); Shakkottai, Liu & Srikant; Li, Tang & Frieder 2007; Tavli; Keshavarz-Haddad, Ribeiro & Riedi 2006; and RelayCast (delay tolerant networks).)
The capacity scaling behavior of an ad hoc wireless network can be summarized as follows. The x-axis shows the number of multicast receivers per source, and the y-axis shows the per-node throughput when the number of sources is n. When the number of multicast receivers is Θ(1), we have Gupta & Kumar's result for connected static wireless networks and Grossglauser & Tse's result for delay tolerant networks. As the number of multicast receivers per source increases, the throughput decreases, as described by Shakkottai and Li. When the number of multicast receivers exceeds n/log n, multicast becomes network-wide flooding, and the throughput stays the same regardless of the number of multicast receivers. In a delay tolerant network, RelayCast shows the scaling behavior plotted in the figure, which clearly has better scaling than conventional multi-hop multicast in a connected wireless network.

11 Simulation Results: RelayCast throughput with varying # of relay nodes
For a DTN with fixed λ, throughput increases linearly: RelayCast throughput = Θ(nλ) for ns·nd ≤ n. As the number of nodes increases, interference comes in and the throughput tapers off.
Setup: QualNet v3.9.5; 5000 m x 5000 m network; random waypoint mobility; 802.11b at 2 Mbps; 250 m radio range; traffic: ns = 1, nd = number of relay nodes. (Figure: throughput per node (kbps) versus the number of relay nodes in the network.)
To validate our model, we ran several simulations and measured throughput and delay. First, we measured throughput while varying the number of relay nodes in the network. Our model predicts that, for a DTN with fixed λ, throughput increases linearly. In the figure, the x-axis shows the number of relay nodes and the y-axis shows the per-node throughput. The throughput increases linearly at first, but as the number of nodes grows, interference comes in and the throughput tapers off.

12 Simulation Results: Comparison with a conventional multicast protocol
RelayCast is scalable, whereas ODMRP's throughput decreases significantly as the number of sources increases. The cost is delay: RelayCast ~ 2000 s vs. ODMRP < 1 s. Replication reduces delay at the cost of lower throughput.
Setup: 100 nodes in total; traffic: 5 random destinations per source. (Figure: per-node throughput (kbps) versus the number of sources.)
We compare RelayCast with ODMRP, a conventional multi-hop multicast routing protocol. The x-axis shows the number of sources, and the y-axis shows the per-node throughput; each source has 5 destinations. The graph shows that as the number of multicast sources increases, the throughput of ODMRP drops sharply. For example, with 20 sources ODMRP's throughput is less than 10 kbps, whereas RelayCast can sustain several hundred kbps. The caveat is that this throughput improvement comes at the cost of increased delay: the delay of RelayCast is about 2000 s, whereas that of ODMRP is less than 1 s. By replicating packets to multiple relay nodes we can reduce the average delay, but this in turn reduces throughput, as seen in the figure; if we keep increasing the replication, the behavior becomes more or less like ODMRP. The details of the throughput/delay trade-off are in the paper. The results also suggest that DTN routing can be exploited for multicast congestion control when the application is delay tolerant.

13 Simulation Results: Average delay with varying # of receivers
RelayCast delay = Θ((nx/λ)(log nd + γ)): delay increases as the number of receivers increases.
(Figure: average delay (s) versus the number of multicast receivers, with curves for node speeds of 20 m/s and 30 m/s.)
We showed that the RelayCast delay is a function of the number of multicast receivers. Here, the x-axis shows the number of multicast receivers and the y-axis shows the average delay. The graph confirms that the average delay increases with the number of multicast receivers. We also see that a higher speed gives a lower average delay, because the meeting rate λ is a function of network size and speed.

14 Conclusion
RelayCast provides reliable multicast even with disruptions and achieves the maximum throughput bound of DTN multicast routing. DTN routing protocol design and comparison must consider the throughput/delay/buffer trade-offs. Future work: analysis of other DTN routing strategies; impact of correlated motion patterns, i.e., inter-contact time distributions with a power-law head and an exponential tail.
We showed that RelayCast provides reliable multicast even with disruptions and also achieves the maximum throughput bound of DTN multicast routing. One insight is that DTN routing protocol design must consider the throughput/delay/buffer trade-offs, and when we compare different DTN routing protocols we should consider these trade-offs systematically. As part of future work, we are analyzing other DTN routing strategies and studying the impact of correlated motion patterns, where the inter-contact time has a two-phase distribution with a power-law head and an exponential tail.

