
Incrementally Improving Lookup Latency in Distributed Hash Table Systems. Hui Zhang, Ashish Goel, Ramesh Govindan.


1 Incrementally Improving Lookup Latency in Distributed Hash Table Systems. Hui Zhang (University of Southern California), Ashish Goel (Stanford University), Ramesh Govindan (University of Southern California).

2 Outline (Sigmetrics'03, 6/14/2003)
- Latency stretch problem in Distributed Hash Table (DHT) systems, with Chord as an example
- Two "latency stretch" theorems
- Lookup-Parasitic Random Sampling (LPRS)
- Simulation & Internet measurement results
- Conclusion & future work

3 DHT systems
A new class of peer-to-peer routing infrastructures:
- CAN, Chord, Pastry, Tapestry, etc.
Support hash-table-like functionality on Internet-like scale:
- a global key space: each data item is a key in the space, and each node is responsible for a portion of the key space;
- given a key, map it onto a node.
Our research results apply to frugal DHT systems:
- the search space for the key decreases by a constant factor after each lookup hop;
- examples: Chord, Pastry, Tapestry.

4 Chord - key space
[Figure: a Chord network with 8 nodes and an 8-bit key space; node IDs 0, 32, 64, 96, 128, 160, 192, 224 on the ring, with a data item at key 120.]

5 Chord - routing table setup
[Figure: a Chord network with N=8 nodes and an m=8-bit key space; node 0's ranges are range 1 = [1,2), range 2 = [2,4), range 3 = [4,8), range 4 = [8,16), range 5 = [16,32), range 6 = [32,64), range 7 = [64,128), range 8 = [128,256), each with a data pointer.]
In node i's routing table, one entry is created to point to the first node in its j-th range [i+2^(j-1), i+2^j), 1 ≤ j ≤ m.
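The range computation on this slide can be sketched in a few lines of Python (a sketch of ours, not code from the talk):

```python
def finger_ranges(node_id, m):
    """Return node_id's m ranges [node_id + 2^(j-1), node_id + 2^j),
    j = 1..m, with identifiers taken mod 2^m (the Chord ring wraps)."""
    space = 2 ** m
    return [((node_id + 2 ** (j - 1)) % space, (node_id + 2 ** j) % space)
            for j in range(1, m + 1)]

# For node 0 in an 8-bit key space this reproduces the ranges above:
# [1,2), [2,4), [4,8), [8,16), [16,32), [32,64), [64,128), [128,256).
```

Note that the last range [128, 256) wraps around the ring, so its (exclusive) endpoint comes out as 0 mod 256.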

6 Latency stretch in Chord
[Figure: the same Chord network with N=8 nodes and an m=8-bit key space; a lookup for data key 120 traverses several overlay hops, each of which may cross long physical links, e.g. between the U.S.A. and China.]

7 Latency stretch [Ratnasamy et al. 2001]
stretch = (average latency for each lookup on the overlay topology) / (average latency on the underlying topology)
In Chord: Θ(log N) hops per lookup on average, hence Θ(log N) stretch in the original Chord.
Could Chord do better, e.g., O(1) stretch, without much change?

8 Our contributions
Theory
- The latency expansion characteristic of the underlying network topology decides latency optimization in frugal DHT systems.
- Exponential latency expansion: bad news.
- Power-law latency expansion: good news.
System
- Lookup-Parasitic Random Sampling (LPRS), an incremental latency optimization technique.
- Achieves O(1) stretch under power-law latency topologies.
Internet measurement
- The Internet router-level topology resembles power-law latency expansion.

9 Latency expansion
Let N_u(x) denote the number of nodes in the network G that are within latency x of node u.
- Power-law latency expansion: N_u(x) grows (i.e., "expands") proportionally to x^d, for all nodes u. Examples: ring (d=1), mesh (d=2).
- Exponential latency expansion: N_u(x) grows proportionally to α^x for some constant α > 1. Examples: random graphs.
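The two examples can be checked by counting nodes directly, assuming unit-latency links (a sketch; the helper names are ours):

```python
def ring_nodes_within(x):
    """N_u(x) on a large ring with unit-latency links: u itself plus
    x nodes on each side, so growth is linear in x (d = 1)."""
    return 2 * x + 1

def mesh_nodes_within(x):
    """N_u(x) on a large 2-D mesh: lattice points at Manhattan distance
    <= x form a diamond of 2x^2 + 2x + 1 nodes, growing like x^2 (d = 2)."""
    return 2 * x * x + 2 * x + 1

# Doubling x roughly doubles N_u(x) on the ring but roughly quadruples it
# on the mesh, matching the x^d law with d = 1 and d = 2 respectively.
```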

10 "Latency-stretch" theorem - I
"Bad news" theorem: if the underlying topology G is drawn from a family of graphs with exponential latency expansion, then the expected latency of Chord is Ω(L log N), where L is the expected latency between pairs of nodes in G.
[Figure: the worst-case scenario, with Chord nodes at equal distances along physical links between routers.]

11 "Latency-stretch" theorem - II
"Good news" theorem: if (1) the underlying topology G is drawn from a family of graphs with d-power-law latency expansion, and (2) each node u in the Chord network samples (log N)^d nodes in each range uniformly at random and keeps the pointer to the nearest node for future routing, then the expected latency of a request is O(L), where L is the expected latency between pairs of nodes in G.

12 Two remaining questions
- How does each node efficiently achieve (log N)^d samples from each range?
- Do real networks have the power-law latency expansion characteristic?

13 Uniform sampling in terms of ranges
For a routing request with Θ(log N) hops, the final node t will be a random node in Θ(log N) different ranges.
[Figure: routing path from node 0 (the request initiator), through intermediate nodes 1, 2, ..., to node t (the request terminator); t lies in successively smaller ranges of each hop, e.g. range d of node 0, range x (< d) of node 1, range y (< x) of node 2.]

14 Lookup-Parasitic Random Sampling
1. Recursive lookup.
2. Each intermediate hop appends its IP address to the lookup message.
3. When the lookup reaches its target, the target informs each listed hop of its identity.
4. Each intermediate hop then sends one (or a small number) of pings to get a reasonable estimate of the latency to the target, and updates its routing table accordingly.
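The four steps can be sketched as follows (a minimal sketch with hypothetical names; a real Chord implementation would piggyback this on its lookup messages):

```python
class Node:
    """Toy Chord node: keeps, per range, the closest sampled pointer."""
    def __init__(self, ident, m):
        self.id = ident
        self.m = m
        self.finger = {}  # range index -> (node id, estimated latency)

    def range_index(self, key):
        """i such that key lies in [self.id + 2^(i-1), self.id + 2^i) mod 2^m."""
        dist = (key - self.id) % (2 ** self.m)
        return dist.bit_length()

def lprs_update(path, target, latency):
    """Steps 2-4: the target informs every hop recorded on the lookup path;
    each hop pings the target once and keeps the pointer if it improves
    the corresponding routing-table entry."""
    for hop in path:
        rtt = latency(hop.id, target.id)      # one ping (latency estimate)
        i = hop.range_index(target.id)        # range the target falls into
        current = hop.finger.get(i)
        if current is None or rtt < current[1]:
            hop.finger[i] = (target.id, rtt)  # nearer sample wins
```

Each lookup thus gives every intermediate hop one uniform sample "for free", which is how the required samples per range accumulate over time.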

15 LPRS-Chord: convergence time
[Figure: convergence time.]

16 LPRS-Chord: topology with power-law expansion
[Figure: stretch on a ring topology, measured at time 2 log N.]

17 What is the latency expansion characteristic of the Internet?
- at the end-user level
- at the router level

18 Internet router-level topology: latency measurement
Approximate link latency by geographical latency:
- assign geo-locations to nodes using Geotrack [Padmanabhan 2001].
A large router-level topology dataset:
- 320,735 nodes, mapped to 603 distinct cities all over the world;
- 92,824 node pairs are sampled to tractably compute the latency expansion of this large topology.

19 Internet router-level topology: latency expansion

20 LPRS-Chord on router-level topology
[Figure: stretch on the router-level subgraphs, measured at time 2 log N.]

21 Conclusion
LPRS has significant practical applicability as a general latency reduction technique for frugal DHT systems.
Future work:
- studying the interaction of the LPRS scheme with the dynamics of P2P systems.

22 Thank you!

23 Backup slides

24 A simple random sampling solution
[Figure: a Chord network with an m-bit key space (keys 0 to 2^m - 1); node 0 holds pointers at distances 2^(m-1) and 2^(m-2), with distance measurements to sampled nodes.]


26 Term definition (II)
Range: for a given node in a Chord overlay with ID j, its i-th range R_i(j) is the interval [j+2^(i-1), j+2^i) on the key space, where 1 ≤ i ≤ m.
Frugal routing:
1. after each hop, the search space for the target reduces by a constant factor, and
2. if w is an intermediate node in the route, v is the destination, and v ∈ R_i(w), then the node after w in the route depends only on w and i.

27 LPRS-Chord: simulation methodology
Phase 1: N nodes join the network one-by-one.
Phase 2: each node on average inserts four documents into the network.
Phase 3: each node generates, on average, 3 log N data requests one-by-one.
- LPRS actions are enabled only in Phase 3.
- Performance measurement begins at Phase 3.

28 Comparison of 5 sampling strategies: definitions
Consider a lookup that is initiated by node x_0, then forwarded to nodes x_1, x_2, ..., and finally reaches the request terminator, node x_n:
1. Node x_i samples node x_n, 0 ≤ i < n.
2. Node x_n samples nodes x_0, ..., x_(n-1).
3. Node x_i samples node x_(i-1), 1 ≤ i ≤ n.
4. Node x_0 samples node x_n.
5. Node x_i samples node x_0, 0 < i ≤ n.
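The five strategies differ only in which (sampler, sampled) pairs a single lookup generates; this can be made concrete in a short sketch (our own helper, with node IDs standing in for nodes):

```python
def sampled_pairs(path, strategy):
    """(sampler, sampled) pairs produced by one lookup along
    path = [x_0, x_1, ..., x_n] under each strategy above."""
    n = len(path) - 1
    if strategy == 1:   # every hop samples the terminator (what LPRS does)
        return [(path[i], path[n]) for i in range(n)]
    if strategy == 2:   # the terminator samples every earlier hop
        return [(path[n], path[i]) for i in range(n)]
    if strategy == 3:   # each hop samples its predecessor
        return [(path[i], path[i - 1]) for i in range(1, n + 1)]
    if strategy == 4:   # only the initiator samples the terminator
        return [(path[0], path[n])]
    if strategy == 5:   # every later hop samples the initiator
        return [(path[i], path[0]) for i in range(1, n + 1)]
    raise ValueError("strategy must be 1..5")
```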

29 Comparison of 5 sampling strategies: simulation result

30 Zipf-ian document popularity

31 Impact of skewed request distributions

