
Slide 1: Exploiting Route Redundancy via Structured Peer-to-Peer Overlays
Ben Y. Zhao, Ling Huang, Jeremy Stribling, Anthony D. Joseph, and John D. Kubiatowicz
University of California, Berkeley. ICNP 2003.

Slide 2: Challenges Facing Network Applications
- Network connectivity is not reliable
  - Disconnections are frequent in the wide-area Internet
  - IP-level repair is slow: wide-area (BGP) ~3 minutes; local-area (IS-IS) ~5 seconds
- Next-generation network applications
  - Mostly wide-area: streaming media, VoIP, B2B transactions
  - Low tolerance of delay, jitter, and faults
- Our work: a transparent, resilient routing infrastructure that adapts to faults not in seconds, but in milliseconds

Slide 3: Talk Overview
- Motivation
- Why structured routing
- Structured peer-to-peer overlays
- Mechanisms and policy
- Evaluation
- Summary

Slide 4: Routing in "Mesh-like" Networks
- Previous work has shown reasons for long convergence [Labovitz00, Labovitz01]
- MinRouteAdver timer
  - Necessary to aggregate updates from all neighbors; commonly set to 30 seconds
  - Contributes to the lower bound of BGP convergence time
- Internet is becoming more mesh-like [Kaat99, Labovitz99], which worsens BGP convergence behavior
- Question: can convergence be faster in the context of structured routing?

Slide 5: Resilient Overlay Networks (MIT)
- Fully connected mesh allows each node full knowledge of the network
  - Fast, independent calculation of routes
  - Nodes can construct any path: maximum flexibility
- Cost of flexibility
  - Protocol needs to choose the "right" route/nodes
  - O(n) state per node: each node monitors n - 1 paths
  - O(n²) total path monitoring is expensive
[Diagram: fully connected mesh between source S and destination D]

Slide 6: Leveraging Structured Peer-to-Peer Overlays
- Key-based routing (KBR) (IPTPS 03)
  - Large sparse ID space N (160 bits: 0 to 2^160)
  - Nodes in the overlay network have nodeIDs ∈ N
  - Given some key k ∈ N, the overlay deterministically maps k to its root node (a live node in the network): route message to root(k)
- Distributed hash tables (DHTs) are an interface on top of KBR
  - The key is leveraging the underlying routing mesh
[Diagram: ID ring starting at 0, with a source routing key k to root(k)]
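As a concrete illustration, here is a minimal sketch of the KBR mapping, assuming a successor-style rule (root(k) = first live nodeID clockwise from k). Tapestry actually uses prefix-digit routing, but the deterministic key-to-root property shown here is the same idea; `to_id` and the node names are hypothetical.

```python
import hashlib
from bisect import bisect_left

ID_BITS = 160
ID_SPACE = 2 ** ID_BITS

def to_id(name: str) -> int:
    """Hash an arbitrary name into the 160-bit ID space N."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16)

class Overlay:
    """Toy KBR: root(k) = first live nodeID clockwise from k."""
    def __init__(self, node_ids):
        self.nodes = sorted(node_ids)

    def root(self, key: int) -> int:
        i = bisect_left(self.nodes, key % ID_SPACE)
        return self.nodes[i % len(self.nodes)]  # wrap around the ring

# Every node computes the same root for a given key, so routing a
# message to root(k) is deterministic regardless of where it starts.
overlay = Overlay([to_id(f"node{i}") for i in range(16)])
k = to_id("some-key")
print(hex(overlay.root(k)))
```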

Slide 7: Proximity Neighbor Selection (PNS)
- PNS = network-aware overlay construction
  - Within routing constraints, choose the neighbors closest in network distance (latency); see the sketch after this list
  - Generally reduces the number of IP hops
- Important for routing
  - Reduces latency
  - Reduces susceptibility to faults: fewer IP links means a smaller chance of link/router failure
  - Reduces overall network bandwidth utilization
- We use Tapestry to demonstrate our design
  - A P2P protocol with PNS overlay construction
  - Topology-unaware P2P protocols will likely perform worse
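A minimal sketch of the PNS rule for filling one routing-table slot, assuming Tapestry-style prefix constraints over hex-digit IDs and an already-measured latency per candidate; candidate discovery and the latency probes themselves are elided, and all names here are hypothetical.

```python
def pns_pick(local_id: str, level: int, digit: str,
             candidates: list, latency: dict):
    """Among candidates satisfying the routing constraint for slot
    (level, digit) -- share the first `level` digits with the local ID
    and have `digit` next -- keep the one closest in network distance."""
    eligible = [n for n in candidates
                if n[:level] == local_id[:level] and n[level] == digit]
    return min(eligible, key=lambda n: latency[n], default=None)

# Slot (level=1, digit='7') for local node "42ab": both "47c1" and
# "47f9" qualify; PNS keeps the lower-latency one.
cands = ["47c1", "47f9", "40d2"]
lat = {"47c1": 80.0, "47f9": 12.0, "40d2": 30.0}
print(pns_pick("42ab", 1, "7", cands, lat))  # -> "47f9"
```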

Slide 8: System Architecture
- Locate a nearby overlay proxy
- Establish an overlay path to the destination host
- The overlay routes traffic resiliently
[Diagram: overlay nodes layered above the Internet]

Slide 9: Traffic Tunneling
- Legacy nodes A and B (A, B are IP addresses) each register with a nearby proxy
- Store the mapping from each end host's IP to its proxy's overlay ID in the overlay: put(hash(A), P'(A)), put(hash(B), P'(B))
- To reach B, A's side looks up get(hash(B)) and receives P'(B)
- Similar to the approach in the Internet Indirection Infrastructure (I3)
[Diagram: legacy nodes A and B tunneling through proxies P'(A) and P'(B) across the structured peer-to-peer overlay]
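A minimal sketch of the registration and lookup step, assuming a plain put/get interface on the overlay; in the real system the mapping is stored at root(hash(B)) on some overlay node, and the IP addresses and names below are hypothetical.

```python
import hashlib

def h(name: str) -> int:
    """Hash a name (here, an IP address string) into the overlay key space."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16)

class ToyDHT:
    """Stand-in for the overlay's put/get interface; real storage
    would live at root(key) on a live overlay node."""
    def __init__(self):
        self.store = {}
    def put(self, key: int, value):
        self.store[key] = value
    def get(self, key: int):
        return self.store.get(key)

dht = ToyDHT()
# Registration: B's proxy publishes the mapping put(hash(B), P'(B)).
B, proxy_overlay_id_B = "10.0.0.2", h("proxy-for-B")
dht.put(h(B), proxy_overlay_id_B)
# Connection setup: A's side resolves get(hash(B)) and tunnels there.
assert dht.get(h(B)) == proxy_overlay_id_B
```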

Slide 10: Tradeoffs of Tunneling via P2P
- Fewer neighbor paths to monitor per node: O(log n)
  - Large reduction in probing bandwidth: O(n) → O(log n)
  - Allows increased probing frequency: faster fault detection with low bandwidth consumption (a back-of-the-envelope sketch follows this list)
- Actively maintain path redundancy
  - Manageable for a "small" number of paths
  - Redirect traffic immediately when a failure is detected, eliminating on-the-fly calculation of new routes
  - Restore redundancy when a path fails
- End result: fast fault detection + precomputed paths = increased responsiveness to faults
- Cons: the overlay imposes routing stretch (more IP hops), generally < 2
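A back-of-the-envelope sketch of why O(log n) monitoring stays cheap, assuming a hypothetical 100-byte probe and reusing the 300 ms beacon period and 2 backup paths per entry quoted elsewhere in the talk; the 7 KB/s figure on the conclusion slide includes protocol overhead this ignores, so treat it only as an order-of-magnitude check.

```python
import math

n = 300            # overlay size in nodes (from the talk)
probe_bytes = 100  # assumed probe packet size (hypothetical)
period_s = 0.3     # 300 ms beacon period (from the talk)
backups = 2        # backup paths per routing entry (from the simulation)

# Each node monitors ~log2(n) routing entries, primary + backups each.
paths = math.ceil(math.log2(n)) * (1 + backups)
bw = paths * probe_bytes / period_s  # bytes/sec per node
print(f"{paths} monitored paths, ~{bw/1024:.1f} KB/s probing bandwidth")
# A RON-style full mesh would instead probe n - 1 = 299 paths per node.
```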

Slide 11: Some Details
- Efficient fault detection
  - Use soft state to periodically probe log(n) neighbor paths; a "small" number of routes means reduced bandwidth
  - Exponentially weighted moving average (EWMA) for link quality estimation, to avoid route flapping due to short-term loss artifacts (sketched below):
    loss rate L_n = (1 - α)·L_{n-1} + α·p, where p is the instantaneous loss rate and α is the hysteresis factor
- Maintaining backup paths
  - Each hop has a flexible routing constraint; create and store backup routes at node insertion
  - Restore redundancy via "intelligent" gossip after failures
  - Simple policies to choose among redundant paths
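A minimal sketch of the EWMA link-quality estimator implementing the formula above; the probe accounting and the particular value of α are assumptions.

```python
class LinkEstimator:
    """EWMA loss estimator: L_n = (1 - alpha) * L_{n-1} + alpha * p.
    A small alpha damps short-term loss spikes (hysteresis)."""
    def __init__(self, alpha: float = 0.2):
        self.alpha = alpha   # hysteresis factor (assumed value)
        self.loss = 0.0      # current smoothed loss estimate L_n

    def update(self, probes_sent: int, acks_received: int) -> float:
        p = 1.0 - acks_received / probes_sent   # instantaneous loss rate
        self.loss = (1 - self.alpha) * self.loss + self.alpha * p
        return self.loss

# One lossy probe window barely moves the estimate; sustained loss does.
est = LinkEstimator()
est.update(10, 5)   # 50% instantaneous loss -> smoothed estimate ~0.10
print(f"after one bad window: {est.loss:.2f}")
```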

Slide 12: First Reachable Link Selection (FRLS)
- Use estimated loss results to choose the shortest "usable" path (sketched below)
  - Sort next-hop paths by latency
  - Use the shortest path whose link quality exceeds a minimum threshold T
- Correlated failures
  - Reduce with intelligent topology construction
  - Key is to leverage the redundancy available
[Diagram: routing entry with primary and backup next hops, node IDs 2046, 2281, 2530, 2299, 2274, 2286, 2225, 1111]
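A minimal sketch of FRLS under the stated rule, assuming each candidate next hop carries a measured latency and a quality estimate (e.g., one minus the EWMA loss from the previous sketch); T = 0.7 matches the 70% threshold used in the PlanetLab experiments, and the example values are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class NextHop:
    node_id: int
    latency_ms: float
    quality: float   # e.g., 1 - EWMA loss estimate

def frls(candidates: list, T: float = 0.7) -> Optional[NextHop]:
    """First Reachable Link Selection: among next hops sorted by
    latency, take the first whose quality clears threshold T."""
    for hop in sorted(candidates, key=lambda h: h.latency_ms):
        if hop.quality > T:
            return hop
    return None   # all paths below threshold (see constrained multicast)

hops = [NextHop(2046, 12.0, 0.55),   # shortest, but too lossy
        NextHop(2281, 20.0, 0.95),   # first "usable" path -> chosen
        NextHop(2530, 31.0, 0.99)]
assert frls(hops).node_id == 2281
```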

Slide 13: Evaluation
- Metrics for evaluation
  - How much routing resiliency can we exploit?
  - How fast can we adapt to faults?
  - What is the overhead of routing around a failure? (proportional increase in end-to-end latency and in end-to-end bandwidth used)
- Experimental platforms
  - Event-based simulations on transit-stub topologies; data collected over different 5000-node topologies
  - PlanetLab measurements: microbenchmarks on responsiveness; bandwidth measurements from 200+ node overlays; multiple virtual nodes run per physical machine

Slide 14: Exploiting Route Redundancy (Simulation)
- Simulation of Tapestry with 2 backup paths per routing entry
- Transit-stub topology shown; results from TIER and AS graphs are similar

Slide 15: Responsiveness to Faults (PlanetLab)
- Response time increases linearly with the probe period
- Minimum link quality threshold T = 70%; 20 runs per data point
[Plot: response time vs. probe period; a 300 ms probe period gives roughly 660 ms response time]

Slide 16: Link Probing Bandwidth (PlanetLab)
- Medium-sized routing overlays incur low probing bandwidth
- Bandwidth increases logarithmically with overlay size

Slide 17: Related Work
- Redirection overlays: Detour (IEEE Micro 99), Resilient Overlay Networks (SOSP 01), Internet Indirection Infrastructure (SIGCOMM 02), Secure Overlay Services (SIGCOMM 02)
- Topology estimation techniques: adaptive probing (IPTPS 03), peer-based shared estimation (Zhuang 03), Internet tomography (Chen 03), routing underlay (SIGCOMM 03)
- Structured peer-to-peer overlays: Tapestry, Pastry, Chord, CAN, Kademlia, SkipNet, Viceroy, Symphony, Koorde, Bamboo, X-Ring, ...

Slide 18: Conclusion
- Benefits of structure outweigh costs
  - Structured routing lowers path maintenance costs and allows "caching" of backup paths for quick failover
  - Tradeoff: can no longer construct arbitrary paths
  - Structured routing with low redundancy gets very close to ideal connectivity, while incurring low routing stretch
- Fast enough for highly interactive applications
  - 300 ms beacon period → response time < 700 ms
  - On overlay networks of 300 nodes, bandwidth cost is 7 KB/s
- Future work
  - Deploying a public routing and proxy service on PlanetLab
  - Examining the impact of network-aware topology construction and loss-sensitive probing techniques

Slide 19: Questions...
- Related websites:
  - Tapestry: http://www.cs.berkeley.edu/~ravenben/tapestry
  - Pastry: http://research.microsoft.com/~antr/pastry
  - Chord: http://lcs.mit.edu/chord
- Acknowledgements: thanks to Dennis Geels and Sean Rhea for their work on the BMark benchmark suite

Slide 20: Backup Slides

Slide 21: Another Perspective on Reachability
[Chart legend:]
- Portion of all pairwise paths where no failure-free path remains
- Portion of all paths where IP and FRLS both route successfully
- A path exists, but neither IP nor FRLS can locate it
- FRLS finds a path where short-term IP routing fails

Slide 22: Constrained Multicast
- Used only when all paths are below the quality threshold: send duplicate messages on multiple paths
- Leverage route convergence (duplicate suppression sketched below)
  - Assign unique message IDs and mark duplicates
  - Keep a moving window of IDs; recognize and drop duplicates
- Limitations
  - Assumes loss is not from congestion
  - Ideal for local-area routing
[Diagram: message duplicated across backup next hops, node IDs 2046, 2281, 2530, 2299, 2274, 2286, 2225, 1111]
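A minimal sketch of the duplicate-suppression step at a convergence point, assuming message IDs are unique per message and a fixed-size moving window; the window size and the duplication policy itself are assumptions.

```python
from collections import OrderedDict

class DuplicateFilter:
    """Moving window of recently seen message IDs; copies of a message
    that already arrived on another path are recognized and dropped."""
    def __init__(self, window: int = 1024):
        self.window = window
        self.seen = OrderedDict()          # insertion-ordered ID set

    def accept(self, msg_id: int) -> bool:
        if msg_id in self.seen:
            return False                   # duplicate: drop
        self.seen[msg_id] = True
        if len(self.seen) > self.window:
            self.seen.popitem(last=False)  # evict the oldest ID
        return True

f = DuplicateFilter()
assert f.accept(42)        # first copy is forwarded
assert not f.accept(42)    # copy arriving via a backup path is dropped
```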

Slide 23: Latency Overhead of Misrouting

Slide 24: Bandwidth Cost of Constrained Multicast

Slide 25: Challenge 2: Tunneling Application Traffic
- Basic idea
  - Tunnel "legacy" traffic via overlay proxies
  - Should be protocol-independent
- Desired properties
  - Traffic redirection transparent to end hosts
  - Stable mapping between a proxy and its legacy nodes
  - Incremental deployment
- Details (connection setup sketched below)
  - Making a connection to D: determine if D is reachable from the overlay; if so, retrieve D's overlay ID from storage in the overlay and redirect traffic through the overlay to that ID
  - Assign legacy nodes IDs in the overlay, so that routing to those IDs automatically reaches their respective proxies
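A minimal sketch of the connection-setup decision at a proxy; the callback names (`dht_get`, `overlay_route`, `ip_route`) are hypothetical stand-ins for the overlay lookup, overlay forwarding, and plain IP forwarding, and actual redirection would encapsulate the legacy packets.

```python
def connect_via_overlay(dst_ip: str, dht_get, overlay_route, ip_route):
    """Connection setup to destination D: check overlay reachability,
    resolve D's overlay ID, and redirect traffic through the overlay."""
    dst_overlay_id = dht_get(dst_ip)      # the get(hash(D)) lookup
    if dst_overlay_id is None:
        return ip_route(dst_ip)           # D not behind any proxy: plain IP
    # Routing to D's overlay ID automatically reaches D's proxy,
    # which delivers the traffic to D over IP.
    return overlay_route(dst_overlay_id)
```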

Slide 26: Assigning Overlay Identifiers
- A proxy close by in the network assigns a legacy node a non-random overlay ID
  - Assign the closest possible ID to the proxy's ID such that root(ID) = proxy
  - Chord/Tapestry: ID = proxyID - 1 (sketched below)
- Stable ID mapping
  - Probability of a new node "hijacking" a legacy node is very low
  - 1 million nodes, 1000 nodes per proxy, 160-bit names: P_h < 10^-40
[Diagram: ID ring with source node ID, proxy, and destination node ID]
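A minimal sketch of the assignment rule for a successor-based mapping (root(k) = first live nodeID at or after k, as in Chord); the per-proxy counter for handing out multiple legacy IDs is an assumption, as is the example proxy ID.

```python
ID_SPACE = 2 ** 160

class ProxyAssigner:
    """Assign legacy nodes IDs just below the proxy's own ID, so the
    proxy is root(ID) for every ID it hands out (successor mapping)."""
    def __init__(self, proxy_id: int):
        self.proxy_id = proxy_id
        self.assigned = 0

    def next_legacy_id(self) -> int:
        self.assigned += 1
        # proxyID - 1, proxyID - 2, ...: with 160-bit names, the odds of
        # a new random node landing inside this tiny gap (and "hijacking"
        # a legacy ID) are vanishingly small -- P_h < 10^-40 on the slide.
        return (self.proxy_id - self.assigned) % ID_SPACE

assigner = ProxyAssigner(proxy_id=0xDEADBEEF)
assert assigner.next_legacy_id() == 0xDEADBEEF - 1   # ID = proxyID - 1
```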

Slide 27: Unused Slides

Slide 28: Summary
- Highly adaptive to link failures
  - Fast adaptation time with low monitoring bandwidth cost: 300 ms beacon period, 7 KB/s (300 overlay nodes), response time < 700 ms
  - Low latency and bandwidth overhead for misrouting
- Limitation: less useful for congested IP links
  - Constrained multicast can exacerbate congestion
  - Can utilize more physically aware overlay construction
  - Issue: using loss as a congestion indicator; can be fixed with more intelligent protocols such as XCP

Slide 29: Possible Approaches
- Modify basic IP protocols (BGP, OSPF, IS-IS)
  - Millions of deployed routers: significant deployment challenge
- "Helper" infrastructures to assist protocols
  - Manipulate input to improve performance without modification
- Deploy intelligent protocols above the IP layer
  - Overlay approach taken by many
  - Relies on basic point-to-point IP routing

