EECS 262a Advanced Topics in Computer Systems
Lecture 21: Chord/Tapestry
November 7th, 2012
John Kubiatowicz and Anthony D. Joseph
Electrical Engineering and Computer Sciences, University of California, Berkeley

Today's Papers
– Chord: A Scalable Peer-to-peer Lookup Protocol for Internet Applications, Ion Stoica, Robert Morris, David Liben-Nowell, David R. Karger, M. Frans Kaashoek, Frank Dabek, and Hari Balakrishnan. Appears in IEEE/ACM Transactions on Networking, Vol. 11, No. 1, February 2003
– Tapestry: A Resilient Global-scale Overlay for Service Deployment, Ben Y. Zhao, Ling Huang, Jeremy Stribling, Sean C. Rhea, Anthony D. Joseph, and John D. Kubiatowicz. Appears in IEEE Journal on Selected Areas in Communications, Vol. 22, No. 1, January 2004
Today: Peer-to-Peer Networks
Thoughts?

Peer-to-Peer: Fully equivalent components
Peer-to-Peer has many interacting components
– View the system as a set of equivalent nodes
  » "All nodes are created equal"
– Any structure on the system must be self-organizing
  » Not based on physical characteristics, location, or ownership

Research Community View of Peer-to-Peer
Old View:
– A bunch of flaky high-school students stealing music
New View:
– A philosophy of systems design at extreme scale
– Probabilistic design when it is appropriate
– New techniques aimed at unreliable components
– A rethinking (and recasting) of distributed algorithms
– Use of physical, biological, and game-theoretic techniques to achieve guarantees

Early 2000: Why the hype???
File Sharing: Napster (+ Gnutella, KaZaA, etc.)
– Is this peer-to-peer? Hard to say.
– Suddenly people could contribute to an active global network
  » High coolness factor
– Served a high-demand niche: online jukebox
Anonymity/Privacy/Anarchy: FreeNet, Publius, etc.
– Libertarian dream of freedom from the man
  » (ISPs? Other 3-letter agencies?)
– Extremely valid concerns about censorship and privacy
– In search of copyright violators, the RIAA challenged rights to privacy
Computing: The Grid
– Scavenge the numerous free cycles of the world to do useful work
Management: Businesses
– Businesses have discovered extreme distributed computing
– Does P2P mean "self-configuring" from equivalent resources?
– Bound up in the "Autonomic Computing Initiative"?

The lookup problem
(figure: nodes N1 through N6 scattered across the Internet; a Publisher stores Key="title", Value=MP3 data… at one node, while a Client elsewhere issues Lookup("title") and must somehow find it)

Centralized lookup (Napster)
(figure: the publisher at N4 registers SetLoc("title", N4) with a central database DB; the client sends Lookup("title") to the DB, which returns N4, where Key="title", Value=MP3 data… is stored)
Simple, but O(N) state at the directory and a single point of failure

Flooded queries (Gnutella)
(figure: the client's Lookup("title") is flooded from neighbor to neighbor until it reaches N4, which holds Key="title", Value=MP3 data…)
Robust, but worst case O(N) messages per lookup

Routed queries (Freenet, Chord, Tapestry, etc.)
(figure: the client's Lookup("title") is forwarded along a small number of routing hops to N4, which holds Key="title", Value=MP3 data…)
Can be O(log N) messages per lookup (or even O(1))
Potentially complex routing state and maintenance

Chord IDs
Key identifier = 160-bit SHA-1(key)
Node identifier = 160-bit SHA-1(IP address)
Both are uniformly distributed
Both exist in the same ID space
How to map key IDs to node IDs?
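As a concrete illustration of these IDs, here is a small Python sketch (not the paper's code) that hashes keys and IP addresses into the same 160-bit space with SHA-1; the sample IP address is made up.

```python
import hashlib

ID_BITS = 160
ID_SPACE = 2 ** ID_BITS

def chord_id(text: str) -> int:
    """Map an arbitrary string (a key or an IP address) to a 160-bit ID."""
    digest = hashlib.sha1(text.encode("utf-8")).digest()   # 20 bytes = 160 bits
    return int.from_bytes(digest, "big") % ID_SPACE

key_id  = chord_id("title")            # key identifier
node_id = chord_id("169.229.60.61")    # node identifier (hash of IP address)
```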

Consistent hashing [Karger 97]
(figure: circular 160-bit ID space with nodes N32, N90, N105 and keys K5, K20, K80)
A key is stored at its successor: the node with the next higher ID
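A minimal sketch of the successor rule, using the small illustrative IDs from the figure rather than real 160-bit values:

```python
import bisect

def successor(node_ids, key_id, id_space=2**160):
    """Return the ID of the node responsible for key_id (its successor)."""
    ring = sorted(node_ids)
    i = bisect.bisect_left(ring, key_id % id_space)
    return ring[i % len(ring)]         # wrap around past the highest node ID

nodes = [32, 90, 105]                  # N32, N90, N105 from the figure
assert successor(nodes, 5)  == 32      # K5  -> N32
assert successor(nodes, 20) == 32      # K20 -> N32
assert successor(nodes, 80) == 90      # K80 -> N90
```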

11/7/201212cs262a-S12 Lecture-21 “N90 has K80” Basic lookup N32 N90 N105 N60 N10 N120 K80 “Where is key 80?”

Simple lookup algorithm

  Lookup(my-id, key-id)
    n = my successor
    if my-id < n < key-id           // comparisons are on the circular ID space
      call Lookup(key-id) on node n     // next hop
    else
      return my successor               // done

Correctness depends only on successors
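Below is a minimal Python sketch of this successor-only lookup, assuming a `Node` object with `id`, `successor`, and `store` fields (names chosen here for illustration, not taken from the paper). The circular interval test that the one-line condition glosses over is made explicit in `between`.

```python
def between(x, a, b):
    """True if x lies in the half-open ring interval (a, b], with wrap-around."""
    if a < b:
        return a < x <= b
    return x > a or x <= b             # interval wraps past zero

class Node:
    def __init__(self, node_id):
        self.id = node_id
        self.successor = self          # fixed up at join time
        self.store = {}                # key-id -> value pairs this node holds

    def lookup(self, key_id):
        if between(key_id, self.id, self.successor.id):
            return self.successor                  # done: successor owns the key
        return self.successor.lookup(key_id)       # next hop around the ring
```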

"Finger table" allows log(N)-time lookups
(figure: node N80 with fingers covering ½, ¼, 1/8, 1/16, 1/32, 1/64, and 1/128 of the ring)

Finger i points to successor of n+2^i
(figure: same ring as before; one of N80's finger targets n+2^i has no node of its own, so that finger points to its successor N120)
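A sketch of finger-table construction under this rule, reusing the `successor()` helper above and assuming global knowledge of all node IDs purely for illustration (real Chord fills its fingers via lookups):

```python
ID_BITS = 160

def build_fingers(n, all_node_ids):
    """finger[i] = first node that succeeds (n + 2^i) mod 2^160."""
    return [successor(all_node_ids, (n + 2**i) % 2**ID_BITS)
            for i in range(ID_BITS)]
```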

Lookup with fingers

  Lookup(my-id, key-id)
    look in local finger table for
      highest node n such that my-id < n < key-id
    if n exists
      call Lookup(key-id) on node n     // next hop
    else
      return my successor               // done
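A hedged Python sketch of the finger-based lookup, extending the `Node` class above with a `fingers` list (an assumption for illustration, not the authors' implementation):

```python
class FingerNode(Node):
    def __init__(self, node_id):
        super().__init__(node_id)
        self.fingers = []                # FingerNode references, low to high

    def lookup(self, key_id):
        if between(key_id, self.id, self.successor.id):
            return self.successor                        # done
        for f in reversed(self.fingers):                 # highest finger first
            # strict my-id < f < key-id, as on the slide
            if f.id != key_id and between(f.id, self.id, key_id):
                return f.lookup(key_id)                  # next hop
        return self.successor.lookup(key_id)             # fall back to successor
```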

Lookups take O(log N) hops
(figure: a Lookup(K19) on a ring with N5, N10, N20, N32, N60, N80, N99, N110; each finger hop roughly halves the remaining distance until the query lands at N20, which holds K19)

Joining: linked list insert
(figure: existing nodes N25 and N40, with N40 holding K30 and K38; new node N36 issues 1. Lookup(36) to find its place on the ring)

Join (2)
(figure: 2. N36 sets its own successor pointer to N40)

Join (3)
(figure: 3. Keys are copied from N40 to N36; K30 now also lives at N36, while K38 stays at N40)

Join (4)
(figure: 4. N25's successor pointer is set to N36)
Update finger pointers in the background
Correct successors produce correct lookups
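The four join steps can be sketched in the same style; the `store` field and the `between` helper come from the earlier sketches, and this function is illustrative, not the paper's protocol code:

```python
def join(new_node, bootstrap):
    succ = bootstrap.lookup(new_node.id)             # 1. Lookup(new node's id)
    new_node.successor = succ                        # 2. set own successor pointer
    moved = {k: v for k, v in succ.store.items()     # 3. copy keys that now map
             if not between(k, new_node.id, succ.id)}  #   to the new node
    new_node.store.update(moved)
    # 4. The predecessor's successor pointer and everyone's fingers are fixed
    #    lazily in the background; correct successors alone already give
    #    correct lookups.
```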

Failures might cause incorrect lookup
(figure: N10 issues Lookup(90), which reaches N80; N80's next few successors N85, N102, N113 have failed, leaving only N120)
N80 doesn't know its correct successor, so the lookup is incorrect

Solution: successor lists
Each node knows its r immediate successors
– After a failure, it will know the first live successor
– Correct successors guarantee correct lookups
– Guarantee is with some probability
Many systems talk about a "leaf set"
– The leaf set is a set of nodes around the "root" node that can handle all of the data/queries that the root node might handle
When a node fails:
– Leaf set can handle queries for the dead node
– Leaf set is queried to retrieve missing data
– Leaf set is used to reconstruct a new leaf set
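A minimal sketch of failover with a successor list; the `successor_list` field and the `is_alive` liveness check are assumptions for illustration:

```python
R = 4   # number of immediate successors each node keeps (r on the slide)

def repair_successor(node, is_alive):
    """After a failure, advance to the first live entry in the successor list."""
    for candidate in node.successor_list[:R]:
        if is_alive(candidate):
            node.successor = candidate
            return candidate
    raise RuntimeError("all r successors failed simultaneously")
```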

Lookup with Leaf Set
(figure: prefix-routing example; a lookup ID is routed from the source through nodes matching prefixes 0…, 10…, 110…, 111…, and the response returns to the source)
Assign IDs to nodes
– Map hash values to the node with the closest ID
Leaf set is successors and predecessors
– All that's needed for correctness
Routing table matches successively longer prefixes
– Allows efficient lookups

Is this a good paper?
What were the authors' goals?
What about the evaluation/metrics?
Did they convince you that this was a good system/approach?
Were there any red flags?
What mistakes did they make?
Does the system/approach meet the "Test of Time" challenge?
How would you review this paper today?

Decentralized Object Location and Routing: DOLR
The core of Tapestry
Routes messages to endpoints
– Both nodes and objects
Virtualizes resources
– Objects are known by name, not location

Routing to Data, not endpoints! Decentralized Object Location and Routing
(figure: the DOLR layer routes messages addressed to object GUIDs, e.g. GUID1 and GUID2, to whichever nodes currently host those objects)

DOLR Identifiers
ID space for both nodes and endpoints (objects): 160-bit values with a globally defined radix (e.g. hexadecimal, giving 40-digit IDs)
– Each node is randomly assigned a nodeID
– Each endpoint is assigned a Globally Unique IDentifier (GUID) from the same ID space
– Typically done using SHA-1
Applications can also have IDs (application-specific), which are used to select an appropriate process on each node for delivery

DOLR API
PublishObject(O_G, A_id)
UnpublishObject(O_G, A_id)
RouteToObject(O_G, A_id)
RouteToNode(N, A_id, Exact)
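Written out as a Python interface sketch (parameter names follow the slide: O_G is the object GUID, A_id the application ID; the class itself is illustrative, not Tapestry's actual Java API):

```python
from abc import ABC, abstractmethod

class DOLR(ABC):
    @abstractmethod
    def publish_object(self, o_guid: int, a_id: int) -> None: ...

    @abstractmethod
    def unpublish_object(self, o_guid: int, a_id: int) -> None: ...

    @abstractmethod
    def route_to_object(self, o_guid: int, a_id: int) -> None: ...

    @abstractmethod
    def route_to_node(self, node_id: int, a_id: int, exact: bool) -> None: ...
```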

Node State
Each node stores a neighbor map similar to Pastry's
– Each level stores neighbors that match a prefix up to a certain position in the ID
– Invariant: if there is a hole in the routing table, there is no such node in the network
For redundancy, backup neighbor links are stored
– Currently 2
Each node also stores backpointers that point to nodes that point to it
Creates a routing mesh of neighbors

Routing Mesh (figure)

Routing
Every ID is mapped to a root
– An ID's root is either the node whose nodeID equals the ID, or the "closest" node to which that ID routes
Uses prefix routing (like Pastry)
– Lookup for 42AD: 4*** => 42** => 42A* => 42AD
If there is an empty neighbor entry, then use surrogate routing
– Route to the next highest entry (if there is no entry for 42**, try 43**; see the sketch below)
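A small Python sketch of prefix routing with the surrogate rule, shrunk to 4-hex-digit IDs and a flat set of known nodes purely for illustration:

```python
def next_hop(current, dest, known):
    """Pick a node that matches one more hex digit of dest than current does."""
    if current == dest:
        return current
    shared = 0
    while current[shared] == dest[shared]:
        shared += 1
    wanted = int(dest[shared], 16)
    # Surrogate routing: if no node fills the wanted entry, try the next
    # higher digit value (wrapping), e.g. no 42** entry -> try 43**.
    for offset in range(16):
        prefix = dest[:shared] + format((wanted + offset) % 16, "X")
        matches = [n for n in known if n.startswith(prefix)]
        if matches:
            return min(matches)        # any match works; pick deterministically
    return current                     # nothing better known: we are the root

# The slide's example: 4*** => 42** => 42A* => 42AD
nodes = {"4F6B", "42C1", "42A7", "42AD"}
print(next_hop("4F6B", "42AD", nodes))  # returns some 42** node
```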

Basic Tapestry Mesh: Incremental Prefix-based Routing
(figure: nodes labeled 0xEF34, 0xEF31, 0xEFBA, 0x0921, 0xE932, 0xEF37, 0xE324, and others, linked so that each hop toward a destination matches one more digit of its ID)

Object Publication
A node sends a publish message towards the root of the object
At each hop, nodes store pointers to the source node
– Data remains at the source: exploits locality without replication (unlike schemes such as Pastry or Freenet)
– With replicas, the pointers are stored in sorted order of network latency
Soft state: objects must be periodically republished
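A rough sketch of publication in this style; `route_toward_root` (yielding the hops from the server to the object's root) and the per-node `pointers` dict are assumptions for illustration, not Tapestry's actual interfaces:

```python
def publish(server_node, o_guid, route_toward_root):
    """Leave a pointer to server_node at every hop on the way to the root."""
    for hop in route_toward_root(server_node, o_guid):
        hop.pointers.setdefault(o_guid, []).append(server_node)
        # With replicas, each hop would keep this list sorted by network
        # latency to the publishing server (omitted here).
    # Soft state: the server must re-run publish() periodically, or the
    # pointers time out and disappear.
```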

Object Location
Client sends a message towards the object's root
Each hop checks its list of pointers
– If there is a match, the message is forwarded directly to the object's location
– Else, the message is routed towards the object's root
Because pointers are sorted by proximity, each object lookup is directed to the closest copy of the data
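And the matching location sketch, under the same assumptions as the publication sketch above:

```python
def locate(client_node, o_guid, route_toward_root):
    """Route toward the root, short-circuiting at the first pointer found."""
    for hop in route_toward_root(client_node, o_guid):
        servers = hop.pointers.get(o_guid)
        if servers:
            return servers[0]      # pointers sorted by proximity: closest copy
    return None                    # reached the root with no pointer: unpublished
```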

Use of Mesh for Object Location (figure)

Node Insertions
An insertion for a new node N must accomplish the following:
– All nodes that have null entries for N need to be alerted of N's presence
  » Acknowledged multicast from the "root" node of N's ID to visit all nodes with the common prefix
– N may become the new root for some objects; move those pointers during the multicast
– N must build its routing table
  » All nodes contacted during the multicast contact N and become its neighbor set
  » Iterative nearest-neighbor search based on the neighbor set
– Nodes near N might want to use N in their routing tables as an optimization
  » Also done during the iterative search

Node Deletions
Voluntary
– Backpointer nodes are notified; they fix their routing tables and republish objects
Involuntary
– Periodic heartbeats: detection of a failed link initiates mesh repair (to clean up routing tables)
– Soft-state publishing: object pointers go away if not republished (to clean up object pointers)
Discussion point: node insertions/deletions + heartbeats + soft-state republishing = network overhead. Is it acceptable? What are the tradeoffs?

Tapestry Architecture
(figure: layered architecture; applications such as OceanStore sit on the Tier 0/1 routing and object-location layer, which exposes deliver(), forward(), route(), etc., and runs over connection management on top of TCP and UDP)
Prototype implemented using Java

Experimental Results (I)
3 environments
– Local cluster, PlanetLab, simulator
Micro-benchmarks on local cluster
– Message processing overhead
  » Proportional to processor speed: can utilize Moore's Law
– Message throughput
  » Optimal message size is 4KB

Experimental Results (II)
Routing/object location tests
– Routing overhead (PlanetLab)
  » About twice as long to route through the overlay vs. IP
– Object location/optimization (PlanetLab/simulator)
  » Object pointers significantly help routing to nearby objects
Network dynamics
– Node insertion overhead (PlanetLab)
  » Sublinear latency to stabilization
  » O(log N) bandwidth consumption
– Node failures, joins, churn (PlanetLab/simulator)
  » Brief dip in lookup success rate followed by a quick return to near 100% success
  » Lookup success rate under churn stays near 100%

Object Location with Tapestry
RDP (Relative Delay Penalty)
– Under 2 in the wide area
– More trouble in the local area (why?)
Optimizations:
– More pointers (in neighbors, etc.)
– Detect wide-area links and make sure that pointers are placed on the exit nodes to the wide area

Stability under extreme circumstances
(May 2003: 1.5 TB served over 4 hours)
The DOLR model generalizes to many simultaneous applications

Possibilities for DOLR?
Original Tapestry
– Could be used to route to data or endpoints with locality (not routing to IP addresses)
– Self-adapting to changes in the underlying system
Pastry
– Similarities to Tapestry, now in its nth-generation release
– Need to build a locality layer for true DOLR
Bamboo
– Similar to Pastry; very stable under churn
Other peer-to-peer options
– Coral: nice stable system with coarse-grained locality
– Chord: very simple system with locality optimizations

Is this a good paper?
What were the authors' goals?
What about the evaluation/metrics?
Did they convince you that this was a good system/approach?
Were there any red flags?
What mistakes did they make?
Does the system/approach meet the "Test of Time" challenge?
How would you review this paper today?

Final topic: Churn (Optional Bamboo paper)

  Authors   Systems Observed     Session Time
  SGG02     Gnutella, Napster    50% < 60 minutes
  CLL02     Gnutella, Napster    31% < 10 minutes
  SW02      FastTrack            50% < 1 minute
  BSV03     Overnet              50% < 60 minutes
  GDS03     Kazaa                50% < 2.4 minutes

Chord is a "scalable protocol for lookup in a dynamic peer-to-peer system with frequent node arrivals and departures" -- Stoica et al., 2001

A Simple Lookup Test
Start up 1,000 DHT nodes on a ModelNet network
– Emulates a 10,000-node, AS-level topology
– Unlike simulations, models cross traffic and packet loss
– Unlike PlanetLab, gives reproducible results
Churn nodes at some rate (sketched below)
– Poisson arrival of new nodes
– Random node departs on every new arrival
– Exponentially distributed session times
Each node does 1 lookup every 10 seconds
– Log results, process them after the test
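A sketch of that churn workload, with illustrative parameters (not the paper's test harness); picking the departing node uniformly at random on each arrival yields approximately exponential session times:

```python
import random

def churn_events(node_ids, median_session_secs, duration_secs):
    """Yield (time, event, node) tuples for a fixed-size churning population."""
    mean_session = median_session_secs / 0.6931    # median = ln(2) * mean
    rate = len(node_ids) / mean_session            # one arrival per departure
    t, next_id = 0.0, max(node_ids) + 1
    while t < duration_secs:
        t += random.expovariate(rate)              # Poisson arrival process
        yield t, "join", next_id                   # a new node arrives...
        leaving = random.choice(node_ids)          # ...and a random node leaves
        node_ids.remove(leaving)
        node_ids.append(next_id)
        yield t, "leave", leaving
        next_id += 1
```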

Early Test Results
Tapestry had trouble under this level of stress
– Worked great in simulations, but not as well on the more realistic network
– Despite sharing almost all code between the two!
The problem was not limited to Tapestry; consider Chord:

Handling Churn in a DHT
Forget about comparing different implementations
– Too many differing factors
– Hard to isolate the effects of any one feature
Implement all relevant features in one DHT
– Using Bamboo (similar to Pastry)
Isolate important issues in handling churn
1. Recovering from failures
2. Routing around suspected failures
3. Proximity neighbor selection

Reactive Recovery: The obvious technique
For correctness, maintain the leaf set during churn
– Also the routing table, but it is not needed for correctness
The basics
– Ping new nodes before adding them
– Periodically ping neighbors
– Remove nodes that don't respond
Simple algorithm
– After every change in the leaf set, send the update to all neighbors
– Called reactive recovery

The Problem with Reactive Recovery
Under churn, many pings and change messages
– If bandwidth-limited, they interfere with each other
– Lots of dropped pings looks like a failure
Respond to failure by sending more messages
– Probability of drop goes up
– We have a positive feedback cycle (squelch)
Can break the cycle in two ways
1. Limit the probability of "false suspicions of failure"
2. Recover periodically

Periodic Recovery
Periodically send the whole leaf set to a random member
– Breaks the feedback loop
– Converges in O(log N)
Back off the period on message loss
– Makes a negative feedback cycle (damping)
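A toy sketch of one periodic-recovery round with back-off; the field names and the `send` callback (returning False on loss) are assumptions for illustration:

```python
import random

BASE_PERIOD = 1.0      # seconds between leaf-set exchanges (illustrative)
MAX_PERIOD = 30.0

def periodic_recovery_step(node, period, send):
    """Push the whole leaf set to one random member; return the next period."""
    target = random.choice(node.leaf_set)
    delivered = send(target, list(node.leaf_set))
    if delivered:
        return BASE_PERIOD                         # healthy: reset the period
    return min(period * 2, MAX_PERIOD)             # back off on message loss
```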

Conclusions/Recommendations
Avoid positive feedback cycles in recovery
– Beware of "false suspicions of failure"
– Recover periodically rather than reactively
Route around potential failures early
– Don't wait to conclude definite failure
– TCP-style timeouts are quickest for recursive routing
– Virtual-coordinate-based timeouts are not prohibitive
PNS (proximity neighbor selection) can be cheap and effective
– Only need simple random sampling