1 CS 525 Advanced Distributed Systems, Spring 2014. Indranil Gupta (Indy). Lecture 5: Peer to Peer Systems II. February 4, 2014. All Slides © IG

2 Two types of P2P Systems. Systems used widely in practice but with no provable properties: non-academic P2P systems, e.g., Gnutella, Napster, Grokster, BitTorrent (previous lecture). Systems with provable properties but with less widespread usage: academic P2P systems, e.g., Chord, Pastry, Kelips (this lecture).

3 DHT = Distributed Hash Table. A hash table allows you to insert, lookup and delete objects with keys. A distributed hash table allows you to do the same in a distributed setting (objects = files). Performance concerns: –Load balancing –Fault-tolerance –Efficiency of lookups and inserts –Locality. Napster, Gnutella, FastTrack are all DHTs (sort of). So is Chord, a structured peer to peer system that we study next.

4 Comparative Performance

             Memory                Lookup Latency   #Messages for a lookup
  Napster    O(1) (O(N) @ server)  O(1)             O(1)
  Gnutella   O(N)                  O(N)             O(N)

5 Comparative Performance

             Memory                Lookup Latency   #Messages for a lookup
  Napster    O(1) (O(N) @ server)  O(1)             O(1)
  Gnutella   O(N)                  O(N)             O(N)
  Chord      O(log(N))             O(log(N))        O(log(N))

6 Chord. Developers: I. Stoica, D. Karger, F. Kaashoek, H. Balakrishnan, R. Morris (Berkeley and MIT). Intelligent choice of neighbors to reduce latency and message cost of routing (lookups/inserts). Uses Consistent Hashing on a node's (peer's) address: –SHA-1(ip_address, port) -> 160-bit string –Truncated to m bits –Called the peer id (a number between 0 and 2^m - 1) –Not unique, but id conflicts are very unlikely –Can then map peers to one of 2^m logical points on a circle
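A minimal sketch (not part of the lecture slides) of this id assignment in Python; the function name and the use of "ip:port" as the hash input are illustrative assumptions:

```python
# Sketch of the slides' id assignment: SHA-1 the peer's address, keep m bits.
import hashlib

def peer_id(ip: str, port: int, m: int = 7) -> int:
    digest = hashlib.sha1(f"{ip}:{port}".encode()).digest()   # 160-bit string
    return int.from_bytes(digest, "big") % (2 ** m)           # truncate: id in [0, 2**m)

print(peer_id("192.168.1.10", 8080))   # some point on the 128-position circle (m=7)
```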

7 Ring of peers. Say m=7: six nodes N16, N32, N45, N80, N96, N112 placed on the identifier circle (positions 0 to 127).

8 Peer pointers (1): successors. Say m=7. Each peer keeps a pointer to its successor on the ring (N16 -> N32 -> N45 -> N80 -> N96 -> N112 -> N16), and similarly to its predecessor.

9 Peer pointers (2): finger tables. Say m=7. The i-th finger table entry at the peer with id n points to the first peer whose id is >= n + 2^i (mod 2^m). Finger table at N80 (targets 80+2^0 through 80+2^6):
  i     0   1   2   3   4   5   6
  ft[i] 96  96  96  96  96  112 16
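A small sketch of how such a finger table could be computed, assuming we already know the sorted list of live peer ids (the helper names are made up for illustration):

```python
# Illustrative sketch (not the lecture's code): compute N80's finger table from a
# known list of live peer ids on an m-bit identifier circle.
def successor_of(x: int, ring: list[int], m: int = 7) -> int:
    """First peer id >= x, wrapping around the 2**m identifier circle."""
    x %= 2 ** m
    for peer in sorted(ring):
        if peer >= x:
            return peer
    return min(ring)  # wrapped past the largest id

def finger_table(n: int, ring: list[int], m: int = 7) -> list[int]:
    # i-th entry: first peer with id >= n + 2**i (mod 2**m)
    return [successor_of(n + 2 ** i, ring, m) for i in range(m)]

ring = [16, 32, 45, 80, 96, 112]
print(finger_table(80, ring))  # -> [96, 96, 96, 96, 96, 112, 16], matching the slide
```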

10 What about the files? Filenames are also mapped using the same consistent hash function: –SHA-1(filename) -> 160-bit string (key) –The file is stored at the first peer with id greater than its key (mod 2^m). The file cnn.com/index.html that maps to key K42 is stored at the first peer with id greater than 42. –Note that we are considering a different file-sharing application here: cooperative web caching –The same discussion applies to any other file-sharing application, including that of mp3 files.
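A companion sketch (same assumptions as above, not the lecture's code) of the file-to-peer mapping: hash the filename to an m-bit key and store it at the first peer whose id is at least the key:

```python
# Sketch: map a filename to an m-bit key, then find the peer that stores it.
import hashlib

M = 7

def key_of(filename: str, m: int = M) -> int:
    return int.from_bytes(hashlib.sha1(filename.encode()).digest(), "big") % (2 ** m)

def storing_peer(key: int, ring: list[int]) -> int:
    for peer in sorted(ring):
        if peer >= key:
            return peer
    return min(ring)   # wrap around past the largest id

ring = [16, 32, 45, 80, 96, 112]
print(storing_peer(42, ring))   # the slides' K42 example -> stored at N45
```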

11 Mapping Files. Say m=7. The file with key K42 is stored at N45, the first peer clockwise of position 42 on the ring (nodes N16, N32, N45, N80, N96, N112).

12 Search. Say m=7. N80 asks: who has cnn.com/index.html? (hashes to K42). The file with key K42 is stored at N45.

13 Search. Say m=7. Who has cnn.com/index.html? (hashes to K42); the file with key K42 is stored at N45. Routing rule: at node n, send the query for key k to the largest successor/finger table entry <= k; if none exists, send the query to successor(n).

14 Search. Same routing rule: at node n, send the query for key k to the largest successor/finger table entry <= k; if none exists, send the query to successor(n). All the "arrows" in the figure are RPCs.
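A hedged sketch of this routing rule; measuring "largest entry <= k" as the entry closest to k in clockwise ring distance is an implementation assumption of the sketch, not something stated on the slide:

```python
# Sketch of the routing rule: pick the known peer that gets closest to key k
# without overshooting it on the clockwise ring; otherwise use successor(n).
def ring_dist(a: int, b: int, m: int = 7) -> int:
    """Clockwise distance from a to b on the 2**m identifier circle."""
    return (b - a) % (2 ** m)

def next_hop(n: int, k: int, fingers: list[int], successor: int, m: int = 7) -> int:
    # candidates: entries on the arc (n, k], i.e. those that do not overshoot the key
    candidates = [f for f in set(fingers + [successor])
                  if 0 < ring_dist(n, f, m) <= ring_dist(n, k, m)]
    return max(candidates, key=lambda f: ring_dist(n, f, m)) if candidates else successor

# N80 looking up K42, using the finger table from the earlier slide:
print(next_hop(80, 42, [96, 96, 96, 96, 96, 112, 16], successor=96))  # -> 16
```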

15 Analysis. Search takes O(log(N)) time. Proof: –(intuition): at each step, the distance between the query and the peer-with-file reduces by a factor of at least 2 (why?). This takes at most m steps; since 2^m is at most a constant multiplicative factor above N, the lookup is O(log(N)). –(intuition): after log(N) forwardings, the distance to the key is at most 2^m / N (why?). The number of node identifiers in a range of 2^m / N is O(log(N)) with high probability (why? SHA-1!), so using successors in that range will be ok. [Figure: next hop closing in on the key.]

16 Analysis (contd.). The O(log(N)) search time holds for file insertions too (in general, for routing to any key). –"Routing" can thus be used as a building block for all operations: insert, lookup, delete. O(log(N)) time is true only if the finger and successor entries are correct. When might these entries be wrong? –When you have failures.

17 Search under peer failures. Say m=7. Who has cnn.com/index.html? (hashes to K42); the file with key K42 is stored at N45. Several nodes along the lookup path have failed (marked X in the figure), so the lookup fails (N16 does not know N45).

18 Search under peer failures. Same K42 lookup with a failed node (X). One solution: maintain r multiple successor entries; in case of failure, use the successor entries.
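A tiny sketch of the fallback idea, with made-up helper names; the point is only that the query moves on to the next live successor instead of aborting:

```python
# Toy sketch: given the r successor entries (closest first) and a send_query(s)
# that returns False when the RPC to s fails, try them in order.
def forward_with_fallback(successors, send_query):
    for s in successors:
        if send_query(s):
            return s  # the first live successor takes over the lookup
    raise RuntimeError("all r successor entries appear to have failed")
```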

19 Search under peer failures. Choosing r = 2log(N) suffices to maintain lookup correctness w.h.p. (i.e., the ring stays connected): –Say 50% of nodes fail –Pr(at a given node, at least one successor alive) = 1 - (1/2)^(2log(N)) = 1 - 1/N^2 –Pr(the above is true at all alive nodes) ≈ (1 - 1/N^2)^(N/2) ≈ 1 - 1/(2N), which tends to 1.

20 Search under peer failures (2). Same K42 lookup, but this time N45 itself is dead (X), so the lookup fails (the node storing the key has failed).

21 Search under peer failures (2). One solution: replicate the file/key at r successors and predecessors. In the figure, K42 is replicated around N45, so the lookup can still succeed.

22 Need to deal with dynamic changes. Peers fail, new peers join, peers leave. –P2P systems have a high rate of churn (node join, leave and failure): 25% per hour in Overnet (eDonkey), 100% per hour in Gnutella; lower in managed clusters, e.g., CSIL. A common feature in all distributed systems, including wide-area (e.g., PlanetLab), clusters (e.g., Emulab), clouds (e.g., Cirrus), etc. So, all the time, we need to update successors and fingers, and copy keys.

23 New peers joining. Say m=7; N40 joins between N32 and N45. The introducer directs N40 to N45 (and N32). N32 updates its successor to N40. N40 initializes its successor to N45, and initializes its fingers from it. N40 periodically talks to its neighbors to update its finger table. This is the Stabilization Protocol (followed by all nodes).
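A rough sketch of these join and stabilization steps; the class and method names (join, stabilize, notify) follow the Chord paper's pseudocode but are assumptions here, and fingers are omitted:

```python
# Sketch: N40 joins between N32 and N45; periodic stabilize()/notify() repair the
# successor/predecessor pointers, as described on the slide.
class Node:
    def __init__(self, nid, m=7):
        self.id, self.m = nid, m
        self.successor = self
        self.predecessor = None

    def join(self, successor):
        # "Introducer directs N40 to N45": initialize the successor pointer
        self.successor = successor

    def stabilize(self):
        # run periodically: notice nodes that joined between me and my successor
        x = self.successor.predecessor
        if x is not None and self._in_arc(x.id, self.id, self.successor.id):
            self.successor = x
        self.successor.notify(self)

    def notify(self, candidate):
        # adopt the notifier as predecessor if it is closer on the ring
        if self.predecessor is None or self._in_arc(candidate.id, self.predecessor.id, self.id):
            self.predecessor = candidate

    def _in_arc(self, x, a, b):
        """True if x lies strictly on the clockwise arc (a, b) of the 2**m circle."""
        a, b, x = a % 2**self.m, b % 2**self.m, x % 2**self.m
        return (a < x < b) if a < b else (x > a or x < b)

n32, n45 = Node(32), Node(45)
n32.successor, n45.predecessor = n45, n32
n40 = Node(40)
n40.join(n45)                      # introducer pointed N40 at N45
n40.stabilize(); n32.stabilize()   # N32 -> N40 -> N45 after stabilization
print(n32.successor.id, n40.successor.id)  # -> 40 45
```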

24 New peers joining (2). N40 may need to copy some files/keys from N45 (files with file id between 32 and 40; K34 and K38 in the figure).

25 New peers joining (3). A new peer affects O(log(N)) other finger entries in the system, on average [Why?]. Number of messages per peer join = O(log(N)*log(N)). A similar set of operations is used for dealing with peers leaving. –For dealing with failures, we also need failure detectors (we'll see these later in the course!)

26 Experimental Results. The Sigcomm 01 paper had results from simulation of a C++ prototype; the SOSP 01 paper had more results from a 12-node Internet testbed deployment. We'll touch briefly on the first set (a 10000-peer simulated system).

27 Load Balancing. Without help there is a large variance in per-node load. Solution: each real node pretends to be r multiple virtual nodes, giving smaller load variation; lookup cost becomes O(log(N*r)) = O(log(N) + log(r)).
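One plausible way to realize virtual nodes (an assumption of this sketch, not necessarily the paper's exact scheme): hash the node's address together with a virtual-node index, so one physical node appears at r ring positions:

```python
# Sketch: derive r virtual ids per physical node by hashing (address, index) pairs.
import hashlib

def virtual_ids(ip: str, port: int, r: int, m: int = 7) -> list[int]:
    ids = []
    for v in range(r):
        digest = hashlib.sha1(f"{ip}:{port}#{v}".encode()).digest()
        ids.append(int.from_bytes(digest, "big") % (2 ** m))  # one ring position per virtual node
    return ids

print(virtual_ids("10.0.0.1", 4000, r=4))  # four ring positions for one physical node
```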

28 Lookups. [Plot: average messages per lookup vs. number of nodes; grows as log(N), as expected.]

29 Stabilization Protocol. Concurrent peer joins, leaves, and failures might cause loopiness of pointers and failure of lookups. –Chord peers periodically run a stabilization algorithm that checks and updates pointers and keys –It ensures non-loopiness of fingers, eventual success of lookups, and O(log(N)) lookups w.h.p. –[The TechReport on the Chord webpage] defines weak and strong notions of stability –Each stabilization round at a peer involves a constant number of messages –Strong stability takes O(N^2) stabilization rounds (!)

30 Churn. When nodes are constantly joining, leaving, failing: –Significant effect to consider: traces from the Overnet system show that hourly peer turnover rates (churn) could be 25-100% of the total number of nodes in the system –Leads to excessive (unnecessary) key copying (remember that keys are replicated) –The stabilization algorithm may need to consume more bandwidth to keep up –The main issue is that files are replicated, while it might be sufficient to replicate only meta-information about files –Alternatives: introduce a level of indirection (any p2p system), or replicate metadata more, e.g., Kelips (later in this lecture)

31 Fault-tolerance. 500 nodes (avg. path length = 5), stabilization runs every 30 s, 1 node joins and 1 fails every 10 s (3 fails per stabilization round) => 6% of lookups fail. The numbers look fine here, but does stabilization scale with N? What about concurrent joins & leaves?

32 Pastry: another DHT. From Microsoft Research. Assigns ids to nodes, just like Chord (think of a ring). Leaf Set: each node knows its successor(s) and predecessor(s). Routing tables are based on prefix matching –but select the closest (in network RTT) peers among those that have the same prefix. Routing is thus based on prefix matching, and is thus log(N) hops –and the hops are short (in the underlying network).
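A deliberately simplified sketch of prefix-based routing; it ignores Pastry's leaf set and numeric tie-breaking, and the function names are invented for illustration:

```python
# Sketch: forward toward a peer whose id shares a strictly longer prefix with the key.
def shared_prefix_len(a: str, b: str) -> int:
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def next_hop(my_id: str, key: str, routing_table: list[str]) -> str:
    best, best_len = my_id, shared_prefix_len(my_id, key)
    for peer in routing_table:
        l = shared_prefix_len(peer, key)
        if l > best_len:          # one more digit of the key resolved per hop
            best, best_len = peer, l
    return best                   # returning my_id means: resolve locally / use the leaf set

print(next_hop("65a1fc", "d46a1c", ["d13da3", "d4213f", "d462ba"]))  # -> d462ba
```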

33

34 Wrap-up Notes. Memory: O(log(N)) successor pointers, m finger/routing entries. Indirection: store a pointer instead of the actual file. These systems do not handle partitions (can you suggest a possible solution?).

35 Summary of Chord and Pastry. Chord and Pastry protocols: –More structured than Gnutella –Black box lookup algorithms –Churn handling can get complex –O(log(N)) memory and lookup cost. Can we do better?

36 Kelips: 1-hop lookup. k "affinity groups", with k ~ √N. Each node is hashed to a group. A node's neighbors: –(almost) all other nodes in its own affinity group –one contact node per foreign affinity group. A file can be stored at any (few) node(s). Each filename is hashed to a group: –all nodes in that group replicate the file's metadata (a pointer to where it is stored) –the affinity group does not store the files themselves. Lookup = 1 hop (or a few). Memory cost O(√N) is low: 1.93 MB for 100K nodes, 10M files. [Figure: affinity groups #0, #1, ..., #k-1 with their member nodes; PennyLane.mp3 hashes to group k-1, and everyone in that group stores its metadata.]
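A small sketch of Kelips-style placement under these rules; the group-count choice and helper names are assumptions for illustration:

```python
# Sketch: with k ~ sqrt(N) affinity groups, hash nodes and filenames to groups.
# A file's metadata (not the file) is replicated on every node of its group, so a
# lookup that reaches any contact in that group resolves in about one hop.
import hashlib
import math

def group_of(name: str, k: int) -> int:
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big") % k

N = 100                      # illustrative system size
k = round(math.sqrt(N))      # number of affinity groups

groups: dict[int, list[str]] = {}
for node in (f"node{i}" for i in range(N)):
    groups.setdefault(group_of(node, k), []).append(node)

g = group_of("PennyLane.mp3", k)
# every member of group g keeps the (filename -> location) metadata tuple
print(g, len(groups.get(g, [])))
```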

37 Summary: P2P. A range of tradeoffs is available: –memory vs. lookup cost vs. background bandwidth (to keep neighbors fresh).

38 Announcements. No office hours today (talk to me now). Next week Thu: student-led presentations start –please read the instructions on the website! Next week Thu: reviews start –please read the instructions on the website!

39 Backup Slides

40 Wrap-up Notes (3). Virtual nodes are good for load balancing, but: –Effect on peer traffic? –Result of churn? Current status of the project: –File systems (CFS, Ivy) built on top of Chord –DNS lookup service built on top of Chord –Internet Indirection Infrastructure (I3) project at UCB –Spawned research on many interesting issues about p2p systems. http://www.pdos.lcs.mit.edu/chord/

