
1 Peer-to-Peer Networks
The contents of this presentation are mostly taken from the course slides of Dr. Chunming Qiao, University at Buffalo.

2 What Are P2P Systems?
- Clay Shirky: P2P refers to applications that take advantage of resources (storage, cycles, content, human presence) available at the edges of the Internet.
  - The "litmus test": Does it allow for variable connectivity and temporary network addresses? Does it give the nodes at the edges of the network significant autonomy?
- P2P Working Group (a standardization effort): P2P computing is the sharing of computer resources and services by direct exchange between systems. Peer-to-peer computing takes advantage of existing computing power and networking connectivity, allowing economical clients to leverage their collective power to benefit the entire enterprise.

3 What Are P2P Systems?
- Multiple sites (at the edge)
- Distributed resources
- Sites are autonomous (different owners)
- Sites are both clients and servers ("servents")
- Sites have equal functionality

4 P2P Benefits
- Efficient use of resources
- Scalability: consumers of resources also donate resources; aggregate resources grow naturally with utilization
- Reliability: replicas, geographic distribution, no single point of failure
- Ease of administration: nodes self-organize; no need to deploy servers to satisfy demand; built-in fault tolerance, replication, and load balancing

5 Napster
- Was used primarily for file sharing
- NOT a pure P2P network => a hybrid system
- Ways of operation:
  - The client sends the server a query; the server asks everyone and responds to the client
  - The client gets a list of clients from the server
  - All clients send IDs of the data they hold to the server; when a client asks for data, the server responds with the specific addresses
- The peer then downloads directly from the other peer(s)

6 Napster
- Further services: chat program, instant messaging service, tracking program, ...
- Centralized system:
  - Single point of failure => limited fault tolerance
  - Limited scalability (server farms with load balancing)
- Queries are fast, and an upper bound on their duration can be given

7 Napster
[Figure: a peer sends a query to the central DB (1), the server sends back a response with peer addresses (2), the peer sends a download request to another peer (3), and receives the file directly from that peer (4).]
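The centralized index at the heart of this interaction can be sketched in a few lines. The class, method names, and addresses below are purely illustrative, not Napster's actual protocol:

```python
# Sketch of a Napster-style centralized index (illustrative names, not the real protocol).

class CentralIndex:
    def __init__(self):
        self.index = {}  # file name -> set of peer addresses that hold it

    def register(self, peer_addr, files):
        # Peers upload the list of files they hold to the central server.
        for name in files:
            self.index.setdefault(name, set()).add(peer_addr)

    def query(self, file_name):
        # The server answers a query with the addresses of peers holding the file;
        # the actual download then happens peer-to-peer.
        return self.index.get(file_name, set())


index = CentralIndex()
index.register("10.0.0.1:6699", ["songA.mp3", "songB.mp3"])
index.register("10.0.0.2:6699", ["songB.mp3"])
print(index.query("songB.mp3"))   # {'10.0.0.1:6699', '10.0.0.2:6699'}
```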

8 Gnutella
- Pure peer-to-peer
- Very simple protocol
- No routing "intelligence": constrained broadcast
- Lifetime of packets limited by a TTL (typically set to 7)
- Packets have unique IDs to detect loops
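A minimal sketch of this constrained broadcast, assuming a toy in-memory overlay (the Node class and its fields are hypothetical and stand in for the Gnutella wire protocol):

```python
import uuid

# Sketch of Gnutella-style constrained broadcast (flooding with TTL and duplicate detection).
class Node:
    def __init__(self, name):
        self.name = name
        self.neighbors = []
        self.seen = set()   # message ids already handled, to break loops

    def broadcast(self, msg_id, query, ttl):
        if msg_id in self.seen or ttl <= 0:
            return          # duplicate or expired packet: drop it
        self.seen.add(msg_id)
        print(f"{self.name} handles query '{query}' (ttl={ttl})")
        for n in self.neighbors:
            n.broadcast(msg_id, query, ttl - 1)   # forward with decremented TTL

a, b, c = Node("A"), Node("B"), Node("C")
a.neighbors, b.neighbors, c.neighbors = [b, c], [a, c], [a, b]
a.broadcast(uuid.uuid4().hex, "song.mp3", ttl=7)   # TTL typically 7
```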

9 Gnutella: Ping/Pong
[Figure: node 1 floods a Ping to its neighbors; each node answers with Pongs listing the hosts it knows (2; 3, 4, 5; 6, 7, 8), and the Pongs are aggregated back toward the originator.]
Query/Response works analogously.

10 Free Riding
- File-sharing networks rely on users sharing data
- Two types of free riding:
  - Downloading but not sharing any data
  - Not sharing any interesting data
- On Gnutella:
  - 15% of users contribute 94% of the content
  - 63% of users never responded to a query (they didn't have "interesting" data)

11 Gnutella: Summary
- Hit rates are high
- High fault tolerance
- Adapts well and dynamically to changing peer populations
- High network traffic
- No estimates on the duration of queries
- No guarantee of successful queries
- Topology is unknown => the algorithm cannot exploit it
- Free riding is a problem:
  - A significant portion of Gnutella peers are free riders
  - Free riders are distributed evenly across domains
  - Often hosts share files nobody is interested in

12 Gnutella: Discussion
- Search types: any possible string comparison
- Scalability:
  - Search: very poor with respect to the number of messages
  - Search time: probably O(log n) due to the small-world property
  - Updates: excellent, nothing to do
  - Routing information: low cost
- Robustness: high, since many paths are explored
- Autonomy:
  - Storage: no restriction; peers store the keys of their own files
  - Routing: peers are the target of all kinds of requests
- Global knowledge: none required

13 iMesh, KaZaA
- Hybrid of centralized Napster and decentralized Gnutella
- Super-peers act as local search hubs
  - Each super-peer is similar to a Napster server for a small portion of the network
  - Super-peers are automatically chosen by the system based on their capacities (storage, bandwidth, etc.) and availability (connection time)
- Users upload their list of files to a super-peer
- Super-peers periodically exchange file lists
- Queries are sent to a super-peer for files of interest
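A rough sketch of the super-peer idea under these assumptions; all names are hypothetical, and this is not the actual FastTrack/KaZaA protocol:

```python
# Rough sketch of super-peer indexing: leaf peers upload file lists to their super-peer,
# and a query is answered from the local index plus the indexes of peer super-peers.
class SuperPeer:
    def __init__(self):
        self.local_index = {}       # file name -> set of leaf-peer addresses
        self.other_superpeers = []

    def upload(self, leaf_addr, files):
        for name in files:
            self.local_index.setdefault(name, set()).add(leaf_addr)

    def query(self, file_name, ask_others=True):
        hits = set(self.local_index.get(file_name, set()))
        if ask_others:
            # Forward the query once to the other super-peers.
            for sp in self.other_superpeers:
                hits |= sp.query(file_name, ask_others=False)
        return hits

sp1, sp2 = SuperPeer(), SuperPeer()
sp1.other_superpeers = [sp2]
sp1.upload("leaf-1", ["a.mp3"])
sp2.upload("leaf-2", ["a.mp3", "b.mp3"])
print(sp1.query("a.mp3"))   # {'leaf-1', 'leaf-2'}
```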

14 Structured Overlay Networks / DHTs
- Node identifiers and value (key) identifiers are hashed into a common identifier space
- Keys of values are mapped onto keys of nodes, and the set of nodes is then connected "smartly"
- Examples: Chord, Pastry, Tapestry, CAN, Kademlia, P-Grid, Viceroy

15 The Principle of Distributed Hash Tables
- A dynamic distribution of a hash table onto a set of cooperating nodes (A, B, C, D), e.g.:
  Key  Value
  1    Algorithms
  9    Routing
  11   DS
  12   Peer-to-Peer
  21   Networks
  22   Grids
- Basic service: the lookup operation, i.e. key resolution from any node (e.g., asking node D: lookup(9))
- Each node has a routing table:
  - Pointers to some other nodes
  - Typically a constant or a logarithmic number of pointers

16 DHT Desirable Properties
- Keys are mapped evenly to all nodes in the network
- Each node maintains information about only a few other nodes
- Messages can be routed to a node efficiently
- Node arrivals/departures only affect a few nodes

17 Chord [MIT]
- Consistent hashing (SHA-1) assigns each node and object an m-bit ID
- IDs are ordered on an identifier circle ranging from 0 to 2^m - 1
- New nodes assume slots in the ID circle according to their ID
- Key k is assigned to the first node whose ID >= k, i.e. successor(k)
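A minimal sketch of how consistent hashing assigns identifiers, assuming a small m = 8 for readability (real Chord uses m = 160 with SHA-1; the hostnames and key names are made up):

```python
import hashlib

M = 8  # identifier bits for this sketch; Chord uses m = 160 with SHA-1

def chord_id(name, m=M):
    # Consistent hashing: map a node address or a key to an m-bit ID on the circle.
    digest = hashlib.sha1(name.encode()).digest()
    return int.from_bytes(digest, "big") % (2 ** m)

print(chord_id("node-1.example.org"), chord_id("some-file.mp3"))
```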

18 Consistent Hashing: Successor Nodes
[Figure: a 3-bit identifier circle with nodes 0, 1, and 3 and keys 1, 2, and 6: successor(1) = 1, successor(2) = 3, successor(6) = 0.]
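The assignments in the figure can be checked with a few lines; the successor helper below is a toy, not Chord's distributed implementation:

```python
# Reproducing the slide's example: a 3-bit identifier circle with nodes 0, 1, 3.
def successor(key_id, node_ids):
    ring = sorted(node_ids)
    # First node whose ID >= key; wrap around to the smallest ID otherwise.
    return next((n for n in ring if n >= key_id), ring[0])

nodes = [0, 1, 3]
for key in (1, 2, 6):
    print(f"successor({key}) = {successor(key, nodes)}")
# successor(1) = 1, successor(2) = 3, successor(6) = 0 (wraps around the circle)
```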

19 Consistent Hashing: Join and Departure
- When a node n joins the network, certain keys previously assigned to n's successor become assigned to n.
- When node n leaves the network, all of its assigned keys are reassigned to n's successor.

20 Consistent Hashing: Node Join
[Figure: a node joins the ring and takes over the keys that fall between its predecessor's ID and its own ID.]

21 Consistent Hashing: Node Departure
[Figure: a node leaves the ring and its keys are reassigned to its successor.]

22 Scalable Key Location: Finger Tables
- To accelerate lookups, Chord maintains additional routing information. This additional information is not essential for correctness, which is achieved as long as each node knows its correct successor.
- Each node n maintains a routing table with up to m entries (m is the number of bits in the identifiers), called the finger table.
- The i-th entry in the table at node n contains the identity of the first node s that succeeds n by at least 2^(i-1) on the identifier circle: s = successor(n + 2^(i-1)).
- s is called the i-th finger of node n, denoted n.finger(i).
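A small sketch of how such a finger table can be computed, reusing the 3-bit example ring with nodes 0, 1, and 3 (toy code, not the Chord implementation):

```python
# Building a Chord finger table for the 3-bit example ring (nodes 0, 1, 3).
M = 3                                   # identifier bits

def successor(key_id, node_ids):
    ring = sorted(node_ids)
    return next((x for x in ring if x >= key_id), ring[0])

def finger_table(n, node_ids, m=M):
    table = []
    for i in range(1, m + 1):
        start = (n + 2 ** (i - 1)) % 2 ** m     # n + 2^(i-1) on the circle
        table.append((i, start, successor(start, node_ids)))
    return table

for i, start, succ in finger_table(0, [0, 1, 3]):
    print(f"finger[{i}]: start={start} -> successor={succ}")
# finger[1]: start=1 -> 1, finger[2]: start=2 -> 3, finger[3]: start=4 -> 0
```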

23 Scalable Key Location: Finger Tables
[Figure: the finger tables (start, successor) and the keys stored at each node of the example ring.]

24 Chord: Key Location
- Look up in the finger table the furthest node that precedes the key and forward the query to it; repeating this reaches the key's successor in O(log n) hops.
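A simplified sketch of this lookup, assuming a toy in-memory ring and the finger tables from the previous sketch; real Chord performs these steps via remote procedure calls between nodes:

```python
# Simplified Chord lookup with finger tables on the 3-bit example ring (nodes 0, 1, 3).
# Finger lists are ordered nearest-first, so reversed() scans farthest-first.

def between(x, a, b, inclusive_right=False):
    # Circular interval test on the identifier ring: is x in (a, b) / (a, b]?
    if inclusive_right:
        return (a < x <= b) if a < b else (x > a or x <= b)
    return (a < x < b) if a < b else (x > a or x < b)

def closest_preceding_node(n, key, fingers):
    # Scan the finger table from the farthest entry down to the nearest.
    for f in reversed(fingers[n]):
        if between(f, n, key):
            return f
    return n

def find_successor(n, key, fingers, succ):
    if between(key, n, succ[n], inclusive_right=True):
        return succ[n]                      # key lies between n and its successor
    nxt = closest_preceding_node(n, key, fingers)
    if nxt == n:                            # no finger precedes the key; fall back
        return succ[n]
    return find_successor(nxt, key, fingers, succ)

succ = {0: 1, 1: 3, 3: 0}
fingers = {0: [1, 3, 0], 1: [3, 3, 0], 3: [0, 0, 0]}
print(find_successor(1, 6, fingers, succ))  # -> 0: key 6 is stored on node 0
```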

25 Node Joins and Stabilization
- The most important piece of state is the successor pointer: if successor pointers are kept up to date, lookups are guaranteed to be correct, and the finger tables can then be verified and repaired.
- Each node periodically runs a "stabilization" protocol in the background to update its successor pointer and finger table.

26 Node Joins and Stabilization
- The "stabilization" protocol contains 6 functions: create(), join(), stabilize(), notify(), fix_fingers(), check_predecessor()
- When node n first starts, it calls n.join(n'), where n' is any known Chord node.
- The join() function asks n' to find the immediate successor of n.

27 Node Joins: stabilize()
- Each time node n runs stabilize(), it asks its successor for its predecessor p and decides whether p should be n's successor instead.
- stabilize() also notifies node n's successor of n's existence, giving the successor the chance to change its predecessor to n. The successor does this only if it knows of no closer predecessor than n.

28 Node Joins: Join and Stabilization
Example: node n joins between n_p and its successor n_s (initially succ(n_p) = n_s and pred(n_s) = n_p):
- n joins: its predecessor is nil, and it acquires n_s as its successor via some node n'
- n runs stabilize(): it notifies n_s that it is the new predecessor, and n_s acquires n as its predecessor
- n_p runs stabilize(): it asks n_s for its predecessor (now n), acquires n as its successor, and notifies n; n then acquires n_p as its predecessor
- All predecessor and successor pointers are now correct; fingers still need to be fixed, but old fingers will still work
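The walkthrough above can be mimicked with local Python objects standing in for remote nodes. This is only a sketch of stabilize()/notify(); it omits join(), fix_fingers(), and check_predecessor(), and uses direct object references instead of RPCs:

```python
def between(x, a, b):
    # x strictly inside the circular interval (a, b)
    return (a < x < b) if a < b else (x > a or x < b)

class ChordNode:
    def __init__(self, ident):
        self.id = ident
        self.successor = self
        self.predecessor = None

    def stabilize(self):
        # Ask the successor for its predecessor; adopt it if it lies between us and the successor.
        x = self.successor.predecessor
        if x is not None and between(x.id, self.id, self.successor.id):
            self.successor = x
        self.successor.notify(self)

    def notify(self, n):
        # n believes it might be our predecessor.
        if self.predecessor is None or between(n.id, self.predecessor.id, self.id):
            self.predecessor = n

# n (id 1) joins between n_p (id 0) and n_s (id 3); its successor was set via join().
n_p, n_s, n = ChordNode(0), ChordNode(3), ChordNode(1)
n_p.successor, n_s.predecessor = n_s, n_p
n.successor = n_s
n.stabilize()      # n_s learns about n and adopts it as predecessor
n_p.stabilize()    # n_p sees n_s's new predecessor n and adopts it as successor
print(n_p.successor.id, n.successor.id, n_s.predecessor.id)   # 1 3 1
```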

29 Node Failures
- The key step in failure recovery is maintaining correct successor pointers.
- To help achieve this, each node maintains a successor list of its r nearest successors on the ring.
- Successor lists are stabilized as follows: node n reconciles its list with its successor s by copying s's successor list, removing its last entry, and prepending s to it.
- If node n notices that its successor has failed, it replaces it with the first live entry in its successor list and reconciles its successor list with its new successor.
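A minimal sketch of these two rules, with plain integers standing in for nodes and r = 3 chosen arbitrarily for illustration:

```python
# Successor-list maintenance sketch (r = 3; integers stand in for remote nodes).
R = 3

def reconcile(succ, succ_list_of_succ):
    # Copy the successor's list, drop its last entry, and prepend the successor itself.
    return ([succ] + succ_list_of_succ[:-1])[:R]

def first_live(succ_list, alive):
    # On a successor failure, replace it with the first live entry in the list.
    return next((n for n in succ_list if n in alive), None)

my_list = reconcile(3, [5, 7, 9])
print(my_list)                         # [3, 5, 7]
print(first_live(my_list, {5, 7, 9}))  # 5, once node 3 is detected as failed
```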

30 CAN: Motivation
- The primary scalability issue in peer-to-peer systems is the indexing scheme used to locate the peer containing the desired content; this is also a central issue in large-scale storage management systems.
- The Content-Addressable Network (CAN) is a scalable indexing mechanism.

31 CAN: Basic Design
Basic idea:
- A virtual d-dimensional coordinate space
- Each node owns a zone in the virtual space
- Data is stored as (key, value) pairs
- hash(key) -> a point P in the virtual space
- The (key, value) pair is stored on the node within whose zone the point P lies
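A minimal sketch of the key-to-point mapping and the zone-ownership test, assuming a 2-dimensional unit square; the way the two coordinate hashes are derived here (suffixing the key) is an ad-hoc assumption for illustration, not CAN's specification:

```python
import hashlib

# Sketch of CAN's key-to-point mapping in a 2-d unit square, plus the zone-ownership test.

def can_point(key):
    # Two hashes give the (x, y) coordinates of the key's point P (illustrative derivation).
    hx = int(hashlib.sha1((key + ":x").encode()).hexdigest(), 16)
    hy = int(hashlib.sha1((key + ":y").encode()).hexdigest(), 16)
    return (hx % 10_000) / 10_000, (hy % 10_000) / 10_000

def owns(zone, point):
    # zone = ((x_lo, x_hi), (y_lo, y_hi)); the (K,V) pair is stored on the zone's owner.
    (xlo, xhi), (ylo, yhi) = zone
    x, y = point
    return xlo <= x < xhi and ylo <= y < yhi

p = can_point("some-file.mp3")
print(p, owns(((0.0, 0.5), (0.0, 1.0)), p))
```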

32 An Example of CAN
[Figure: node 1 joins and initially owns the entire 2-d coordinate space.]

33 An Example of CAN (cont.)
[Figure: node 2 joins; the space is split between nodes 1 and 2.]

34-36 An Example of CAN (cont.)
[Figures: further nodes join; each join splits an existing zone in half.]

37 An Example of CAN (cont.)
[Figure: node I, which will insert a (K,V) pair, is highlighted in the space.]

38 An Example of CAN (cont.)
Node I wants to insert a pair: node I::insert(K,V)

39 An Example of CAN (cont.)
node I::insert(K,V)
(1) a = h_x(K), giving the x-coordinate x = a

40 An Example of CAN (cont.)
node I::insert(K,V)
(1) a = h_x(K), b = h_y(K), giving the point (x, y) = (a, b)

41 An Example of CAN (cont.)
node I::insert(K,V)
(1) a = h_x(K), b = h_y(K)
(2) route (K,V) toward the point (a,b)

42 An Example of CAN (cont.)
node I::insert(K,V)
(1) a = h_x(K), b = h_y(K)
(2) route (K,V) toward the point (a,b)
(3) the node owning the zone containing (a,b) stores (K,V)

43 An Example of CAN (cont.)
Node J retrieves the value: node J::retrieve(K)
(1) a = h_x(K), b = h_y(K)
(2) route "retrieve(K)" to the point (a,b), where (K,V) is stored
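A toy sketch of how such a request could be routed greedily through neighboring zones toward the point (a,b). The zone layout, neighbor lists, and the "closest zone centre" heuristic are simplifying assumptions for illustration, not CAN's exact routing metric:

```python
import math

# Greedy CAN-style routing: forward toward the neighbor whose zone centre is closest
# to the target point until the owning node is reached (toy data, no networking).

def centre(zone):
    (xlo, xhi), (ylo, yhi) = zone
    return ((xlo + xhi) / 2, (ylo + yhi) / 2)

def owns(zone, p):
    (xlo, xhi), (ylo, yhi) = zone
    return xlo <= p[0] < xhi and ylo <= p[1] < yhi

def route(start, target, zones, neighbors):
    node = start
    while not owns(zones[node], target):
        # Greedy step: pick the neighbor closest (in coordinate distance) to the target.
        node = min(neighbors[node], key=lambda n: math.dist(centre(zones[n]), target))
    return node

zones = {"A": ((0.0, 0.5), (0.0, 1.0)),
         "B": ((0.5, 1.0), (0.5, 1.0)),
         "C": ((0.5, 1.0), (0.0, 0.5))}
neighbors = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
print(route("A", (0.8, 0.2), zones, neighbors))   # -> 'C' stores/serves (K,V)
```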

44 Important Thing
Important note: data stored in CAN is addressable by name (i.e., key), not by location (i.e., IP address).

45 Conclusions about CAN
- CAN supports the basic hash-table operations on key-value pairs (K,V): insert, search, delete
- CAN is composed of individual nodes
- Each node stores a chunk (zone) of the hash table, i.e. a subset of the (K,V) pairs in the table
- Each node also stores state information about neighboring zones

46 References
- Kien A. Hua, Duc A. Tran, and Tai Do, "ZIGZAG: An Efficient Peer-to-Peer Scheme for Media Streaming", IEEE INFOCOM.
- Sylvia Ratnasamy, Paul Francis, Mark Handley, Richard Karp, and Scott Shenker, "A Scalable Content-Addressable Network", ACM SIGCOMM 2001 (San Diego, CA, August 2001).
- Mayank Bawa, Brian F. Cooper, Arturo Crespo, Neil Daswani, Prasanna Ganesan, Hector Garcia-Molina, Sepandar Kamvar, Sergio Marti, Mario Schlosser, Qi Sun, Patrick Vinograd, and Beverly Yang, "Peer-to-Peer Research at Stanford".
- Ion Stoica, Robert Morris, David Karger, M. Frans Kaashoek, and Hari Balakrishnan, "Chord: A Scalable Peer-to-peer Lookup Service for Internet Applications", ACM SIGCOMM 2001.
- Prasanna Ganesan, Qixiang Sun, and Hector Garcia-Molina, "YAPPERS: A Peer-to-Peer Lookup Service over Arbitrary Topology", IEEE INFOCOM 2003.

