Common approach
1. Define a space: assign a random (e.g., 160-bit) ID to each node and each key.
2. Define a metric topology on this space, that is, on the space of keys and node IDs.
3. Each node keeps contact information for O(log n) other nodes.
4. Provide a lookup algorithm that maps a key to a node, i.e., finds the node whose ID is closest to a given key; the metric identifies the closest node uniquely.
5. Store and retrieve a key/value pair at the node whose ID is closest to the key (or provide alternative functionality).
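A minimal sketch of steps 1, 2, and 5, assuming SHA-1 for the 160-bit IDs and clockwise ring distance as the metric; the node names and helper functions are illustrative, not part of any particular DHT.

```python
import hashlib

BITS = 160
MOD = 2 ** BITS

def make_id(name: str) -> int:
    """Step 1: assign a (pseudo-)random 160-bit ID by hashing the name."""
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

def ring_distance(a: int, b: int) -> int:
    """Step 2: one possible metric, clockwise distance on the ID ring."""
    return (b - a) % MOD

def closest_node(key: str, nodes: list[str]) -> str:
    """Steps 4-5: the node whose ID minimizes the metric distance to the key."""
    k = make_id(key)
    return min(nodes, key=lambda n: ring_distance(k, make_id(n)))

nodes = ["node-A", "node-B", "node-C"]
print(closest_node("some-file.txt", nodes))  # node responsible for this key
```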

DHT: Comparison axes
- Efficiency: lookup, insertion, deletion.
- Size of routing table: how much state information is maintained on each peer, typically O(n), O(log N), or O(1). Tradeoff: larger state means higher maintenance cost, but faster lookups.
- Flexibility of routing: a rigid routing table requires higher maintenance cost (failures must be detected and fixed immediately), complicates recovery, and precludes proximity-based routing.

Chord: MIT
Overlay:
- Peers are organized in a ring; each peer knows its successor.
- A key k is stored on the successor of k.
Routing:
- Finger table: more detailed information for the nearby part of the ring.
- Lookups make large jumps first, then progressively shorter jumps, resembling a binary search.
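A toy sketch of the finger-table idea, using a small 8-bit ring and a global sorted node set in place of the real protocol messages; the helper names are illustrative.

```python
M = 8                       # m-bit identifier ring (2^160 in the real system)
RING = 2 ** M

def successor(ident: int, node_ids: set[int]) -> int:
    """The first node clockwise from ident; it is the node that stores key ident."""
    candidates = [n for n in sorted(node_ids) if n >= ident]
    return candidates[0] if candidates else min(node_ids)   # wrap around the ring

def finger_table(n: int, node_ids: set[int]) -> list[int]:
    """finger[i] = successor(n + 2^i): entries for the nearby part of the ring
    are dense, and routing forwards via the largest finger that does not
    overshoot the key, so each hop roughly halves the remaining distance."""
    return [successor((n + 2 ** i) % RING, node_ids) for i in range(M)]

nodes = {5, 20, 87, 140, 200}
print(finger_table(20, nodes))   # fingers of node 20 in this toy ring
```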

Chord: Characteristics
Efficient directory operations: insertion, deletion, lookup.
Analysis:
- O(log N) routing table size.
- O(log N) logical steps to reach the successor of a key k.
- O(log² N) messages for a peer joining or leaving.
High maintenance cost: a node join/leave induces state changes on other nodes.
Rigidity of the routing table (in the original proposal): for a given set of nodes there is only one optimal/ideal state; it is unique and deterministic.

CAN: Berkeley
Overlay:
- A virtual d-dimensional coordinate space.
- Each peer is responsible for a zone of the space.
Routing:
- Each peer maintains information about its neighbors (O(d) routing state).
- Greedy routing: forward toward the neighbor closest to the target point.
Joining: choose a random point; the peer owning that point splits its zone.
Performance analysis: expected (d/4)·n^(1/d) steps per lookup, where d is the dimension.
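A rough sketch of CAN-style greedy forwarding, with zones simplified to their centre points on a d-dimensional unit torus; the neighbour map is supplied by hand here rather than built by the join protocol.

```python
import math

def torus_dist(a, b):
    """Euclidean distance when every coordinate wraps around at 1.0."""
    return math.sqrt(sum(min(abs(x - y), 1 - abs(x - y)) ** 2
                         for x, y in zip(a, b)))

def greedy_route(src, target, neighbours):
    """Repeatedly forward to the neighbour closest to the target point,
    stopping when no neighbour improves on the current node."""
    path, current = [src], src
    while True:
        nxt = min(neighbours[current], key=lambda n: torus_dist(n, target))
        if torus_dist(nxt, target) >= torus_dist(current, target):
            return path
        path.append(nxt)
        current = nxt

# Toy 2-d example: four zone centres, each neighbouring all the others.
pts = [(0.1, 0.1), (0.4, 0.2), (0.7, 0.8), (0.9, 0.4)]
nbrs = {p: [q for q in pts if q != p] for p in pts}
print(greedy_route(pts[0], (0.8, 0.7), nbrs))
```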

Pastry: Rice
Circular namespace.
Routing table:
- For a peer p with identifier IDp: for each prefix of IDp, keep a set of peers that share that prefix but differ in the next digit.
Routing (Plaxton-style prefix routing):
- Prefer a peer whose ID shares a longer prefix with the target ID.
- If none exists, choose a peer whose ID is numerically closer to the target ID.
- Exploits locality when choosing among candidates.
Similar analysis properties to Chord.
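A small sketch of prefix routing in the Plaxton/Pastry style over hexadecimal ID strings; the candidate list stands in for the node's routing table and leaf set, and the IDs are made up for the example.

```python
def shared_prefix_len(a: str, b: str) -> int:
    """Number of leading digits the two IDs have in common."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def next_hop(current: str, target: str, candidates: list[str]) -> str:
    """Prefer a candidate sharing a strictly longer prefix with the target
    than the current node does; otherwise pick whichever known node is
    numerically closest to the target."""
    here = shared_prefix_len(current, target)
    better = [c for c in candidates if shared_prefix_len(c, target) > here]
    if better:
        return max(better, key=lambda c: shared_prefix_len(c, target))
    return min(candidates + [current],
               key=lambda c: abs(int(c, 16) - int(target, 16)))

print(next_hop("a1f0", "a9c2", ["a9b1", "b123", "a200"]))   # -> a9b1
```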

Symphony: Stanford
"Distributed Hashing in a Small World."
Like Chord:
- Ring overlay structure.
- Same partitioning of the key ID space.
Unlike Chord:
- Routing table: links to immediate neighbors (replicated for reliability) plus k long-distance links for jumping (not replicated).
- Long-distance links are built probabilistically: neighbors are selected using a probability distribution function (pdf), exploiting the characteristics of a small-world network.
- The current system size is estimated dynamically.
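A sketch of how the probabilistic long-distance links could be drawn, assuming the harmonic pdf p(x) = 1/(x ln n) on [1/n, 1] and the standard inverse-transform trick x = n^(u-1); the function names and the size estimate n are placeholders, not taken from the paper's slides.

```python
import random

def harmonic_sample(n: int) -> float:
    """Inverse-transform sample from p(x) = 1/(x ln n) on [1/n, 1]."""
    u = random.random()
    return n ** (u - 1)

def long_links(my_pos: float, n_estimate: int, k: int = 4) -> list[float]:
    """Ring positions (as fractions of the ring) targeted by the k
    long-distance links; the node then links to the manager of each point."""
    return [(my_pos + harmonic_sample(n_estimate)) % 1.0 for _ in range(k)]

print(long_links(0.25, n_estimate=1000, k=4))
```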

Symphony: Performance
Each node has k = O(1) long-distance links.
- Lookup: expected path length O((1/k)·log² N) hops.
- Join and leave: expected O(log² N) messages.
Comparing with Chord:
- Discards Chord's strong requirements on the routing table (finger table).
- This idea was later incorporated into Chord in a different form.

Kademlia: NYU
Overlay:
- Node position in the binary ID tree: its shortest unique prefix.
- Service: locate the closest nodes to a desired ID.
Routing ("based on the XOR metric"):
- For each subtree of the ID tree not containing p (i.e., each subtree branching off the path to p), keep up to k contacts; contacts in the same bucket share the same prefix length with p.
- The distance between two IDs is the magnitude of their XOR.
- k is the replication parameter (e.g., 20).
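A minimal sketch of the XOR metric and of which k-bucket (subtree) a contact falls into; the 4-bit IDs are purely illustrative.

```python
def xor_distance(a: int, b: int) -> int:
    """Kademlia's distance between two IDs is simply their bitwise XOR."""
    return a ^ b

def bucket_index(my_id: int, other_id: int) -> int:
    """Index of the subtree (k-bucket) the other node belongs to: the
    position of the highest bit where the two IDs differ, i.e. the ID
    length minus 1 minus the length of the shared prefix."""
    return xor_distance(my_id, other_id).bit_length() - 1

contacts = [0b1000, 0b0110, 0b1011]
target = 0b1010
print(sorted(contacts, key=lambda c: xor_distance(c, target)))  # closest first
print(bucket_index(0b1010, 0b1000))  # IDs first differ at bit 1 -> bucket 1
```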

Kademlia: Comparing with Chord
Like Chord, it achieves similar performance:
- Routing table size: O(log N).
- Lookup: O(log N).
- Lower node join/leave cost.
- Deterministic.
Unlike (the original) Chord:
- Flexible routing table: given a topology, more than one valid routing table exists.
- Symmetric routing (the XOR metric is symmetric).

Skip Graphs: Yale
Based on the "skip list" [1990]:
- A randomized, balanced structure organized as a tower of increasingly sparse linked lists.
- All nodes join the linked list at level 0.
- Each node is promoted to the next level with a fixed probability p.
- Each node has 2/(1-p) pointers on average.
- Average search time: O(log N) (same for insert and delete).
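A quick sketch of the randomized level assignment behind a skip list, assuming promotion probability p; it only illustrates why a node appears in about 1/(1-p) levels (hence roughly 2/(1-p) pointers with doubly linked lists).

```python
import random

def pick_height(p: float = 0.5, max_level: int = 32) -> int:
    """Number of levels a node appears in: level 0 always, then each
    successive higher level with probability p."""
    height = 1
    while random.random() < p and height < max_level:
        height += 1
    return height

heights = [pick_height(0.5) for _ in range(100_000)]
print(sum(heights) / len(heights))   # ~ 1/(1-p) = 2 levels on average
```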

Skip Graph
A skip list is not suitable for a P2P environment:
- No redundancy and a hotspot problem: the upper levels are vulnerable to failure and contention.
Skip graph: an extension of the skip list:
- The level-0 linked list forms a ring, as in Chord.
- Multiple (at most 2^i) lists exist at level i (i = 1, ..., log n).
- Each node participates in every level, but in different lists.
- A membership vector m(x) decides which list a node joins at each level (see the sketch below).
- Every node sees its own skip list.
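A sketch of the membership-vector idea referenced above: the first i bits of a node's random vector m(x) name the level-i list it joins; the bit width and the printed levels are arbitrary choices for illustration.

```python
import random

def membership_vector(bits: int = 32) -> list[int]:
    """A random bit string m(x), fixed when the node joins."""
    return [random.randint(0, 1) for _ in range(bits)]

def list_at_level(m: list[int], i: int) -> tuple[int, ...]:
    """The level-i list a node belongs to is named by m's first i bits,
    so level i has at most 2^i distinct lists."""
    return tuple(m[:i])

m = membership_vector()
for i in range(4):
    print(i, list_at_level(m, i))   # level 0 is the one list all nodes share
```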

Skip Graph: Performance
- Since the membership vector is random, the performance analysis is probabilistic.
- Expected lookup cost: O(log n) messages (carried over from the skip-list analysis).
- Insertion: same cost as search, but more complicated due to concurrent joins.
Overall:
- Probabilistic (like Symphony).
- The routing table is flexible: given the same set of participating nodes, there is no single fixed network structure.