A Scalable, Content-Addressable Network
Sylvia Ratnasamy, Paul Francis, Mark Handley, Richard Karp, Scott Shenker
ACIRI, U.C. Berkeley, Tahoe Networks

Outline
Introduction
Design
Evaluation
Ongoing Work

Internet-scale hash tables
Hash tables – essential building block in software systems
Internet-scale distributed hash tables – equally valuable to large-scale distributed systems?
–peer-to-peer systems: Napster, Gnutella, Groove, FreeNet, MojoNation…
–large-scale storage management systems: Publius, OceanStore, PAST, Farsite, CFS...
–mirroring on the Web

Content-Addressable Network (CAN)
CAN: Internet-scale hash table
Interface
–insert(key,value)
–value = retrieve(key)
Properties
–scalable
–operationally simple
–good performance
Related systems: Chord/Pastry/Tapestry/Buzz/Plaxton...
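The interface is small enough to pin down in code. Below is a minimal in-memory stand-in (a sketch, not the paper's implementation): a real CAN spreads this table across all participating nodes, but the client-visible API is just these two calls.

```python
# Minimal stand-in for the insert/retrieve interface above; a real CAN
# distributes this table over many nodes, this only fixes the API.
class ContentAddressableStore:
    def __init__(self):
        self._table = {}

    def insert(self, key, value):
        self._table[key] = value

    def retrieve(self, key):
        return self._table.get(key)

store = ContentAddressableStore()
store.insert("song.mp3", "198.10.0.1")            # (key, value)
assert store.retrieve("song.mp3") == "198.10.0.1"
```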

Problem Scope
Design a system that provides the interface
–scalability ✓
–robustness ✓
–performance ✓
–security ✗
Application-specific, higher-level primitives
–keyword searching ✗
–mutable content ✗
–anonymity ✗

Outline
Introduction
Design
Evaluation
Ongoing Work

CAN: basic idea
[animation: nodes jointly store a table of (K,V) pairs; insert(K1,V1) places the pair at some node, and retrieve(K1) later finds it]

CAN: solution
virtual Cartesian coordinate space
entire space is partitioned amongst all the nodes
–every node “owns” a zone in the overall space
abstraction
–can store data at “points” in the space
–can route from one “point” to another
point = the node that owns the enclosing zone
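As a sketch of the zone abstraction, suppose a zone is represented as a list of per-dimension (lo, hi) intervals in the unit d-cube (a representation assumed here for illustration, not the paper's data structure):

```python
# Sketch: a zone as a list of (lo, hi) intervals, one per dimension,
# and the ownership test that maps a point to its responsible node.
def owns(zone, point):
    """True if `point` lies inside `zone`, i.e. the node owning this
    zone is responsible for data stored at that point."""
    return all(lo <= x < hi for (lo, hi), x in zip(zone, point))

# The node owning the top-left quarter of the unit square stores every
# (key, value) pair whose key hashes into that quarter.
print(owns([(0.0, 0.5), (0.5, 1.0)], (0.2, 0.7)))  # True
print(owns([(0.0, 0.5), (0.5, 1.0)], (0.8, 0.7)))  # False
```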

CAN: simple example
[animation: a 2-d coordinate space, first owned entirely by node 1, then split between nodes 1 and 2, then among nodes 1, 2, and 3 as nodes join]

CAN: simple example
node I::insert(K,V)
(1) a = h_x(K), b = h_y(K)
(2) route(K,V) -> (a,b)
(3) the node owning (a,b) stores (K,V)

CAN: simple example
node J::retrieve(K)
(1) a = h_x(K), b = h_y(K)
(2) route “retrieve(K)” to (a,b)
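A sketch of step (1): two independent hash functions map the key to a point (a,b), and because both insert and retrieve hash the same way, they meet at the same point. Salted SHA-1 is used below as a stand-in for h_x and h_y; the paper does not prescribe a particular hash function.

```python
import hashlib

def h(key, dim):
    """Hash `key` into [0, 1) for dimension `dim` (salted SHA-1 as a
    stand-in for the independent hash functions h_x, h_y)."""
    digest = hashlib.sha1(f"{dim}:{key}".encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def point_for_key(key, d=2):
    return tuple(h(key, i) for i in range(d))

a, b = point_for_key("K")            # insert: route (K, V) to (a, b)
assert point_for_key("K") == (a, b)  # retrieve hashes to the same point
```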

CAN
Data stored in the CAN is addressed by name (i.e., key), not location (i.e., IP address)

CAN: routing table

CAN: routing
[figure: a message is forwarded greedily through neighboring zones from the node at (x,y) toward the destination point (a,b)]

CAN: routing
A node only maintains state for its immediate neighboring nodes
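Two zones are neighbors when their coordinate spans overlap along d-1 dimensions and abut along exactly one. A sketch of that test, reusing the interval-list zone representation assumed earlier:

```python
# Neighbor test for axis-aligned zones: abut in exactly one dimension
# and overlap (not merely touch at a corner) in all the others.
def are_neighbors(zone_a, zone_b):
    abut = overlap = 0
    for (alo, ahi), (blo, bhi) in zip(zone_a, zone_b):
        if ahi == blo or bhi == alo:
            abut += 1
        elif min(ahi, bhi) > max(alo, blo):
            overlap += 1
    return abut == 1 and overlap == len(zone_a) - 1

# Left and right halves of the unit square abut in x, overlap in y:
print(are_neighbors([(0, .5), (0, 1)], [(.5, 1), (0, 1)]))  # True
```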

CAN: node insertion
1) the new node discovers some node “I” already in the CAN (e.g., via a bootstrap node)
2) the new node picks a random point (p,q) in the space
3) I routes to (p,q) and discovers node J, whose zone contains (p,q)
4) J’s zone is split in half… the new node owns one half

CAN: node insertion
Inserting a new node affects only a single other node and its immediate neighbors
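A sketch of step 4. For simplicity this splits along the zone's longest side; the paper instead cycles through the dimensions in a fixed order so that zones can later be re-merged.

```python
# Halve a zone at join time: the resident node keeps one half, the
# joining node takes the other. Splitting the longest side is a
# simplification; CAN uses a fixed dimension ordering.
def split_zone(zone):
    dim = max(range(len(zone)), key=lambda i: zone[i][1] - zone[i][0])
    lo, hi = zone[dim]
    kept, given = list(zone), list(zone)
    kept[dim] = (lo, (lo + hi) / 2)
    given[dim] = ((lo + hi) / 2, hi)
    return kept, given

print(split_zone([(0.0, 1.0), (0.0, 1.0)]))
# ([(0.0, 0.5), (0.0, 1.0)], [(0.5, 1.0), (0.0, 1.0)])
```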

CAN: node failures
Need to repair the space
–recover database
  soft-state updates
  use replication, rebuild database from replicas
–repair routing
  takeover algorithm

CAN: takeover algorithm
Simple failures
–know your neighbor’s neighbors
–when a node fails, one of its neighbors takes over its zone
More complex failure modes
–simultaneous failure of multiple adjacent nodes
–scoped flooding to discover neighbors
–hopefully, a rare event
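For the simple-failure case, the paper has each surviving neighbor start a takeover timer proportional to its own zone volume, so the neighbor with the smallest zone claims the failed zone first. A sketch of that rule, assuming each neighbor knows the others' zone volumes:

```python
# The smallest zone's timer fires first; the others cancel when they
# hear its TAKEOVER message. Collapsed here to picking the minimum.
def takeover_winner(neighbor_volumes):
    """neighbor_volumes: {node_id: zone volume} for the failed
    node's live neighbors."""
    return min(neighbor_volumes, key=neighbor_volumes.get)

print(takeover_winner({"A": 0.25, "B": 0.125, "C": 0.5}))  # "B"
```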

CAN: node failures
Only the failed node’s immediate neighbors are required for recovery

Design recap
Basic CAN
–completely distributed
–self-organizing
–nodes only maintain state for their immediate neighbors
Additional design features
–multiple, independent spaces (realities)
–background load-balancing algorithm
–simple heuristics to improve performance

Outline
Introduction
Design
Evaluation
Ongoing Work

Evaluation
Scalability
Low latency
Load balancing
Robustness

CAN: scalability
For a uniformly partitioned space with n nodes and d dimensions
–per node, number of neighbors is 2d
–average routing path is (d/4) n^(1/d) hops
–simulations show that the above results hold in practice
Can scale the network without increasing per-node state
Chord/Plaxton/Tapestry/Buzz
–log(n) neighbors with log(n) hops
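To make the trade-off concrete, a quick computation from the two formulas (illustrative arithmetic only; the measured numbers are in the paper):

```python
def neighbors(d):
    return 2 * d                   # per-node routing state

def avg_path_hops(n, d):
    return (d / 4) * n ** (1 / d)  # average routing path length

for d in (2, 10):
    print(d, neighbors(d), round(avg_path_hops(131_072, d), 1))
# d=2:  4 neighbors, ~181.0 hops
# d=10: 20 neighbors, ~8.1 hops
```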

CAN: low-latency
Problem
–latency stretch = (CAN routing delay) / (IP routing delay)
–application-level routing may lead to high stretch
Solution
–increase dimensions
–heuristics
  RTT-weighted routing
  multiple nodes per zone (peer nodes)
  deterministically replicate entries

CAN: low-latency
[plot: latency stretch vs. #nodes (16K, 32K, 65K, 131K); #dimensions = 2; w/o heuristics vs. w/ heuristics]

CAN: low-latency
[plot: latency stretch vs. #nodes (16K, 32K, 65K, 131K); #dimensions = 10; w/o heuristics vs. w/ heuristics]

CAN: load balancing
Two pieces
–Dealing with hot-spots
  popular (key,value) pairs
  nodes cache recently requested entries
  overloaded node replicates popular entries at neighbors
–Uniform coordinate space partitioning
  uniformly spread (key,value) entries
  uniformly spread out routing load

Uniform Partitioning
Added check
–at join time, pick a zone
–check neighboring zones
–pick the largest zone and split that one
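A sketch of the check, assuming the node whose zone contains the join point can compare volumes with its neighbors:

```python
# Rather than always splitting the zone the join point lands in, split
# the largest among it and its neighbors, keeping zone volumes (and
# hence storage and routing load) closer to uniform.
def zone_to_split(own_id, own_volume, neighbor_volumes):
    volumes = {own_id: own_volume, **neighbor_volumes}
    return max(volumes, key=volumes.get)

print(zone_to_split("J", 0.125, {"A": 0.25, "B": 0.0625}))  # "A"
```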

Uniform Partitioning
[histogram: percentage of nodes vs. zone volume (V/16, V/8, V/4, V/2, V, 2V, 4V, 8V, where V = total volume / n); w/o check vs. w/ check; 65,000 nodes, 3 dimensions]

CAN: Robustness
Completely distributed
–no single point of failure
Not exploring database recovery
Resilience of routing
–can route around trouble

Routing resilience
[animation: a message routed from source to destination; when nodes along the greedy path fail, the message is routed around the failed region]

Routing resilience
Node X::route(D)
If (X cannot make progress to D)
–check if any neighbor of X can make progress
–if yes, forward message to one such neighbor
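A runnable sketch of this rule. The two-hop state is the "know your neighbor's neighbors" information kept for the takeover algorithm; the data structures here are a simplification assumed for illustration.

```python
import math

def next_hop(my_pos, dest, neighbors, neighbor_neighbors):
    """neighbors: {id: position}; neighbor_neighbors: {id: [positions
    of that neighbor's own neighbors]}. Returns the id to forward to,
    or None if no route can be found from here."""
    my_d = math.dist(my_pos, dest)
    # Normal greedy step: forward to the neighbor closest to dest.
    closer = [n for n, p in neighbors.items() if math.dist(p, dest) < my_d]
    if closer:
        return min(closer, key=lambda n: math.dist(neighbors[n], dest))
    # No progress possible (e.g., failures ahead): forward to any
    # neighbor that can itself make progress toward dest.
    for n, positions in neighbor_neighbors.items():
        if any(math.dist(p, dest) < math.dist(neighbors[n], dest)
               for p in positions):
            return n
    return None
```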

Routing resilience
[plot: Pr(successful routing) vs. #dimensions; CAN size = 16K nodes, Pr(node failure) = 0.25]

Routing resilience
[plot: Pr(successful routing) vs. Pr(node failure); CAN size = 16K nodes, #dimensions = 10]

Outline
Introduction
Design
Evaluation
Ongoing Work

Ongoing Work
Topologically-sensitive CAN construction
–distributed binning

Distributed Binning
Goal
–bin nodes such that co-located nodes land in the same bin
Idea
–well-known set of landmark machines
–each CAN node measures its RTT to each landmark
–orders the landmarks in order of increasing RTT
CAN construction
–place nodes from the same bin close together on the CAN
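A sketch of the binning rule, assuming a well-known landmark list and a measure_rtt(host) helper (both hypothetical names):

```python
LANDMARKS = ["lm1.example.net", "lm2.example.net",
             "lm3.example.net", "lm4.example.net"]

def my_bin(measure_rtt):
    """A node's bin is the permutation of landmarks ordered by
    increasing RTT; nearby nodes tend to measure similar orderings
    and so land in the same bin."""
    return tuple(sorted(LANDMARKS, key=measure_rtt))

# Example with canned RTTs (ms) standing in for real measurements:
rtts = {"lm1.example.net": 80, "lm2.example.net": 15,
        "lm3.example.net": 45, "lm4.example.net": 120}
print(my_bin(rtts.get))  # ('lm2...', 'lm3...', 'lm1...', 'lm4...')
```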

Distributed Binning
[plots: latency stretch vs. number of nodes (256, 1K, 4K) for #dimensions = 2 and #dimensions = 4; comparing w/o binning, w/ binning, and naïve partitioning; 4 landmarks placed 5 hops away from each other]

Ongoing Work (cont’d)
Topologically-sensitive CAN construction
–distributed binning
CAN Security (Petros Maniatis - Stanford)
–spectrum of attacks
–appropriate counter-measures

Ongoing Work (cont’d)
CAN Usage
–Application-level Multicast (NGC 2001)
–Grass-Roots Content Distribution
–Distributed Databases using CANs (J. Hellerstein, S. Ratnasamy, S. Shenker, I. Stoica, S. Zhuang)

Summary
CAN
–an Internet-scale hash table
–potential building block in Internet applications
Scalability
–O(d) per-node state
Low-latency routing
–simple heuristics help a lot
Robust
–decentralized, can route around trouble