1 Plaxton Routing

2 History Greg Plaxton, Rajmohan Rajaraman, Andrea Richa. Accessing nearby copies of replicated objects, SPAA 1997. Used in several important P2P networks, such as Microsoft's Pastry and UC Berkeley's Tapestry (Zhao, Kubiatowicz, Joseph et al.); Tapestry is also the backbone of OceanStore, a blueprint for a persistent global-scale storage system.

3 Goal A set A of m objects resides in a network G. Plaxton et al. proposed an algorithm for accessing such a shared object via a nearby copy of it. Supported operations on shared objects are insert, delete, and read (but not write). [What is the challenge here? Think about this: if every node maintains a copy, then read is fast, but insert and delete are very expensive, and the storage overhead grows. The algorithm must be efficient w.r.t. both time and space.]

4 Main idea Embed an n-node virtual height-balanced tree T into the network. Each node u maintains information about copies of the object in the subtree of T rooted at u. To read, u checks its local memory for the subtree under it. If a copy, or a pointer to the object, is available, then u uses that information; otherwise, it passes the request to its parent in T.

5 Plaxton tree A Plaxton tree (some call it a Plaxton mesh) is a data structure that allows peers to efficiently locate objects and route to them across an arbitrarily sized network, using a small routing map at each hop. Each node serves as a client, a server, and a router.

6 Object and Node names Objects and nodes acquire names independent of their location and semantic properties, in the form of random fixed-length bit sequences represented in a common base (e.g., 40 hex digits representing 160 bits), using the output of hashing algorithms such as SHA-1 (which leads to a roughly even distribution in the name space). Assume that n is a power of 2^b, where b is a fixed positive integer. With b = 2 the IDs are written in base 4 (= 2^2); thus the name of object B = 1032, the name of object C = 3011, etc. Here n = 256.
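A minimal sketch (assumed helper, not from the paper) of how such a name could be derived: hash an arbitrary key with SHA-1 and write the low-order digits in base 2^b (here b = 2, four digits, matching the n = 256 example).

```python
import hashlib

def make_name(key: str, b: int = 2, digits: int = 4) -> str:
    """Return the low-order `digits` base-2^b digits of SHA-1(key)."""
    base = 2 ** b                              # b = 2 -> base 4, as on this slide
    h = int.from_bytes(hashlib.sha1(key.encode()).digest(), "big")
    out = []
    for _ in range(digits):
        out.append(str(h % base))              # works as written for bases up to 10
        h //= base
    return "".join(reversed(out))

print(make_name("object-B"))                   # a 4-digit base-4 name, e.g. "1032"
```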

7 Neighbor map Entries at level i match the last i digits of the node's ID. Number of entries per level = ID base (here it is 16). Each entry is the suffix-matching node with the least cost; for example, y1 is chosen for node x over y1' when c(x,y1) < c(x,y1'). If no such entry exists, pick the node with the highest ID and the largest suffix match. These are all primary entries. Size of the table: B · log_B(n), where B = 2^b is the ID base. [Figure: neighbor map of a node x, levels L0-L2, with example destinations.]

8 Neighbor map of 5642 Example with base-8 IDs: entries at level i match the last i digits of 5642, and each level holds 8 entries (one per next digit). Each entry is the suffix-matching node with the least cost; if no such node exists, pick the one with the highest ID and the largest suffix match. These are all primary entries. [Figure: levels L0-L3 of node 5642's neighbor map.]
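A hedged sketch of how such a table could be built from a known node set and a cost function c; the names here are illustrative assumptions, not the paper's notation.

```python
def suffix_match(a: str, b: str) -> int:
    """Length of the longest common suffix of two equal-length IDs."""
    n = 0
    while n < len(a) and a[-1 - n] == b[-1 - n]:
        n += 1
    return n

def build_neighbor_map(x: str, nodes, cost):
    """For each level i and digit d, keep the least-cost node whose ID ends
    with digit d followed by the last i digits of x (a primary entry)."""
    digits = "01234567"                        # base-8 IDs, as in the 5642 example
    table = {}
    for i in range(len(x)):
        table[i] = {}
        for d in digits:
            wanted = d + x[len(x) - i:]        # suffix the entry must share
            candidates = [y for y in nodes if y != x and y.endswith(wanted)]
            if candidates:                     # fallback rule from the slide omitted
                table[i][d] = min(candidates, key=lambda y: cost(x, y))
    return table
```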

9 Neighbor map In addition to primary entries, each level contains secondary entries. A node u is a secondary entry of node x at level i if (1) it is not the primary neighbor y, and (2) c(x,u) ≤ d · c(x,w) for all w with a matching suffix of size i, where d is a constant. Finally, each node stores reverse neighbors for each level: node y is a reverse neighbor of x iff x is a primary neighbor of y. All entries are statically chosen, and this needs global knowledge (-)
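The secondary-entry condition written out as a small check (a sketch; `level_candidates` is assumed to be the set of nodes whose suffix match with x has length i):

```python
def is_secondary(x, u, primary, level_candidates, cost, d):
    """u qualifies if it is not the primary neighbor and its cost from x is
    within a factor d of the cost to every suffix-matching candidate."""
    if u == primary:
        return False
    return all(cost(x, u) <= d * cost(x, w) for w in level_candidates)
```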

10 Routing Algorithm Assume that the destination node is a tree root. To route to node xyz: – let the length of the shared suffix so far be n – look at level n+1 – match the next digit of the destination ID – send the message to that node. Eventually the message gets relayed to the destination, fixing one more digit per hop (see the sketch below).
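A minimal sketch of one routing step, reusing suffix_match and the table from the neighbor-map sketch above; the level indexing is an illustrative assumption.

```python
def next_hop(current: str, dest: str, table):
    """With n suffix digits already shared, follow the entry that also matches
    the destination's next digit, extending the shared suffix by one."""
    n = suffix_match(current, dest)            # digits already resolved
    if n == len(dest):
        return None                            # arrived at the destination
    next_digit = dest[-1 - n]                  # (n+1)-th digit from the right
    return table[n].get(next_digit)            # primary entry for that digit
```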

11 Example of suffix routing Consider a 2^18 namespace. How many hops? Each hop fixes at least one more digit of the destination ID, so a route needs at most ⌈18/b⌉ hops (e.g., 6 hops with base-8 IDs).

12 Choosing the root For each object, the root node simply stores a pointer to the server that stores the object; the root does not hold a copy of the object itself. An object's root or surrogate node is chosen as a node whose ID matches the object's ID in the largest number of trailing positions. In case there are multiple such nodes, choose the node with the largest such ID.
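A sketch of this choice, assuming suffix_match from the earlier sketch and digit-wise (rather than bit-wise) comparison of IDs:

```python
def surrogate_root(object_id: str, nodes):
    """Pick the node matching the object ID in the most trailing digits,
    breaking ties by the largest node ID."""
    return max(nodes, key=lambda n: (suffix_match(n, object_id), n))
```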

13 Pointer list Each node x has a pointer list P(x): a list of triples (O, S, k), where O is the object, S is a node that holds a copy of O, and k is an upper bound on c(x,S). The pointer list is updated by insert and delete operations. [Figure: pointer entries for copies at servers S and S' along the paths to the root of O.]

14 Inserting an Object To insert an object, the publishing process computes the root and sends the message (O,S) towards the root node. At each hop along the way, the publishing process stores location information in the form of a mapping from the object O to the server S holding it, together with a cost bound. [Figure: (O,S,k) entries left on the path from server S to the root of O.]
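A hedged sketch of publishing, reusing the earlier helpers (next_hop, surrogate_root); pointer_lists and table_of are assumed per-node maps introduced for illustration, not the paper's notation.

```python
def publish(obj_id, server, pointer_lists, table_of, all_nodes, cost):
    """Route from server toward the root of obj_id, leaving (server, cost bound)
    for obj_id in the pointer list of every node on the way."""
    root = surrogate_root(obj_id, all_nodes)
    node, acc = server, 0                      # acc upper-bounds the cost back to server
    while node is not None:
        pointer_lists[node][obj_id] = (server, acc)
        nxt = next_hop(node, root, table_of[node])
        if nxt is None:
            break                              # reached the root of O
        acc += cost(node, nxt)
        node = nxt
```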

15 Inserting another copy Intermediate nodes maintain the cost of access. While inserting a duplicate copy, the pointers are modified so that they direct to the closest copy. [Figure: pointers along the two publish paths now direct to whichever of S and S' is closer.]
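The closest-copy rule as a small update step, consistent with the publish sketch above (illustrative names):

```python
def merge_pointer(pointer_list, obj_id, server, cost_bound):
    """Keep the pointer whose cost bound is smaller, i.e. the closer copy."""
    current = pointer_list.get(obj_id)         # (server, cost bound) or None
    if current is None or cost_bound < current[1]:
        pointer_list[obj_id] = (server, cost_bound)
```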

16 Read The client routes towards the root of O and redirects to S as soon as an entry (O,S) is found. If any intermediate node stores (O,S), that leads to a faster discovery. Also, nodes handling requests en route communicate with their primary and secondary neighbors to check for the existence of another close-by copy. [Figure: a read stopping at the first node on the path that holds a pointer.]
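A minimal read sketch using the same helpers; the neighbor-query optimization mentioned above is omitted.

```python
def read(obj_id, client, pointer_lists, table_of, all_nodes):
    """Walk toward the root of obj_id; at the first node holding a pointer,
    return the server that stores a nearby copy."""
    root = surrogate_root(obj_id, all_nodes)
    node = client
    while node is not None:
        if obj_id in pointer_lists[node]:
            server, _ = pointer_lists[node][obj_id]
            return server                      # fetch the object from this copy
        node = next_hop(node, root, table_of[node])
    return None                                # no copy of the object exists
```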

17 Deleting an object Deleting a copy removes all pointers to that copy of the object from the pointer lists of the nodes leading to the root; otherwise, it uses the reverse pointers to update the affected entries. [Figure: pointer entries for the deleted copy removed along the path to the root of O.]
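A sketch of the first part (removing this copy's pointers along the path to the root); the reverse-neighbor updates are not shown.

```python
def delete(obj_id, server, pointer_lists, table_of, all_nodes):
    """Walk from the server toward the root, dropping pointers to this copy."""
    root = surrogate_root(obj_id, all_nodes)
    node = server
    while node is not None:
        entry = pointer_lists[node].get(obj_id)
        if entry and entry[0] == server:       # only remove pointers to this copy
            del pointer_lists[node][obj_id]
        node = next_hop(node, root, table_of[node])
```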

18 Results Let C = max{c(u,v) : u,v ∈ V}. Cost of read = O(f(L(A)) · c(x,y)), where L(A) is the length of the message; if there is no shared copy, then it is O(C). Cost of insert = O(C). Cost of delete = O(C log n). Auxiliary memory to store q objects = O(q log² n) w.h.p.

19 Benefits and Limitations + Scalable solution + Existing objects are guaranteed to be found - Needs global knowledge to form the neighbor tables - The root node for each object may become a bottleneck.

20 Benefits and limitations + Simple fault handling + Scalable: all routing is done using locally available data + Optimal routing distance