A Scalable Content-Addressable Network (CAN)
Seminar "Peer-to-peer Information Systems"
Speaker: Vladimir Eske
Advisor: Dr. Ralf Schenkel
November 2003


Content
1. Basic architecture
   a. Data Model
   b. CAN Routing
   c. CAN construction
2. Architecture improvements
3. Summary

What is CAN?
The goal was to build a scalable peer-to-peer file distribution system.
Napster problem: centralized File Index
 - Single point of failure: low data availability
 - Not scalable: no way to decentralize it except to build a new system
Gnutella problem: File Index completely decentralized
 - Network flooding: low data availability
 - Not scalable: no way to group data
CAN - Content Addressable Network

What is CAN? CAN features
 - CAN is a distributed, Internet-scale hash table. It provides Insertion, Lookup, and Deletion operations on (Key, Value) pairs, e.g. (file name, file address).
 - CAN is designed to be completely distributed: it does not require any centralized control.
 - The CAN design is scalable: every part of the system maintains only a small amount of control state, independent of the number of parts.
 - CAN is fault-tolerant: it still provides routing even when some part of the system has crashed.

CAN architecture 1
The hash table works on a d-dimensional Cartesian coordinate space on a d-torus:
 - d-valued hash function: hash(K) = (x1, …, xd)
 - Cartesian distance metric
 - Cyclical d-dimensional space (figure: 1-dimensional Cartesian space, point = 0.2)
Zone - a chunk of the entire hash table, a piece of the Cartesian space (coordinate zone).
A Zone is valid if it has a squared shape.
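The two primitives on this slide, a d-valued hash and distance on a torus, can be sketched in Python. This is an illustrative construction (the paper does not prescribe a concrete hash function; deriving each coordinate from SHA-1 of "key:i" is an assumption of this sketch):

```python
import hashlib

D = 2  # number of dimensions; the space is the unit d-torus [0, 1)^d

def hash_point(key: str, d: int = D) -> tuple:
    """d-valued hash: derive one coordinate in [0, 1) per dimension
    (illustrative: one SHA-1 digest per dimension, not from the paper)."""
    coords = []
    for i in range(d):
        digest = hashlib.sha1(f"{key}:{i}".encode()).digest()
        coords.append(int.from_bytes(digest[:8], "big") / 2**64)
    return tuple(coords)

def torus_distance(p, q):
    """Cartesian distance on the unit d-torus: along each axis take the
    shorter way around the circle, then combine as Euclidean distance."""
    total = 0.0
    for a, b in zip(p, q):
        delta = abs(a - b)
        delta = min(delta, 1.0 - delta)  # wrap-around
        total += delta ** 2
    return total ** 0.5
```

For example, on the 2-torus the points (0.1, 0.1) and (0.9, 0.1) are only 0.2 apart, because the first coordinate wraps around.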

CAN architecture 2
CAN Nodes: Nodes own Zones
 - A Node is a machine in the network (a Node is not a Peer).
 - Every Node owns one distinct Zone.
 - A Node stores a chunk of the index (hash table) and all objects ((K,V) pairs) that belong to its Zone.
 - All Nodes together cover the whole space (the whole hash table).

CAN architecture 3
Neighbors in CAN:
 - Two nodes are neighbors if their zones overlap along d-1 dimensions and abut along one dimension.
 - A Node knows the IP addresses and the Zone coordinates of all its neighbor Nodes.
 - A Node can communicate only with its neighbors.
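The neighbor condition can be written down directly. A minimal sketch, assuming a zone is represented as a list of (lo, hi) intervals, one per dimension, and ignoring the torus wrap-around for simplicity:

```python
def zones_abut(zone_a, zone_b, d):
    """CAN neighbor test: two zones are neighbors if they abut along
    exactly one dimension and overlap along the remaining d-1 dimensions.
    A zone is a list of (lo, hi) intervals (wrap-around ignored here)."""
    abut = overlap = 0
    for i in range(d):
        alo, ahi = zone_a[i]
        blo, bhi = zone_b[i]
        if ahi == blo or bhi == alo:      # intervals touch end-to-end
            abut += 1
        elif alo < bhi and blo < ahi:     # intervals share interior points
            overlap += 1
    return abut == 1 and overlap == d - 1
```

Note that zones touching only at a corner (abutting in two dimensions) are correctly rejected: CAN routing never forwards diagonally.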

CAN architecture: Access
How to get access to the CAN system:
1. CAN has an associated DNS domain.
2. The CAN domain name is resolved by DNS to the IP addresses of Bootstrap servers.
3. A Bootstrap is a special CAN Node that holds only a list of several Nodes currently in the system.
User scenario:
1. A user who wants to join the system sends a request using the CAN domain name.
2. DNS redirects it to one of the Bootstraps.
3. The Bootstrap sends a list of Nodes to the user.
4. The user chooses one of them and establishes a connection.
This 3-level access algorithm reduces the failure probability:
 - The DNS domain just redirects all requests.
 - There are many Bootstraps.
 - There are many Nodes in each Bootstrap list.

CAN: routing algorithm
1. Start from some Node.
2. P = hash value of the Key.
3. Greedy forwarding. The current Node:
   1. Checks whether it or one of its neighbors contains the point P.
   2. IF NOT:
      a. Orders the neighbors by Cartesian distance between them and the point P.
      b. Forwards the search request to the closest one.
      c. That node repeats from step 1.
   3. OTHERWISE: the answer, the (Key, Value) pair, is sent to the user.
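The greedy forwarding loop above can be sketched as follows. The representation is an assumption of this sketch, not part of CAN: nodes are opaque identifiers, and the callbacks `owner_of`, `neighbors_of`, and `center_of` stand in for the zone state that each real node would hold locally.

```python
import math

def torus_dist(p, q):
    """Cartesian distance on the unit d-torus."""
    return math.sqrt(sum(min(abs(a - b), 1 - abs(a - b)) ** 2
                         for a, b in zip(p, q)))

def route(start, point, owner_of, neighbors_of, center_of):
    """Greedy CAN forwarding (simplified): hop from node to node,
    always picking the neighbor whose zone center is closest to the
    target point. Returns the list of hops ending at the point's owner."""
    path = [start]
    node = start
    while not owner_of(node, point):
        nbrs = list(neighbors_of(node))
        # a neighbor may already own the point
        owner = next((n for n in nbrs if owner_of(n, point)), None)
        if owner is not None:
            node = owner
        else:
            node = min(nbrs, key=lambda n: torus_dist(center_of(n), point))
        path.append(node)
    return path
```

On a 1-dimensional ring of four equal zones, routing from node 0 toward a point owned by node 2 visits nodes 0, 1, 2 in order.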

CAN: routing algorithm
Average path length is the average number of hops needed to reach a destination node, in the case when:
1. All Zones have the same volume.
2. There is no crashed Node.
Under these assumptions, for d dimensions and n equal zones the average routing path length is (d/4)(n^(1/d)) hops.
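The (d/4)(n^(1/d)) formula is easy to evaluate, and it shows why the dimension d is the main routing knob:

```python
def avg_path_length(n: int, d: int) -> float:
    """Average CAN routing path length in hops for n nodes with equal
    zones in d dimensions: (d/4) * n**(1/d)."""
    return (d / 4) * n ** (1 / d)

# For the same n, growing d shrinks the path dramatically:
# avg_path_length(1_000_000, 2)  -> 500.0 hops
# avg_path_length(1_000_000, 10) -> about 10 hops
```

The trade-off is that each node then maintains 2d neighbors instead of 4.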

CAN: routing algorithm
Fault-tolerant routing:
1. Start from some Node.
2. P = hash value of the Key.
3. Greedy forwarding:
   a. Before sending the request, the current node checks its neighbors' availability.
   b. The request is sent to the best available node.
The destination Node will be reached if there exists at least one path.

CAN construction: New Node arrival 1
A New Node, a server on the Internet, wants to join the system and share a piece of the hash table:
1. The New Node needs to get access to the CAN.
2. The system should allocate a piece of the hash table to the New Node.
3. The New Node should start working in the system: provide routing.
1. Finding an access point: the New Node uses the basic access algorithm described earlier:
 - Sends a request to the CAN domain name.
 - Gets the IP address of one of the Nodes currently in the system.
 - Connects to this Node.

CAN construction: New Node arrival 2
2. Finding a Zone:
1. Randomly choose a point P.
2. A JOIN request is sent to the node that owns P.
3. The request is forwarded via CAN routing.
4. The desired node (P's owner) splits its Zone in half: one half is assigned to the New Node, the other half stays with the Old Node.
5. The Zone is split along only one dimension: the greatest dimension with the lowest order.
6. The hash table contents associated with the New Node's Zone are moved from the Old Node to the New Node.
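The split in steps 4-5 can be sketched as below. The zone representation (a list of (lo, hi) intervals) and the tie-breaking rule are assumptions of this sketch: it splits the longest side, preferring the lowest dimension index on ties, which approximates "the greatest dimension with the lowest order".

```python
def split_zone(zone):
    """Split a CAN zone in half along one dimension. A zone is a list
    of (lo, hi) intervals. Splits the longest side; ties go to the
    lowest dimension index (illustrative tie-breaking rule)."""
    dim = max(range(len(zone)),
              key=lambda i: (zone[i][1] - zone[i][0], -i))
    lo, hi = zone[dim]
    mid = (lo + hi) / 2
    old_half, new_half = list(zone), list(zone)
    old_half[dim] = (lo, mid)   # stays with the old node
    new_half[dim] = (mid, hi)   # assigned to the joining node
    return old_half, new_half
```

Splitting the rectangle [0,1) x [0,0.5) cuts it along dimension 0, its longest side, yielding two squares.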

CAN construction: New Node arrival 3
3. Joining the routing:
1. The New Node gets a list of neighbors from the Old Node (the old owner of the split Zone).
2. The Old Node refreshes its list of neighbors: removes the lost neighbors and adds the New Node.
3. All neighbors get a message to update their neighbor lists: remove the Old Node and add the New Node where appropriate.

CAN construction: Node departure 1
Node departure:
a. If the Zone of one of the neighbors can be merged with the departing Node's Zone to produce a valid Zone, this neighbor handles the merged Zone.
b. Otherwise, one of the neighbors handles two different zones.
In both cases (a and b):
1. Data from the departing Node is moved to the receiving Node.
2. The receiving Node updates its neighbor list.
3. All their neighbors are notified about the changes and update their neighbor lists.

CAN construction: Node departure 2
A Node has crashed:
1. Periodically, every node sends a message to all its neighbors.
2. If a Node receives no message from one of its neighbors for a period of time t, it starts the TAKEOVER mechanism.
3. It sends a TAKEOVER message to each neighbor of the crashed Node (the neighbor that did not send a periodic message).
4. Each neighbor that receives the message compares its own Zone with the Zone of the sender. If it has a smaller Zone, it sends a new TAKEOVER message to all of the crashed Node's neighbors.
5. The crashed Node's Zone is handled by the Node that gets no answer to its message for a period of time t.
Data stored on the crashed Node is unavailable until the source owner refreshes the CAN state.
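The effect of the TAKEOVER exchange is that the neighbor with the smallest zone wins. A minimal sketch of just that selection rule (the distributed timers and message exchange are elided; the dictionary representation is an assumption):

```python
def takeover_winner(candidates):
    """TAKEOVER outcome sketch: among the crashed node's neighbors, the
    node with the smallest zone volume takes over the orphaned zone.
    candidates maps node id -> zone (a list of (lo, hi) intervals)."""
    def volume(zone):
        v = 1.0
        for lo, hi in zone:
            v *= hi - lo
        return v
    return min(candidates, key=lambda node: volume(candidates[node]))
```

Choosing the smallest-volume neighbor keeps the zone assignment as balanced as possible after a failure.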

CAN problems
Main problems:
1. Routing latency:
   a. Path latency - avg. # of hops per path
   b. Hop latency - avg. real duration of one hop
2. Increasing fault tolerance
3. Increasing data availability
The basic CAN architecture achieves:
1. Scalability, distribution of state
2. Increased data availability (compared to Napster and Gnutella)

Content
1. Basic architecture
   a. Data Model
   b. CAN Routing
   c. CAN construction
2. Architecture improvements
   a. Path Latency Improvement
   b. Hop Latency Improvement
   c. Mixed approaches
   d. Construction Improvement
3. Summary

Path latency improvements 1
Realities: multiple coordinate spaces.
 - Maintain multiple (r) coordinate spaces; each coordinate space is called a Reality.
 - Every Node owns a different Zone in each Reality; all zones are chosen randomly.
 - The contents of the hash table are replicated in every Reality.
 - All Realities have: the same # of Zones, the same data, the same hash function.

Path latency improvements 2
The extended routing algorithm for Realities:
1. The destination Zone is the same in all Realities.
2. Each Zone can be owned by many Nodes.
3. The basic routing algorithm is applied, with the following extensions:
   a. Every Node on the path checks in which of its Realities the distance to the destination is smallest.
   b. The request is forwarded in the best Reality.
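Step 3a amounts to a per-hop minimization over realities. A minimal sketch, assuming a node's position in each reality is summarized by its zone center (the mapping `positions` is an illustrative representation):

```python
import math

def torus_dist(p, q):
    """Cartesian distance on the unit d-torus."""
    return math.sqrt(sum(min(abs(a - b), 1 - abs(a - b)) ** 2
                         for a, b in zip(p, q)))

def best_reality(positions, target):
    """Pick the reality to forward in: the one where this node's zone
    center is closest to the target point.
    positions maps reality index -> this node's zone center there."""
    return min(positions, key=lambda r: torus_dist(positions[r], target))
```

Because every node holds a randomly placed zone in each reality, at each hop at least one reality usually offers a large jump toward the destination.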

Path latency improvements 3
Multi-dimensional coordinate spaces: as the number of dimensions d increases, the average path length decreases (measured with n = 1000 nodes and equal zones).
(table: average path length by d)

Path latency improvements 4
Multiple Dimensions vs. Multiple Realities:

                                 Multiple Dimensions   Multiple Realities
  Average # of neighbors         O(d)                  O(r*d)
  Size of data store increase    none                  r times
  Data availability increase     none                  O(r) times
  Total path latency reduction   stronger              strong

Hop latency improvement: RTT
CAN routing metrics:
1. RTT is the Round Trip Time (ping).
2. New metric: Cartesian distance + RTT. Among the neighbor Nodes that are closest to the destination by Cartesian distance, the next hop is the one with the minimal RTT from the current Node.
(table: per-hop routing latency (ms) with vs. without the RTT metric, by number of dimensions)

Mixed improvement: Overloading Zones 1
Overloading coordinate zones: one Zone - many Nodes.
 - MAXPEERS - the max # of Nodes per Zone.
 - Every Node keeps a list of its Peers.
 - The number of neighbors stays the same (O(1) in each direction).
 - The general routing algorithm is used (from neighbor to neighbor).

Mixed improvement: Overloading Zones 2
Extended construction algorithm. A new node A joins the system:
1. It discovers a Zone (owner Node B).
2. B checks how many peers it has.
3. If fewer than MAXPEERS:
   a. A is added as a new Peer.
   b. A gets the lists of Peers and Neighbors from B.
4. Otherwise:
   a. The Zone is split in half.
   b. The Peer list is split in half too.
   c. The peer and neighbor lists are refreshed.
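The join decision above can be sketched as a single function. The representation is an assumption: peers are plain identifiers, the peer list is split at its midpoint, and `split_fn` stands in for the coordinate-zone split (a hypothetical helper, not defined by the paper):

```python
MAXPEERS = 4  # illustrative cap on the number of nodes sharing one zone

def join_zone(peers, new_node, split_fn):
    """Overloaded-zone join sketch: admit the newcomer as a peer while
    the zone is under MAXPEERS; otherwise split the zone and divide the
    peers between the two halves. Returns (old_zone_peers, new_zone_peers),
    where new_zone_peers is None if no split happened."""
    if len(peers) < MAXPEERS:
        return peers + [new_node], None      # zone unchanged, one more peer
    half = len(peers) // 2
    old_peers, new_peers = peers[:half], peers[half:] + [new_node]
    split_fn()                               # split the coordinate zone itself
    return old_peers, new_peers
```

So a zone only splits once it is full, which keeps the total number of zones, and hence the path length, lower than in basic CAN.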

Mixed improvement: Overloading Zones 3
Periodic self-updating:
1. Periodically, a Node gets the peer list of each of its neighbors.
2. The Node estimates the RTT to every node in the peer list.
3. The Node chooses the closest peer Node as its new Neighbor Node in that direction.
Approach benefits:
 - Reduced path latency (reduced # of Zones).
 - Reduced hop latency (periodic self-updating).
 - Improved fault tolerance and data availability (hash table contents are replicated among several Nodes).
(table: per-hop latency (ms) by MAXPEERS)

CAN construction improvements
Uniform partitioning:
1. The Node about to be split compares the volume of its Zone with the Zones of its neighbors.
2. The Zone with the largest volume is the one that gets split.
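The comparison in step 1 reduces to picking the largest zone among a node and its neighbors. A minimal sketch, assuming the same (lo, hi)-interval zone representation as elsewhere in these notes:

```python
def zone_to_split(own_zone, neighbor_zones):
    """Uniform-partitioning sketch: before splitting, a node compares
    its zone volume with its neighbors' zones and hands the JOIN off to
    the owner of the largest one. Returns the index of the largest zone
    in the list [own_zone] + neighbor_zones (0 means: keep the split)."""
    def volume(zone):
        v = 1.0
        for lo, hi in zone:
            v *= hi - lo
        return v
    zones = [own_zone] + list(neighbor_zones)
    return max(range(len(zones)), key=lambda i: volume(zones[i]))
```

This local rule nudges zone volumes toward uniformity, which in turn keeps the per-node load balanced.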

CAN: Summary 1

  Parameter                     "bare bones" CAN   "knobs on full" CAN
  # of dimensions               2                  10
  MAXPEERS                      0                  4
  RTT-weighted routing metric   OFF                ON
  Uniform partitioning          OFF                ON
  Total improvement

"bare bones" CAN uses only the basic CAN architecture; "knobs on full" CAN uses most of the additional design features.

CAN: Summary 2

  Metric                       "bare bones"   "knobs on full"
  Avg. path length
  # of neighbors
  # of peers
  Data availability increase   none           2.95 times (zone overloading)
  Avg. path latency                           135 ms

CAN: Summary 3
CAN is a scalable, distributed hash table. CAN provides:
 - Dynamic Zone allocation
 - A fault-tolerant access algorithm
 - A stable, fault-tolerant routing algorithm
There are many improvement techniques, which:
 - Reduce routing latency
 - Increase data availability
 - Increase fault tolerance
A scalable, distributed, efficient P2P system was designed and developed.

THANK YOU