
1 Improving search in unstructured P2P overlays

2 Peer-to-peer Networks Peers are connected by an overlay network. Users cooperate to share files (e.g., music and videos)

3 (Search in) Basic P2P Architectures Centralized: a central directory server (Napster). Decentralized: search is performed by probing peers  Structured (DHTs): (CAN, Chord, …) location is coupled with topology - queries are routed by the overlay toward their target. Only exact-match queries; tightly controlled overlay.  Unstructured: (Gnutella) search is “blind” - probed peers are unrelated to the query.

4 Topics Search strategies  Beverly Yang and Hector Garcia-Molina, “Improving Search in Peer-to-Peer Networks”, ICDCS 2002  Arturo Crespo and Hector Garcia-Molina, “Routing Indices For Peer-to-Peer Systems”, ICDCS 2002 Shortcuts  Kunwadee Sripanidkulchai, Bruce Maggs and Hui Zhang, “Efficient Content Location Using Interest-based Locality in Peer-to-Peer Systems”, INFOCOM 2003 Replication  Edith Cohen and Scott Shenker, “Replication Strategies in Unstructured Peer-to-Peer Networks”, SIGCOMM 2002

5 Improving Search in Peer-to-Peer Networks ICDCS 2002 Beverly Yang Hector Garcia-Molina

6 Motivation The purpose of a data-sharing P2P system is to accept queries from users, and to locate and return data (or pointers to the data). Metrics  Cost Average aggregate bandwidth Average aggregate processing cost  Quality of results Number of results Satisfaction: a query is satisfied if Z (a value specified by the user) or more results are returned Time to satisfaction

7 Current Techniques Gnutella  BFS with depth limit D.  Wastes bandwidth and processing resources Freenet  DFS with depth limit D.  Poor response time.

8 Broadcast policies Iterative deepening Directed BFS Local Indices

9 Iterative Deepening In systems where satisfaction is the metric of choice, iterative deepening is a good technique. Under a policy P = {a, b, c} and waiting period W:  A source node S first initiates a BFS of depth a  The query is processed and then becomes frozen at all nodes that are a hops from the source  S waits for a time period W

10 Iterative Deepening  If the query is not satisfied, S starts the next iteration, initiating a BFS of depth b. S sends a “Resend” with a TTL of a A node that receives a Resend message simply forwards the message, or, if the node is at depth a, it drops the Resend message and unfreezes the corresponding query by forwarding the query message with a TTL of b-a to all its neighbors A node needs to freeze a query for only slightly more than W time units before deleting it
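The iteration loop above can be sketched in Python. This is a minimal sketch, not the paper's protocol: `source.bfs`, `query.satisfied`, and the policy values are hypothetical stand-ins for the real Gnutella-style messaging.

```python
import time

def iterative_deepening(source, query, policy=(1, 3, 5), wait=2.0):
    """Sketch of iterative deepening: BFS to each depth in the policy,
    waiting W between iterations, until the query is satisfied.
    `source.bfs` and `query.satisfied` are hypothetical helpers."""
    prev_depth = 0
    for depth in policy:
        # the first iteration floods a fresh query; later iterations
        # send a "Resend" that unfreezes the query frozen at prev_depth
        source.bfs(query, ttl=depth, resend_ttl=prev_depth)
        time.sleep(wait)              # wait W for responses to arrive
        if query.satisfied():
            return query.results
        prev_depth = depth
    return query.results              # best effort after the last depth
```

Each iteration only extends the frontier from the previous depth, so already-probed nodes do not reprocess the query.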

11 Directed BFS If minimizing response time is important to an application, iterative deepening may not be appropriate. A source sends query messages to just a subset of its neighbors. A node maintains simple statistics on its neighbors  Number of results received from each neighbor  Latency of the connection

12 Directed BFS (cont) Candidate heuristics for choosing neighbors  The neighbor that returned the highest number of results  The neighbor whose response messages took the lowest average number of hops  The neighbor with a high message count
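A minimal sketch of how a peer might rank neighbors for Directed BFS from such statistics. The field names and tie-breaking scheme are illustrative assumptions, not the paper's exact heuristics:

```python
def pick_neighbors(stats, k=1):
    """Rank neighbors for Directed BFS by simple per-neighbor statistics:
    prefer neighbors that returned more past results, break ties by
    fewer average hops to past answers. `stats` maps neighbor id ->
    {'results': ..., 'avg_hops': ...} (illustrative layout)."""
    def score(n):
        s = stats[n]
        return (s["results"], -s["avg_hops"])
    return sorted(stats, key=score, reverse=True)[:k]
```

The query is then forwarded only to the top-k neighbors instead of being flooded to all of them.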

13 Local Indices Each node n maintains an index over the data of all nodes within a radius of r hops. All nodes at depths not listed in the policy simply forward the query. Example: policy P = {1, 5}
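The policy rule above can be sketched as a small function. The data layout (a mapping from node id to its depth and a toy r-hop index) is an illustrative assumption:

```python
def local_indices_search(nodes, query, policy=frozenset({1, 5})):
    """Local Indices sketch: each node indexes all data within r hops,
    so only nodes whose depth is listed in the policy evaluate the
    query; every other node just forwards it. `nodes` maps node id ->
    (depth from source, local r-hop index dict)."""
    hits = []
    for node_id, (depth, index) in nodes.items():
        if depth in policy:                  # answers for its whole r-hop radius
            hits.extend(index.get(query, []))
        # nodes at unlisted depths only forward the message
    return hits
```

Because each answering node covers an r-hop radius, a sparse policy like {1, 5} can still cover the whole search horizon.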

14 Experimental results

15 Routing Indices For Peer-to-Peer Systems Arturo Crespo, Hector Garcia-Molina Stanford University

16 Motivation A key part of a P2P system is document discovery The goal is to help users find documents with content of interest across potential P2P sources efficiently The mechanisms for searching can be classified in three categories  Mechanisms without an index  Mechanisms with specialized index nodes (centralized search)  Mechanisms with indices at each node (distributed search)

17 Motivation (cont.) Gnutella uses a mechanism where nodes do not have an index  Queries are propagated from node to node until matching documents are found  Although this approach is simple and robust, it has the disadvantage of the enormous cost of flooding the network every time a query is generated Centralized-search systems, such as Napster, use specialized nodes that maintain an index of the documents available in the P2P system  The user queries an index node to identify nodes having documents with the desired content  A centralized system is vulnerable to attack, and it is difficult to keep the indices up-to-date

18 Motivation (cont.) A distributed-index mechanism  Routing Indices (RIs)  Give a “direction” towards the document, rather than its actual location By using “routes”, the index size is proportional to the number of neighbors

19 Peer-to-peer Systems A P2P system is formed by a large number of nodes that can join or leave the system at any time  Each node has a local document database that can be accessed through a local index  The local index receives content queries and returns pointers to the documents with the requested content

20 Query Processing in a Distributed Search P2P System In a distributed-search P2P system, users submit queries to any node along with a stop condition  A node receiving a query first evaluates the query against its own database and returns to the user pointers to any results  If the stop condition has not been reached, the node selects one or more of its neighbors and forwards the query to them Queries can be forwarded to the best neighbors in parallel or sequentially  A parallel approach yields better response time, but generates higher traffic and may waste resources

21 Routing indices The objective of a Routing Index (RI) is to allow a node to select the “best” neighbors to which to send a query. An RI is a data structure that, given a query, returns a list of neighbors ranked according to their goodness for the query. Each node has a local index for quickly finding local documents when a query is received. Nodes also have a compound RI (CRI) containing  the number of documents along each path  the number of documents on each topic

22 Routing indices (cont.) Thus, the number of results in a path can be estimated as NumDocs × Π_i CRI(s_i)/NumDocs, where CRI(s_i) is the value of the cell at the column for topic s_i and the row for a neighbor, and NumDocs is the document count for that neighbor. The goodness of B: 6, C: 0, D: 75. Note that these numbers are just estimates, and they are subject to overcounts and/or undercounts. A limitation of using CRIs is that they do not take into account the difference in cost due to the number of “hops” necessary to reach a document
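The compound-RI estimate can be written as a small function. The table values used below are illustrative, not the paper's example figures:

```python
def cri_goodness(num_docs, cri_row, topics):
    """Compound-RI estimate of results reachable through one neighbor:
    NumDocs * product over query topics s_i of CRI(s_i)/NumDocs,
    treating topics as independent. `cri_row` maps topic -> count in
    this neighbor's row of the CRI."""
    estimate = float(num_docs)
    for topic in topics:
        estimate *= cri_row.get(topic, 0) / num_docs
    return estimate
```

For a neighbor with 100 documents, 20 on "DB" and 50 on "Languages", the estimate for a "DB ∧ Languages" query is 100 × 0.2 × 0.5 = 10 documents.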

23 Using Routing Indices

24 Using Routing Indices (cont.) The storage space required by an RI in a node is modest, as we only store index information for each neighbor: t × (c + 1) × b bytes per node, where t is the counter size in bytes, c is the number of categories, N the number of nodes, and b the branching factor  A centralized index would require t × (c + 1) × N bytes  The total for the entire distributed system is t × (c + 1) × b × N bytes Although the RIs require more storage space overall than a centralized index, the cost of the storage space is shared among the network nodes

25 Creating Routing Indices

26 Maintaining Routing Indices Maintaining RIs is identical to the process used for creating them. For efficiency, we may delay exporting an update for a short time so we can batch several updates, thus trading RI freshness for a reduced update cost. We can also choose not to send minor updates, further reducing the update cost at the expense of RI accuracy

27 Hop-count Routing Indices

28 Hop-count Routing Indices (cont.) The estimator of a hop-count RI needs a cost model to compute the goodness of a neighbor. We assume that document results are uniformly distributed across the network and that the network is a regular tree with fanout F. We define the goodness (goodness_hc) of neighbor i with respect to query Q for a hop-count RI as the sum, over each hop h within the horizon, of the document count at hop h discounted by F^(h-1). If we assume F = 3, the goodness of X for a query about “DB” documents would be 13 + 10/3 = 16.33, and for Y it would be 0 + 31/3 = 10.33
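Under the regular-tree cost model above, the hop-count goodness can be sketched as:

```python
def goodness_hc(docs_per_hop, F=3):
    """Hop-count RI goodness under the regular-tree cost model:
    documents h hops away are discounted by F**(h-1), where F is the
    assumed fanout. docs_per_hop[0] holds the count at one hop."""
    return sum(d / F ** h for h, d in enumerate(docs_per_hop))
```

With F = 3 this reproduces the slide's figures: `goodness_hc([13, 10])` ≈ 16.33 for X and `goodness_hc([0, 31])` ≈ 10.33 for Y.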

29 Exponentially aggregated RI Each entry of the ERI for node N stores the result of applying the regular-tree cost formula to the whole subtree behind a neighbor: goodness values h hops away are decayed by a factor of F per hop  h is the height and F the fanout of the assumed regular tree, goodness() is the compound-RI estimator, N[j] is the summary of the local index of neighbor j of N, and T is the topic of interest of the entry Problems?!

30 Exponentially aggregated RI (cont.)

31 Cycles in the P2P Network There are three general approaches for dealing with cycles:  No-op solution: no changes are made to the algorithms  Cycle avoidance solution: we do not allow nodes to create an “update” connection to other nodes if such a connection would create a cycle (hard to guarantee in the absence of global information)  Cycle detection and recovery: this solution detects cycles some time after they are formed and then takes recovery actions to eliminate their effect

32 Experimental Results Modeling search mechanisms in a P2P system:  We consider three kinds of network topologies: a tree, because it does not have cycles a tree with extra edges added at random (creating cycles) a power-law graph, which is considered a good model for P2P systems and allows us to test our algorithms against a “realistic” topology  We model the location of document results using two distributions: uniform and an 80/20 biased distribution 80/20 assigns 80% of the document results uniformly to 20% of the nodes  In this paper we focus on the network, and we use the number of messages generated by each algorithm as a measure of cost

33 Experimental Results (cont.)

34 Experimental Results (cont.) In particular, CRI uses all nodes in the network, HRI uses nodes within a predefined horizon, and ERI uses nodes until the exponentially decayed value of an index entry reaches a minimum value. In the case of the No-RI approach, an 80/20 document distribution penalizes performance, as the search mechanism needs to visit a number of nodes until it finds a content-loaded node

35 Experimental Results (cont.) RIs perform better in a power-law network than in a tree network (query cost)  In a power-law network a few nodes have significantly higher connectivity than the rest  Power-law distributions generate network topologies where the average path length between two nodes is lower than in tree topologies

36 Experimental Results (cont.) The tradeoff between query and update costs for RIs  The update cost of CRI is much higher than that of HRI and ERI  ERI only propagates updates to a subset of the network

37 Conclusions Achieve greater efficiency by placing Routing Indices in each node. Three possible RIs: compound RIs, hop-count RIs, and exponential RIs. From the experiments, ERIs and HRIs offer significant improvements versus not using an RI, while keeping update costs low

38 Efficient Content Location Using Interest-based Locality in Peer-to-Peer Systems

39 Background  Each peer is connected randomly, and searching is done by flooding  Keyword search is allowed Example of searching for an MP3 file in the Gnutella network: the query is flooded across the network.

40 Background DHT (Chord):  Given a key, Chord maps the key to a node  Each node needs to maintain O(log N) information  Each query uses O(log N) messages  Key search means searching by exact name A Chord ring with about 50 nodes. The black lines point to adjacent nodes, while the red lines are “finger” pointers that allow a node to find a key in O(log N) time.

41 Interest-based Locality  Peers that have similar interests tend to share similar content

42 Architecture Shortcuts are modular. Shortcuts are performance-enhancement hints.

43 Creation of shortcuts A peer uses the underlying topology (e.g. Gnutella) for the first few searches. One of the returned peers is selected at random and added to the shortcut list. Shortcuts are ordered by a metric, e.g. success rate or path latency. Subsequent queries go through the shortcut list first; if they fail, the lookup goes through the underlying topology.
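A toy sketch of shortcut-list maintenance and lookup. The ranking metric (success rate), the 100-entry cap, and the `flood` fallback callback are illustrative assumptions, not the paper's exact mechanism:

```python
import random

class ShortcutList:
    """Interest-based shortcuts sketch: peers that answered past queries
    are kept in a list ranked by success rate; later queries try the
    shortcuts first and fall back to flooding."""
    def __init__(self, limit=100):
        self.limit = limit
        self.stats = {}                      # peer -> [successes, attempts]

    def add(self, responders):
        peer = random.choice(list(responders))   # one responder, at random
        self.stats.setdefault(peer, [0, 0])

    def ranked(self):
        # order by success rate, best first, capped at `limit` entries
        rate = lambda p: self.stats[p][0] / max(1, self.stats[p][1])
        return sorted(self.stats, key=rate, reverse=True)[:self.limit]

    def record(self, peer, hit):
        s = self.stats[peer]
        s[1] += 1
        s[0] += int(hit)

def lookup(shortcuts, has_content, flood):
    """Try shortcuts in rank order; on total miss, fall back to flooding
    and learn a new shortcut from the responders."""
    for peer in shortcuts.ranked():
        hit = has_content(peer)
        shortcuts.record(peer, hit)
        if hit:
            return peer
    responders = flood()
    if responders:
        shortcuts.add(responders)
    return next(iter(responders), None)
```

The first query pays the cost of a flood; once a shortcut is learned, subsequent queries for content of the same interest can be resolved in one hop.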

44 Performance Evaluation Performance metrics:  success rate  load characteristics (query packets processed per peer in the system)  query scope (the fraction of peers involved in each query)  minimum reply path length  additional state kept at each node

45 Methodology – query workload Traffic traces are created from real application traffic:  Boeing firewall proxies  Microsoft firewall proxies  Web traffic passively collected between CMU and the Internet  Typical P2P traffic passively collected (Kazaa, Gnutella) Exact matching rather than keyword matching is used in the simulation.  “song.mp3” and “my artist – song.mp3” are treated as different files.

46 Methodology – underlying peer topology Based on the Gnutella connectivity graph from 2001, in which about 95% of nodes are within 7 hops of each other. The search TTL is set to 7. For each kind of traffic (Boeing, Microsoft, etc.), eight simulations of one hour each are run.

47 Simulation Results – success rate

48 Simulation Results – load and path length Query load for Boeing and Microsoft traffic; average path length of the traces (figures).

49 Enhancement of Interest-based Locality: Increasing the Number of Shortcuts 7 ~ 12 % performance gain, with diminishing returns. Two variants: add all shortcuts at a time, with no limit on the shortcut list size; or add k shortcuts at a time, with only 100 shortcuts kept.

50 Enhancement of Interest-based Locality: Using Shortcuts’ Shortcuts Idea: also add each shortcut’s shortcuts. Performance gain of 7% on average.

51 Interest-based Structures When viewed as an undirected graph:  In the first 10 minutes there are many connected components, each with only a few peers  At the end of the simulation there are few connected components, each with several hundred peers, and each component is well connected  The clustering coefficient is about 0.6 ~ 0.7, which is higher than that of the Web graph

52 Sensitivity of Shortcuts Interest-based shortcuts are run over a DHT (Chord) instead of Gnutella. Query load is reduced by a factor of 2 – 4. Query scope is reduced from 7/N to 1.5/N

53 Conclusion Interest-based shortcuts are modular performance-enhancement hints layered over an existing P2P topology. Shortcuts are shown to enhance search efficiency. Shortcuts form clusters within a P2P topology, and the clusters are well connected.

54 Replication Strategies in Unstructured Peer-to-Peer Networks Edith Cohen AT&T Labs-research Scott Shenker ICIR

55 (replication in) P2P architectures No proactive replication (Gnutella)  Hosts store and serve only what they requested  A copy can be found only by probing a host with a copy

56 Question: how can replication be used to improve search efficiency in unstructured networks with a proactive replication mechanism?

57 Search and replication model Search: probe hosts, uniformly at random, until the query is satisfied (or the maximum search size is exceeded). Goal: minimize the average search size (number of probes until the query is satisfied). Replication: each host can store up to ρ copies of items. Unstructured networks with replication of keys or copies. Peers probed (in the search and replication process) are unrelated to the query/item

58 What is the search size of a query? Soluble queries: the number of probes until an answer is found. We look at the Expected Search Size (ESS) of each item. The ESS is inversely proportional to the fraction of peers with a copy of the item.

59 Search Example (figure): one query is satisfied after 2 probes, another after 4 probes

60 Expected Search Size (ESS) n nodes, capacity ρ, R = n·ρ. r_i = number of copies of the i-th item. Allocation: p_1 (= r_1/R), p_2, p_3, …, p_m with Σ_i p_i = 1; the i-th item is allocated a fraction p_i of the storage. m items with relative query rates q_1 > q_2 > q_3 > … > q_m, Σ_i q_i = 1. The search size for the i-th item is a geometric r.v. with mean A_i = 1/(ρ·p_i). The ESS is Σ_i q_i·A_i = (Σ_i q_i/p_i)/ρ

61 Uniform and Proportional Replication Two natural strategies: Uniform allocation: p_i = 1/m. Simple; resources are divided equally. Proportional allocation: p_i = q_i. “Fair”; resources per item are proportional to demand. Reflects current P2P practices. Example: 3 items, q_1 = 1/2, q_2 = 1/3, q_3 = 1/6 (figure: Uniform vs. Proportional)

62 Basic Questions How do Uniform and Proportional allocations perform/compare? Which strategy minimizes the Expected Search Size (ESS)? Is there a simple protocol that achieves optimal replication in decentralized unstructured networks?

63 ESS under Uniform and Proportional Allocations (soluble queries) Lemma: the ESS under either Uniform or Proportional allocation is m/ρ – independent of query rates (!!!) – the same ESS for Proportional and Uniform (!!!) Proof: Proportional: the ESS is (Σ_i q_i/p_i)/ρ = (Σ_i q_i/q_i)/ρ = m/ρ. Uniform: p_i = (R/m)/R = 1/m, so the ESS is (Σ_i q_i/p_i)/ρ = (Σ_i m·q_i)/ρ = m/ρ, since Σ_i q_i = 1.
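The lemma can be checked numerically on the slide's 3-item example; `expected_search_size` implements ESS = (Σ_i q_i/p_i)/ρ:

```python
def expected_search_size(q, p, rho=1.0):
    """ESS under random probing: item i with allocation p_i needs
    1/(rho * p_i) probes in expectation, so ESS = (sum_i q_i/p_i)/rho."""
    return sum(qi / pi for qi, pi in zip(q, p)) / rho

q = [1/2, 1/3, 1/6]           # example query rates from the slides
uniform = [1/3, 1/3, 1/3]     # p_i = 1/m
proportional = list(q)        # p_i = q_i
# both give ESS = m/rho = 3, independent of the query rates
```

With ρ = 1, both allocations evaluate to exactly 3 = m/ρ, as the lemma states.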

64 Space of Possible Allocations Definition: an allocation p_1, p_2, p_3, …, p_m is “in-between” Uniform and Proportional if for 1 ≤ i < m, q_{i+1}/q_i < p_{i+1}/p_i < 1. Theorem 1: all (strictly) in-between strategies are (strictly) better than Uniform and Proportional. Theorem 2: p is worse than Uniform/Proportional if for all i, p_{i+1}/p_i > 1 (more popular gets less) OR for all i, q_{i+1}/q_i > p_{i+1}/p_i (less popular gets less than its “fair share”). Proportional and Uniform are the worst “reasonable” strategies (!!!)

65 So, what is the best strategy for soluble queries?

66 Square-Root Allocation p_i is proportional to √q_i. Lies “in-between” Uniform and Proportional. Theorem: square-root allocation minimizes the ESS on soluble queries: minimize Σ_i q_i/p_i subject to Σ_i p_i = 1
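A square-root allocation can be computed directly; on the slides' example rates it gives an ESS of (Σ_i √q_i)² ≈ 2.87, below the m/ρ = 3 of Uniform and Proportional:

```python
import math

def sqrt_allocation(q):
    """Square-root allocation: p_i proportional to sqrt(q_i)."""
    roots = [math.sqrt(qi) for qi in q]
    total = sum(roots)
    return [r / total for r in roots]

def ess(q, p, rho=1.0):
    """ESS = (sum_i q_i/p_i)/rho, as in the model above."""
    return sum(qi / pi for qi, pi in zip(q, p)) / rho

q = [1/2, 1/3, 1/6]
p = sqrt_allocation(q)   # in-between Uniform (1/3 each) and Proportional (q)
```

Substituting p_i = √q_i / Σ_j √q_j into the ESS shows the closed form (Σ_i √q_i)²/ρ, which is what the code reproduces.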

67 How much can we gain by using SR ? Zipf-like query rates

68 Replication Algorithms Desired properties of the algorithm:  Fully distributed, with peers communicating through random probes; minimal bookkeeping; and no more communication than what is needed for search.  Converges to/obtains the SR allocation when query rates remain steady. Uniform and Proportional are “easy”: – Uniform: when an item is created, replicate its key at a fixed number of hosts. – Proportional: for each query, replicate the key at a fixed number of hosts.

69 Model for Copy Creation/Deletion Creation: after a successful search, C(s) new copies are created at random hosts. Deletion: independent of the identity of the item; copy survival chances are non-decreasing with creation time (i.e., FIFO at each node). Let C_i be the average value of C used to replicate the i-th item. Property of the process – Claim: if C_i/C_j remains fixed over time, then p_i/p_j → (q_i·C_i)/(q_j·C_j)

70 Creation/Deletion Process If C_i is proportional to 1/p_i, then p_i is proportional to √q_i. Corollary: an algorithm for square-root allocation needs C_i to be equal to, or to converge to, a value inversely proportional to p_i – that is, proportional to the expected search size of the i-th item

71 SR Replication Algorithms Path replication: the number of new copies C(s) is proportional to the size of the search. Probe memory: each peer records the number and combined search size of the probes it sees for each item; C(s) is determined by collecting this information from a number of peers proportional to the search size.  Extra communication (proportional to that needed for search).

72 Path Replication The number of new copies produced per query, C_i, is proportional to the search size 1/p_i. The creation rate of item i is therefore proportional to q_i·C_i ∝ q_i/p_i. Steady state: the creation rate is proportional to the allocation p_i, thus q_i/p_i ∝ p_i, i.e., p_i ∝ √q_i
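The steady-state argument can be illustrated with a deterministic toy iteration of these dynamics. The update rule and step size are illustrative, not the paper's algorithm: item i gains copies at a rate proportional to q_i/p_i and loses copies in proportion to p_i, which has its fixed point at p_i ∝ √q_i:

```python
def path_replication_fixed_point(q, iters=2000, step=0.01):
    """Toy iteration of the path-replication dynamics: creation rate
    q_i * C_i with C_i ~ 1/p_i (the search size), deletion proportional
    to p_i, normalized so that sum(p) stays 1. The fixed point solves
    q_i/p_i = const * p_i, i.e. p_i ~ sqrt(q_i)."""
    p = [1.0 / len(q)] * len(q)        # start from a uniform allocation
    for _ in range(iters):
        total = sum(qi / pi for qi, pi in zip(q, p))   # normalizer
        p = [pi + step * (qi / pi - total * pi) for qi, pi in zip(q, p)]
    return p
```

Starting from the uniform allocation, the iteration drifts toward the square-root allocation without any item knowing the global query rates, which is the point of the decentralized algorithm.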

73 Summary Random search/replication model: probes to “random” hosts. Soluble queries: Proportional and Uniform allocations are two extremes with the same average performance. Square-Root allocation minimizes the Average Search Size. OPT (all queries) lies between SR and Uniform. The SR/OPT allocation can be realized by simple algorithms.