
1 Distributed Hash Tables
Parallel and Distributed Computing, Spring 2011

2 Distributed Hash Tables
Academic answer to p2p.
Goals:
– Guaranteed lookup success
– Provable bounds on search time
– Provable scalability
Makes some things harder:
– Fuzzy queries / full-text search / etc.
Hot topic in networking since their introduction in ~2000/2001.

3 DHT: Overview
Abstraction: a distributed hash table (DHT) data structure supports two operations:
– put(id, item);
– item = get(id);
Implementation: nodes in the system form a distributed data structure
– Can be a Ring, Tree, Hypercube, Skip List, Butterfly Network, ...

4 What Is a DHT?
A building block used to locate key-based objects over millions of hosts on the Internet.
Inspired by the traditional hash table (see the sketch below):
– key = Hash(name)
– put(key, value)
– get(key) -> value
Challenges:
– Decentralized: no central authority
– Scalable: low network traffic overhead
– Efficient: find items quickly (low latency)
– Dynamic: nodes fail, new nodes join
– General-purpose: flexible naming
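A minimal sketch of the hash-table-style interface described above, assuming Python and SHA-1 for key = Hash(name). The ToyDHT class and the example file name are illustrative stand-ins: a single local dictionary takes the place of the index a real DHT would spread across many nodes.

```python
import hashlib

def dht_key(name: str) -> int:
    """Derive a numeric key from a name, as in key = Hash(name)."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16)

class ToyDHT:
    """Stand-in for the put/get interface; a real DHT distributes
    this dictionary over many nodes instead of keeping it locally."""
    def __init__(self):
        self._store = {}

    def put(self, key: int, value) -> None:
        self._store[key] = value

    def get(self, key: int):
        return self._store.get(key)

dht = ToyDHT()
k = dht_key("yellow-submarine.mp3")
dht.put(k, {"artist": "...", "track title": "..."})
print(dht.get(k))
```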

5 The Lookup Problem
[Diagram: nodes N1...N6 connected over the Internet. A publisher does put(key=title, value=file data…) at one node; a client wants get(key=title). Which node should the client ask?]

6 DHTs: Main Idea
[Diagram: nodes N1...N9 form an overlay. The publisher stores key = H(audio data), value = {artist, album title, track title}; the client routes lookup(H(audio data)) through the overlay to the node responsible for that key.]

7 DHT: Overview (2)
Structured overlay routing:
– Join: on startup, contact a bootstrap node and integrate yourself into the distributed data structure; get a node id
– Publish: route the publication for a file id toward a close node id along the data structure
– Search: route a query for the file id toward a close node id; the data structure guarantees that the query will meet the publication
– Fetch: two options:
  – The publication contains the actual file => fetch it from where the query stops
  – The publication says "I have file X" => the query tells you who has X; use IP routing to get X from them

8 From Hash Tables to Distributed Hash Tables
Challenge: scalably distributing the index space:
– Scalability issue with ordinary hash tables: adding a new bucket => many items must move
– Solution: consistent hashing (Karger et al., 1997)
Consistent hashing (see the sketch below):
– Circular ID space with a distance metric
– Objects and nodes are mapped onto the same space
– A key is stored at its successor: the node with the next-higher ID
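A sketch of consistent hashing under the assumption of a 2^32-id circular space and SHA-1; the node and file names are made up. It checks the property that motivates consistent hashing: when one node joins, only the keys on the arc taken over by the new node change owner.

```python
import hashlib
from bisect import bisect_left

M = 2**32  # illustrative size of the circular ID space

def h(name: str) -> int:
    """Map a name onto the circular ID space."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % M

def successor(node_ids, key_id):
    """A key is stored at its successor: the node with the next-higher ID (wrapping)."""
    ids = sorted(node_ids)
    i = bisect_left(ids, key_id)       # first node ID >= key_id
    return ids[i % len(ids)]           # wrap around the circle if needed

nodes = {h(f"node-{i}") for i in range(4)}
keys = [h(f"file-{i}") for i in range(1000)]

before = {k: successor(nodes, k) for k in keys}
nodes.add(h("node-new"))               # one node joins
after = {k: successor(nodes, k) for k in keys}

moved = sum(before[k] != after[k] for k in keys)
print(f"{moved} of {len(keys)} keys changed owner")  # only the new node's arc moves
```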

9 DHT: Consistent Hashing
[Diagram: circular ID space with nodes N32, N90, N105 and keys K5, K20, K80. Each key is stored at its successor, the node with the next-higher ID: K5 and K20 at N32, K80 at N90.]
A key is stored at its successor: the node with the next-higher ID.

10 What Is a DHT?
Distributed Hash Table:
– key = Hash(data)
– lookup(key) -> IP address
– put(key, value)
– get(key) -> value
The API supports a wide range of applications
– The DHT imposes no structure/meaning on keys
Key/value pairs are persistent and global
– You can store keys in other DHT values
– And thus build complex data structures

11 Approaches
Different strategies:
– Chord: constructing a distributed hash table
– CAN: routing in a d-dimensional space
– Many more...
Commonalities:
– Each peer maintains a small part of the index information (routing table)
– Searches are performed by directed message forwarding
Differences:
– Performance and qualitative criteria

12 DHT: Example - Chord
Associate to each node and file a unique id in a uni-dimensional space (a ring)
– E.g., pick ids from the range [0 ... 2^m - 1]
– Usually the hash of the file content or of the node's IP address
Properties:
– Routing table size is O(log N), where N is the total number of nodes
– Guarantees that a file is found in O(log N) hops

13 Example 1: Distributed Hash Tables (Chord)
Hashing of search keys AND peer addresses onto binary keys of length m
– Key identifier = SHA-1(key); node identifier = SHA-1(IP address)
– SHA-1 distributes both uniformly
– E.g. for m = 8: key("yellow-submarine.mp3") = 17, key(IP address) = 3
Data keys are stored at the next larger node key (see the sketch below):
– For a peer with hashed identifier p and data with hashed identifier k, k is stored at the node p such that p is the smallest node ID larger than k
[Figure: an m = 8 ring with example peers p1, p2, p3, showing where a key k is stored.]
Search possibilities?
1. Every peer knows every other peer: O(n) routing table size
2. Peers know only their successor: O(n) search cost
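A sketch of the assignment rule on this slide, hashing names to m = 8-bit identifiers with SHA-1 and storing each key at its successor. The peer addresses are invented, so the concrete ids 17 and 3 from the slide will not be reproduced.

```python
import hashlib

m = 8                       # identifier length in bits, as on the slide
SPACE = 2**m                # 256 possible identifiers

def ident(s: str) -> int:
    """SHA-1 hash truncated to an m-bit identifier."""
    return int(hashlib.sha1(s.encode()).hexdigest(), 16) % SPACE

# Illustrative peer addresses -- not the ones from the slide.
peers = sorted(ident(addr) for addr in ["10.0.0.1:4000", "10.0.0.2:4000", "10.0.0.3:4000"])

def responsible_peer(key_id: int) -> int:
    """The peer that stores key_id: the first peer ID that succeeds it (wrapping)."""
    for p in peers:
        if p >= key_id:
            return p
    return peers[0]

k = ident("yellow-submarine.mp3")
print(f"key id {k} is stored at peer id {responsible_peer(k)}")
```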

14 DHT: Chord Basic Lookup
[Diagram: ring with nodes N10, N32, N60, N90, N105, N120 and key K80. A node asks "Where is key 80?"; the query is forwarded around the ring and the answer comes back: "N90 has K80".]

15 DHT: Chord Finger Table
[Diagram: node N80 with fingers pointing 1/2, 1/4, 1/8, 1/16, 1/32, 1/64, and 1/128 of the way around the ring.]
Entry i in the finger table of node n is the first node that succeeds or equals n + 2^i.
In other words, the i-th finger points 1/2^(m-i) of the way around the ring (for an m-bit identifier space).
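A sketch of how a finger table could be computed from the rule above (entry i = first node that succeeds or equals n + 2^i). The 8-bit ring and the node IDs, chosen to echo the earlier lookup figure, are illustrative.

```python
m = 8                                  # bits in the identifier space (illustrative)
RING = 2**m

def successor(nodes, ident):
    """First node ID that succeeds or equals ident, wrapping around the ring."""
    for n in sorted(nodes):
        if n >= ident:
            return n
    return min(nodes)

def finger_table(n, nodes):
    """Entry i is the first node that succeeds or equals n + 2^i (mod 2^m)."""
    return [successor(nodes, (n + 2**i) % RING) for i in range(m)]

nodes = {10, 32, 60, 80, 90, 105, 120}   # made-up node IDs on a 256-id ring
for i, f in enumerate(finger_table(80, nodes)):
    print(f"finger[{i}]: target {(80 + 2**i) % RING:3d} -> node {f}")
```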

16 DHT: Chord Join
Assume an identifier space [0..8). Node n1 joins.
Succ. table of n1:
  i  id+2^i  succ
  0    2      1
  1    3      1
  2    5      1

17 DHT: Chord Join
Node n2 joins.
Succ. table of n1:
  i  id+2^i  succ
  0    2      2
  1    3      1
  2    5      1
Succ. table of n2:
  i  id+2^i  succ
  0    3      1
  1    4      1
  2    6      1

18 DHT: Chord Join
Nodes n0 and n6 join.
Succ. table of n0:
  i  id+2^i  succ
  0    1      1
  1    2      2
  2    4      6
Succ. table of n1:
  i  id+2^i  succ
  0    2      2
  1    3      6
  2    5      6
Succ. table of n2:
  i  id+2^i  succ
  0    3      6
  1    4      6
  2    6      6
Succ. table of n6:
  i  id+2^i  succ
  0    7      0
  1    0      0
  2    2      2
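A small sketch that recomputes the successor tables above for the 3-bit toy ring, to show where the id+2^i and succ columns come from.

```python
m = 3
RING = 2**m                          # identifier space 0..7, as in the example
nodes = sorted([0, 1, 2, 6])

def successor(ident):
    """First node ID that succeeds or equals ident, wrapping around the ring."""
    for n in nodes:
        if n >= ident:
            return n
    return nodes[0]

for n in nodes:
    print(f"Succ. table of n{n}:")
    for i in range(m):
        target = (n + 2**i) % RING
        print(f"  i={i}  id+2^i={target}  succ={successor(target)}")
```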

19 DHT: Chord Join
Nodes: n0, n1, n2, n6. Items: f7.
Successor tables as on the previous slide. Item f7 (id 7) is stored at node n0, the successor of 7.

20 DHT: Chord Routing
Upon receiving a query for an item id, a node:
– Checks whether it stores the item locally
– If not, forwards the query to the largest node in its successor table that precedes id on the ring
Example: query(7) issued at n1 is forwarded to n6, whose successor n0 stores item f7 (successor tables as on slide 18).
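A sketch of this forwarding loop on the toy ring from the join example, with the "precedes id" test interpreted as an interval on the ring. With these tables, query(7) started at n1 is forwarded to n6, whose successor n0 holds the item.

```python
m = 3
RING = 2**m
nodes = sorted([0, 1, 2, 6])          # the nodes from the join example

def successor(ident):
    for n in nodes:
        if n >= ident:
            return n
    return nodes[0]

def succ_table(n):
    """Successor-table entries of node n (as on the join slides)."""
    return [successor((n + 2**i) % RING) for i in range(m)]

def between(x, a, b):
    """True if x lies in the half-open ring interval (a, b]."""
    if a < b:
        return a < x <= b
    return x > a or x <= b            # the interval wraps past 0

def lookup(start, key):
    """Forward the query until key falls between the current node and its successor."""
    n, hops = start, [start]
    while not between(key, n, succ_table(n)[0]):
        preceding = [f for f in succ_table(n) if between(f, n, key) and f != key]
        if preceding:
            # forward to the closest table entry that precedes key on the ring
            n = max(preceding, key=lambda f: (f - n) % RING)
        else:
            n = succ_table(n)[0]
        hops.append(n)
    return succ_table(n)[0], hops

owner, hops = lookup(1, 7)
print(f"query(7) from n1: forwarded along {hops}, item stored at n{owner}")
```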

21 DHT: Chord Summary
Routing table size?
– log N fingers
Routing time?
– Each hop is expected to halve the distance to the desired id => expect O(log N) hops.

22 Load Balancing in Chord
[Plot: number of keys per node for a network of n = 10^4 nodes and 5·10^5 keys.]

23 Length of Search Paths
[Plot: distribution of search path lengths in a network of n = 2^12 nodes; the average path length is roughly ½·log2(n).]

24 Chord Discussion
Performance:
– Search latency: O(log n) (with high probability, provable)
– Message bandwidth: O(log n) (selective routing)
– Storage cost: O(log n) (routing table)
– Update cost: low (like search)
– Node join/leave cost: O(log^2 n)
– Resilience to failures: replication to successor nodes
Qualitative criteria:
– Search predicates: equality of keys only
– Global knowledge: key hashing, network origin
– Peer autonomy: nodes have, by virtue of their address, a specific role in the network

25 Example 2: Topological Routing (CAN)
Based on hashing keys into a d-dimensional space (a torus)
– Each peer is responsible for the keys of a subvolume of the space (a zone)
– Each peer stores the addresses of the peers responsible for the neighboring zones, for routing
– Search requests are greedily forwarded to the peers in the closest zones (see the sketch below)
The assignment of peers to zones depends on a random selection made by the peer.
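A sketch of the greedy forwarding step under illustrative assumptions: d = 2, zones represented by their center points, and SHA-1 used to map keys onto the unit torus. A real CAN node would compare actual zone boundaries rather than centers; the coordinates below are made up.

```python
import hashlib

D = 2                                   # dimensions of the torus
SIDE = 1.0                              # coordinates live in [0, 1)^D

def key_point(name: str):
    """Hash a key onto a point in the d-dimensional space."""
    h = hashlib.sha1(name.encode()).digest()
    return tuple(int.from_bytes(h[4*i:4*i+4], "big") / 2**32 for i in range(D))

def torus_distance(p, q):
    """Euclidean distance with wrap-around in every dimension."""
    return sum(min(abs(a - b), SIDE - abs(a - b)) ** 2 for a, b in zip(p, q)) ** 0.5

def greedy_next_hop(current, neighbors, target):
    """Forward to the neighbor closest to the target point, or stop if none is closer."""
    best = min(neighbors, key=lambda n: torus_distance(n, target))
    return best if torus_distance(best, target) < torus_distance(current, target) else None

# Illustrative zone centers of the current node and its neighbors:
current = (0.25, 0.25)
neighbors = [(0.75, 0.25), (0.25, 0.75), (0.125, 0.25), (0.25, 0.125)]
target = key_point("yellow-submarine.mp3")
print("key maps to", target, "-> forward to", greedy_next_hop(current, neighbors, target))
```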

26 Network Search and Join
Node 7 joins the network by choosing a coordinate that falls in the zone of node 1.

27 CAN Refinements
Multiple realities:
– We can have r different coordinate spaces
– Nodes hold a zone in each of them
– Creates r replicas of the (key, value) pairs
– Increases robustness
– Reduces path length, as a search can be continued in the reality where the target is closest (see the sketch below)
Overloading zones:
– Several peers are responsible for the same zone
– Splits are only performed when a maximum occupancy (e.g. 4 peers) is reached
– Nodes know all other nodes in the same zone
– But only one node in each neighboring zone
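A small sketch of the reality-selection idea from the first bullet group: with r coordinate spaces, the search continues in the reality where this node sits closest to the key's point. The coordinates are made up, and torus_distance matches the helper in the previous sketch.

```python
def torus_distance(p, q, side=1.0):
    """Euclidean distance with wrap-around in every dimension."""
    return sum(min(abs(a - b), side - abs(a - b)) ** 2 for a, b in zip(p, q)) ** 0.5

# This node's zone coordinates in r = 3 independent realities (made-up values):
my_coords = [(0.20, 0.70), (0.60, 0.10), (0.90, 0.90)]
target = (0.55, 0.20)          # point the searched key hashes to (same in every reality)

# Continue the search in the reality where this node is already closest to the target.
best = min(range(len(my_coords)), key=lambda r: torus_distance(my_coords[r], target))
print("keep routing in reality", best)   # reality 1 here: (0.60, 0.10) is nearest
```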

28 CAN Path Length

29 Increasing Dimensions and Realities

30 CAN Discussion
Performance:
– Search latency: O(d·n^(1/d)), depends on the choice of d (with high probability, provable)
– Message bandwidth: O(d·n^(1/d)) (selective routing)
– Storage cost: O(d) (routing table)
– Update cost: low (like search)
– Node join/leave cost: O(d·n^(1/d))
– Resilience to failures: realities and overloading
Qualitative criteria:
– Search predicates: spatial distance of multidimensional keys
– Global knowledge: key hashing, network origin
– Peer autonomy: nodes can decide on their position in the key space

31 Comparison of some P2P Solutions
– Gnutella: breadth-first search on the search graph; overlay maintenance cost O(1)
– Chord: implicit binary search trees; search cost O(log n)
– CAN: routing in a d-dimensional space; overlay maintenance cost O(d), search cost O(d·n^(1/d))

32 DHT Applications
Not only for sharing music anymore...
– Global file systems [OceanStore, CFS, PAST, Pastiche, UsenetDHT]
– Naming services [Chord-DNS, Twine, SFR]
– DB query processing [PIER, Wisc]
– Internet-scale data structures [PHT, Cone, SkipGraphs]
– Communication services [i3, MCAN, Bayeux]
– Event notification [Scribe, Herald]
– File sharing [OverNet]

33 DHT: Discussion
Pros:
– Guaranteed lookup
– O(log N) per-node state and search scope
Cons:
– No one uses them? (only one file-sharing app)
– Supporting non-exact-match search is hard

34 When are p2p / DHTs useful?
Caching and soft-state data
– Works well! BitTorrent, KaZaA, etc., all use peers as caches for hot data
Finding read-only data
– Limited flooding finds hay
– DHTs find needles
BUT...

35 A Peer-to-peer Google?
Complex intersection queries ("the" + "who")
– Billions of hits for each term alone
Sophisticated ranking
– Must compare many results before returning a subset to the user
Very, very hard for a DHT / p2p system
– Needs high inter-node bandwidth
– (This is exactly what Google does - massive clusters)

36 Writable, persistent p2p
Do you trust your data to 100,000 monkeys?
Node availability hurts
– Ex: store 5 copies of data on different nodes
– When someone goes away, you must replicate the data they held
– Hard drives are *huge*, but cable-modem upload bandwidth is tiny - perhaps 10 GBytes/day
– It takes many days to upload the contents of a 200 GB hard drive. Very expensive leave/replication situation!
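The back-of-the-envelope arithmetic behind the last bullets, using the slide's numbers:

```python
drive_gb = 200              # data a departing node held (from the slide)
upload_gb_per_day = 10      # rough cable-modem upload budget (from the slide)
print(drive_gb / upload_gb_per_day, "days to re-replicate")   # 20.0 days
```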

37 Research Trends: A Superficial History Based on Articles in IPTPS
In the early 00s:
– DHT-related applications, optimizations, reevaluations... (more than 50% of IPTPS papers!)
– System characterization
– Anonymization
2005-...:
– BitTorrent: improvements, alternatives, gaming it
– Security, incentives
More recently:
– Live streaming
– P2P TV (IPTV)
– Games over P2P

38 What's Missing?
Very important lessons learned
– ... but did we move beyond vertically-integrated applications?
– Can we distribute complex services on top of p2p overlays?

39 P2P: Summary
Many different styles; remember the pros and cons of each
– centralized, flooding, swarming, unstructured and structured routing
Lessons learned:
– Single points of failure are very bad
– Flooding messages to everyone is bad
– The underlying network topology is important
– Not all nodes are equal
– Need incentives to discourage freeloading
– Privacy and security are important
– Structure can provide theoretical bounds and guarantees

