
1 Distributed hash tables Protocols and applications Jinyang Li

2 Peer to peer systems
Symmetric in node functionalities
– Anybody can join and leave
– Everybody gives and takes
Great potential
– Huge aggregate network capacity, storage, etc.
– Resilient to single points of failure and attack
E.g. Napster, Gnutella, BitTorrent, Skype, Joost

3 Motivations: the lookup problem
How to locate something given its name? Examples:
– File sharing
– VoIP
– Content distribution networks (CDN)

4 Strawman #1: a central search service
– Central service poses a single point of failure or attack
– Not scalable(?)
– Central service needs $$$

5 Improved #1: hierarchical lookup
– Performs hierarchical lookup (like DNS)
– More scalable than a central site
– Root of the hierarchy is still a single point of vulnerability

6 Strawman #2: flooding
A "Where is item K?" query is broadcast to all peers
– Symmetric design: no special servers needed
– Too much overhead traffic
– Limited in scale
– No guarantee of results

7 Improved #2: super peers
– E.g. Kazaa, Skype
– Lookup only floods super peers
– Still no guarantees for lookup results

8 DHT approach: routing
Structured:
– put(key, value); get(key)
– Maintain routing state for fast lookup
Symmetric:
– All nodes have identical roles
Many protocols:
– Chord, Kademlia, Pastry, Tapestry, CAN, Viceroy
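To make the put/get interface concrete, here is a minimal single-process sketch in Python. The DHTNode class, the sample key and the IP value are illustrative stand-ins; a real DHT routes each call to the peer responsible for the key rather than a local dictionary.

```python
import hashlib

class DHTNode:
    """Minimal local stand-in for the put/get interface on this slide.
    A real DHT would route each call to the peer responsible for the key."""

    def __init__(self):
        self.store = {}                      # key ID -> value

    @staticmethod
    def key_id(key):
        # Flat 160-bit identifier derived from the key name
        return int(hashlib.sha1(key.encode()).hexdigest(), 16)

    def put(self, key, value):
        self.store[self.key_id(key)] = value

    def get(self, key):
        return self.store.get(self.key_id(key))

node = DHTNode()
node.put("song.mp3", "192.0.2.7")            # hypothetical value: a host serving the file
print(node.get("song.mp3"))                  # -> 192.0.2.7
```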

9 DHT ideas
Scalable routing needs a notion of "closeness"
– Ring structure [Chord]

10 Different notions of "closeness"
Tree-like structure [Pastry, Kademlia, Tapestry]
[Figure: 8-bit node IDs such as 01100101, 01100111, 01100110, 01100100 arranged by shared prefixes 011011**, 011001**, 011010**, 011000**]
Distance measured as the length of the longest matching prefix with the lookup key
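The prefix notion of closeness fits in a few lines of Python; the 8-bit IDs below echo the figure, and matching_prefix_len is just an illustrative helper (Kademlia expresses the same ordering as an XOR metric).

```python
def matching_prefix_len(a, b, bits=8):
    """Length of the longest common bit prefix of two IDs (this slide's 'closeness')."""
    for i in range(bits):
        mask = 1 << (bits - 1 - i)           # test bits from the most significant down
        if (a & mask) != (b & mask):
            return i
    return bits

key = 0b01100101
nodes = [0b01100111, 0b01100110, 0b01100100, 0b01101100]
closest = max(nodes, key=lambda n: matching_prefix_len(n, key))
print(format(closest, "08b"))                # -> 01100100 (shares 7 leading bits with the key)
```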

11 Different notions of "closeness"
Cartesian space [CAN]
[Figure: the unit square (0,0)-(1,1) split into rectangular zones, one per node]
Distance measured as geometric distance
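A sketch of the geometric notion of distance, assuming a made-up mapping of keys to points and hypothetical zone centers; CAN's actual zone-splitting rules are not modeled here.

```python
import hashlib, math

def key_point(key):
    """Map a key into the unit square by hashing (an illustrative choice, not CAN's exact scheme)."""
    h = hashlib.sha1(key.encode()).digest()
    return h[0] / 255.0, h[1] / 255.0

# Hypothetical zone centers owned by four nodes that split the unit square
zones = {"A": (0.25, 0.75), "B": (0.75, 0.75), "C": (0.25, 0.25), "D": (0.75, 0.25)}

p = key_point("song.mp3")
owner = min(zones, key=lambda n: math.dist(zones[n], p))   # geometric (Euclidean) distance
print(owner)                                 # the node whose zone center is closest to the key's point
```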

12 DHT: complete protocol design
1. Name nodes and data items
2. Define data item to node assignment
3. Define per-node routing table contents
4. Handle churn
   – How do new nodes join?
   – How do nodes handle others' failures?

13 Chord: naming
Each node has a unique flat ID
– E.g. SHA-1(IP address)
Each data item has a unique flat ID (key)
– E.g. SHA-1(data content)

14 Data to node assignment
Consistent hashing
– A key's predecessor is the node whose ID most closely precedes it on the ring
– A key's successor (the first node at or after the key, going clockwise) is responsible for storing it
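A minimal consistent-hashing sketch under the naming scheme of slide 13. The chord_id and successor helpers and the sample IP addresses are illustrative, and a real node finds the successor by routing rather than by sorting a global node list.

```python
import hashlib
from bisect import bisect_left

BITS = 160

def chord_id(data):
    """Flat identifier as on slide 13: SHA-1 of a node address or of an item's content."""
    return int(hashlib.sha1(data).hexdigest(), 16) % (2 ** BITS)

def successor(node_ids, key):
    """First node at or clockwise after the key: the node responsible for storing it."""
    ids = sorted(node_ids)
    i = bisect_left(ids, key)
    return ids[i % len(ids)]                 # wrap around the ring past the highest ID

nodes = [chord_id(ip.encode()) for ip in ("10.0.0.1", "10.0.0.2", "10.0.0.3")]  # hypothetical nodes
key = chord_id(b"...some file contents...")
print(hex(successor(nodes, key)))            # ID of the node that stores this key
```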

15 Key-based lookup (routing)
Correctness
– Each node must know its correct successor

16 Key-based lookup
Fast lookup
– A node has O(log n) fingers exponentially "far" away
– Node x's i-th finger points to the first node at or after x + 2^i on the ring
Why is lookup fast? Each hop can at least halve the remaining ID distance to the key, so a lookup finishes in O(log n) hops.
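The following simulation sketch shows how fingers give fast lookups. It assumes a tiny 8-bit ID space and lets the simulator see every node ID, which a real Chord node cannot do; function names are illustrative.

```python
BITS = 8
RING = 2 ** BITS

def successor(nodes, x):
    """First node at or clockwise after point x on the ring."""
    return min(nodes, key=lambda n: (n - x) % RING)

def fingers(nodes, x):
    """Node x's i-th finger: the first node at or after x + 2^i."""
    return [successor(nodes, (x + 2 ** i) % RING) for i in range(BITS)]

def between_open(k, a, b):
    """k strictly inside the ring interval (a, b)."""
    return (a < k < b) if a < b else (k > a or k < b)

def between_half(k, a, b):
    """k in the half-open ring interval (a, b]."""
    return k == b or between_open(k, a, b)

def find_successor(nodes, n, key, hops=0):
    """Greedy routing: jump to the finger that most closely precedes the key."""
    succ = successor(nodes, (n + 1) % RING)
    if between_half(key, n, succ):
        return succ, hops
    preceding = [f for f in fingers(nodes, n) if between_open(f, n, key)]
    nxt = max(preceding, key=lambda f: (f - n) % RING)
    return find_successor(nodes, nxt, key, hops + 1)

nodes = [5, 20, 87, 140, 200]                # hypothetical 8-bit node IDs
print(find_successor(nodes, 5, 150))         # -> (200, 1): node 200 stores key 150
```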

17 Handling churn: join
New node w joins an existing network
– Issue a lookup to find its successor x
– Set its successor: w.succ = x
– Set its predecessor: w.pred = x.pred
– Obtain the subset of items it is now responsible for from x
– Notify x of its new predecessor: x.pred = w
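A sketch of these join steps over in-memory node objects. The Node class and its method names are illustrative, not Chord's RPCs, and setting the old predecessor's succ pointer directly is a simplification that real Chord leaves to stabilization (next slide).

```python
class Node:
    """In-memory stand-in for a Chord node."""

    def __init__(self, nid):
        self.id = nid
        self.succ = self                     # a lone node is its own successor and predecessor
        self.pred = self
        self.items = {}                      # key ID -> value

    def responsible(self, key, pred_id):
        """True if key falls in the ring interval (pred_id, self.id]."""
        a, b = pred_id, self.id
        return (a < key <= b) if a < b else (key > a or key <= b)

    def lookup_successor(self, key):
        """Walk the ring to find the key's successor; real Chord uses fingers (slide 16)."""
        n = self
        while not n.responsible(key, n.pred.id):
            n = n.succ
        return n

    def join(self, w):
        """New node w joins via existing node self, following the steps on this slide."""
        x = self.lookup_successor(w.id)      # 1. find w's successor x
        w.succ, w.pred = x, x.pred           # 2.-3. set w's successor and predecessor
        moved = {k: v for k, v in x.items.items() if w.responsible(k, w.pred.id)}
        for k in moved:                      # 4. take over the items w is now responsible for
            w.items[k] = x.items.pop(k)
        x.pred.succ = w                      # simplification: real Chord leaves this to stabilization
        x.pred = w                           # 5. x learns its new predecessor
```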

18 Handling churn: stabilization
Each node periodically fixes its state
– Node w asks its successor x for x.pred
– If x.pred is "closer" to w than x is, set w.succ = x.pred
Starting from any connected graph, stabilization eventually makes all nodes find their correct successors
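A sketch of stabilization acting on the Node objects from the join sketch above. Chord pairs stabilize with a notify call so the successor also learns about new predecessors; the code here is only illustrative.

```python
def between_open(k, a, b):
    """True if k lies strictly inside the ring interval (a, b)."""
    return (a < k < b) if a < b else (k > a or k < b)

def stabilize(w):
    """Periodic repair: adopt the successor's predecessor if it is closer to w."""
    candidate = w.succ.pred
    if between_open(candidate.id, w.id, w.succ.id):
        w.succ = candidate
    notify(w.succ, w)                        # tell the successor that w may be its predecessor

def notify(x, w):
    """x adopts w as its predecessor if w is closer than the current predecessor."""
    if x.pred is x or between_open(w.id, x.pred.id, x.id):
        x.pred = w
```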

19 Handling churn: failures
Nodes leave without notice
– Each node keeps s (e.g. 16) successors
– Replicate keys on the s successors
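A sketch of the successor-list idea, again over illustrative Node objects. is_alive stands in for whatever failure detector a deployment uses (e.g. periodic pings); replicating keys onto the s successors is not shown.

```python
S = 16                                       # number of backup successors each node tracks

def successor_list(node, s=S):
    """Follow succ pointers to collect the next s nodes on the ring."""
    lst, n = [], node.succ
    while len(lst) < s and n is not node:
        lst.append(n)
        n = n.succ
    return lst

def next_live_successor(node, is_alive):
    """On failure, fall back to the first live entry in the successor list."""
    for cand in successor_list(node):
        if is_alive(cand):                   # is_alive: the deployment's reachability check
            node.succ = cand
            return cand
    raise RuntimeError("all known successors failed")
```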

20 Handling churn: fixing fingers
Much easier to maintain fingers
– Any node at distance [2^i, 2^(i+1)] away will do
– Geographically closer fingers → lower latency
– Periodically flush dead fingers

21 Using DHTs in real systems
Amazon's Dynamo key-value storage [SOSP'07]
Serve as persistent state for applications
– Shopping carts, user preferences
How does it apply DHT ideas?
– Assign key-value pairs to responsible nodes
– Automatic recovery from churn
New challenges?
– Manage read-write data consistency

22 Using DHTs in real systems
Keyword-based searches for file sharing
– E.g. eMule, Azureus
How to apply a DHT?
– User has file 1f3d… with name "jingle bell Britney"
– Insert mappings: SHA-1(jingle) → 1f3d…, SHA-1(bell) → 1f3d…, SHA-1(britney) → 1f3d…
– How to answer the queries "jingle bell" and "Britney"?
Challenges?
– Some keywords are much more popular than others
– RIAA inserts a billion mappings "Britney spear xyz" → crap?
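A toy sketch of the keyword-index idea: the dht dictionary stands in for put/get calls against real nodes, values are appended rather than replaced, and a multi-word query is answered by intersecting the per-keyword sets. The file hash and name come from the slide.

```python
import hashlib

dht = {}                                     # SHA-1(keyword) -> set of file hashes

def kw(word):
    return hashlib.sha1(word.lower().encode()).hexdigest()

def publish(file_hash, name):
    for word in name.split():
        dht.setdefault(kw(word), set()).add(file_hash)   # append, never replace

def search(query):
    sets = [dht.get(kw(w), set()) for w in query.split()]
    return set.intersection(*sets) if sets else set()

publish("1f3d", "jingle bell Britney")
print(search("jingle bell"))                 # -> {'1f3d'}
print(search("Britney"))                     # -> {'1f3d'}
```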

23 Using DHTs in real systems CoralCDN

24 Motivation: alleviating flash crowds
[Figure: browsers fetch through HTTP proxies instead of hitting the origin server directly]
– Proxies handle most client requests
– Proxies cooperate to fetch content from each other

25 Getting content with CoralCDN
1. Server selection: what CDN node should I use?
   – Clients use CoralCDN via a modified domain name: nytimes.com/file → nytimes.com.nyud.net/file
2. Lookup(URL): what nodes are caching the URL?
3. Content transmission: from which caching nodes should I download the file?
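A sketch of the client-side URL rewrite shown on this slide; the coralize helper name is illustrative and only performs the hostname change from the example.

```python
from urllib.parse import urlsplit, urlunsplit

def coralize(url):
    """Append the .nyud.net suffix so DNS directs the client to a Coral proxy."""
    parts = urlsplit(url)
    return urlunsplit(parts._replace(netloc=parts.netloc + ".nyud.net"))

print(coralize("http://nytimes.com/file"))   # -> http://nytimes.com.nyud.net/file
```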

26 Coral design goals
Don't control data placement
– Nodes cache based on access patterns
Reduce load at the origin server
– Serve files from cached copies whenever possible
Low end-to-end latency
– Lookup and cache download optimize for locality
Scalable and robust lookups

27 Given a URL:
– Where is the data cached?
– Map name to locations: URL → {IP1, IP2, IP3, IP4}
– lookup(URL) → get IPs of nearby caching nodes
– insert(URL, myIP, TTL) → add me as a node caching the URL, for TTL seconds
Isn't this what a distributed hash table is for?

28 A straightforward use of a DHT
– Store URL → {IP1, IP2, IP3, IP4} at the node closest to SHA-1(URL); every proxy calls insert(URL, myIP) there
Problems
– No load balance for a single URL
– All inserts and lookups go to the same node (which cannot be close to everyone)

29 #1 Solve the load balance problem: relax hash-table semantics
DHTs are designed for hash-table semantics
– Insert and replace: URL → IP_last
– Insert and append: URL → {IP1, IP2, IP3, IP4}
Each Coral lookup needs only a few values
– lookup(URL) → {IP2, IP4}
– Preferably ones close in the network
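A sketch of insert-and-append with a TTL and a small lookup result, combining the index operations from slide 27 with the relaxed semantics here. The AppendOnlyIndex class and its limits are illustrative, and a real Coral node would prefer values from nearby nodes rather than sampling at random.

```python
import time, random

class AppendOnlyIndex:
    def __init__(self):
        self.table = {}                      # URL -> {ip: expiry_timestamp}

    def insert(self, url, ip, ttl):
        self.table.setdefault(url, {})[ip] = time.time() + ttl   # append, never replace

    def lookup(self, url, want=2):
        now = time.time()
        live = [ip for ip, exp in self.table.get(url, {}).items() if exp > now]
        return random.sample(live, min(want, len(live)))          # only a few values

idx = AppendOnlyIndex()
for ip in ("10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4"):        # hypothetical caching proxies
    idx.insert("http://nytimes.com/file", ip, ttl=300)
print(idx.lookup("http://nytimes.com/file"))                       # e.g. ['10.0.0.3', '10.0.0.1']
```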

30 Prevent hotspots in index
Route convergence
– O(b) nodes are 1 hop from the root
– O(b^2) nodes are 2 hops away from the root
– …
[Figure: lookup paths of 1, 2, 3 hops converging from leaf nodes (distant IDs) to the root node (closest ID)]

31 Prevent hotspots in index
Request load increases exponentially towards the root
[Figure: the root node (closest ID) stores URL = {IP1, IP2, IP3, IP4} and receives the requests converging from leaf nodes (distant IDs)]

32 Rate-limiting requests
Refuse to cache if the node already has the max # of "fresh" IPs for the URL
– Locations of popular items get pushed down the tree
[Figure: the root node (closest ID) keeps URL = {IP1, IP2, IP3, IP4}; nodes below it hold the overflow, URL = {IP3, IP4} and URL = {IP5}]

33 Rate-limiting requests
Refuse to cache if the node already has the max # of "fresh" IPs for the URL
– Locations of popular items get pushed down the tree
Except: nodes leak through at most β inserts / min / URL
– Bounds the rate of inserts towards the root, yet the root stays fresh
[Figure: the root's entry is refreshed to URL = {IP1, IP2, IP6, IP4}; lower nodes still hold URL = {IP3, IP4} and URL = {IP5}]
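A rough sketch of the rate-limited insert described on slides 32-33. It is not Coral's exact two-phase algorithm: the IndexNode class, MAX_FRESH and the recursive forwarding are simplifications, with β = 12/min matching the figure on the next slide.

```python
import time

MAX_FRESH = 4                                # max fresh locations cached per URL per node
BETA = 12                                    # leaked inserts / minute / URL (slide 34)

class IndexNode:
    def __init__(self, parent=None):
        self.parent = parent                 # next node toward the key's root; None at the root
        self.values = {}                     # url -> list of (ip, expiry_time)
        self.leaks = {}                      # url -> timestamps of recently forwarded inserts

    def fresh(self, url, now):
        return [v for v in self.values.get(url, []) if v[1] > now]

    def insert(self, url, ip, ttl=300.0):
        now = time.time()
        full = len(self.fresh(url, now)) >= MAX_FRESH
        recent = [t for t in self.leaks.get(url, []) if t > now - 60]
        # Try to place the value as close to the root as the rate limit allows
        if self.parent and (not full or len(recent) < BETA):
            if full:
                self.leaks[url] = recent + [now]          # spend one leak credit
            if self.parent.insert(url, ip, ttl):
                return True
        if not full:
            self.values[url] = self.fresh(url, now) + [(ip, now + ttl)]
            return True
        return False      # full here and above: a node further down the tree stores it
```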

34 Load balance results
– 494 nodes on PlanetLab
– Aggregate request rate: ~12 million / min
– Rate limit per node (β): 12 / min
– Root has fan-in from 7 other nodes
[Figure: links into the root carry roughly 1β, 2β, 3β, …, 7β inserts]

35 #2 Solve lookup locality problem
– Cluster nodes hierarchically based on RTT
– A lookup traverses up the hierarchy, routing to the "closest" node at each level

36 Preserve locality through hierarchy
[Figure: the same 000…111… ID space at three cluster levels, with RTT thresholds of < 20 ms, < 60 ms, and none (global)]
– Minimizes lookup latency
– Prefer values stored by nodes within faster clusters
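A sketch of level-by-level lookup under the thresholds on this slide. cluster_level, hierarchical_lookup and the per-level index dictionaries are illustrative stand-ins for Coral's clustering machinery.

```python
THRESHOLDS_MS = [20, 60, None]               # fastest cluster, regional cluster, global (no limit)

def cluster_level(rtt_ms):
    """Highest (fastest) level whose RTT threshold this peer satisfies: 2, 1 or 0."""
    for i, limit in enumerate(THRESHOLDS_MS):
        if limit is None or rtt_ms < limit:
            return len(THRESHOLDS_MS) - 1 - i
    return 0

def hierarchical_lookup(url, indexes_by_level):
    """Query the <20 ms cluster first, then <60 ms, then the global index."""
    for level in (2, 1, 0):
        hits = indexes_by_level.get(level, {}).get(url, [])
        if hits:
            return hits                      # values stored by nearby nodes win
    return []
```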

37 Clustering reduces lookup latency
– Reduces median lookup latency by a factor of 4

38 Putting it all together: Coral reduces server load
– Local disk caches begin to handle most requests
– Most hits are served within the 20-ms Coral cluster
– Few hits reach the origin
– 400+ nodes provide 32 Mbps, about 100x the capacity of the origin

