
1 Scalable Group Communication for the Internet
Idit Keidar
MIT Lab for Computer Science, Theory of Distributed Systems Group
The main part of this talk is joint work with Sussman, Marzullo, and Dolev.

2 Collaborators Tal Anker Ziv Bar-Joseph Gregory Chockler Danny Dolev Alan Fekete Nabil Huleihel Kyle Ingols Roger Khazan Carl Livadas Nancy Lynch Keith Marzullo Yoav Sasson Jeremy Sussman Alex Shvartsman Igor Tarashchanskiy Roman Vitenberg Esti Yeger-Lotem

3 Outline
Motivation
Group communication - background
A novel architecture for scalable group communication services in WANs
A new scalable group membership algorithm
–Specification
–Algorithm
–Implementation
–Performance
Conclusions

4 Modern Distributed Applications (in WANs)
Highly available servers
–Web
–Video-on-Demand
Collaborative computing
–Shared white-board, shared editor, etc.
–Military command and control
–On-line strategy games
Stock market

5 Important Issues in Building Distributed Applications
Consistency of view
–Same picture of the game, same shared file
Fault tolerance, high availability
Performance
–Conflicts with consistency?
Scalability
–Topology - WAN, long unpredictable delays
–Number of participants

6 Generic Primitives - Middleware, “Building Blocks”
E.g., total order, group communication
Abstract away difficulties, e.g.:
–Total order - a basis for replication
–Masking failures
Important issues:
–Well-specified, complete semantics
–Performance

7 Research Approach
Rigorous modeling, specification, proofs, performance analysis
Implementation and performance tuning
Services → Applications
Specific examples → General observations

8 Group Communication
[Diagram: processes multicast to a group G via Send(G)]
Group abstraction - a group of processes is one logical entity
Dynamic groups (join, leave, crash)
Systems: Ensemble, Horus, ISIS, Newtop, Psync, Sphynx, Relacs, RMP, Totem, Transis

9 Virtual Synchrony [Birman, Joseph 87]
Group members all see events in the same order
–Events: messages, process crash/join
Powerful abstraction for replication
Framework for fault tolerance, high availability
Basic component: group membership
–Reports changes in the set of group members

10 Example: Highly Available VoD [Anker, Dolev, Keidar ICDCS 1999]
Dynamic set of servers
Clients talk to an “abstract” service
A server can crash; the client shouldn’t know

11 VoD Service: Exploiting Group Communication
Group abstraction for connection establishment and transparent migration (with simple clients)
Membership services detect conditions for migration - fault tolerance and load balancing
Reliable group multicast among servers for consistently sharing information
Virtual synchrony allows servers to agree upon migration immediately (no message exchange)
Reliable messages for control
Server: ~2500 C++ lines
–All fault tolerance logic at the server

12 Related Projects
Group communication: Moshe: Group Membership (ICDCS 00); Architecture for Group Membership in WAN (DIMACS 98); Specification (Survey 99); Virtual Synchrony (ICDCS 00); Dynamic Voting (PODC 97); Optimistic VS (SRDS 00); QoS Support (TINA 96, OPODIS 00); Inheritance-based Modeling (ICSE 00)
Applications: Object Replication (PODC 96); CSCW (NGITS 97); Highly Available VoD (ICDCS 99)

13 A Scalable Architecture for Group Membership in WANs
Tal Anker, Gregory Chockler, Danny Dolev, Idit Keidar
DIMACS Workshop 1998

14 Scalable Membership Architecture
Dedicated distributed membership servers: “divide and conquer”
–Servers are involved only in membership changes
–Members communicate with each other directly (and implement virtual synchrony)
Two levels of membership, as sketched below:
–Notification Service NSView - “who is around”
–Agreed membership views
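To make the two levels concrete, here is a minimal sketch of the two kinds of views as data types; the type names are illustrative assumptions for this talk summary, not taken from the Moshe codebase.

```python
# Minimal sketch of the two membership levels; names are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class NSView:
    """Lower level: the notification service's raw "who is around" estimate."""
    members: frozenset

@dataclass(frozen=True)
class AgreedView:
    """Upper level: an agreed membership view - a members set plus identifier."""
    members: frozenset
    view_id: int  # monotonically increasing (see the Moshe guarantees below)
```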

15 Architecture
[Diagram: members sit atop a Notification Service (NS) and a Membership service. The NS reports the NSView - “who is around” (failure/join/leave); the Membership service delivers agreed views - a members set and identifier, e.g., {A,B,C,D,E},7]

16 The Notification Service (NS)
Group members send requests directly to the (local) NS:
–join(Group G)
–leave(Group G)
NS detects faults (member / domain)
Information is propagated to all NS servers
NS servers notify membership servers of the new NSView

17 The NS Communication: Reliable FIFO Links
Membership servers can send each other messages using the NS
FIFO order: if S1 sends m1 and later m2, then any server that receives both receives m1 first.
Reliable links: if S1 sends m to S2, then eventually either S2 receives m or S1 suspects S2 (and all of its clients).
A sketch of this contract follows.
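A hedged sketch of the NS contract these two slides describe, written as an abstract interface; the class and method names are assumptions for illustration, not the actual CONGRESS/Moshe API.

```python
from abc import ABC, abstractmethod

class NotificationService(ABC):
    """Illustrative notification-service interface (names are assumed)."""

    @abstractmethod
    def join(self, group: str) -> None:
        """A group member asks its local NS to join group G."""

    @abstractmethod
    def leave(self, group: str) -> None:
        """A group member asks its local NS to leave group G."""

    @abstractmethod
    def send(self, dest: str, msg: bytes) -> None:
        """Server-to-server channel with the slide's guarantees:
        FIFO - if S1 sends m1 and later m2, any server receiving both
        receives m1 first; Reliable - eventually either dest receives
        msg, or the sender suspects dest (and all of its clients)."""
```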

18 Moshe: A Group Membership Algorithm for WANs
Idit Keidar, Jeremy Sussman, Keith Marzullo, Danny Dolev
ICDCS 2000

19 Membership in WAN: the Challenge
Message latency is large and unpredictable
Frequent message loss
⇒ Time-out failure detection is inaccurate
⇒ We use a notification service (NS) for WANs
⇒ Number of communication rounds matters
⇒ Algorithms may change views frequently
⇒ View changes require communication for state transfer, which is costly in WAN

20 Moshe’s Novel Concepts
Designed for WANs from the ground up
–Previous systems emerged from LANs
Avoids delivery of “obsolete” views
–Views that are known to be changing
–Not always terminating (but the NS is)
Runs in a single round (“typically”)

21 Member-Server Interaction

22 Moshe Guarantees
Views: identifier is monotonically increasing
Conditional liveness property - agreement on views: if “all” eventually have the same last NSView, then “all” eventually agree on the last view
No obsolete views
Composable
–Allows reasoning about individual components
–Useful for applications

23 Moshe Operation: Typical Case
In response to a new NSView (members):
–send a proposal to the other servers with the NSView
–send startChange to local members (clients)
Once proposals from all servers of the NSView members arrive, deliver the view to local members:
–members - the NSView
–identifier higher than all previous
A sketch of this one-round flow follows.
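A minimal sketch of this one-round flow at a single membership server, under stated assumptions: servers_of() and the outbox message plumbing are illustrative stand-ins, not Moshe's actual implementation.

```python
# Sketch of Moshe's typical (one-round) case; all names are assumptions.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Proposal:
    num: int              # proposal number, monotonically increasing per server
    members: frozenset    # the NSView this proposal responds to

@dataclass
class Server:
    sid: str
    prop_num: int = 0
    pending: dict = field(default_factory=dict)  # NSView -> {server id: Proposal}
    outbox: list = field(default_factory=list)   # (destination, message) pairs

def servers_of(nsview):
    """Stand-in: the membership servers hosting nsview's members
    (here, simplistically, one server per member)."""
    return frozenset(nsview)

def on_nsview(srv: Server, nsview: frozenset):
    """New NSView from the NS: propose it and warn local clients."""
    srv.prop_num += 1
    prop = Proposal(srv.prop_num, nsview)
    for peer in servers_of(nsview):
        srv.outbox.append((peer, prop))                  # proposal to all servers
    srv.outbox.append(("local members", ("startChange", nsview)))
    srv.pending.setdefault(nsview, {})[srv.sid] = prop

def on_proposal(srv: Server, sender: str, prop: Proposal):
    """Deliver the view once proposals from all the NSView's servers arrive."""
    slot = srv.pending.setdefault(prop.members, {})
    slot[sender] = prop
    if set(slot) >= servers_of(prop.members):            # "full house"
        view_id = max(p.num for p in slot.values())      # higher than all previous
        srv.outbox.append(("local members", ("view", prop.members, view_id)))
```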

24 Example: Client B joins a group

25 Goal: Self-Stabilizing
Once the same last NSView is received by all servers:
–All send proposals for this NSView
–All the proposals reach all the servers
–All servers use these proposals to deliver the same view
And they live happily ever after!

26 Out-of-Sync Case: Unexpected Proposal
[Diagram: servers A, B, C exchange proposals after -C/+C events; some messages are lost (X), and C’s proposal reaches A when A’s algorithm is not running]
⇒ To avoid deadlock: A must respond

27 Out-of-Sync Case: Unexpected Proposal (cont.)
[Diagram: servers A, B, C; events -C, +C, then +AB+C lead to a view]
⇒ Extra proposals are redundant; responding with a proposal may cause livelock

28 Out-of-Sync Case: Missing Proposal
[Diagram: servers A, B, C; events -C, +C, then +AB+C lead to a view while one proposal is missing]
This case was exposed by the correctness proof

29 Detecting a “Missing Proposal”
Proposals are numbered
–Numbers are monotonically increasing
PropNum - the latest proposal number I sent
Each proposal carries extra information:
–Used[] contains the last proposal used for a view (per server)
Detection: a proposal p arrives with p.Used[me] = PropNum
A sketch of this check follows.
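The detection rule can be rendered in a few lines; the field names mirror the slide's Used[] and PropNum, but the code itself is an illustrative assumption.

```python
from dataclasses import dataclass, field

@dataclass
class NumberedProposal:
    num: int
    used: dict = field(default_factory=dict)  # server id -> last proposal number used for a view

def detects_missing_proposal(my_id: str, my_prop_num: int,
                             p: NumberedProposal) -> bool:
    # p.used[my_id] is the number of my proposal that the sender last
    # consumed to deliver a view. If it equals my current PropNum, the
    # sender built a view from my newest proposal while I did not, so a
    # proposal I need is missing - trigger Slow Agreement.
    return p.used.get(my_id) == my_prop_num
```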

30 Missing Proposal Detection
[Diagram: servers A, B, C deliver a view from proposals numbered 1,1,1 (Used: 1,1,1); after -C and +C, a proposal numbered 2 carrying Used=[1,1,1] arrives at C, whose PropNum is 1; since Used[C] = 1 = PropNum - detection!]

31 Detecting Blocking Cases
My last proposal was used by me with an earlier proposal of server A
–A’s last proposal will arrive when my algorithm is not running (extra proposal)
My last proposal was used by A with an earlier proposal of A
–In A’s latest proposal, Used[me] = PropNum

32 Handling Out-of-Sync Cases: “Slow Agreement”
Also sends proposals, tagged “SA”
Invoked upon blocking detection or upon receipt of an “SA” proposal
Upon receipt of an “SA” proposal with a bigger number than PropNum, respond with the same number
Deliver a view only with a “full house” of same-number proposals
A sketch follows.
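A hedged sketch of Slow Agreement, reusing Server and servers_of from the one-round sketch above; sa_votes is an assumed piece of state, not Moshe's actual data layout.

```python
def on_sa_proposal(srv, sender, sa_num, nsview):
    """Handle a proposal tagged "SA" carrying number sa_num for nsview."""
    if not hasattr(srv, "sa_votes"):
        srv.sa_votes = {}  # (nsview, number) -> set of servers heard from
    if sa_num > srv.prop_num:
        # Adopt and echo the bigger number so all servers can converge on it.
        srv.prop_num = sa_num
        for peer in servers_of(nsview):
            srv.outbox.append((peer, ("SA", sa_num, nsview)))
    votes = srv.sa_votes.setdefault((nsview, sa_num), set())
    votes.add(sender)
    # Deliver only on a "full house" of same-numbered SA proposals.
    if votes >= servers_of(nsview):
        srv.outbox.append(("local members", ("view", nsview, sa_num)))
```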

33 Rationale for Slow Agreement Algorithm Termination
Numbers can only increase if an NSView event occurs
–Eventually, they stop increasing (if the system stabilizes)
Every recipient responds with a greater or equal number
Numbers are strictly increasing
–The same number is not sent twice

34 How Typical is the “Typical” Case?
Depends on the notification service (NS)
–Classify NS good behaviors: symmetric and transitive perception of failures
Transitivity depends on the logical topology and how suspicions propagate
The typical case should be very common
Need to measure

35 Implementation
Uses CONGRESS [Anker et al.]
–An NS for WANs
–Always symmetric, can be non-transitive
–Logical topology can be configured
Moshe servers extend CONGRESS servers
Socket interface with processes

36 The Experiment
Run over the Internet
–In the US: MIT, Cornell (CU), UCSD
–In Taiwan: NTU
–In Israel: HUJI
Ran for 10 days in one configuration, 2.5 days in another
10 clients at each location
–continuously join/leave 10 groups

37 Two Experiment Configurations

38 Percentage of “Typical” Cases
Configuration 1:
–MIT: 10,786 views, 10,661 in one round - 98.84%
–Other sites: 98.8%, 98.9%, 98.97%, 98.85%
Configuration 2:
–MIT: 2,559 views, 2,555 in one round - 99.84%
–Other sites: 99.82%, 99.79%, 99.81%, 99.84%
Overwhelming majority for one round!
Depends on topology ⇒ can scale

39 Performance: Surprise!
[Histogram of Moshe duration (number of runs vs. milliseconds): MIT, configuration 1, runs up to 4 seconds (97%)]

40 Performance: Part II
[Histogram of Moshe duration (number of runs vs. milliseconds): MIT, configuration 2, runs up to 3 seconds (99.7%)]

41 Performance over the Internet: What is Going On?
Without message loss, running time is close to the biggest round-trip time, ~650 ms
–As expected
Message loss has a big impact
Configuration 2 has much less loss ⇒ more cases of good performance

42 “Slow” versus “Typical”
Slow can take 1 or 2 rounds once it is run
–Depending on PropNum
Slow after NE:
–One round is run first, then detection, then slow
–Without loss: 900 ms, 40% more than usual
Slow without NE:
–Detection by an unexpected proposal
–Only the slow algorithm is run
–Runs for less time than one round

43 Unstable Periods: No Obsolete Views
“Unstable” =
–constant changes; or
–connected processes differ in failure detection
Configuration 1:
–379 of the 10,786 views took ≥ 4 seconds (3.5%)
–167 took ≥ 20 seconds (1.5%)
–Longest running time: 32 minutes
Configuration 2:
–14 of 2,559 views took ≥ 4 seconds (0.5%)
–Longest running time: 31 seconds

44 Scalability Measurements
Controlled experiment at MIT and UCSD
–Prototype NS, based on TCP/IP (Sasson)
–Faults injected to test the “slow” case
Varied the number of members and servers
Measured end-to-end latencies at a member, from join/leave/suspicion to the corresponding view
Average of 150 runs (50 slow)

45 End-to-End Latency: Scalable!
[Plots: member scalability with 4 servers (constant); server and member scalability with 4-14 servers]

46 Conclusion: Moshe Features
Avoids obsolete views
A single round
–98% of the time in one configuration
–99.8% of the time in another
Uses a notification service for WANs
–Good abstraction
–Flexibility to configure in multiple ways
–Future work: configure in more ways
Scalable “divide and conquer” architecture

47 Retrospective: the Role of Theory
Specification
–Possible to implement
–Useful for applications (composable)
The specification can be met in one round “typically” (unlike Consensus)
The correctness proof exposes subtleties
–Need to avoid livelock
–Two types of detection mechanisms are needed

48 Future Work: The QoS Challenge
Some distributed applications require QoS
–Guaranteed available bandwidth
–Bounded delay, bounded jitter
The membership algorithm terminates in one round under certain circumstances
–Can we leverage that to guarantee QoS under certain assumptions?
Can other primitives guarantee QoS?

