
1 Pushing group communication to the edge will enable radically new distributed applications
Ken Birman, Cornell University

2 First, a premise
An industry driven by disruptive changes
– Incremental advances are fine but don't shake things up very much
– Those who quickly seize and fully exploit disruptive technologies thrive, while those who miss even a single major event are left behind
Our challenge as researchers?
– Notice these opportunities first

3 Ingredients for disruptive change
Set the stage:
– Pent-up opportunity to leverage a legacy app base
– Infrastructure enablers suddenly in place
– Potential for radically new / killer applications
Some sort of core problem to solve
– Often demands out-of-the-box innovation
Deployable in easily used form
– These days, developers demand integrated tools

4 Why group communication?
Or more specifically…
– Why groups as opposed, say, to pub-sub?
– Why does the edge of the network represent such a big opportunity?
Who would use it, and why?
Can software challenges (finally) be overcome?
How to integrate with existing platforms?

5 Groups: A common denominator
Not a new idea… dates back 20 years or more
– V system (Stanford) and Isis system (Cornell)
A group is a natural distributed abstraction
– Can represent replicated data or services, shared keys or other consistent state, leadership or other coordination abstractions, shared memory…
– These days we would visualize a group as an object
Can store a pointer to it in the file system name space
Each has an associated type (endpoint management class)
A process opens as many groups as it likes (like files)
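To make the group-as-object idea concrete, here is a minimal in-process sketch. The class, method names, and path convention are our invention for illustration, not QSM's published API:

```csharp
using System;
using System.Collections.Generic;

// A local stand-in for the abstraction on this slide: typed groups
// opened by a file-like name. All names here are hypothetical.
public class Group<T>
{
    static readonly Dictionary<string, object> Registry = new Dictionary<string, object>();
    public event Action<T> OnReceive;

    public static Group<T> Open(string path)
    {
        // One shared instance per name, so every opener joins the same group.
        if (!Registry.TryGetValue(path, out var g)) Registry[path] = g = new Group<T>();
        return (Group<T>)g;
    }

    public void Send(T message) => OnReceive?.Invoke(message); // "multicast" locally
}

class Demo
{
    static void Main()
    {
        var quotes = Group<decimal>.Open("/market/quotes/IBM"); // like opening a file
        quotes.OnReceive += p => Console.WriteLine($"IBM at {p}");
        quotes.Send(98.75m);
    }
}
```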

6 A very short history of groups
The V system offered OS-level groups but stalled
Isis added strong semantics and also many end-user presentations, like pub-sub
– Strong semantics: the group is indistinguishable from a single entity implementing the same abstraction
– Our name for this model: virtual synchrony
Can be understood as a form of transactional serializability, but with processes, groups, and messages as components
Weaker than Paxos, which uses a strong consensus model
Pub-sub emerged as the most popular end-user API

7 Virtual Synchrony Model
[Timeline diagram over processes p, q, r, s, t: G0 = {p,q}; r and s request to join; r, s added with state transfer, giving G1 = {p,q,r,s}; p fails (crash), giving G2 = {q,r,s}; t requests to join and is added with state transfer, giving G3 = {q,r,s,t}]
... to date, the only widely adopted model for consistency and fault-tolerance in highly available networked applications
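A small sketch of the view sequence from the diagram, in our own notation (the types are ours, not QSM's): in virtual synchrony every surviving member observes the same ordered series of membership views, and each message is delivered within a well-defined view.

```csharp
using System;
using System.Collections.Generic;

// Replays the membership-view sequence shown in the diagram above.
public record View(int Id, string[] Members);

class ViewSequence
{
    static void Main()
    {
        var views = new List<View>
        {
            new View(0, new[] { "p", "q" }),            // G0
            new View(1, new[] { "p", "q", "r", "s" }),  // G1: r, s join (state transfer)
            new View(2, new[] { "q", "r", "s" }),       // G2: p crashes
            new View(3, new[] { "q", "r", "s", "t" }),  // G3: t joins (state transfer)
        };
        foreach (var v in views)
            Console.WriteLine($"G{v.Id} = {{{string.Join(",", v.Members)}}}");
    }
}
```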

8 Four models side-by-side
Traditional pub-sub as supported in current products
– No guarantees; often scales poorly (1-1 TCP connections)
– Versions that use IP multicast are prone to meltdown when stressed
Virtual synchrony
– Fastest and most scalable of the strongly consistent models
– Cheats on apparent synchrony whenever it can, reducing the risk of a correlated failure caused by a poison-pill multicast
Paxos
– Closest fit to consensus agreement and fault-tolerance properties
– Performance limited by the 2-phase protocol required to achieve this
– Virtual synchrony uses a Paxos-like protocol to track group membership
Transactions
– Most expensive of all: updates touch persistent storage
– Execution model too constraining for high-speed communication apps

9 Successes… and failures
Some successes:
– New York Stock Exchange, Swiss Exchange, French Air Traffic Control System, US Navy AEGIS, telephony, factory automation, Microsoft Vista clusters, IBM WebSphere
– Paxos popular for small fault-tolerance services
Some failures:
– Implementations were often fragile, didn't scale well, and were poorly integrated with development environments
– Pub-sub users tolerated weak semantics
Bottom line: the market didn't scale adequately

10 Why didn't the market scale?
Mile-high perspective:
– All existing group communication solutions targeted server platforms, e.g. to replicate data in a clustered application
– Pub-sub became a majority solution for sending data (for example, stock trades) from data centers to client platforms
But neither market was ultimately all that large
– The number of server platforms is tiny compared to the number of client systems… and group communication isn't the whole story – you always needed more technology
– Meanwhile, once you license pub-sub to every major trading floor, you exhaust the associated revenue opportunity

11 Why didn't either displace the other?
Pub-sub systems were best-effort technologies
– They compete with systems that just make lots of TCP connections and push data…
– Lacking stronger semantics, applications that want security or stronger reliability had to build extra end-to-end logic
But group communication systems had scalability issues of their own
– Most platforms focus on processes using a small number of small groups (often just one group of 3-5 members)
– Other positioning involves relaying through a central service

12 Is there an answer?
Retarget group communication towards the edge of the network!
– Provide it as a direct client-to-client option
– Integrate tightly into the dominant client computing platform (.NET, web services)
– Make it scale much better
Now value is tied to the number of client systems, not the number of servers…

13 Potential roles for group communication at the edge of the net?
Gaming systems and VR immersion
Delivery of streaming media, stock quotes, projected pricing for stocks and bonds…
Replication of security keys, other system structuring data, and management information
Vision: any client can securely produce and/or consume data streams… servers are present but in a supporting role

14 Recall our list of ingredients…
Pent-up demand: developers have lacked a way to do this for decades…
Technology enabler: availability of ubiquitous broadband connectivity, high bandwidths
Potential for high-value use in legacy apps, and potential for new killer apps...
But can we overcome the scalability limits?

15 Quicksilver: Krzys Ostrowski, Birman, Phanishayee, Dolev
Publish-subscribe eventing and notification
Scalable in many dimensions
– Number of publishers, subscribers
– Number of groups (topics)
– Churn, failures, loss, perturbations
– High data rates
Reliable
Easy to use, supporting standard APIs

16 Quicksilver: Key ideas
Design dissemination, reliability, security, and virtual synchrony as concurrently active stacks
No need to relay multicasts through any form of centralized service…
Send a typical message with a single (or a few) IP or overlay multicasts
Meta-protocols aggregate work across groups for efficiency
The system is extensively optimized to maximize throughput in all respects
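As an illustration of single-multicast dissemination, here is a bare-bones UDP multicast sender/receiver. This is plain IP multicast, not Quicksilver's actual stack; the group address and port are arbitrary examples:

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

// One IP multicast reaches every subscribed receiver with no central relay.
class MulticastSketch
{
    static readonly IPAddress Group = IPAddress.Parse("239.0.0.1"); // example address
    const int Port = 5000;                                          // example port

    static void Main(string[] args)
    {
        if (args.Length > 0 && args[0] == "send")
        {
            using var sender = new UdpClient();
            byte[] payload = Encoding.UTF8.GetBytes("hello, group");
            // A single send; the network fans it out to all subscribers.
            sender.Send(payload, payload.Length, new IPEndPoint(Group, Port));
        }
        else
        {
            using var receiver = new UdpClient(Port);
            receiver.JoinMulticastGroup(Group);
            var remote = new IPEndPoint(IPAddress.Any, 0);
            byte[] data = receiver.Receive(ref remote);
            Console.WriteLine(Encoding.UTF8.GetString(data));
        }
    }
}
```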

17 Quicksilver is a work in progress…
The basic scalable infrastructure is working today (coded in C#, runs on .NET)
We're currently adding:
– Clean integration into .NET, to make it easy to use in much the same way that files are used today
– Scalable virtual synchrony protocols; we may also offer Paxos for those who want the stronger model
– A comprehensive, scalable security architecture
Planning a series of free releases from Cornell

18 Conclusions?
Enablers for a revolution at the edge
– Groups that look like a natural part of .NET and web services
– Incredibly easy to use… much like shared files
– They scale well enough that they can actually be used in the ways you want… and offer powerful security and consistency guarantees for applications that need them
– Will also integrate with a persistence service to capture the history associated with a group, if desired. Like a transactional log… but much faster and more flexible!
Shared groups with strong properties enable a new generation of trustworthy applications

19 Extra Slides (provided by Krzys)
What to read? Our OSDI submission: Krzysztof Ostrowski, Ken Birman, and Amar Phanishayee. QuickSilver Scalable Multicast. Available at www.cs.cornell.edu/Projects/Quicksilver
QSM itself is available for download today.

20 Non-Goals
We don't aim at real-time guarantees
– We always try to deliver messages, even if late
We can sacrifice latency for throughput
– An unavoidable trade-off: buffering and scheduling (at high rates, systems aren't idle)
– But we're still in the 10-30 ms range for 1 KB messages
We don't do pub-sub filtering
– We provide multicast; Cayuga will do the filtering.
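A minimal sketch of the buffering trade-off just described: batching small messages raises throughput at the cost of added latency. The flush thresholds are invented numbers, not QSM's defaults:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// Buffer messages and flush them as one batch, either when the batch
// is full or when the oldest buffered message has waited too long.
class BatchingSender
{
    readonly List<byte[]> buffer = new List<byte[]>();
    readonly int maxBatch = 64;                                   // invented threshold
    readonly TimeSpan maxDelay = TimeSpan.FromMilliseconds(10);   // invented threshold
    DateTime oldest = DateTime.MaxValue;

    public void Send(byte[] msg)
    {
        if (buffer.Count == 0) oldest = DateTime.UtcNow;
        buffer.Add(msg);
        // Checked on each send; a real sender would also flush on a timer.
        if (buffer.Count >= maxBatch || DateTime.UtcNow - oldest >= maxDelay)
            Flush();
    }

    void Flush()
    {
        Console.WriteLine($"flushing batch of {buffer.Count} messages");
        buffer.Clear(); // in a real sender this would be one multicast
    }

    static void Main()
    {
        var s = new BatchingSender();
        for (int i = 0; i < 200; i++) { s.Send(new byte[1024]); Thread.Sleep(1); }
    }
}
```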

21 (image-only slide)

22 Why multiple groups?
Groups are out there in the wild...

23 Why multiple groups?
Groups are easy to think of... Why not use them like we would use files?
– A separate group for each: event, category of data items, user requests, stock, category of products, type of service
– May lead to new, easier ways of programming
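A toy illustration of this one-group-per-entity pattern, with invented topic paths (the deck does not show QSM's real naming scheme):

```csharp
using System;
using System.Collections.Generic;

// Each domain entity gets its own group, addressed through a file-like path.
class TopicNaming
{
    static string StockTopic(string ticker) => $"/market/stocks/{ticker}";
    static string ServiceTopic(string name) => $"/services/{name}";

    static void Main()
    {
        var topics = new List<string>
        {
            StockTopic("IBM"), StockTopic("MSFT"), ServiceTopic("pricing")
        };
        // With scalable multicast, opening thousands of such groups is
        // meant to be as cheap as opening files.
        topics.ForEach(Console.WriteLine);
    }
}
```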

24 Limitations of existing approaches
Existing protocols aren't enough
– Designed to scale in one dimension at a time
Overheads
Bottlenecks
– Costly to run (typically CPU-bound)
– Example: JGroups
Popular, and considered a solid platform; part of JBoss
Runs in a managed environment (Java)

25 Limitations of existing approaches
[Chart: 1 sender, 1 group, sending as fast as possible; cluster of PIII 1.3 GHz, 512 MB, 100 Mbps]

26 Limitations of existing approaches
[Chart: 1 sender, sending as fast as possible; all groups have the exact same members]

27 Limitations of existing approaches
Lightweight groups:
– Overloaded agents
– Wasted bandwidth
– Filtering on receive
– Extra network hops
Protocol per group:
– ACK/NAK overload

28 QSM was tested on 110 nodes
[Chart: 1 group, sending as fast as possible, rates set manually]

29 QSM was tested on 8192 groups
[Chart: 1 sender; groups perfectly overlap]

30 QSM is very cheap to run
[Chart: 1 group, 110 nodes, 1-2 senders]

31 QSM is very cheap to run...
[Chart: 1 sender, 1 group, 110 nodes, maximum rate]

32 QSM has an acceptable latency...

33 ...yet sometimes it needs tuning
The default buffering settings lead to higher latencies for large messages; lower latencies are achievable by manually tuning its settings

34 QSM tolerates bursty packet loss
[Chart: 1 sender, 110 nodes. Once every 10 s, a selected receiver node drops every incoming packet, including data and control, for a period of 1 s, then returns to normal for the remaining 9 seconds]

35 QSM tolerates bursty packet loss
[Chart: 1 sender, 110 nodes; loss occurs every 10 s as before, but the duration of the loss varies]

36 QSM tolerates node crashes...

37 ...and much worse scenarios
Worst-case scenario: a node freezes for 10 s in the middle of the run, but then resumes and triggers a substantial amount of loss recovery

38 Cumulative effect of perturbations
Cumulative delay: how much extra time we need to send the same N messages as a result of the perturbation
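In symbols (our phrasing of the slide's definition): cumulative delay = T_perturbed(N) − T_unperturbed(N), where T(N) is the time needed to deliver the same N messages with and without the perturbation.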

39 QSM doesn't collapse, but might oscillate
[Chart: 2 senders, 110 nodes, trying to send at a rate that exceeds the maximum of what can be achieved]

40 Key Insight: Regions
G(x) = the set of groups node x is a member of
x and y are in the same region if G(x) = G(y)
Interest sharing
– Receiving the same messages
Fate sharing
– Experiencing the same load and burstiness
– Experiencing the same losses
– Being similarly affected by churn, crashes, etc.
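The region rule is easy to state in code. Here is a sketch that partitions nodes by their group sets; the membership data is invented, and this is our illustration of the definition, not QSM's internals:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Nodes x and y fall in the same region iff G(x) = G(y), i.e. they
// subscribe to exactly the same set of groups.
class Regions
{
    static void Main()
    {
        var membership = new Dictionary<string, HashSet<string>>
        {
            ["x"] = new HashSet<string> { "A", "B" },
            ["y"] = new HashSet<string> { "A", "B" }, // same set as x -> same region
            ["z"] = new HashSet<string> { "A" },      // different set -> own region
        };

        // Canonicalize each node's group set to a sorted key, then group.
        var regions = membership
            .GroupBy(kv => string.Join(",", kv.Value.OrderBy(g => g)))
            .Select(r => (Groups: r.Key, Nodes: r.Select(kv => kv.Key).ToList()));

        foreach (var r in regions)
            Console.WriteLine($"region {{{r.Groups}}}: {string.Join(",", r.Nodes)}");
    }
}
```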

41 Key Insights: Regions

42 (image-only slide)

43 (image-only slide)

44 Key Insights: Internals

45 As of today...
Already available:
– Multicast scalable in multiple dimensions
– Simple reliability model (keep trying until ACKed)
– Simple messaging API
Work in progress:
– Extending with strong reliability
– Support for the WS-* APIs, typed endpoints
– Request-reply communication
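A minimal sketch of the "keep trying until ACKed" model. The send primitive and the simulated lossy link are stand-ins; QSM's real recovery protocol is regional and more elaborate:

```csharp
using System;
using System.Threading.Tasks;

// Retransmit until acknowledged: there is no delivery deadline, so the
// message eventually arrives even if late.
class RetryUntilAcked
{
    // Hypothetical send primitive: returns true if the ACK arrived in time.
    static Task<bool> SendAndAwaitAck(byte[] msg, TimeSpan timeout)
        => Task.FromResult(new Random().Next(4) == 0); // simulated lossy link

    static async Task Main()
    {
        byte[] msg = { 1, 2, 3 };
        var timeout = TimeSpan.FromMilliseconds(50);
        int attempt = 0;
        while (!await SendAndAwaitAck(msg, timeout))
            Console.WriteLine($"attempt {++attempt}: no ACK, retrying");
        Console.WriteLine("delivered");
    }
}
```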

46 Deployment scenarios
Default architecture
– Independent processes
There can be multiple processes on each node
Processes are not cooperating
But: best performance only with one process per node! (Could be different on multi-processor machines.)
– Coordinator: Global Membership Service (GMS) and Failure Detector (FD)
Currently a single-machine design

47 Deployment scenarios

48 Architecture with the local daemon
– Support multiple processes more smoothly
– Support WS-BrokeredNotification
– Support WS-Eventing
– Support non-.NET applications by linking with a separate thin library
Only needs to talk to the local daemon
No need to implement any part of the QSM protocol
Small, and might be written in any language!
– Currently in progress...
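A sketch of what such a thin library might look like: it forwards requests over loopback and never touches the QSM protocol itself. The port and wire format are invented for illustration, and running it would require a daemon listening locally:

```csharp
using System;
using System.IO;
using System.Net.Sockets;

// The client never speaks the multicast protocol; it just hands publish
// requests to a daemon on the same machine, which runs the real stack.
class ThinClient
{
    const int DaemonPort = 7000; // assumed local daemon port

    static void Publish(string topic, string payload)
    {
        using var tcp = new TcpClient("127.0.0.1", DaemonPort);
        using var w = new StreamWriter(tcp.GetStream());
        w.WriteLine($"PUBLISH {topic} {payload}"); // invented wire format
        w.Flush();
    }

    static void Main() => Publish("/market/stocks/IBM", "98.75");
}
```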

49 Deployment scenarios

50 Eventing

51 Conclusions
QSM currently delivers multicast scalable in multiple dimensions, with basic reliability properties
Future:
– We are adding support for WS-* APIs
– We are extending the robustness and reliability
– Two new dimensions: request-reply and pub-sub modes of communication
– Strong typing of groups (e.g. for security)

52 Publications
Krzysztof Ostrowski, Ken Birman, and Amar Phanishayee. QuickSilver Scalable Multicast.
Krzysztof Ostrowski and Ken Birman. Extensible Web Services Architecture for Notification in Large-Scale Systems.
http://www.cs.cornell.edu/projects/quicksilver/pubs.html

