1 Distributed Programming and Consistency: Principles and Practice – Peter Alvaro, Neil Conway, Joseph M. Hellerstein (UC Berkeley)

2 Part I: Principles

3 Motivation

4 Why are distributed systems hard? Uncertainty in communication

5 Why are distributed systems hard? (figure: one general sends "We attack at dawn" – did the message arrive? did the acknowledgment?)

6 Why are distributed systems hard? (figure: "Wait for my signal. Then attack!")

7 Why are distributed systems hard? (figure: "Attack!" "No, WAIT!" – did the retraction arrive in time?)

8 Distributed systems are easier when messages are

9 Reorderable

10 Distributed systems are easier when messages are Reorderable Retryable

11 Distributed systems are easier when messages are Reorderable Retryable Retraction-free

12 (notes) Point to make: convergent objects are NOT retraction-free.

13 Context: replicated distributed systems Distributed: connected (but not always, and not always well). Replicated: redundant data.

14

15 Context: replicated distributed systems Running example: a key-value store. put(“dinner”, “pizza”) stores dinner = pizza; get(“dinner”) returns “pizza”.
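
A minimal sketch of the running example's interface, using a hypothetical (non-replicated, in-memory) KVStore class purely to fix the put/get vocabulary; this is not the talk's code.

    # Hypothetical single-node key-value store: just enough to show put/get.
    class KVStore
      def initialize
        @data = {}
      end

      def put(key, value)
        @data[key] = value
      end

      def get(key)
        @data[key]
      end
    end

    store = KVStore.new
    store.put("dinner", "pizza")
    store.get("dinner")   # => "pizza"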

16 Context: replicated distributed systems Distributed => replication is desirable. Distributed => consistency is expensive.

17 Consistency? Definitions abound: pick one, but try to be consistent…

18 What isn’t consistent? Replication anomalies: Read anomalies (staleness) Write divergence (concurrent updates)

19 Anomalies Stale reads: put(“dinner”, “pasta”) updates one replica (dinner = pasta), but a get(“dinner”) at a replica still holding dinner = pizza returns “pizza”.

20 Anomalies Write conflicts: a concurrent put(“dessert”, “cake”) at one replica (dessert = cake) and put(“dessert”, “fruit”) at another (dessert = fruit) – which write wins?
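
A sketch of how that divergence arises when the same concurrent writes reach two replicas in different orders (plain Hashes standing in for replicas; illustrative only).

    replica_a = {}
    replica_b = {}

    # Replica A hears "cake" before "fruit"...
    replica_a["dessert"] = "cake"
    replica_a["dessert"] = "fruit"

    # ...while replica B hears the same two writes in the opposite order.
    replica_b["dessert"] = "fruit"
    replica_b["dessert"] = "cake"

    replica_a["dessert"]   # => "fruit"
    replica_b["dessert"]   # => "cake"  -- the replicas have diverged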

21 Consistency Anomalies witness replication. A consistent replicated datastore rules out (some) replication anomalies.

22 Consistency models Strong consistency Eventual consistency Weaker models

23 Strong consistency AKA "single copy" consistency. Replication is transparent; no witnesses of replication.

24 Strong consistency Reads that could witness replication block; concurrent writes take turns.

25 Strong consistency Some strategies:
- Single master with synchronous replication – writes are totally ordered, reads see the latest values
- Quorum systems – a (majority) ensemble simulates a single master
- "State machine replication" – use consensus to establish a total order over reads and writes system-wide
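
A rough sketch of the quorum idea with hypothetical replica objects (not the talk's implementation): picking read and write quorum sizes with R + W > N forces every read quorum to overlap every write quorum in at least one replica, so reads see the latest acknowledged write.

    # N replicas, each a plain Hash; write to W of them, read from R of them.
    N, W, R = 3, 2, 2                  # R + W > N, so read and write quorums overlap
    REPLICAS = Array.new(N) { {} }

    def quorum_write(key, value, version)
      REPLICAS.sample(W).each { |r| r[key] = [version, value] }   # wait for W acks
    end

    def quorum_read(key)
      responses = REPLICAS.sample(R).map { |r| r[key] }.compact
      responses.max_by { |version, _| version }                   # latest version wins
    end

    quorum_write("dinner", "pizza", 1)
    quorum_read("dinner")   # => [1, "pizza"]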

26 Strong consistency Drawbacks: Latency, availability, partition tolerance

27 Eventual Consistency Tolerate stale reads and concurrent writes. Ensure that eventually* all replicas converge. (* when activity has ceased and all messages are delivered to all replicas)

28 Eventual Consistency Strategies: Establish a total update order off the critical path (e.g., Bayou) – epidemic (gossip-based) replication – tentatively apply, then possibly retract, updates as the order is learned.

29 Eventual Consistency Strategies: Deliver updates according to a "cheap" order (e.g., causal) – break ties with timestamps, merge functions. Hmmmm…
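
One common shape for "break ties with timestamps" is a last-writer-wins register: each write carries a timestamp and the merge keeps the larger one. A hypothetical sketch, not the talk's code; note that LWW converges by silently dropping one of the conflicting writes.

    # Last-writer-wins register: merge keeps the write with the larger timestamp.
    # Timestamp ties are broken by value so the merge stays commutative.
    LWW = Struct.new(:timestamp, :value) do
      def merge(other)
        mine, theirs = [timestamp, value.to_s], [other.timestamp, other.value.to_s]
        (mine <=> theirs) >= 0 ? self : other
      end
    end

    a = LWW.new(10, "cake")
    b = LWW.new(12, "fruit")
    a.merge(b).value   # => "fruit"
    b.merge(a).value   # => "fruit" (same answer in either merge order)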

30 Eventual Consistency Strategies: Constrain the application so that updates are reorderable. Won't always work. When will it work?

31 Eventual consistency – more definitions
Convergence – conflicting writes are uniformly resolved; reads eventually return current data (state-centric)
Confluence – a program has deterministic executions; output is a function of input (program-centric)

32 Eventual consistency – more definitions Confluence is a strong correctness criterion – not all programs are meant to be deterministic. But it's a nice property – e.g., for replay-based debugging, or because the meaning of a program is its output (not its "traces"). Confluent => convergent.

33 Eventual consistency – more definitions Confluent => convergent Deterministic executions imply replica agreement

34 Eventual consistency – more definitions But convergent state does not imply deterministic executions

35 (peter notes) EC systems focus only on controlling write anomalies (stale reads are always fair game, though session guarantees may restrict which read anomalies can happen). EC systems are convergent – eventually there are no divergent states. Deterministic => convergent (but not the other way). Determinism is compositional: two deterministic systems glued together make a deterministic system. Convergence is not compositional: two convergent systems glued together do not necessarily make a convergent system (e.g., if the glue is nonmonotonic).

36 (peter notes) Guarded asynchrony – need to carefully explain the significance of this. Essentially, confluent programs cannot allow one-shot queries on changing state (even monotonically changing state). One-shot queries must be converted into subscriptions to a stream of updates – that way, we are guaranteed to see the last update to a given lattice.

37 (joe notes) There's stuff you can do in the storage layer, or you can pop up a level. Layer vs. language: sequential emulation at the storage layer; CRDTs – a state-centric attempt to achieve relaxed ordering (object-by-object); then Bloom. The programming model should match the computation model.

38 Distributed design patterns for eventual consistency

39 ACID 2.0 "The classic ACID has the goal to make the application perceive that there is exactly one computer and it is doing nothing else while this transaction is being processed. Consider the new ACID (or ACID2.0). The letters stand for: Associative, Commutative, Idempotent, and Distributed. The goal for ACID2.0 is to succeed if the pieces of the work happen: at least once, anywhere in the system, in any order." – Pat Helland, Building on Quicksand

40 ACID 2.0 Associative – operations can be "eagerly" processed. Commutative – operations can be reordered. Idempotent – retry is always an option. Distributed – (needed a "D").

41 ACID 2.0 Instead of low-level reads and writes, programmers use an abstract vocabulary of reorderable, retryable actions: Retry – a mechanism to ensure that all messages are delivered; Reorderability – ensures that all replicas converge.

42 Putting ACID 2.0 into practice

43
1. CRDTs – a state-based approach: keep distributed state in data structures providing only ACI methods
2. Disorderly programming – a language-based approach: encourage structuring computation using reorderable statements and data

44 Formalizing ACID 2.0 ACI are precisely the properties that define the LUB (least upper bound) operation in a join semilattice. If states form a lattice, we can always merge states using the LUB.
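
To make the claim concrete, here is a tiny check of the ACI laws for one familiar merge, set union (the least upper bound of the powerset lattice); the values are illustrative only.

    require 'set'

    # Set union is associative, commutative, and idempotent -- exactly the
    # laws a join-semilattice merge (LUB) must satisfy.
    a = Set["x"]
    b = Set["y"]
    c = Set["z"]

    (a | (b | c)) == ((a | b) | c)   # => true  (associative)
    (a | b) == (b | a)               # => true  (commutative)
    (a | a) == a                     # => true  (idempotent)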

45 C(v)RDTs – Convergent Replicated Datatypes. Idea: represent state as a join semilattice; provide an ACI merge function.

46 CRDTs Data structures:
1. Grow-only set (G-set) – trivial: merge is union, and union is commutative
2. 2P-Set – two G-sets, one for adds, the other for tombstones; idiosyncrasy: you can only add/delete a given element once
3. Counters – tricky! Keep a vector clock with an entry for each replica; an increment at replica i does VC[i] += 1; the counter's value is the sum of all VC entries
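
A sketch of the "tricky" counter in plain Ruby, assuming a hypothetical GCounter class (not the talk's code): one entry per replica, increments bump only the local entry, merge takes the entry-wise max, and the counter's value is the sum of the entries.

    # Hypothetical grow-only counter CRDT: per-replica entries, entry-wise max merge.
    class GCounter
      attr_reader :entries

      def initialize(replica_id, entries = {})
        @replica_id = replica_id
        @entries = entries
      end

      def increment(n = 1)
        @entries[@replica_id] = (@entries[@replica_id] || 0) + n
      end

      def value
        @entries.values.sum                       # sum over all replicas' entries
      end

      def merge(other)
        merged = @entries.merge(other.entries) { |_id, a, b| [a, b].max }
        GCounter.new(@replica_id, merged)
      end
    end

    x = GCounter.new(:r1); x.increment; x.increment
    y = GCounter.new(:r2); y.increment
    x.merge(y).value   # => 3, in either merge order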

47 CRDTs Difficulties: Convergent objects alone are not strong enough to build confluent systems.

48 Asynchronous messaging You never really know

49 Asynchronous messaging (figure: messages A, B, C are sent; when and in what order they arrive is unknown)

50 Monotonic Logic The more you know, the more you know.

51 Monotonic Logic (figure: select/filter maps the input {A, B, C, D, E} to the output {A, C, E})

52 Monotonic Logic (figure: project/map turns {A, B, C} into {f(A), f(B), f(C)})

53 Monotonic Logic (figure: join/compose matches {A, B, C, D, E} against {B, D}, producing the matches {B, D})

54 Monotonic Logic is order-insensitive (figure: the same output {A, C, E} results no matter what order the input {A, B, C, D, E} arrives in)

55 Monotonic Logic is pipelineable (figure: outputs such as A, C, E can be emitted as soon as the corresponding inputs arrive, without waiting for the rest)

56 Nonmonotonic Logic When do you know for sure?

57 Nonmonotonic Logic (figure: {A, B, C, D, E} set minus {B, D}; answering before the second input is complete may force a Retraction! when new facts arrive)

58 Nonmonotonic logic is order-sensitive (figure: {A, B, C, D, E} set minus {B, D} yields {A, C, E}, but only once all of the second input has been seen; see the sketch below)
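
A plain-Ruby sketch of that order-sensitivity (illustrative names only): answering a set minus before the second input is complete can force a retraction, whereas a monotone operator like select/filter never retracts.

    require 'set'

    input    = Set["A", "B", "C", "D", "E"]
    subtract = Set["B"]                   # so far we have only heard about "B"

    early = input - subtract              # {"A","C","D","E"} -- answered too soon
    subtract << "D"                       # a late-arriving fact
    final = input - subtract              # {"A","C","E"} -- "D" must be retracted

    # Contrast: a monotone filter only ever adds to its output as input grows.
    input.select { |x| x != "B" && x != "D" }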

59 Nonmonotonic logic is blocking (figure: A flows into a set minus whose other input is not yet known – is A in the output? We can't say yet)

60 Nonmonotonic logic is blocking (figure: the answer can only be emitted once the other input is "sealed", i.e., known to be complete)

61 CALM Analysis Asynchrony => loss of order. Nonmonotonicity => order-sensitivity. Asynchrony + Nonmonotonicity => Inconsistency […]

62 CALM Analysis Asynchrony => loss of order. Nonmonotonicity => order-sensitivity. Asynchrony + Nonmonotonicity => Inconsistency ? "Point of Order" […]

63 Disorderly programming An aside about logic programming: in (classical) logic, theories are: Associative and commutative (consequences are the same regardless of the order in which we make deductions); Idempotent (axioms can be reiterated freely).

64 Disorderly programming An aside about logic programming: In (classical) logic, theories are Associative, Commutative, and Idempotent because Knowledge is monotonic: The more you know, the more you know

65 Disorderly programming An aside about logic programming: it is challenging to even talk about order in logic programming languages [Dedalus]. Yet we can build …

66 Disorderly programming Idea: embody the ACID 2.0 design patterns in how we structure distributed programs. Disorderly data: unordered relations Disorderly code: specify how data changes over time

67 Bloom

68 Bloom Rules

    multicast <~ (message * members) do |mes, mem|
      [mem.address, mes.id, mes.payload]
    end
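
For context, a minimal sketch of a Bud class such a rule might live in. The collection names and schemas below (members, message, multicast) are taken from the rule itself; the surrounding declarations are an assumption, not the talk's program, and the join is written with Bud's explicit .pairs form.

    require 'bud'    # assumes the bud gem

    class Multicaster
      include Bud

      state do
        table   :members, [:address]                    # known peers
        table   :message, [:id, :payload]               # payloads to disseminate
        channel :multicast, [:@address, :id, :payload]  # network output ("<~" = async send)
      end

      bloom do
        # Join every message with every member and send one copy per address.
        multicast <~ (message * members).pairs do |mes, mem|
          [mem.address, mes.id, mes.payload]
        end
      end
    end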

69 Operational model

70 (figure: lattices growing over time) Set (Union): "growth" = larger sets; Integer (Max): "growth" = larger numbers; Boolean (Or): "growth" = false → true

71 (figure: composing lattices over time) Set (merge = Union) → Integer (merge = Max) via size(), a monotone function from set to max; then → Boolean (merge = Or) via size() >= 5, a monotone function from max to boolean
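
A plain-Ruby sketch of that chain (outside Bloom, names illustrative): as the set only grows, its size only grows, and the size() >= 5 test goes from false to true at most once and never flips back.

    require 'set'

    votes = Set.new
    observations = []

    %w[a b c d e f].each do |voter|
      votes << voter                        # the set only grows (merge = union)
      observations << (votes.size >= 5)     # size is monotone in the set; >= 5 is monotone in size
    end

    observations  # => [false, false, false, false, true, true] -- flips once, never back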

72 Builtin Lattices (name; description; bottom ⊥; merge a ⊔ b; sample monotone functions):
- lbool: threshold test; ⊥ = false; a ∨ b; when_true() → v
- lmax: increasing number; ⊥ = −∞; max(a, b); gt(n) → lbool, +(n) → lmax, -(n) → lmax
- lmin: decreasing number; ⊥ = +∞; min(a, b); lt(n) → lbool
- lset: set of values; ⊥ = ∅; a ∪ b; intersect(lset) → lset, product(lset) → lset, contains?(v) → lbool, size() → lmax
- lpset: non-negative set; ⊥ = ∅; a ∪ b; sum() → lmax
- lbag: multiset of values; ⊥ = ∅; a ∪ b (multiset union); mult(v) → lmax, +(lbag) → lbag
- lmap: map from keys to lattice values; ⊥ = empty map; key-wise merge; at(v) → any-lat, intersect(lmap) → lmap

73 Quorum Vote in Bloom^L (an annotated Ruby class):

    QUORUM_SIZE = 5
    RESULT_ADDR = "example.org"

    class QuorumVote
      include Bud

      state do    # program state: communication interfaces + lattice declarations
        channel :vote_chn, [:@addr, :voter_id]
        channel :result_chn, [:@addr]
        lset  :votes        # merge function for set lattice
        lmax  :vote_cnt
        lbool :got_quorum
      end

      bloom do    # program logic
        votes      <= vote_chn {|v| v.voter_id}               # accumulate votes into set
        vote_cnt   <= votes.size                              # map set -> max
        got_quorum <= vote_cnt.gt_eq(QUORUM_SIZE)             # map max -> bool
        result_chn <~ got_quorum.when_true { [RESULT_ADDR] }  # threshold test on bool
      end
    end

74 Convergence – a 2PSet
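
A sketch of such a two-phase set, assuming a hypothetical TwoPSet class (not the talk's code): two grow-only sets, one for adds and one for tombstones, merged pairwise by union; once removed, an element can never return.

    require 'set'

    # Hypothetical 2P-Set CRDT: two G-sets merged pairwise by union.
    class TwoPSet
      attr_reader :adds, :removes

      def initialize(adds = Set.new, removes = Set.new)
        @adds = adds
        @removes = removes
      end

      def add(x)
        @adds << x
      end

      def remove(x)
        @removes << x          # tombstone: once removed, x can never be re-added
      end

      def contains?(x)
        @adds.include?(x) && !@removes.include?(x)
      end

      def merge(other)
        TwoPSet.new(@adds | other.adds, @removes | other.removes)
      end
    end

    a = TwoPSet.new; a.add("pizza")
    b = TwoPSet.new; b.add("pizza"); b.remove("pizza")
    a.merge(b).contains?("pizza")   # => false, regardless of merge order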

75 The difficulty with queries "The work of multiple transactions can interleave as long as they are doing the commutative operations. If any transaction dares to READ the value, that does not commute, is annoying, and stops other concurrent work." – Pat Helland

