CS 347: Parallel and Distributed Data Management, Notes 07: Data Replication. Hector Garcia-Molina.

Data Replication: the model. Assume a reliable network and fail-stop nodes. The database is divided into fragments (items), and each fragment is replicated at several nodes (node 1, node 2, node 3). CS 347 Notes07

For the time being, study a single fragment. Data replication → higher availability. CS 347 Notes07

Outline Basic Algorithms Improved (Higher Availability) Algorithms Multiple Fragments & Other Issues CS 347 Notes07

Basic Solution (for concurrency control): treat each copy as an independent data item. Object X has copies X1, X2, X3, each with its own lock manager; transactions Txi, Txj, Txk lock each copy independently. CS 347 Notes07

Read(X): get shared locks on X1, X2, and X3; read any one of X1, X2, X3; at end of transaction, release the X1, X2, X3 locks. CS 347 Notes07

Write(X): get exclusive locks on X1, X2, and X3; write the new value into X1, X2, X3; at end of transaction, release the X1, X2, X3 locks. CS 347 Notes07
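
To make the read-all/write-all discipline concrete, here is a minimal single-process Python sketch (illustrative names, not from the notes; a real system would use distributed lock managers and blocking 2PL rather than raising on conflict):

```python
# A minimal, single-process sketch of the basic RAWA scheme: every copy has its
# own lock manager, readers lock ALL copies in shared mode, writers lock ALL
# copies in exclusive mode, and locks are held until end of transaction (2PL).
# Names (LockManager, read_X, write_X) are illustrative, not from the notes.

class LockManager:
    def __init__(self):
        self.shared = set()      # transaction ids holding a shared lock
        self.exclusive = None    # transaction id holding the exclusive lock

    def lock_shared(self, tx):
        if self.exclusive not in (None, tx):
            raise RuntimeError("conflict: exclusive lock held")
        self.shared.add(tx)

    def lock_exclusive(self, tx):
        if self.exclusive not in (None, tx) or self.shared - {tx}:
            raise RuntimeError("conflict: lock held by another transaction")
        self.exclusive = tx

    def release(self, tx):
        self.shared.discard(tx)
        if self.exclusive == tx:
            self.exclusive = None

# Object X has copies X1, X2, X3, each with its own value and lock manager.
copies = {name: {"value": 0, "lm": LockManager()} for name in ("X1", "X2", "X3")}

def read_X(tx):
    for c in copies.values():          # shared-lock every copy ...
        c["lm"].lock_shared(tx)
    return copies["X1"]["value"]       # ... but read just one of them

def write_X(tx, value):
    for c in copies.values():          # exclusive-lock every copy ...
        c["lm"].lock_exclusive(tx)
    for c in copies.values():          # ... and write the new value into all
        c["value"] = value

def end_transaction(tx):               # release all locks at commit/abort
    for c in copies.values():
        c["lm"].release(tx)

write_X("T1", 42); end_transaction("T1")
print(read_X("T2")); end_transaction("T2")   # -> 42
```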

Correctness is OK: 2PL → serializability, 2PC → atomic transactions. Problem: low availability. If any copy (say X2) is down, we cannot access X at all! CS 347 Notes07

Basic Solution — Improvement: readers lock and access a single copy; writers lock all copies and update all copies. Good availability for reads, poor availability for writes: a reader holding a lock on just one copy will conflict with any writer, and a writer blocks if any copy is down. CS 347 Notes07

Reminder: with the basic solution, use standard 2PL and standard commit protocols. CS 347 Notes07

Variation on Basic: Primary Copy. Select a primary site (static for now; X1* is the primary copy). Readers lock and access the primary copy; writers lock the primary copy and update all copies. CS 347 Notes07

Commit Options for the Primary Site Scheme — Local Commit: the writer locks, writes, and commits at the primary (X1*); the update is then propagated to X2 and X3. CS 347 Notes07

Commit Options for the Primary Site Scheme — Local Commit. Write(X): get an exclusive lock on X1*; write the new value into X1*; commit at the primary and obtain a sequence number; then perform the X2, X3 updates in sequence-number order. CS 347 Notes07
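
A small Python sketch of this local-commit path, under assumed names: the primary commits, assigns a sequence number, and a backup applies propagated updates strictly in sequence-number order:

```python
import heapq

# Primary commits locally and stamps each update with a commit sequence number.
class Primary:
    def __init__(self):
        self.data = {}
        self.next_seq = 1
        self.backups = []

    def write_and_commit(self, item, value):
        # (exclusive lock on the primary copy assumed to be held here)
        self.data[item] = value
        seq = self.next_seq
        self.next_seq += 1
        for b in self.backups:              # propagated asynchronously in reality
            b.receive(seq, item, value)
        return seq

# Backup buffers out-of-order updates and applies them in sequence order.
class Backup:
    def __init__(self):
        self.data = {}
        self.applied_up_to = 0
        self.pending = []                    # min-heap of (seq, item, value)

    def receive(self, seq, item, value):
        heapq.heappush(self.pending, (seq, item, value))
        while self.pending and self.pending[0][0] == self.applied_up_to + 1:
            s, it, v = heapq.heappop(self.pending)
            self.data[it] = v
            self.applied_up_to = s

p, b = Primary(), Backup()
p.backups.append(b)
p.write_and_commit("X", 1)      # seq 1
p.write_and_commit("Y", 1)      # seq 2
print(b.data, b.applied_up_to)  # {'X': 1, 'Y': 1} 2
```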

Example, t = 0. Node 1 (primary): X1 = 0, Y1 = 0, Z1 = 0. Node 2: X2 = 0, Y2 = 0, Z2 = 0. Transactions: T1: X ← 1; Y ← 1. T2: Y ← 2. T3: Z ← 3. CS 347 Notes07

Example, t = 1. Node 1 (primary): X1 = 1, Y1 = 1, Z1 = 3. Node 2: X2 = 0, Y2 = 0, Z2 = 0. T1 and T3 are active at node 1; T2 is waiting for a lock at node 1. CS 347 Notes07

Example, t = 2. Node 1 (primary): X1 = 1, Y1 = 2, Z1 = 3. Node 2: X2 = 0, Y2 = 0, Z2 = 0. T1 and T3 have committed, T2 is active at node 1, and update #1 (Z ← 3) is being propagated to node 2. CS 347 Notes07

Example, t = 3. Node 1 (primary): X1 = 1, Y1 = 2, Z1 = 3. Node 2: X2 = 1, Y2 = 2, Z2 = 3. All of T1, T2, T3 have committed, and their updates have reached node 2. CS 347 Notes07

What good is RPWP-LC? Updates go only to the primary, and under the rules so far the backup copies cannot be read. CS 347 Notes07

What good is RPWP-LC? Answer: one can read an "out-of-date" backup copy (also useful with 1-safe backups... later). CS 347 Notes07

Commit Options for the Primary Site Scheme — Distributed Commit (phase 1): the writer locks and computes the new value at the primary (X1*); the primary sends "prepare to write new value" to X2 and X3, which reply "ok". CS 347 Notes07

Commit Options for the Primary Site Scheme — Distributed Commit (phase 2): once the copies have answered "ok", the primary sends commit (write) to X2 and X3, and all copies install the new value. CS 347 Notes07
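
A sketch of the distributed-commit option, with the primary acting as a two-phase-commit coordinator over the copies (in-memory stand-ins; no failures, logging, or timeouts modeled; names are illustrative):

```python
# Simplified in-memory 2PC for the primary-site distributed-commit option:
# phase 1 sends "prepare to write new value" to every copy, phase 2 commits
# (or aborts) at every copy.

class CopySite:
    def __init__(self, name):
        self.name = name
        self.value = 0
        self.prepared = None     # staged new value while in the prepared state

    def prepare(self, new_value):
        self.prepared = new_value
        return "ok"              # vote yes; a real site would force-log first

    def commit(self):
        self.value = self.prepared
        self.prepared = None

    def abort(self):
        self.prepared = None

def write_with_distributed_commit(primary, others, new_value):
    sites = [primary] + others
    # Phase 1: primary (lock held, new value computed) asks all copies to prepare.
    votes = [s.prepare(new_value) for s in sites]
    # Phase 2: commit everywhere only if every copy answered "ok".
    if all(v == "ok" for v in votes):
        for s in sites:
            s.commit()
        return True
    for s in sites:
        s.abort()
    return False

x1, x2, x3 = CopySite("X1*"), CopySite("X2"), CopySite("X3")
write_with_distributed_commit(x1, [x2, x3], 7)
print(x1.value, x2.value, x3.value)   # 7 7 7
```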

Example (distributed commit), initial state. Node 1 (primary): X1 = 0, Y1 = 0, Z1 = 0. Node 2: X2 = 0, Y2 = 0, Z2 = 0. Transactions: T1: X ← 1; Y ← 1. T2: Y ← 2. T3: Z ← 3. CS 347 Notes07

Basic Solution variants: read lock all, write lock all (RAWA); read lock one, write lock all (ROWA); read and write lock the primary (RPWP), with local commit (LC) or distributed commit (DC). CS 347 Notes07

Comparison N = number of nodes with copies P = probability that a node is operational CS 347 Notes07

Comparison. N = number of nodes with copies; P = probability that a node is operational.
Scheme     Read availability    Write availability
RAWA       P^N                  P^N
ROWA       1 - (1-P)^N          P^N
RPWP:LC    P                    P
RPWP:DC    P                    P^N
CS 347 Notes07

Comparison N = 5 = number of nodes with copies P = 0.99 = probability that a node is operational CS 347 Notes07

Comparison N = 100 = number of nodes with copies P = 0.99 = probability that a node is operational CS 347 Notes07

Comparison N = 5 = number of nodes with copies P = 0.90 = probability that a node is operational CS 347 Notes07
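
As a concrete check of these comparison slides, here is a small Python calculation (not from the notes) of read and write availability under the independent-failure formulas in the table above:

```python
# Read/write availability under independent node failures, using the formulas
# from the comparison table: RAWA needs all N copies up (P**N) for reads and
# writes; ROWA needs one copy up to read (1 - (1-P)**N) but all N to write;
# RPWP:LC needs only the primary (P) for both; RPWP:DC needs the primary to
# read (P) but all N copies to write (P**N).

def availability(N, P):
    return {
        "RAWA":    (P**N,            P**N),
        "ROWA":    (1 - (1 - P)**N,  P**N),
        "RPWP:LC": (P,               P),
        "RPWP:DC": (P,               P**N),
    }

for N, P in [(5, 0.99), (100, 0.99), (5, 0.90)]:
    print(f"N={N}, P={P}")
    for scheme, (r, w) in availability(N, P).items():
        print(f"  {scheme:8s} read={r:.4f}  write={w:.4f}")
```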

Outline Basic Algorithms Improved (Higher Availability) Algorithms Mobile Primary Available Copies Multiple Fragments & Other Issues CS 347 Notes07

Mobile Primary (with RPWP). When the primary fails, one of the backups takes over: (1) elect a new primary; (2) ensure the new primary has seen all previously committed transactions; (3) resolve pending transactions; (4) resume processing. CS 347 Notes07

(1) Elections can be tricky... One idea: nodes have IDs, and the largest ID wins. CS 347 Notes07

(1) Elections: one scheme. (a) Broadcast "I want to be primary, ID=X". (b) Wait long enough so that anyone with a larger ID can stop my takeover. (c) If I see an "I want to be primary" message with a smaller ID, kill that takeover. (d) If the wait ends without my seeing a bigger ID, I am the new primary! CS 347 Notes07
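
A toy, single-process rendition of this largest-ID-wins takeover (bully-style) scheme; the shared node list, direct method calls, and the fixed wait below are stand-ins for real broadcast and timeouts:

```python
import time

# Each node broadcasts "I want to be primary", kills takeovers by smaller IDs,
# and declares itself primary if it waits long enough without being killed.
# Node, broadcast, and try_become_primary are illustrative names.

class Node:
    def __init__(self, node_id, all_nodes):
        self.id = node_id
        self.all_nodes = all_nodes      # shared list of Node objects
        self.killed = False

    def broadcast(self, msg):
        for n in self.all_nodes:
            if n is not self:
                n.on_message(msg)

    def on_message(self, msg):
        # (c) If I see a takeover attempt with a smaller ID, kill it.
        if msg["type"] == "takeover" and msg["id"] < self.id and not self.killed:
            self.broadcast({"type": "kill", "id": msg["id"]})
        if msg["type"] == "kill" and msg["id"] == self.id:
            self.killed = True

    def try_become_primary(self, wait_seconds=0.01):
        # (a) Broadcast "I want to be primary, ID=X".
        self.broadcast({"type": "takeover", "id": self.id})
        # (b)/(d) Wait; if nobody killed my takeover, I am the new primary.
        time.sleep(wait_seconds)
        return not self.killed

nodes = []
nodes.extend(Node(i, nodes) for i in (1, 5, 3))
results = {n.id: n.try_become_primary() for n in nodes}
print(results)   # only the node with the largest ID (5) should report True
```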

(1) Elections: Epoch Number. It is useful to attach an epoch or version number to messages: primary n3, epoch# = 1; primary n5, epoch# = 2; primary n3, epoch# = 3; ... CS 347 Notes07

(2) Ensure the new primary has seen all previously committed transactions. Example: the old primary committed T1 and T2; the backup taking over must get and apply T1 and T2. How can we make sure the new primary is up to date? More on this coming up... CS 347 Notes07

(3) Resolve pending transactions. Example: the failed primary and a backup both have T3 in the "W" (prepared, waiting for the commit decision) state; the new primary must decide what happens to T3. CS 347 Notes07

Failed Nodes: Example. Now: P1 is primary, P2 and P3 are backups but both are down, and P1 commits T1. Later: all of P1, P2, P3 are down. Still later: P2 and P3 recover while P1 is still down; P2 becomes primary with P3 as backup, and they commit T2 (unaware of T1!). CS 347 Notes07

RPWP:DC & 3PC take care of problem! Option A Failed node waits for: commit info from active node, or all nodes are up and recovering Option B Majority voting CS 347 Notes07

RPWP: DC & 3PC primary backup 1 backup 2 (1) T end work (2) send data (3) get acks (4) prepare (5) get acks (6) commit time may use 2PC... CS 347 Notes07

Node Recovery: all transactions carry a commit sequence number; active nodes save update values "as long as necessary"; a recovering node asks the active primary for the updates it missed and applies them in order. CS 347 Notes07

Example: Majority Commit. State at the three copies:
                 C1           C2        C3
C (committed)    T1,T2,T3     T1,T2     T1,T3
P (prepared)     -            T3        T2
W (waiting)      T4           T4        T4
CS 347 Notes07

Example: Majority Commit t1: C1 fails t2: C2 new primary t3: C2 commits T1, T2, T3; aborts T4 t4: C2 resumes processing t5: C2 commits T5, T6 t6: C1 recovers; asks C2 for latest state t7: C2 sends committed and pending transactions; C2 involves C1 in any future transactions CS 347 Notes07
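
A sketch of the catch-up in steps t6-t7 above, with hypothetical structures: the recovering copy reports the last commit sequence number it applied, and the new primary returns the committed transactions it missed plus the still-pending ones (which the recovering copy must write-lock until they are resolved):

```python
# Illustrative catch-up protocol for the recovery steps in the example above.
# PrimaryState/RecoveringCopy and the example transactions are assumptions,
# not the notes' exact scenario.

class PrimaryState:
    def __init__(self, committed, pending):
        self.committed = committed   # list of (seq, tx_name, updates)
        self.pending = pending       # list of tx_name still in the W/P state

    def catch_up(self, last_seq_applied):
        missed = [t for t in self.committed if t[0] > last_seq_applied]
        return missed, list(self.pending)

class RecoveringCopy:
    def __init__(self, last_seq_applied):
        self.last_seq = last_seq_applied
        self.data = {}
        self.locked = set()

    def recover_from(self, primary):
        missed, pending = primary.catch_up(self.last_seq)
        for seq, _tx, updates in missed:     # apply missed commits in order
            self.data.update(updates)
            self.last_seq = seq
        self.locked |= set(pending)          # hold write locks for pending txs

c2 = PrimaryState(
    committed=[(1, "T1", {"X": 1}), (2, "T2", {"Y": 2}), (3, "T3", {"Z": 3}),
               (4, "T5", {"X": 5}), (5, "T6", {"Y": 6})],
    pending=["T7"],
)
c1 = RecoveringCopy(last_seq_applied=2)      # failed after applying seq 1 and 2
c1.recover_from(c2)
print(c1.data, c1.last_seq, c1.locked)       # catches up through seq 5
```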

2-safe vs. 1-safe Backups. Up to now we have covered 2-safe backups (RPWP:DC). Timeline: (1) T ends its work at the primary; (2) the primary sends the data to backup 1 and backup 2; (3) gets acks; (4) commits. CS 347 Notes07

Guarantee: after transaction T commits at the primary, any future primary will "see" T. Now: the primary and backups 1 and 2 all have T1, T2, T3. Later: the primary fails and backup 1 becomes the next primary; it has T1, T2, T3, T4. CS 347 Notes07

Performance Hit. 3PC is very expensive: many messages, and locks are held longer (less concurrency) [Note: group commit may help]. Can use 2PC instead, but it may block, and 2PC is still expensive [delays of up to 1 second reported]. CS 347 Notes07

Alternative: 1-safe (RPWP:LC). Commit transactions unilaterally at the primary and send updates to the backups as soon as possible. Timeline: (1) T ends its work at the primary; (2) T commits; (3) the primary sends the data to backup 1 and backup 2; (4) the data is purged. CS 347 Notes07
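
The contrast between the two commit paths, as a toy Python sketch with assumed names: 2-safe waits for backup acks before committing at the primary, while 1-safe commits immediately and ships the update afterwards, so a badly timed primary crash can lose the transaction:

```python
# Contrast of the two commit disciplines on a toy in-memory backup. In the
# 2-safe path the primary commits only after every backup has acknowledged the
# update; in the 1-safe path it commits first and ships the update afterwards,
# which is faster but can lose the transaction if the primary fails in between.

class ToyBackup:
    def __init__(self):
        self.committed = []

    def receive_and_ack(self, tx):
        self.committed.append(tx)
        return "ack"

def commit_2safe(tx, backups):
    acks = [b.receive_and_ack(tx) for b in backups]   # send data, get acks
    assert all(a == "ack" for a in acks)
    return f"{tx} committed at primary (backups have it)"

def commit_1safe(tx, backups, primary_crashes_before_ship=False):
    result = f"{tx} committed at primary"             # commit unilaterally
    if primary_crashes_before_ship:
        return result + " -- LOST: backups never received it"
    for b in backups:                                 # ship as soon as possible
        b.receive_and_ack(tx)
    return result + " (backups caught up later)"

backups = [ToyBackup(), ToyBackup()]
print(commit_2safe("T1", backups))
print(commit_1safe("T2", backups))
print(commit_1safe("T3", backups, primary_crashes_before_ship=True))
```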

Problem: Lost Transactions. Now: the primary has T1, T2, T3, but backups 1 and 2 only have T1. Later: the primary fails, backup 1 takes over as the next primary and commits new work, ending with T1, T4, T5 (backup 2 has T1, T4); T2 and T3 are lost. CS 347 Notes07

Claim: the lost-transaction problem is tolerable, because failures are rare and only a "few" transactions are lost. CS 347 Notes07

Primary Recovery with 1-safe: when the failed primary recovers, it must "compensate" for the missed transactions. Now: the old primary has T1, T2, T3; the next primary has T1, T4, T5; backup 2 has T1, T4. Later: the old primary rejoins as backup 3 with history T1, T2, T3, T3⁻¹, T2⁻¹, T4, T5 (T3⁻¹ and T2⁻¹ are compensating transactions), while the next primary and backup 2 have T1, T4, T5. CS 347 Notes07

Log Shipping: propagate updates to the backup by shipping the primary's log; the backup replays the log. How to replay the log efficiently? E.g., elevator disk sweeps; e.g., avoid overwrites. CS 347 Notes07
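
One plausible way to implement the replay hints on this slide (the record format and function name are assumptions): coalesce the batch so writes overwritten later in the same batch are skipped, then apply the surviving writes in page order so the disk is swept elevator-style:

```python
# Efficient replay of a shipped log batch, in the spirit of the slide's hints:
# (1) keep only the last write to each page in the batch, so earlier writes
# that would be overwritten anyway are skipped, and (2) apply the surviving
# writes in page order, so the disk arm sweeps in one direction.
# The (page, value) record format is an assumption for illustration.

def replay_batch(log_records, disk):
    """log_records: list of (page, value) in log order; disk: dict page -> value."""
    last_write = {}                    # later records overwrite earlier ones
    for page, value in log_records:
        last_write[page] = value
    for page in sorted(last_write):    # elevator-style sweep in page order
        disk[page] = last_write[page]

disk = {}
batch = [(17, "a"), (3, "b"), (17, "c"), (250, "d"), (3, "e")]
replay_batch(batch, disk)
print(disk)   # {3: 'e', 17: 'c', 250: 'd'} -- 5 log records, 3 disk writes
```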

So Far in Data Replication: RAWA; ROWA; primary copy, either static (local commit or distributed commit) or with a mobile primary (2-safe with distributed commit, blocking or non-blocking; or 1-safe with local commit). CS 347 Notes07

Outline Basic Algorithms Improved (Higher Availability) Algorithms Mobile Primary Available Copies Multiple Fragments & Other Issues CS 347 Notes07

PC-Lock Available Copies. Example: copies X1, X2, X3, X4, one marked * as the primary and one currently down. Transactions write-lock all available copies and read-lock any available copy. The primary site (static) manages U, the set of available copies. CS 347 Notes07

Update Transaction: (1) get U from the primary; (2) get write locks at the nodes in U; (3) commit at the nodes in U. Example: U = {C0, C1}, where C0 is the primary, C1 a backup, and C2 a backup that is down; transaction T3 runs with U = {C0, C1}, sending its updates and 2PC messages to C0 and C1. CS 347 Notes07

A potential problem, example. Now: U = {C0, C1}; C2 is recovering and tells the primary C0 "I am recovering" while transaction T3 is running with U = {C0, C1}. CS 347 Notes07

A potential problem, example. Later: U = {C0, C1, C2}; the primary tells C2 "you missed T0, T1, T2", but T3 (still using U = {C0, C1}) sends its updates only to C0 and C1, so C2 misses T3's updates. CS 347 Notes07

Solution: initially, transaction T gets a copy U' of U from the primary (or uses a cached value). At commit of T, check U' against the current U at the primary; if they differ, abort T. CS 347 Notes07

Solution, continued. When a copy CX recovers, it requests the missed and pending transactions from the primary (the primary updates U) and sets write locks for the pending transactions. The primary polls nodes to detect failures (and updates U). CS 347 Notes07

Example Revisited. C2 announces "I am recovering"; the primary updates U to {C0, C1, C2} and replies "you missed T0, T1, T2". Transaction T3, which started with U' = {C0, C1}, later sends prepare to C0 and C1; since U' no longer matches the current U, T3 is rejected (aborted). CS 347 Notes07
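
A sketch of this U-set safeguard with illustrative names: the transaction caches U' when it starts, and the primary rejects its commit if U has changed in the meantime, exactly as in the example above:

```python
# Available-copies safeguard: a transaction records the set of available
# copies U' when it starts; at commit time the primary compares U' with the
# current U and rejects (aborts) the transaction if the set has changed,
# e.g. because a copy recovered mid-transaction.

class PrimarySite:
    def __init__(self, available_copies):
        self.U = set(available_copies)

    def current_U(self):
        return set(self.U)

    def copy_recovered(self, copy_name, missed_txs):
        print(f"{copy_name} recovering; you missed {missed_txs}")
        self.U.add(copy_name)

    def try_commit(self, tx_name, u_prime):
        if u_prime != self.U:
            return f"{tx_name} rejected: U' {sorted(u_prime)} != U {sorted(self.U)}"
        return f"{tx_name} committed at {sorted(self.U)}"

primary = PrimarySite({"C0", "C1"})                # C2 is down
u_prime = primary.current_U()                      # T3 starts, caches U' = {C0, C1}
primary.copy_recovered("C2", ["T0", "T1", "T2"])   # C2 rejoins mid-transaction
print(primary.try_commit("T3", u_prime))           # T3 is rejected, as above
```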

Available Copies — No Primary. Let all nodes have a copy of U (not just the primary). To modify U, run a special atomic transaction at all available sites (using a commit protocol). E.g., U1 = {C1, C2} → U2 = {C1, C2, C3}: only C1, C2 participate in this transaction. E.g., U2 = {C1, C2, C3} → U3 = {C1, C2}: only C1, C2 participate in this transaction. CS 347 Notes07

Details are tricky... What if the commit of a U-change blocks? CS 347 Notes07

Node Recovery (no primary): get missed updates from any active node; there is no unique sequence of transactions. If all nodes fail, wait either for all of them to recover, or for a majority to recover. CS 347 Notes07

Example: a recovering node has committed A, B; one active node has committed A, B, C, D, E, F with G pending; another has committed A, C, B, E, D with F, G, H pending. How much information (update values) must be remembered, and by whom? CS 347 Notes07

Correctness with replicated data. X has copies X1 and X2. S1: r1[X1] r2[X2] w1[X1] w2[X2]. Is this schedule serializable? CS 347 Notes07

S1: r1[X1] r2[X2] w1[X1] w2[X2]. Is this schedule serializable? One idea: require transactions to update all copies, giving S1: r1[X1] r2[X2] w1[X1] w2[X2] w1[X2] w2[X1]. CS 347 Notes07

S1: r1[X1] r2[X2] w1[X1] w2[X2]. Is this schedule serializable? One idea: require transactions to update all copies, giving S1: r1[X1] r2[X2] w1[X1] w2[X2] w1[X2] w2[X1] (not a good idea for high-availability algorithms). Another idea: build copy semantics into the notion of serializability. CS 347 Notes07

One copy serializable (1SR) A schedule S on replicated data is 1SR if it is equivalent to a serial history of the same transactions on a one-copy database CS 347 Notes07

To check 1SR: take the schedule; treat ri[Xj] as ri[X] and wi[Xj] as wi[X] (Xj is a copy of X); compute the precedence graph P(S); if P(S) is acyclic, S is 1SR. CS 347 Notes07
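
A small checker following this recipe (the schedule encoding as (transaction, operation, copy) tuples is an assumption): map each copy back to its logical item, build the precedence graph from conflicting operations, and test it for cycles:

```python
from itertools import combinations

# Checker following the slide's recipe: treat ri[Xj]/wi[Xj] as ri[X]/wi[X],
# build the precedence graph P(S) from conflicting pairs (different
# transactions, same logical item, at least one write, ordered as in S),
# and report S as 1SR iff P(S) is acyclic. Copy names like "X1" map to
# the logical item "X" (their first character).

def precedence_graph(schedule):
    edges = set()
    for (t1, op1, c1), (t2, op2, c2) in combinations(schedule, 2):
        if t1 != t2 and c1[0] == c2[0] and "w" in (op1, op2):
            edges.add((t1, t2))      # earlier op's transaction precedes the later's
    return edges

def is_acyclic(edges):
    graph = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
    visited, on_stack = set(), set()

    def dfs(node):
        visited.add(node)
        on_stack.add(node)
        for nxt in graph.get(node, ()):
            if nxt in on_stack:                      # back edge: cycle found
                return False
            if nxt not in visited and not dfs(nxt):
                return False
        on_stack.discard(node)
        return True

    return all(dfs(n) for n in list(graph) if n not in visited)

S1 = [("T1", "r", "X1"), ("T2", "r", "X2"), ("T1", "w", "X1"), ("T2", "w", "X2")]
S2 = [("T1", "r", "X1"), ("T1", "w", "X1"), ("T1", "w", "X2"),
      ("T2", "r", "X1"), ("T2", "w", "X1"), ("T2", "w", "X2")]
print(is_acyclic(precedence_graph(S1)))   # False -> S1 is not 1SR
print(is_acyclic(precedence_graph(S2)))   # True  -> S2 is 1SR
```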

Example. S1: r1[X1] r2[X2] w1[X1] w2[X2]. P(S1) contains both T2 → T1 and T1 → T2, a cycle, so S1 is not 1SR! CS 347 Notes07

Second example. S2: r1[X1] w1[X1] w1[X2] r2[X1] w2[X1] w2[X2]. P(S2): T1 → T2, so S2 is 1SR. CS 347 Notes07

Second example. S2: r1[X1] w1[X1] w1[X2] r2[X1] w2[X1] w2[X2]. Equivalent serial one-copy schedule SS: r1[X] w1[X] r2[X] w2[X]. CS 347 Notes07

Question: is this a "good" schedule? S3: r1[X1] w1[X1] w1[X2] r2[X1] w2[X1]. CS 347 Notes07

Question: is this a "good" schedule? S3: r1[X1] w1[X1] w1[X2] r2[X1] w2[X1]. Mapped to one copy: S3: r1[X] w1[X] w1[X] r2[X] w2[X]. To be a valid one-copy schedule, we need a precedence edge between w1(X) and w2(X), i.e., S3: r1[X] w1[X] r2[X] w2[X]. CS 347 Notes07

Question: is this a "good" schedule? We need to know how the missing w2(X2) is resolved. OK: w2[X2] is eventually performed after w1[X2], so X2 ends up reflecting the order T1 then T2. Not OK: w2[X2] would take effect before w1[X2] (or never), contradicting that order. CS 347 Notes07

Question: is this a "good" schedule? S3: r1[X1] w1[X1] w1[X2] r2[X1] w2[X1]. Bottom line: when w2[X2] is missing because X2 is down, assume that X2's recovery will perform w2[X2] in the correct order; the schedule is then equivalent to the one-copy schedule r1[X] w1[X] r2[X] w2[X]. CS 347 Notes07

Another example: S3 continues with T3. S4: r1[X1] w1[X1] w1[X2] r3[X2] w3[X1] r2[X1] w2[X1] w3[X2]. CS 347 Notes07

Another example. S4: r1[X1] w1[X1] w1[X2] r3[X2] w3[X1] r2[X1] w2[X1] w3[X2]. Mapped to one copy: r1[X] w1[X] r3[X] w3[X] r2[X] w2[X]. Seems OK, but where do we place the missing w2[X2]? CS 347 Notes07

Another example. S4: r1[X1] w1[X1] w1[X2] r3[X2] w3[X1] r2[X1] w2[X1] w3[X2]. The missing w2[X2] must go either before w1[X2] or after r3[X2]; otherwise T3 would have read a different value. CS 347 Notes07

Another example. One option: keep S4 as r1[X1] w1[X1] w1[X2] r3[X2] w3[X1] r2[X1] w2[X1] w3[X2], with the missing w2[X2] performed later by X2's recovery. CS 347 Notes07

Outline Basic Algorithms Improved (Higher Availability) Algorithms Mobile Primary Available Copies (and 1SR) Multiple Fragments & Other Issues CS 347 Notes07

Multiple fragments. Example: Fragment 1 and Fragment 2, each replicated over some of nodes 1-4. A transaction spanning multiple fragments must follow the locking rules for each fragment, and its commit must involve a "majority" in each fragment. CS 347 Notes07

Be careful with update transactions that read but do not modify a fragment. Example: fragment F1 has copies C1, C2 and fragment F2 has copies C3, C4. T1 read-locks F1 at C1 (and updates F2); T2 read-locks F2 at C3 (and updates F1). CS 347 Notes07

C1 and C3 fail... T2 then writes F1 at C2 and commits F1 at C2. CS 347 Notes07

The equivalent history r1[F1] r2[F2] w1[F2] w2[F1] is not serializable! Solution: commit at the read sites too. CS 347 Notes07

C1 and C3 fail... T2 writes and commits F1 at C2, and T1 writes and commits F2 at C4; but with commit at the read sites, the commit cannot succeed at F1, because the U = {C1} the reader used is out of date... CS 347 Notes07

Read-Only Transactions: we can provide "weaker correctness" for them, since they do not affect the values stored in the DB. Setup: C1 is the primary and C2 the backup, each holding copies of A and B. Update transactions: T1: A ← 3; T2: B ← 5. CS 347 Notes07

Later on: at the primary C1, A has gone 0 → 3 and B has gone 0 → 5; at the backup C2, A has gone 0 → 3 but the B ← 5 update has not arrived yet. A read transaction R1 at the primary sees the current state; a read transaction R2 at the backup sees an "old" but "valid" state. CS 347 Notes07

States Are Equivalent. The primary passes through the states: no transactions; T1; T1, T2; T1, T2, T3. The backup passes through the same sequence of states. CS 347 Notes07

States Are Equivalent. The primary passes through the states: no transactions; T1; T1, T2; T1, T2, T3. The backup passes through the same sequence, but at any given point in time it may be behind... CS 347 Notes07

The schedule is serializable: S1 = T1 R1 T2 T3 R2 T4, where the read-only transactions R1 and R2 are placed at the points whose states they actually read. CS 347 Notes07

Example 2: A and B now have different primaries, and the 1-safe protocol is used. C1 is the primary for A (with a backup copy of B); C2 is the primary for B (with a backup copy of A). T1: A ← 3; T2: B ← 5. CS 347 Notes07

T1 (A ← 3) runs and commits at C1, the primary for A; T2 (B ← 5) runs and commits at C2, the primary for B. Each node then ships its update to the other: C1 has A = 3 but not yet B = 5, and C2 has B = 5 but not yet A = 3. CS 347 Notes07

At this time (before the updates propagate): Q1 reads A and B at C1 and sees T1 but not T2, so Q1 serializes before T2 (Q1 → T2); Q2 reads A and B at C2 and sees T2 but not T1, so Q2 serializes before T1 (Q2 → T1). CS 347 Notes07

Eventually both C1 and C2 have A = 3 and B = 5. The schedule of the update transactions alone is OK: T1 T2 (or T2 T1). Each read-only transaction also sees an OK schedule: T1 Q1 T2, or T2 Q2 T1. But there is NO single complete schedule containing both Q1 and Q2 that is "OK"... CS 347 Notes07

In many cases such a scenario is acceptable. This is called weak serializability: the update schedule is serializable, and read-only transactions see committed data. CS 347 Notes07

Data Replication summary: RAWA, ROWA; primary copy, either static [local commit or distributed commit] or mobile primary [2-safe (2PC or 3PC) or 1-safe]; available copies [with or without a primary]; correctness (1SR); multiple fragments; read-only transactions. CS 347 Notes07

Issues To centralize control or not? How much availability? “Weak” reads OK? CS 347 Notes07

Paxos Paxos is a replicated data management algorithm supposedly used in ZooKeeper. CS 347 Notes07

Paxos with 2 Phases CS 347 Notes07

Paxos with 2 Phases: uses majority voting, can handle partitions, and has a primary (leader); compare with 3-phase commit! CS 347 Notes07
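
The slides only name the two phases, so the following is a minimal single-decree Paxos sketch under simplifying assumptions (one process, direct calls instead of messages, no failures, retries, or learners): phase 1 is prepare/promise and phase 2 is accept/accepted, each needing a majority of acceptors.

```python
# Minimal single-decree Paxos sketch: phase 1 (prepare / promise) and phase 2
# (accept / accepted), each requiring a majority of acceptors. Names are
# illustrative; a real deployment would use messages, persistence, and retries.

class Acceptor:
    def __init__(self):
        self.promised = -1          # highest proposal number promised
        self.accepted = None        # (number, value) of highest accepted proposal

    def prepare(self, n):           # phase 1b
        if n > self.promised:
            self.promised = n
            return ("promise", self.accepted)
        return ("reject", None)

    def accept(self, n, value):     # phase 2b
        if n >= self.promised:
            self.promised = n
            self.accepted = (n, value)
            return "accepted"
        return "reject"

def propose(acceptors, n, value):
    majority = len(acceptors) // 2 + 1
    # Phase 1: collect promises from a majority.
    promises = [a.prepare(n) for a in acceptors]
    granted = [acc for kind, acc in promises if kind == "promise"]
    if len(granted) < majority:
        return None
    # If any acceptor already accepted a value, we must propose that value.
    already = [acc for acc in granted if acc is not None]
    if already:
        value = max(already)[1]
    # Phase 2: ask the acceptors to accept; chosen once a majority accepts.
    votes = [a.accept(n, value) for a in acceptors]
    return value if votes.count("accepted") >= majority else None

acceptors = [Acceptor() for _ in range(5)]
print(propose(acceptors, n=1, value="X=3"))   # 'X=3' chosen by a majority
print(propose(acceptors, n=2, value="X=7"))   # still 'X=3': the chosen value sticks
```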