Minimizing Commit Latency of Transactions in Geo-Replicated Data Stores. Paper authors: Faisal Nawab, Vaibhav Arora, Divyakant Agrawal, Amr El Abbadi (University of California, Santa Barbara).

Presentation transcript:

Minimizing Commit Latency of Transactions in Geo-Replicated Data Stores
Paper authors: Faisal Nawab, Vaibhav Arora, Divyakant Agrawal, Amr El Abbadi (University of California, Santa Barbara)
Presenter: Stefan Dao

Latency
Definition: the time interval between stimulus and response.
Why is it important? – Low latency means more $$$

Datacenter Challenges
DCs must be resilient to disruptions – Solution? Replication…but it's expensive!
Replication → high level of consistency → more messages sent → higher latency!

Related Work
Cassandra, Dynamo (geo-replication)
Megastore, Paxos-CP (serializability)
Message Futures (low latency)
Spanner, Orbe, GentleRain (clock synchronization)
RAMCloud (distributed shared log)

Helios
Serializable transactions
Geo-replicated
Low latency
Loosely synchronized clocks
Log replication

Lower Bound on Commit Latency
Commit latency (L_X): the sum of the commit latencies of any two datacenters must be greater than or equal to the round-trip time (RTT) between them.
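Restated in symbols (following the L_X commit-latency notation above), the bound for any two datacenters A and B is:

```latex
% For any two datacenters A and B, their commit latencies must together
% cover the round-trip time between them.
L_A + L_B \geq \mathrm{RTT}(A, B)
```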

Optimal Latency
Minimum Average Optimal latency (MAO)
– A linear program that minimizes the average commit latency L_X
– Subject to L_A + L_B ≥ RTT(A, B) for every pair of datacenters
– and L_X ≥ 0
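To make the formulation concrete, here is a minimal sketch that solves MAO with an off-the-shelf LP solver. The objective (minimize the average L_X subject to the pairwise RTT constraints above) follows the slide; the datacenter names and RTT values are hypothetical.

```python
# Minimal sketch of the MAO linear program from the slide: minimize the
# average commit latency L_X subject to L_A + L_B >= RTT(A, B) for every
# datacenter pair and L_X >= 0. Datacenter names and RTT values (ms) are
# hypothetical; scipy's linprog is used as a generic LP solver.
from scipy.optimize import linprog

dcs = ["A", "B", "C"]
rtt = {("A", "B"): 80.0, ("A", "C"): 150.0, ("B", "C"): 120.0}

n = len(dcs)
idx = {dc: i for i, dc in enumerate(dcs)}

# Objective: minimize the average latency (1/n) * sum(L_X).
c = [1.0 / n] * n

# linprog expects A_ub @ L <= b_ub, so L_a + L_b >= RTT becomes -L_a - L_b <= -RTT.
A_ub, b_ub = [], []
for (a, b), r in rtt.items():
    row = [0.0] * n
    row[idx[a]] = row[idx[b]] = -1.0
    A_ub.append(row)
    b_ub.append(-r)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * n)
print({dc: round(res.x[idx[dc]], 1) for dc in dcs})  # per-datacenter commit latency
```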

Helios Cont.
N × N time table – e.g., T_A[B, C] = t_1
Commit offsets – e.g., co_A^B
Knowledge timestamps (kts): kts_t^B = q(t) + co_A^B
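A minimal sketch of this per-datacenter bookkeeping; the structures, the q(t) append timestamp, and the sample offsets are assumptions for illustration based on the slide's notation.

```python
# Minimal sketch of the per-datacenter state from the slide: an N x N time
# table, commit offsets, and knowledge timestamps (kts). The exact semantics,
# the q(t) timestamp, and the sample values are assumptions for illustration.

dcs = ["A", "B", "C"]

# N x N time table kept at datacenter A, e.g. T["B"]["C"] = t1
# (one common reading: A's view of how far B knows about C's log).
T = {x: {y: 0 for y in dcs} for x in dcs}

# Commit offsets maintained at A, one per datacenter (co_A^B on the slide).
co = {"A": 0, "B": 40, "C": 75}  # milliseconds, hypothetical

def knowledge_timestamp(q_t, target_dc):
    """kts_t^B = q(t) + co_A^B: the time by which datacenter A expects its
    log entry for transaction t (appended at time q(t)) to be known at B."""
    return q_t + co[target_dc]
```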

Helios Algorithm (Commit Request)
Check transaction T's read set for conflicts against the PT Pool and EPT Pool.

Helios Algorithm (Commit Request)
Check whether any item in T's read set has been overwritten.

Helios Algorithm (Commit Request)
Get T's knowledge timestamps (t.kts) for all datacenters, then add T to the PT Pool.
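A minimal sketch that strings the three commit-request steps together; the pool, field, and helper names are illustrative assumptions rather than the paper's actual code.

```python
# Minimal sketch of the commit-request steps shown on the slides: conflict-
# check the read set against pending transactions (PT and EPT pools), check
# whether any read item has been overwritten, compute t.kts for every
# datacenter, and add T to the PT Pool. All names are illustrative assumptions.

def commit_request(t, pt_pool, ept_pool, store_versions, co, local_time):
    pending = pt_pool + ept_pool
    # 1. Conflict check: abort if a pending transaction wrote an item T read.
    if any(p.write_set & t.read_set for p in pending):
        return "abort"
    # 2. Overwrite check: abort if a read item changed since T read it.
    if any(store_versions[k] > v for k, v in t.read_versions.items()):
        return "abort"
    # 3. Knowledge timestamps: one kts per datacenter, from commit offsets.
    t.kts = {dc: local_time + offset for dc, offset in co.items()}
    # 4. Track T as a preparing transaction.
    pt_pool.append(t)
    return "preparing"
```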

Helios Algorithm (Log Processing)
Check transaction T for conflicts (when T is not local).

Helios Algorithm (Log Processing)
If T is preparing, append it to the EPT Pool.

Helios Algorithm (Log Processing)
If T is finished and committed, write its updates to the data store; either way, remove T from the EPT Pool.

Helios Algorithm (Log Processing)
Update the time tables with T's timestamp.

Helios Algorithm (Committing Preparing Transactions)
If the local time exceeds t.kts, commit T and log the commit.
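A minimal sketch combining the log-processing steps and the kts-based commit rule above; the log-entry and transaction fields are illustrative assumptions.

```python
# Minimal sketch of the log-processing and commit steps from the slides:
# conflict-check non-local entries, keep remote preparing transactions in the
# EPT Pool, apply committed writes to the data store, update the time table,
# and commit a local preparing transaction once the local clock passes all of
# its knowledge timestamps. All structures and names are assumptions.

def process_log_entry(entry, pt_pool, ept_pool, store, timetable, local_dc):
    t = entry.txn
    if entry.origin != local_dc:
        # Conflict check: mark local preparing txns that read what T wrote.
        for p in pt_pool:
            if p.read_set & set(t.writes):
                p.aborted = True
        if t.status == "preparing":
            ept_pool.append(t)                       # remote txn still preparing
        else:
            if t.status == "committed":
                store.update(t.writes)               # write committed values
            ept_pool[:] = [x for x in ept_pool if x.id != t.id]
    # Update the time table with the timestamp carried by the log entry.
    timetable[local_dc][entry.origin] = entry.timestamp

def try_commit_preparing(pt_pool, store, log, now):
    # Commit once the local clock has passed every kts, then log the commit.
    for t in list(pt_pool):
        if not t.aborted and all(now > ts for ts in t.kts.values()):
            t.status = "committed"
            store.update(t.writes)
            log.append(t)
            pt_pool.remove(t)
```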

Liveness of Helios
We want our system to be robust to failures! – How?
– Allow up to f failures within the system
– Wait on commits for f responses
– Utilize time-based invalidation (Grace Time): invalidate transactions after GT
Grace Time has tradeoffs…
– Increased latency
– Small Grace Time vs. large Grace Time
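A minimal sketch of the slide's f-response wait with Grace Time invalidation; the polling loop and transaction fields are assumptions for illustration.

```python
# Minimal sketch of f-resilient waiting with Grace Time (GT) invalidation, as
# described on the slide: a transaction commits once f datacenters have
# responded, and is invalidated if GT elapses first. The polling loop and the
# txn fields (start_time, responses) are assumptions for illustration.
import time

def await_commit(txn, f, grace_time, poll_interval=0.01):
    deadline = txn.start_time + grace_time
    while time.time() < deadline:
        if len(txn.responses) >= f:      # enough datacenters have acknowledged
            return "commit"
        time.sleep(poll_interval)
    return "invalidated"                 # GT elapsed: time-based invalidation
```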

Liveness Cont. (diagram: three datacenters A, B, and C)
Helios-1: A needs confirmation from C before it can commit.
Transactions from B before the failure are invalidated.

Evaluation
The paper evaluates the following:
– How does Helios perform relative to existing systems (Message Futures, Replicated Commit, 2PC/Paxos) in terms of latency, throughput, and abort rate?
– What are the effects of poor clock synchronization?
– What are the effects of poor RTT estimation?

Evaluation – Experiment Setup
60 clients and 5 geographically distinct datacenters running regular instances of Helios.

Evaluation – Experiment Setup
Increase the number of clients to see the effects on throughput, latency, and abort percentage for each system.

Evaluation – Experiment Setup
Vary the clock at the Virginia datacenter by −100/+100 ms; vary the clock at each datacenter by a set amount.

My Thoughts!
The paper was very easy to follow.
Thoughtful experimentation! – Utilized a Zipf distribution for data store access.
Addressed issues such as scalability.
In terms of latency, not the best system, but it performs well overall.

Discussion: Minimizing Commit Latency of Transactions in Geo-Replicated Data Stores Scribed by Jen-Yang Wen

Summary
Feature: low-latency serializable transactions on geo-replicated data stores
Technique:
– Timestamps from local loosely synchronized clocks
– Shared log
– Optimizing average commit latency by linear programming
– Configurable f-resilience
Evaluation: on Amazon AWS with 5 geo-replicated datacenters

Discussion
Pros:
– High performance
– Novel theory, proof, and protocol
– Flexibility: separates serializability from liveness; parameters can be tuned manually
– Evaluation on AWS geo-replicated systems
– Uses a shared log for higher stability
– Extensive analysis
– Well organized

Cons:
– Unrealistic assumptions
– Performance is sensitive to clock synchronization
– Not the best in all three evaluation aspects
– Experiments run for only 10 minutes
– The proof is not formal
– Tradeoff between liveness and commit latency
– Tedious configuration process
– Not tested under failure
– Focuses on average rather than tail latency
– Storage overhead of the full-copy shared logs
– Limited discussion of Grace Time / f-value

Questions
A quick poll: does the "proof of lower bound" seem formal to you?
Is it a good idea for different servers to have different commit speeds?
It would be interesting to see how multiple applications running on a cloud platform and requiring different average commit latencies could be handled.
Any additional questions or comments?