
1 Elasca: Workload-Aware Elastic Scalability for Partition Based Database Systems
Taha Rafiq, MMath Thesis Presentation, 24/04/2013

2 Outline
Introduction & Motivation; VoltDB & Elastic Scale-Out Mechanism; Partition Placement Problem; Workload-Aware Optimizer; Experiments & Results; Supporting Multi-Partition Transactions; Conclusion

3 Introduction & Motivation

4 DBMS Scalability: Replication & Partitioning
Replication provides durability, fault tolerance, availability, and faster reads; its main problem is maintaining consistency, which makes writes complex. Partitioning (horizontal or vertical) improves performance and allows more data to be stored; its problems are that it complicates application logic and makes multi-partition transactions costly.

5 Traditional (DBMS) Scalability
Scalability is the ability of a system to be enlarged to handle a growing amount of work. Higher Load → Add Resources → Better Performance, but with Expensive Downtime: resources are added when a system experiences more load than it can handle, yet traditional DBMSes do not allow resources to be added on-the-fly, resulting in downtime that is very costly for many use cases (e.g., Amazon).

6 Elastic (DBMS) Scalability
Elastic scalability is the use of computing resources that vary dynamically to meet a variable workload. Higher Load → Dynamically Add Resources → Better Performance, with No Downtime: elastic scalability alleviates the downtime problem by allowing resources to be added while the system is live, without significantly affecting performance.

7 Elastically Scaling a Partition-Based DBMS: Re-Partitioning
One option is re-partitioning: Partition 1 on Node 1 is split so that part of its data becomes Partition 2 on Node 2 during scale-out (and is merged back during scale-in). Re-partitioning is difficult in an elastic setting: we must decide how to partition the data and do it while the system is live.

8 Elastically Scaling a Partition-Based DBMS: Partition Migration
A more attractive option is partition migration: a large number of small partitions (P1–P4) are initially aggregated on a few nodes, and some of them (e.g., P3 and P4) are migrated to new nodes as those nodes are added during scale-out, then moved back during scale-in.

9 Partition Migration for Elastic Scalability
Mechanism: how to add/remove nodes and move partitions; it must minimize the effect on transaction processing and maintain consistency. Policy/Strategy: which partitions to move, when, and where, during scale-out and scale-in.

10 Elasca = Elastic Scale-Out Mechanism + Partition Placement & Migration Optimizer
Elasca consists of (1) an elastic scale-out mechanism built into VoltDB, a commercial partition-based DBMS, and (2) a workload-aware partition placement and migration optimizer.

11 VoltDB & Elastic Scale-Out Mechanism

12 What is VoltDB? An in-memory, partition-based DBMS
No disk access = very fast. Shared-nothing architecture with serial execution per partition: no locks. Transactions are stored procedures: no arbitrary ad-hoc transactions. Replication provides fault tolerance & durability.
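
The following is a minimal conceptual sketch in Python (not VoltDB's actual Java API) of the core execution model: one single-threaded execution site per partition, so transactions against a partition run serially and need no locks.

    # Conceptual sketch only: one single-threaded executor per partition means
    # all work on that partition is serialized, so no locking is required.
    from concurrent.futures import ThreadPoolExecutor

    class PartitionedStore:
        def __init__(self, num_partitions):
            self.data = [{} for _ in range(num_partitions)]
            # One worker thread per partition = serial execution within a partition.
            self.sites = [ThreadPoolExecutor(max_workers=1) for _ in range(num_partitions)]

        def partition_of(self, key):
            return hash(key) % len(self.data)

        def run_single_partition(self, key, proc):
            """Route a 'stored procedure' to the execution site owning `key`."""
            p = self.partition_of(key)
            return self.sites[p].submit(proc, self.data[p])

    store = PartitionedStore(num_partitions=4)
    store.run_single_partition("acct-42", lambda part: part.update({"acct-42": 100}))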

13 VoltDB Architecture
[Architecture diagram] Each node runs a Client Interface, an Initiator, and one Execution Site (ES) per hosted partition; execution sites are threads mapped to CPU cores. Clients connect through the client interface of any node.

14 Single-Partition Transactions
[Diagram] A single-partition transaction is routed through a client interface and initiator to the one execution site that owns the target partition, where it executes serially.

15 Multi-Partition Transactions
[Diagram] A multi-partition transaction involves the execution sites of all participating partitions, with one execution site coordinating the transaction across nodes.

16 Elastic Scale-Out Mechanism
Initially the scale-out node is not part of the cluster and is perceived as a failed node. The scale-out node 'rejoins' the cluster and informs all the nodes which partitions it needs to recover. The nodes containing those partitions stream the partition data to the scale-out node. After partition migration is complete, the source execution sites and partitions are shut down.
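
Below is a hedged, simplified simulation of this scale-out flow in Python; the data structures and names are illustrative and do not correspond to VoltDB internals.

    # Simplified simulation of the rejoin-and-stream flow described above.
    class Node:
        def __init__(self, name):
            self.name = name
            self.partitions = {}                     # partition id -> partition data

    def scale_out(cluster, new_node, wanted):
        """new_node rejoins as if recovering from a failure and pulls `wanted` partitions."""
        cluster.append(new_node)                     # step 1: rejoin the cluster
        for pid in wanted:                           # step 2: announce partitions to recover
            source = next(n for n in cluster if n is not new_node and pid in n.partitions)
            # steps 3-4: stream the partition data, then drop the source copy
            new_node.partitions[pid] = source.partitions.pop(pid)

    n1, n2 = Node("node1"), Node("node2")
    n1.partitions = {1: "p1-data", 2: "p2-data", 3: "p3-data", 4: "p4-data"}
    cluster = [n1]
    scale_out(cluster, n2, wanted=[3, 4])
    print([(n.name, sorted(n.partitions)) for n in cluster])   # node1: [1, 2]; node2: [3, 4]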

17 Overcommitting Cores
VoltDB suggests keeping partitions per node < cores per node, but this wastes resources when load is low or data access is skewed. Idea: aggregate extra partitions on each node and scale out when load increases. Execution sites can still run on separate cores while leaving at least one core for host-level tasks.

18 Partition Placement Problem

19 Given… Cluster and System Specifications
Number of CPU cores, maximum number of nodes, and memory.

20 Given…

21 Given…

22 Given… Current Partition-to-Node Assignment
[Table: current assignment of partitions P1–P8 to Nodes 1–3]

23 Find… Optimal Partition-to-Node Assignment (For Next Time Interval)
[Table: the assignment of partitions P1–P8 to Nodes 1–3 for the next time interval is to be determined]

24 Optimization Objectives
Maximize throughput: match the performance of a static, fully provisioned system. Minimize resources used: use the minimum number of nodes required to meet performance demands. (These objectives differ in importance.)

25 Optimization Objectives
Minimize data movement: data movement adversely affects system performance and incurs network costs. Balance load effectively: this minimizes the risk of overloading a node during the next time interval.

26 Workload-Aware Optimizer

27 System Overview

28 Statistics Collected
α: the maximum number of transactions that can be executed on a partition per second (the maximum capacity of an execution site). β: the CPU overhead of host-level tasks (how much CPU capacity the Initiator uses).

29 Effect of β

30 Estimating CPU Load
[Equations] Estimates computed: the CPU load generated by each partition, the average CPU load of host-level tasks per node, and the average CPU load per node.
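
As a hedged illustration (the exact formulas in the thesis may differ), these estimates can be computed from the observed per-partition transaction rates together with α and β; here β is treated as an overhead proportional to the node's transaction rate.

    # Illustrative load-estimation sketch; alpha and beta as defined on slide 28.
    def partition_cpu_load(txn_rate, alpha):
        """Fraction of one core used by a partition executing txn_rate txns/sec."""
        return txn_rate / alpha

    def node_cpu_load(partition_rates, alpha, beta):
        """Estimated CPU load of a node hosting the given partitions."""
        work = sum(partition_cpu_load(r, alpha) for r in partition_rates)
        host_overhead = beta * sum(partition_rates)   # initiator / host-level tasks
        return work + host_overhead

    print(node_cpu_load([5000, 3000], alpha=20000, beta=1e-5))   # -> 0.48 cores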

31 Optimizer Details: Mathematical Optimization vs. Heuristics
The problem is formulated as a Mixed-Integer Linear Program (MILP), which can be solved using any general-purpose solver (we use IBM ILOG CPLEX) and is applicable to a wide variety of scenarios.

32 Objective Function
A two-stage objective function: it minimizes data movement as the primary objective and balances load as the secondary objective.
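
One plausible way to write such a two-stage objective (illustrative only; the symbols and exact form are assumptions, not the thesis's equation) is

    \min_x \; \sum_{p,n} m_p \, |x_{p,n} - x^{\text{prev}}_{p,n}| \;+\; \varepsilon \, \max_n \sum_p \ell_p \, x_{p,n}

where x_{p,n} is 1 if partition p is placed on node n, m_p is the partition's size, ℓ_p its estimated CPU load, and ε > 0 is chosen small enough that load balancing only breaks ties among minimum-movement placements.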

33 Effect of ε

34 Minimizing Resources Used
Calculate the minimum number of nodes that can handle the load of all the partitions (using a non-integer, relaxed assignment) and explicitly tell the optimizer how many nodes to use. If the optimizer cannot find a solution with the minimum number of nodes N, it tries again with N + 1 nodes.
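
A hedged sketch of this loop in Python (solve_milp is a stand-in for the CPLEX model; names are illustrative):

    import math

    def min_nodes_lower_bound(partition_loads, node_capacity):
        """Lower bound from the fractional (non-integer) assignment: total load / capacity."""
        return max(1, math.ceil(sum(partition_loads) / node_capacity))

    def place_partitions(partition_loads, node_capacity, max_nodes, solve_milp):
        n = min_nodes_lower_bound(partition_loads, node_capacity)
        while n <= max_nodes:
            plan = solve_milp(partition_loads, num_nodes=n)   # returns None if infeasible
            if plan is not None:
                return plan
            n += 1                                            # retry with one more node
        raise RuntimeError("no feasible placement within the cluster size")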

35 Constraints
Replication: replicas of a given partition must be assigned to different nodes. CPU capacity: the sum of the loads of the partitions assigned to a node must be less than the node's capacity. Memory capacity: all the partitions assigned to a node must fit in its memory. Host-level tasks: the overhead of host-level tasks must not exceed the capacity of a single core.
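
To make the formulation concrete, here is a hedged, miniature version of the placement MILP written with the PuLP library (the thesis solves its model with IBM ILOG CPLEX; the data, variable names, and the omission of the replication and host-level constraints are simplifications for illustration).

    import pulp

    partitions = ["P1", "P2", "P3", "P4"]
    nodes = ["N1", "N2"]
    load = {"P1": 0.4, "P2": 0.3, "P3": 0.2, "P4": 0.3}       # estimated CPU load per partition
    size = {"P1": 1.0, "P2": 1.0, "P3": 0.5, "P4": 0.5}       # partition size (GB)
    prev = {("P1", "N1"), ("P2", "N1"), ("P3", "N2"), ("P4", "N2")}   # current placement
    cpu_cap, mem_cap, eps = 0.9, 4.0, 0.01

    prob = pulp.LpProblem("partition_placement", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", (partitions, nodes), cat="Binary")
    imbalance = pulp.LpVariable("imbalance", lowBound=0)

    # Data moved: size of every partition placed on a node it was not on before.
    moved = pulp.lpSum(size[p] * x[p][n]
                       for p in partitions for n in nodes if (p, n) not in prev)
    prob += moved + eps * imbalance          # two-stage objective: movement, then balance

    for p in partitions:                     # each partition is assigned to exactly one node
        prob += pulp.lpSum(x[p][n] for n in nodes) == 1
    for n in nodes:
        node_load = pulp.lpSum(load[p] * x[p][n] for p in partitions)
        prob += node_load <= cpu_cap                                          # CPU capacity
        prob += pulp.lpSum(size[p] * x[p][n] for p in partitions) <= mem_cap  # memory capacity
        prob += imbalance >= node_load       # imbalance tracks the maximum node load

    prob.solve()
    for p in partitions:
        print(p, "->", [n for n in nodes if x[p][n].value() == 1][0])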

36 Staggering Scale-In
A fluctuating workload can result in excessive data movement; staggering scale-in mitigates this problem. Scale-in is delayed by s time steps, using slightly more resources in exchange for stability.
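
A hedged sketch of one way to implement this delay (the policy and parameter names are illustrative):

    def staggered_node_count(requested_history, s):
        """requested_history: node counts requested by the optimizer, newest last."""
        current = requested_history[-1]
        recent = requested_history[-(s + 1):]
        # Scale out immediately; scale in only once the lower request has persisted
        # for s additional time steps.
        if len(recent) == s + 1 and max(recent) == current:
            return current
        return max(recent)

    print(staggered_node_count([4, 3, 3, 3], s=2))   # -> 3: low request held for s steps
    print(staggered_node_count([4, 3, 3], s=2))      # -> 4: keep the extra node for now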

37 Experimental Evaluation

38 Optimizers Evaluated
ELASCA: our workload-aware optimizer. ELASCA-S: ELASCA with staggered scale-in. OFFLINE: an offline optimizer that minimizes resources used and data movement. GREEDY: a greedy first-fit optimizer. SCO: a static, fully provisioned system (no optimization).

39 Benchmarks Used TPC-C: Modified to make it cleanly partitioned and fit in memory (3.6 GB) TATP: Telecommunication Application Transaction Processing Benchmark (250 MB) YCSB: Yahoo! Cloud Serving Benchmark with 50/50 read/write ratio (1 GB)

40 Dynamic Workloads
Varying the aggregate request rate using periodic waveforms (sine, triangle, sawtooth). Skewing the data access: temporal skew and statistical distributions (uniform, normal, categorical, Zipfian).
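
A hedged sketch of how such workloads can be generated (the parameters and exact distributions are illustrative, not the thesis's settings):

    import math, random

    def request_rate(t, period=900.0, base=10000, amplitude=8000, shape="sine"):
        """Aggregate requests/sec at time t for a periodic workload."""
        phase = (t % period) / period
        if shape == "sine":
            return base + amplitude * math.sin(2 * math.pi * phase)
        if shape == "triangle":
            return base + amplitude * (1 - 4 * abs(phase - 0.5))
        return base + amplitude * (2 * phase - 1)             # sawtooth

    def pick_partition(num_partitions, zipf_s=1.2):
        """Zipf-skewed choice of target partition: low-numbered partitions are hot."""
        weights = [1.0 / (i + 1) ** zipf_s for i in range(num_partitions)]
        return random.choices(range(num_partitions), weights=weights, k=1)[0]

    print(request_rate(100, shape="triangle"), pick_partition(16))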

41 Temporal Skew

42 Experimental Setup
Each experiment runs for 1 hour (15 time intervals); the optimizer runs every four minutes. A combination of simulation and actual runs: exact numbers for data movement, resources used, and load balance are obtained through simulation. The cluster has 4 nodes, plus 2 separate client machines.

43 Data Movement (TPC-C), Triangle Wave (f = 1)
ELASCA – 63%, ELASCA-S – 72% reduction in data movement. Why is ELASCA better? Data movement and load balance are explicit parts of its optimization objective.

44 Data Movement (TPC-C), Triangle Wave (f = 1), Zipfian Skew
P = 64 partitions. ELASCA – 73%, ELASCA-S – 79% reduction in data movement.

45 Data Movement (TPC-C), Triangle Wave (f = 4)
[Graph callouts: ELASCA-S; GREEDY – 83%; ELASCA – 57%]

46 Computing Resources Saved (TPC-C)
Triangle Wave (f = 1). Up to 14% difference.

47 Load Balance (TPC-C) Triangle Wave (f = 1) 4x higher variance

48 Database Throughput (TPC-C)
Sine Wave (f = 2). Worst-case throughput degradation: ELASCA – 6%, GREEDY – 14%.

49 Database Throughput (TPC-C)
Sine Wave (f = 2), Normal Skew

50 Database Throughput (TATP)
Sine Wave (f = 2). The gap is small because of the initiator bottleneck and the smaller number of partitions.

51 Database Throughput (YCSB)
Sine Wave (f = 2)

52 Database Throughput (TPC-C)
Triangle Wave (f = 4). Worst-case throughput degradation: GREEDY – 21%, ELASCA – 15%, ELASCA-S – 6%, OFFLINE – 12%.

53 Optimizer Scalability

54 Supporting Multi-Partition Transactions

55 Factors Affecting Performance
Maximum MPT throughput (η): the maximum number of transactions an execution site can coordinate per second. Probability of MPTs (p_mpt): the percentage of transactions that are MPTs. Partitions involved in MPTs: the number of partitions each MPT touches.

56 Changes to Model
The CPU load generated by each partition is the sum of: load due to transaction work (same as for single-partition transactions, SPTs) and load due to coordinating MPTs.
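
As a hedged sketch (the thesis's exact formulas may differ), with α, η, and p_mpt as defined earlier:

    def partition_load_with_mpt(txn_rate, alpha, p_mpt, eta):
        """Per-partition CPU load when some transactions are multi-partition."""
        spt_work = txn_rate / alpha                  # transaction work, as before
        mpt_coordination = (txn_rate * p_mpt) / eta  # cost of coordinating MPTs
        return spt_work + mpt_coordination

    print(partition_load_with_mpt(txn_rate=5000, alpha=20000, p_mpt=0.1, eta=2000))  # -> 0.5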

57 Maximum MPT Throughput
Simulation-based measurement.

58 Probability of MPTs

59 Effect on Resources Saved
The optimizer was modified as described above and re-run to obtain these values.

60 Effect on Data Movement

61 Conclusion

62 Related Work
Data replication and partitioning; database consolidation; live database migration; key-value stores; data placement.

63 Elasca = Elastic Scale-Out Mechanism + Partition Placement & Migration Optimizer

64 Conclusion
Elasca = Mechanism + Optimizer. The workload-aware optimizer meets performance demands, minimizes computing resources used, minimizes data movement, effectively balances load, and scales to large problem sizes in an online setting.

65 Future Work
Migrating to VoltDB 3.0 (intelligent client routing, master/slave partitions); supporting multi-partition transactions; automated parameter tuning; transaction mixes; workload prediction.

66 Thank You Questions?

