1
Cloud Storage: All your data belongs to us!
Theo Benson
These slides include images from the Megastore and Cassandra papers and conference slides.
2
Outline
– Storage review
– Workload
– Cassandra [FB]: high reads, high writes
– Megastore [Google]: high reads, low writes
3
NoSQL vs. RDBMS
NoSQL (Bigtable, HBase, Cassandra)
– Pros: highly scalable; highly available
– Cons: lean feature set (put/get of keys) with limited schema, query, and API support; only eventual (loose) consistency
4
NoSQL vs. RDBMS
RDBMS (MySQL, SQL Server)
– Pros: very mature and well understood; strong consistency and strong transactional semantics
– Cons: hard to scale (takes lots of time); synchronous replication may have performance and scalability issues, and may lack fault-tolerant replication mechanisms
5
Motivation
Existing solutions (NoSQL vs. RDBMS) are not sufficient on their own.
Goal (software storage system): global, reliable, arbitrarily large in scale.
Reality (hardware systems): geographically confined, failure-prone, limited in capacity.
6
Building Blocks for Storage
– Commodity servers: limited resources, fail quite easily, quite cheap
– Geo-distributed data centers: many servers, potential for low latency
7
Big Data Wants … the Best of Both Worlds
– Highly scalable: good performance!
– Highly available: fault tolerant
– Very mature and well understood: strong consistency, strong transactional semantics
What is good performance? It depends on the workload:
– Read-heavy: search
– Read/write-heavy: social networks
8
Outline
– Storage review
– Workload
– Cassandra [FB]: high reads, high writes
– Megastore [Google]: high reads, low writes
9
Motivation
Online services should be:
– highly scalable (millions of users)
– responsive to users
– highly available
– able to provide a consistent view of data (updates should be visible immediately and be durable)
It is challenging for the underlying storage system to meet all of these potentially conflicting requirements.
10
Workloads
Facebook:
– Personalized data is localized
– Delay in page load time is somewhat acceptable (what else will you use – MySpace?)
– Constantly updating and reading; scope is localized
– High reads / high writes
Google:
– Same data used by everyone
– Delay in load is unacceptable
– Index once, read many (essentially unlimited) times
– High reads / low writes
11
Outline
– Storage review
– Workload
– Cassandra [FB]: high reads, high writes
– Megastore [Google]: high reads, low writes
12
Data Model
– A table is a multi-dimensional map indexed by a key (the row key).
– Columns are grouped into Column Families.
– Two types of Column Families: Simple and Super (nested Column Families).
– Each column has a name, a value, and a timestamp (a concrete sketch follows).
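To make this structure concrete, here is a minimal Python sketch of the nested-map view of a Cassandra-style table; the keyspace, column-family, row, and column names are hypothetical examples, not taken from the paper.

```python
import time

# A Cassandra-style table is roughly a nested map:
# column_family -> row_key -> column_name -> (value, timestamp)
keyspace = {
    "Users": {                               # simple column family
        "alice": {                           # row key
            "email": ("alice@example.com", time.time()),
            "city":  ("Boston",            time.time()),
        }
    },
    "UserPosts": {                           # super column family (nested)
        "alice": {
            "post-42": {                     # super column groups sub-columns
                "title": ("Hello world", time.time()),
                "likes": ("17",          time.time()),
            }
        }
    },
}

# A read is a lookup down the map; the timestamp is used to
# reconcile conflicting versions of the same column.
value, ts = keyspace["Users"]["alice"]["email"]
print(value, ts)
```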
13
Data Model
[Figure: a keyspace (with its settings) contains column families (with their settings); each column family holds columns with a name, value, and timestamp. Figure taken from Eben Hewitt's (author of O'Reilly's Cassandra book) slides.]
14
System Architecture
Partitioning: provides high throughput
– How is data partitioned across nodes? (load balancing, overcoming failures)
Replication: overcomes failures
– How is data duplicated across nodes?
15
Partitioning and Replication
[Figure: nodes A–F placed on a hash ring spanning [0, 1), with keys mapped to positions h(key1) and h(key2); replication factor N=3. Figure taken from Avinash Lakshman and Prashant Malik's (the paper's authors) slides.]
– Nodes are logically structured in a ring topology.
– Hash each item's key: h(k) is used to assign content to a node.
– h(k) mod S determines the item's location on the ring; it is stored at the first node encountered walking clockwise from that position (a sketch follows).
– Lightly loaded nodes move position on the ring to alleviate heavily loaded nodes.
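Below is a minimal sketch of the consistent-hashing idea described on this slide, assuming an MD5-based hash and hypothetical node names; it is an illustration, not Cassandra's actual code.

```python
import hashlib
from bisect import bisect_right

RING_SIZE = 2**32  # hypothetical hash-space size S

def h(key: str) -> int:
    """Hash a key (or node id) onto the ring."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % RING_SIZE

# Place nodes on the ring by hashing their identifiers.
nodes = ["A", "B", "C", "D", "E", "F"]
ring = sorted((h(n), n) for n in nodes)          # (position, node) pairs
positions = [p for p, _ in ring]

def coordinator(key: str) -> str:
    """A key is owned by the first node clockwise from h(key)."""
    idx = bisect_right(positions, h(key)) % len(ring)
    return ring[idx][1]

print(coordinator("key1"), coordinator("key2"))
```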
16
Challenges with the Partitioning Scheme
The basic partitioning scheme doesn't account for:
– Load imbalance (some items are more popular than others)
– Key imbalance (items aren't evenly distributed across keys/nodes)
How to solve this? (see the sketch below)
– Have a node appear in multiple locations on the ring
– Move lightly loaded nodes around the ring
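One way to make a node "appear in multiple locations" is virtual nodes: each physical node is hashed onto the ring several times, which smooths out imbalance. A small sketch, with an arbitrary token count chosen purely for illustration:

```python
import hashlib
from bisect import bisect_right
from collections import Counter

RING_SIZE = 2**32
TOKENS_PER_NODE = 8  # illustrative; more tokens => smoother load spread

def h(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % RING_SIZE

# Each physical node owns several positions ("virtual nodes") on the ring.
nodes = ["A", "B", "C"]
ring = sorted((h(f"{n}#{i}"), n) for n in nodes for i in range(TOKENS_PER_NODE))
positions = [p for p, _ in ring]

def owner(key: str) -> str:
    idx = bisect_right(positions, h(key)) % len(ring)
    return ring[idx][1]

# Rough check that keys spread fairly evenly across the physical nodes.
print(Counter(owner(f"key{i}") for i in range(10_000)))
```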
17
System Architecture
Partitioning: provides high throughput
– How is data partitioned across nodes?
– What do we want from a good partitioning algorithm?
Replication: overcomes failures
– How is data duplicated across nodes?
18
System Architecture
Partitioning: provides high throughput
– How is data partitioned across nodes?
– What do we want from a good partitioning algorithm?
Replication: overcomes failures
– How is data duplicated across nodes?
– Challenges: consistency issues, the overhead of replication
19
Replication
Each data item is replicated at N nodes (N is the replication factor). Different replication policies (see the sketch below):
– Rack Unaware: replicate data at the N-1 successive nodes after its coordinator
– Rack Aware: uses ZooKeeper to elect a leader, which tells nodes the key ranges they are replicas for
– Datacenter Aware: similar to Rack Aware, but the leader is chosen at the datacenter level instead of the rack level. Why??
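A minimal sketch of the rack-unaware policy: the coordinator for a key plus the next N-1 nodes walking clockwise around the ring hold the replicas. The hash function and node names are the same hypothetical ones as in the earlier sketches.

```python
import hashlib
from bisect import bisect_right

RING_SIZE = 2**32

def h(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % RING_SIZE

nodes = ["A", "B", "C", "D", "E", "F"]
ring = sorted((h(n), n) for n in nodes)   # one token per node => successors are distinct
positions = [p for p, _ in ring]

def preference_list(key: str, n_replicas: int = 3) -> list[str]:
    """Rack-unaware placement: coordinator plus its N-1 clockwise successors."""
    start = bisect_right(positions, h(key)) % len(ring)
    return [ring[(start + i) % len(ring)][1] for i in range(n_replicas)]

print(preference_list("key1"))
```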
20
Local Persistence
Cassandra relies on the local file system for data persistence.
Write operations happen in 2 steps:
– Write to a commit log on the node's local disk
– Update an in-memory data structure
Read operations (sketched below):
– Look up the in-memory data structure first, before looking at files on disk
– Use a Bloom filter (a summary of the keys in each file, kept in memory) to avoid searching files that do not contain the key
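A simplified sketch of this write/read path, using a toy append-only commit log, plain dicts in place of the real memtable and on-disk files, and a deliberately naive Bloom filter; it illustrates the flow rather than Cassandra's actual implementation.

```python
import hashlib

class BloomFilter:
    """Naive Bloom filter: may report false positives, never false negatives."""
    def __init__(self, size: int = 1024, hashes: int = 3):
        self.size, self.hashes, self.bits = size, hashes, [False] * size

    def _positions(self, key: str):
        for i in range(self.hashes):
            yield int(hashlib.md5(f"{i}:{key}".encode()).hexdigest(), 16) % self.size

    def add(self, key: str):
        for p in self._positions(key):
            self.bits[p] = True

    def might_contain(self, key: str) -> bool:
        return all(self.bits[p] for p in self._positions(key))


class ToyStore:
    def __init__(self, log_path: str = "commit.log"):
        self.log_path = log_path
        self.memtable: dict[str, str] = {}   # in-memory data structure
        self.files: dict[str, str] = {}      # stand-in for on-disk data files
        self.bloom = BloomFilter()           # summarizes keys present in self.files

    def write(self, key: str, value: str):
        # Step 1: append to the commit log on local disk (durability).
        with open(self.log_path, "a") as log:
            log.write(f"{key}\t{value}\n")
        # Step 2: update the in-memory data structure.
        self.memtable[key] = value

    def flush(self):
        # Dump the memtable to a "file" and index its keys in the Bloom filter.
        for key, value in self.memtable.items():
            self.files[key] = value
            self.bloom.add(key)
        self.memtable.clear()

    def read(self, key: str):
        # Look in memory first; only touch "disk" if the Bloom filter says the key may exist.
        if key in self.memtable:
            return self.memtable[key]
        if self.bloom.might_contain(key):
            return self.files.get(key)
        return None

store = ToyStore()
store.write("user:1", "alice")
store.flush()
print(store.read("user:1"))   # -> alice
```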
21
Read Operation
[Figure: the client's query goes to the Cassandra cluster; the full result is read from the closest replica (Replica A), while digest queries go to Replicas B and C, which send digest responses; read repair is triggered if the digests differ. Figure taken from Avinash Lakshman and Prashant Malik's (the paper's authors) slides.]
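A minimal sketch of the digest-read idea from the figure: fetch the full value from the closest replica, fetch only digests from the others, and repair any replica whose digest disagrees. The Replica objects here are hypothetical in-memory stand-ins.

```python
import hashlib

def digest(value: str | None) -> str:
    return hashlib.md5(repr(value).encode()).hexdigest()

class Replica:
    def __init__(self, name: str, data: dict[str, str]):
        self.name, self.data = name, data

    def read(self, key: str) -> str | None:
        return self.data.get(key)

    def read_digest(self, key: str) -> str:
        return digest(self.data.get(key))

    def repair(self, key: str, value: str | None):
        if value is not None:
            self.data[key] = value

def coordinated_read(key: str, replicas: list[Replica]) -> str | None:
    closest, others = replicas[0], replicas[1:]
    value = closest.read(key)                  # full result from the closest replica
    expected = digest(value)
    for r in others:                           # digest queries to the rest
        if r.read_digest(key) != expected:
            r.repair(key, value)               # read repair if digests differ
    return value

replicas = [Replica("A", {"k": "v1"}), Replica("B", {"k": "v1"}), Replica("C", {})]
print(coordinated_read("k", replicas))         # -> v1, and replica C is repaired
```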
22
Failure Detection
Traditional approach:
– Heartbeats (used by HDFS and Hadoop): binary (yes/no)
– If you don't receive X consecutive heartbeats, assume failure
Accrual failure detection approach:
– Returns a number representing the probability that the node is dead, e.g. "X of the last Y messages were received": (X/Y)*100%
– Modify this number to reflect network congestion and server load
– Based on the distribution of inter-arrival times of heartbeat messages
How would you do this? (one possibility is sketched below)
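One possible answer, sketched under the assumption used in the Cassandra paper that heartbeat inter-arrival times are roughly exponentially distributed: the detector outputs a suspicion value phi that grows the longer a node stays silent, scaled by how regularly it reported in the past. This is an illustrative sketch, not Cassandra's implementation.

```python
import math
import time
from collections import deque

class AccrualFailureDetector:
    """Phi-style accrual detector: higher phi => higher confidence the node is dead."""
    def __init__(self, window: int = 100):
        self.intervals: deque[float] = deque(maxlen=window)
        self.last_heartbeat: float | None = None

    def heartbeat(self, now: float | None = None):
        now = now if now is not None else time.time()
        if self.last_heartbeat is not None:
            self.intervals.append(now - self.last_heartbeat)
        self.last_heartbeat = now

    def phi(self, now: float | None = None) -> float:
        now = now if now is not None else time.time()
        if self.last_heartbeat is None or not self.intervals:
            return 0.0
        mean = sum(self.intervals) / len(self.intervals)
        elapsed = now - self.last_heartbeat
        # Assuming exponentially distributed inter-arrival times,
        # P(next heartbeat arrives later than `elapsed`) = exp(-elapsed / mean),
        # and phi = -log10 of that probability.
        return elapsed / (mean * math.log(10))

detector = AccrualFailureDetector()
for t in (0.0, 1.0, 2.1, 3.0):      # regular heartbeats roughly every second
    detector.heartbeat(now=t)
print(detector.phi(now=3.5))        # small phi: probably alive
print(detector.phi(now=13.0))       # large phi: suspect the node is dead
```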
23
Outline
– Storage review
– Workload
– Cassandra [FB]: high reads, high writes
– Megastore [Google]: high reads, low writes
– Storage in the Cloud: HDFS in VMware
24
HDFS (also GFS)
– A version of NoSQL storage
– Provides a get/put API
– Optimized for high read throughput (uses replication)
– Bad write performance!
25
Contribution
Megastore overcomes these limitations by combining NoSQL with RDBMS techniques.
Partitioning, for high throughput and scale:
– Partition the data space into a number of distinct little databases
Replication, for low latency and fault tolerance:
– Replicate each little database across the world
– Optimize latency over long links: a synchronous, fault-tolerant log replication system (Paxos)
26
Scale Axis vs. Wide Replication Axis
For scalability and throughput, based on the observation that most Internet services can be suitably partitioned, Megastore partitions the database into entity groups (small sub-databases), each with its own replicated log stored in a per-replica, per-datacenter Bigtable.
27
More Features
Megastore enriches traditional GFS/HDFS-style storage:
– Adds rich primitives, e.g. ACID transactions and consistency guarantees
– Simplifies application development
The biggest challenge is consistency across replicas.
28
Consistency
Recall that reads and writes can happen simultaneously.
Megastore offers different read consistency levels, trading off latency versus consistency (a sketch follows):
– Current read
– Snapshot read
– Inconsistent read
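A rough sketch of how the three read levels can differ, modeled as a replica that keeps a log of committed transactions and tracks how much of that log it has applied locally; the class and method names are illustrative, not Megastore's API.

```python
class ToyReplica:
    """Toy replica illustrating Megastore-style read levels (illustrative only)."""
    def __init__(self):
        self.log: list[dict[str, str]] = []  # committed transactions, each a set of writes
        self.fully_applied = 0               # number of transactions applied completely
        self.cells: dict[str, str] = {}      # raw local cells the applier writes into

    def commit(self, writes: dict[str, str]):
        self.log.append(writes)

    def apply_pending(self):
        """Catch the local state up with everything committed so far."""
        for txn in self.log[self.fully_applied:]:
            self.cells.update(txn)
            self.fully_applied += 1

    def current_read(self, key: str) -> str | None:
        # Current read: ensure all committed transactions are applied before reading.
        self.apply_pending()
        return self.cells.get(key)

    def snapshot_read(self, key: str) -> str | None:
        # Snapshot read: the state as of the last *fully applied* transaction,
        # without waiting for newer committed-but-unapplied writes.
        snapshot: dict[str, str] = {}
        for txn in self.log[:self.fully_applied]:
            snapshot.update(txn)
        return snapshot.get(key)

    def inconsistent_read(self, key: str) -> str | None:
        # Inconsistent read: whatever the local cells hold, ignoring log state entirely.
        return self.cells.get(key)

r = ToyReplica()
r.commit({"x": "1"}); r.apply_pending()
r.commit({"x": "2"})                 # committed but not yet applied locally
print(r.snapshot_read("x"))          # -> 1 (last fully applied state)
print(r.current_read("x"))           # -> 2 (catches up first)
```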
29
Distributed Storage Side by Side
Cassandra (Facebook):
– High-throughput reads
– Focus on operation within a datacenter
– NoSQL
– Partitioning for throughput: scales
– Replication: fault tolerance
Megastore (Google):
– High-throughput reads
– Low-latency writes
– NoSQL + RDBMS
– Partitioning for throughput: scales
– Replication for low latency: fault tolerance
30
Questions