
1 Decentralized Distributed Storage System for Big Data
Presenter: Wei Xie
Data-Intensive Scalable Computing Laboratory (DISCL)
Computer Science Department, Texas Tech University
2016 Symposium on Big Data

2 Outline
 Trends in big data and cloud storage
 Decentralized storage techniques
 The Unistore project at Texas Tech

3 Big Data Storage Requirements
 Large capacity: hundreds of terabytes of data and more
 Performance-intensive: demanding big data analytics applications with real-time response
 Data protection: protect hundreds of terabytes of data from loss

4 Why Data Warehousing Fails in Big Data
 Data warehousing has been used to process very large data sets for decades
 A core component of business intelligence
 Not designed to handle unstructured data (emails, log files, social media, etc.)
 Not designed for real-time, fast response

5 Comparison
 Traditional data warehousing problem: retrieve the sales figures of a particular item in a chain of retail stores from a database
 Big data problem: cross-reference sales of a particular item with weather conditions at the time of sale, or with various customer details, and retrieve that information quickly

6 Big Data Storage Trends
 Scale-out storage
  - A number of compute/storage elements connected via a network
  - Capacity and performance can be added incrementally
  - Not limited by a RAID controller

7 Big Data Storage Trends
 Scale-out NAS (NAS: network-attached storage)
  - Scale-out offers more flexible capacity/performance expansion (add NAS nodes instead of disks in the slots of a single NAS)
  - A parallel/distributed file system (e.g., Hadoop) handles the scale-out NAS
  - Products: EMC Isilon, Hitachi Data Systems, DataDirect Networks hScaler, IBM SONAS, HP X9000, and NetApp Data ONTAP

8 Big Data Storage Trends
 Object storage
  - Flat namespace instead of the hierarchical namespace of a file system
  - Objects are identified by IDs
  - Better scalability and performance for very large numbers of objects
  - Example: Amazon S3
 Hyperscale architecture
  - Mainly used at large infrastructure sites by Facebook, Google, Microsoft, and Amazon
  - Scale-out DAS (direct-attached storage): commodity enterprise servers with attached storage devices
  - Redundancy: fail over an entire server instead of individual components
  - Hadoop runs on top of a cluster of DAS servers to support big data analytics
  - Part of the software-defined storage platform
  - Commercial product: EMC ViPR

9 Hyper-converged Storage
 Compute, network, storage, and virtualization tightly integrated
 Buy a hardware box and get all you need
 Examples: VMware, Nutanix, Nimboxx

10 Scale-out Storage: Centralized vs. Decentralized
 Centralized storage cluster: metadata server, storage servers, and interconnections
  - Scalability is bounded by the metadata server
  - Multi-site distributed storage is difficult
  - Redundancy achieved by RAID
 Decentralized storage cluster
  - No metadata server to limit scalability
  - Multi-site, geographically distributed
  - Data replicated across servers, racks, or sites

11 Decentralized Storage
 How to distribute data across nodes/servers/disks?
  - P2P-based protocols
  - Distributed hash tables
 Advantages
  - Incremental scalability: build a small cluster and expand in the future
  - Self-organizing
  - Redundancy
 Issues
  - Data migration upon data center expansion and failures
  - Handling heterogeneous servers

12 Decentralized Storage: Consistent Hashing
[Figure: servers and data keys placed on a ring by the SHA-1 function. Before a node is added, server 1 holds D1, server 2 holds D2, and server 3 holds D3; after server 4 joins, server 4 holds D1, server 2 holds D2, server 3 holds D3, and server 1 holds nothing.]
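
To make the ring concrete, here is a minimal consistent-hashing sketch in Python, assuming SHA-1 placement as in the figure; the class and function names are illustrative only, not Sheepdog's or Unistore's actual API.

```python
# Minimal consistent-hashing ring sketch (illustrative; not a real system's API).
import hashlib
from bisect import bisect

def sha1_int(name: str) -> int:
    """Map a key or server name onto the ring [0, 2^160) with SHA-1."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16)

class ConsistentHashRing:
    def __init__(self, servers=()):
        self.servers = {}      # ring position -> server name
        self.positions = []    # sorted ring positions
        for s in servers:
            self.add_server(s)

    def add_server(self, server: str) -> None:
        self.servers[sha1_int(server)] = server
        self.positions = sorted(self.servers)

    def locate(self, key: str) -> str:
        """A key belongs to the first server clockwise from its hash position."""
        idx = bisect(self.positions, sha1_int(key)) % len(self.positions)
        return self.servers[self.positions[idx]]

ring = ConsistentHashRing(["server-1", "server-2", "server-3"])
for obj in ["D1", "D2", "D3"]:
    print(obj, "->", ring.locate(obj))
```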

13 Properties of Consistent Hashing
 Balance: each server owns an equal portion of the keys
 Smoothness: when the k-th server is added, only the 1/k fraction of keys located between it and its predecessor server needs to be migrated (see the sketch below)
 Fault tolerance: keep multiple copies of each key; if one server goes down, the next successor takes over with only a small change to the cluster view, and balance still holds
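
A quick, self-contained way to see the smoothness property is to simulate adding one more server and count how many keys change owner; the fraction hovers around 1/k on average (real systems add virtual nodes to reduce the variance). The helper names below are made up for the demo.

```python
# Toy simulation of the smoothness property of consistent hashing.
import hashlib
from bisect import bisect

def h(name: str) -> int:
    return int(hashlib.sha1(name.encode()).hexdigest(), 16)

def build_ring(servers):
    return sorted((h(s), s) for s in servers)

def owner(key, ring):
    idx = bisect(ring, (h(key), "")) % len(ring)
    return ring[idx][1]

keys = [f"object-{i}" for i in range(100_000)]
old_ring = build_ring(f"server-{i}" for i in range(9))
new_ring = build_ring(f"server-{i}" for i in range(10))   # add the 10th server

moved = sum(owner(k, old_ring) != owner(k, new_ring) for k in keys)
print(f"{moved / len(keys):.1%} of keys migrated (expected ~1/10 on average)")
```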

14 Unistore Overview
 Goal: build a unified storage architecture (Unistore) for cloud storage systems with the co-existence and efficient integration of heterogeneous HDDs and SCM (storage class memory) devices
 Based on Sheepdog, a decentralized, consistent-hashing-based storage system (a rough dispatch sketch follows below)
[Architecture diagram: workloads (access patterns) and devices (bandwidth, throughput, block erasure, concurrency, wear-leveling) feed a characterization component that classifies I/O patterns (random/sequential, read/write, hot/cold) and exposes I/O functions (Write_to_SSD, Read_from_SSD, Write_to_HDD); a data placement component applies a modified consistent-hashing placement algorithm]
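
As a rough reading of the diagram (not the project's actual code), the characterization component labels each request and the data placement component routes it to an SSD or HDD function; the sketch below only illustrates that dispatch idea, with lowercase stand-ins for the Write_to_SSD/Write_to_HDD functions named on the slide and invented pattern labels.

```python
# Illustrative dispatch sketch; function and label names are assumptions.
def write_to_ssd(obj_id, data):
    print(f"SSD <- {obj_id} ({len(data)} bytes)")   # stands in for Write_to_SSD

def write_to_hdd(obj_id, data):
    print(f"HDD <- {obj_id} ({len(data)} bytes)")   # stands in for Write_to_HDD

def dispatch_write(obj_id, data, pattern):
    """pattern: labels produced by the characterization step,
    e.g. {'hot': True, 'random': False} (hypothetical format)."""
    if pattern["hot"] or pattern["random"]:
        write_to_ssd(obj_id, data)   # hot or random writes favor the SSD tier
    else:
        write_to_hdd(obj_id, data)   # cold sequential writes go to HDD

dispatch_write("obj-1", b"x" * 4096, {"hot": True, "random": False})
dispatch_write("obj-2", b"y" * 4096, {"hot": False, "random": False})
```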

15 Background: Heterogeneous Storage
 Heterogeneous storage environment with distinct throughput
  - NVMe SSD: 2000 MB/s or more
  - SATA SSD: ~500 MB/s
  - Enterprise HDD: ~150 MB/s
 Large SSDs are becoming available, but still expensive
  - 1.2 TB NVMe Intel 750 costs $1000
  - 1 TB SATA Samsung 640 EVO costs $500
  - 10x or more costly than HDDs
 SSDs still co-exist with HDDs as accelerators instead of replacing them

16 Background: How to Use SSDs in Cloud-scale Storage
 Traditional way of using SCMs (e.g., SSDs) in cloud-scale distributed storage: as a cache layer
  - Caching/buffering generates extensive writes to the SSD, which wears out the device
  - Needs a fine-tuned caching/buffering scheme
  - Does not fully utilize the capacity of SSDs, which is growing fast
 Tiered storage
  - Data placed on SSD or HDD servers according to requirements: throughput, latency, access frequency
  - Data transferred between tiers when the requirements change

17 Tiered-CRUSH
 CRUSH ensures data is placed across multiple independent locations to improve data availability
 Tiered-CRUSH integrates storage tiering into the CRUSH data placement

18 Tiered-CRUSH
 Virtualized volumes have different access patterns
 Access frequency of objects is recorded per volume; hotter data is more likely to be placed on faster tiers (a toy sketch follows below)
 Fair storage utilization is maintained
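
The following toy sketch illustrates the general placement idea only, not the actual Tiered-CRUSH algorithm: it uses weighted rendezvous hashing rather than CRUSH, and the tier names, capacity weights, and speed factors are invented. Hotter objects amplify the weight of fast tiers, while capacity weights keep utilization roughly fair.

```python
# Toy hotness-aware tier selection via weighted rendezvous hashing (not Tiered-CRUSH).
import hashlib
import math

# tier -> (relative capacity weight, relative speed); values are invented
TIERS = {
    "nvme-ssd": (1.0, 13.0),
    "sata-ssd": (4.0, 3.5),
    "hdd":      (24.0, 1.0),
}

def det_unit(obj_id: str, tier: str) -> float:
    """Deterministic pseudo-random value in (0, 1) derived from the object/tier pair."""
    digest = int(hashlib.sha1(f"{obj_id}:{tier}".encode()).hexdigest(), 16)
    return (digest + 1) / (16 ** 40 + 1)

def choose_tier(obj_id: str, hotness: float) -> str:
    """Pick the tier with the highest rendezvous score.
    hotness in [0, 1]; hot objects boost the speed factor of fast tiers."""
    best_tier, best_score = None, -math.inf
    for tier, (capacity, speed) in TIERS.items():
        weight = capacity * (speed ** hotness)
        score = -weight / math.log(det_unit(obj_id, tier))
        if score > best_score:
            best_tier, best_score = tier, score
    return best_tier

print(choose_tier("vol3/obj42", hotness=0.9))   # hot object, likely a fast tier
print(choose_tier("vol7/obj13", hotness=0.1))   # cold object, likely HDD
```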

19 Tiered-CRUSH: Evaluation
 Implemented in a benchmark tool compiled with the CRUSH library functions
 Simulation showed that data distribution uniformity can be maintained
 Simulation shows a 1.5x to 2x improvement in overall bandwidth in our experimental settings

Device name        Number  Capacity (GB)  Read bandwidth (MB/s)
Samsung NVMe SSD   1       128            2000
Samsung SATA SSD   2       256            540
Seagate HDD        3       1000           156

20 Pattern-directed Replication
 Trace object I/O requests when executing applications for the first time
 Trace analysis, correlation finding, and object grouping (a sketch of this step follows below)
 Reorganize objects for replication in the background
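
A minimal sketch of the trace-analysis step, under the assumption that "correlation finding" means counting how often objects are accessed close together in the trace; the window size, threshold, and function names are hypothetical and not the actual PRS implementation.

```python
# Group objects that frequently co-occur in an I/O trace (illustrative only).
from collections import Counter
from itertools import combinations

def correlated_groups(trace, window=8, threshold=3):
    """trace: list of object IDs in access order."""
    pair_counts = Counter()
    for i in range(len(trace)):
        # Count pairs of distinct objects seen within a sliding window.
        for a, b in combinations(sorted(set(trace[i:i + window])), 2):
            pair_counts[(a, b)] += 1

    # Union-find grouping of objects joined by frequent pairs.
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for (a, b), count in pair_counts.items():
        if count >= threshold:
            parent[find(a)] = find(b)

    groups = {}
    for obj in set(trace):
        groups.setdefault(find(obj), []).append(obj)
    return list(groups.values())

trace = ["A", "B", "C", "A", "B", "D", "A", "B", "C", "E", "A", "B"]
print(correlated_groups(trace, window=3, threshold=2))
```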

21 Version Consistent Hashing Scheme
 Build versions into the consistent hashing (one possible reading is sketched below)
 Avoid data migration when adding nodes or when nodes fail
 Maintain efficient data lookup
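
A minimal sketch of one possible reading of this idea, not the authors' actual scheme: every ring change is recorded as a new version, new objects are placed with the latest ring, and a lookup falls back through older ring versions instead of migrating data when nodes are added.

```python
# Speculative sketch of versioned consistent hashing; names are invented.
import hashlib
from bisect import bisect

def h(name: str) -> int:
    return int(hashlib.sha1(name.encode()).hexdigest(), 16)

class VersionedRing:
    def __init__(self, servers):
        self.versions = [sorted((h(s), s) for s in servers)]

    def add_server(self, server):
        # Record a new ring version instead of reshuffling existing data.
        new_ring = sorted(self.versions[-1] + [(h(server), server)])
        self.versions.append(new_ring)

    def owner(self, key, version=-1):
        ring = self.versions[version]
        idx = bisect(ring, (h(key), "")) % len(ring)
        return ring[idx][1]

    def lookup_candidates(self, key):
        # Check the newest ring version first, then fall back to older ones.
        seen, order = set(), []
        for v in range(len(self.versions) - 1, -1, -1):
            s = self.owner(key, v)
            if s not in seen:
                seen.add(s)
                order.append(s)
        return order

ring = VersionedRing(["server-1", "server-2", "server-3"])
ring.add_server("server-4")
print(ring.lookup_candidates("D1"))   # where to look for D1, newest ring first
```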

22 Conclusions
 Decentralized storage is becoming the standard in cloud storage
 The Tiered-CRUSH algorithm achieves better I/O performance and higher data availability at the same time for heterogeneous storage systems
 The version consistent hashing scheme improves the manageability of the data center
 PRS (pattern-directed replication) delivers high-performance data replication by reorganizing the placement of data replicas

23 Thank you! Questions?
Visit discl.cs.ttu.edu for more details.

