2 Scality RING Organic Storage 4 Jan. 2013

3 Agenda: Corporate Profile; Industry Trends and Business Challenges; Scality RING 4; Scality Solutions; Ecosystem; Roadmap and Vision; Conclusion. Slide 3

4 Scality – Quick Facts Founded 2009. Experienced management team. HQ in San Francisco, global reach. <50 employees, 20 engineers in Paris. 24x7 support team. US patents. $13M invested in Scality to date. 120% annual growth. Industry associations. Slide 4 “Aggressive use of a scale-out architecture like that enabled by Scality's RING architecture will become more prevalent, as IT organizations develop best practices that boost storage asset use, reduce operational overhead, and meet high data availability expectations.”

5 A Rolling and Unstoppable Ball… Slide 5

6 Industry Trends Rapid unstructured data growth: “Enterprises will see a staggering 52% growth in data over the next years — much of it an increase in unstructured data” (IDC); ×8-10 in 5 years, ×100 in 10 years: 100TB today will be 1PB, then 10PB. Rapid decline in storage costs: “Cost of disk storage systems is declining >30% annually” (IDC). Skyrocketing storage budget & complexity: more storage devices to manage, increasing cost of commercial real estate, increasing energy costs (power, cooling); the cost of managing storage is 5 to 10 times the acquisition cost (Gartner, IDC). Slide 6 (Chart: $/TB cost vs. TB growth vs. storage budget.)
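The growth multipliers quoted above can be sanity-checked with compound-growth arithmetic; a minimal sketch, assuming the quoted 52% annual rate applies uniformly every year (the slide does not state the exact horizon for each multiplier):

```python
# Compound-growth check: assumes a uniform 52% annual growth rate,
# as quoted from IDC on this slide.
annual_growth = 0.52

five_year = (1 + annual_growth) ** 5    # ~8.1x: matches "x8-10 in 5 years"
ten_year = (1 + annual_growth) ** 10    # ~66x: the "x100 in 10 years" figure
                                        # implies a slightly higher rate

start_tb = 100
in_5y_tb = start_tb * five_year         # 100 TB today grows to ~1 PB scale
in_10y_tb = start_tb * ten_year         # and to several PB in 10 years
```

At 52% per year the 5-year multiplier lands squarely in the slide's "×8-10" range; the "×100 in 10 years" figure is consistent with a rate nearer 58-60%.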

7 Data and Storage Challenges Hundreds of millions of users, 10s-100s of PB of data, and billions of files to store and serve. What do all these companies have in common? An Internet/cloud business is impossible to sustain and develop with traditional IT approaches. Slide 7

8 Scality’s Mission Slide 8 Their DC, their app., YOUR data → their DC, YOUR app., YOUR data → YOUR DC, YOUR app., YOUR data. Control and efficiency at scale: unified scale-out storage software from primary to long-term storage, with the cloud’s advantages, investment protection, and readiness for the future.

9 Scality RING 4 Slide 9 Scality RING Organic Storage 4: x86, Ring Topology P2P, End-to-End Parallelism, DATA + MD Object Storage, MESA NewSQL DB, Replication, ARC Erasure Coding, Geo Redundancy, Tiering, S3 & CDMI standards, Management. Solutions: File Storage, StaaS, Digital Media, Enterprise & Cloud System, Big Data* (Origin Server, Scale-Out File System, Data Processing with Hadoop, S3 & CDMI API). * Available Q2/2013

10 Hardware Agnostic Slide 10 x86 servers (1U, 2U…4U) with CPU, RAM, Ethernet and DAS running Linux; racks (10…40U); clusters. Scale-out storage software based on a shared-nothing model.

11 Distributed Architecture From servers to storage nodes: RING topology, P2P architecture. Limitless scale-out storage based on a shared-nothing model. Fully distributed storage (data and metadata). Slide 11 P2P: servers (6), storage nodes (e.g. 6 per server, 36 in total), 36 storage nodes projected on a ring; in the Scality RING, 1 node manages 1/36th of the key space.

12 Distributed Architecture DHT & Consistent Hashing 360° key space, each node manages a key range. Key generation and projection (no overhead at running time). MIT CHORD inter-routing algorithm + 3 Scality methods (Proxy, Balance & Rebuild). 1/6 of the key space to rebalance when a storage node is lost. No single point of failure, no central catalog or DB. Stateless architecture: easy to grow, highly resilient and seamless across failures. Self-healing and organic upgrade. Elastic clustering, inherent load balancing & symmetry: every node can serve a request. Slide 12 P2P The maximum # of hops stays small as the node count grows (7 hops at the largest size shown). Topology is cached at the connector level.
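The ring mechanics described above can be sketched in a few lines. This is a minimal illustration, assuming SHA-1 placement of both nodes and keys (the deck specifies a 160-bit key space but not these exact mechanics): each node owns the arc of the ring up to its own position, and an object is routed to the first node at or after its hash, wrapping around.

```python
# Minimal consistent-hashing sketch: 6 servers x 6 storage nodes per
# server = 36 positions on a 160-bit ring, each owning ~1/36th of the
# key space. SHA-1 here is an illustrative assumption.
import hashlib
from bisect import bisect_left

KEY_SPACE = 2 ** 160  # 160-bit key space, as on slide 14

def ring_position(name: str) -> int:
    return int.from_bytes(hashlib.sha1(name.encode()).digest(), "big")

positions = sorted(
    (ring_position(f"server{s}-node{n}"), f"server{s}-node{n}")
    for s in range(6) for n in range(6)
)

def lookup(object_key: str) -> str:
    """Route an object key to the node owning its key range."""
    h = ring_position(object_key)
    idx = bisect_left([p for p, _ in positions], h)
    return positions[idx % len(positions)][1]  # wrap around past the top

owner = lookup("bucket/photo-0001.jpg")
```

Because any node can compute this projection locally, every node can serve a request and there is no central catalog to consult, matching the "no single point of failure" bullet.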

13 End-to-End Parallelism Parallel connector access to storage nodes: performance aggregation, redundant data paths. Multiple storage nodes per server (minimum 6) to increase parallelism and data independence; fast and easy rebuild. Multiple IO daemons per server to control physical IO and boost performance. Slide 13 I/O DAEMONS STORAGE NODES APPLICATIONS / CONNECTORS SSD TIERED STORAGE SATA Scality parallelism factor = #storage nodes × #IO daemons, vs. a simple server node with only one IO engine. Independent performance and capacity scalability.

14 Object and Key/Value Stores An object is an opaque entity with metadata and data. No limit on object size or count. Random block R/W access on large objects: object chunking, splitting, striping and streaming. 160-bit key space with a 128-bit object key. Storage nodes act as simple, high-performance key/value stores, fully autonomous. Slide 14 Key layout: Dispersion (24 bits) | Payload (128 bits) | Class of Service (8 bits, replica class) = 160 bits.
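The key fields shown on this slide (24-bit dispersion, a 128-bit object payload, an 8-bit class of service) add up to the 160-bit key space and can be packed and unpacked with plain bit arithmetic. The field order below is an assumption for illustration; the deck only lists the field widths:

```python
# Hedged sketch of a 160-bit key: 24 dispersion bits + 128 payload bits
# + 8 class-of-service bits (24 + 128 + 8 = 160). Field order assumed.
DISPERSION_BITS, PAYLOAD_BITS, COS_BITS = 24, 128, 8

def pack_key(dispersion: int, payload: int, cos: int) -> int:
    assert dispersion < 2 ** DISPERSION_BITS
    assert payload < 2 ** PAYLOAD_BITS
    assert cos < 2 ** COS_BITS
    return (dispersion << (PAYLOAD_BITS + COS_BITS)) | (payload << COS_BITS) | cos

def unpack_key(key: int) -> tuple[int, int, int]:
    cos = key & (2 ** COS_BITS - 1)
    payload = (key >> COS_BITS) & (2 ** PAYLOAD_BITS - 1)
    dispersion = key >> (PAYLOAD_BITS + COS_BITS)
    return dispersion, payload, cos

key = pack_key(dispersion=0xABCDEF, payload=0x1234, cos=3)
```

Keeping the dispersion bits at the top of the key spreads objects across the ring regardless of how similar their payloads are.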

15 Internal Distributed DB Scality MESA NewSQL philosophy: distributed tables across storage nodes, multiple indexes per table, schema updates/flexible schemas, 100% ACID, 100% elastic, linear transactional performance, automatic fault tolerance, SQL front-end (not exposed). Slide 15 (Diagram: name-to-key mapping table and key/value pages distributed across nodes.)

16 Data Replication No data transformation: clear/native data format, very fast access, simple projection. Class of storage: up to 5 replicas (6 copies), rack-aware, guaranteed fully independent object locations. Self-healing: balances misplaced objects, transparently proxies misplaced objects, rebuilds missing replicas. Permanent CRC of all contents (no silent data corruption). Slide 16
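A minimal sketch of the rack-aware placement and permanent-CRC ideas above. The placement walk, node layout, and thresholds are assumptions for illustration, not Scality's actual algorithm:

```python
# Rack-aware replica placement sketch: walk the node list from the key's
# position and take at most one replica per rack, so copies stay
# independent. Every stored chunk carries a CRC so silent corruption is
# detected on read and the replica can be rebuilt from another copy.
import zlib

nodes = [{"name": f"node{i}", "rack": f"rack{i % 3}"} for i in range(9)]

def place_replicas(key_hash: int, count: int) -> list[dict]:
    chosen, used_racks = [], set()
    start = key_hash % len(nodes)
    for step in range(len(nodes)):
        node = nodes[(start + step) % len(nodes)]
        if node["rack"] not in used_racks:   # rack-aware: one replica per rack
            chosen.append(node)
            used_racks.add(node["rack"])
        if len(chosen) == count:
            break
    return chosen

def store(data: bytes) -> tuple[bytes, int]:
    return data, zlib.crc32(data)            # permanent CRC kept with the data

def read(data: bytes, crc: int) -> bytes:
    if zlib.crc32(data) != crc:              # no silent data corruption
        raise IOError("CRC mismatch: rebuild replica from another copy")
    return data

replicas = place_replicas(key_hash=12345, count=3)
```

With three racks in the toy layout, three replicas always land on three distinct racks, which is what guarantees "fully independent object locations".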

17 ARC: Scality’s Erasure Coding RAID 5 or 6 can’t scale without risk: with vendors’ quoted bit error rates (1 bit error among bits), there is a 55% probability of hitting an error on a 10TB volume, and 99.9% for 100TB (worse beyond). Replication is a great method but too costly for large storage environments. Solution: Scality Advanced Resilience Configuration (ARC). Available as a standard RING Core feature (no extra cost); a configuration option, running within the RING; leveraging Reed-Solomon technology. Slide 17 ARC
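The probabilities quoted above follow from a standard back-of-envelope calculation. This sketch assumes the commonly quoted hard-disk rate of one uncorrectable bit error per 1e14 bits; the slide elides the exact vendor figure:

```python
# P(at least one error reading n bits) = 1 - (1 - BER)^n ~= 1 - exp(-BER*n).
# BER = 1e-14 is an assumed vendor figure, not stated on the slide.
import math

BER = 1e-14

def p_error(terabytes: float) -> float:
    bits = terabytes * 1e12 * 8   # decimal TB -> bits
    return 1 - math.exp(-BER * bits)

p10 = p_error(10)     # ~0.55: the slide's "55% for 10TB"
p100 = p_error(100)   # ~0.9997: the slide's "99.9% for 100TB"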

18 Scality ARC Fully configurable; by default ARC(14,4), meaning 14 data fragments + 4 checksum fragments, so a maximum loss of 4 elements per transaction (18 total fragments committed to disk). A read() is satisfied with any minimum of 14 fragments: (14,0), (13,1), …, (10,4). Data fragments are native data (no transformation): direct and fast read access. Slide 18 ARC Scality ARC(14,4): data inputs → data + checksum fragments stored on the Scality RING. HW overhead ratio = (14+4)/14 ≈ 1.3. Better durability & reduced storage overhead than replication.
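A rough way to see why the slide claims better durability at lower overhead: compare the loss probability of 3-way replication against a (14,4) code that survives any 4 of its 18 fragments failing. The independent per-fragment failure probability of 1% below is purely illustrative:

```python
# Durability back-of-envelope, assuming independent fragment failures
# with probability p each over some window (p = 0.01 is illustrative).
from math import comb

p = 0.01

# 3-way replication: data lost only if all 3 copies fail.
loss_replication = p ** 3

# ARC(14,4): data lost if 5 or more of the 18 fragments fail.
loss_arc = sum(comb(18, k) * p**k * (1 - p)**(18 - k) for k in range(5, 19))

overhead_replication = 3.0        # 3x raw capacity per byte stored
overhead_arc = (14 + 4) / 14      # ~1.29x, the ratio quoted on the slide
```

Under these assumptions the (14,4) code is both more durable and ~2.3× cheaper in raw capacity than triple replication, which is the slide's closing claim.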

19 Scality ARC Comparison Slide 19 Replication (A A A / B B B): storage space ×3, multiple copies, direct access. Dispersed (A-B): reduced storage space, but scrambled data and latency from additional computation on each read (decoding phase). Scality ARC (A, B, A+B): reduced storage space, redundant information, direct access. Scenario #1, loss of A: read(A+B) - read(B) = A. Scenario #2, loss of B: read(A+B) - read(A) = B.
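The two loss scenarios on this slide can be reproduced literally with XOR playing the role of the "A+B" parity fragment. XOR is the simplest erasure code; production ARC uses Reed-Solomon, so this is only a shape-of-the-idea sketch:

```python
# XOR parity: store A, B and A^B. Losing any one of the three leaves
# enough information to reconstruct it, with no data transformation of
# the surviving A and B fragments.
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

A = b"fragment-A"
B = b"fragment-B"
parity = xor(A, B)          # the redundant "A+B" fragment

recovered_A = xor(parity, B)   # scenario #1: A lost
recovered_B = xor(parity, A)   # scenario #2: B lost
```

Note that a normal read of A or B never touches the parity at all, which is the "direct access" advantage over dispersed/scrambled schemes.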

20 Geo Redundancy Slide 20 Two options: a synchronous stretched RING across 2 sites (multi-site topologies up to 6 sites, with replication or geo erasure coding), or asynchronous multiple independent RINGs on multiple sites (independent topologies). Business continuity with “true %” availability, including maintenance.

21 Auto Tiering Tiering like HSM: fully automated and transparent to the application. A 20/80 approach, perfect for hot data on SSD/flash and cold data on SATA. Criteria: age, access time & object size. Potentially different data protection mechanisms: replication with 3 copies on Tier 1, erasure code (Scality ARC) on Tier 2. Slide 21 Tier 1 (primary site, Site A, 20%) and Tier 2 (secondary site, Site B, 80%); data migration across RINGs within the same site or across sites.
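The criteria listed above (age, access time, object size) can be sketched as a tiering policy function. The thresholds below are invented for illustration; the deck does not specify them:

```python
# Illustrative auto-tiering policy: hot objects stay on tier 1
# (SSD/flash), cold or oversized objects migrate to tier 2 (SATA).
# All thresholds (1 GiB, 90 days, 30 days) are assumed values.
def choose_tier(obj: dict, now: float) -> str:
    """Return 'tier1' (hot, SSD/flash) or 'tier2' (cold, SATA)."""
    age_days = (now - obj["created"]) / 86400
    idle_days = (now - obj["last_access"]) / 86400
    if obj["size"] > 1 << 30:                # large objects go straight to SATA
        return "tier2"
    if age_days > 90 or idle_days > 30:      # cold by age or by access time
        return "tier2"
    return "tier1"                           # the hot ~20% stays on SSD

now = 1_000_000_000.0
hot = {"created": now - 86400, "last_access": now, "size": 4096}
cold = {"created": now - 200 * 86400, "last_access": now - 60 * 86400, "size": 4096}
```

Because the policy runs below the access layer, applications see one namespace while objects migrate between RINGs, which is what "transparent to the application" means here.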

22 Access Methods and Standards Open Cloud Access strategy: the Scality RING as the storage backend; file and object access methods; local and remote global namespace. Scality RS2 API (HTTP/REST), compatible with Amazon S3. Full CDMI server implementation, by object ID and by hierarchical path-based namespace. Cloud Files (the OpenStack Swift protocol). Slide 22 SOFS S3 CDMI NFS Scality RING Internet

23 Management Supervisor: central management platform; monitors application connectors & storage nodes down to individual disk drives; passive component; detailed activity log. Slide 23 RingSH: command-line interface, easy to integrate or script with; manage the platform, store/retrieve/delete objects, list all keys.

24 « Exceptional Performance » “ESG Lab verified exceptional performance for an object-based storage solution, which rivals block-based solutions. Aggregate throughput scaled linearly as nodes were added to a RING. Response times improved as the RING grew in size, allowing for predictability when deploying a RING.” Slide 24 Content delivery, objects delivered simultaneously and sustained (5 servers with Intel SSD): Internet audio (MP3): 211,424; Internet image (JPG): 135,311; Internet video (MPEG): 90,208; CD audio (ISO): 18,877; broadcast TV (HD): 2,298.

25 Scality RING – The BIG picture Slide 25

26 Scality Solutions Slide 26 Full platform integration: certified with Openwave Messaging, Critical Path, Cyrus, Dovecot, VMware Zimbra, Open-Xchange. File Storage: Scale-Out File System for Linux, NFS/CIFS, CDMI by path, global namespace, GeoSync. Digital Media: optimized SOFS for a CDN origin server. StaaS: Amazon S3 API, authentication, metering; optional multi-geo, Sync-n-Share (1); S3 & CDMI API. Big Data (2): Hadoop integration with in-place data processing, Scality SOFS instead of HDFS. Scality RING Core: P2P, replication, erasure code, tiering, API/interface (HTTP/REST with Scality SRWS and RS2 Light, CDMI by object ID), management, Open Cloud Access. 1: product available separately / 2: available Q2/2013

27 Scality Ecosystem and Partnership Slide 27 File Sync & Share; Backup & Archive; Gateway; hardware vendor strategic partnerships.

28 Scality Product Availability Slide 28 Software: software only, hardware agnostic; RING Core + Solutions Pack; multi-geo option; priced per usable storage capacity. Appliance: Scality RING Organic Storage 4.0 “appliance” based on MIS (Duo or Solo server); RING Core + StaaS Pack; multi-geo option; priced per enclosure (4U); highest storage density on the market (72 × 4TB in 4U).

29 Release Calendar & Roadmap Slide 29 June 2012, RING: ARC (erasure code); SOFS 1.0 (FUSE); Supervisor (keys, agent, SNMP). Feb., RING: Digital Media; Open Cloud Access (CDMI & SOFS); SOFS GeoSync; Supervisor (usage); OpenStack integration (EBS with Cinder). Next, RING: SOFS 2.0 (NFS, CIFS) + GeoSync; Hadoop (HDFS replacement, computing on nodes); multipart upload. 2014, RING: OCA Phase 3 (metering/multi-tenant + Swift + S3); active/active geo RS2. H1/2014 (TBC), RING 5.0 “Universal Data Platform”: block interface; policy engine & data placement.

30 Scality Technology Milestones Slide 30 Blob store with replication and tiering for primary storage (HTTP, S3). Distributed database. Authentication, metering and security for S3. Scale-Out File System for Linux. Erasure-code data protection. CDMI. File sync-and-share (OEM Nomadesk). Cloud Files (Swift protocol). NFS. CIFS based on Samba 4. Block storage (to add storage to VMs) with OpenStack integration via Cinder. Hadoop for Big Data. SQL front-end for the distributed DB. Policy-based data placement and data protection. Multi-tenancy, QoS management (IOPS, throughput, capacity) & snapshots. (Maturity scale: Mature, Ready, Beta, Alpha, Prototype, Design.)

31 Scality Technical Vision Slide 31 Note: underlined items are roadmap. (Diagram: customer IT infrastructure with SSD and SATA tiers, erasure code, metering, authentication, statistics, security, and discovery.)

32 Conclusion Technology leader. Software-only solution with OEM agreements. Proven, deployed solution. 100% data availability and durability. Fully autonomous. High performance and strong security. Cost efficiency and rapid ROI. Slide 32 Unified Scale-Out Storage Software


