1
Software Defined Storage, Big Data, and Ceph: What Is All the Fuss About?
Kamesh Pemmaraju, Sr. Product Manager, Dell
Neil Levine, Director of Product Management, Red Hat
OpenStack Summit Atlanta, May 2014
2
CEPH
3
CEPH UNIFIED STORAGE
Object storage: native API, multi-tenant S3 & Swift, Keystone, geo-replication
Block storage: OpenStack, Linux kernel, iSCSI, clones, snapshots
File system: CIFS/NFS, HDFS, distributed metadata, Linux kernel, POSIX
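To give a flavour of the native API mentioned above, here is a minimal sketch using the Python librados bindings; the pool name 'data' and the config path are assumptions for illustration, not part of the original slide.

```python
import rados

# Connect using a local ceph.conf and the default admin keyring
# (paths are assumptions; adjust for your deployment).
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

# Open an I/O context on a pool assumed to exist, e.g. 'data'.
ioctx = cluster.open_ioctx('data')

# Write and read back a single object over the native RADOS protocol.
ioctx.write_full('hello-object', b'Stored via the native RADOS API')
print(ioctx.read('hello-object'))

ioctx.close()
cluster.shutdown()
```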
4
ARCHITECTURE (diagram: APP, HOST/VM, and CLIENT access paths into the Ceph storage cluster)
5
COMPONENTS
Interfaces: S3/Swift (object storage), host/hypervisor and iSCSI (block storage), CIFS/NFS (file system), SDK
Storage clusters: monitors and object storage daemons (OSDs)
6
THE PRODUCT
7
INKTANK CEPH ENTERPRISE: WHAT’S INSIDE?
– Ceph Object and Ceph Block
– Calamari
– Enterprise plugins (2014)
– Support services
9
USE CASE: OPENSTACK
10
USE CASE: OPENSTACK (diagram) – volumes, ephemeral disks, copy-on-write snapshots
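The copy-on-write behaviour behind image and volume handling comes from RBD layering: a protected snapshot of a base image can be cloned instantly, and each clone only stores the blocks that diverge from its parent. A minimal sketch with the Python rbd bindings follows; the pool and image names are made up for illustration.

```python
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('rbd')          # assumed pool name

rbd_inst = rbd.RBD()

# Create a 10 GiB base image with layering enabled (required for cloning).
rbd_inst.create(ioctx, 'golden-image', 10 * 1024**3,
                old_format=False, features=rbd.RBD_FEATURE_LAYERING)

# Snapshot the base image and protect the snapshot so clones can reference it.
with rbd.Image(ioctx, 'golden-image') as img:
    img.create_snap('base')
    img.protect_snap('base')

# Copy-on-write clone: created almost instantly, shares unmodified blocks with the parent.
rbd_inst.clone(ioctx, 'golden-image', 'base', ioctx, 'vm-0001-disk',
               features=rbd.RBD_FEATURE_LAYERING)

ioctx.close()
cluster.shutdown()
```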
11
USE CASE: OPENSTACK (continued)
12
USE CASE: CLOUD STORAGE – S3/Swift
13
USE CASE: WEBSCALE APPLICATIONS – applications access the cluster directly over the native protocol
14
ROADMAP: INKTANK CEPH ENTERPRISE – May 2014, Q4 2014, 2015
15
USE CASE: PERFORMANCE BLOCK – Ceph storage cluster (diagram)
16
USE CASE: PERFORMANCE BLOCK – Ceph storage cluster, read/write
17
USE CASE: PERFORMANCE BLOCK – Ceph storage cluster, write and read paths shown separately
18
USE CASE: ARCHIVE / COLD STORAGE – Ceph storage cluster (diagram)
19
ROADMAP: INKTANK CEPH ENTERPRISE – April 2014, September 2014, 2015
20
USE CASE: DATABASES – database hosts access the cluster directly over the native protocol
21
USE CASE: HADOOP – Hadoop nodes access the cluster directly over the native protocol
22
INKTANK UNIVERSITY
Training for proof-of-concept or production users: online training for cloud builders and storage administrators, instructor-led with a virtual lab environment.
Virtual/public sessions: May 21–22 (European time zone), June 4–5 (US time zone)
23
Ceph Reference Architectures and Case Study
24
Outline
– Planning your Ceph implementation
– Choosing targets for Ceph deployments
– Reference architecture considerations
– Dell reference configurations
– Customer case study
25
Planning Your Ceph Implementation
Business requirements
– Budget considerations, organizational commitment
– Avoiding lock-in: use open source and industry standards
– Enterprise IT use cases
– Cloud application/XaaS use cases for massive-scale, cost-effective storage
– Steady-state vs. spike data usage
Sizing requirements
– What is the initial storage capacity?
– What is the expected growth rate?
Workload requirements
– Does the workload need high performance, or is it more capacity-focused?
– What are the IOPS/throughput requirements?
– What type of data will be stored?
– Ephemeral vs. persistent data; object, block, or file?
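As a back-of-the-envelope illustration of the sizing questions above, the sketch below turns an initial capacity, growth rate, and replica count into a raw-capacity target; all numbers are hypothetical.

```python
# Hypothetical inputs: initial usable capacity, growth rate, and replica count.
initial_usable_tb = 100.0      # usable capacity needed today
annual_growth = 0.40           # 40% expected growth per year
replication_factor = 3         # replica count for the data pools
planning_horizon_years = 3

usable_tb = initial_usable_tb * (1 + annual_growth) ** planning_horizon_years
raw_tb = usable_tb * replication_factor

print(f"Usable capacity after {planning_horizon_years} years: {usable_tb:.0f} TB")
print(f"Raw capacity to provision at {replication_factor}x replication: {raw_tb:.0f} TB")
# -> roughly 274 TB usable, 823 TB raw for these example inputs
```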
26
How to Choose Target Use Cases for Ceph
(Quadrant chart: performance vs. capacity on one axis, traditional IT vs. cloud applications/XaaS on the other)
– Virtualization and private cloud (traditional SAN/NAS)
– High performance (traditional SAN)
– NAS & object content store (traditional NAS)
– XaaS compute cloud (open-source block)
– XaaS content store (open-source NAS/object)
Ceph target: the open-source block and NAS/object use cases
27
Architectural Considerations – Redundancy and Replication
Trade-off between cost and reliability (use-case dependent)
Use CRUSH configs to map out your failure domains and performance pools
Failure domains
– Disk (OSD and OS)
– SSD journals
– Node
– Rack
– Site (replication at the RADOS level, block replication; consider latencies)
Storage pools
– SSD pool for higher performance
– Capacity pool
Plan for the failure domains of the monitor nodes
Consider failure and replacement scenarios, lowered redundancies, and performance impacts
28
Server Considerations
Storage nodes
– One OSD per HDD, 1–2 GB RAM and roughly 1 GHz of a core per OSD
– SSDs for journaling and for the tiering feature in Firefly
– Erasure coding will increase usable capacity at the expense of additional compute load
– SAS JBOD expanders for extra capacity (beware of extra latency and oversubscribed SAS lanes)
Monitor nodes (MON)
– An odd number for quorum; the service can be hosted on storage nodes in smaller deployments, but larger installations will need dedicated nodes
RADOS Gateway nodes
– Dedicated nodes for large object-store deployments and for federated gateways in multi-site setups
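The rules of thumb on this slide (one OSD per HDD, 1–2 GB RAM and about 1 GHz of a core per OSD, and the replication vs. erasure-coding trade-off) translate directly into a small sizing helper; this is a sketch with illustrative figures, not a vendor-validated calculator.

```python
def size_storage_nodes(nodes, hdds_per_node, hdd_tb,
                       replication=3, ec_k=None, ec_m=None):
    """Rule-of-thumb OSD node sizing: one OSD per HDD,
    ~2 GB RAM and ~1 GHz of CPU per OSD."""
    osds_per_node = hdds_per_node
    ram_gb_per_node = osds_per_node * 2        # upper end of the 1-2 GB guideline
    cpu_ghz_per_node = osds_per_node * 1.0
    raw_tb = nodes * hdds_per_node * hdd_tb

    if ec_k and ec_m:
        usable_tb = raw_tb * ec_k / (ec_k + ec_m)   # erasure coding k+m overhead
    else:
        usable_tb = raw_tb / replication            # N-way replication

    return ram_gb_per_node, cpu_ghz_per_node, usable_tb

# Three nodes of 10 x 4 TB HDDs: 3x replication vs. 4+2 erasure coding.
print(size_storage_nodes(3, 10, 4))                  # (20, 10.0, 40.0)
print(size_storage_nodes(3, 10, 4, ec_k=4, ec_m=2))  # (20, 10.0, 80.0)
```

The comparison makes the erasure-coding point concrete: the same 120 TB of raw disk yields roughly 40 TB usable at 3x replication but about 80 TB with a 4+2 profile, at the cost of extra CPU during encode/recovery.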
29
Networking Considerations
Dedicated or shared network
– Involve the networking and security teams early when designing your network
– Network redundancy considerations
– Dedicated client and OSD (cluster) networks
– VLANs vs. dedicated switches
– 1 GbE vs. 10 GbE vs. 40 GbE
Network design
– Spine-and-leaf
– Multi-rack
– Core fabric connectivity
– WAN connectivity and latency issues for multi-site deployments
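One reason the 1/10/40 GbE choice matters is recovery traffic: when a storage node fails, its data must be re-replicated across the cluster network. A rough, purely illustrative estimate (node size and link efficiency are assumptions):

```python
# Rough estimate of how long re-replicating a failed node's data takes
# at different network speeds (all numbers are illustrative assumptions).
failed_node_tb = 40        # raw data on the failed node
efficiency = 0.7           # assume ~70% of line rate is usable for recovery

for gbps in (1, 10, 40):
    gigabits = failed_node_tb * 8 * 1000          # TB -> gigabits
    hours = gigabits / (gbps * efficiency) / 3600
    print(f"{gbps:>2} GbE: ~{hours:.1f} hours to re-replicate {failed_node_tb} TB")
```

For these example numbers the recovery window shrinks from days at 1 GbE to a few hours at 10 GbE and beyond, which is why a dedicated cluster/OSD network is usually worth the extra switch ports.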
30
Ceph Additions Coming to the Dell Red Hat OpenStack Solution – Storage Bundles
Pilot configuration components
– Dell PowerEdge R620/R720/R720XD servers
– Dell Networking S4810/S55 switches (10 GbE)
– Red Hat Enterprise Linux OpenStack Platform
– Dell ProSupport
– Dell Professional Services
– Available with or without high availability
Specs at a glance
– Node 1: Red Hat OpenStack Manager
– Node 2: OpenStack Controller (2 additional controllers for HA)
– Nodes 3–8: OpenStack Nova Compute
– Nodes 9–11: Ceph, 12 x 3 TB raw storage
– Network switches: Dell Networking S4810/S55
– Supports ~170–228 virtual machines
Benefits
– Rapid on-ramp to OpenStack cloud
– Scale-up, modular compute and storage blocks
– Single point of contact for solution support
– Enterprise-grade OpenStack software package
31
Example Ceph Dell Server Configurations
Performance (20 TB)
– R720XD, 24 GB DRAM
– 10 x 4 TB HDD (data drives)
– 2 x 300 GB SSD (journal)
Capacity (44 TB / 105 TB*)
– R720XD, 64 GB DRAM
– 10 x 4 TB HDD (data drives)
– 2 x 300 GB SSD (journal)
– MD1200 with 12 x 4 TB HDD (data drives)
Extra Capacity (144 TB / 240 TB*)
– R720XD, 128 GB DRAM
– 12 x 4 TB HDD (data drives)
– MD3060e (JBOD) with 60 x 4 TB HDD (data drives)
32
What Are We Doing to Enable?
Dell, Red Hat, and Inktank have partnered to bring a complete enterprise-grade storage solution for RHEL-OSP + Ceph.
The joint solution provides:
– Co-engineered and validated reference architecture
– Pre-configured storage bundles optimized for performance or capacity
– Storage enhancements to existing OpenStack bundles
– Certification against RHEL-OSP
– Professional services, support, and training
  › Collaborative support for Dell hardware customers
  › Deployment services & tools
33
UAB Case Study
34
Overcoming a Data Deluge
– Inconsistent data management across research teams hampered productivity
– Growing data sets challenged available resources
– Research data was distributed across laptops, USB drives, local servers, and HPC clusters
– Transferring datasets to HPC clusters took too much time and clogged shared networks
– Distributed data management reduced researcher productivity and put data at risk
35
Solution: A Storage Cloud
Centralized storage cloud based on OpenStack and Ceph
– Flexible, fully open-source infrastructure based on the Dell reference design: OpenStack, Crowbar, and Ceph on standard PowerEdge servers and storage; 400+ TB at less than 41¢ per gigabyte
– Distributed scale-out storage provisions capacity from a massive common pool, scalable to 5 petabytes
– Data migration to and from HPC clusters via a dedicated 10 Gb Ethernet fabric
– Easily extendable framework for developing and hosting additional services; a simplified backup service is now enabled
"We’ve made it possible for users to satisfy their own storage needs with the Dell private cloud, so that their research is not hampered by IT."
– David L. Shealy, PhD, Faculty Director, Research Computing; Chairman, Dept. of Physics
36
Building a Research Cloud
Project goals extend well beyond data management; the system is designed to support an emerging data-intensive scientific computing paradigm
– 12 x 16-core compute nodes
– 1 TB RAM, 420 TB storage
– 36 TB storage attached to each compute node
Virtual servers and virtual storage meet HPC
– Direct user control over all aspects of the application environment
– Ample capacity for large research data sets
Individually customized test/development/production environments
– Rapid setup and teardown
Growing set of cloud-based tools & services
– Easily integrate shareware, open-source, and commercial software
"We envision the OpenStack-based cloud to act as the gateway to our HPC resources, not only as the purveyor of services we provide, but also enabling users to build their own cloud-based services."
– John-Paul Robinson, System Architect
37
Research Computing System (Next Gen)
A cloud-based computing environment with high-speed access to dedicated and dynamic compute resources.
(Diagram: OpenStack nodes, HPC cluster, and HPC storage connected by DDR/QDR InfiniBand and 10 Gb Ethernet to the UAB Research Network, with a cloud services layer – a virtualized server and storage cloud based on OpenStack, Crowbar, and Ceph.)
38
THANK YOU!
39
Contact Information
Reach Kamesh and Neil for additional information:
– Dell.com/OpenStack
– Dell.com/Crowbar
– Inktank.com/Dell
– Kamesh_Pemmaraju@Dell.com, @kpemmaraju
– Neil.Levine@Inktank.com, @neilwlevine
Visit the Dell and Inktank booths in the OpenStack Summit Expo Hall
40
THANK YOU!
Neil Levine, Director of Product Management, Red Hat
nlevine@redhat.com, @neilwlevine
41