SOFTWARE DEFINED STORAGE: The future of storage
Tomas Florian: IT Security, Virtualization, Asterisk. Empower people in their own productivity, privacy and security.
Why Software Defined Storage?
Exclusive technology vs. commodity technology:
- Expensive (warranties and special maintenance are a must) vs. inexpensive (to the point of being disposable)
- Has to be shipped in vs. available at local retailers
- Proprietary technology vs. open standards based technology
- Locked-in ecosystem (spare parts, modules, upgrades all come from the same vendor) vs. open ecosystem (mix and match)
- Commit to a specific capacity vs. grow incrementally
Why Ceph?
- Runs on commodity hardware (JBOD)
- Assumes hardware failures are the norm, not the exception
- Reliability is determined by the amount of spare capacity, not by the quality of the underlying components
- Self-healing, self-managing
- Distributed: no central point of failure
- Open source
Who uses Ceph?
- Yahoo/Flickr: 500 PB
- CERN (Large Hadron Collider): 1.5 PB
- Dreamhost: 3 PB
- DataHive: 40 TB+
- You?
Ceph Stack
Simple Infrastructure Diagram
- OSD (object storage device): the hard drive where data is physically stored
- Mon (monitor): coordinates all activity on the cluster
Infrastructure Diagram
More Terminology
- CRUSH algorithm: figures out where to store the data
- CRUSH map: defines the underlying physical topology for the CRUSH algorithm
- PG (placement group): aggregates objects for better efficiency
- RADOS block device (RBD): exposes underlying objects as a block device (e.g. /dev/rbd1)
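To make RBD concrete, here is a minimal sketch using the python-rados and python-rbd bindings that ship with Ceph. The pool name "rbd", the image name "demo", and the conffile path are assumptions for illustration; the point is that a block device image is just a thin layer over RADOS objects that CRUSH places into PGs and onto OSDs.

```python
# Minimal sketch (assumed pool "rbd", image "demo", default conffile path).
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('rbd')          # I/O context for the "rbd" pool
    try:
        # Create a 1 GiB block device image; Ceph stripes it across
        # many RADOS objects, which CRUSH places into PGs and onto OSDs.
        rbd.RBD().create(ioctx, 'demo', 1024 ** 3)
        with rbd.Image(ioctx, 'demo') as image:
            image.write(b'hello ceph', 0)      # write at offset 0, like a disk
            print(image.read(0, 10))           # read those 10 bytes back
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```

The same image can also be mapped on a client with the rbd kernel module, where it shows up as a device such as /dev/rbd1, which is what the path above refers to.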
Self-managing? Self-healing?
[Diagram sequence: replicated copies of objects A, B, and C spread across OSDs. As OSDs fail, the surviving OSDs re-replicate the lost copies (Rebalancing, More failures, More rebalancing); when the failed OSDs come back online, the data is rebalanced across the cluster once more. All of this happens without operator intervention: self-managing, self-healing.]
Getting Started with Ceph
Minimum hardware:
- 1 GB RAM for each 1 TB of storage
- JBOD of SATA, SSD, or a mix
- Minimum 3 nodes
- 1 Gb/s network
OS: CentOS/Red Hat, Ubuntu
Guide: www.ceph.com, "Storage Cluster Quick Start" (a short smoke test follows below)
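Once the Quick Start cluster is up, a short smoke test with the python-rados bindings confirms it is reachable. The pool name "test" and the default conffile/keyring locations are assumptions here, not part of the guide.

```python
# Hedged smoke test: connect, create a pool if needed, write and read one object.
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # assumes the admin keyring is in place
cluster.connect()
if 'test' not in cluster.list_pools():
    cluster.create_pool('test')
ioctx = cluster.open_ioctx('test')
ioctx.write_full('hello', b'my first ceph object')     # object name -> bytes
print(ioctx.read('hello'))
ioctx.close()
cluster.shutdown()
```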
Taking It Up a Notch
- 10 Gb/s network or bonding
- Redundant switches
- iSCSI multipathing
- Separate public and cluster networks (see the sketch below)
- SSD caching layer for mechanical drives
- Custom CRUSH map for multiple racks / locations
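As an illustration of the public/cluster split, the relevant ceph.conf settings look roughly like this; the subnets are placeholders and the rest of the file is omitted.

```ini
[global]
# client and gateway traffic
public network  = 10.0.1.0/24
# OSD replication, recovery and rebalancing traffic
cluster network = 10.0.2.0/24
```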
Performance considerations:
- SSD journals
- Network
- Pure SSD
- Scrubbing
- Rebalancing
Lessons Learned
- You can abuse Ceph in all kinds of ways as long as you keep it well fed with free disk space. If Ceph is full (or even close to full), bad things will happen (a small fullness check follows below).
- A local NTP server is a must.
- iSCSI is much better than NFS for highly available gateways to VMware.
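Since a full cluster is the main failure mode called out above, a small watch script with python-rados can report how full the cluster is. The 75% alert threshold below is illustrative, chosen to stay well under Ceph's own near-full warnings.

```python
# Hedged sketch: report cluster fullness; the alert threshold is illustrative.
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
stats = cluster.get_cluster_stats()                    # capacity figures in KB
used_ratio = float(stats['kb_used']) / stats['kb']
print("cluster is %.1f%% full" % (used_ratio * 100))
if used_ratio > 0.75:                                  # leave headroom before near-full
    print("WARNING: add OSDs or free space before Ceph gets close to full")
cluster.shutdown()
```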
Ceph Future (after v10)
- Native integration with VMware
- Stable CephFS
- Multi-region replication
Questions? / Contact tomas@florian.ca (403) 714-3914