A quick introduction to SANs and Panasas ActivStor


1 A quick introduction to SANs and Panasas ActivStor
Storage Networking. Kevin Haines, eScience Centre, STFC, RAL.
My initial experience with SANs was learning just enough to get the first NGS cluster operational; I always had a nagging doubt that I was missing a trick or two, so I bought 'SANs for Dummies' and found that, no, there really isn't that much more to them. The original motivation for much of our SAN purchasing was to support a Red Hat GFS project providing high-performance access to storage for SCARF cluster users. GFS was troublesome, to say the least, so we went back to the good old ext3 + file-server access model. Over time, more of the SAN-based storage has come to be used by other projects, such as VMware, an Oracle database, and other NFS-based file servers.

2 Wikipedia defines a SAN:
A storage area network (SAN) is an architecture to attach remote computer storage devices (such as disk arrays, tape libraries, and optical jukeboxes) to servers in such a way that the devices appear as locally attached to the operating system. Contrast this with Direct Attached Storage (DAS), usually found inside or just outside the server, and Network Attached Storage (NAS), which uses file-based protocols such as NFS.

3 SAN Topology: It's like a LAN...
[Slide diagram: Host/HBA 1 and Host/HBA 2 connected to Storage 1 via a SAN Switch]

4 SAN Topology: It's like a LAN... it has a switch...
[Slide diagram: Host/HBA 1-3 and Storage 1-2 connected via a SAN Switch]
The switch is a Qlogic SANBox (… ports), with two pairs of stacking ports (10Gb/s?) for expansion.

5 SAN Topology: It's like a LAN... it has a switch, and network cards.
[Slide diagram: Host/HBA 1-3 and Storage 1-2 connected via a SAN Switch]
The network cards are called HBAs: host bus adaptors.

6 SAN Topology: It's like a LAN... it has a switch, and network cards.
[Slide diagram: Host/HBA 1-3 and Storage 1-2 connected via a SAN Switch]
Storage arrays may have one or two integral HBAs.

7 SAN Topology: It's like a LAN... but a little different.
[Slide diagram: initiators (Host/HBA 1-3) and targets (Storage 1-2) on the SAN Switch, each identified by a WWN; fibre connectors at 2/4/8 Gb/s]
Fibre connectors, not copper, running at 2/4/8 Gb/s. WWNs instead of MAC addresses. Initiators and targets: HBAs don't talk to each other (usually?). Fibre Channel protocol, not Ethernet/IP, carrying SCSI over FCP. The ultimate result is to present a SCSI device to the host OS.
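As an aside not in the original slides: on a Linux host, FC HBA drivers register with the fc_host sysfs class, so the port WWNs (the SAN analogue of MAC addresses) can be read straight out of /sys. A minimal sketch, assuming such a host; the example WWN in the comment is made up:

# Minimal sketch: list the port WWNs a Linux host's FC HBAs present to the
# fabric, assuming the HBA driver registers with the standard fc_host sysfs class.
from pathlib import Path

def list_hba_wwns(sysfs_root: str = "/sys/class/fc_host"):
    """Yield (host name, port WWN, link state) for each FC HBA port found."""
    for host in sorted(Path(sysfs_root).glob("host*")):
        wwn = (host / "port_name").read_text().strip()     # e.g. 0x21000024ff4a1b2c (made-up)
        state = (host / "port_state").read_text().strip()  # e.g. Online
        yield host.name, wwn, state

if __name__ == "__main__":
    for name, wwn, state in list_hba_wwns():
        print(f"{name}: WWN {wwn} ({state})")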

8 Controlling Access: Zoning, implemented by the switch.
[Slide diagram: Host/HBA 1-3 and Storage 1-2 on the SAN Switch; Zone 1 = HBA1 + Storage1, Zone 2 = HBA2 + HBA3 + Storage 2]
Zoning ensures that HBAs only 'see' the storage we want them to. It isn't strictly necessary, but it is a good precaution: at one NGS site, where no zoning or LUN masking was used, the wrong device was formatted, wiping out the home file system. Zoning is fine on its own as long as we're dealing with entire storage arrays/devices as the targets, but if storage is partitioned for use by different HBAs, then zoning on its own is not adequate.
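As a purely illustrative aside (not from the slides), zoning amounts to the switch only letting an initiator discover targets that share a zone with it. A toy sketch of that logic, with made-up WWN and zone names matching the diagram:

# Toy illustration of switch zoning: an initiator can only 'see' targets that
# share at least one zone with it. All WWNs and zone names are hypothetical.
ZONES = {
    "zone1": {"hba1_wwn", "storage1_wwn"},
    "zone2": {"hba2_wwn", "hba3_wwn", "storage2_wwn"},
}

def visible_targets(initiator: str, targets: set) -> set:
    """Return the subset of targets the switch lets this initiator discover."""
    return {
        t for t in targets
        if any(initiator in members and t in members for members in ZONES.values())
    }

print(visible_targets("hba1_wwn", {"storage1_wwn", "storage2_wwn"}))
# -> {'storage1_wwn'}: HBA1 is only zoned with Storage 1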

9 Controlling Access: LUN masking, on the storage array.
[Slide diagram: Host/HBA 1-3, the SAN Switch, and a storage array partitioned into logical drives LD1-LD3, each masked to specific HBA WWNs; Zone 1 = HBA1 + Storage1, Zone 2 = HBA2 + HBA3 + Storage 2]
If you partition your storage array, then use LUN masking, implemented on the storage array. This is usually a list of the HBA WWNs that are allowed to read and write to a given Logical Drive; sometimes it can be a binary mask that includes/excludes a range of WWNs (of limited use). LUN masking can be used on its own, or in combination with zoning, but using both gives an extra level of protection from yourself.
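Continuing the illustrative sketch above (again, not from the slides): LUN masking is a second filter applied by the array itself, so a logical drive is only exposed to initiators on its access list even when zoning lets them reach the array. The mask assignments below are hypothetical:

# Toy illustration of LUN masking layered on top of zoning: the array only
# presents a logical drive to WWNs on its mask list. Names are hypothetical.
LUN_MASKS = {
    "LD1": {"hba1_wwn"},
    "LD2": {"hba2_wwn"},
    "LD3": {"hba3_wwn"},
}

def exposed_luns(initiator: str, reachable_luns: set) -> set:
    """Of the LUNs an initiator can reach via zoning, return those the array exposes to it."""
    return {lun for lun in reachable_luns if initiator in LUN_MASKS.get(lun, set())}

# HBA2 and HBA3 share a zone with the array holding LD2 and LD3,
# but masking means each host only sees its own logical drive.
print(exposed_luns("hba2_wwn", {"LD2", "LD3"}))   # -> {'LD2'}
print(exposed_luns("hba3_wwn", {"LD2", "LD3"}))   # -> {'LD3'}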

10 Controlling Access: LUN masking on the storage array (diagram build).
[Slide diagram: repeat of the zoning + LUN masking diagram from the previous slide]

11 Increasing Resilience
Multipath.
[Slide diagram: one HOST with HBA 1 and HBA 2, two interconnected SAN switches, and two channels into Storage 1]
Multipath adds resilience to failures in the HBA, SAN switch, and storage array channels, at the cost of money and complexity. With two HBAs in the host, two interconnected SAN switches, and two channels to the storage array, there are 4 possible paths to the same storage in this diagram. The host 'sees' 4 instances of the same disk, which requires extra software to manage this complexity; in RHEL this is device-mapper-multipath (0.4.5). You then use the dm-managed devices instead of /dev/sdb, /dev/sdd, etc. I have no personal experience of implementing this, nor of requiring it: the SAN has been a very reliable system, with ~2-3 system failures in 5 years and only 1 component failure.
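To illustrate the '4 instances of the same disk' point (my sketch, not from the slides): once device-mapper-multipath has aggregated the paths, each map appears as a dm block device whose underlying sd paths are listed in sysfs. A minimal sketch, assuming a Linux host with a reasonably modern kernel and dm devices present:

# Minimal sketch: show each device-mapper block device and the underlying
# /dev/sdX paths it aggregates, by walking the standard sysfs layout.
# On a multipathed host, one map typically lists several sd devices that
# are all routes to the same LUN.
from pathlib import Path

def dm_maps(sysfs_block: str = "/sys/block"):
    """Yield (dm device, map name, underlying devices) for each dm block device."""
    for dm in sorted(Path(sysfs_block).glob("dm-*")):
        name = (dm / "dm" / "name").read_text().strip()
        slaves = sorted(p.name for p in (dm / "slaves").iterdir())
        yield dm.name, name, slaves

if __name__ == "__main__":
    for dev, name, slaves in dm_maps():
        print(f"{dev} ({name}): paths -> {', '.join(slaves)}")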

12 Questions about SANs?

13 Panasas ActivStor. Our goal (again) was to provide a high-performance, resilient storage solution to the SCARF users. We wanted something easy to manage, scalable and affordable. We had a bad experience with GFS, and even if it were fixed we can't afford to expand to more than 600 SAN ports (HBAs, switches, cabling, etc.). We evaluated Isilon and Panasas, took tenders from both, and selected Panasas.

14 Panasas ActivStor. Sold in a 4U shelf format, with 11 'blades' per shelf, a network switch (or two), redundant power and battery backup; all hot-swappable. There are 2 types of blades. Director blades (DBs) provide NFS access plus meta-data and volume management; up to 3 DBs per shelf. Storage blades (SBs) provide 2 disks (up to 2TB per blade) and 2x Gb/s ethernet (failover). You can have 11 SBs in a shelf, but they must then be managed by a DB somewhere else.

15 Panasas ActivStor: Director Blades
Director Blades: meta-data and volume management services; NFS access gateway.

16 Panasas ActivStor: Storage Blades
Storage Blades: two disks (up to 2TB per blade); 2x Gb/s ethernet (failover mode); 11 per shelf (but they must then be managed by a DB elsewhere).

17 Panasas ActivStor Resilience
Resilience to failure is achieved by striping the data across the SBs, with one or more SBs' worth of storage allocated as spare capacity. You can configure a steadily decreasing number of spares, e.g. start with 3 and then reduce the number as necessary. It is not the case that one entire SB stands idle: every SB is involved in serving clients, so you get the full performance capability of the system. Comprehensive system monitoring detects early signs of blade failure and initiates a 'blade drain' for pre-emptive replacement. DBs provide volume management and metadata services to the rest of the system, as well as NFS, and can be configured for failover. In summary: data is striped across all Storage Blades (RAID5 or RAID1); one or more Storage Blades' worth of space is reserved for failures; system monitoring enables pre-emptive action (Blade Drain); and two or more Director Blades can provide failover for each other.
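A back-of-the-envelope sketch, with my own made-up numbers rather than anything from the slides or Panasas documentation, of how reserving spare-blade capacity and parity striping eat into raw capacity on one shelf:

# Back-of-the-envelope capacity sketch for one shelf (hypothetical numbers):
# subtract the space reserved for spare-blade rebuilds, then a RAID-5-style
# parity overhead across the stripe width. Real Panasas space accounting
# differs in detail; this only illustrates the trade-off described above.
def usable_tb(storage_blades: int, tb_per_blade: float,
              spare_blades: int, stripe_width: int) -> float:
    raw = storage_blades * tb_per_blade
    after_spares = raw - spare_blades * tb_per_blade   # capacity held back for failures
    parity_fraction = 1.0 / stripe_width               # one parity unit per stripe
    return after_spares * (1.0 - parity_fraction)

# e.g. 11 storage blades of 2TB each, 1 spare blade's worth reserved, 9-wide stripes
print(f"{usable_tb(11, 2.0, 1, 9):.1f} TB usable")     # ~17.8 TB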

18 Panasas ActivStor Performance
DirectFLOW clients: available for most major Linux distributions; they communicate directly with the Storage Blades; RAID computations are performed by the client; ~5000 clients supported (12000 option). To get the best performance you need to use the DirectFLOW client, which is implemented as a Linux kernel module.

19 Panasas ActivStor Performance
The results show the scalability claim is true: at RAL, 1.2 GB/s from 22 nodes (2 shelves); RoadRunner achieved 60 GB/s (103 shelves). Summary: we feel we've got a good balance between SAN-attached storage, where fast attached storage is needed, and the high-performance, network-based global filespace offered by Panasas.

