Storage Area Networks: The Basics

Storage Area Networks
Storage Area Networks (SANs) are designed to give you:
- More disk space
- Multiple-server access to a single disk pool
- Better performance
- The option of disks distributed across multiple locations

Direct Attached Storage
Classically, storage meant a single box with a bunch of disks attached directly to the server.
[Diagram: a server on the public network, with LUN0, LUN1, and LUN2 attached over a SCSI bus]

Attached Storage
The server speaks to the SCSI disks using a command language:
- Read from LUN0, Block 123
- Write to LUN1, Block 456
All of this travels over the SCSI bus, which is directly attached to the server; only that server has access to the bus. The server creates a filesystem on the disk(s) and can then make the storage available to other computers via NFS, Samba, etc.
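A minimal sketch of what a block-level read like "Read from LUN0, Block 123" amounts to once the operating system has turned it into a SCSI command. It assumes a Linux host, a hypothetical device path /dev/sdb, and 512-byte blocks; it is illustrative only.

```python
# Sketch of block-level access to a disk, assuming a Linux block device at
# the hypothetical path /dev/sdb and a classic 512-byte block size
# (many modern disks use 4096-byte blocks). Requires read permission on the device.
import os

BLOCK_SIZE = 512
DEVICE = "/dev/sdb"   # hypothetical LUN exposed as a block device

def read_block(dev_path, block_number):
    """Read one block at the given block number from a block device."""
    fd = os.open(dev_path, os.O_RDONLY)
    try:
        # pread reads at an absolute byte offset without moving a file pointer
        return os.pread(fd, BLOCK_SIZE, block_number * BLOCK_SIZE)
    finally:
        os.close(fd)

if __name__ == "__main__":
    data = read_block(DEVICE, 123)   # "Read from LUN0, Block 123"
    print(f"block 123: {len(data)} bytes, first 16: {data[:16].hex()}")
```

The point is that the disk only understands "read/write this many bytes at this block address"; everything about files and directories lives in the filesystem the server builds on top.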

Network Attached Storage
This idea is easily extended to an appliance approach: configure a utility box with some disk that does only NFS or Samba/SMB, and place it on the network.
[Diagram: a NAS server with its own SCSI bus, running an NFS server, with NFS clients reaching it over the public network]
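As a rough sketch of the client side, this is approximately what using such an appliance looks like from a Linux client. The appliance name nas01, the export path, and the mount point are placeholders, and the mount requires root and the NFS client utilities.

```python
# Sketch of an NFS client attaching to a NAS appliance's export.
# "nas01:/export/data" and "/mnt/data" are hypothetical names.
import subprocess

NAS_EXPORT = "nas01:/export/data"
MOUNT_POINT = "/mnt/data"

subprocess.run(["mount", "-t", "nfs", NAS_EXPORT, MOUNT_POINT], check=True)

# From here on the export looks like local storage to applications,
# even though every read and write crosses the public network.
with open(f"{MOUNT_POINT}/hello.txt", "w") as f:
    f.write("written over NFS\n")
```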

NAS and Servers
Redundant web servers can share the same data, but they both talk to the same NFS server.
[Diagram: two web servers with their data NFS-mounted from a single NAS server over the public network]

Attached Storage
We can also do things like place a RAID array on the NAS server. This works, but it has some limitations:
- If the server goes down, there is no access to the disk.
- File sharing goes through the network storage server and across the network, which can be slow.
- The disks must be located near the server, within range of the disk bus.
- Adding or subtracting disk space can be difficult.
What we want instead is a shared disk pool that all servers can access.

Storage Area Network
What we want is something that looks like this:
[Diagram: SAN participants connected both to the public Ethernet network, where NFS clients live, and to a shared disk pool over the SAN]

Storage Area Network
Notice:
- You can take down a server and still maintain access to the disk pool via the other SAN participants.
- Disk added to the pool is available to all servers, not just one.
- Shared, high-speed access to the disk pool: you can run clustered copies of a SQL database or web server if those machines are also SAN participants.
- You can still serve up the disk pool via an NFS or SMB server on a SAN-connected box.
- "Serverless backups": just send a command to copy blocks from disk A to disk B. Snapshots are easier and backup windows shorter, since a SAN participant can handle moving a volume to tape.

Storage Area Network
So how does this work? It's a scaled-up version of the old system:
- The commands being sent are the same standard disk commands: SCSI or ATA bus commands such as READ and WRITE.
- The network connecting the SAN servers to the disk is typically (but not always) higher speed, e.g. Fibre Channel.
- Some extra glue is needed to allow concurrent access by more than one server: a shared filesystem designed for concurrent access.

Storage Area Network
A popular choice:
- SCSI for the bus commands (the commands sent over the wire)
- Fibre Channel for the SAN network
- EMC or similar for the glue/volume software
Fibre Channel runs at 2+ Gbit/sec and can be deployed over distances of up to 500 m (sometimes), and up to 70 km with special equipment.

Storage Area Network
Another option is to use gigabit Ethernet for the SAN networking.
- Cheap! Commodity equipment, no new Fibre Channel skills to learn, and you can reuse existing gear.
- But also lower performance: Fibre Channel has higher bandwidth and can make fuller use of it.

ATA over Ethernet
AoE uses Ethernet plus ATA bus commands rather than SCSI. It is low cost, but since Ethernet frames are not routable, all devices must be on the same local network.
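A minimal sketch of why AoE stays on one Ethernet segment: the traffic is raw Ethernet frames carrying the AoE EtherType (0x88A2) with no IP header, so there is nothing for a router to route. The snippet below simply listens for such frames on a Linux host; the interface name eth0 is an assumption, and in practice the kernel's AoE driver does this work, not a script.

```python
# Listen for raw AoE frames on one interface (Linux only, requires root).
import socket

ETH_P_AOE = 0x88A2   # EtherType registered for ATA over Ethernet

sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_AOE))
sock.bind(("eth0", 0))   # hypothetical interface name

frame, _ = sock.recvfrom(1514)               # one raw Ethernet frame
dst, src, ethertype = frame[0:6], frame[6:12], frame[12:14]
print("AoE frame from", src.hex(), "ethertype", ethertype.hex())
```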

iSCSI
iSCSI sends SCSI bus commands over Ethernet, encapsulated inside TCP/IP.
- Cheap hardware! Well supported in the Linux, Solaris, and Windows worlds.
- Because the SCSI traffic rides inside TCP/IP, it is routable. That means you can build a SAN across wide area networks (with lower performance due to latency) and do things like mirroring for disaster backup, or span a campus on high-performance networks.
- Processing TCP/IP adds some overhead; some installations use TCP offload chips.
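As a sketch of the mechanics on Linux, an initiator can discover and attach to a target with the standard open-iscsi tools; the portal address and target IQN below are placeholders. After login, the remote LUN appears as an ordinary local block device (e.g. /dev/sdX).

```python
# Sketch of attaching an iSCSI LUN with open-iscsi (run as root).
# 192.0.2.10 and the IQN are placeholder values; iSCSI's default port is 3260.
import subprocess

PORTAL = "192.0.2.10"

# 1. Ask the target portal which targets (IQNs) it offers
subprocess.run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL],
               check=True)

# 2. Log in to one of the discovered targets
subprocess.run(["iscsiadm", "-m", "node",
                "-T", "iqn.2004-01.example.com:storage.lun1",
                "-p", PORTAL, "--login"],
               check=True)
```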

iSCSI
Each “disk”/LUN is a RAID array that understands iSCSI.
[Diagram: iSCSI RAID arrays on a dedicated SAN Ethernet network, with NFS clients on the public Ethernet network]

iSCSI
In the diagram, the green network is a dedicated (usually gigabit) Ethernet network that carries the SCSI commands encapsulated inside TCP/IP. The red network connects the SAN participants to other clients not on the SAN.
Important point: TCP/IP is routable. That means that (modulo latency) the devices can be located anywhere. We could have an iSCSI SAN participant in Root Hall and one in Spanagel; the Root iSCSI server can access the disk pool in Spanagel. We could also have a volume located at Fleet Numeric in the same SAN.
The price we pay for this is processing the TCP/IP overhead as iSCSI commands go up the network protocol stack. This can be alleviated in part by TCP offload chips.

Volume Software
Remember, the iSCSI targets are just block devices. iSCSI says nothing about concurrent access or multiple hosts using the same devices. For that we need a SAN filesystem, which deconflicts concurrent access by the hosts to the block devices.

Volume Software
[Diagram: volumes Vol1 and Vol2 built on the shared disk pool, with NFS clients on the public Ethernet network]

SAN Software The “volume software” allows you to build a concurrent access filesystem out of one or more LUNs
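As one simplified illustration of the aggregation step, Linux LVM can pool several LUNs into a single logical volume. The device paths below are hypothetical, and LVM alone only aggregates the blocks; safe concurrent access from multiple hosts still requires a cluster-aware filesystem on top of the resulting volume.

```python
# Sketch: combine two SAN LUNs (hypothetical /dev/sdb and /dev/sdc)
# into one logical volume with LVM (run as root).
import subprocess

LUNS = ["/dev/sdb", "/dev/sdc"]

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["pvcreate"] + LUNS)                                     # mark LUNs as physical volumes
run(["vgcreate", "san_vg"] + LUNS)                           # pool them into a volume group
run(["lvcreate", "-l", "100%FREE", "-n", "vol1", "san_vg"])  # carve out one big volume
# A shared filesystem would then be created on /dev/san_vg/vol1
```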

iSCSI
- Example: five compute servers need read access to one weather data set. If the servers are all on the SAN, they can access the data directly.
- Example: backup. Copy disk blocks directly, then have a SAN participant with a tape drive copy them to tape.
- Example: storage expansion. Just add more disk, and it is available to all SAN participants.

Competitors
iSCSI's competitor is, for the most part, Fibre Channel. The concept is almost identical, but the SCSI commands are simply encapsulated in a Fibre Channel frame. Fibre Channel typically offers higher performance: more data can be pushed across FC, and there is much less overhead in processing FC frames. But it is higher cost.
ATA over Ethernet is very similar to FC in concept, directly inserting ATA commands into Ethernet frames, but it seems to have less market penetration.