Fibre Channel Storage Area Networks (SAN)


1 Fibre Channel Storage Area Networks (SAN)
Module 3.3

2 Fibre Channel Storage Area Networks (SAN)
Upon completion of this module, you will be able to:
Describe the features and benefits of a SAN.
Describe the physical and logical elements of a SAN.
List common SAN topologies.
Compare and contrast connectivity devices.
Describe SAN connectivity options.
Describe the I/O flow in a SAN environment.
List SAN management considerations.
Describe applications of a SAN strategy.
So far we have looked at DAS and NAS storage connectivity. This module introduces the Fibre Channel Storage Area Network (FC SAN). We will look at the components and connectivity methods of a SAN, as well as management topics and applications of SAN technology.

3 In this module…
This module contains the following lessons:
Fibre Channel SAN Overview
The Components of a SAN
FC SAN Connectivity
SAN Management
SAN Deployment Examples
Case Study and Applications of FC SAN
The lessons in this course provide an introduction to FC SAN storage and are organized to help meet the module objectives.

4 Lesson: Fibre Channel SAN Overview
Upon completion of this lesson, you will be able to:
Define an FC SAN.
Describe the features of FC SAN-based storage.
Describe the benefits of an FC SAN-based storage strategy.
We will begin by looking at what an FC SAN is and what the benefits of using one are.

5 Business Needs and Technology Challenges
Information when and where the business user needs it
Integrate technology infrastructure with business processes
Flexible, resilient architecture
Data center managers face many challenges in supporting the business needs of their users:
Providing information when and where the business user needs it. Factors here include the explosion in online storage, thousands of servers throughout the organization, mission-critical data that is no longer confined to the data center, and the requirement for 24x7 availability.
Integrating the technology infrastructure with business processes, to eliminate stovepiped application environments and secure operational environments.
Providing a flexible, resilient architecture that responds quickly to business requirements and reduces the cost of managing information.

6 What is a SAN?
A dedicated storage network: organized connections among storage, communication devices, and systems; secure and robust.
A Storage Area Network (SAN) is a dedicated network that carries data between computer systems and storage devices, which can include tape and disk resources. A SAN consists of a communication infrastructure, which provides the physical connections, and a management layer, which organizes the connections, storage elements, and computer systems so that data transfer is secure and robust.

7 Evolution of Fibre Channel SAN
[Diagram: evolution from SAN islands (FC Arbitrated Loop with hubs) to interconnected SANs and enterprise SANs built on FC switched fabrics.]
As business demand for data grew, DAS and NAS implementations allowed companies to store and access data effectively, but often inefficiently. Storage was isolated on specific devices, making it difficult to manage and share. The effort to regain control over these dispersed assets led to the emergence of storage area networks (SANs), whose centralization improved efficiency. The first SAN implementations were simple groupings of hosts and associated storage in a single network, often using a hub as the connectivity device. This configuration is called Fibre Channel Arbitrated Loop (FC-AL); it is also referred to as a SAN island, because connectivity is limited and a degree of isolation remains. As demand increased and technology improved, Fibre Channel switches replaced hubs. Switches greatly increased connectivity and performance, allowing interconnected SANs and, ultimately, enterprise-level data accessibility.

8 Benefits of a SAN
High bandwidth: Fibre Channel, SCSI extension, block I/O
Resource consolidation: centralized storage and management
Scalability: up to 16 million devices
Secure access: isolation and filtering
Some of the benefits of implementing a SAN are discussed here. A SAN uses the Fibre Channel transport, a set of standards that define protocols for high-speed serial data transfer, up to 400 megabytes per second. It provides a standard data transport medium over which computer systems communicate with devices such as disk storage arrays. SCSI over Fibre Channel allows these devices to be connected in dynamic Fibre Channel topologies that span much greater distances and provide greater flexibility and manageability while retaining the basic functionality of SCSI. Fibre Channel networks are often described as networks that perform channel operations. Because a SAN is a networked infrastructure, many devices and hosts can be attached seamlessly, upwards of 16 million devices in a single SAN. This allows better utilization of corporate assets and eases management, both for configuration and for security.

9 Lesson Summary
Topics in this lesson included:
Definition of a SAN
Features and benefits of SANs
A SAN is a dedicated storage network that addresses many complex business data storage needs.

10 Lesson: The Components of a SAN
Upon completion of this lesson, you will be able to describe the elements of a SAN:
Host Bus Adapter (HBA)
Fiber cabling
Fibre Channel switch/hub
Storage array
Management system
This lesson introduces each of these elements and provides a brief description of them.

11 Components of a Storage Area Network
Host Bus Adapter (HBA)
Fiber cabling
Fibre Channel switch/hub
Storage array
Management system
As the graphic shows, a SAN consists of three basic components: servers, the SAN infrastructure, and storage. Each of these can be broken down further:
A Host Bus Adapter (HBA) installed in a server, including the device drivers needed to communicate within the SAN.
Cabling, which is usually optical but can be optical or copper.
Fibre Channel switches or hubs, the devices used to connect the nodes.
Storage arrays.
A management system used to analyze and configure SAN components.

12 Nodes, Ports, and Links
A node is any device connected to the SAN for the purpose of requesting or supplying data (for example, servers and storage). Nodes use ports to connect to the SAN and to transmit data. A port has two connection points, a transmit (Tx) link and a receive (Rx) link; data traveling through both links simultaneously is referred to as full duplex.

13 Host Bus Adapters
HBAs perform low-level interface functions automatically to minimize the impact on host processor performance.
Hosts connect to the SAN via an HBA. In the terms of the previous slide, the host is the node and the HBA provides the port(s). An HBA can be compared to a NIC in a local area network: it provides the critical link between the SAN and the operating system and application software. An HBA sits between the host computer's I/O bus and the Fibre Channel network and manages the transfer of information between the two. It performs many low-level interface functions, such as I/O processing and physical connectivity between server and storage, automatically or with minimal processor involvement, providing critical CPU off-load and freeing the server to perform application processing. As the only part of a storage area network that resides in the server, the HBA is also the link between the SAN and the host's operating system and applications.

14 Connectivity
Optical fiber cables are used to connect the nodes. Two types of cable are employed in a SAN: multimode and single mode. Multimode fiber (MMF) can carry multiple light rays, or modes, simultaneously. MMF typically comes in two core diameters, 50 micron and 62.5 micron (a micron is one millionth of a meter). MMF is used for relatively short distances because the light degrades over distance through a process called modal dispersion; it is typically used to connect nodes to switches or hubs. For longer distances, single mode fiber (SMF) is used. It has a core diameter of 7-11 microns, with 9 microns being the most common, and transmits a single ray of light as a carrier. With less bounce, the light does not disperse as easily, allowing long-distance signal transmission. SMF is typically used to connect switches to each other in a SAN.

15 Connectors
Node connectors: SC duplex connectors, LC duplex connectors. Patch panel connectors: ST simplex connectors.
Optical and electrical connectors are used in SANs. The SC connector is the standard fiber optic connector for 1 Gb/s links. The LC connector is the standard fiber optic connector for 2 Gb/s and 4 Gb/s links. The ST connector is a fiber optic connector with a plug and socket locked in place by a half-twist bayonet lock; it is often used with Fibre Channel patch panels. (A patch panel is generally used for connectivity consolidation in a data center.) The ST connector was the first standard for fiber optic cabling.

16 Connectivity Devices
The basis for SAN communication: hubs, switches, and directors.
For Fibre Channel SANs, connectivity is provided by Fibre Channel hubs, switches, and directors. These devices act as the common link between nodes within the SAN. A hub is a communications device used in FC-AL that physically connects nodes in a logical loop/physical star topology; all nodes must share bandwidth, as data travels through every connection point. A Fibre Channel switch or director is a more "intelligent" device: it has advanced services that route data directly from one physical port to another, so each node has a dedicated communication path and bandwidth is aggregated. Compared to switches, directors are larger devices deployed in data center implementations. They function similarly to switches but offer higher connectivity capacity and fault-tolerant hardware.

17 Storage Resources
A storage array provides storage consolidation and centralization. Features of an array: high availability/redundancy, performance, business continuity, multiple host connect.
The fundamental purpose of any SAN is to provide access to storage, typically storage arrays. As discussed previously, storage arrays support many of the features required in a SAN, such as high availability and redundancy, improved performance, business continuity, and multiple host connectivity.

18 SAN Management Software
A suite of tools used in a SAN to manage the interface between hosts and storage arrays. It provides integrated management of the SAN environment through a web-based GUI or a CLI.
SAN management software provides a single view of the storage environment; managing resources from one central console is simpler and more efficient. Its core functionality includes mapping of storage devices, switches, and servers; monitoring and alerting for discovered devices; and logical partitioning of the SAN. It also manages the typical SAN components: HBAs, storage devices, and switches. The management system of a SAN is a server, or console, from which the objects in the SAN can be monitored and maintained. It offers a central location with a full view of the SAN, thereby reducing complexity.

19 Lesson Summary
Topics in this lesson included the basic SAN elements:
Host Bus Adapter (HBA) - the connection point to the host
Fiber cabling - connectivity
Fibre Channel switch/hub - connectivity devices
Storage array - storage
Management system - used to analyze and configure SAN components

20 Lesson: Fibre Channel SAN Connectivity
Upon completion of this lesson, you will be able to:
Describe Fibre Channel SAN connectivity methods and topologies.
Describe Fibre Channel devices.
Describe Fibre Channel communication protocols.
Describe Fibre Channel login procedures.
This lesson provides an overview of Fibre Channel SAN connectivity, including common devices and protocols.

21 Fibre Channel SAN Connectivity
Core networking principles applied to storage: servers are attached to two distinct networks, a front-end (IP) network for users and application clients, and a back-end (storage) network for storage and application data.
SANs combine the basic functionality of storage devices and networks, in hardware and software, to create a highly reliable, high-performance, networked data system. Services similar to those in any LAN (name resolution, address assignment, and so on) allow data to traverse connections and reach end users. Within the overall IT infrastructure, the SAN and the LAN are separate networks that serve similar purposes. The LAN allows clients, such as desktop workstations, to request data from servers; this can be considered the front-end network, typically Ethernet, where the average user connects. The SAN, or back-end network, also connects to servers, but here the servers act as clients, requesting data from their own "servers," the storage arrays. These connections are made over a Fibre Channel network. (Note: "Fibre" refers to the protocol, whereas "fiber" refers to the media.) By combining the two networks, with the servers as the common thread, end users are supplied with whatever data they need.

22 What is Fibre Channel?
The SAN transport protocol: an integrated set of ANSI standards that encapsulates SCSI. It is a high-speed serial interface that allows SCSI commands to be transferred over a storage network, and the standard allows multiple protocols over a single interface.
Fibre Channel is a set of standards that define protocols for high-speed serial data transfer. The standards define a layered model similar to the OSI model found in traditional networking. Fibre Channel provides a standard data transport frame into which multiple protocol types can be encapsulated. The addressing scheme used in Fibre Channel switched fabrics supports over 16 million devices in a single fabric. Fibre Channel has become widely used as a serial transport medium over which computer systems communicate with devices such as disk storage arrays, which were traditionally attached over channel technologies such as parallel SCSI. SCSI over Fibre Channel now allows these devices to be connected in dynamic Fibre Channel topologies that span much greater distances and provide greater flexibility and manageability than parallel SCSI. Fibre Channel networks are often described as networks that perform channel operations.
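The "over 16 million devices" figure follows directly from the 24-bit address space used in a switched fabric. A quick back-of-the-envelope check (a sketch only; the exact size of the block reserved for fabric services is an assumption here):

```python
# 24-bit Fibre Channel addressing: total address space in one switched fabric.
TOTAL_ADDRESSES = 2 ** 24        # 16,777,216 possible addresses
RESERVED_ESTIMATE = 256          # assumption: a small well-known block (e.g. FFFFxx)
                                 # is kept for fabric services, not devices

usable = TOTAL_ADDRESSES - RESERVED_ESTIMATE
print(f"Addressable ports in a single fabric: about {usable:,}")   # ~16.7 million
```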

23 Fibre Channel Ports
Fibre Channel ports are configured for specific applications. Host Bus Adapter and Symmetrix FC director (storage array front-end) ports are configured as either N_Ports or NL_Ports:
N_Port - node port, a port at the end of a point-to-point link.
NL_Port - a node port that supports the arbitrated loop topology.
Fibre Channel switch ports are also configured for specific roles:
F_Port - fabric port, the access point of the fabric that connects to an N_Port.
FL_Port - a fabric port that connects to an NL_Port.
E_Port - expansion port on a switch, used to link multiple switches.
G_Port - a switch port that can function as either an F_Port or an E_Port.
Note: the port type is determined by firmware or HBA device driver configuration settings.

24 World Wide Names
A World Wide Name is a unique 64-bit identifier, static to the port, used to physically identify a port or node within the SAN; it is similar to a NIC's MAC address. Additionally, each node port is assigned a unique port ID (address) within the SAN, used for communication between nodes; this is similar in function to the IP address on a NIC.
All Fibre Channel ports have 64-bit unique identifiers called World Wide Names (WWNs). WWNs are similar to the MAC addresses used on TCP/IP adapters in that they uniquely identify a device on the network and are burned into the hardware or assigned through software. The WWN is a critical feature, as it is used in several configurations that control storage access. In order to communicate in the SAN, however, a port also needs an address; this address is used to transmit data through the SAN from the source node to the destination node.

25 World Wide Names: Example
[Diagram: example WWN formats - an array WWN broken into a 24-bit Company ID, a port identifier, and a 32-bit model seed; an HBA WWN broken into reserved bits, a 24-bit company OUI, and a 24-bit company-specific field.]
In this example, when an N_Port is connected to a SAN, an address is dynamically assigned to the port. The N_Port then performs a login, at which time it registers its WWN with the Name Server, so the address becomes associated with the WWN. If the N_Port is moved to a different port on the fabric, its address will change; however, the login process is repeated, so the WWN becomes associated with the new N_Port address. Configurations can therefore rely on the WWN remaining the same even though the FC address has changed.
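A short sketch of how a 64-bit WWN can be pulled apart in software. The field boundaries below follow the common IEEE 48-bit (NAA=1) layout seen in the HBA WWNs elsewhere in this module; other NAA formats place the fields differently, so treat this as an illustration rather than a general parser:

```python
def parse_wwn(wwn: str) -> dict:
    """Split a WWN such as '10:00:00:00:C9:20:DC:40' into its major fields.

    Assumes the IEEE 48-bit (NAA=1) layout: a format nibble, 12 reserved bits,
    then a MAC-like 48-bit identifier whose first 24 bits are the vendor OUI.
    """
    raw = wwn.replace(":", "").lower()
    if len(raw) != 16:
        raise ValueError("a WWN is 64 bits, i.e. 16 hex digits")
    return {
        "naa_format": raw[0],         # network address authority / format nibble
        "company_oui": raw[4:10],     # 24-bit vendor OUI, like the OUI in a MAC address
        "vendor_specific": raw[10:],  # vendor-assigned port identifier
    }

print(parse_wwn("10:00:00:00:C9:20:DC:40"))
# {'naa_format': '1', 'company_oui': '0000c9', 'vendor_specific': '20dc40'}
```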

26 Fibre Channel Logins
In order for a device to communicate on the SAN, it must log in to the storage network. Fibre Channel supports three types of login:
Fabric login - every node port must attempt to log in with the fabric (the fabric being the complete SAN environment); this is typically done right after the link or loop has been initialized.
Port login - before a node port can communicate with another node port, it must first perform an N_Port login with that port.
Process login - sets up the environment between related processes on the node ports.
By completing this login process, nodes gain the ability to transmit and receive data.
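A minimal simulation of the order in which these three logins occur (the standard mnemonics are FLOGI, PLOGI, and PRLI). The classes, address values, and method names here are stand-ins for illustration, not a real FC stack:

```python
# Minimal sketch of the three Fibre Channel login types, in order.
class Fabric:
    def __init__(self):
        self._next_address = 0x010100
        self.name_server = {}                    # WWN -> assigned 24-bit address

    def fabric_login(self, wwn):                 # FLOGI
        addr = self._next_address
        self._next_address += 1
        self.name_server[wwn] = addr             # WWN registered with the Name Server
        return addr

class NodePort:
    def __init__(self, wwn):
        self.wwn, self.address, self.sessions = wwn, None, set()

    def flogi(self, fabric):                     # 1. fabric login
        self.address = fabric.fabric_login(self.wwn)

    def plogi(self, other):                      # 2. port login with one other node port
        self.sessions.add(other.wwn)
        other.sessions.add(self.wwn)

    def prli(self, other, protocol="SCSI-FCP"):  # 3. process login on an existing session
        assert other.wwn in self.sessions, "PLOGI must precede PRLI"
        return f"{protocol} session between {self.wwn} and {other.wwn}"

fabric = Fabric()
hba = NodePort("10:00:00:00:C9:20:DC:40")
array_port = NodePort("50:06:04:82:E8:91:2B:9E")
for port in (hba, array_port):
    port.flogi(fabric)
hba.plogi(array_port)
print(hba.prli(array_port))   # ready to exchange data
```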

27 Fibre Channel Addressing
Fibre Channel addresses are used to transport frames from source ports to destination ports. Address assignment varies with the topology: in a loop, addresses are self-assigned; in a switched fabric, they are assigned by a central authority. Certain addresses are reserved, for example FFFFFC for the Name Server and FFFFFE for Fabric Login.
As mentioned previously, FC addresses are required for node communication. They designate the source and destination of frames in the Fibre Channel network and can be compared to network IP addresses. They are assigned when the node either enters the loop or connects to the switch. The reserved addresses are used for fabric services rather than for device interfaces.
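In a switched fabric the 24-bit address is conventionally read as three one-byte fields; the slide only names the switch Domain ID explicitly, so the Area/Port labels below are an assumption based on that convention. A small sketch that decomposes an address and recognizes the two reserved addresses mentioned above:

```python
# Sketch: decompose a 24-bit switched-fabric address into Domain / Area / Port
# bytes and flag the reserved well-known addresses named on the slide.
WELL_KNOWN = {0xFFFFFC: "Name Server", 0xFFFFFE: "Fabric Login"}

def describe_fc_address(addr: int) -> str:
    if addr in WELL_KNOWN:
        return f"{addr:06X}: reserved ({WELL_KNOWN[addr]})"
    domain = (addr >> 16) & 0xFF   # switch (Domain ID)
    area   = (addr >> 8) & 0xFF    # port group on that switch (assumed label)
    port   = addr & 0xFF           # individual port or loop device (assumed label)
    return f"{addr:06X}: domain {domain}, area {area}, port {port}"

print(describe_fc_address(0xFFFFFC))   # reserved (Name Server)
print(describe_fc_address(0x210100))   # an ordinary N_Port address
```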

28 What is a Fabric?
A virtual space used by nodes to communicate with each other once they have joined it. Component identifiers: Domain ID, World Wide Name (WWN).
A fabric is a virtual space in which all storage nodes communicate with each other, potentially over long distances. It can be created with a single switch or a group of interconnected switches. Each switch contains a unique domain identifier, which is used in the addressing scheme of the fabric; 24-bit Fibre Channel addresses identify the nodes in the fabric. When a device logs into a fabric, its information is maintained in a database. The common fabric services are the Login Service, Name Service, Fabric Controller, and Management Server.

29 Fibre Channel Topologies
Arbitrated Loop (FC-AL): devices are attached to a shared loop, analogous to Token Ring.
Switched Fabric (FC-SW): all devices connect to a fabric switch, analogous to an IP switch; initiators have unique, dedicated I/O paths to targets.
The ANSI Fibre Channel standard defines distinct topologies: arbitrated loop (FC-AL) and switched fabric (FC-SW). In FC-AL, devices are attached to a shared loop. Each device has to contend to perform I/O on the loop through a process called arbitration, and at any given time only one device can own I/O on the loop, resulting in a shared-bandwidth environment. In a switched fabric, each device has a unique, dedicated I/O path to the device it is communicating with, accomplished by implementing a fabric switch.

30 Switch versus Hub Comparison
Switches (FC-SW): the architecture scales to millions of connections; bandwidth per device stays constant as connectivity increases, because connections are dedicated; higher availability than hubs; higher cost.
Hubs (FC-AL): limited to 127 connections on a loop (substantially fewer are usually implemented for acceptable performance); bandwidth per device diminishes as devices are added, because connections are shared; low-cost connectivity.
The primary differences between switches and hubs are scalability and performance. The FC-SW architecture scales to support over 16 million devices; expansion ports, explained in the next few pages, allow switches to interconnect and build large fabrics. The FC-AL protocol implemented in hubs supports a maximum of 126 node ports (127 loop addresses, one of which is reserved for attachment to a fabric). As discussed earlier, fabric switches provide full bandwidth between multiple pairs of ports in a fabric, resulting in a scalable architecture that supports multiple communications at the same time. A hub, on the other hand, provides shared bandwidth that supports only one communication at a time. Hubs provide a low-cost connectivity expansion solution, while switches can be used to build dynamic, high-performance fabrics through which multiple communications occur simultaneously, at a higher cost.
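The bandwidth contrast is easy to quantify: on a loop, every attached device shares one link's worth of bandwidth, while on a switch each communicating pair gets a dedicated path. A rough illustration (the link speed and device counts below are arbitrary values for the sketch):

```python
# Rough effective per-device bandwidth (MB/s) as nodes are added.
LINK_MBPS = 200   # e.g. a 2 Gb/s Fibre Channel link carries roughly 200 MB/s

for devices in (2, 8, 32, 126):
    hub_share = LINK_MBPS / devices   # FC-AL: one shared loop, one talker at a time
    switch_share = LINK_MBPS          # FC-SW: dedicated path per port pair
    print(f"{devices:4d} devices: hub about {hub_share:6.1f} MB/s each, "
          f"switch {switch_share} MB/s each")
```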

31 How an Arbitrated Loop Hub Works
[Diagram: four nodes attached to hub ports, each NL_Port with transmit/receive links and port-bypass circuits.]
FC-AL transmission:
A node that wants to transmit data on the loop sends an arbitration frame (ARB) around the loop to request control.
If two nodes arbitrate at the same time, the node with the highest priority wins control (the lowest AL_PA / highest Loop ID determines priority).
When the originator receives its own ARB back, it has gained control of the loop and transmits data to the node with which it has established a virtual connection.
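A minimal sketch of just the priority rule described above: when several loop ports arbitrate at once, the port with the lowest AL_PA value (the highest priority) wins. This is only the tie-break comparison, not the full loop protocol:

```python
# FC-AL arbitration priority: lower AL_PA value wins control of the loop.
def arbitration_winner(contending_al_pas):
    """Return the AL_PA that gains the loop when several ports arbitrate at once."""
    return min(contending_al_pas)

# Two NL_Ports send ARB frames at the same time; the lower AL_PA wins.
print(hex(arbitration_winner([0xE8, 0x01])))   # 0x1 gains the loop and may open a connection
```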

32 How a Switched Fabric Works
FC-SW transmission: at boot time, a node initializes and logs into the fabric. The node then contacts the Name Service to obtain the list of nodes already logged in, performs individual device logins with its targets, and transmits data through the switched fabric. Each such link is a dedicated connection between the initiator and the target, and all subsequent exchanges between the two nodes use this "private" link.

33 Inter Switch Links (ISLs)
Multimode fiber: 500 m at 1 Gb/s, 300 m at 2 Gb/s. Single-mode fiber: up to 10 km. Metro ring or point-to-point topologies, with or without path protection.
Switches in a fabric are connected to each other using Inter-Switch Links (ISLs), by cabling an expansion port (E_Port) on one switch to an E_Port on another. ISLs carry host-to-storage data as well as fabric management traffic from one switch to another; they are therefore the fundamental building blocks that shape the performance and availability characteristics of a fabric and of the SAN.

34 Topology: Mesh Fabric
A mesh can be either partial or full. In a full mesh, all switches are connected to each other. Hosts and storage can be located anywhere in the fabric, or localized to a single switch.
In this topology, switches are connected to each other directly using ISLs. The purpose is to increase connectivity within the SAN: the more ports that exist, the more nodes can participate and communicate.
Features of a partial mesh topology: traffic may need to traverse several ISLs (hops); hosts and storage can be located anywhere in the fabric; hosts and storage can be localized to a single director or switch.
Features of a full mesh topology: a maximum of one ISL (hop) for any host-to-storage traffic.
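Because a full mesh requires a direct ISL between every pair of switches, the number of ISLs grows as n(n-1)/2 for n switches. A quick sketch of how many switch ports that consumes (the switch counts are arbitrary examples):

```python
# Full-mesh ISL math: every pair of switches is directly connected.
def full_mesh_isls(switches: int) -> int:
    return switches * (switches - 1) // 2

for n in (2, 4, 8, 16):
    isls = full_mesh_isls(n)
    # each ISL consumes one port on each of the two switches it joins
    print(f"{n:2d} switches -> {isls:3d} ISLs, {2 * isls:3d} ports used just for ISLs")
```

This is one reason additional switches in a mesh reduce the user port count, as noted on the next slide.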

35 Full Mesh Benefits
Benefits: all storage and servers are a maximum of one ISL hop away; hosts and storage may be located anywhere in the fabric; multiple paths for data using the Fabric Shortest Path First (FSPF) algorithm; simpler fabric management.
When implementing a mesh topology, follow these recommendations: localize hosts and storage when possible (traffic is bi-directional for both read/write and host/storage on both switches); evenly distribute access across ISLs; and minimize hops, keeping traffic from remote switches to no more than 50% of overall local traffic.
Fabric Shortest Path First (FSPF) is the routing protocol used in Fibre Channel switched networks. It calculates the best path between switches, establishes routes across the fabric, and calculates alternate routes in the event of a failure or topology change.
There are trade-offs to keep in mind when implementing mesh fabrics: additional switches raise the ISL port count and reduce the user port count, and thought must be given to the placement of hosts, storage, and ISLs, or ISLs can become overloaded or underutilized.
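FSPF is a link-state protocol, but its essential output for this discussion is the best route between switches. A much-simplified stand-in using a plain breadth-first search over an ISL map; real FSPF weighs link costs (derived from link speed) rather than just counting hops, and the four-switch map below is hypothetical:

```python
from collections import deque

# Simplified stand-in for FSPF: find a fewest-hop path between two switches.
def shortest_path(isl_map, src, dst):
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for neighbor in isl_map.get(path[-1], ()):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None   # no route: the fabric is partitioned

# Hypothetical four-switch partial mesh.
isl_map = {"sw1": ["sw2", "sw3"], "sw2": ["sw1", "sw4"],
           "sw3": ["sw1", "sw4"], "sw4": ["sw2", "sw3"]}
print(shortest_path(isl_map, "sw1", "sw4"))   # e.g. ['sw1', 'sw2', 'sw4']
```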

36 Topology: Simple Core-Edge Fabric
A core-edge fabric can have two or three tiers: a single core tier and one or two edge tiers. In a two-tier topology, storage is usually connected to the core. Benefits: high availability, medium scalability, medium to maximum connectivity.
In this topology, several switches are connected in a hub-and-spoke configuration, so called because there is a central connection point, much like the hub of a bicycle wheel. (This does not refer to an FC-AL hub; the term is simply descriptive.) There are two types of switch tier in the fabric:
Edge tier - usually departmental switches. It offers an inexpensive approach to adding more hosts to the fabric and fans out from the core tier. Nodes on the edge tier can communicate with each other only through the core tier. Host-to-storage traffic traverses a single ISL (two-tier) or two ISLs (three-tier).
Core, or backbone, tier - usually enterprise directors. It ensures the highest availability, since all traffic either traverses or terminates at this tier. In a two-tier design, all storage devices are connected to the core tier, facilitating fan-out; hosts running mission-critical applications can be connected directly to the storage tier, avoiding ISLs for their I/O.
This topology increases connectivity within the SAN while conserving overall port utilization: general connectivity is provided by the core, while nodes connect at the edge. If expansion is required, an additional edge switch can be connected to the core. The topology has two variations: a two-tier topology (one edge tier and one core), in which all hosts connect to the edge and all storage connects to the core; and a three-tier topology (two edge tiers and one core), in which all hosts connect to one edge, all storage connects to the other edge, and the core tier carries only ISL traffic.

37 Core-Edge Benefits
Simplified propagation of fabric data; one-ISL-hop access to all storage in the fabric; efficient design based on node type; traffic management and predictability; easier calculation of ISL loading and traffic patterns.
A key benefit of the core-edge topology is simpler propagation of fabric data: configurations are easily distributed throughout the fabric because of the common connectivity. Node workloads can be evenly distributed based on location, with hosts on the edge and storage in the core. Performance analysis and traffic management are simplified, since load can be predicted from where each node resides. Increasing the number of core switches grows the ISL count; this is a natural progression when growing the fabric but may introduce additional hops and thus decrease performance. Choosing the wrong switch for the core makes scaling difficult; high-port-density directors are best suited to the core.

38 Lesson Summary
Topics in this lesson included Fibre Channel SAN connectivity methods and topologies, Fibre Channel devices, Fibre Channel communication protocols, and Fibre Channel login procedures.
A Fibre Channel SAN is a set of nodes connected through ports. The nodes are connected into a fabric (either an arbitrated loop or a switched fabric) using hubs, switches, and directors. A switched fabric can have different topologies, such as mesh or core-edge. Benefits of a fabric include multiple paths between storage and hosts, one-ISL access to all storage (in a core-edge topology), and simplified fabric management.

39 Lesson: SAN Management
Upon completion of this lesson, you will be able to describe SAN management functions: infrastructure protection, provisioning, capacity management, and performance management.
This lesson provides an overview of the processes and procedures required to manage a SAN.

40 SAN Management Overview
Infrastructure protection, fabric management, storage allocation, capacity tracking, performance management.
There are several ways to look at managing a SAN environment:
Infrastructure protection - one crucial aspect of SAN management is environmental protection, or security. To ensure data integrity, steps must be taken to secure data and prevent unauthorized access, covering both physical security (physical access to components) and network security.
Fabric management - monitoring and managing the switches is a daily activity for most SAN administrators. Activities include accessing the switch management software for monitoring, and zoning.
Storage allocation - making sure nodes access the correct storage in the SAN. The major activity is executing the appropriate LUN masking and mapping utilities.
Capacity tracking - knowing the current state of the storage environment is important for proper allocation. This process involves record management, performance analysis, and planning.
Performance management - applications must perform at least as well as in a DAS environment. Performance management helps meet this requirement by keeping the SAN administrator aware of current operations and helping avoid potential bottlenecks.

41 Infrastructure Security
Physical security: a locked data center, centralized server and storage infrastructure, controlled administrator access, and a secure VPN or firewall between the corporate LAN and the private management LAN.
It is imperative to maintain a secure location and network infrastructure. The continuing expansion of the storage network exposes data center resources and the storage infrastructure to new vulnerabilities, and data aggregation increases the impact of a security breach. Fibre Channel storage networking potentially exposes storage resources to traditional network vulnerabilities. For example, it is important to ensure that the management network (typically IP-based) is protected by a firewall, that passwords are strong, and that the physical infrastructure of the SAN is completely isolated. Management access may be in-band (over FC) or out-of-band (over IP).

42 Switch/Fabric Management Tools
Vendor-supplied management software is embedded within the switch and offers a graphical user interface (GUI) or a command line interface (CLI). Common functions include performance monitoring, discovery, and access management (zoning), although the "look and feel" differs between vendors. Third-party software add-ons provide enhanced functionality, such as automation.
Switch vendors embed their own management software in each of their devices. By connecting to the switch across the IP network, an administrator can access a graphical management tool (generally web-based) or issue CLI commands (via a telnet session). Once connected, the tasks are similar across vendors; the differences lie in the commands and the GUI. Management activities include switch hardware monitoring (ports, fans, power supplies), fabric activity (node logins, data flow, transmission errors), and fabric partitioning (creating, managing, and activating zones). In addition to vendor-specific tools, newer SAN management packages such as Storage Resource Management (SRM) software are being developed by third parties; this software monitors a SAN and, based on policies, automatically performs administrative tasks.

43 Fabric Management: Zoning
Zoning is a switch function that allows nodes within the fabric to be logically segmented into groups that can communicate with each other. The zoning function enforces this at login, allowing only ports in the same zone to establish link-level services.

44 Zoning Components
Zone sets contain zones; zones contain members (WWNs).
There are several configuration layers involved in granting nodes the ability to communicate with each other:
Members - nodes within the SAN that can be included in a zone.
Zones - each zone contains a set of members that can access each other. A port or node can be a member of multiple zones.
Zone sets - groups of zones that can be activated or deactivated as a single entity, in either a single-switch or a multi-switch fabric. Only one zone set can be active at a time per fabric. A zone set is also referred to as a zone configuration.
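The member / zone / zone-set hierarchy maps naturally onto a few small data structures. A hedged sketch: the hierarchy and the "one active zone set per fabric" rule come from the slide, while the class, method names, and zone names are purely illustrative:

```python
# Sketch of the zoning hierarchy: members -> zones -> zone sets,
# with only one zone set active per fabric at a time.
class FabricZoning:
    def __init__(self):
        self.zone_sets = {}      # set name -> {zone name: set of member WWNs}
        self.active_set = None   # only one zone set may be active at once

    def define_zone_set(self, set_name, zones):
        self.zone_sets[set_name] = {z: set(m) for z, m in zones.items()}

    def activate(self, set_name):
        self.active_set = set_name   # activating a new set replaces the previous one

    def can_communicate(self, wwn_a, wwn_b):
        """Two ports may talk only if some zone in the active set contains both."""
        zones = self.zone_sets.get(self.active_set, {})
        return any({wwn_a, wwn_b}.issubset(members) for members in zones.values())

zoning = FabricZoning()
zoning.define_zone_set("prod_config", {
    "zone_hostA": {"10:00:00:00:C9:20:DC:40", "50:06:04:82:E8:91:2B:9E"},
})
zoning.activate("prod_config")
print(zoning.can_communicate("10:00:00:00:C9:20:DC:40",
                             "50:06:04:82:E8:91:2B:9E"))   # True
```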

45 Types of Zoning
[Diagram: servers and an array attached to two switches, with example identifiers - Domain ID 21 / Port 1, Domain ID 25 / Port 3, HBA WWNs 10:00:00:00:C9:20:DE:56 and 10:00:00:00:C9:20:DC:40, and array WWN 50:06:04:82:E8:91:2B:9E.]
In general, zoning can be divided into three categories:
WWN zoning (soft) - uses the unique WWNs of the nodes, as recorded in the switches, to allow or block access. A major advantage of WWN zoning is flexibility: the SAN can be re-cabled without reconfiguring the zone information, since the WWN stays with the port.
Port zoning (hard) - uses physical switch ports to define zones; access to data is determined by the physical port a node is connected to. Although this method is quite secure, the zoning configuration must be updated whenever re-cabling occurs.
Mixed zoning - combines the two methods, tying a specific switch port to a node WWN. This is not a typical method.
Examples:
WWN Zone 1 = 10:00:00:00:C9:20:DC:40; 50:06:04:82:E8:91:2B:9E
Port Zone 1 = 21,1; 25,3
Mixed Zone 1 = 10:00:00:00:C9:20:DE:56; Port 21/1

46 Single HBA Zoning
Optimally, there is one HBA per zone. Nodes can only "talk" to storage in the same zone, and storage ports may be members of more than one zone. HBA ports are isolated from each other to avoid the problems associated with the SCSI discovery process (also known as "chatter"). This decreases the impact of changes in a fabric by reducing the number of nodes that must communicate.
Under single-HBA zoning, each HBA is configured with its own zone. The members of the zone are the HBA and one or more storage ports presenting the volumes that the HBA will use. Reasons for single-HBA zoning include cutting down the reset time for any change made in the state of the fabric: only the nodes within the same zone are forced to log back into the fabric after an RSCN (Registered State Change Notification).

47 Provisioning: LUN Masking
LUN masking restricts volume access to specific hosts and/or host clusters: servers can only access the volumes assigned to them. Access is controlled in the storage array, not in the fabric, which makes distributed administration secure. Masking is managed with GUI or command line tools.
Device (LUN) masking ensures that volume access by servers is controlled appropriately, preventing unauthorized or accidental use in a distributed environment. It is typically implemented on the storage array using a dedicated masking database. A zone set can have multiple host HBAs and a common storage port; LUN masking prevents multiple hosts from trying to access the same volume presented on the common storage port. The access control works as follows:
When servers log into the switched fabric, the WWNs of their HBAs are passed to the storage fibre adapter ports in their respective zones.
The storage system records the connection and builds a filter listing the storage devices (LUNs) available to that WWN through that storage port.
The HBA port then sends I/O requests directed at a particular LUN to the storage port. Each request includes the identity of the requesting HBA (from which its WWN can be determined) and the identity of the requested storage device, with its storage port and logical unit number (LUN).
The storage array processes each request, verifying that the HBA is allowed to access that LUN on the specified port. Any request for a LUN that the HBA does not have access to returns an error to the server.
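The flow above reduces to a lookup keyed on the requesting HBA's WWN and the array port the request arrived on. A minimal sketch of that check, assuming an in-array masking table; the table layout, port name "FA-7A", and return strings are illustrative, not a real array's interface:

```python
# Sketch of LUN masking: the array filters each I/O request against a masking
# database keyed by (HBA WWN, array front-end port).
masking_db = {
    ("10:00:00:00:C9:20:DC:40", "FA-7A"): {0, 1, 2},   # LUNs this HBA may use on FA-7A
    ("10:00:00:00:C9:20:DE:56", "FA-7A"): {3},
}

def check_io(hba_wwn, array_port, lun):
    allowed = masking_db.get((hba_wwn, array_port), set())
    if lun in allowed:
        return "OK: request forwarded to the device"
    return "ERROR: LUN not accessible from this HBA/port"   # returned to the server

print(check_io("10:00:00:00:C9:20:DE:56", "FA-7A", 3))   # OK
print(check_io("10:00:00:00:C9:20:DE:56", "FA-7A", 0))   # ERROR
```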

48 Capacity Management
Tracking and managing assets: the number of ports assigned and the storage allocated. A utilization profile indicates resource utilization over time and allows forecasting. SAN management software provides the tools: inventory databases and report writers.
Capacity planning is a combination of record management, performance analysis, and planning. Ongoing SAN management revolves around knowing how well storage resources are being utilized and proactively adjusting configurations based on application and usage needs. The key activity is simply to track the assets of the SAN: all SAN components, the allocation of assets, and known utilization. For example, if the amount of storage originally allocated to a host, the current usage rate, and the growth over a period of time are all tracked, we can ensure that hosts are not wasting storage. Whenever possible, reclaim unused storage and return it to the array free pool; do not let devices remain on host ports consuming valuable address space. Know the capacity of the array, what is allocated, and what is free. With this data, a utilization profile can be created, enabling reports based on current allocation and consumption and allowing future requests to be projected. Almost all SAN management software can capture this type of data and generate either custom or "canned" reports.
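The utilization-profile idea amounts to tracking consumption over time and projecting it forward. A toy sketch of such a forecast using a simple linear trend; the sample values, the allocation figure, and the linear growth model are all made up for illustration:

```python
# Toy capacity forecast: estimate when an allocation will be exhausted from a
# simple linear trend across monthly consumed-GB samples.
ALLOCATED_GB = 2000
monthly_used_gb = [650, 700, 760, 810, 880, 930]   # hypothetical samples

growth_per_month = (monthly_used_gb[-1] - monthly_used_gb[0]) / (len(monthly_used_gb) - 1)
headroom = ALLOCATED_GB - monthly_used_gb[-1]
months_left = headroom / growth_per_month if growth_per_month > 0 else float("inf")

print(f"Average growth: {growth_per_month:.0f} GB/month")
print(f"Projected months until the allocation is consumed: {months_left:.1f}")
```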

49 Performance Management
What is it? Capturing metrics and monitoring trends, responding proactively or reactively, and planning for future growth. Areas and functions: host, fabric, and storage performance, and building baselines for the environment.
In a networked environment it is necessary to have an end-to-end view: every component involved in a read or write needs to be monitored and analyzed. Storage administrators need to be involved in all facets of system planning, implementation, and delivery. Databases that are not properly planned for and laid out on an array's back end will inevitably cause resource contention and poor performance. Performance bottlenecks can be difficult to diagnose; common causes include database layouts that overload disks, server settings that affect data path utilization, shifting application loads that create switch bottlenecks, and poor SQL code that causes excess I/O.

50 Lesson Summary
Topics in this lesson included infrastructure protection, provisioning, capacity management, and performance management.

51 Lesson: SAN Deployment Examples
Upon completion of this lesson, you will be able to describe common SAN deployment considerations, explain SAN deployment examples, and describe SAN challenges.
This lesson provides an overview of considerations when implementing a new SAN environment or expanding an existing one.

52 When Should a SAN be Used?
SANs are optimized for high-bandwidth, block-level I/O and are suited to the demands of real-time applications: databases (OLTP, online transaction processing), video streaming, and any application with a high transaction rate, high data volatility, or stringent requirements on I/O latency and throughput. SANs are also used to consolidate heterogeneous storage environments, either physically or logically.
Storage area networks can handle large amounts of block-level I/O and are suited to high-performance applications that need access to data in real time. In many environments these applications share access to storage resources, and implementing them on a SAN allows efficient use of those resources. When data volatility is high, a host's capacity and performance needs can grow or shrink significantly in a short period; because the SAN architecture is flexible, existing storage can be rapidly redeployed across hosts as needs change, with minimal disruption.
SANs are also used to consolidate storage within an enterprise, at a physical or a logical level. Physical consolidation involves relocating resources to a centralized location; once consolidated, facility resources such as HVAC (heating, ventilation, and air conditioning), power protection, personnel, and physical security can be used more efficiently. Its drawback is that it offers no resilience against a site failure. Logical consolidation brings components under a unified management infrastructure and creates a shared resource pool. Since SANs can be extended across vast distances, logically related entities do not have to be physically close to each other. Logical consolidation does not capture the full benefits of site consolidation, but it does offer some protection against site failure, especially if well planned.

53 Consolidation Example: DAS Challenge
This example shows a typical networked environment in which the servers use DAS storage. This can be described as "stove-piped" storage and is difficult to manage: there is no easy way to determine utilization, and it is difficult to provision storage accurately. For example, one server may be using 25% of its disk capacity while another is at 90%, and in this model there is no effective way to remedy the disparity. The only way information can be shared between platforms is over the user network; this non-value-added bulk data transfer slows the network and can consume up to 35% of server processing capacity. The environment also does not scale effectively and is costly to grow. Another issue is administrative overhead: individual server administrators are responsible for maintenance tasks such as backup, and there is no way to guarantee consistency in how those tasks are performed.

54 Consolidation Example: SAN Solution
Implementing a SAN resolves many of the issues encountered in the DAS configuration: it simplifies storage administration and adds flexibility. Note that SAN storage is still a one-to-one relationship, meaning each device is "owned" by a single host through zoning and LUN masking. This solution also increases storage capacity utilization, since multiple servers can share the same pool of unused resources.

55 Connectivity Example: Challenge
In this example, hosts and storage are connected to the same fabric of switches. This is a simple, efficient, and effective way to manage access, and the entire fabric can be managed as a whole. Consider just one storage port: access to that storage device must traverse the fabric to a single switch. As ports are needed on the fabric, the administrator may choose whatever port is open, so multiple hosts spread across the fabric end up contending for storage access on a remote switch. The initial fabric design may not have taken this kind of growth into account, and this is only one example; now imagine dozens of storage ports being accessed by hundreds of hosts stretched across the fabric.

56 Connectivity Example: Solution
By moving storage to a central location, all nodes have the same number of hops to reach storage, traffic patterns become more obvious and deterministic, and scaling is easier.

57 FC SAN Challenges
Infrastructure: new, separate networks are required. Skill sets: as a relatively new technology, FC SAN administrative skills need to be cultivated. Cost: large investments are required for effective implementation.
The traditional answer to storage area networking is an FC SAN. However, with the emergence of newer SAN connectivity technology, namely IP, trends are changing. The investment required to implement an FC SAN is often quite large: new infrastructure must be built and new technical skills developed. As a result, enterprises may find that utilizing an existing IP infrastructure is a better option. The FC SAN challenges fall into the following categories:
Infrastructure - an FC network demands FC switches, hubs, and bridges, along with specific GBICs and cabling; in addition, each host requires dedicated FC HBAs.
Software - a variety of software tools is needed to manage all of this new equipment as well as the dedicated FC HBAs, and many of these tools do not interoperate.
Human resources - a dedicated group of FC storage and networking IT administrators is needed to manage the network.
Cost - ultimately, a good deal of time and capital must be outlaid to implement an FC SAN.

58 Lesson Summary
Topics in this lesson included common SAN deployment considerations, SAN implementation scenarios (consolidation and connectivity), and SAN challenges.
This lesson provided an overview of deploying a SAN environment and common scenarios for consolidation and connectivity.

59 Module Summary
Topics in this module included: the features and benefits of a SAN; the physical and logical elements of a SAN; common SAN topologies; a comparison of SAN connectivity devices; SAN connectivity options; I/O flow in the SAN environment; SAN management considerations; and applications of a SAN strategy.
This module introduced the Fibre Channel Storage Area Network (FC SAN). We looked at the components and connectivity methods of a SAN, as well as management topics and applications of SAN technology.

60 Check Your Knowledge
Name three key features of a SAN implementation.
What is a switch?
Describe how a SAN can be connected.
What is one common SAN topology?
What are two management considerations for a SAN environment?
What is a fabric?
What is a core-edge fabric?

61 Apply Your Knowledge
Connectrix™ Family of SAN Switches and Directors

62 Apply Your Knowledge…
Upon completion of this topic, you will be able to describe EMC's product implementation of the Connectrix™ Family of SAN Switches and Directors.

63 The Connectrix Family
High-speed Fibre Channel connectivity at 1 to 10 gigabits per second, highly resilient switching technology, and options for IP storage networking; configurable to adapt to any business need.
Models shown: MDS-9509, ED-10000M, MDS-9506, AP-7420B, ED-48000B, ED-140M, MP-2640M, MP-1620M, MDS-9216i/A, MDS-9140, DS-4100B, MDS-9120, DS-4700M, DS-4400M, DS-220B.
Switches and directors are key components of a SAN environment. The Connectrix family represents a wide range of products that can be used in departmental and enterprise SAN solutions. The family has the following overall strengths:
Connectrix B-series (Brocade) - from the low-cost, customer-installable eight-port switch to the robust 256-port, 4 Gb ED-48000B director, the B-series is an extensive product line. It also offers a broad set of optional features such as customer-installable setup wizards, a familiar GUI, and a powerful CLI.
Connectrix M-series (McData) - delivers multi-protocol storage networking for SAN-routing and SAN-extension applications. With an extensive history in mainframe environments, where high availability and performance are critical, it is the natural choice for FICON storage networking.
Connectrix MDS-series (Cisco) - provides a unified network strategy that combines data networks with the SAN. For example, IP support runs across the MDS product line, along with data-network features such as virtual SANs (VSANs), Inter-VSAN Routing (IVR), and PortChannel.

64 Switches versus Directors
Connectrix switches: high availability through redundant deployment; redundant fans and power supplies; departmental deployment or part of a data center deployment; small to medium fabrics; multi-protocol possibilities.
Connectrix directors: "redundant everything" provides optimal serviceability and the highest availability; data center deployment; maximum scalability and performance; large fabrics; multi-protocol.
Functionally, an FC switch and an FC director perform the same task: enabling end-to-end communication between two nodes on the SAN. The differences lie in scalability and availability. Directors are deployed for extreme availability and/or large-scale environments, which is where they fit best. Connectrix directors have up to 256 ports per device, and the SAN can scale much larger by connecting devices with ISLs (Inter-Switch Links). Directors let you consolidate more servers and storage with fewer devices and therefore less complexity; their disadvantages are higher cost and a larger footprint. Switches are the choice for smaller environments and/or environments where 100% availability may not be required; price is usually the driving factor, making switches ideal for departmental or mid-tier environments. Each switch may have 16-48 ports but, as with directors, SANs may be expanded through ISLs. Fabrics built with switches require more switches to consolidate servers and storage, which means more devices and more complexity in the SAN. The disadvantages of switches are fewer ports and the complexity of scaling.

65 Connectrix Switch - DS-220B
Provides eight, 12, or 16 ports; auto-detecting 1, 2, and 4 Gb/s Fibre Channel ports; a single, fixed power supply; field-replaceable optics; redundant cooling; and simplified setup, with no previous SAN experience needed and no advanced skills required to manage IP addressing or zoning.
Departmental switches offer several key features, including buffering based on non-blocking algorithms; high bandwidth, with full-duplex serial data transfer at rates up to 4 Gb/s; low latency with low communication overhead using the Fibre Channel protocol; and the ability to support multiple topologies from the same switch. As the slide shows, however, they offer limited connectivity. The Connectrix DS-220B is a good example: this switch is well suited for entry-level SANs, as well as for edge deployments in core-to-edge topologies.

66 Connectrix Director – MDS 9509
Multi-transport switch (Fibre Channel, FICON, iSCSI, FCIP); 16 to 224 Fibre Channel ports; 4-56 Gigabit Ethernet ports for iSCSI or FCIP; non-blocking fabric; 1/2 Gb/s auto-sensing ports; all components fully redundant.
A high-availability director, the MDS-9509 has a nine-slot chassis that supports up to seven switching modules, for a total of up to 224 1/2 Gb/s auto-sensing Fibre Channel ports in a single chassis. IP services blades can also be inserted to support iSCSI and/or FCIP, and with 1.44 Tb/s of internal bandwidth the MDS-9509 is 10 Gb/s-ready.

67 Connectrix Management Interfaces
The Connectrix platform offers a variety of interfaces for configuring and managing the SAN environment, from a secure CLI to GUI-based device management tools (MDS-Series Fabric Manager, M-Series Web Server, B-Series Web Tools), bundled with the hardware. The options include:
Console port - initial switch configuration and out-of-band troubleshooting.
Command line interface (CLI) - Telnet and Secure Shell (SSH) are supported.
Graphical user interface (GUI) - embedded Java client-server applications with secure communication; displays a topology map and can configure and monitor individual switches or the entire fabric.

68 Module Summary
The Connectrix family of switches and directors has three product sets (Connectrix B-Series, Connectrix MDS 9000 Series, and Connectrix M-Series), provides highly available access to storage, and connects a wide range of host and storage technologies.
This topic introduced the Connectrix family for Fibre Channel Storage Area Networks (FC SANs). We looked at its switch and director products, their connectivity capabilities, and their management interfaces.

