1
Module 9 PS-M4110 Overview
Welcome to the Dell 12G version of the PowerEdge M1000e training course. This training provides an overview of the components and features of the Dell PowerEdge M1000e Blade Server enclosure.
2
Module Objectives:
- Describe the PS-M4110 solution
- Describe the features of the PS-M4110
- Install the PS-M4110 in the M1000e chassis
- Operate the drawer of the PS-M4110
- Describe the airflow features of the M1000e
- Identify whether the M1000e chassis has a version 1.1 midplane
- Describe how the PS-M4110 connects to the servers via the fabric modules
3
Introduction
4
PS Series Continuous advancements of inclusive software features
Product timeline (2003 onward): early releases (PS100, PS3000), PS4000 series, FS7500, PS-M4110. Software milestones along the way include thin provisioning, Auto-Snapshot Manager/VMware® Edition, ASM/ME for Hyper-V™, SAN Headquarters, VMware® Site Recovery Manager, VMware® vStorage, Data Center Bridging (DCB), core file capability, synchronous replication (SyncRep), IPsec, and self-encrypting drives (SED). The timeline marks the start of solution innovation and the continuous advancement of inclusive software features.
5
Dell PS-M4110 Blade Array
- EqualLogic iSCSI SAN for the Dell M1000e blade chassis
- 14 hot-plug 2.5" disk drives
- 10GbE controllers
- PS Series firmware / Group Manager
- The M1000e CMC provides chassis management
- The M1000e chassis provides power, cooling, integrated networking, and density
The PS-M4110 is a blade server storage array, managed by the M1000e CMC (Chassis Management Controller). The minimum CMC firmware version that supports the PS-M4110 is 4.11-A01 (see the version-check sketch below).
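A quick way to enforce that firmware floor is a small version check. The following is a minimal sketch, assuming you have already copied the CMC firmware string (for example, from the CMC web interface or RACADM output); the "major.minor-Ann" parsing is an assumption based on the 4.11-A01 format shown above.

```python
import re

# Minimum CMC firmware that supports the PS-M4110 (from the slide above).
MIN_CMC_FIRMWARE = "4.11-A01"

def parse_cmc_version(version):
    """Split a version string like '4.11-A01' into a comparable tuple.

    The 'major.minor-Ann' layout is assumed from the 4.11-A01 example;
    adjust the pattern if your firmware strings differ.
    """
    m = re.fullmatch(r"(\d+)\.(\d+)-A(\d+)", version.strip())
    if not m:
        raise ValueError("unrecognized CMC version string: %r" % version)
    return tuple(int(part) for part in m.groups())

def supports_ps_m4110(cmc_version):
    """Return True if the CMC firmware is new enough to manage a PS-M4110."""
    return parse_cmc_version(cmc_version) >= parse_cmc_version(MIN_CMC_FIRMWARE)

print(supports_ps_m4110("4.11-A01"))  # True: exactly the minimum
print(supports_ps_m4110("4.45-A00"))  # True: a newer release
print(supports_ps_m4110("4.10-A02"))  # False: one minor release too old
```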
6
PS-M4110 Blade Array - Installed in the M1000e blade chassis.
- Two control modules
- Two 10GbE ports
- Configurable management port
- 14 hot-pluggable 2.5" disks (6Gb/s SAS drives)
The PS-M4110 blade array slides into two M1000e slots for power, cooling, and management, and connects to the M1000e midplane. No cables are required.
7
Dell PS-M4110 and the PowerEdge M1000e 12G Enclosure
The Dell PowerEdge M1000e Modular Server Enclosure is a breakthrough in enterprise server architecture. The enclosure and its components spring from a ground-up design incorporating the latest advances in power, cooling, I/O, and management technologies. Each blade server is often dedicated to a single application. The blades are literally servers on a card, containing processors, memory, integrated network controllers, an optional Fibre Channel host bus adapter (HBA), and other input/output (I/O) ports. The PowerEdge M1000e solution is designed for customers that require high performance, high availability, and manageability in the most rack-dense form factor, including corporate, public, and small/medium business (SMB) customers.
Modular blade servers are typically implemented in:
- Virtualization environments
- SAN applications (Exchange, databases)
- High-performance cluster and grid environments
- Front-end applications (web apps/Citrix/terminal services)
- File sharing access
- Web page serving and caching
- SSL encryption of web communication
- Audio and video streaming
Like most clustering applications, blade servers can also be managed to include load balancing and failover capabilities. The PS-M4110 inserts into two M1000e slots. The M1000e chassis provides power, cooling, and management for the PS-M4110 and the server blades.
8
PS-M4110 Blade Array
- Two 10GbE iSCSI ports, wired internally to the M-series Ethernet fabrics
- M-series blade server and PS-M4110 chassis environment: the chassis provides power, cooling, and Ethernet switch configuration; the CMC provides chassis management, configuration, and monitoring
- 10G Ethernet fabric selection: the PS-M4110 defaults to Fabric B, and can be on Fabric A with a version 1.1 midplane
- Supports DCB (Data Center Bridging)
- PS-M4110 performance is similar to the PS6100E
- The PS-M4110 can be installed in slots 1-2, 3-4, 5-6, 7-8, 9-10, 11-12, 13-14, or 15-16
- RAID types 6, 6 Accelerated, 50, and 10 are all supported (see the placement sketch below)
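Because the array must occupy one of the fixed double-wide slot pairs and only the listed RAID policies apply, a placement check fits in a few lines. A minimal sketch; the function name is illustrative, and it encodes only the slot pairs and RAID types from this slide.

```python
# Valid double-wide slot pairs for a PS-M4110 in the M1000e (from the slide).
VALID_SLOT_PAIRS = {(1, 2), (3, 4), (5, 6), (7, 8),
                    (9, 10), (11, 12), (13, 14), (15, 16)}

# RAID policies the slide lists as supported, lower-cased for comparison.
SUPPORTED_RAID = {"6", "6 accelerated", "50", "10"}

def check_placement(slots, raid):
    """Raise ValueError if the slot pair or RAID policy is not supported."""
    if tuple(sorted(slots)) not in VALID_SLOT_PAIRS:
        raise ValueError("%s: the array must occupy an odd/even slot pair "
                         "(1-2 through 15-16)" % (slots,))
    if raid.lower() not in SUPPORTED_RAID:
        raise ValueError("RAID %s: supported types are 6, 6 Accelerated, "
                         "50, and 10" % raid)

check_placement((9, 10), "50")       # OK: valid pair, supported RAID type
try:
    check_placement((10, 11), "6")   # spans an even/odd boundary
except ValueError as err:
    print(err)
```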
9
The power to do more in less space: M1000e and the PS-M4110 EqualLogic Blade Array
Traditional rack deployment: 4 switches, 2 PS Series arrays, 24 servers. Blade deployment: one 10U M1000e enclosure with 2 PS-M4110s and 24 blade servers.
Consider the following:
- Rack space saved: 22U
- Network cables not needed: 54
- Power cables not needed: for the 4 switches and 24 servers
Physical convergence = 3x space savings. (The comparable rack deployment occupies roughly 32U; consolidating it into the 10U enclosure saves 22U, approximately a 3x reduction.)
10
Installation and Drawer Operation
11
Installing the PS-M4110
The PS-M4110 installs in the top slots using the rail guides on the top, and in the bottom slots using the slot guides on the bottom. Slide the array into the available M1000e slots, then seat the array by sliding the handle firmly into place.
Before installing the PS-M4110 in an M1000e enclosure, note the following:
- Wear an electrostatic strap to prevent electrostatic damage.
- When shipped by itself, the PS-M4110 includes a retaining clip on the front to prevent the array drawer from sliding out of the array, and protective plastic covers on the back to protect the rear connectors from damage. You must remove the retaining clip and protective covers before installing the array in the M1000e enclosure. Optionally, you can also remove the protective caps covering the serial ports on the front. Save the clip and protective covers for future use.
After you have installed the PS-M4110 in the M1000e enclosure, verify proper installation by turning on power to the M1000e enclosure. If the PS-M4110 is properly installed, the Blade System Status LED on its front panel lights shortly after the M1000e is powered on.
12
PS-M4110 Drawer
To open the array inner drawer, push the array's front panel and release it quickly to unlatch the drawer. When the drawer is unlatched, a "Caution" label is visible.
Caution: The front panel is not designed as a handle. It can break if treated roughly. When opening the array inner drawer, do not pull on the front panel; grip and pull the drawer by its top, bottom, or sides.
The inner drawer can be opened while the member is operating to gain access to the hot-swap components.
13
PS-M4110 Drawer
When the PS-M4110 is out of the M1000e enclosure, the array's drawer cannot be opened unless its safety locking mechanism is released. A release button on the side of the PS-M4110 releases the latch that secures the array's drawer to its outer housing. This prevents the array's drawer from opening accidentally during handling outside of the M1000e enclosure. To open the array drawer, press and hold the release button to manually unlock the safety latch.
14
PS-M4110 / M1000e
15
EqualLogic PS-M4110 Ecosystem
M1000e chassis, PS-M4110, Dell Force10 or PowerConnect switch, PowerEdge 11G or 12G blades.
PS-M4110 components:
- Dual, hot-pluggable 10GbE controllers
- 4GB of memory per controller
- 1 dedicated 10/100 management port, accessible through the CMC
- 6Gb/s SAS backend
- 14x 2.5" drives
PS-M4110 design:
- Drawer-in-drawer, double-wide, half-height storage blade
- Operates/interoperates with servers inside and outside the chassis
PS-M4110 scalability (checked in the sketch after this list):
- Up to 4 PS-M4110 arrays per blade chassis
- Up to 2 PS-M4110 arrays per EqualLogic group
- Scale up to 16 EqualLogic arrays per group by joining arrays outside the chassis
EqualLogic array software:
- Peer storage architecture
- Advanced load balancing
- Snapshots, cloning, replication
- Thin provisioning
- Thin clones
EqualLogic host software:
- Auto-Snapshot Manager (Microsoft®, VMware®, Linux®)
- Multi-Path I/O
- PowerShell Tools
- Datastore Manager
- SAN HQ
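The scalability limits above can be checked programmatically when planning a deployment. A minimal sketch, encoding only the per-chassis and per-group limits listed on this slide; the function name is illustrative.

```python
# Scalability limits listed on the slide.
MAX_M4110_PER_CHASSIS = 4   # PS-M4110 arrays per M1000e blade chassis
MAX_M4110_PER_GROUP = 2     # PS-M4110 arrays per EqualLogic group
MAX_ARRAYS_PER_GROUP = 16   # total arrays per group, joining arrays outside the chassis

def check_scaling(m4110_in_chassis, m4110_in_group, arrays_in_group):
    """Return a list of limit violations for a proposed configuration."""
    problems = []
    if m4110_in_chassis > MAX_M4110_PER_CHASSIS:
        problems.append("%d PS-M4110s in one chassis (max %d)"
                        % (m4110_in_chassis, MAX_M4110_PER_CHASSIS))
    if m4110_in_group > MAX_M4110_PER_GROUP:
        problems.append("%d PS-M4110s in one group (max %d)"
                        % (m4110_in_group, MAX_M4110_PER_GROUP))
    if arrays_in_group > MAX_ARRAYS_PER_GROUP:
        problems.append("%d arrays in one group (max %d)"
                        % (arrays_in_group, MAX_ARRAYS_PER_GROUP))
    return problems

print(check_scaling(2, 2, 16))  # [] -- within every limit
print(check_scaling(2, 3, 16))  # flags the per-group PS-M4110 limit
```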
16
PS-M4110 Configuration Option #1 Fabric B Switch (Default)
Diagram: the PS-M4110 fabric interface module (CM0 active, CM1 passive) connects through the M1000e midplane to the Fabric B1 and B2 switches; the half-height blade servers (16 of 16) connect via the Fabric A Ethernet LOM and the Fabric B and C mezzanine cards, with external fabric connections leaving the chassis through the IOMs.
For increased availability, the Ethernet ports on both PS-M4110 control modules are automatically connected to each redundant M1000e I/O module (IOM) of the configured fabric, assuming both IOMs are installed. One port is active and one is passive. For example, if a PS-M4110 is configured for Fabric B and both the B1 and B2 IOMs are installed, the Ethernet ports from each control module are connected to both the B1 and B2 IOMs. This provides a total of four potential Ethernet paths; however, only one Ethernet path is active at any given time. In this example, if the B1 IOM fails, both the active and passive PS-M4110 ports automatically fail over to the B2 IOM (modeled in the sketch below).
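The four-path, one-active failover behavior can be modeled as a tiny state machine. The following is a minimal sketch under this slide's assumptions (two control modules, two IOMs, a single active path); the class and method names are illustrative, not part of any Dell tooling.

```python
class FabricPaths:
    """Model the PS-M4110's four potential Ethernet paths on one fabric.

    Each control module (CM0, CM1) has a port wired to each IOM (e.g. B1, B2),
    but only one path carries traffic at a time, per the slide.
    """

    def __init__(self, fabric="B"):
        self.iom_healthy = {fabric + "1": True, fabric + "2": True}
        self.paths = [(cm, iom) for cm in ("CM0", "CM1")
                      for iom in self.iom_healthy]
        self.active = self.paths[0]  # e.g. CM0 -> B1; the other three are passive

    def fail_iom(self, iom):
        """Fail an IOM; if it carried the active path, fail over to the survivor."""
        self.iom_healthy[iom] = False
        if self.active[1] == iom:
            survivor = next(name for name, up in self.iom_healthy.items() if up)
            # Per the slide, both active and passive ports relink to the survivor.
            self.active = (self.active[0], survivor)

fabric = FabricPaths("B")
print(len(fabric.paths), "potential paths; active:", fabric.active)  # 4, ('CM0', 'B1')
fabric.fail_iom("B1")
print("after B1 failure, active:", fabric.active)                    # ('CM0', 'B2')
```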
17
PS-M4110 Configuration Option #2 Fabric A Switch
Diagram: the PS-M4110 fabric interface module (CM0, CM1) connects through the M1000e midplane to the Fabric A1 and A2 Ethernet switches; Fabrics B and C remain ordinary I/O modules for the blade server mezzanine cards, with external fabric connections leaving through the IOMs.
To install the PS-M4110 on Fabric A, the M1000e must have a version 1.1 midplane installed.
18
PS-M4110 Configuration Option #3/4 Fabric A/B Pass-Thru to External Switch Stack
Diagram: the PS-M4110 fabric interface module (CM0, CM1) connects through the midplane to the Fabric A1/A2 (or B1/B2) pass-through modules, which uplink over the external fabric connections to an external Ethernet switch stack.
To install the PS-M4110 on Fabric A, the M1000e must have a version 1.1 midplane installed.
19
PS-M4110 Configuration
The PS-M4110 provides a single active 10Gb Ethernet port that can be connected to one of two redundant fabrics (A or B) in the M1000e chassis. The preceding configuration diagrams depict the internal links between the fabric interface module (FIOM) and the fabric I/O modules. When a customer configures the PS-M4110 for Fabric B, the two FIOM 10G ports (one active and one passive) establish links with one Fabric B I/O module: either both ports link to B1, or both ports link to B2. (In the example, both the active and passive ports establish links to B1, shown in red in the diagram.) The port on the active FIOM is active and the other port is passive, but the user sees both ports as connected in the Fabric B1 switch configuration. If Fabric B1 fails, both the active and passive FIOM ports automatically relink to Fabric B2.
When configuring the PS-M4110 for Fabric B, the user cannot specify whether to use B1 or B2; the array automatically chooses one. Therefore, when multiple PS-M4110 arrays inside a chassis are configured for Fabric B, B1 and B2 must be stacked, or the arrays may not be able to talk to one another.
20
M1000e I/O Module Placement Options
Diagram: single-fabric versus split-fabric placement of the blade server Mezz B and Mezz C modules.
Put the same kind of module in the same fabric: for example, green modules in the B fabric and yellow modules in the C fabric. Do not put green and yellow modules in the B fabric and green and yellow modules in the C fabric; that is considered a split fabric.
21
10G IOM Fabric: 10G K Supports the PS-M4110
Supported 10G K fabrics:
- KR pass-through with top-of-rack (ToR) 8024F or ToR NX-5020
- Force10 MXL (Navasota)
- M8024-K 10GbE (Lavaca)
- Brocade M8428-K (Brazos)
Other M-Series IOM fabrics are not supported for the PS-M4110: 10G XAUI, 1GE, FC, and InfiniBand.
The PS-M4110 only operates with 10GbE I/O modules: a 10GbE K-based I/O module (it must be K for the PS-M4110) or a 10GbE pass-through I/O module. All PS-M4110 connections are 10GbE (see the compatibility sketch below).
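A compatibility lookup makes the support matrix explicit. A minimal sketch using only the IOM families named on this slide; the labels are illustrative strings, not official model identifiers.

```python
# IOM options from the slide: which fabric types the PS-M4110 works with.
SUPPORTED_IOMS = {
    "KR pass-through",          # with ToR 8024F or ToR NX-5020
    "Force10 MXL (Navasota)",
    "M8024-K (Lavaca)",
    "Brocade M8428-K (Brazos)",
}
UNSUPPORTED_FABRICS = {"10G XAUI", "1GE", "FC", "InfiniBand"}

def iom_supports_m4110(iom):
    """Return True only for the 10GbE K-based or pass-through IOMs above."""
    if iom in UNSUPPORTED_FABRICS:
        return False
    return iom in SUPPORTED_IOMS

print(iom_supports_m4110("M8024-K (Lavaca)"))  # True: 10GbE K-based switch
print(iom_supports_m4110("10G XAUI"))          # False: not K-based
```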
22
Switch Configurations
Stack the switches together, or create a LAG between them. When using a PS-M4110 inside an M1000e enclosure, the fabric's I/O modules must be interconnected (stacked or LAGged together). For example, if Fabric B is configured, the B1 and B2 IOMs must be stacked or LAGged together. The redundant fabric I/O modules must be connected using interswitch links: stack interfaces or link aggregation groups (LAGs). The links must have sufficient bandwidth to handle the iSCSI traffic (see the sketch below).
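That interconnection requirement is easy to lint in a configuration script. A minimal sketch encoding only the rule above (both IOMs present, stacked or LAGged); the function name and the "stack"/"lag" labels are illustrative.

```python
def check_fabric_interconnect(fabric, ioms_present, interconnect):
    """Check the rule above: with a PS-M4110 on a fabric, that fabric's two
    redundant IOMs must be present and stacked or LAGged together.
    """
    problems = []
    expected = {fabric + "1", fabric + "2"}
    missing = expected - set(ioms_present)
    if missing:
        problems.append("missing IOM(s) for fabric %s: %s"
                        % (fabric, sorted(missing)))
    if interconnect not in ("stack", "lag"):
        problems.append("IOMs %s must be stacked or LAGged together"
                        % sorted(expected))
    return problems

print(check_fabric_interconnect("B", {"B1", "B2"}, "stack"))  # [] -- valid
print(check_fabric_interconnect("B", {"B1", "B2"}, None))     # flags the missing interswitch link
```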
23
Multiple M1000e Chassis Switch IOMs
Diagram: stacked IOMs across multiple M1000e chassis, with a LAG between the stacks.
For multiple chassis, stacking should be done in two rings: a left-side ring and a right-side ring. LAGs should connect the rings so that each port on the storage devices can communicate with any other port on the storage devices.
DCB support requires:
- A DCB-capable external switch (B8000)
- IOM: Navasota only
- An Intel or Brocade CNA mezzanine card
24
Module Summary
Now that you have completed this module, you should be able to:
- Describe the PS-M4110 solution
- Describe the features of the PS-M4110
- Install the PS-M4110 in the M1000e chassis
- Operate the drawer of the PS-M4110
- Describe the airflow features of the M1000e
- Identify whether the M1000e chassis has a version 1.1 midplane
- Describe how the PS-M4110 connects to the servers via the fabric modules
25
Questions? (Nashua site: do a lab walk-through.)