VMAX 101 with vSphere Recommended Practices


1 VMAX 101 with vSphere Recommended Practices
David Robertson vSpecialist

2 Agenda
History of Symmetrix
VMAX Hardware/Software Overview
What's new with Enginuity 5875 (Danube)
Recommended Practices for Using VMAX with VMware
Q&A

3 Evolution of Symmetrix
Continuous Improvement in Efficiency and Effectiveness
Centralized (1985-1995): single servers, single applications, Direct Attached Storage
Distributed (1995-2005): multiple servers, multiple applications, Storage Area Networks
Virtualized (2005-2010): virtualized servers, consolidated applications, Virtual Storage

The Symmetrix VMAX delivers a platform that is purpose-built to deliver infrastructure services within the data center. VMAX is the latest version of Symmetrix. Thanks to Symmetrix's flexible Mosaic architecture, EMC has been able to adapt Symmetrix over the years to address the changing needs of its customers. This includes functional software for remote replication, local replication, automated optimization, high-speed data transfer, data sharing, Virtual Provisioning, Thin Provisioning, and federation. It includes changing from Motorola controller-class processors to PowerPC to multi-core Intel, with increased performance each time and no loss of functionality. And it includes changing from a bus architecture to a Direct Matrix to a Virtual Matrix for increased performance and reliability.

Milestones: Symmetrix 4400 (world's first integrated cached disk array, ICDA); Symmetrix 5500 (world's first terabyte ICDA); Symmetrix 3000/5000 ("Open Symmetrix," which managed data from all major server platforms and created the new "enterprise storage" market category); Symmetrix 8000; Symmetrix DMX-1/-2 (Direct Matrix architecture revolutionizes high-end storage); Symmetrix DMX-3 (world's first incrementally scalable 1 PB disk array); Symmetrix DMX-4 with Flash drives (world's most cost-effective, high-performing, secure, energy-efficient high-end array); and Symmetrix VMAX, purpose-built to deliver infrastructure services within the data center. Symmetrix is the enabler of the evolving high-end infrastructure.

4 Symmetrix Evolution: 20 Years of Storage Leadership ('90-'10)
[Timeline graphic spanning 1990 through 2010: Symmetrix 4200/4400/4800, Symmetrix 2 through 8 (including the 3000/5000 and 8000 series models), Symmetrix DMX/DMX-2, DMX-3, DMX-4, and the VMAX Series, annotated with feature milestones such as SRDF, TimeFinder, PowerPath, Fibre Channel, Hyper Volume Extension, Virtual Provisioning, Flash drives, 1 TB SATA drives, Virtual LUN, Virtual Matrix, Autoprovisioning Groups, FAST, FAST VP, and VAAI.]
The Symmetrix represents 20 years of development and refinement. As the chart shows, each generation has added performance, connectivity, robust advanced features, and usability enhancements. Copyright © 2009 EMC Corporation. All Rights Reserved.

5 Symmetrix VMAX Series with Enginuity
Supports from 96 to 2,400 drives
Up to 240 drives per Storage Bay
Up to 10 Storage Bays
Connectivity: Fibre Channel, FICON, iSCSI, GigE
More host connection ports:
128 Fibre Channel host/SAN
32 Fibre Channel remote replication
64 iSCSI
64 FICON host
32 GigE remote replication

6 EMC Symmetrix: The World’s Most Trusted Storage Platforms
EMC Symmetrix Family, February 2010

Symmetrix VMAX: Virtual Matrix Architecture
96 to 2,400 drives for up to 2 PB of usable capacity
One to eight VMAX Engines (16 directors)
Fibre Channel, iSCSI, Gigabit Ethernet, FICON connectivity
Up to 1 TB (512 GB usable) global mirrored memory
8 Gb/s Fibre Channel, FICON, and Fibre Channel SRDF

Symmetrix DMX: Direct Matrix Architecture
32 to 2,400 drives
Two to 12 front-end/back-end directors
FICON, ESCON, Fibre Channel, iSCSI, Gigabit Ethernet connectivity
Up to 512 GB (256 GB usable) global memory

The Symmetrix DMX series includes the Symmetrix DMX and Symmetrix DMX-4 models and meets a wide range of high-end requirements for scalability, performance, and cost. The incremental scalability of the Symmetrix DMX-4 allows you to meet growth requirements by adding Drive Adapters, drive channels, and drives nondisruptively to the existing frame, enabling true pay-as-you-grow economics for high-growth storage environments. Symmetrix DMX-4 continues to lead the market against other high-end alternatives such as Hitachi USP and IBM DS8000. It supports Flash and SATA drives with key management features such as Virtual Provisioning, Quality of Service, and Virtual LUN, all key capabilities for supporting a tiered storage strategy.

The Symmetrix VMAX SE and Symmetrix VMAX models combine the core high-end capabilities that have made Symmetrix the industry's most trusted storage platform with capabilities purpose-built for virtual data centers. These capabilities are the foundation for IT to provide high-end storage as a value-added service within the data center. With scale-out and tiering capabilities, the Symmetrix VMAX array can further reduce cost while delivering high service levels. Virtualizing storage resources in the array enables unmatched ease, speed, and automation in a high-end array. The Symmetrix VMAX system also leverages the Symmetrix ability to support fully nondisruptive operations, delivering the 24x7, "always-on" application availability. Symmetrix VMAX and DMX also offer FAST, which automatically moves volumes between storage types to optimize performance and reduce costs.

Note: Combinations may be limited or restricted based on configuration.

7 Symmetrix Architecture – “Big Picture”
Host reads: a "cache hit" if the data is already in cache; on a "cache miss" the data is staged from disk.
Host writes: all host writes go to cache; "write pendings" are asynchronously written to disk.

The architecture framework of the Symmetrix allows rapid integration of new storage technology while supporting existing configurations. There are three functional areas:
Shared global memory provides cache memory.
Front end: the Symmetrix connects to host systems through Channel Directors (front-end host adapters).
Back end: the Symmetrix controls and manages its physical disk drives through Disk Directors (back-end disk adapters).

All host I/O operations pass through cache, and intelligent cache management algorithms are designed to optimize performance. When a host performs a read and the data is in cache, this is called a cache hit and is processed at memory speeds. If the requested data is not in cache, the back-end director reads the data from physical disk and stages it into cache, and the front-end director then takes the data from cache and passes it on to the host. All writes go to cache, and the back-end directors asynchronously write the "write pending" data to its persistent location on disk.

This fundamental architecture has remained unchanged since the first-generation Symmetrix. What differentiates the generations and models is the number, type, and speed of the various processors, and the technology used to interconnect the front end and back end with cache. Copyright © 2009 EMC Corporation. All Rights Reserved.
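The read/write path described above can be sketched in a few lines of Python. This is a minimal illustration of the cache-centric design, with invented class and method names (not any EMC API): reads check cache first and stage on a miss, while every write lands in cache as a "write pending" and is destaged asynchronously.

```python
# Minimal sketch of the cache-centric I/O path described above.
# SimpleCacheArray and its methods are illustrative names only.

class SimpleCacheArray:
    def __init__(self):
        self.cache = {}           # block -> data held in global memory
        self.disk = {}            # block -> data on the back end
        self.write_pendings = set()

    def read(self, block):
        if block in self.cache:            # "cache hit": served at memory speed
            return self.cache[block], "hit"
        data = self.disk.get(block)        # "cache miss": back-end director
        self.cache[block] = data           # stages the data into cache first
        return data, "miss"

    def write(self, block, data):
        self.cache[block] = data           # every host write lands in cache...
        self.write_pendings.add(block)     # ...and is marked "write pending"

    def destage(self):
        # Back-end directors asynchronously flush write pendings to disk.
        for block in list(self.write_pendings):
            self.disk[block] = self.cache[block]
        self.write_pendings.clear()
```

The host gets its write acknowledgement as soon as the data is in cache; persistence to disk happens later, which is why write latency is decoupled from disk speed.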

8 Symmetrix VMAX Virtual Matrix Architecture
EMC Symmetrix VMAX Series with Enginuity, November 2009
High-availability Symmetrix director pair:
Up to 128 ports per system, two hot-pluggable modules per director
Back-end connections for Flash, Fibre Channel, and SATA disks (up to 128 ports, 2,400 disks, 2.1 PB)
Four quad-core Intel processors at 2.33 GHz per core (16 cores per engine)
Global mirrored memory and redundant CPU complexes that can be accessed and shared across Symmetrix engines
Virtual Matrix interface connects and shares resources, providing massive scalability in a single system
Up to eight engines per Symmetrix system

This slide shows the hardware architecture of the Symmetrix VMAX Engine. Start with the front-end and back-end ports, which scale up to 128 ports per system; the back-end ports connect to enterprise Flash drives, Fibre Channel drives, and SATA drives. The core processing consists of four quad-core Intel processors. The global memory is shared across Symmetrix engines, and the Virtual Matrix interface connects and shares resources across the entire system.

9 VMAX Hardware Overview

10 VMAX Engine Overview

11 VMAX Engine Overview

12 VMAX Engine Overview

13 Engine I/O module layout

14 Engine Numbering and Population Order
Engines map to director pairs: Engine 1 = Dir 1/2, Engine 2 = Dir 3/4, Engine 3 = Dir 5/6, Engine 4 = Dir 7/8, Engine 5 = Dir 9/10, Engine 6 = Dir 11/12, Engine 7 = Dir 13/14, Engine 8 = Dir 15/16.
Engines are not populated in the system bay in numeric order; the population order shown is 8, 6, 4, 2, 1, 3, 5, 7 across engine positions 1 through 8 (that is, Engine 5 is populated first and Engine 1 last).

15 VMAX Storage Bay Configuration Standard Configuration
Up to four directly connected Storage Bays
Up to six daisy-chained Storage Bays
Up to eight Disk Director pairs (octants)
[Diagram: engines 1-8 (director pairs 1/2 through 15/16) connecting to direct-connect and daisy-chained Storage Bays, organized into octants.]

16 VMAX Drive and Protection Types

17 Supported Drive Types

18 Data Protection Options
Characteristics, protection, performance, and cost:
RAID 1: writes go to two separate physical drives; reads come from a single drive (DMSP).
RAID 5: parity-based protection with striped data and parity; 3+1 and 7+1 configurations.
RAID 6: two parity drives in a 6+2 configuration; data availability is the primary goal and performance a secondary consideration; new with Enginuity 5772.
Unprotected: not recommended.

RAID 5 is based on the industry-standard algorithm and can be configured with three data and one parity, or seven data and one parity. While the latter provides more capacity per dollar, there is a greater performance impact in degraded mode, where a drive has failed and all surviving drives must be read in order to rebuild the missing data. RAID 6 was new with 5772 and is focused on availability: with the new larger-capacity disk drives, rebuild times may take multiple days, increasing the exposure to a second disk failure. It works nicely with tiered storage and with data that is written once and read occasionally.

Random read performance is similar across all protection types, assuming you compare the same number of drives. The major difference is write performance. With mirrored devices, every host write results in two writes on the back end. With RAID 5, each host write results in two reads and two writes. With RAID 6, each host write results in three reads and three writes. Other data protection schemes include remote replication using SRDF. Understanding Symmetrix Configuration
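The back-end write penalties above translate directly into a sizing rule of thumb. The sketch below (an illustrative helper, not an EMC sizing tool) computes rough back-end IOPS for a random workload from the per-RAID-type costs stated in the slide: RAID 1 costs two back-end writes per host write, RAID 5 costs two reads plus two writes, and RAID 6 costs three reads plus three writes.

```python
# Back-end I/O cost per host write for each protection type, as described
# above. Illustrative only; ignores cache hits and sequential optimizations.

RAID_PENALTY = {
    "RAID1": {"reads": 0, "writes": 2},
    "RAID5": {"reads": 2, "writes": 2},
    "RAID6": {"reads": 3, "writes": 3},
}

def backend_iops(host_reads, host_writes, raid):
    """Rough back-end IOPS generated by a random host workload."""
    p = RAID_PENALTY[raid]
    return host_reads + host_writes * (p["reads"] + p["writes"])

# A 1,000-read / 500-write host workload costs 2,000 back-end IOPS on
# RAID 1, 3,000 on RAID 5, and 4,000 on RAID 6.
```

This is why write-heavy workloads favor mirroring while read-mostly, availability-critical data tolerates RAID 6 well.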

19 Choose RAID Protection That Makes Sense for Your Workload
[Chart: performance requirement (high to highest) plotted against availability (high to highest), positioning MIRRORED, RAID 5, and RAID 6; color indicates performance overhead.]

20 RAID types and Recommendations

21 VMAX Service Processor

22 Symmetrix Service Processor
EMC personnel interface to the Symmetrix
Rack-mounted server with KVM and UPS
Windows XP operating system
Runs the SymmWin application, used to:
Configure the Symmetrix
Run diagnostics, procedures, and other maintenance utilities
Launch Inlines commands
The service processor also runs Solutions Enabler SYMCLI and the Symmetrix Management Console server
Secure Service Credentials ensure only authorized access
EMC Remote allows Support Engineers to access the service processor remotely, over a secure IP network connection or a modem

The Service Processor is a 1U rack-mounted server with a keyboard/video/mouse (KVM) unit and a UPS. The UPS allows call-home even during a power-down due to loss of AC power. Copyright © 2009 EMC Corporation. All Rights Reserved.

23 SymmWin
Graphical tool for configuring and monitoring a Symmetrix system
Runs locally on the service processor; may also run on a stand-alone PC
Provides an interface to Inlines
Automatically runs periodic checks and performs "call home"
Can be accessed remotely using EMCRemote

SymmWin is an EMC-written graphical application for managing a Symmetrix. Capabilities include building and modifying system configuration files (IMPL.bin), issuing Inlines commands, diagnostic scripts, and utility scripts, and monitoring performance statistics. It automatically polls for errors and events, and certain errors cause the service processor to "call home." SymmWin runs locally on a Symmetrix Service Processor or on a standalone PC: running on the service processor allows communication with an operational Symmetrix, while running on a standalone system allows you to build a new configuration or view and modify an archived configuration file.

24 Enginuity Release Numbering
An Enginuity release number encodes the family number (which determines supported hardware), the major release level, the emulation code level (build #), the service processor code (minor release level), and the revision number of the QA test code.
Family number and supported hardware:
8 (Symm8): VMAX
7 (Symm7): DMX-3, DMX-4
6 (Symm6): DMX, DMX-2

25 VMAX Device Configuration

26 Symmetrix Device: Logical abstraction of a disk drive
EMC terms often used are hyper-volume, slice, split, device, or volume; the industry term is LUN (Logical Unit)
Approximately 64,000 devices per system
Each device is assigned a unique hex identifier
Emulates either an FBA or a CKD device
Maximum size is approximately 240 GB; meta devices can be created to go beyond this limit
Mapped to persistent locations on physical disk
Protection from media failure: RAID 1 mirroring, RAID 5 and RAID 6 parity-based protection
Replication: local and remote
Symmetrix Device DEV 1bc4

The object accessed by a host system is a Symmetrix Logical Volume, a logical abstraction of a disk drive: the host views a Symmetrix Logical Volume as a disk drive. One of the most confusing aspects of learning the Symmetrix is the different terms used to describe the same thing. The industry typically uses the term LUN, but we often use the terms device, volume, or hyper-volume, and sometimes split or slice. For this training we will use the term Symmetrix Logical Volume (SLV) when referring to the object presented to the host.

For mainframe environments, an SLV emulates a Count Key Data (CKD) device, which uses a variable block size. For open systems, an SLV emulates a Fixed Block Architecture (FBA) device, with each block a fixed size of 512 bytes; the one exception is FBA for IBM iSeries, which emulates a fixed block size of 520 bytes. Regardless of the emulation type, data is stored internally in FBA format with 520-byte blocks (528 for iSeries). The extra 8 bytes are used for CRC and other data that validates data integrity through the Symmetrix.

The size of an SLV is configurable, with a maximum of approximately 240 GB. For host requirements that need larger device sizes, multiple Symmetrix devices can be aggregated and presented to the host as a single device, referred to as a meta device.

The maximum number of Symmetrix devices per system is 64,000, although the number supported on a given system depends on the number of back-end directors and physical disk drives. When an SLV is created it is assigned a unique hex identifier; when it is presented to a host it is assigned a channel address. For data availability, Symmetrix devices can be protected using various RAID schemes and remotely replicated for even greater protection. They can also be locally replicated, with copies used for backup, decision support, and other applications. The protection scheme and any local or remote replication are transparent to the attached host. Copyright © 2009 EMC Corporation. All Rights Reserved.

27 Symmetrix Device, continued
Symmetrix devices are provisioned to hosts:
Physical connectivity
Map to a front-end director port
Assign a channel address
Mask the device to a specific HBA
Feature referred to as Auto-provisioning Groups
Symmetrix Device DEV 1bc4

Symmetrix devices are made available to a host through a process referred to as provisioning. This consists of mapping a device to a specific set of front-end director ports (typically two or more for availability and performance) and masking the device to the specific hosts that share those ports. With the Symmetrix VMAX, the feature used to provision storage is called Auto-provisioning Groups, and the approach is considerably different from previous-generation Symmetrix systems. Auto-provisioning Groups make provisioning operations faster and easier by allowing the storage administrator to logically group HBAs (initiators), front-end ports, and devices, and to build masking views that associate the devices with the ports and initiators. When a masking view is created, the necessary mapping and masking operations are performed automatically. Once a masking view exists, any change to the grouping of initiators, ports, or storage devices is propagated throughout the view, and the mapping and masking are automatically updated as required. Copyright © 2009 EMC Corporation. All Rights Reserved.
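The grouping idea behind Auto-provisioning Groups can be modeled as a simple set product. The sketch below is illustrative only (the real feature is driven through SYMCLI or Symmetrix Management Console, not this API, and the HBA, port, and device names are invented): a masking view is the set of every (initiator, port, device) combination across the three groups, so adding a member to any group automatically extends the view.

```python
# Illustrative model of a masking view built from an initiator group,
# a port group, and a storage group. Names are hypothetical examples.
from itertools import product

def masking_view(initiator_group, port_group, storage_group):
    """Every (HBA, port, device) combination is provisioned together."""
    return set(product(initiator_group, port_group, storage_group))

igroup = {"hba1", "hba2"}                 # initiator group
pgroup = {"FA-7E:0", "FA-8E:0"}           # port group
sgroup = {"dev_1bc4", "dev_1bc5"}         # storage group

view = masking_view(igroup, pgroup, sgroup)  # 2 x 2 x 2 = 8 maskings
```

Adding a device to `sgroup` and rebuilding the view yields 12 combinations; in the real feature that propagation happens automatically when the group changes.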

28 Hypervolumes

29 Physical Disk and Hyper Volumes
Example: a 146 GB physical disk split into hyper volumes of 10 GB, 8 GB, 9 GB, 36 GB, 6 GB, and 11 GB.

The software used to "split" physical disks into volumes is called Hyper Volume Extension. Symmetrix physical disks are split into logical hyper volumes; hyper volumes (disk slices) are then defined as Symmetrix Logical Volumes (SLVs). SLVs are internally labeled with hexadecimal identifiers (0000-FFFF). The maximum number of host-addressable logical volumes per Symmetrix configuration is 64,000.

While "hyper volume" and "split" refer to the same thing (a portion of a Symmetrix physical disk), a Symmetrix Logical Volume is a slightly different concept: an abstraction of a disk drive that is presented to a host via a Symmetrix channel director port. As far as the host is concerned, the SLV is a physical drive. SLVs are defined in the Symmetrix configuration (BIN file). From the Symmetrix perspective, physical disk drives are partitioned into hyper volumes; a hyper volume can be used as an unprotected Symmetrix logical volume, a mirror of another hyper volume, a Business Continuance Volume (BCV), a member of a RAID 5 or RAID 6 volume, a remote mirror using SRDF, and so on.

A Volume Table of Contents (VTOC) on disk maps logical volumes to physical disks; these data structures are created during initial installation. The maximum number of hyper volumes per physical disk varies with the software version (currently 255), and hyper volumes can be of variable size.

30 Metavolumes A metavolume is two or more Symmetrix system hypervolumes presented to the host as a single addressable device.
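The carving and aggregation described in the last two slides can be sketched as two small functions. This is a conceptual illustration only (the function and field names are invented, not EMC tooling): a physical disk is split into variable-sized hypers, and two or more hypers can then be concatenated into a metavolume presented to the host as one addressable device.

```python
# Illustrative sketch of hyper volumes and metavolumes; not EMC code.

def split_into_hypers(disk_gb, hyper_sizes_gb):
    """Carve a physical disk into hypers; the sizes must fit on the disk."""
    if sum(hyper_sizes_gb) > disk_gb:
        raise ValueError("hypers exceed physical disk capacity")
    return [{"id": i, "size_gb": s} for i, s in enumerate(hyper_sizes_gb)]

def make_metavolume(hypers):
    """Present two or more hypers to the host as a single device."""
    if len(hypers) < 2:
        raise ValueError("a metavolume needs at least two members")
    return {"members": [h["id"] for h in hypers],
            "size_gb": sum(h["size_gb"] for h in hypers)}

# Using the 146 GB example from the earlier slide:
hypers = split_into_hypers(146, [10, 8, 9, 36, 6, 11])
meta = make_metavolume(hypers[:3])   # 10 + 8 + 9 = 27 GB device
```

The host only ever sees the metavolume's total capacity; which hypers back it is internal to the array.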

31 Gatekeeper Devices
A gatekeeper can be any volume accessible to the host
Appears like any other volume, but is usually configured as a small device (typically <10 MB)
Should not be used by the host for normal data processing; best practice is to dedicate devices as gatekeepers
When a SYMCLI session is started, gatekeeper and database locks (semaphores) are used to avoid conflicts
Once the CDB sequence is processed, the gatekeeper is closed and the lock released, freeing the device for other processing
Solutions Enabler commands executed on the Symmetrix Service Processor use a pseudo gatekeeper device, since the Service Processor does not have direct access to any devices

When a Symmetrix array is installed, it is usually configured with a number of small devices for use as gatekeepers. By standard convention these are 6 cylinders in size, but they need to be at least as large as the minimum volume size accessible by your host. Consult your host documentation for the minimum device size accessible by your particular host to determine the minimum gatekeeper device size for your environment. Symmetrix HW & SW Architecture

32 In-Band Communication
Host-to-Symmetrix communication is performed using standard SCSI Write Buffer/Read Buffer commands
Devices designated to receive commands are called gatekeepers
Typically minimum-sized volumes, used simply to pass commands and return responses

The SCSI command architecture was originally defined for parallel SCSI; today the same command structure is also leveraged for Fibre Channel and iSCSI communications. In the SCSI protocol, the initiator sends a SCSI command to the target, which then responds. SCSI commands are sent in a Command Descriptor Block (CDB), consisting of a one-byte operation code followed by five or more bytes of command-specific parameters. Solutions Enabler communicates with a device using the standard Write Buffer (3B) and Read Buffer (3C) commands. This device is called a gatekeeper but is in reality a Logical Unit (LUN) with an address like any other device. Once a gatekeeper has been successfully obtained, SYMCLI determines whether a semaphore exists; if so, a lock is obtained on the device. Once the CDB sequence is processed, the gatekeeper is closed and the lock released, freeing the device for other processing.

33 VMAX Virtual Provisioning

34 Virtual Provisioning Device Types
Thin Device Host-addressable device Seen by the operating system as a “normal” device Used in the same way as other host-addressable devices Can be replicated both locally and remotely Physical storage need not be completely allocated at device creation Physical storage is allocated from a pool of DATA devices Remains Not Ready until bound to a pool, though can be addressed and configured by a host DATA Device An internal, non-addressable device Provides the physical storage that is used to supply disk space for a thin device Placed in a thin storage pool to which the thin device has been associated or “bound” Multiple RAID protection types RAID 1, RAID 5, RAID 6 RAID 5 support in DMX is for 3+1 only

35 Virtual Provisioning with VMAX
Virtual Provisioning enables a large volume to be presented to a host while consuming physical storage from a shared pool only as needed.
[Diagram: virtual (thin) devices with an application-perceived reported capacity, drawing allocated space from data devices TDAT 1 through TDAT 8.]

36 Virtual Provisioning Steps
Step 1: create data devices (TDATs), all of the same drive type and RAID level
Step 2: create a storage pool and add the TDATs to it
Step 3: create thin devices (TDEVs)

37 Thin Pool Concepts
Thin devices (TDEVs, including a metahead for metavolumes) are presented to hosts/storage groups. TDEV size and number are independent of TDAT size and drive partitioning.
[Diagram: TDEVs drawing extents from thin pools built from protected data devices (TDATs), for example a 7+1 pool and a 3+1 pool, with the TDATs striped across physical disks.]

38 Thin Pool Concepts with AutoProvisioning
A masking view associates an initiator group (HBAs), a port group (FA ports), and a storage group (thin devices). TDEV size and number are independent of TDAT size and drive partitioning. TimeFinder, SRDF, and Open Replicator relationships are configured at the TDEV level.
[Diagram: an Auto-provisioning Groups masking view over TDEVs (including a metahead), drawing extents from thin pools built from protected data devices (TDATs), for example a 7+1 pool and a 3+1 pool, striped across physical disks.]

39 Virtual Provisioning Thin Pools
Pool type specific to Virtual Provisioning Thin pools can be created at the same time as DATA devices They can also be created with no associated DATA devices There is no default thin pool The first DATA device added to the thin pool defines its protection type Thin pools can contain devices of only a single protection type If first device added is RAID 1, all additional devices must be RAID 1 Thin Pool With DATA Devices

40 Host Writes to Thin devices
The initial bind of a thin device to a pool causes 1 thin device extent to be allocated per thin device When a write is performed to a logical block address that does not exist on a previously allocated extent, a new thin device extent will be allocated from the pool A round-robin mechanism is used to balance the allocation of DATA device extents across all available DATA devices in the pool A 4 member thin meta would cause 4 extents to be allocated Host Write I/Os to Thin Devices Thin Pool Thin Devices
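The round-robin balancing described above can be shown with a tiny pool model. This is an illustrative sketch of the allocation pattern, not Enginuity internals (the class and attribute names are invented): each newly needed extent is taken from the next DATA device in the pool in turn, so allocations stay evenly spread.

```python
# Illustrative round-robin extent allocation across DATA devices in a pool.

class ThinPool:
    def __init__(self, num_data_devices):
        self.allocations = [0] * num_data_devices  # extents per DATA device
        self._next = 0                             # next device in rotation

    def allocate_extent(self):
        """Take one extent from the next DATA device, round-robin."""
        target = self._next
        self.allocations[target] += 1
        self._next = (self._next + 1) % len(self.allocations)
        return target
```

Eight allocations against a four-device pool land two extents on each device, which is why a four-member thin meta triggers four extents spread across the pool at bind time.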

41 Extent Allocation on Host Writes to Logical Block Addresses
Logical Block Addresses on a Thin Device The host sees the thin device the same as a regular device The disk still contains a range of logical block addresses The LBA range will be equal to the number of 512 byte blocks in the entire device The extent allocation mechanism does not change this LBA 0 LBA N-1 N = Number of blocks in the thin device reported capacity

42 Extent Allocation on Host Writes to Logical Block Addresses (Continued)
Logical Block Addresses in a 12-Track Thin Device Extent
Thin device extents contain 768 KB, or 12 Symmetrix tracks; each extent contains 1,536 x 512-byte blocks (LBA 0 through LBA 1535 within the extent). Extent allocation is only triggered if the host writes to an LBA that is not part of a previously allocated extent.

43 Extent Allocation on Host Writes to Logical Block Addresses (Continued)
Example of extent allocation:
The initial bind causes one thin extent to be allocated.
The host sends an 8 KB write to LBA 872, which is part of a previously allocated extent: the write is accepted and an acknowledgement is returned, with no allocation performed.
The host sends an 8 KB write to LBA 72588: extent allocation is performed, the write is accepted, and an acknowledgement is returned.
The host sends an 8 KB write to an LBA that straddles an extent boundary: no extent allocation is performed for the first 2 KB, extent allocation is performed for the second 6 KB, the write is accepted, and an acknowledgement is returned.
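The arithmetic behind these examples follows from the stated geometry: one extent is 12 tracks, 768 KB, or 1,536 x 512-byte blocks, so an LBA's extent index is simply `lba // 1536`. The sketch below (illustrative names, not Enginuity code) reproduces the first two cases: an 8 KB write (16 blocks) at LBA 872 stays inside extent 0 allocated at bind time, while one at LBA 72588 lands in extent 47 and triggers an allocation.

```python
# LBA-to-extent arithmetic for thin device allocation, per the stated
# geometry: 1 extent = 12 tracks = 768 KB = 1,536 x 512-byte blocks.

BLOCKS_PER_EXTENT = 1536  # 768 KB / 512 bytes

def extents_touched(lba, num_blocks):
    """Extent indexes an I/O of num_blocks starting at lba falls into."""
    first = lba // BLOCKS_PER_EXTENT
    last = (lba + num_blocks - 1) // BLOCKS_PER_EXTENT
    return list(range(first, last + 1))

class ThinDevice:
    def __init__(self):
        self.allocated = {0}  # the initial bind allocates the first extent

    def write(self, lba, num_blocks):
        """Allocate any touched extents not already allocated."""
        new = [e for e in extents_touched(lba, num_blocks)
               if e not in self.allocated]
        self.allocated.update(new)
        return new  # extents allocated on behalf of this write
```

A write that straddles an extent boundary returns two extent indexes from `extents_touched`, and only the unallocated portion triggers a new allocation, matching the third example.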

44 VMAX Management Overview

45 EMC Solutions Enabler Introduction
Symmetrix Command Line Interface (SYMCLI)
Available on nearly all host environments; also installed on the Service Processor
Provides a host with a comprehensive command set for managing a Symmetrix storage environment
Invoked from the host OS command line, or from scripts that provide further integration with the OS and applications
Separately licensed components
Security and access controls: monitor-only, host-based, and user-based controls
Detailed configuration information and status
Online configuration changes
Performance control

SYMCLI can be used to perform ad hoc operations or incorporated into user-developed scripts to integrate Symmetrix management and control with the application and host environment. Symmetrix HW & SW Architecture

46 EMC Symmetrix Management Console
EMC Symmetrix Management Console
Simplifies storage management for virtual data centers; quickly provision resources on demand:
Fully Automated Storage Tiering (FAST)
Auto-provisioning Groups
Virtual Provisioning
Enhanced Virtual LUN technology
Ease of use: tasks view, wizards, templates
Management integration: Symmetrix Management Console and SMI-S on the Service Processor; complements EMC Symmetrix Manager and SYMCLI
Lightweight software requiring minimal host resources, with a web-based GUI
SMC/SPA is also available as a vApp virtual appliance on Powerlink

Symmetrix Management Console is used for device management on both the Symmetrix VMAX and Symmetrix DMX products. Several key features simplify storage management in virtual data centers and cluster environments; as data centers continue to embrace virtualization, management tools are required to tier, consolidate, and scale physical resources. Symmetrix Management Console manages the following features: Auto-provisioning Groups (map and mask initiator groups, storage ports, and storage groups), Virtual Provisioning (also known as thin provisioning), enhanced Virtual LUN technology (data mobility within the array and movement between tiers), and FAST. It offers several ease-of-use functions, such as wizards that streamline Auto-provisioning, SRDF replication configuration, and enhanced Virtual LUN operations, plus the ability to create storage templates for reuse in provisioning storage. Symmetrix Management Console is loaded on the Service Processor, eliminating the need for another server host. It complements both Ionix ControlCenter and SYMCLI, and is a lightweight software package with a web-based GUI.

47 EMC Symmetrix Performance Analyzer
EMC Symmetrix Performance Analyzer
Simplify storage management: intuitive, automated trending of Key Performance Indicators (KPIs); improve long-term planning for upgrades and consolidation
Monitor Symmetrix operations: real-time data collection at five-second intervals
Diagnostics: easily drill down to device-level views of performance; choose the time period to track (24 hours, one week, one month, six months, year to date, or custom)
FAST monitoring: monitor storage-type usage and performance, storage groups, and volume-level metrics

Symmetrix Performance Analyzer is an add-on package to Symmetrix Management Console that helps storage administrators simplify storage management with intuitive, automated trending of KPIs and improve long-term planning for upgrades and consolidation. It is launched from Symmetrix Management Console and can forecast and trend performance based on historical data; tracking performance over extended periods of time, as long as one year, helps the user understand growth trends and manage consolidation and growth. Symmetrix Performance Analyzer 2.0 adds real-time data collection to view events at the device level, and diagnostics mode allows a storage administrator to drill down to device-level views. It also supports FAST monitoring and the ability to view storage-type and storage group information.

48 VMAX Provisioning Overview

49 Symmetrix VMAX: Easy, Quick, and Automated Storage Provisioning
Auto-provisioning Groups simplifies initial configurations and all future changes and additions Note to Presenter: View in Slide Show mode for animation. Auto-provisioning Groups simplifies the initial setup of storage and greatly improves the ability to add and modify the storage configuration. In the previous example, if the Storage Administrator needed to add a new device, he or she would have to repeat the mapping process, requiring another 160 clicks. With Auto-provisioning Groups, the Storage Administrator simply clicks the “Add” button, selects the new device, and clicks “OK.” The software automatically performs the new mapping in the background. 1. Create Initiator Group 2. Add/Remove Initiator 3. Create Port Group 4. Add/Remove Port 5. Create Storage Group 6. Add/Remove Device

50
Symmetrix VMAX: Easy, Quick, and Automated Storage Provisioning for Virtual Servers with a Single Action Traditional Mapping and Masking versus Auto-provisioning Groups: a single setup to build and associate groups, about 15 clicks to complete, and simplified initial configurations and all future changes and additions. Note to Presenter: View in Slide Show mode for animation. Auto-provisioning Groups simplifies and automates storage provisioning. On the left is a traditional storage system, such as the Hitachi USP V, where storage mapping and masking is a manual process. When provisioning storage in a virtual environment, all VMware ESX servers in the cluster must be mapped to the storage. This is required to leverage mobility features such as VMware VMotion, where a virtual server is dynamically moved between physical servers. The server that the virtual image is being moved to must be mapped to the storage, otherwise the VM cannot access the data. With five ESX servers, with two HBAs each, being mapped to four storage ports, the result is 40 individual masking operations to complete the process. If four clicks are required to complete each operation, the result is 160 clicks. If the example were for twice as many ESX servers, the number of clicks would double to 320. Auto-provisioning Groups significantly simplifies the process by allowing the Storage Administrator to create host groups, port groups, and device groups, and automates the process down to about 15 clicks. And if the number of ESX servers doubles, the same number of steps would still be required.
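The click-count arithmetic on this slide can be sketched as a small model (the four-clicks-per-operation figure is the slide's own assumption, not a product constant):

```python
def traditional_masking_ops(esx_servers, hbas_per_server, storage_ports,
                            clicks_per_op=4):
    """Traditional masking: every HBA is individually masked to every port.

    Returns (operations, clicks). clicks_per_op is the slide's assumption.
    """
    ops = esx_servers * hbas_per_server * storage_ports
    return ops, ops * clicks_per_op

# Five ESX servers, two HBAs each, four storage ports
ops, clicks = traditional_masking_ops(5, 2, 4)
print(ops, clicks)                              # 40 operations, 160 clicks

# Doubling the server count doubles the work...
print(traditional_masking_ops(10, 2, 4)[1])     # 320 clicks

# ...while Auto-provisioning Groups stay at roughly the same ~15 clicks,
# since only the Initiator Group membership changes.
AUTO_PROVISIONING_CLICKS = 15
```

The point of the comparison: traditional masking scales multiplicatively with servers, HBAs, and ports, while group-based provisioning scales with the number of group-membership changes.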

51 EMC VMAX Storage Scale-out architecture for unmatched performance and hyper-consolidation New “ease-of-use” capabilities to provision thousands of virtual and/or physical servers in minutes Support for Enterprise Flash, Fibre Channel, and SATA FAST for tiering storage in VMware environments Enhanced Virtual LUN Technology for nondisruptive mobility Robust enterprise class BC and DR solutions with TimeFinder and SRDF

52 What’s new with Enginuity 5875 (Danube)


54 VMware vStorage APIs for Array Integration
Support for VAAI Block Zero 10-times less I/O for VMware vStorage Virtual Machine File System (VMFS) formatting/reallocation Hardware-Assisted Locking Block-level locking allows up to 10-times more virtual machines per data store Full Copy Offload replication to array for 10-times faster virtual machine deployments, clones, snapshots, and VMware Storage vMotion Enginuity 5875 adds integration with VMware vSphere 4.1 with support for VMware vStorage APIs for Array Integration (VAAI), making Symmetrix VMAX the only high-end platform available that’s fully integrated with VMware. VAAI includes a set of key APIs that intelligently offload storage functions to Symmetrix VMAX to increase scalability by enabling: 10-times less I/O for formatting commands and reallocating operations (with Block Zero) 10-times more virtual machines per storage array and no effect on adjacent hosts (with Hardware-Assisted Locking) 10-times faster replication of virtual machines to create snapshots and clones that can be used to speed deployments (with Full Copy).
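The Block Zero primitive can be illustrated with a toy count of host-side commands. The 1 MB transfer size and 1 GB WRITE SAME extent below are illustrative assumptions, not VMAX or vSphere constants; the point is that offloading zeroing collapses many host writes into a few descriptor commands:

```python
def host_ios_to_zero(disk_bytes, transfer_bytes=1 << 20, write_same=False,
                     write_same_extent=1 << 30):
    """Count host-side commands needed to zero a region.

    Without offload the host sends a buffer of zeros per transfer; with
    VAAI Block Zero (SCSI WRITE SAME) one command describes a whole
    extent and the array writes the zeros internally.  Sizes here are
    illustrative assumptions only.
    """
    unit = write_same_extent if write_same else transfer_bytes
    return -(-disk_bytes // unit)   # ceiling division

disk = 10 * (1 << 30)  # a 10 GiB eagerzeroedthick virtual disk
print(host_ios_to_zero(disk))                    # 10240 host writes of zeros
print(host_ios_to_zero(disk, write_same=True))   # 10 WRITE SAME commands
```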


56 EMC Virtual Storage Integrator
Virtual Storage Integrator is a key capability that addresses the management control concerns of storage administrators who want to allow virtual server administrators to access storage resources for provisioning capacity. It includes support for VMware and Microsoft Hyper-V and allows storage administrators to easily leverage the advanced efficiency, scalability, and security capabilities of Symmetrix VMAX for virtual servers. Virtual Storage Integrator’s key benefit is that the storage administrator continues to maintain control of the infrastructure and can effectively manage virtual server storage in combination with other applications sharing the same storage array. Some other key advantages include: Virtual storage pools can now be carved out of the array by the storage administrator and provided to the virtual server administrator to use in a self-service fashion. Virtual Storage Integrator is unique, available today, free, and downloadable through Powerlink. Note to Presenter: Virtual Storage Integrator for Symmetrix is an important differentiator versus other competitors who rely on simple plug-ins that allow virtual machine administrators to manage storage, take the control away from the storage administrators, and create potential risks that can impact other applications. Virtual Storage Integrator simplifies virtual server management while preserving architectural control of the storage infrastructure.

57 FAST VP: Sub-LUN Optimization
EMC Symmetrix VMAX Series with Enginuity November 2009 FAST VP: Sub-LUN Optimization Flash drives optimize performance; SATA optimizes cost and capacity. Roughly 5% of data is active and ~95% is inactive, which allows most inactive data to move to SATA drives with no performance impact.

58 FAST with Virtual Pools (FAST VP)
A new level of Fully Automated Storage Tiering Automated tiered storage for virtual pools Sub-LUN data movement for virtual pools Automatically responds to changes in the production workload More efficient use of capacity Place very busy data on enterprise Flash drives Place mostly idle data on Fibre Channel and SATA drives Enginuity 5875 introduces a new level of Fully Automated Storage Tiering with Virtual Pools (FAST VP). FAST VP provides automated storage tiering for virtually provisioned devices. Building on the original version of FAST, EMC now offers sub-LUN data movement for virtual pools, which provides for dramatically increased capacity utilization. The use of customer-configurable policies, coupled with the granularity of sub-LUN data movement, allows FAST to be more responsive to changes in the production workload activity. Because the Symmetrix VMAX system is continually monitoring activity and performance levels and optimizing data placement, high-performance enterprise Flash drives become part of a lowest-cost configuration. Putting the majority of the data on Fibre Channel or low-cost SATA drives and only a small portion on the enterprise Flash drives results in an overall lower-cost solution.

59 Virtual Provisioning Levels of Granularity

60 Elements of FAST Storage Type – a shared storage resource with common technologies FAST Policy – manages data placement and movement across Storage Types to achieve service levels for one or more Storage Groups Storage Group – logical grouping of devices for common management Storage Class – combination of Storage Types and FAST Policies to meet service level objectives for Storage Groups. Example configuration from the slide: Storage Types R53_200_EFD (200 GB EFD, RAID 5 (3+1)), R57_146_FC (146 GB 15K FC, RAID 5 (7+1)), and R614_1000_SATA (1 TB SATA, RAID 6 (14+2)); FAST Policies Production and Development, each capping the percentage of a Storage Group’s capacity allowed per Storage Type (the slide shows values of 25%, 50%, and 100%); Storage Groups ProductionApp1_SG, ProductionApp2_SG, and Dev_SG. There are three main elements related to the use of FAST. These are: Storage Type — A shared resource with common technologies FAST Policy — A policy that manages data placement and movement across Symmetrix tiers to achieve service levels for one or more storage groups Storage Group — A logical grouping of devices for common management The combination of a FAST Policy with Storage Types is referred to as a Storage Class.
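The per-tier caps of a FAST policy can be sketched as follows. This is an illustrative model only (the tier names and percentages are one plausible reading of the slide's example, and the "must sum to at least 100%" rule is stated so that every extent has somewhere to reside):

```python
def tier_capacity_limits(policy, group_capacity_gb):
    """Per-tier upper bounds implied by a FAST policy.

    policy maps Storage Type name -> max percent of the Storage Group's
    capacity allowed on that tier.  Percentages must total >= 100 so the
    whole group can always be placed.  Illustrative model, not Enginuity.
    """
    if sum(policy.values()) < 100:
        raise ValueError("tier percentages must sum to >= 100")
    return {tier: group_capacity_gb * pct / 100
            for tier, pct in policy.items()}

# A hypothetical "Production" policy: 25% EFD, 50% FC, 100% SATA
production = {"R53_200_EFD": 25, "R57_146_FC": 50, "R614_1000_SATA": 100}
print(tier_capacity_limits(production, 1000))
# e.g. up to 250 GB may live on EFD, 500 GB on FC, all 1000 GB on SATA
```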

61 FAST VP Implementations
FAST VP tasks are split between microcode and the FAST controller: performance data is collected by microcode at the LUN and sub-LUN level; performance data is analyzed by the FAST controller; the controller generates a “Performance Movement Policy”; and microcode applies the policy, executing sub-LUN data movements between tiers using Virtual LUN VP Mobility. The FAST controller runs as a service on the Symmetrix VMAX Service Processor. When active, the FAST controller has four primary responsibilities: Note to Presenter: Click now in Slide Show mode for animation. 1. Collect performance data 2. Analyze performance data and capacity usage 3. Generate performance movement policy 4. Execute data movement Performance statistics are collected at 10-minute intervals and are stored in a database file on the Service Processor. At regular intervals, approximately hourly, the data collected by the FAST controller is analyzed, and determinations are made as to whether data, under FAST control, needs to be moved between Symmetrix tiers. The generated list of data movements created by FAST is known as a performance movement policy. When a performance movement policy exists, the FAST controller is responsible for executing the plan, committing the required changes to the Symmetrix back-end configuration. When created, a performance movement policy can be executed automatically without user interaction, or execution can be delayed until user approval is granted manually.
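The shape of the controller's analyze step can be sketched as a toy function. This only illustrates the decision structure (rank extents by activity, promote the busiest, demote the idle); it is not Enginuity's actual algorithm, and the extent names and capacities are hypothetical:

```python
def performance_movement_policy(extent_iops, flash_slots):
    """Toy version of the FAST controller's analyze step.

    Given per-extent I/O rates, propose promoting the busiest extents
    (up to the flash tier's free slots) and demoting fully idle ones to
    SATA.  The real controller samples every 10 minutes and analyzes
    roughly hourly; this only shows the shape of the decision.
    """
    ranked = sorted(extent_iops, key=extent_iops.get, reverse=True)
    promote = ranked[:flash_slots]
    demote = [e for e in ranked[flash_slots:] if extent_iops[e] == 0]
    return {"promote_to_efd": promote, "demote_to_sata": demote}

stats = {"ext_a": 900, "ext_b": 40, "ext_c": 0, "ext_d": 1200, "ext_e": 0}
plan = performance_movement_policy(stats, flash_slots=2)
print(plan)
# the two busiest extents go to EFD; the two idle ones go to SATA
```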

62 Federated Live Migration
Complete technology refreshes in days versus months Fully nondisruptive data migration for faster deployment of new technologies Leverages intelligence of storage array and host multi-pathing Supports Zero Space Reclaim for savings and efficiencies No remediation when migrating pre-qualified configurations Manage both array and host redirection from EMC Symmetrix Management Console or CLI Note to Presenter: View in Slide Show mode for animation. First let’s look at the new Federated Live Migration capability, an innovative breakthrough that solves one of the biggest problems faced by organizations with large storage environments—the necessity of host and application downtime to enable a migration to a new storage array. Federated Live Migration eliminates the complexities of host-based or SAN-based migration strategies by leveraging the intelligence of the arrays themselves, affording the first and only truly nondisruptive migration solution—one that does not require “insertion” or addition of new hardware or software to the organization’s SAN or hosts. Federated Live Migration ties together the array-based migration of the data, provided by EMC Open Replicator for Symmetrix, with the host-level application redirection, provided by EMC PowerPath. It does this by using a set of coordinated commands through EMC Symmetrix Management Console or SYMCLI to initiate the migration session and coordinate the host application redirection from one central point, making the migration truly nondisruptive. Additionally, Federated Live Migration supports a number of pre-qualified stacks of arrays, PowerPath, and host operating systems that help eliminate time-consuming remediation processes. Federated Live Migration is quite flexible; it’s capable of supporting migration combinations such as thick-to-thick, thick-to-thin, and thin-to-thin, as well as consolidating multiple systems to one Symmetrix. 
Note to Presenter: Federated Live Migration requires that either EMC Open Replicator/LM or Open Replicator/DM is installed and operational since it leverages one of these products to perform the migration. Most migrations will include the removal of the old array, which is what EMC recommends. However, if the customer must redeploy the old array within the same SAN, the storage administrator should manually switch to the native/true identity of the Symmetrix VMAX device(s). This is a disruptive activity as the application must be stopped prior to removing the spoofed ID. The host must then be rebooted. After the reboot, the application is pointed at the new, native ID.

63 Zero Space Reclaim Reclaim allocated, but unused capacity when migrating to a new Symmetrix VMAX Runs “in-line” as data is migrated from an old Symmetrix DMX to VMAX Supports Federated Live Migration and EMC Open Replicator for Symmetrix migration solutions Also included with Enginuity 5875 is the ability to reclaim capacity when implementing Virtual Provisioning and converting “thick” devices to “thin” pools. Note to Presenter: Click now in Slide Show mode for animation. Zero Space Reclaim is a new feature with Enginuity 5875 on Symmetrix VMAX that enables users to reclaim valuable array capacity during a migration to Symmetrix VMAX. Zero Space Reclaim can be used when migrating from Symmetrix DMX, CLARiiON, and third-party arrays to Symmetrix VMAX. In planning for migrations, IT organizations often use the opportunity to clean up their environment, and reclaiming valuable capacity that might have been initially over-allocated is a common desire. Now with Zero Space Reclaim, organizations can perform the migration using Federated Live Migration or EMC SRDF/DM while Zero Space Reclaim identifies space that can be reclaimed for other uses. Combining the two processes saves you both time and money. When performing a migration using Federated Live Migration or EMC Open Replicator for Symmetrix (Hot Pull with Donor Update) to Symmetrix VMAX, upon arrival of the data to the Symmetrix VMAX, the data is inspected for zeros, and any all-zero blocks/tracks are marked “should-be-zero and never-been-written-by-host” while still in cache. Any data is de-staged to disk while any “zeros” are released and not de-staged. This enables you to reclaim the “zero” capacity to use for other purposes upon completion of the migration. All allocated, but unwritten space is reclaimed after a zero-detection check is performed on incoming data.
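The in-line zero detection can be sketched as a simple scan over track-sized chunks. This is a sketch of the concept only (the 64 KB track size is used for illustration; it is not how Enginuity inspects cache slots):

```python
TRACK = 64 * 1024  # track size used for illustration

def reclaimable_tracks(data: bytes, track=TRACK):
    """Scan track-sized chunks and count the all-zero ones.

    Mirrors the idea of in-line zero detection: all-zero tracks are
    marked never-written and are released rather than de-staged to disk.
    Returns (reclaimable_tracks, total_tracks).
    """
    zero_track = bytes(track)
    tracks = [data[i:i + track] for i in range(0, len(data), track)]
    return sum(1 for t in tracks if t == zero_track), len(tracks)

# zero track, written track, zero track -> two of three are reclaimable
payload = bytes(TRACK) + b"\x01" * TRACK + bytes(TRACK)
print(reclaimable_tracks(payload))   # (2, 3)
```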

64 10 Gigabit Ethernet Support
Storage infrastructure to enable and accelerate data center network convergence Hot-pluggable I/O modules to easily intermix and upgrade connectivity Supports 10 Gigabit Ethernet for iSCSI and EMC SRDF replication Also included with the release of Enginuity 5875 are new I/O modules that add support for 10 Gigabit Ethernet that can be used for iSCSI and SRDF connectivity. The new I/O modules can be intermixed with other I/O modules, such as Fibre Channel or FICON. The modules are available for new Symmetrix VMAX Engines as well as an upgrade option for existing Engines, and require Enginuity 5875.

65 Enginuity Performance Enhancements
Software optimization improves performance for large-block sequential I/O Accelerates data warehousing/business intelligence analytics Faster EMC TimeFinder/Clone and VLUN with less application impact Enabler for FAST VP sub-LUN relocations New and existing Symmetrix VMAX systems with Enginuity 5875 Enginuity 5875 allows you to get higher performance from your existing Symmetrix VMAX arrays as well as protect your Symmetrix VMAX investment by gaining performance improvements at no additional cost. Enginuity 5875 optimizes performance by reducing the number of internal operations required to move data between global memory and the back-end disks. The result is increased engine bandwidth that provides up to twice the throughput for specific workload types, including large-block sequential I/Os that are representative of data warehousing and business intelligence applications. The additional throughput obtained by the reduction of internal operations can also lead to increased performance and reduce the impact of cloning and virtual LUN operations, as well as help with FAST VP optimization. Note to Presenter: The performance increase is available for new Symmetrix VMAX systems with Enginuity 5875 and existing users when they upgrade to Enginuity 5875. Additional performance details will be made available to support the launch, and will include performance testing for specific applications and workloads. Stay tuned for more information. Up to two-times more I/O bandwidth (Gb/s) compared to Enginuity 5874.

66 VMAX and vSphere Recommended Practices

67 VMAX FA Flag Settings for vSphere
SPC-2 SCSI 3 (Optional) No effect on ESX anymore. Unique WWN (UWN) Common Serial Number (C) OS2007 (Optional) Screenshot from within VirtualCenter using the VSI4

68 Connectivity Considerations with VMAX
VMware ESX servers should have multiple physical HBAs VMware servers should be connected to multiple directors Directors 7 and 8 in single-engine configurations Connections to different directors in different engines in multiple-engine configurations Connect each HBA to a minimum of two ports on different directors Not a requirement but strongly recommended I/O-intensive workloads will benefit Servicing of the array is less disruptive

69 Connectivity Considerations with VMAX− Cont.
Ideal Configuration Minimum Configuration VMware vSphere Servers HBA1 HBA2 HBA1 HBA2

70 Connectivity Considerations with VMAX− Cont.
VMware vSphere Servers HBA1 HBA2 HBA1 HBA2

71 Path Management with VMAX
PowerPath/VE is strongly recommended for vSphere environments Avoid POCs that do not represent real environments NMP is available with vSphere Use the Round Robin policy for Symmetrix arrays esxcli nmp satp setdefaultpsp -P VMW_PSP_RR -s VMW_SATP_SYMM May need additional tuning, depending on the workload

72 Using the VSI to configure Multipathing policy

73 VMAX FA Configuration for SPC-2
Enabled per Fibre Channel port or per initiator Do not activate on a live system if the flag was not previously set Set by default since Enginuity 5773 (DMX-4)

74 Performance and Storage Layout – vSphere with VMAX
Physical Disk Size and Protection Depends on the I/O characteristics of the workload Do not present SAN storage to the ESX server farm as one large SCSI disk LUN Layout Avoid using the same set of disks for applications with different I/O characteristics Use Virtual Provisioning Always provides an optimal balance in VMware environments Configuration for I/O-intensive application data Follow best practice recommendations for a physical server

75 Partition Alignment (VMFS and Guest OS)
Intel-based systems are misaligned due to metadata written by the BIOS to handle LBA-to-CHS translation Partition misalignment affects VMFS and guest OS partitions Host-based partition utilities can be used to align partitions: For Linux and VMFS alignment, use fdisk Offset the partition to a 64 KB boundary Aligned VMFS partitions are now automatically created by the vSphere Client For Windows operating systems (2003 in particular), use the diskpart utility Create the partition aligned on a 64 KB boundary Use diskpar for versions prior to Windows Server 2003 SP1 Earlier versions of diskpart will show partitions as aligned, even if they are not For metavolumes (MetaLUNs), only the base device needs to be aligned Copyright © 2009 EMC Corporation. All Rights Reserved.
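The alignment check itself is simple arithmetic: a partition is aligned when its starting LBA, converted to bytes, falls on the 64 KB boundary. A minimal sketch (the host tools above do the actual work; this just shows the math, assuming 512-byte sectors):

```python
SECTOR = 512          # bytes per sector (assumed)
ALIGN = 64 * 1024     # 64 KB boundary recommended for Symmetrix

def is_aligned(start_sector, sector_bytes=SECTOR, boundary=ALIGN):
    """True if a partition's starting LBA falls on a 64 KB boundary."""
    return (start_sector * sector_bytes) % boundary == 0

print(is_aligned(63))    # False: classic MS-DOS default (31.5 KB offset)
print(is_aligned(128))   # True: 128 * 512 bytes = 64 KB
```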

76 Effects of Partition Misalignment
Symmetrix uses either a 32 KB or 64 KB track size In an aligned system, a 64 KB write would be serviced by a single drive File-system misalignment affects performance in two ways: Misalignment causes disk crossings: I/O broken across two drives Misalignment causes stripe crossings: I/O broken across stripe elements Even if disk operations are buffered by cache, there is a performance impact Larger I/O sizes are most affected. For example, assuming the Symmetrix stripe element size of 64 KB was used, all misaligned I/O of 64 KB would cause disk crossings. For I/O smaller than the stripe element size, the percentage of I/O that causes a disk crossing can be computed with this equation: Percent of data crossing = (I/O size) / (Stripe Element Size) File-system misalignment affects performance in several ways: Misalignment causes disk crossings: an I/O broken across two drives (where normally one would service the I/O). Misalignment causes stripe crossings: I/O broken across stripe elements. Misalignment makes it hard to stripe-align large uncached writes. Even if the disk operations are buffered by cache, the effect can be detrimental, as misalignment will slow flushing from cache.
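The crossing equation above can be expressed directly; it applies to misaligned I/O no larger than the stripe element:

```python
def percent_crossing(io_bytes, stripe_element_bytes=64 * 1024):
    """Percentage of misaligned I/Os that straddle a stripe-element
    boundary, per the rule of thumb: (I/O size) / (stripe element size).
    Valid for I/O sizes up to the stripe element size."""
    return 100.0 * io_bytes / stripe_element_bytes

print(percent_crossing(8 * 1024))    # 12.5: one in eight 8 KB I/Os cross
print(percent_crossing(64 * 1024))   # 100.0: every misaligned 64 KB I/O crosses
```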

77 VMAX Virtual Provisioning in VMware vSphere Environments
vSphere provides native thin provisioning Either one can be used; both features can be used together, but doing so increases risk VMAX Virtual Provisioning simplifies drive and DA workload distribution Provides additional benefits besides optimizing storage use Ensure enough paths and TDEVs to support the workload VMAX Virtual Provisioning provides additional benefits: Zero Reclaim and Rebalancing
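The oversubscription risk mentioned above comes down to a ratio: presented thin-device capacity versus physical pool capacity. A minimal sketch (the sizes are hypothetical):

```python
def subscription_ratio(tdev_sizes_gb, pool_capacity_gb):
    """Thin-pool oversubscription: total presented TDEV capacity divided
    by physical pool capacity.  Ratios above 1.0 mean the pool is
    oversubscribed and must be monitored for exhaustion; layering
    vSphere thin disks on top compounds this risk."""
    return sum(tdev_sizes_gb) / pool_capacity_gb

# Three 500 GB thin devices bound to a 1000 GB pool
print(subscription_ratio([500, 500, 500], 1000))   # 1.5x oversubscribed
```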

78 VMAX Virtual Provisioning for vSphere– Performance Considerations
RAID protection of data devices Balance performance versus resiliency Fully allocate virtually provisioned devices if applications are sensitive to latency or the risk of oversubscription is too high Optimum performance when I/Os are track-aligned The VMware file system is aligned on a 64 KB boundary Virtual disks should be aligned, including boot volumes

79 VMAX Virtual Provisioning for vSphere– Performance Considerations – Cont.
Striped versus Concatenated thin metavolumes Depends on type of workload Small block versus large block Random versus sequential Reads versus writes Influenced by presence or absence of SRDF Concatenated thin metavolumes can be grown Frequently exploited feature in VMware environments Striped metas can be grown as of 5875 and SE 7.2 Most VMware environments have small block random read workload Beware of customer tests that do not represent reality

80 VMAX Zero Space Reclamation
VMAX Zero Space Reclamation Reclaims thin pool storage by deallocating unnecessary track groups Scans each track group and discards those containing all zeros Deallocated tracks are presented as all zeros by Symmetrix to the host Primary use is post-migration from “thick” to “thin” Migration performed using TimeFinder/Clone or Open Replicator for Symmetrix Reclamation should be run prior to configuring any replication relationships Thin devices in active TimeFinder or SRDF relationships will be skipped

81 VMAX Zero Space Reclamation – Cont.
Very useful tool in VMware environments Relevant if the customer upgraded to vSphere Applies to the VMware virtual disk formats: thin, zeroedthick, and eagerzeroedthick

82 VMAX Virtual Provisioning Automated Pool Rebalancing
VMAX Virtual Provisioning Automated Pool Rebalancing Rebalances allocated tracks across data devices contained within a thin pool Levels out imbalances caused by thin pool expansion Or by unbinding thin devices from the thin pool Scheduled process that runs at given intervals User defines imbalance as a percentage utilization difference within the pool (user-configurable from 1% to 50%) In VMware environments, storage requirements can increase rapidly Mass VM deployment (VDI, testing environments, etc.) Virtual environments are very dynamic Adding datastores, removing datastores Automated Pool Rebalancing maintains performance and gives the best TCO
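The rebalance trigger described above can be sketched as a simple spread check. This is an illustrative model of the user-configurable imbalance threshold, not the actual Enginuity scheduling logic:

```python
def needs_rebalance(device_used_pct, threshold_pct=10):
    """Toy trigger for automated pool rebalancing: fire when the spread
    in per-data-device utilization exceeds the configured threshold
    (user-settable from 1% to 50% per the slide; 10% is an assumed
    default here)."""
    return max(device_used_pct) - min(device_used_pct) > threshold_pct

# After pool expansion, new data devices start nearly empty:
print(needs_rebalance([70, 72, 5]))    # True: 67-point spread
print(needs_rebalance([70, 72, 68]))   # False: within the 10-point threshold
```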

83 VMware vSphere and FAST
FAST (as distinct from FAST VP) currently operates at the full-LUN level Great value in vSphere environments RDMs, dedicated datastores Considerations are the same as those for physical servers Configure a single standard size of device on all tiers Increases the probability of like-sized devices being available to perform a swap Meta devices are moved/swapped as a complete entity For optimal system performance use Optimizer in concert with FAST Optimizer will balance load within a tier Utilize EMC Virtual Storage Integrator to identify the Storage Type

84 Summary EMC VMAX is an ideal platform for the private cloud
EMC VMAX provides a highly available and scalable platform for virtualized datacenters Future evolution of the platform will support the virtualized data centers of the future

85 Q&A

