Presentation on theme: "EMC INFRASTRUCTURE FOR VMWARE CLOUD ENVIRONMENTS"— Presentation transcript:
1 EMC INFRASTRUCTURE FOR VMWARE CLOUD ENVIRONMENTS
EMC Symmetrix VMAX 40K, EMC Symmetrix FAST VP, EMC SRDF, and VMware vSphere 5
This presentation shows how EMC Symmetrix VMAX 40K supports data center provisioning for critical databases and applications in a VMware cloud. Our solution combines VMAX 40K, FAST VP, and SRDF in a virtual environment based on VMware vSphere 5.
EMC Solutions Group
2 Agenda
Solution and Technology Overview
Configuration Details
Deploying Storage on VMAX 40K with Unisphere for VMAX
Virtualization and Application Profiles
Validation and Testing
Summary and Solution Benefits
3 Solution and Technology Overview
Let's take a look at our EMC solution for a fast, flexible, and capable infrastructure to support VMware cloud environments.
4 Business Case
Support all of your mission-critical applications and databases
Assure performance at production and remote sites with FAST VP and FAST VP coordination with SRDF
Reduce the time needed to manage storage resources with Unisphere for VMAX
Increase your ROI: maximize use of Tier 1, 2, and 3 storage assets with Federated Tiered Storage
Support mission-critical applications and databases on your VMware cloud with EMC Symmetrix VMAX 40K, FAST VP, SRDF, and the new Unisphere for VMAX.
5 Introducing Symmetrix VMAX 40K
New: bigger, better, faster
2X performance and 2X capacity
New 2.5" SAS drive support: 400 drives in a standard 24" cabinet
System Bay Dispersion: flexible expansion for space-constrained data centers
EMC introduces the next generation in high-end storage: the Symmetrix VMAX 40K and the Enginuity 5876 operating environment, which build on the Symmetrix VMAX 20K and 10K foundation of powerful, trusted, smart storage to provide even higher levels of performance, availability, and intelligence in the virtual data center.
Symmetrix VMAX 40K is the highest-performing, most scalable storage platform in the industry. With VMAX 40K, you get twice the performance and twice the capacity within the same space and power footprint, along with more performance per watt.
Symmetrix VMAX 40K incorporates many innovative enhancements, including high-density storage: space-constrained data centers can accommodate 33 percent more drives in the same footprint, up to 3,200 high-density drives on one array.
Symmetrix VMAX 40K also delivers System Bay Dispersion, so array bays can be separated by up to 82 feet (25 meters) in data centers with floor-loading issues. This feature also helps you work around data center obstructions, such as columns, during installation.
6 Powerful Scale and Consolidation
Massive scale-out to transform your hybrid cloud environments
Let's take a look at how Symmetrix VMAX 40K provides the scale and consolidation required for hybrid cloud deployments. As with the VMAX 20K, you start with one Engine, and as additional performance and scale are required, new Engines are added.
Note to Presenter: Click now in Slide Show mode for animation.
Similar to the Symmetrix VMAX 20K, VMAX 40K scales up to eight Engines, and Engine resources are networked and shared through the Virtual Matrix.
7 Solution Overview
Deploy multiple critical database applications on a VMware private cloud, enabled by a VMAX 40K array running Enginuity 5876 and optimized by FAST VP, with site protection provided by EMC SRDF replication
Simulate a highly active customer database environment serving Tier-1 applications
This solution implements multiple applications hosted on the VMAX 40K platform, protected against site failure with SRDF, and optimized for performance with EMC FAST VP.
Microsoft SQL Server OLTP and DSS, Oracle OLTP, and SAP OLTP environments are deployed in virtual machines on VMware ESXi 5.0 Update 1:
Online transactional databases: Oracle Database 11g R2, SAP ERP 6.0 EHP 4, Microsoft SQL Server 2012
Data warehouse: Microsoft SQL Server 2012
8 Solution Overview, continued
Managed, monitored, and optimized through EMC Unisphere for VMAX
Simplified array management of FAST VP using the "Allocation by Policy" feature
Assured performance in DR scenarios, enabled by SRDF-aware FAST VP (Enginuity 5876) with constant, automated tuning of application storage at both the production and disaster recovery sites
Virtualized with VMware vSphere 5 Update 1
EMC Unisphere for VMAX provides an intuitive array configuration and monitoring solution, enabling fast and easy provisioning of storage as well as advanced performance monitoring and diagnostic capability.
With the latest features of the Enginuity 5876 operating environment on the VMAX 40K array, deployment of FAST VP is simpler. By turning on Allocation by Policy, considerations for initial placement of data are eliminated: FAST VP manages placement based on the policy set and the capacity available in the storage tiers.
With SRDF awareness for FAST VP enabled, FAST VP transfers sub-LUN statistics to the remote array once per hour. The FAST controller on the target array then has up-to-date information about read activity on the production devices and ensures placement of hot and cold data across tiers similar to the source array. This coordination means that the production and target arrays are closely matched in terms of data placement, and in the event of a failover, the time before performance on the target array matches that of the source is reduced because less data needs to be relocated.
9 Solution Architecture and Design
Two ESXi servers configured at each site
Oracle, SAP, and SQL Server deployed on VMs
EMC VMAX 40K storage arrays optimized by FAST VP and protected by SRDF/S
8 Gb/s FC SAN running between hosts and sites
1 Gb/s Ethernet network connection between hosts and sites
The architecture diagram shown here depicts the environment setup:
Two ESXi servers were deployed in a cluster at each site.
Oracle, SAP, and Microsoft SQL Server were deployed on VMs provisioned on LUNs from the EMC VMAX 40K array, replicated and protected using EMC Symmetrix Remote Data Facility.
Replication runs at synchronous distance across 2 x 8 Gb/s FC links.
All hosts are connected to the arrays using 8 Gb/s Fibre Channel.
Failover is controlled from a separate management host with connections to both environments.
10 Solution Hardware
The following hardware components were used to build this solution:
Storage arrays: 2 x EMC Symmetrix VMAX 40K, each with:
3 Engines
384 GB cache
32 x 200 GB Flash drives
126 x 600 GB 10K FC drives (vault)
64 x 450 GB 15K FC drives
72 x 2 TB 7.2K SATA drives
24 x 8 Gb/s FC ports
VMware ESXi servers (production virtual environment): 2 x ESXi servers, each with:
8 x ten-core Intel Xeon E7 processors
1 TB RAM
2 x dual-port Brocade 825 8 Gb/s FC HBAs
VMware ESXi servers (disaster recovery virtual environment): 2 x ESXi servers, each with:
4 x ten-core Intel Xeon processors
128 GB RAM
Dual 1 Gb NICs
Dual 10 Gb CNAs
FC switches: 2 x Brocade DCX-4S 8 Gb/s FC director-class switches
Ethernet switches: 2 x 1 Gb/s Ethernet switches for IP connectivity
11 Solution Software
The table lists the software resources used in the solution environment:
EMC Symmetrix VMAX Enginuity code: 5876
EMC PowerPath: PowerPath/VE 5.7 for VMware
EMC Unisphere for VMAX: 1.0
EMC Solutions Enabler: 7.4
VMware vSphere: ESXi 5 Update 1
Applications: SAP ERP 6 EHP 4; Oracle ASMLib 2.0.5; Oracle Database 11g R2; SQL Server 2012 RTM
Operating systems: Windows Server 2008 R2; SUSE Linux Enterprise Server 11; Red Hat Enterprise Linux 5.7
Workload simulation test tools: SQL Server 2012: MSTPCE (OLTP) toolkit and Quest Benchmark Factory (DSS/DW); Oracle: SwingBench 2.3; SAP: HP LoadRunner 9.52 Build 3188
12 Key Technology Components
The solution is built on these key elements:
EMC components:
Symmetrix VMAX 40K with Enginuity 5876
Unisphere for VMAX
Symmetrix Remote Data Facility (SRDF)
Symmetrix FAST VP with SRDF coordination
PowerPath/VE
EMC Virtual Storage Integrator (VSI)
Applications:
Oracle Database 11g R2 Enterprise Edition
Microsoft SQL Server 2012
SAP ERP 6.0 EHP 4
13 Symmetrix VMAX 40K Array with Enginuity 5876 Operating Environment
High-end enterprise storage array with a simple, intelligent, modular design that allows the system to grow seamlessly and cost-effectively from an entry-level configuration to the world's largest storage array
New dense configuration options with 2.5" Flash, FC, and SAS drives: up to 33% more drives in the same footprint and more performance per watt
Virtual Provisioning provides non-disruptive, on-demand thin provisioning
FAST VP provides automatic storage tiering at the sub-LUN level
FAST VP SRDF coordination ensures performance at disaster recovery sites
Federated Tiered Storage maximizes your ROI on Tier 1, 2, and 3 storage
Capacity-efficient snapshot capabilities with TimeFinder VP Snap
Simplified configuration and management with Unisphere for VMAX
The array supports Flash, FC, and SATA drives within a single array, and an extensive range of RAID types.
Virtual LUN VP Mobility enables manual movement of thin LUNs between pools, transparently and with no host or application impact, including the ability to re-gather a thin volume's many thin device extents from multiple thin pools and move them all to a single pool, regardless of the underlying disk technology or RAID type.
EMC PowerPath/VE supports multiple I/O paths to logical devices and intelligently distributes I/O requests across all available paths.
14 EMC Virtual Provisioning Overview
[Diagram: three ESXi hosts, each presented 10 TB thin devices, with 3-4 TB physically allocated from a common storage pool]
Improves storage utilization
Reduces storage provisioning complexity and overhead
Automates processes to easily grow storage
Overprovision storage to last the lifetime of an application, without providing all the physical storage up front
Add capacity non-disruptively and on demand
Automatically rebalance thin pools to maintain performance
Simplifies storage management with Unisphere for VMAX
EMC Virtual Provisioning is EMC's implementation of thin provisioning and is designed to simplify storage management, improve capacity utilization, and enhance performance.
Virtual Provisioning separates physical storage devices from the storage devices as perceived by host systems. This enables non-disruptive provisioning and more efficient storage utilization.
Virtual Provisioning makes it possible to provision storage for applications without providing all of the physical storage up front: administrators can assign enough storage to last the lifetime of the application without needing to purchase all the physical storage in advance.
This approach has the following benefits:
Initial acquisition costs can be reduced, because storage is added only as required.
There are fewer disruptions to the application to add or change storage devices.
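As a rough sketch of what thin provisioning looks like from the Solutions Enabler command line (rather than Unisphere), the commands below create thin devices and bind them to a pool. This is illustrative only: the device count, size, and pool name are hypothetical, and the symconfigure grammar should be verified against the Solutions Enabler 7.4 documentation.

```shell
# Hypothetical example: create four thin devices (TDEVs) and bind them
# to an existing thin pool. Sizes and names are illustrative, and the
# symconfigure syntax should be checked against the SE 7.4 docs.
symconfigure -sid 542 -cmd \
  "create dev count=4, size=66 GB, emulation=FBA, config=TDEV, binding to pool=FC10K_RAID1;" \
  commit
```

The hosts then see the full configured size of each TDEV, while physical tracks are drawn from the pool only as data is written.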
15 EMC Symmetrix FAST VP – Overview
Automatic storage tiering for Virtual Provisioning thin pools
Analysis and data movement at the sub-LUN level:
Spreads data from a single thin device across multiple pools
Places very active parts of a LUN on high-performing Flash drives
Places less active parts of a LUN on higher-capacity, more cost-effective FC or SATA drives
Moves data at the extent group level (7,680 KB)
Moves data based on user-defined policies and application performance needs
Data movement is automatic and non-disruptive
FAST VP provides support for sub-LUN data movement in thin-provisioned environments. It combines the advantages of Virtual Provisioning with automatic storage tiering at the sub-LUN level to optimize performance and cost, radically simplify storage management, and increase storage efficiency.
FAST VP uses intelligent algorithms to continuously analyze devices at the sub-LUN level. This enables it to identify and relocate the specific parts of a LUN that are most active and would benefit from being moved to higher-performing storage such as Flash. It also identifies the least active parts of a LUN and relocates that data to higher-capacity, more cost-effective storage such as SATA, without impacting performance.
Data movement between tiers is based on performance measurement and user-defined policies, and is executed automatically and non-disruptively by FAST VP.
Symmetrix VMAX with FAST VP: getting the right data to the right place at the right time.
16 Symmetrix FAST VP – Components
Storage tier: A set of one or more virtual pools containing devices with the same technology type, drive speed, and RAID protection level.
Storage group: A logical grouping of storage devices for common management.
FAST policy: A set of tier usage rules that is applied to associated storage groups. A policy assigns an upper usage limit for each tier, specifying how much data from a storage group can reside on that tier.
A storage group is associated with a FAST policy, which determines how the storage group's devices are allocated across tiers.
A FAST policy can specify up to three tiers and assigns an upper usage limit for each. These limits determine how much data from a storage group can reside on each tier included in the policy.
To be a member of a tier, a virtual pool must contain only data devices that match the technology type, drive speed, and RAID protection type of the tier.
Administrators can set high-performance policies that use more Flash drive capacity for critical applications, and cost-optimized policies that use more SATA drive capacity for less-critical applications.
17 FAST VP SRDF Coordination
FAST VP continually monitors and adjusts data placement at the production site
Performance statistics are captured at both sites and exchanged every hour
Decisions on data placement are made at both sites using data from the active site
FAST VP now coordinates with SRDF. Coordination is enabled per storage group, and allows FAST VP to transmit performance statistics packaged with the SRDF data.
The FAST controller on the R2 array uses current performance statistics from the last hour to update its scoring for devices and extents and make placement decisions on the R2.
In real terms this means that in the event of a disaster, the placement of the data closely matches the R1 array. This feature is tested as part of this solution.
Symmetrix VMAX with FAST VP and SRDF coordination: getting the right data to the right place at the right time at both the production and DR sites.
18 Configuration Details
Next we'll look at some of the specific configuration and setup characteristics of the solution.
19 Storage Configuration: Virtual Pools

Thin pool name | Drive size/technology/RPM | RAID protection | Drives | TDAT size | TDATs | Pool capacity
FLASH_3RAID5   | 200 GB Flash    | RAID 5 (3+1) | 32  | 68.8 GB | 64  | 4.2 TB
FC10K_RAID1    | 600 GB FC 10K   | RAID 1       | 126 | 66 GB   | 504 | 32 TB
FC15K_RAID1    | 450 GB FC 15K   | RAID 1       | 64  | 49.2 GB | 256 | 12.2 TB
SATA_6RAID6    | 2 TB SATA 7.2K  | RAID 6 (6+2) | 72  | 240 GB  |     | 60 TB

EMC Virtual Provisioning greatly simplifies the storage design. Thin pools were created on each array based on the drive types available:
A Flash tier was created and protected with RAID 5.
FC tiers were created and protected with RAID 1.
A SATA tier was created and protected with RAID 6.
This configuration was mirrored on the target array to ensure the same levels of performance can be achieved at both sites. Both source and target arrays are configured with the same number of drives and pools.
The following slide details how these pools are utilized by the applications and FAST tiers.
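A sketch of how one of these pools might be built from the Solutions Enabler command line, as an alternative to Unisphere. The command-file grammar is illustrative and the DATA-device range is hypothetical; verify both against the Solutions Enabler 7.4 documentation before use.

```shell
# Illustrative only: create the FC 10K thin pool and enable DATA
# devices (TDATs) in it. The device range 0100:02F7 is hypothetical;
# check the symconfigure grammar against your SE 7.4 docs.
symconfigure -sid 542 -cmd "create pool FC10K_RAID1, type=thin;" commit
symconfigure -sid 542 -cmd \
  "add dev 0100:02F7 to pool FC10K_RAID1, type=thin, member_state=ENABLE;" \
  commit
```

The same pool layout would then be repeated on the target array (SID 541 in this solution) so both sites offer identical tiers.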
20 Virtual Pool Use by Application
SAP, Oracle, and MS SQL OLTP share common Flash, FC, and SATA pools
The MS SQL DSS application shares a common SATA pool with all applications but is bound to a separate FC pool
The SAP, Oracle, and Microsoft SQL Server OLTP applications are configured to share common Fibre Channel, SATA, and Flash virtual pools.
The smaller FC pool is used only by the MS SQL DSS devices. Because DSS traffic can consist of long sequential reads, segregating it from the highly random OLTP traffic stops this potentially disruptive application from affecting shared resources, and also reduces the load on the shared FC pool.
21 Front-End Port Usage
Isolate write-intensive applications on dedicated front-end ports to ensure consistent performance
Common traffic on common ports
PowerPath/VE manages load balancing and failover
In some instances, such as the setup detailed here, multiple applications run high workloads that peak at the same time with very different workload patterns. Providing a logical separation of workloads eliminates any possible contention for hardware resources such as HBAs and array front-end ports, and guarantees service levels.
This is done by using separate port groups for masking views, and through SAN zoning. Each workload domain used different port groups, as shown in the diagram. SAP, although physically running on the same server as Oracle, was segregated to use different front-end ports and HBAs, whereas the two MSSQL OLTP workloads, which run similar workloads, used the same ports and were separated from the DSS workload.
PowerPath/VE manages failover and load balancing of host I/O at the ESXi server level, ensuring even utilization of host HBA resources by applications running in the virtual environment, as well as providing industry-leading failover and proactive path monitoring.
PowerPath/VE considers array front-end port utilization and queue depths when deciding which path to direct I/O onto. This creates efficiencies in path selection that result in a well-tuned implementation with a very even distribution of load across directors.
The net result of this advance planning is limited contention for resources from competing applications.
22 Configuring FAST VP
FAST VP is either enabled or disabled
Data Movement should be set to Automatic
Relocation Rate controls the aggressiveness of FAST
The new Allocate-by-Policy feature simplifies capacity management of thin-provisioned environments
Depicted here are the main FAST VP configuration settings:
State: Either On or Off.
Movement Mode: Either Automatic or Off. There is no user interaction once FAST VP is turned on; FAST VP moves data at a granular level, in chunks of approximately 8 MB, and there is no way for a user to identify each chunk being moved.
Relocation Rate: Controls how aggressive FAST VP is in its data movements. The lower the value, the more aggressive FAST VP is. The minimum value is 1, the maximum value is 10, and the default value is 5. For the testing in our environment we used a value of 2. This setting affects the amount of data that is moved at any given time, and the priority given to moving data between pools; it does not affect the speed of the data movement.
Reserved Capacity Limit: A percentage of the capacity of each virtual pool that is reserved for non-FAST activities. If the free space in a given virtual pool (as a percentage of pool-enabled capacity) falls below this value, the FAST controller does not move any more data into that pool.
To further simplify the management and capacity planning of FAST VP environments, Enginuity 5876 and Solutions Enabler 7.4 provide FAST VP allocation by policy. This system-wide setting ensures that new allocations for thin devices associated with FAST VP policies no longer come only from the pool to which a thin device is bound, but from any of the tiers associated with the FAST policy.
If one tier cannot service a new allocation because it is full, the tracks are allocated from one of the remaining tiers.
Allocate-by-Policy greatly simplifies capacity management: tier demand reports can easily be generated via the command line or visually through Unisphere to ascertain how storage is being utilized and to plan for future needs.
As a recommended practice, enable VP allocation by FAST policy.
Time windows: Create time windows to specify when data can be collected for performance analysis and when data movements can be executed. We recommend keeping the windows always open so FAST VP can use the most recent analysis to optimize data placement.
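For reference, the same settings can be inspected and changed from the Solutions Enabler command line with the symfast utility. The utility itself is real, but the specific flag names below are assumptions and must be verified against the Solutions Enabler 7.4 CLI reference before use.

```shell
# Illustrative only: flag names are assumptions; confirm against the
# Solutions Enabler 7.4 symfast documentation.
symfast -sid 542 list -control_parms          # show current FAST VP settings
symfast -sid 542 set -control_parms \
        -vp_data_move_mode Automatic -vp_reloc_rate 2
symfast -sid 542 set -control_parms -vp_allocation_by_fp Enable
```

A relocation rate of 2 mirrors the value used in this solution's testing.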
23 Configuring FAST VP, continued
Create FAST VP tiers from VP pools
Define policies by specifying percentages of Flash, FC, and SATA
Apply the policy to the application
Configuring FAST VP takes minutes. There are three main steps. From Unisphere, select Storage; from here you:
Create storage tiers from existing VP pools
Define FAST VP policies
Associate policies with applications
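The same three steps can be sketched with the Solutions Enabler symtier and symfast utilities. Both utilities exist, but the tier, policy, and storage group names are hypothetical and the exact flags are assumptions; check them against the Solutions Enabler 7.4 documentation.

```shell
# Illustrative only: names are hypothetical and flags are assumptions.
# Step 1: create a FAST VP tier from existing virtual pools
symtier -sid 542 create -name Flash_Tier -technology EFD -vp -tgt_raid5
# Step 2: define a policy with per-tier upper usage limits
symfast -sid 542 create -fp_name Oracle_FP \
        -tier_name Flash_Tier -max_sg_percent 15   # repeat for FC and SATA tiers
# Step 3: associate the policy with the application's storage group
symfast -sid 542 associate -sg Oracle_SG -fp_name Oracle_FP
```

The 15 percent Flash limit here matches the Oracle policy used later in this solution.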
24 Tier Demand Reports
Capacity management is simplified
Excess GB shows how much space is left in each tier
Max SG Demand shows the maximum demand FAST VP storage groups place on each tier
Used GB shows how much storage is currently used
Use tier demand reports from Unisphere to monitor tier usage and demand.
When using Allocate-by-Policy, managing oversubscription in the array is simplified with the knowledge that FAST will automatically distribute devices across tiers according to the policies set.
The tier demand reports in Unisphere show at a glance how much space is left in each tier. Storage administrators can use this as a gauge to determine when they need to order additional storage.
The Used GB column shows the current pool usage. The Max SG Demand column shows the absolute maximum demand that FAST-enabled storage groups place on a particular tier, whereas the Excess column tells you how much space is left after accounting for the maximum potential usage from FAST VP.
The storage administrator can set up accounts with view-only access so management and auditing teams can be self-sufficient when generating reports on storage usage.
25 Deploying Storage on VMAX 40K Using Unisphere for VMAX Next we’ll look at how simple it is to deploy storage on the VMAX 40K using the new Unisphere for VMAX.
26 Simplified Storage Provisioning with Unisphere for VMAX
Unisphere for VMAX provides an intuitive interface for storage provisioning, as well as comprehensive performance analysis, monitoring, and trending of storage assets. Built with simplicity in mind, Unisphere's common tasks guide a new user through the flow of creating hosts, provisioning storage, and applying FAST policies.
With Unisphere, customers can monitor and manage multiple VMAX arrays from a single management interface. Unisphere for VMAX is built on Adobe Flex technology and is designed to be consistent with the existing range of EMC Unisphere management products, providing a familiar look and feel across the entire EMC product stack.
The task-oriented design makes provisioning and deploying storage from the VMAX array very easy: common task wizards guide new users through the process of provisioning storage to hosts, and prompts point the user to the next steps, resulting in an intuitive flow and learning process.
27 Cascaded Storage Groups
New feature providing the ability to nest multiple storage groups in a single masking view
Enginuity 5876 offers a new feature called cascaded storage groups: the ability to nest storage groups within storage groups, so a parent storage group is associated with a masking view and contains a number of nested child groups.
This makes it much easier to manage ESXi clusters with FAST VP-enabled storage. Prior to 5876 it was necessary to create and manage separate storage groups for FAST and for masking, which made processes more complicated; now each application can be managed with its own storage group.
Simplifies FAST VP configurations in virtual environments
Multiple applications within a single masking view, each managed with its own storage group and FAST policy
Simplified monitoring and management at the application level
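A cascaded storage group layout for an ESXi cluster might look like the following from the Solutions Enabler command line. The group names and device range are hypothetical, and the exact symsg syntax is an assumption to verify against the Solutions Enabler 7.4 documentation.

```shell
# Illustrative only: names and device range are hypothetical; verify
# the symsg syntax against the SE 7.4 docs.
symsg -sid 542 create ESX_Cluster_SG            # parent group for the masking view
symsg -sid 542 create Oracle_SG                 # child group per application
symsg -sid 542 -sg Oracle_SG addall devs -range 0426:0430
symsg -sid 542 -sg ESX_Cluster_SG add sg Oracle_SG   # nest the child in the parent
```

The masking view is then built against the parent group, while FAST policies are associated with each child group independently.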
28 Enable FAST VP Coordination on Storage Groups
With this feature enabled, FAST VP sends device usage statistics from R1 to R2 to ensure that the FAST engine at the standby/failover site has up-to-date metrics on which to base decisions.
Enabled under storage group management in Unisphere: check the Enable FAST VP RDF Coordination box.
Coordination must be enabled on storage groups at both the source and target VMAX.
SRDF coordination allows the R1 and R2 to send and receive performance statistics for FAST VP movement for the associated storage group.
SRDF/S and SRDF/A are supported, as well as concurrent SRDF configurations.
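The documented workflow in this solution uses the Unisphere check box, but the same setting should be reachable from the CLI. The flag shown below is an assumption, not confirmed syntax; check the Solutions Enabler 7.4 symfast documentation before relying on it.

```shell
# Illustrative only: the -rdf_coordination flag is an assumption.
# Coordination must be enabled on BOTH arrays for the storage group.
symfast -sid 542 modify -sg Oracle_SG -rdf_coordination Enable   # source (R1)
symfast -sid 541 modify -sg Oracle_SG -rdf_coordination Enable   # target (R2)
```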
29 Virtualization and Application Profiles This section looks at the virtualization and application configuration profiles we used in this solution.
30 VMware Configuration
Reduce HBA queuing on the vSphere ESXi servers by changing the queue depth: esxcli system module parameters set -p bfa_lun_queue_depth=64 -m bfa
Virtual machine boot LUNs are configured to use the LSI SAS adapter
VMware Paravirtual SCSI (PVSCSI) adapters are used for data LUNs for high performance
Two ESXi servers were deployed at each site. Virtual machines were deployed on storage presented from the VMAX 40K array to the ESXi nodes at each site, with FAST policies applied.
We performed minimal tuning of the ESXi operating environment:
We reduced HBA queuing for the vSphere ESXi servers by changing the queue depth using the esxcli command shown.
The VM boot LUNs were configured to use the LSI SAS adapter.
VMware Paravirtual SCSI (PVSCSI) adapters were used for the data LUNs for high performance.
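The queue-depth change from the slide can be sketched as follows. This assumes Brocade HBAs (the bfa driver module); other HBA vendors use different module and parameter names, and a host reboot is required for the new value to take effect.

```shell
# Raise the Brocade (bfa) HBA LUN queue depth to 64, per the slide.
esxcli system module parameters set -m bfa -p bfa_lun_queue_depth=64

# Verify the configured parameter value (effective after reboot):
esxcli system module parameters list -m bfa
```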
31 EMC Virtual Storage Integrator and VMAX 40K
EMC Virtual Storage Integrator (VSI) for VMware vSphere is a plug-in to the VMware vSphere Client that provides a single management interface for managing EMC storage within the vSphere environment.
It provides enhanced visibility into the VMAX 40K directly from the vCenter GUI. The Storage Viewer and Path Management features are accessible through the EMC VSI tab.
From here you can gather information about the configured datastores, as well as the underlying storage pools they are bound to. It is also possible to view LUN-level statistics through the performance tab.
In the solution, VMAX volumes host all datastores, and Storage Viewer provides details of each datastore's virtual volumes, storage volumes, and paths.
Storage Viewer is particularly useful when configuring the environment, and EMC VSI can also export LUN listings.
32 Application Configuration
Oracle OLTP: 1 Oracle DB instance, 12 vCPUs with 53 GB RAM; 1 DB per VM, 2 TB capacity; SwingBench order entry workload with 400 users, 60:40 R/W ratio
MSSQL OLTP (TPC-E like): 2 SQL instances, 16 vCPUs with 32 GB RAM; 1 DB per VM, 1 TB capacity; mixed workloads to simulate hot and warm applications, 85:15 R/W ratio
SAP OLTP: 3 SAP ERP 6 IDES EHP 4 instances, 16 vCPUs with 32 GB RAM; 1 Oracle DB instance at 845 GB capacity; 1,000 LoadRunner update users plus a local client copy simulation, 80:20 R/W ratio
MSSQL OLAP (DSS, TPC-H like): 1 SQL instance, 32 vCPUs with 128 GB RAM; 1 DB per VM, 2 TB capacity; 2 concurrent loads, 100% read
VMDK stands for Virtual Machine Disk; a VMDK instance is a VM instance using a VMDK datastore.
Oracle: Our Oracle configuration included one OLTP database on one VM. The workload ran 400 users with a database read/write ratio of 60:40.
SAP: SAP uses IDES release 7.01 for testing on Oracle 11g. To generate the load, we used HP LoadRunner to simulate normal activity, plus an SAP local client copy (copying company-specific tables from one SAP partition to another). Details:
Application layer: SAP Enhancement Package 4 (EHP4) for SAP ERP 6.0 IDES; SAP NetWeaver Application Server for ABAP release 7.01
Database layer: Oracle 11g
Operating system layer: SUSE Linux Enterprise Server (SLES) for SAP Applications 11 SP1
Simulator layer: 1 x HP LoadRunner Controller and HP Virtual User Controller; 4 x HP LoadRunner generators
All SAP and database instances are installed on VMware vSphere virtual machines.
MSSQL: Three separate MSSQL instances were loaded: 2 x OLTP running a TPC-E-like workload and 1 x DSS running a TPC-H-like workload. The virtual machine configuration is shown in the table.
Virtual memory was dedicated for all hosts, with no over-commitment of memory. For OLTP, mixed workloads were simulated to represent hot and warm applications; for DSS, a concurrent user load was driven against the database to generate high read activity.
33 Validation and Testing
Next we'll take a look at the validation and testing that we performed:
Application performance validation
Online policy tuning
SRDF coordination
Application failover with rapid ramp-up
34 Application Performance Summary
Four mission-critical, high-transaction applications running mixed workloads in a VMware vSphere 5.0 private cloud deployed on VMAX 40K:
SAP OLTP
2 x SQL Server OLTP instances
1 x SQL Server data warehouse instance
Oracle OLTP
35 FAST VP Policies
The table below details the FAST VP policies (upper usage limit per tier) set for our baseline configuration:

Storage group | FAST policy name | Flash | FC  | SATA
MSSQL1_OLTP   | MSSQL_OLTP       | 5%    | 40% | 100%
MSSQL2_OLTP   | MSSQL_OLTP       | 5%    | 40% | 100%
MSSQL_DSS     |                  | 0%    |     |
Oracle        |                  | 15%   | 35% | 50%
SAP           |                  | 10%   | 80% |

Both MSSQL1 and MSSQL2 applications share the same FAST policy.
The policies restrict each application's use of the Flash tier to prevent any single application from dominating Flash resources. This means that as more applications come online, Flash resources remain available to accommodate their workloads.
36 Build Validation—All Applications Running
Unisphere's real-time analysis and diagnostic performance charts can be used to give insight into how the VMAX 40K and the deployed applications are functioning from the storage side.
The diagnostic chart shown here displays the R1 array host I/Os per second broken down per storage group, with all applications deployed and running at steady state and FAST VP enabled and tuning. The MSSQL DSS application, shown in orange, generated the bulk of the load on the array, with the other applications making up the rest.
The DSS workload is a cyclical one that creates a sawtooth effect on the chart; this is normal for the workload and does not affect the other running applications.
MSSQL2 is running at approximately 8,000 IOPS with a TPC-E-like profile.
MSSQL1 is running at approximately 4,000 IOPS with a TPC-E-like profile.
Oracle is running 400 highly active users in the SwingBench order entry benchmark.
SAP is running at approximately 4,000 IOPS with 1,000 active users.
Even at these relatively high I/O profiles, the four applications co-exist on shared storage from the underlying virtual pools managed by FAST VP. Each application achieved acceptable performance and did not affect the others, thanks to well-tuned FAST VP policies.

OLTP application | Server transactions per minute | Average read response time
Oracle  | 97,846  | 5.5 ms
MSSQL1  | 37,920  | 10 ms
MSSQL2  | 139,020 | 11 ms

DSS/OLAP application | Average throughput
MSSQL DSS | 808 MB/s

Application | Server transactions per minute | Average dialog response time
SAP | 289 | 4.8 ms
37 Validating FAST VP SRDF Coordination Balance at source and target: similar tier usage at both sites; solid lines are source devices, dotted lines are target devices.

To validate SRDF coordination with FAST VP, all applications were migrated to the FC tier on both arrays and a full SRDF establish was performed to ensure a neutral starting point. The graph tracks the rebalancing of one of the data devices in the Oracle storage group, a Symmetrix device that was highly active during the rebalance. The solid lines represent the tier usage on the R1 and the dotted lines represent the storage tiers on the R2. At the starting point all data resided on FC, and FAST policies were then associated with the applications. Over time, the tier utilization on the R1 and R2 became very closely matched. This shows that capacity is being balanced across this device in a similar manner on both R1 and R2, meaning the FAST VP controller at each site made the same data-placement decisions.
Continued on next slide.
38 SRDF Coordination, continued
A detailed listing can be produced with the symcfg list command, as shown here for both arrays.

Source array:
~]# symcfg list -tdev -range 426:426 -bound -detail -v -sid 542
Symmetrix ID:
Enabled Capacity (Tracks):
Bound Capacity (Tracks):
S Y M M E T R I X   T H I N   D E V I C E S
                              Pool           Pool          Total
      Bound           Flags   Total    Subs  Allocated     Written
Sym   Pool Name       ESPT    Tracks   (%)   Tracks (%)    Tracks (%)
0426  FC10K_RAID1     F..B
      FLASH_3RAID
      SATA_6RAID
Total Tracks

Disaster recovery array:
~]# symcfg list -tdev -range 426:426 -bound -detail -v -sid 541
(The listing for the remote copy of device 0426 shows the same pools and a similar track distribution.)

The balanced usage of the same device at both sites shows that the FAST VP engine on the R2 is making its decisions based on the same information as the R1 array. The total written tracks on R1 and R2 differ because zero-block detection on the RDF link eliminates tracks that are all zero instead of writing them to the R2.
The device has been balanced across all tiers in a similar distribution on both R1 and R2.
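The side-by-side comparison described in the notes can be automated: given the per-pool allocated-track counts from the symcfg listings on each array, compare the percentage distribution across tiers. A minimal sketch, assuming per-pool counts have already been parsed out of the CLI output; the track numbers below are made-up placeholders, not the measured values:

```python
# Compare FAST VP tier distributions for one thin device on R1 vs R2.
# Track counts here are illustrative placeholders, not measured values.
r1_tracks = {"FC10K_RAID1": 42000, "FLASH_3RAID": 9000, "SATA_6RAID": 15000}
r2_tracks = {"FC10K_RAID1": 41500, "FLASH_3RAID": 9300, "SATA_6RAID": 15200}

def distribution(tracks):
    """Fraction of the device's allocated tracks residing in each pool."""
    total = sum(tracks.values())
    return {pool: alloc / total for pool, alloc in tracks.items()}

def max_skew(a, b):
    """Largest per-tier difference (in percentage points) between two arrays."""
    da, db = distribution(a), distribution(b)
    return max(abs(da[p] - db[p]) * 100 for p in da)

skew = max_skew(r1_tracks, r2_tracks)
print(f"max per-tier skew: {skew:.1f} points")  # a small skew indicates coordinated placement
```

A skew of less than a few percentage points across all tiers is the kind of close match the graph on the previous slide shows, and supports the conclusion that both FAST VP engines made the same placement decisions.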
39 Failover Test
Application loads were run for extended periods at the R1 site, and FAST VP balanced the storage groups on both arrays. The RDF links were then split to simulate a failover, SAP and Oracle were brought up on the remote ESXi server, and performance was monitored before and after the failover.

Oracle and SAP running at R1: after SRDF synchronization was complete and data appeared to be balanced at both sites:
Applications were shut down at the R1 site
400 users were loaded against Oracle
1,000 users were loaded against the SAP landscape
The workload was left to run for 2 hours
The ramp-up time of the host I/Os was noted

The applications were then shut down and a failover simulated. Oracle and SAP were started at the DR site with the same user loads, and the ramp-up times were compared. Application performance data was also captured to verify that similar performance metrics were observed. The applications were restarted on the R2 array using the remote copy; within minutes of starting the applications and generating the same user load, they were generating the same amount of I/O as before the failover.
Continued on next slide.
Oracle and SAP failed over and running at R2
40 Failover Test, continued Following the failover, performance at the application level was on par with production; the transactions per minute and response times observed were equal. This chart shows the number of transactions per minute processed by the Oracle application at the source site while synchronizing SRDF traffic to the R2 devices at the target site. It also shows the number of transactions per minute processed at the target site following the failover to the remote site. A slight improvement in transactions per minute was observed at the recovery site, due to the fact that the SRDF links were suspended, so writes no longer had to traverse the SRDF links to the remote array cache.
41 FAST VP Responsiveness and Flexibility At 7:40 a.m. the SQL OLTP policy was adjusted to add additional Flash capability. After 30 minutes performance improved: SQL transactions/sec increased and latency dropped. After 2 hours SQL performance stabilized. Zero impact was observed on the other running applications.

To demonstrate the flexibility and responsiveness of FAST VP, the MSSQL OLTP policy was changed from 5% to 30% Flash while a TPC-E-like workload was running and the application had been in a steady state for a number of hours. Within 30 minutes of changing the policy, the number of host I/Os and server transactions being processed started to increase. From a storage perspective, the IOPS across the two SQL Server instances increased from 5,330 to 8,640, a 62% improvement; the SQL Server database transactions/sec increased from 2,897 to 5,215, an 80% improvement; and after approximately 1.5 hours the average response time had dropped from 9 ms to under 3 ms, eventually settling at around 2 ms, roughly a 78% improvement. This slide shows the performance improvement of the two SQL Server OLTP instances after the policy change. During this time all other running applications observed no impact and continued to run at steady state, with small improvements observed because less I/O was being directed to the FC tier.
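The improvement percentages quoted above follow directly from the before/after measurements; a minimal sketch of the arithmetic (the helper function is illustrative, the measurements are from the test):

```python
def pct_improvement(before, after, lower_is_better=False):
    """Relative change between two measurements, as a percent improvement."""
    if lower_is_better:
        return (before - after) / before * 100
    return (after - before) / before * 100

iops = pct_improvement(5330, 8640)                   # combined SQL Server IOPS
tps = pct_improvement(2897, 5215)                    # database transactions/sec
lat = pct_improvement(9, 2, lower_is_better=True)    # average latency in ms
print(round(iops), round(tps), round(lat))           # prints: 62 80 78
```

Note that latency is a lower-is-better metric, so its improvement is computed as the reduction relative to the starting value.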
42 Summary and Solution Benefits
VMAX 40K provides an ideal platform for virtualized critical database applications running varied workloads.
End-to-end visibility with VMware Storage Integrator from EMC enables simplified management and identification of VMAX devices directly from vCenter.
FAST VP cascaded storage groups enable simplified management of ESXi environments with FAST.
Allocate by Policy reduces the time to balance FAST VP storage groups and the amount of data FAST VP needs to move.
FAST VP is tunable and responsive: by adjusting FAST policies, performance can be tuned to meet changing business needs, with increased performance within 30 minutes and no negative impact on other running workloads.
SRDF coordination with FAST VP ensures production-level performance at the R2 site in the event of a failover when configurations are similar, delivering the performance and cost benefits of FAST VP at both production and disaster recovery sites.

EMC Symmetrix VMAX 40K provides an ideal platform for virtualized critical database applications. Our testing has shown that multiple critical applications with dissimilar workloads can run simultaneously on a VMAX 40K array managed by FAST VP in a virtual environment. Cascaded storage groups simplify management of ESXi environments with FAST, reducing the complexity of managing storage groups both for FAST and for masking. Allocate by Policy reduces the time to balance FAST VP storage groups and the amount of data FAST VP needs to move. FAST VP is tunable and responsive: by adjusting FAST policies, storage performance can be tuned to meet changing business needs. By tuning the MSSQL OLTP policy, response times improved within 30 minutes with no change other than increasing the Flash percentage in the FAST policy. With SRDF coordination, FAST VP ensures production-level performance at the R2 site in the event of a failover when configurations are similar.
This ensures that the performance and cost benefits of FAST VP can be achieved at both the production and disaster recovery sites.