1
EMC INFRASTRUCTURE FOR VMWARE CLOUD ENVIRONMENTS
EMC Symmetrix VMAX 40K, EMC Symmetrix FAST VP, EMC SRDF, and VMware vSphere 5 This presentation shows how EMC Symmetrix VMAX 40K supports data center provisioning for critical databases and applications in a VMware cloud. Our solution combines VMAX 40K, FAST VP, and SRDF in a virtual environment based on VMware vSphere 5. EMC Solutions Group
2
Agenda
- Solution and Technology Overview
- Configuration Details
- Deploying Storage on VMAX 40K with Unisphere for VMAX
- Virtualization and Application Profiles
- Validation and Testing
- Summary and Solution Benefits
3
Solution and Technology Overview
Let’s take a look at our EMC solution for a fast, flexible, and capable infrastructure to support VMware cloud environments.
4
Business Case Support for all of your mission-critical applications and databases Assure performance at production and remote sites with FAST VP and FAST VP coordination with SRDF Reduce time to manage storage resources with Unisphere for VMAX Increase your ROI—Maximize use of Tier 1, 2, and 3 storage assets with federated tiered storage Support mission-critical applications and databases on your VMware cloud with EMC Symmetrix VMAX 40K, FAST VP, SRDF, and the new Unisphere for VMAX.
5
Introducing Symmetrix VMAX 40K
New Bigger, better, faster 2X performance and 2X capacity New 2.5” SAS drive support 400 drives in standard 24” cabinet System Bay Dispersion Flexible expansion for space-constrained data centers EMC introduces the next generation in high-end storage—the Symmetrix VMAX 40K and Enginuity 5876 operating environment, which builds on the Symmetrix VMAX 20K and 10K foundation of powerful, trusted, smart storage to provide even higher levels of performance, availability, and intelligence in the virtual data center. Symmetrix VMAX 40K is the highest-performing, most scalable storage platform in the industry. With VMAX 40K, you get twice the performance and twice the capacity—within the same space and power footprint. You can get the most performance in the smallest footprint with VMAX 40K, along with more performance per watt. Symmetrix VMAX 40K incorporates many innovative enhancements, including high-density storage where space-constrained data centers can accommodate 33 percent more drives in the same footprint. That’s up to 3200 high-density drives on one array. Symmetrix VMAX 40K also delivers System Bay Dispersion, so the array can be separated by up to 82 feet (25 meters) in data centers with floor-loading issues. This feature also helps you work around data center obstructions, such as columns, during installation.
6
Powerful Scale and Consolidation
[Diagram: Virtual Matrix connecting up to 8 Engines] Massive scale-out Transform your hybrid cloud environments Let’s take a look at how Symmetrix VMAX 40K provides the scale and consolidation required for hybrid cloud deployments. As with the VMAX 20K, you start with one Engine, and as additional performance and scale are required, new Engines are added. Note to Presenter: Click now in Slide Show mode for animation. Similar to the Symmetrix VMAX 20K, VMAX 40K scales up to eight Engines, and Engine resources are networked and shared through the Virtual Matrix.
7
Online Transactional Databases
Solution Overview Deploy multiple critical database applications on a VMware private cloud enabled by a VMAX 40K array running Enginuity 5876 code, optimized by FAST VP, with site protection enabled by EMC SRDF replication. Simulate a highly active customer database environment serving Tier 1 applications. This solution looks at implementing multiple applications hosted on the VMAX 40K platform, protecting against a site failure using SRDF, and optimizing for performance using EMC FAST VP. Microsoft SQL Server OLTP and DSS, Oracle OLTP, and SAP OLTP environments are deployed in virtual machines on VMware ESXi 5.0 Update 1. Online transactional databases: Oracle Database 11g R2, SAP ERP 6.0 EHP 4, Microsoft SQL Server 2012. Data warehouse: Microsoft SQL Server 2012.
8
Solution Overview, continued
Managed, monitored, and optimized through EMC Unisphere for VMAX Simplified array management of FAST VP using the “Allocation by Policy” feature of FAST VP Assured performance in DR scenarios enabled by SRDF-aware FAST VP (Enginuity 5876) with constant, automated tuning of application storage at both production and disaster recovery sites Virtualized with VMware vSphere 5 Update 1 EMC Unisphere for VMAX provides an intuitive array configuration and monitoring solution, enabling fast and easy provisioning of storage as well as advanced performance monitoring and diagnostic capability. With the latest features of the Enginuity 5876 operating system on the VMAX 40K array, deployment of FAST VP is simpler. By turning on Allocation by Policy, considerations for initial placement of data are eliminated, as FAST VP manages placement based on the policy set and the capacity available in the storage tiers. With SRDF awareness for FAST VP enabled, FAST VP transfers sub-LUN statistics to the remote array once per hour. The FAST controller on the target array then has up-to-date information about read activity on the production devices and ensures placement of hot and cold data across tiers similar to the source array. This coordination of movement means that the production and target arrays remain closely matched in terms of data placement, so in the event of a failover, the time before performance on the target array matches that of the source is reduced, because less data needs to be relocated.
9
Solution Architecture and Design
Two ESXi servers configured at each site Oracle, SAP, and SQL Server deployed on VMs EMC VMAX 40K storage arrays optimized by FAST VP, protected by SRDF/S 8 Gb/s FC SAN running between hosts and sites 1 Gb/s Ethernet network connection between hosts and sites The architecture diagram shown here depicts the environment setup: Two ESXi servers were deployed in a cluster at each site. Oracle, SAP, and MS SQL were deployed on VMs provisioned on LUNs from the EMC VMAX 40K array, replicated and protected using EMC Symmetrix Remote Data Facility (SRDF). Replication is over synchronous distance across 2 x 8 Gb/s FC links. All hosts are connected to the arrays using 8 Gb/s Fibre Channel. Failover is controlled from a separate management host with connections to both environments.
10
Solution Hardware
The following hardware components were used to build this solution:
- Storage arrays: 2 x EMC Symmetrix VMAX 40K, each with 3 Engines; 384 GB cache; 32 x 200 GB Flash drives; 126 x 600 GB 10K FC drives (vault); 64 x 450 GB 15K FC drives; 72 x 2 TB 7.2K SATA drives; 24 x 8 Gb/s FC ports
- VMware ESXi servers (production virtual environment): 2, each with 8 x ten-core Intel Xeon E7; 1 TB RAM; 2 x dual-port Brocade 825 8 Gb/s FC HBAs
- VMware ESXi servers (disaster recovery virtual environment): 2, each with 4 x ten-core Intel Xeon CPUs; 128 GB RAM; dual 1 Gb NICs; dual 10 Gb CNAs
- FC switches: 2 x Brocade DCX-4S 8 Gb/s FC director-class switches
- Ethernet switches: 2 x 1 Gb/s Ethernet switches for IP connectivity
11
Solution Software
The following software resources were used in the solution environment:
- EMC Symmetrix VMAX Enginuity code: 5876
- EMC PowerPath: PowerPath/VE 5.7 for VMware
- EMC Unisphere for VMAX: 1.0
- EMC Solutions Enabler: 7.4
- VMware vSphere: ESXi 5 Update 1
- Applications: SAP ERP 6 EHP 4; Oracle ASMLib 2.0.5; Oracle Database 11g R2; SQL Server 2012 RTM
- Operating systems: Windows Server 2008 R2; SUSE Linux Enterprise Server 11; Red Hat Enterprise Linux 5.7
- Workload simulation test tools: SQL Server 2012 - MSTPCE (OLTP) toolkit and Quest BMF (DSS/DW); Oracle - SwingBench 2.3; HP LoadRunner 9.52 Build 3188
12
Key Technology Components
EMC components: Symmetrix VMAX 40K with Enginuity 5876 Unisphere for VMAX Symmetrix Remote Data Facility (SRDF) Symmetrix FAST VP with SRDF coordination PowerPath/VE EMC Virtual Storage Integrator (VSI) Applications Oracle Database 11g R2 Enterprise Edition Microsoft SQL Server 2012 SAP ERP 6.0 EHP 4 The solution is built on these key elements: EMC components: Symmetrix VMAX 40K with Enginuity 5876, Unisphere for VMAX, Symmetrix Remote Data Facility (SRDF), Symmetrix FAST VP with SRDF coordination, and PowerPath/VE. Applications: Oracle Database 11g R2 Enterprise Edition, Microsoft SQL Server 2012, and SAP ERP 6.0 EHP 4.
13
Symmetrix VMAX 40K Array with Enginuity 5876 Operating Environment
[Diagram: Flash, FC, and SATA drive tiers] High-end enterprise storage array with simple, intelligent, modular design that allows the system to grow seamlessly and cost-effectively from an entry-level configuration to the world’s largest storage array New dense configuration options with 2.5” Flash, FC, SAS drives Up to 33% more drives in the same footprint More performance per watt Virtual Provisioning provides non-disruptive, on-demand thin provisioning FAST VP provides automatic storage tiering at the sub-LUN level FAST VP SRDF coordination ensures performance at disaster recovery sites Federated tiered storage, maximizing your ROI on Tier 1, 2, and 3 storage Capacity-efficient snapshot capabilities with TimeFinder VP Snap Simplified configuration and management with Unisphere for VMAX High-end, enterprise storage array with simple, intelligent, modular design that enables the system to grow seamlessly and cost-effectively from an entry-level configuration to the world’s largest storage array. Supports Flash, FC, and SATA drives within a single array, and an extensive range of RAID types. Virtual Provisioning provides non-disruptive, on-demand thin provisioning. FAST VP provides automatic storage tiering at the sub-LUN level. Virtual LUN VP Mobility enables manual movement of thin LUNs between pools, transparently and with no host or application impact, including the ability to re-gather a thin volume’s many thin device extents from multiple thin pools and move them all to a single pool, regardless of underlying disk technology or RAID type. EMC PowerPath/VE supports multiple I/O paths to logical devices and intelligently distributes I/O requests across all available paths. Configured and managed by Unisphere for VMAX.
14
EMC Virtual Provisioning Overview
[Diagram: three ESXi hosts, each presented a 10 TB thin device, with much smaller physical allocations (for example, 3 TB and 4 TB) drawn from a common storage pool] Improves storage utilization Reduces storage provisioning complexity and overhead Automates processes to easily grow storage Overprovision storage to last the lifetime of an application, without providing all the physical storage up front Add capacity non-disruptively and on demand Automatically rebalance thin pools to maintain performance Simplifies storage management with Unisphere for VMAX EMC Virtual Provisioning is EMC’s implementation of thin provisioning and is designed to simplify storage management, improve capacity utilization, and enhance performance. Virtual Provisioning provides for the separation of physical storage devices from the storage devices as perceived by host systems. This enables non-disruptive provisioning and more efficient storage utilization. Virtual Provisioning makes it possible to provision storage for applications without providing all of the physical storage up front. This means that administrators can assign enough storage to last the lifetime of the application without needing to purchase all the physical storage in advance. This approach has the following benefits: Initial acquisition costs can be reduced, because storage is added only as required. There are fewer disruptions to the application to add or change storage devices.
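The overprovisioning idea above can be illustrated with a small back-of-the-envelope calculation. This is a hypothetical Python sketch, not EMC tooling; the per-host allocation figures are example values.

```python
# Hypothetical illustration of thin-provisioning oversubscription (not EMC tooling).
# Each host is presented a large thin device, but only written data actually
# consumes physical capacity from the shared pool.

def oversubscription_ratio(presented_tb, physically_allocated_tb):
    """Ratio of capacity promised to hosts vs. capacity actually consumed."""
    return sum(presented_tb) / sum(physically_allocated_tb)

# Three ESXi hosts, each presented a 10 TB thin device, but with only a few
# TB of data written so far (assumed example allocations).
presented = [10, 10, 10]
allocated = [3, 4, 3]
ratio = oversubscription_ratio(presented, allocated)
print(f"Presented {sum(presented)} TB, consumed {sum(allocated)} TB "
      f"(oversubscribed {ratio:.1f}x)")
```

The physical pool only needs to grow as written data grows, which is why capacity can be added on demand rather than purchased up front.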
15
EMC Symmetrix FAST VP – Overview
Automatic storage tiering for Virtual Provisioning thin pools Analysis and data movement at the sub-LUN level: Spreads data from a single thin device across multiple pools Places very active parts of a LUN on high-performing Flash drives Places less active parts of a LUN on higher-capacity, more cost-effective FC or SATA drives Moves data at the extent group level (7,680 KB) Moves data based on user-defined policies and application performance needs Data movement is automatic and nondisruptive FAST VP provides support for sub-LUN data movement in thin-provisioned environments. It combines the advantages of Virtual Provisioning with automatic storage tiering at the sub-LUN level to: Optimize performance and cost Radically simplify storage management Increase storage efficiency FAST VP uses intelligent algorithms to continuously analyze devices at the sub-LUN level. This enables it to identify and relocate the specific parts of a LUN that are most active and would benefit from being moved to higher-performing storage such as Flash. It also identifies the least active parts of a LUN and relocates that data to higher-capacity, more cost-effective storage such as SATA, without altering performance. Data movement between tiers is based on performance measurement and user-defined policies, and is executed automatically and non-disruptively by FAST VP. Symmetrix VMAX with FAST VP – Getting the right data, to the right place, at the right time
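To make the sub-LUN idea concrete, the toy sketch below scores extent groups by recent activity and places the hottest on Flash, the coldest on SATA, and the rest on FC. This is assumed, heavily simplified logic for illustration only, not the actual FAST VP algorithm.

```python
# Toy sketch of sub-LUN tiering (illustrative; NOT the real FAST VP algorithm).
# Each 7,680 KB extent group gets an activity score; the hottest groups go to
# Flash, the coldest to SATA, and everything in between to FC.

def place_extent_groups(activity, flash_slots, sata_slots):
    """activity: {extent_group_id: io_count}. Returns {extent_group_id: tier}."""
    ranked = sorted(activity, key=activity.get, reverse=True)  # hottest first
    placement = {}
    for i, eg in enumerate(ranked):
        if i < flash_slots:
            placement[eg] = "FLASH"
        elif i >= len(ranked) - sata_slots:
            placement[eg] = "SATA"
        else:
            placement[eg] = "FC"
    return placement

# Five extent groups of one thin device, with assumed I/O counts.
activity = {0: 900, 1: 15, 2: 450, 3: 2, 4: 300}
print(place_extent_groups(activity, flash_slots=1, sata_slots=2))
```

Because placement is decided per extent group rather than per LUN, a single thin device ends up spread across multiple pools, exactly as the bullet points describe.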
16
Symmetrix FAST VP – Components
Storage tier: A set of one or more virtual pools containing devices with the same technology type, drive speed, and RAID protection level. Storage group: A logical grouping of storage devices for common management. FAST policies: A set of tier usage rules that is applied to associated storage groups. A policy assigns an upper usage limit for each tier, specifying how much data from a storage group can reside on each tier. A storage group is a logical grouping of storage devices used for common management. A storage group is associated with a FAST policy, which determines how the storage group’s devices are allocated across tiers. A FAST policy is a set of tier usage rules that is applied to associated storage groups. A FAST policy can specify up to three tiers and assigns an upper usage limit for each tier. These limits determine how much data from a storage group can reside on each tier included in the policy. Administrators can set high-performance policies that use more Flash drive capacity for critical applications, and cost-optimized policies that use more SATA drive capacity for less-critical applications. A storage tier is made up of one or more virtual pools. To be a member of a tier, a virtual pool must contain only data devices that match the technology type, drive speed, and RAID protection type of the tier. Administrators can set: High-performance policies that use more Flash drive capacity for critical applications Cost-optimized policies that use more SATA drive capacity for less-critical applications
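The per-tier upper limits can be sketched as simple arithmetic. The example below uses the 15% / 35% / 50% split that this solution later applies to the Oracle storage group; the 2,048 GB group size is an assumed example, and the code is illustrative, not EMC tooling.

```python
# Sketch of how a FAST policy's per-tier upper limits bound a storage group's
# placement (illustrative only; values are examples, not recommendations).

def tier_capacity_limits(sg_capacity_gb, policy):
    """policy maps tier name -> max % of the storage group allowed on that tier."""
    return {tier: sg_capacity_gb * pct / 100 for tier, pct in policy.items()}

# A policy may specify up to three tiers; the percentages must total at least
# 100 so that the whole storage group can be placed somewhere.
oracle_policy = {"FLASH": 15, "FC": 35, "SATA": 50}
assert sum(oracle_policy.values()) >= 100

print(tier_capacity_limits(2048, oracle_policy))
```

A "high-performance" policy would simply shift these percentages toward Flash, and a cost-optimized one toward SATA.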
17
FAST VP SRDF Coordination
FAST VP continually monitors and adjusts data placement at the production site Performance stats are captured at both sites and exchanged every hour Decisions on data placement are made at both sites using data from the active site FAST VP now coordinates with SRDF. Coordination is enabled per storage group, and enables FAST VP to transmit performance stats packaged with the SRDF data. The FAST controller on the R2 array is able to use current performance stats from the last hour to update its scoring for devices and extents and make decisions on the R2. In real terms, this means that in the event of a disaster, the placement of data on the R2 more closely matches the R1 array. This feature is tested as part of this solution. Symmetrix VMAX with FAST VP and SRDF Coordination – Getting the right data, to the right place, at the right time at both Production and DR sites
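The key behavior, that the R2 side scores extents using the active site's statistics rather than its own near-idle replica I/O, can be modeled with a toy merge. This is an assumed, simplified model for intuition only; the real statistics exchange is internal to Enginuity.

```python
# Toy model (assumed, simplified) of FAST VP SRDF coordination: once per hour
# the R1 side ships per-extent read statistics to R2, so the R2 FAST controller
# scores extents using production activity instead of replica-side I/O.

def merge_stats(local_r2_stats, shipped_r1_stats):
    """R2 bases placement decisions on the active (R1) site's statistics."""
    merged = dict(local_r2_stats)
    merged.update(shipped_r1_stats)  # production stats take precedence
    return merged

r2_local = {"ext0": 1, "ext1": 0}        # the replica sees almost no host reads
r1_shipped = {"ext0": 900, "ext1": 12}   # production read activity
print(merge_stats(r2_local, r1_shipped))
```

With production scores driving both sides, the R1 and R2 tier layouts converge, which is what reduces relocation work after a failover.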
18
Configuration Details
Next we’ll look at some of the specific configuration and setup characteristics of the solution.
19
Storage Configuration-Virtual Pools
The thin pools created on each array:
- FLASH_3RAID5: 200 GB Flash drives, RAID 5 (3+1), 32 drives, 68.8 GB TDATs, 64 data devices (TDATs), 4.2 TB pool capacity
- FC10K_RAID1: 600 GB 10K FC drives, RAID 1, 126 drives, 66 GB TDATs, 504 TDATs, 32 TB pool capacity
- FC15K_RAID1: 450 GB 15K FC drives, RAID 1, 64 drives, 49.2 GB TDATs, 256 TDATs, 12.2 TB pool capacity
- SATA_6RAID6: 2 TB 7.2K SATA drives, RAID 6 (6+2), 72 drives, 240 GB TDATs, 60 TB pool capacity
EMC Virtual Provisioning greatly simplifies the storage design. Thin pools were created on each array based on the drive types available: a Flash tier was created and protected with RAID 5, FC tiers were created and protected with RAID 1, and a SATA tier was created and protected with RAID 6. This configuration was mirrored on the target array to ensure the same levels of performance can be achieved at both sites. Both source and target arrays are configured with the same number of drives and pools. The following slide details how these pools are utilized by the applications and FAST tiers.
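The pool capacities above are roughly the TDAT count times the TDAT size. The quick Python check below illustrates the arithmetic; results differ slightly from the listed figures because of rounding and decimal-vs-binary unit conventions.

```python
# Quick check of the thin-pool capacity arithmetic: usable pool capacity is
# approximately the number of data devices (TDATs) times the TDAT size.

def pool_capacity_tb(num_tdats, tdat_size_gb):
    # Binary TB (1 TB = 1024 GB); the source tables may use different rounding.
    return num_tdats * tdat_size_gb / 1024

print(f"Flash pool : {pool_capacity_tb(64, 68.8):.1f} TB")   # table lists 4.2 TB
print(f"FC 10K pool: {pool_capacity_tb(504, 66):.1f} TB")    # table lists 32 TB
print(f"FC 15K pool: {pool_capacity_tb(256, 49.2):.1f} TB")  # table lists 12.2 TB
```

The same arithmetic works in reverse when sizing a pool: divide the target capacity by the TDAT size to get the number of data devices to create.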
20
Virtual Pool Use by Application
SAP, Oracle, and MS SQL OLTP share common Flash, FC, and SATA pools MS SQL DSS application shares a common SATA pool with all applications but is bound to a separate FC pool The SAP, Oracle, and Microsoft SQL Server OLTP applications are configured to share common Fibre Channel, SATA, and Flash Virtual pools. The smaller FC pool is utilized only by the MS SQL DSS devices. Because DSS traffic can be long sequential reads, segregating it from the highly random OLTP traffic stops this potentially disruptive application from affecting other shared resources. This also reduces the load on the shared FC pool.
21
Front-End Port Usage Isolate write-intensive applications to dedicated front-end ports to ensure consistent performance Common traffic on common ports PowerPath/VE manages load balancing and failover In some instances, as in the setup detailed here, multiple applications run high workloads that peak at the same time with very different workload patterns. Providing a logical separation of workloads eliminates any possible contention for hardware resources, such as an HBA and/or array front-end port, and guarantees service levels. This is done by using separate port groups for masking views, and through SAN zoning. Each workload domain used different port groups, as shown in the diagram. SAP, although physically running on the same server as Oracle, was segregated to use different front-end ports and HBAs. Both MSSQL OLTP instances, which ran similar workloads, used the same ports and were separated from the DSS workload. PowerPath/VE manages failover and load balancing of host I/O at the ESXi server level, ensuring even utilization of host HBA resources by applications running in the virtual environment, as well as providing industry-leading failover and proactive path monitoring. PowerPath/VE considers array front-end port utilization and queue depths when making decisions about which path to direct data onto. This creates efficiencies in path selection that result in a well-tuned implementation with very even distribution of load across directors. The net result of this advance planning is limited contention for resources from competing applications.
22
Configuring FAST VP FAST VP is either enabled or disabled
Data Movement should be set to Automatic Relocation Rate controls the aggressiveness of FAST The new Allocate-by-Policy feature simplifies capacity management of thin-provisioned environments Depicted here are the main FAST VP configuration settings. These are: State: Either On or Off. FAST is either on or off. Movement Mode: Either Automatic or Off. There is no user interaction once FAST VP is turned on; FAST VP moves data at a granular level (approximately 8 MB chunks), and there is no way for a user to identify each chunk being moved. Relocation Rate: This controls how aggressive FAST VP will be in its data movements. The lower the value, the more aggressive FAST VP will be. The minimum value is 1, the maximum value is 10, and the default value is 5. For the testing in our environment we used a value of 2. This setting affects the amount of data that will be moved at any given time, and the priority given to moving the data between pools; it does not affect the speed of the data movement. Reserved Capacity Limit: A percentage of the capacity of each virtual pool that is reserved for non-FAST activities. If the free space in a given virtual pool (as a percentage of pool-enabled capacity) falls below this value, the FAST controller does not move any more data into that pool. To further simplify the management and capacity planning of FAST VP environments, Enginuity and Solutions Enabler 7.4 provide FAST VP allocation by policy. This system-wide setting ensures that new allocations for thin devices associated with FAST VP policies no longer come only from the pool to which a thin device is bound, but from any one of the tiers associated with the FAST policy. In the event that one tier can’t service a new allocation because it is full, the tracks will be allocated from one of the remaining tiers.
Allocate-by-Policy greatly simplifies capacity management, as tier demand reports can easily be generated via the command line or visually through Unisphere to ascertain how storage is being utilized and to plan for future needs. As a recommended practice, enable VP allocation by FAST policy. Time windows: Create time windows to specify when data can be collected for performance analysis and when data movements can be executed. We recommend keeping these windows always open so FAST VP can use the most recent analysis to optimize data placement.
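The value ranges described for these settings can be captured in a small validation sketch. The parameter names below are illustrative, not Solutions Enabler or Unisphere API names, and the reserved-capacity value is an assumed example.

```python
# Hedged sketch of validating the FAST VP settings described above before
# applying them (parameter names are illustrative, not an EMC API).

def validate_fast_vp_settings(relocation_rate, reserved_capacity_pct):
    # Relocation Rate: 1 (most aggressive) through 10; the default is 5.
    if not 1 <= relocation_rate <= 10:
        raise ValueError("relocation rate must be between 1 and 10")
    # Reserved Capacity Limit is a percentage of each virtual pool.
    if not 0 <= reserved_capacity_pct <= 100:
        raise ValueError("reserved capacity must be a percentage (0-100)")
    return {"relocation_rate": relocation_rate,
            "reserved_capacity_pct": reserved_capacity_pct}

# The solution's testing used a fairly aggressive relocation rate of 2;
# the 10% reserved capacity here is only an example value.
print(validate_fast_vp_settings(relocation_rate=2, reserved_capacity_pct=10))
```

Catching out-of-range values up front mirrors what the management interface enforces when these settings are applied.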
23
Configuring FAST VP Create FAST VP Tiers from VP pools
Define policies by specifying percentages of Flash, FC, and SATA Apply to the application Configuring FAST VP takes minutes. There are three main steps. From Unisphere, select Storage; from there you: Create storage tiers from existing VP pools Define FAST VP policies Associate them with applications
24
Tier Demand Reports Capacity management is simplified
Excess GB shows how much space is left in each tier Maximum demand from FAST VP storage groups per tier When using Allocate-by-Policy, managing oversubscription in the array is simplified with the knowledge that FAST will automatically distribute devices across tiers according to the policies set. The tier demand reports from Unisphere can be used to see at a glance how much space is left in each tier. Storage administrators can use this as a gauge to figure out when they need to order additional storage. It is also possible to provide limited-access accounts for management and auditing teams to view this level of information. The Used GB column shows the current pool usage. The Max SG Demand column shows the absolute maximum demand that FAST-enabled storage groups place on a particular tier, whereas the Excess column tells you how much space is left after accounting for the maximum potential usage from FAST VP. The storage admin can set up accounts with view-only access so management and auditing teams can be self-sufficient when it comes to generating reports on storage usage. Use Tier Demand reports from Unisphere to monitor tier usage and demand.
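The Excess arithmetic described above is straightforward to sketch: it is the tier's capacity minus the worst case in which every FAST-managed storage group consumes its full policy allowance on that tier. All numbers below are assumed examples, not figures from this solution.

```python
# Sketch of the Tier Demand report arithmetic (illustrative numbers):
# "Excess" = tier capacity remaining after every FAST-managed storage group
# hits its maximum policy-allowed demand on this tier.

def tier_excess_gb(pool_capacity_gb, max_sg_demand_gb):
    """Capacity left over if all storage groups reach their maximum demand."""
    return pool_capacity_gb - sum(max_sg_demand_gb)

# Assumed example: a 4,200 GB Flash tier with three storage groups whose
# policies allow at most these amounts on Flash.
excess = tier_excess_gb(4200, [307, 100, 85])
print(f"Excess: {excess} GB")  # a negative value would mean an oversubscribed worst case
```

When Excess trends toward zero (or negative), that is the cue the notes describe for ordering additional storage.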
25
Deploying Storage on VMAX 40K Using Unisphere for VMAX
Next we’ll look at how simple it is to deploy storage on the VMAX 40K using the new Unisphere for VMAX.
26
Simplified Storage Provisioning with Unisphere for VMAX
Unisphere for VMAX provides the user with an intuitive interface for storage provisioning, as well as enabling comprehensive performance analysis, monitoring, and trending of storage assets. Built with simplicity in mind, Unisphere’s common tasks guide a new user through the flow of creating hosts, provisioning storage, and applying FAST policies. With Unisphere, customers can monitor and manage multiple VMAX arrays from a single management interface. Unisphere for VMAX is built on Adobe Flex technology and has been designed to be consistent with the existing range of EMC Unisphere management products to provide a familiar look and feel for management across the entire EMC product stack. The Unisphere task-oriented design makes provisioning and deploying storage from the VMAX array very easy. Unisphere’s common tasks wizards guide new users through the process of provisioning storage to hosts. Prompts guide the user to next steps, resulting in an intuitive flow and learning process.
27
Cascaded Storage Groups
A new feature providing the ability to nest multiple storage groups in a single masking view Enginuity 5876 offers a new feature called cascaded storage groups. Essentially, it is the ability to nest storage groups within storage groups, so a parent storage group is associated with a masking view and contains a number of nested child groups. This makes it much easier to manage ESXi clusters with FAST VP-enabled storage. Prior to 5876, it was necessary to create and manage separate storage groups for FAST and for masking, which made processes more complicated; now each application can be managed with its own storage group and FAST policy within a single masking view. Simplifies FAST VP configurations in virtual environments Multiple applications within a single masking view, each managed with its own storage group and FAST policy Simplified monitoring and management at the application level
28
Enable FAST VP Coordination on Storage Groups
With this feature enabled, FAST VP sends device usage statistics from R1 to R2 to ensure that the FAST engine at the standby/failover site has up-to-date metrics on which to base decisions. Enabled under storage group management in Unisphere: Check Box: Enable FAST VP RDF Coordination Coordination must be enabled on storage groups at both source and target VMAX arrays. SRDF coordination needs to be enabled in order to send FAST statistics between the R1 and R2 storage arrays. This allows R1 and R2 to send and receive performance statistics for FAST VP movement for the associated storage group. SRDF/S and SRDF/A are supported, as well as concurrent SRDF configurations.
29
Virtualization and Application Profiles
This section looks at the virtualization and application configuration profiles we used in this solution.
30
VMware Configuration Reduce HBA queuing for vSphere ESXi servers by changing the queue depth: esxcli system module parameters set -p bfa_lun_queue_depth=64 -m bfa Virtual machine boot LUNs configured to use the LSI SAS adapter VMware Paravirtual SCSI (PVSCSI) adapters are used to configure DATA LUNs for high performance Two ESXi servers were deployed at each site. Storage presented from the VMAX 40K was deployed to the ESXi nodes at each site. Virtual machines were deployed on storage from the VMAX 40K array with FAST policies applied. We performed minimal tuning to the ESXi operating environment. We reduced HBA queuing for the vSphere ESXi servers by changing the queue depth using the esxcli command shown. The VM boot LUNs were configured to use the LSI SAS adapter. VMware Paravirtual SCSI (PVSCSI) adapters are used to configure DATA LUNs for high performance.
31
EMC Virtual Storage Integrator and VMAX 40K
EMC Virtual Storage Integrator (VSI) for VMware vSphere is a plug-in to the VMware vSphere client that provides a single management interface for managing EMC storage within the vSphere environment. It provides enhanced visibility into the VMAX 40K directly from the vCenter GUI. The Storage Viewer and Path Management features are accessible through the EMC VSI tab. From here you can gather information about the datastores configured, as well as the underlying storage pools they are bound to. It is also possible to view LUN-level statistics through the performance tab. In the solution, VMAX volumes host all datastores, and Storage Viewer provides details of each datastore’s virtual volumes, storage volumes, and paths. The Storage Viewer is particularly useful when configuring the environment; EMC VSI can also export LUN listings.
32
Application Configuration
Oracle OLTP: 1 Oracle DB instance, 12 vCPUs with 53 GB RAM; 1 DB per VM, 2 TB capacity; SwingBench order entry workload with 400 users, 60:40 R/W ratio MSSQL OLTP (TPC-E-like): 2 SQL instances, 16 vCPUs with 32 GB RAM; 1 DB per VM, 1 TB capacity; mixed workloads to simulate hot and warm applications, 85:15 R/W ratio SAP OLTP: 3 SAP ERP 6 IDES EHP 4 instances, 16 vCPUs with 32 GB RAM; 1 Oracle DB instance at 845 GB capacity; 1,000 LoadRunner update users plus local client copy simulation, 80:20 R/W ratio MSSQL OLAP (DSS, TPC-H-like): 1 SQL instance, 32 vCPUs with 128 GB RAM; 1 DB per VM, 2 TB capacity; 2 concurrent loads, 100% read VMDK stands for Virtual Machine Disk; a VMDK instance is a VM instance using a VMDK datastore. Oracle: Our Oracle configuration included one OLTP database on one VM. The VM had 12 vCPUs with 48 GB of memory. The workload scaled to 1,000 users with a database read/write ratio of 60/40. SAP: SAP used IDES release 7.01 for testing, on Oracle 11g. To generate the load, we used HP LoadRunner to simulate normal activity, and an SAP local client copy (copying company-specific tables from one SAP partition to another). Details: Application layer: SAP Enhancement Package 4 (EHP4) for SAP ERP 6.0 IDES, SAP NetWeaver Application Server for ABAP release 7.01. Database layer: Oracle 11g. Operating system layer: SUSE Linux Enterprise Server (SLES) for SAP Applications 11 SP1. Simulator layer: 1 x HP Load Controller and HP Virtual User Controller, 4 x HP LoadRunner Generator. All SAP and database instances are installed on VMware vSphere virtual machines. MS SQL: Three separate MSSQL instances were loaded: 2 x OLTP running a TPC-E-like workload, 1 x DSS running a TPC-H-like workload. The virtual machine configuration is shown in the table. Virtual memory was dedicated for all hosts, with no over-commitment of memory. For OLTP, mixed workloads were simulated to represent hot and warm applications; for DSS, a single concurrent user load was driven against the database to drive high read activity.
33
Validation and Testing
Next we’ll take a look at the validation and testing that we performed. Application Performance Validation Online Policy Tuning SRDF Coordination Application Failover with Rapid Ramp-Up
34
Application Performance Summary
Four mission-critical, high-transaction applications running mixed workloads operating in a VMware vSphere 5.0 private cloud deployed on VMAX 40K SAP OLTP 2 x SQL Server OLTP Instances 1 x SQL Server Data Warehouse Instance Oracle OLTP We deployed these four applications on our VMAX 40K environment: SAP 2 x SQL Server OLTP Instances 1 x SQL Server Data Warehouse Instance Oracle OLTP
35
FAST VP Policies The FAST VP policies set for our baseline configuration, as upper usage limits per tier for each application’s storage group:
- MSSQL1_OLTP and MSSQL2_OLTP (policy MSSQL_OLTP): 5% Flash, 40% FC, 100% SATA
- MSSQL DSS (policy MSSQL_DSS): 0% Flash
- Oracle: 15% Flash, 35% FC, 50% SATA
- SAP: 10% Flash, 80% FC
Both MSSQL1 and MSSQL2 applications share the same FAST policy. The policies shown here restrict the usage of the Flash tier to prevent any single application from dominating Flash resources. This means that as more applications come online, Flash resources are available to accommodate the workloads for these policies.
36
Build Validation—All Applications Running
Unisphere's real-time analysis and diagnostic performance charts give insight into how the VMAX 40K and the deployed applications are functioning from the storage side. Shown here is the array host I/O per second broken down by storage group: the MSSQL DSS application, shown in orange, generated the bulk of the load on the array, with the remaining applications making up the rest. The DSS workload is a cyclical one that creates the sawtooth effect on the chart; this is normal for the workload and does not affect the other running applications. The diagnostic chart displays the R1 array host I/O per second for each application, with all applications deployed and running at steady state and FAST VP enabled and tuning.
MSSQL2 is running at approximately 8,000 IOPS with a TPC-E-like profile
MSSQL1 is running at approximately 4,000 IOPS with a TPC-E-like profile
Oracle is running 400 highly active users in the SwingBench order-entry benchmark
SAP is running at approximately 4,000 IOPS with 1,000 active users
Even at these relatively high I/O profiles, the four applications co-exist on shared storage drawn from the underlying virtual pool technologies managed by FAST VP. With well-tuned FAST VP policies, each application achieved acceptable performance without affecting the others.

OLTP results
Application server    Transactions per minute    Average read response time
Oracle                97,846                     5.5 ms
MSSQL1                37,920                     10 ms
MSSQL2                139,020                    11 ms

DSS/OLAP results
Application           Average throughput
MSSQL DSS             808 MB/s

SAP results
Application server    Transactions per minute    Average dialog response time
SAP                                              2894.8 ms
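For scale, the transactions-per-minute figures in the OLTP table convert to per-second rates as follows (a trivial calculation over the table's own values, shown only to make the workload sizes concrete):

```python
# Convert the reported transactions per minute to transactions per second.
tpm = {"Oracle": 97_846, "MSSQL1": 37_920, "MSSQL2": 139_020}

tps = {app: round(v / 60, 1) for app, v in tpm.items()}
print(tps)  # {'Oracle': 1630.8, 'MSSQL1': 632.0, 'MSSQL2': 2317.0}
```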
37
Validating FAST VP SRDF Coordination
Balance at source and target
Similar tier usage at both sites
Solid lines are source (R1) devices; dotted lines are target (R2) devices
To validate SRDF coordination with FAST VP, all applications were migrated to the FC tier on both arrays and a full SRDF establish was performed to ensure a neutral starting point. The graph tracks the utilization of one application over the period it took to balance: specifically, the rebalancing of one Symmetrix data device in the Oracle storage group that was highly active during the rebalance. The solid lines represent tier usage on the R1 and the dotted lines represent tier usage on the R2. At the starting point all data resided on FC, and FAST policies were then associated with the applications. Over time, tier utilization on the R1 and R2 matched very closely. This shows that capacity is being balanced across the device in a similar manner on both arrays, meaning the FAST VP controller at each site made the same data-placement decisions. Continued on the next slide.
38
SRDF Coordination, continued
Source array:
~]# symcfg list -tdev -range 426:426 -bound -detail -v -sid 542
(Thin-device detail listing for device 0426: Symmetrix ID, enabled and bound capacity in tracks, and per-pool total, subscribed, allocated, and written tracks for the bound pool FC10K_RAID1 and the FLASH_3RAID and SATA_6RAID pools, followed by the total tracks.)

Disaster recovery array:
~]# symcfg list -tdev -range 426:426 -bound -detail -v -sid 541
(The same thin-device detail listing for the R2 copy of device 0426.)

A detailed listing can be produced with the symcfg list command, as shown here. The balanced usage of the same device at both sites shows that the FAST VP engine on the R2 is making its decisions based on the same information as the R1 array. The total written tracks on R1 and R2 differ because zero-block detection on the SRDF links eliminates tracks that are all zeros instead of writing them to the R2.
The device has been balanced across all tiers in a similar distribution on both R1 and R2.
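The comparison the slide describes can be sketched in a few lines: given per-pool allocated-capacity percentages for the same device on each array, check that R1 and R2 placement agrees within a tolerance. The percentages below are invented purely for illustration; in practice they would be read from the `symcfg list -tdev -detail` output shown above.

```python
# Hypothetical sketch: compare FAST VP tier distribution for one thin device
# on the R1 (source) and R2 (target) arrays. The percentages are made up;
# real values would come from the symcfg listings.

r1_pools = {"FC10K_RAID1": 34.0, "FLASH_3RAID": 15.0, "SATA_6RAID": 51.0}
r2_pools = {"FC10K_RAID1": 33.0, "FLASH_3RAID": 16.0, "SATA_6RAID": 51.0}

def balanced(r1, r2, tolerance=5.0):
    """True if each pool's allocated share differs by <= tolerance points."""
    return all(abs(r1[p] - r2.get(p, 0.0)) <= tolerance for p in r1)

print(balanced(r1_pools, r2_pools))  # True
```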
39
Failover Test
Application loads were run for extended periods at the R1 site, and FAST VP balanced the storage groups on both arrays
RDF links were split to simulate a failover
SAP and Oracle were brought up on the remote ESXi server
Performance was monitored before and after failover
Oracle and SAP running at R1. After SRDF synchronization was complete and data appeared to have balanced at both sites:
400 users were loaded against Oracle
1000 users were loaded against the SAP landscape
The workload was left to run for 2 hours
The ramp-up time of host I/O was noted
The applications were then shut down at the R1 site and a failover was simulated. Oracle and SAP were started at the DR site with the same user loads, and the ramp-up times were compared. Application performance data was also captured to verify that similar performance metrics were observed. The applications were restarted on the R2 array using the remote copy; within minutes of starting them and generating the same user load, they were driving the same amount of I/O as before the failover. Continued on the next slide.
Oracle and SAP failed over and running at R2
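The ramp-up comparison above can be expressed as a simple measurement: how long each site takes to reach, say, 95% of steady-state IOPS. The sample curves below are invented for illustration; the solution measured real host I/O curves from Unisphere.

```python
# Hypothetical sketch: time-to-steady-state from periodic IOPS samples.
# Sample values are invented; real curves would come from array monitoring.

def ramp_minutes(samples, steady_iops, threshold=0.95, interval_min=1):
    """Minutes until IOPS first reaches threshold * steady_iops, else None."""
    target = threshold * steady_iops
    for i, iops in enumerate(samples):
        if iops >= target:
            return (i + 1) * interval_min
    return None

r1_curve = [500, 1500, 2800, 3600, 3900, 4000]   # production ramp-up
r2_curve = [600, 1700, 3000, 3700, 3950, 4000]   # post-failover ramp-up

print(ramp_minutes(r1_curve, 4000), ramp_minutes(r2_curve, 4000))  # 5 5
```

Comparable ramp times at R1 and R2 are exactly what the FAST VP/SRDF coordination is meant to deliver: the R2 copy already sits on the right tiers, so there is no long warm-up while data migrates to Flash.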
40
Failover Test, continued
Following failover, application-level performance was on par with production: the transactions per minute and response times observed were equivalent. This chart shows the number of transactions per minute processed by the Oracle application at the source site while SRDF traffic was being synchronized to the R2 devices at the target site, and the number of transactions per minute processed at the target site following the failover. A slight improvement in transactions per minute was observed at the recovery site because the SRDF links were suspended, so writes no longer had to traverse the links to the remote array's cache.
41
FAST VP Responsiveness and Flexibility
At 7:40 a.m. the SQL OLTP policy was adjusted to add additional Flash capacity
After 30 minutes, performance improved: SQL transactions/sec increased and latency dropped
After 2 hours, SQL performance stabilized
Zero impact was observed on the other running applications
To demonstrate the flexibility and responsiveness of FAST VP, the MSSQL OLTP policy's Flash limit was changed from 5% to 30%. With a TPC-E-like workload running and the application in a steady state for a number of hours, the Flash percentage of the policy was increased. Within 30 minutes of changing the policy, host I/O and server transactions began to increase. From a storage perspective, IOPS across the two SQL Server instances increased from 5,330 to 8,640 (a 62% improvement), SQL Server database transactions/sec increased from 2,897 to 5,215 (an 80% improvement), and after approximately 1.5 hours the average response time had dropped from 9 ms to around 2 ms (roughly a 78% improvement). This slide shows the performance improvement of the two SQL Server OLTP instances after the policy change. During this time, all other running applications observed no impact and continued at steady state, with small improvements due to less I/O being directed at the FC tier.
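The improvement percentages quoted above can be checked directly from the before/after figures:

```python
# Verify the improvement figures quoted for the MSSQL OLTP policy change.

def pct_increase(before, after):
    return round((after - before) / before * 100)

def pct_decrease(before, after):
    return round((before - after) / before * 100)

print(pct_increase(5330, 8640))   # IOPS:              62 (%)
print(pct_increase(2897, 5215))   # transactions/sec:  80 (%)
print(pct_decrease(9, 2))         # latency:           78 (%)
```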
42
Summary and Solution Benefits
VMAX 40K provides an ideal platform for virtualized critical database applications running varied workloads. Our testing has shown that multiple critical applications with dissimilar workloads can run simultaneously on a VMAX 40K array managed by FAST VP in a virtual environment.
End-to-end visibility with EMC Virtual Storage Integrator (VSI) enables simplified management and identification of VMAX devices directly from vCenter.
Cascaded storage groups enable simplified management of ESXi environments with FAST, reducing the complexity of managing storage groups for both FAST and masking. Allocate by Policy reduces the time to balance FAST VP storage groups and the amount of data FAST VP needs to move.
FAST VP is tunable and responsive. By adjusting FAST policies, storage performance can be tuned to meet changing business needs: tuning the MSSQL OLTP policy increased performance within 30 minutes, with no change other than raising the Flash percentage in the policy and no negative impact on other running workloads.
SRDF coordination with FAST VP ensures production-level performance at the R2 site in the event of a failover when configurations are similar. This delivers the performance and cost benefits of FAST VP at both production and disaster recovery sites.
43
Questions?
44
Thank you!