Presentation on theme: "Microsoft Exchange Best Practices and Design Guidelines on EMC Storage"— Presentation transcript:
1. Microsoft Exchange Best Practices and Design Guidelines on EMC Storage
Exchange 2010 and Exchange 2013 | VNX and VMAX Storage Systems
EMC Global Solutions | Global Solutions Engineering | October 2014 Update
2. Topics
- Exchange – What has changed
- Exchange Virtualization
- Exchange Storage Design
- Exchange DR Options
- Exchange Validated Solutions
- ESI for VNX Pool Optimization Tool (fka SOAP)
- Building Block Design Example
- Exchange Archiving
4. Exchange…What has changed
Exchange 2007:
- 64-bit Windows
- 32+ GB database cache
- 8 KB block size
- 1:1 DB read/write ratio
- 70% reduction in IOPS from Exchange 2003
Exchange 2010:
- 100 GB database cache (DAG)
- 32 KB block size
- 3:2 DB read/write ratio
- 70% reduction in IOPS from Exchange 2007
Exchange 2013:
- 33% reduction in IOPS from Exchange 2010
5. Exchange User Profile Changes
Messages sent/received per mailbox per day | Exchange 2010 IOPS per mailbox (stand-alone) | Exchange 2010 IOPS per mailbox (mailbox resiliency) | Exchange 2013 IOPS per mailbox (active or passive)
50  | 0.050 | 0.060 | 0.034
100 | 0.100 | 0.120 | 0.067
150 | 0.150 | 0.180 | 0.101
200 | 0.200 | 0.240 | 0.134
250 | 0.250 | 0.300 | 0.168
300 | 0.300 | 0.360 | 0.201
350 | 0.350 | 0.420 | 0.235
400 | 0.400 | 0.480 | 0.268
450 | 0.450 | 0.540 | 0.302
500 | 0.500 | 0.600 | 0.335
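The per-mailbox IOPS figures above scale linearly with the daily message profile, so they can be approximated with per-message multipliers. A minimal sketch in Python, with constants inferred from the table (Microsoft's published sizing guidance is the authoritative source):

```python
# Estimate Exchange per-mailbox database IOPS from the daily message profile.
# Multipliers are inferred from the table above (e.g. 50 msgs/day -> 0.050
# IOPS stand-alone on Exchange 2010); treat them as illustrative only.

MULTIPLIERS = {
    "2010_standalone": 0.0010,  # Exchange 2010, stand-alone database
    "2010_resiliency": 0.0012,  # Exchange 2010, mailbox resiliency (DAG)
    "2013": 0.00067,            # Exchange 2013, active or passive copy
}

def estimated_iops_per_mailbox(messages_per_day: int, profile: str) -> float:
    """Transactional IOPS per mailbox for a given version/profile."""
    return round(messages_per_day * MULTIPLIERS[profile], 3)

print(estimated_iops_per_mailbox(200, "2010_resiliency"))  # 0.24
print(estimated_iops_per_mailbox(200, "2013"))             # 0.134
```

The 200-messages/day result (0.134 IOPS on Exchange 2013) is the profile used in the requirements example at the end of this deck.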
6. Exchange Processor Requirements Changes
Megacycles per user (multi-role figures apply to Exchange 2013 only; Exchange 2010 multi-role is N/A):
Messages/day | E2010 active/standalone | E2013 active (MBX only) | E2013 active (multi-role) | E2010 passive | E2013 passive
50  | 1  | 2.13  | 2.66  | 0.15 | 0.69
100 | 2  | 4.25  | 5.31  | 0.30 | 1.37
150 | 3  | 6.38  | 7.97  | 0.45 | 2.06
200 | 4  | 8.50  | 10.63 | 0.60 | 2.74
250 | 5  | 10.63 | 13.28 | 0.75 | 3.43
300 | 6  | 12.75 | 15.94 | 0.90 | 4.11
350 | 7  | 14.88 | 18.59 | 1.05 | 4.80
400 | 8  | 17.00 | 21.25 | 1.20 | 5.48
450 | 9  | 19.13 | 23.91 | 1.35 | 6.17
500 | 10 | 21.25 | 26.56 | 1.50 | 6.85
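To turn the table into a server-level CPU estimate, multiply user counts by the per-user megacycles for the database copies each server hosts. A sketch using the Exchange 2013 MBX-only figures above (example values only; a real design should use the Microsoft role requirements calculator):

```python
# Exchange 2013 megacycles per user (active MBX-only, passive), keyed by
# messages sent/received per day -- transcribed from the table above.
E2013_MEGACYCLES = {
    50: (2.13, 0.69), 100: (4.25, 1.37), 150: (6.38, 2.06),
    200: (8.50, 2.74), 250: (10.63, 3.43), 300: (12.75, 4.11),
}

def server_megacycles(active: int, passive: int, msgs_per_day: int) -> float:
    """Total megacycles a mailbox server must supply at peak."""
    act, pas = E2013_MEGACYCLES[msgs_per_day]
    return round(active * act + passive * pas, 2)

# Example: 2,500 active + 2,500 passive mailboxes at 200 messages/day
print(server_megacycles(2500, 2500, 200))  # 28100.0
```

The total would then be divided by the adjusted megacycles available per core on the chosen CPU platform to arrive at a core count.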
7. Exchange I/O Characteristics
I/O Type | Exchange 2007 | Exchange 2010 | Exchange 2013
Database I/O | 8 KB random I/O | 32 KB random I/O | 32 KB random I/O
Background Database Maintenance (BDM) I/O | N/A | 256 KB sequential read I/O | 256 KB sequential read I/O
Log I/O | varies in size from bytes to the log buffer size (1 MB) | varies in size from 4 KB to the log buffer size (1 MB) | varies in size from 4 KB to the log buffer size (1 MB)
8. Exchange 2010/2013 mailbox database I/O read/write ratios
Messages sent/received per mailbox per day | Stand-alone databases | Databases participating in mailbox resiliency
50–250  | 1:1 | 3:2
300–500 | 2:3 | 3:2
9. Understanding Exchange I/O
Exchange 2010/2013 I/Os to the database (.edb) are divided into two types:
- Transactional I/O (aka user I/O):
  - Database volume I/O (database reads and writes)
  - Log volume I/O (log reads and writes)
- Non-transactional I/O:
  - Background Database Maintenance (checksum) (BDM)
NOTE: Only database I/Os are measured when sizing storage and during Jetstress validation.
For more details, see "Understanding Database and Log Performance Factors" on Microsoft TechNet.
10. Background Database Maintenance (BDM)
- BDM is the Exchange Server 2010/2013 database maintenance process that includes online defragmentation and online database scanning
- Both active and passive database copies are scanned
- On the active copy, scanning can be scheduled to run during the online maintenance window (default is 24 x 7)
- The passive copy is hardcoded to a 24 x 7 scan
- Jetstress has no concept of a passive copy; all copies are treated as active
- Possible BDM-related issues (mostly for Exchange 2010): bandwidth/throughput required for BDM and BDM IOPS; not enough FE ports, not enough BE ports, non-optimal RAID configuration

 | Exchange 2010 | Exchange 2013
Read I/O size | 256 KB | 256 KB
Database scan completion | 1 week | every 4 weeks
IOPS per database | 30 | 9
Bandwidth | 7.5 MB/s* | 2.25 MB/s*
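Because passive copies are scanned 24 x 7, BDM bandwidth adds up across every database copy on an array. A quick aggregate check, using the per-database figures from the table above:

```python
# Aggregate BDM load check: each database copy under a 24x7 scan adds the
# per-database sequential-read bandwidth from the table above.

BDM_MBPS = {"2010": 7.5, "2013": 2.25}  # MB/s per database copy

def bdm_bandwidth_mbps(version: str, database_copies: int) -> float:
    """Total BDM sequential-read bandwidth across all scanned copies."""
    return BDM_MBPS[version] * database_copies

# Example: 10 databases x 2 copies on one array
print(bdm_bandwidth_mbps("2013", 20))  # 45.0
print(bdm_bandwidth_mbps("2010", 20))  # 150.0
```

The 2010-vs-2013 gap illustrates why BDM throughput was a common bottleneck (front-end/back-end ports) on Exchange 2010 designs.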
11. Exchange Content Index Considerations
Content indexing space considerations:
- In Exchange 2010, content index space is estimated at about 10% of the database size
- In Exchange 2013, content index space is estimated at about 20% of the database size
- An additional 20% must be added for content indexing maintenance tasks (such as the master merge process) to complete
References: exchange-2013-deployments.aspx
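Those percentages translate into a simple capacity add-on per database. A sketch, assuming the "additional 20%" maintenance reserve is also taken as a fraction of database size (that interpretation is an assumption; check the referenced sizing guidance for the exact rule):

```python
# Content-index capacity estimate: ~20% of database size for Exchange 2013
# (10% for 2010), plus an additional 20% reserved for index maintenance
# (master merge). Treating the maintenance reserve as a fraction of the
# database size is an assumption made for this illustration.

def content_index_gb(db_size_gb: float, version: str = "2013") -> float:
    index_pct = 0.20 if version == "2013" else 0.10
    maintenance_pct = 0.20
    return round(db_size_gb * (index_pct + maintenance_pct), 2)

print(content_index_gb(2000.0))          # 800.0 for a 2 TB database
print(content_index_gb(2000.0, "2010"))  # 600.0
```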
12. Exchange High Availability — Key Terminology
- Active Manager — An internal Exchange component, running inside the Microsoft Exchange Replication service, that is responsible for failure monitoring and corrective action through failover within a database availability group (DAG).
- AutoDatabaseMountDial — A property setting of a Mailbox server that determines whether a passive database copy will automatically mount as the new active copy, based on the number of log files missing by the copy being mounted.
- Continuous replication - block mode — As each update is written to the active database copy's active log buffer, it is also shipped to a log buffer on each of the passive mailbox copies. When the log buffer is full, each database copy builds, inspects, and creates the next log file in the generation sequence.
- Continuous replication - file mode — Closed transaction log files are pushed from the active database copy to one or more passive database copies.
- Database availability group (DAG) — A group of up to 16 Exchange 2013 Mailbox servers that hosts a set of replicated databases.
- Database mobility — The ability of an Exchange 2013 mailbox database to be replicated to and mounted on other Exchange 2013 Mailbox servers.
- Datacenter Activation Coordination mode — A DAG property setting that, when enabled, forces the Microsoft Exchange Replication service to acquire permission to mount databases at startup.
13. Exchange High Availability — Key Terminology (continued)
- Disaster recovery — Any process used to manually recover from a failure, whether it affects a single item or an entire physical location.
- Exchange third-party replication API — An Exchange-provided API that enables use of third-party synchronous replication for a DAG instead of continuous replication.
- High availability — A solution that provides service availability, data availability, and automatic recovery from failures that affect the service or data (such as a network, storage, or server failure).
- Lagged mailbox database copy — A passive mailbox database copy that has a log replay lag time greater than zero.
- Mailbox database copy — A mailbox database (.edb file and logs), which is either active or passive.
- Mailbox resiliency — The name of the unified high availability and site resilience solution in Exchange 2013.
- Managed availability — A set of internal processes, made up of probes, monitors, and responders, that incorporates monitoring and high availability across all server roles and all protocols.
14. Exchange High Availability — Key Terminology (continued)
- Switchover — A manual activation of one or more database copies.
- Failover — An automatic activation of one or more database copies after a failure.
- Safety Net — Formerly known as the transport dumpster, a feature of the transport service that stores a copy of all messages for X days (default: 2 days).
- Shadow redundancy — A transport server feature that provides redundancy for messages for the entire time they are in transit.
- Site resilience — A configuration that extends the messaging infrastructure to multiple Active Directory sites to provide operational continuity for the messaging system in the event of a failure affecting one of the sites.
15. Exchange High Availability — Database Availability Group (DAG)
- The base component of the high availability and site resilience framework built into Exchange 2010/2013
- A group of servers participating in a Windows failover cluster, with a limit of 16 servers and 100 databases; any server in a DAG can host a copy of any database in the DAG
- Each DAG member server can house one copy of each database (up to 16 copies); only one copy is active, and the others are passive or lagged
- No manual configuration of cluster services is required; Exchange 2010/2013 handles the entire installation
- During site DR, manual work is needed and scripts must be run
- A DAG does not provide recovery from logical database corruption
[Diagram: a DAG with servers MBX1–MBX3 hosting active (A) and passive (P) copies of DB1–DB3]
16. Exchange High Availability — Guidance for deploying DAGs
- Ensure all elements of the design have resilient components: storage processors, connectivity to the servers, storage spindles, and multiple arrays in DR scenarios
- DAG copies should be stored on separate physical spindles, provided all resiliency is reached at the source site
- On SANs, consider the performance of the passive and active copies within one array
18. Exchange 2010/2013 virtualization
- Virtualizing Exchange is supported on Hyper-V, VMware, and other hypervisors
- Hypervisor vendors must participate in the Windows Server Virtualization Validation Program (SVVP)
- EMC recommends virtualizing Exchange for most deployments, based on customer requirements
19. Exchange virtualization — VM placement considerations
- Deploy VMs with the same role across multiple hosts
- Do not deploy VMs from the same DAG on the same host
[Diagram: DAG1 and DAG2 member VMs distributed across separate hosts]
20. Exchange virtualization — Configuration best practices
- Physical sizing still applies: hypervisor servers must accommodate the guests they will support
- DAG copies must be spread across physical hosts to minimize outage in case of physical server issues
- Know your hypervisor's limits: 256 SCSI disks per host (or cluster), processor limits (vCPUs per virtual machine), and memory limits
- Be aware of the hypervisor CPU overhead: Microsoft Hyper-V ~10-12%, VMware vSphere ~5-7%
- Core Exchange design principles still apply: design for performance and high availability, and design for user workloads
21. Exchange virtualization — Configuration best practices: hypervisor and VMs
- Hypervisor server: have at least 4 paths (HBA/CNA/iSCSI) to the storage; install EMC PowerPath for maximum throughput, load balancing, path management, and I/O path failure detection
- Multiple NICs: segregate management and client traffic from Exchange replication traffic
- Disable hypervisor-based auto-tuning features; no dynamic memory
- CPU and memory: dedicate/reserve CPU and memory for the Mailbox virtual machines and do not overcommit; a 2:1 pCPU-to-vCPU ratio is acceptable, and 1:1 is a best practice
- VM migrations: always migrate live or completely shut down virtual machines
22. Exchange virtualization — Configuration best practices: storage
- Exchange storage should be on spindles separate from guest OS physical storage
- Exchange storage must be block-level; network attached storage (NAS) volumes are not supported (no NFS, SMB other than SMB 3.0, or any other NAS technology)
- Storage must be fixed VHD/VHDX/VMDK, SCSI pass-through/RDM, or iSCSI
23. Exchange virtualization — Configuration best practices: SMB 3.0 support
- Supported only in virtualized configurations
- VHDs can reside on an SMB 3.0 share presented to the Hyper-V host
- No support for UNC paths for Exchange database and log volumes (\\server\share\db1\db1.edb)
24Exchange virtualization Supported SMB 3.0 Configuration Example
25. Exchange virtualization — Configuration best practices: Hyper-V storage
- Virtual SCSI (pass-through or fixed disk): VHD on the host is recommended for OS and program files; a pass-through disk on the host is recommended for Exchange database and log volumes
- iSCSI: either iSCSI direct from a guest virtual machine, or an iSCSI initiator on the host with the disk presented to the guest as pass-through; the iSCSI initiator from the guest performs well and is easier to configure
- MPIO or EMC PowerPath — PowerPath recommended
26. Exchange virtualization — Configuration best practices: VMware VMFS or RDM trade-offs
VMFS | RDM
Volume can host many virtual machines (or can be dedicated to one virtual machine) | Maps a single LUN to one virtual machine; isolated I/O
Increases storage utilization; provides better flexibility, easier administration and management | More LUNs = easier to hit the limit of 256 LUNs that can be presented to an ESX server
Cannot have hardware-enabled VSS backups | Required for hardware VSS and replication tools that integrate with Exchange databases
Large third-party ecosystem with V2P products to aid in certain support situations | Can help reduce physical-to-virtual migration time
Not supported for shared-disk clustering | Required for shared-disk clustering
Full support for VMware Site Recovery Manager (both)
27. Exchange virtualization — References
For Hyper-V:
- Best Practices for Virtualizing Exchange Server 2010 with Windows Server 2008 R2 Hyper-V
- Best Practices for Virtualizing and Managing Exchange 2013
For VMware:
- Microsoft Exchange 2010 on VMware Best Practices Guide
- Microsoft Exchange 2010 on VMware Design and Sizing Examples
- Microsoft Exchange 2013 on VMware Best Practices Guide
- Microsoft Exchange 2013 on VMware Availability and Recovery Options
- Microsoft Exchange 2013 on VMware Design and Sizing Guide
29. Exchange Storage Options — DAS or SAN?
EMC offers both options:
- For small, low-cost deployments: DAS
- For large-scale efficiency: SAN
- Best long-term TCO: SAN
- Virtualization ready: SAN
Platforms: VNXe, VNX, VMAX, XtremIO
Understand which storage type best meets the design requirements:
- Physical or virtual?
- Dedicated to Exchange or shared with other applications?
Follow EMC proven guidance for each platform.
30. Exchange Server IOPS Per Disk
Use the following table for IOPS-per-drive values when calculating disk requirements for Exchange 2010/2013*:
Disk type | Database IOPS per disk, random workload (VNX) | (VMAX) | Log IOPS per disk, sequential workload (VNX and VMAX)
7.2K rpm NL-SAS/SATA | 65 | 60 | 180
10K rpm SAS/FC | 135 | 130 | 270
15K rpm SAS/FC | – | – | 450
Flash | 1250 | 1250 | 2000
*Recommendations may change based on future test results
31. I/O Characteristics for Various RAID Types
 | RAID 1/0 | RAID 5 | RAID 6
Random read I/O | Excellent | Excellent | Excellent
Random write I/O | Moderate | Poor | Poor
Sequential I/O | Good | Good | Good
RAID write overhead | 2 | 4 | 6
Disk capacity utilization (1) | 1/2 | 4/5 (in 4+1 R5) | 4/6 (in 4+2 R6)
Minimum drives required (2) | 2 | 3 | 4
(1) Depends on the size of the RAID group
(2) Depends on the array

RAID 1/0 provides data protection by mirroring data onto another disk. This produces better performance and minimal or no performance impact in the event of a disk failure. In general, RAID 1/0 is the best choice for Exchange Server, especially if SATA and NL-SAS drives are used.
RAID 5 stripes data across disks in large stripe sizes. The parity information is stored across all disks so that data can be reconstructed, protecting against a single-disk failure. With its high write penalty, RAID 5 is most appropriate in read-dominated environments and where large databases are deployed. With flash drives this performance concern is eliminated, and most environments with flash drives can be configured as RAID 5 to support high I/O requirements with very low disk latency.
RAID 6 also stripes data across disks in large stripe sizes. However, two sets of parity information are stored across all disks so that data can be reconstructed, if required. RAID 6 can accommodate the simultaneous failure of two disks without data loss.
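The write-overhead row is what drives spindle counts: every host write becomes 2, 4, or 6 back-end I/Os depending on the RAID type. A sketch of the usual disk-count arithmetic, with per-disk IOPS taken from the table on the previous slide (the example inputs are illustrative):

```python
# Back-end spindle count for a random workload: reads pass through 1:1,
# writes are multiplied by the RAID write overhead from the table above.

import math

RAID_WRITE_OVERHEAD = {"raid10": 2, "raid5": 4, "raid6": 6}

def disks_for_workload(host_iops: float, read_fraction: float,
                       raid: str, iops_per_disk: int) -> int:
    """Spindles needed to absorb the back-end I/O after RAID overhead."""
    reads = host_iops * read_fraction
    writes = host_iops - reads
    backend_iops = reads + writes * RAID_WRITE_OVERHEAD[raid]
    return math.ceil(backend_iops / iops_per_disk)

# Example: 5,360 host IOPS at a 3:2 read/write ratio (60% reads),
# RAID 1/0 on 7.2K NL-SAS at 65 IOPS per disk
print(disks_for_workload(5360, 0.6, "raid10", 65))  # 116
```

Re-running the same workload as RAID 5 roughly doubles the back-end write load, which is why RAID 1/0 is generally preferred for Exchange on NL-SAS.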
32. Exchange Server Design Methodology
Phase 1: Gather requirements
- Total number of users
- Number of users per server
- User profile and mailbox size
- User concurrency
- High availability requirements (DAG configuration)
- Backup and restore SLAs
- Third-party software in use (archiving, Blackberry, etc.)
Phase 2: Design the building block and storage architecture
- Design the building block using Microsoft and EMC best practices
- Design the storage architecture using EMC best practices
- Use EMC Proven Solutions whitepapers
- Use Exchange Solution Reviewed Program (ESRP) documentation
Phase 3: Validate the design
Use the Microsoft Exchange validation tools:
- Jetstress, for storage validation
- LoadGen, for user workload validation and end-to-end solution validation
33. Exchange Storage Design — Building-block design methodology
What is a building block?
- A building block represents the amount of resources required to support a specific number of Exchange users on a single server or VM
- Building blocks are based on requirements and include compute requirements (CPU, memory, and network) and disk requirements (database, log, and OS)
Why use the building-block approach?
- It can be easily reproduced to support all users with similar user profile characteristics
- It makes Exchange environment additions much easier and more straightforward, which helps with future environment growth
- It has been very successful in many real-world customer implementations
See the Appendix for the building-block design process.
35. Exchange Storage Design — Tools for performance and scalability evaluation
- Exchange Jetstress, for storage validation
- Exchange Load Generator (Loadgen), for end-to-end environment validation (must be used in an isolated lab only)
36. Storage Design Validation — Exchange Solution Reviewed Program (ESRP) results
- A Microsoft program for validation of storage vendor designs with Exchange
- The vendor runs multiple Jetstress tests based on requirements for performance, stress, backup to disk, and log file replay
- Results are reviewed and approved by Microsoft
37. Exchange Storage Design — Tools for performance and scalability evaluation
Exchange Jetstress:
- Uses Exchange executables to simulate I/O load (use the same version)
- Initialized and executed during pre-production, before Exchange Server is installed
- Passing the throughput and mailbox profile tests gives confidence that the storage design will perform as designed
Exchange Load Generator (Loadgen) (optional):
- Validation must be performed in an isolated lab
- Produces a simulated client workload against a test Exchange deployment
- Used to estimate the number of users per server and validate the Exchange deployment
- LoadGen testing can take many weeks to configure and populate databases
38. Exchange Storage Design — Storage sizing guidance
- Do not rely solely on automated tools when sizing your Exchange environment
- Put time and effort into your calculations, and provide supporting factual evidence for your designs rather than fictional calculations
- Size Exchange based on I/O, mailbox capacity, and bandwidth requirements
- Factor in other overhead variables such as archiving, snapshot protection, virus protection, mobile devices, and a risk factor
- Confirm the Exchange storage requirements with array-specific sizing tools
39. Storage Design Guidance — Best practices
- Isolate the Exchange server workload on its own set of spindles, away from other workloads, to guarantee performance
- When sizing, always calculate both I/O requirements and capacity requirements
- Separate the databases and logs onto different volumes
- Deploy DAG copies on separate physical spindles
- Databases up to 2 TB in size are acceptable when a DAG is being used; the exact size should be based on customer requirements
- Ensure that your solution can support LUNs larger than 2 TB
40. Storage Design Guidance — Best practices (continued)
- Consider backup and restore times when calculating the database size
- Spread the load as evenly as possible across array resources (VMAX engines, VNX SPs, back-end buses, etc.)
- Always format Windows NTFS volumes for databases and logs with an allocation unit size of 64 KB
- Use an Exchange building-block design approach whenever possible
41. Storage Design Guidance — VNX pools and RAID groups
- Either method works well; thick pools and RAID groups provide the same performance
- RAID groups are limited to 16 disks per RG; pools can support many more disks
- Pools are more efficient and easier to manage
- Use pools if you plan to use advanced features such as FAST VP or VNX Snapshots
- Storage pools can support a single building block or multiple building blocks, based on customer requirements
- Design and expand pools using the correct multiplier for best performance (R1/0 4+4, R5 4+1, R6 6+2)
42. Storage Design Guidance — VNX: thick or thin LUNs?
- Both thick and thin LUNs can be used for Exchange storage (databases and logs)
- Thick LUNs are recommended for heavy workloads with high-IOPS user profiles
- Thin LUNs are recommended for light to medium workloads with low-IOPS user profiles; they significantly reduce initial storage requirements
- Use VNX Pool Optimizer before formatting volumes
- FAST Cache or FAST VP can be enabled for fast metadata promotions
43. FAST VP and FAST Cache
- FAST VP: a VMAX and VNX feature that automates the identification of data for allocation or reallocation across the various performance and capacity tiers (Flash, FC, SAS, NL-SAS) within the storage array
- FAST Cache: a VNX performance optimization feature that boosts performance for frequently accessed data by using flash drives to extend cache capacity
[Diagram: virtual servers running Exchange, SQL Server, and SharePoint on tiered storage with FAST VP and FAST Cache]
44. FAST VP vs. FAST Cache on VNX Storage
FAST VP | FAST Cache
Leverages pools to provide sub-LUN tiering, enabling the use of multiple storage tiers simultaneously | Enables flash drives to extend the existing caching capacity of the storage system
Uses 1 GB chunks (256 MB in next-generation VNX) | Uses 64 KB chunks
Local feature, per storage pool; assured performance per pool | Global feature, per storage array; a shared resource, so performance for one pool is not guaranteed under FAST Cache contention
Moves data between storage tiers based on a weighted average of access statistics collected over a period of time | Copies data from hard disks to flash disks when it is accessed frequently
Uses a relocation window to periodically make storage tiering adjustments (default is an 8-hour relocation window each day) | Adapts continuously to changes in workload
While it can improve performance, it is primarily designed to improve usability and reduce TCO | Designed primarily to improve performance
45. VNX Considerations for FAST VP and FAST Cache
When the number of flash drives is limited, use the flash drives to create FAST Cache first:
- FAST Cache can benefit multiple pools in the storage system
- FAST Cache uses 64 KB chunks, smaller than the 1 GB (or 256 MB) chunks in FAST VP, which results in higher performance benefits and faster reaction to changing usage patterns
Use flash drives to create a FAST VP performance tier for a specific pool:
- This ensures the performance of certain mission-critical data
- A FAST VP tier is dedicated to a storage pool and cannot be shared with other storage pools in the same storage array
46. Exchange Design with FAST VP — Configuration recommendations (VNX and VMAX)
- Separate databases from logs, due to their different workloads: database data is a random workload with skew (high FAST VP benefit); logs are sequential data without skew (no FAST VP benefit)
- Use dedicated pools: they provide a better SLA guarantee, provide fault domains, and are recommended for the most deterministic behavior
- Use thick pool LUNs for the highest performance on VNX; thin pool LUNs are acceptable with optimization
- Use thin LUNs on VMAX
47. Tools for FAST VP
- Tier Advisor for sizing: historical performance data is needed from the storage arrays
- Workload Performance Assessment Tool: shows a FAST VP heat map; for more information, refer to https://emc.mitrend.com
- VSPEX Sizing Tool (for VNX): for more information, refer to ebook/vspex-solutions.htm
- EMC Professional Services and qualified partners can assist in properly sizing tiers and pools to maximize the investment
48. Exchange Design with FAST Cache (VNX only)
FAST Cache usage:
- Pools with thin LUNs: used for metadata tracking
- Pools with thin and thick LUNs: required when VNX Snapshots are used
- Pools with thick LUNs: not required, but not restricted either; required with VNX Snapshots
FAST Cache sizing guidance:
- Rule of thumb: for every 1 TB of Exchange dataset, provision 1 GB of FAST Cache
- Monitor and adjust the FAST Cache size; your mileage may vary
- Enable FAST Cache on pools with database LUNs only
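The 1-GB-per-1-TB rule of thumb is easy to sanity-check once the dataset size is known. A small sketch; the 20% overhead factor for whitespace, dumpster, and content index is a hypothetical placeholder, not a figure from this slide:

```python
# FAST Cache sizing from the rule of thumb above: ~1 GB of FAST Cache per
# 1 TB of Exchange dataset, treated as a starting point to monitor/adjust.

def exchange_dataset_tb(mailboxes: int, quota_gb: float,
                        overhead: float = 1.2) -> float:
    """Rough dataset size; `overhead` is a hypothetical 20% allowance."""
    return mailboxes * quota_gb * overhead / 1024

def fast_cache_gb(dataset_tb: float) -> float:
    return dataset_tb  # 1 GB of FAST Cache per 1 TB of dataset

ds = exchange_dataset_tb(20000, 2.0)
print(round(ds, 1), "TB dataset ->", round(fast_cache_gb(ds), 1), "GB FAST Cache")
```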
49. Exchange Storage Design — VMAX design best practices
- Ensure that the initial disk configuration can support the I/O requirements
- A thin pool can be configured to support a single Exchange building block or multiple building blocks, depending on customer requirements
- Use Unisphere for VMAX to monitor thin pool utilization and prevent the thin pools from running out of space
- Install the Microsoft hotfix KB on the Windows Server 2012 hosts in your environment
50. Exchange Storage Design — VMAX design best practices (continued)
- Use Symmetrix Virtual Provisioning
- Database and log volumes can share the same disks, but separate them into different LUNs on the same hosts
- For optimal Exchange performance, use striped meta volumes
51. VMAX FAST VP with Exchange — Design best practices
When designing FAST VP for Exchange 2010/2013 on VMAX, follow these guidelines:
- Separate databases and logs onto their own volumes; database and log volumes can share the same disks
- Exclude transaction log volumes from the FAST VP policy, or pin all the log volumes to the tier on which they were created
- Select "Allocate by FAST Policy" to allow FAST VP to use all tiers for new allocations based on the performance and capacity restrictions (a feature introduced in the Enginuity™ 5876 code)
- When using FAST VP with an Exchange DAG, do not place DAG copies of the same database in the same pool on the same disks
52. XtremCache with Exchange
Consider XtremCache for Exchange if:
- You have an I/O-bound Exchange solution
- You are not sure about the anticipated workload
- You need to guarantee high performance and low latency for specific users (VIP servers, databases, and so on)
XtremCache is proven to improve Exchange 2010 performance by:
- Reducing read latencies
- Increasing I/O throughput
- Eliminating almost all high-latency spikes
- Providing more improvement as the workload increases
- Reducing RPC latencies
- Reducing writes to the back-end storage with XtremCache deduplication
53. XtremCache with Exchange — Configuration recommendations
The XtremCache PCI flash card can be installed in:
- A physical Exchange Mailbox server
- A hypervisor server hosting Exchange Mailbox virtual machines (VMware or Hyper-V)
Enable XtremCache acceleration on database volumes only; do not enable it on log volumes (a sequential workload gains no performance benefit).
XtremCache sizing guidance: for a 1,000 GB working dataset, configure a 10 GB XtremCache device.
54. XtremCache with Exchange — Configuration recommendations
When implementing XtremCache with VMware vSphere, consider the following:
- The size of the PCI cache card to deploy
- The number of Exchange virtual machines on each vSphere host that will use XtremCache
- Exchange workload characteristics (read/write ratio, user profile type)
- The Exchange dataset size
The most benefit is achieved when all reads from a working dataset are cached.
55. XtremCache with Exchange — Configuration recommendations
When adding an XtremCache device to an Exchange virtual machine:
- Set the cache page size to 64 KB and the maximum I/O size to 64 KB (BDM I/O will not be cached)
- You can use the VSI Plug-in or the XtremCache CLI to set these values when adding the cache device to a virtual machine:
  vfcmt add -cache <cache_device> -set_page_size 64 -set_max_io_size 64
56. XtremCache with Exchange — Configuration recommendations (deduplication)
- Evaluate your workload before enabling deduplication for accelerated Exchange LUNs
- Consider the CPU overhead when enabling deduplication
- Set the deduplication gain based on workload characteristics:
  - If the observed deduplication ratio is less than 10%, EMC recommends turning deduplication off (or setting it to 0%), which extends cache device life
  - If the observed ratio is over 35%, EMC recommends raising the deduplication gain to match the observed ratio
  - If the observed ratio is between 10% and 35%, EMC recommends leaving the deduplication gain as it is
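Those three thresholds amount to a small decision rule, sketched here for clarity (the 10% and 35% thresholds come straight from the recommendations above):

```python
# Deduplication-gain decision helper based on the observed dedup ratio
# (expressed as a fraction, e.g. 0.25 for 25%).

def dedup_recommendation(observed_ratio: float) -> str:
    if observed_ratio < 0.10:
        return "turn deduplication off (set gain to 0%)"
    if observed_ratio > 0.35:
        return "raise the dedup gain to match the observed ratio"
    return "leave the dedup gain as configured"

print(dedup_recommendation(0.05))  # turn deduplication off (set gain to 0%)
print(dedup_recommendation(0.25))  # leave the dedup gain as configured
```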
58. Exchange DR Options — EMC's common solutions for Exchange DR
Option | Replication method | Best for
Stretched (cross-site) DAG | Native Exchange host-based replication | Small environments, local replication
Third-party replication with manual Exchange recovery (database portability) | EMC RecoverPoint or SRDF | Mixed environments with large workloads and strict SLAs
Third-party replication with automated Exchange recovery | EMC RecoverPoint or SRDF with VMware SRM | VMware shops with large workloads and strict SLAs
59. Replication Matrix
[Flattened comparison table; the recoverable content:] Options compared: SRDF/S, SRDF/A, MirrorView/S (MV/S), MirrorView/A (MV/A), RecoverPoint Sync, RecoverPoint Async, and DAG. Features compared: replication technology (array, SAN, or host based), heterogeneous storage support (no for array-based, yes otherwise), heterogeneous application support, active bi-directional replication, latency impact to applications (medium for synchronous, minimal for asynchronous), data replication type (sync vs. async), automated restart, coexistence with VSS replication, zero data loss (RPO = 0 for synchronous options), time to recover (RTO in minutes or seconds), replication interval (continuous vs. log shipping for DAG), replication distance (about 200 km for synchronous, unlimited for asynchronous), ability to resynchronize data incrementally, and link cost requirement.
60. Exchange Validated Solutions
- Exchange 2013 ESRP
- Exchange with XtremCache
- Cross-site DAG vs. RecoverPoint
- Automated DR with RecoverPoint and VMware SRM
- VSPEX solutions
- Cloud solutions with EMC Hybrid Cloud
67. Exchange Resources
- Exchange Storage Best Practices and Design Guidelines for EMC Storage whitepaper: design-guid-emc-storage.pdf
- EMC Community Network: https://community.emc.com/community/connect/everything_microsoft
- EMC and partner Exchange 2010 tested virtualized solutions
- Exchange Solution Reviewed Program (ESRP) submissions
- Exchange Mailbox Server Storage Design (Microsoft TechNet)
68. Additional References
- Exchange virtualization supportability guidance
- Understanding Exchange Performance
- Server Virtualization Validation Program
- Exchange 2010 EMC-tested OEM solutions (on Hyper-V): 20,000 users on EMC storage with virtual provisioning; 32,400 users on EMC storage with EMC REE
- us/library/hh145600(v=exchg.141).aspx
70. ESI for VNX Pool Optimization Tool
- ESI for VNX Pool Optimization (formerly known as the SOAP tool) improves thick and thin LUN performance by ensuring a balanced allocation of data across all available disks, and by optimizing access to that data on an ongoing basis
- It currently supports only the VNX next-generation series storage arrays; for older VNX platforms, customers are advised to use the original SOAP tool
- Product download: https://download.emc.com/downloads/DL50658_Storage-Integrator-(ESI)-for-VNX-Pool-Optimization-tool.zip
- Release notes: https://support.emc.com/docu50668_EMC_Storage_Integrator_for_VNX_Pool_Optimization_Tool_Release_Notes.pdf?language=en_US
71. What Does This Tool Do?
- The tool optimizes pool-based LUNs by pre-allocating slices in the pool evenly across all disks, private RAID groups, and LUNs
- It provides the best option for any application requiring deterministic high performance across all LUNs in the pool equally
72. When To Use The Optimization Tool
- When maximum application performance is required
- To achieve the best performance for pool-based LUNs (primarily thin)
- To eliminate performance issues and successfully pass Exchange Jetstress during pre-deployment storage validation
- To mitigate the "Jetstress effect"
Applications that benefit from the tool: Microsoft Exchange, Microsoft SQL Server, Oracle
73. Jetstress Initialization Process
How the Jetstress initialization phase works:
- Jetstress creates the first database
- It then creates the other databases by copying the first database to them concurrently
74. What is the "Jetstress effect"?
The Jetstress initialization process (data population) results in imbalances in the underlying virtual disks in a pool.
How does the issue surface? During Jetstress testing, the first database on the Exchange server will:
- Experience higher latencies than the others when the LUN is thick
- Experience lower latencies than the others when the LUN is thin
75. Looking under the covers — slice maps
[Diagram: pool slice maps without optimization vs. with optimization]
76. Prerequisites
- VNX OE for Block Release 33 SP1 or later
- Before using the tool, LUNs must be created using ESI PowerShell with specific parameters in order for the tool to optimize the LUNs*
- All LUNs in the pool must be the same size
- All LUNs in the pool must be optimized at the same time
*It is also possible to create the LUNs with NaviSecCLI using undocumented switches
77. Options for Thick Pool LUN Provisioning — VNX OE for Block Release 32 (Inyo) and later
Two options for thick pool LUN provisioning:
- For good, optimal performance: no SOAP is necessary; use default pool LUN provisioning via Unisphere, NaviSecCLI, EMC ESI, or EMC VSI
- For best performance (maximum IOPS): use NaviSecCLI to disable pre-provisioning and then run SOAP — turn off pre-provisioning via the CLI, run SOAP, then re-enable pre-provisioning
78 SOAP Utility for VNX R32 – Where and How?
The old SOAP utility is available on the EMC Online Support site: enter "soap" in the search box and select "Support Tools". It must be used with CX4/VNX Inyo (OE 5.32) only and supports thick LUN optimization only. The zip file contains the tool, step-by-step documentation, and a demo video.
80 Exchange Mailbox Server Storage Design Methodology
Phase 1: Gather requirements
- Total number of users
- Number of users per server
- User profile and mailbox size
- User concurrency
- High availability requirements (DAG configuration)
- Backup and restore SLAs
- Third-party software in use (archiving, BlackBerry, etc.)
Phase 2: Design the building block and storage architecture
- Design the building block using Microsoft and EMC best practices
- Design the storage architecture using EMC best practices
- Leverage EMC Proven Solutions white papers
- Leverage Exchange Solution Reviewed Program (ESRP) documentation
Phase 3: Validate the design using the Microsoft Exchange validation tools
- Jetstress, for storage validation
- LoadGen, for user workload validation and end-to-end solution validation
81 Requirements Example
- Exchange version and total number of active users (mailboxes): Exchange 2013, 20,000
- Site resiliency requirements: single site
- Storage infrastructure: SAN
- Type of deployment (physical or virtual): virtual (VMware vSphere)
- HA requirements: one DAG with two database copies
- Mailbox size limit: 2 GB maximum quota
- User profile: 200 messages per user per day (0.134 IOPS)
- Target average message size: 75 KB
- Outlook mode: cached mode, 100 percent MAPI
- Number of mailbox servers: 8
- Number of mailboxes per server: 5,000 (2,500 active / 2,500 passive)
- Number of databases per server: 10
- Number of users per database: 500
- Deleted items retention (DIR) period: 14 days
- Log protection buffer (to protect against log truncation failure): 3 days
- BDM configuration: enabled 24x7
- Database read/write ratio: 3:2 (60/40 percent) in a DAG configuration
- User concurrency requirements: 100 percent
- Third-party software that affects space or I/O (for example, BlackBerry, snapshots): storage snapshots for data protection
- Disk type: 3 TB NL-SAS (7,200 rpm)
- Storage platform: VNX
82 Building Block Design
Define and design the building block. In our example, we define a building block as:
- A mailbox server that supports 5,000 users
- 2,500 users are active during normal runtime; the other 2,500 remain passive until a switchover from another mailbox server occurs
- Each building block supports two database copies
83 Building block sizing and scaling process
- Perform calculations for IOPS requirements
- Perform calculations for capacity requirements based on different RAID types
- Determine the best option
- Scale the building block: multiple building blocks may be combined to create the final configuration and storage layout (pools or RAID groups)
84 Building block sizing and scaling process
Front-end IOPS ≠ back-end IOPS:
- Front-end IOPS = total Exchange mailbox server IOPS
- Back-end IOPS = storage array IOPS (including the RAID penalty)
Understand disk IOPS by RAID type. A block front-end Exchange workload is translated into a different back-end disk workload depending on the RAID type in use. Reads are unaffected by RAID type:
- 1 application read I/O = 1 back-end read I/O
For random writes, like Exchange:
- RAID 1/0: 1 application write I/O = 2 back-end write I/Os
- RAID 5: 1 application write I/O = 4 back-end disk I/Os (2 read I/Os + 2 write I/Os)
- RAID 6: 1 application write I/O = 6 back-end disk I/Os (3 read I/Os + 3 write I/Os)
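The read/write translation above can be expressed as a small calculator, shown here as a sketch (the function name and structure are illustrative, not part of any EMC or Microsoft tool):

```python
# RAID write penalties for small random writes, as listed above.
RAID_WRITE_PENALTY = {"RAID 1/0": 2, "RAID 5": 4, "RAID 6": 6}

def backend_iops(front_end_iops, read_ratio, raid_type):
    """Translate front-end Exchange IOPS into back-end disk IOPS.

    Reads pass through 1:1; each random write costs the RAID
    write penalty in back-end I/Os.
    """
    reads = front_end_iops * read_ratio
    writes = front_end_iops * (1.0 - read_ratio)
    return reads + RAID_WRITE_PENALTY[raid_type] * writes
```

With the 965 front-end IOPS and 3:2 (60/40) read/write ratio used later in this example, this gives about 1,351 back-end IOPS for RAID 1/0, 2,123 for RAID 5, and 2,895 for RAID 6.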
85 Database IOPS requirements: Formula & Calculations
Total transactional IOPS = IOPS per mailbox × mailboxes per server × (1 + Microsoft recommended overhead %)
Total transactional IOPS = 0.134 IOPS per user × 5,000 users × 1.20 (20% Microsoft recommended overhead) = 804 IOPS
Total front-end IOPS = total transactional IOPS × (1 + EMC required overhead %)
Total front-end IOPS = 804 IOPS × 1.20 (20% EMC required overhead) = 965 IOPS (rounded up from 964.8)
86 Database Disks Required for Performance (IOPS): Formula
Disks required for Exchange database IOPS = (total back-end database read IOPS + total back-end database write IOPS) / Exchange random IOPS per disk
Where:
- Total back-end database read IOPS = total front-end IOPS × % read IOPS
- Total back-end database write IOPS = RAID write penalty × total front-end IOPS × % write IOPS
87 Database Disks Required for Performance (IOPS): Calculations (assuming 65 random IOPS per NL-SAS disk)
- RAID 1/0 (4+4), write penalty 2: (965 × 0.60) + 2 × (965 × 0.40) = 1,351; 1,351 / 65 = 20.8 → 21 (round up to 24 disks)
- RAID 5 (4+1), write penalty 4: (965 × 0.60) + 4 × (965 × 0.40) = 2,123; 2,123 / 65 = 32.7 → 33 (round up to 35 disks)
- RAID 6 (6+2), write penalty 6: (965 × 0.60) + 6 × (965 × 0.40) = 2,895; 2,895 / 65 = 44.5 → 45 (round up to 48 disks)
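The rounding in the table above can be reproduced with a short helper, as a sketch (the 65 random IOPS per NL-SAS disk and the RAID-group widths are the values assumed in this example):

```python
import math

def disks_for_performance(backend_iops, iops_per_disk, group_width):
    """Disks needed to serve the back-end IOPS, rounded up to
    whole RAID groups (e.g. width 8 for RAID 1/0 4+4, 5 for RAID 5 4+1)."""
    disks = math.ceil(backend_iops / iops_per_disk)
    return math.ceil(disks / group_width) * group_width
```

For example, disks_for_performance(1351, 65, 8) reproduces the 24-disk RAID 1/0 result.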
88 Transactional Log IOPS Requirements: Formula & Calculations
Disks required for Exchange log IOPS = ((total back-end database write IOPS × 50%) + (total back-end database write IOPS × 10%)) / Exchange sequential IOPS per disk
Disks required for Exchange log IOPS = ((772 back-end write IOPS × 50%) + (772 × 10%)) / 180 sequential Exchange IOPS per disk = 463.2 / 180 = 2.57 (round up to 4 disks)
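The same arithmetic for the log disks, as a sketch (the 50% write ratio, 10% overhead, and 180 sequential IOPS per disk are the assumptions stated above):

```python
import math

def log_disks_for_iops(backend_write_iops, seq_iops_per_disk=180):
    """Log I/O is estimated at 50% of back-end database writes,
    plus a 10% overhead; RAID 1/0 requires an even disk count."""
    log_iops = backend_write_iops * 0.5 + backend_write_iops * 0.1
    disks = math.ceil(log_iops / seq_iops_per_disk)
    return disks if disks % 2 == 0 else disks + 1
```

With the 772 back-end write IOPS from this example, the result is the 4 disks shown above.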
89 Storage Capacity Calculations: Formula
Calculate the user mailbox size on disk, then the database size on disk, then the database LUN size:
- Mailbox size on disk = maximum mailbox size + whitespace + dumpster
- Database size on disk = number of mailboxes per database × mailbox size on disk
- Database LUN size = database size on disk × (1 + index space + additional index space for maintenance) / (1 − LUN free space)
90 Mailbox Size on Disk: Formula
Mailbox size on disk = maximum mailbox size + whitespace + dumpster
Where:
- Estimated database whitespace per mailbox = per-user daily message profile × average message size
- Dumpster = (per-user daily message profile × average message size × deleted item retention window) + (mailbox quota size × 0.012, for single item recovery) + (mailbox quota size × 0.03, for calendar version storage)
91 Mailbox Size on Disk: Calculations
Whitespace = 200 messages/day × 75 KB = 14.65 MB
Dumpster = (200 messages/day × 75 KB × 14 days) + (2 GB × 0.012) + (2 GB × 0.03) = 205.08 MB + 24.58 MB + 61.44 MB = 291.1 MB
Mailbox size on disk = 2,048 MB mailbox quota + 14.65 MB database whitespace + 291.1 MB dumpster = 2,354 MB (2.3 GB)
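The mailbox-on-disk arithmetic can be checked with a small sketch (quota in MB, message size in KB; the 1.2% and 3% dumpster factors come from the formula above):

```python
def mailbox_size_on_disk_mb(quota_mb, msgs_per_day, avg_msg_kb, dir_days=14):
    """Maximum mailbox size plus estimated whitespace and dumpster, in MB."""
    whitespace = msgs_per_day * avg_msg_kb / 1024.0
    dumpster = (msgs_per_day * avg_msg_kb * dir_days / 1024.0  # deleted item retention
                + quota_mb * 0.012                             # single item recovery
                + quota_mb * 0.03)                             # calendar version storage
    return quota_mb + whitespace + dumpster
```

mailbox_size_on_disk_mb(2048, 200, 75) reproduces the 2,354 MB figure above.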
92 Database Size on Disk & LUN Size: Calculations
Database size on disk = 500 users per database × 2,354 MB mailbox on disk = 1,177 GB (1.15 TB)
Database LUN size = 1,177 GB × (1 + 0.20 + 0.20) / (1 − 0.20) = 2,060 GB (2 TB)
In our example:
- 20% added for the index
- 20% added for the index maintenance task
- 20% reserved as LUN free space protection
93 Log Space Calculations: Formula & Calculations
Log LUN size = ((log size × number of mailboxes per database × logs per day × backup/truncation failure tolerance days) + space to support mailbox moves) / (1 − LUN free space)
Log capacity to support 3 days of truncation failure = (500 mailboxes/database × 40 logs/day × 1 MB log size) × 3 days = 58.59 GB
Log capacity to support 1% mailbox moves per week = 500 mailboxes/database × 0.01 × 2.3 GB mailbox size = 11.5 GB
Log LUN size = (58.59 GB + 11.5 GB) / (1 − 0.20) = 87.62 GB (round up to 88 GB)
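Both LUN-size formulas can be sketched together (percentages as fractions; function and parameter names are illustrative):

```python
def db_lun_size_gb(db_size_gb, index=0.20, maintenance=0.20, free=0.20):
    """Database LUN size: add index and index-maintenance overhead,
    then reserve the configured fraction of the LUN as free space."""
    return db_size_gb * (1 + index + maintenance) / (1 - free)

def log_lun_size_gb(mailboxes, logs_per_day, trunc_days,
                    move_pct, mailbox_gb, log_mb=1.0, free=0.20):
    """Log LUN size: N days of truncation-failure logs plus space
    for mailbox moves, with the same free-space protection."""
    trunc = mailboxes * logs_per_day * log_mb / 1024.0 * trunc_days
    moves = mailboxes * move_pct * mailbox_gb
    return (trunc + moves) / (1 - free)
```

db_lun_size_gb(1177) gives the 2,060 GB database LUN, and log_lun_size_gb(500, 40, 3, 0.01, 2.3) gives 87.6 GB, which rounds up to the 88 GB log LUN above.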
94 Total Capacity per Building Block
Total LUN capacity required per server = (database LUN size + log LUN size) × number of databases per server
- Database LUN capacity: 2,060 GB per LUN × 10 LUNs per server = 20,600 GB
- Log LUN capacity: 88 GB per LUN × 10 LUNs per server = 880 GB
- Total LUN capacity per server: 20,600 GB + 880 GB = 21,480 GB
95 Total Number of Disks Required
Disks required for Exchange database capacity = total database LUN size / physical disk capacity × RAID multiplication factor
Disks required for Exchange log capacity = total log LUN size / physical disk capacity × RAID multiplication factor
96 Disk Requirements Based on Capacity (2,794.5 GB usable per 3 TB disk)
Database disks required:
- RAID 1/0 (4+4): 20,600 / 2,794.5 × 2 = 7.37 × 2 = 14.74 (round up to 16 disks)
- RAID 5 (4+1): 20,600 / 2,794.5 × 1.25 = 7.37 × 1.25 = 9.2 (round up to 10 disks)
- RAID 6 (6+2): 20,600 / 2,794.5 × 1.33 = 7.37 × 1.33 = 9.8 (round up to 16 disks)
Log disks required:
- RAID 1/0 (1+1): 880 / 2,794.5 × 2 = 0.63 (round up to 2 disks)
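The capacity-side rounding, sketched the same way (the 2,794.5 GB formatted capacity of a 3 TB NL-SAS drive and the RAID factors 2, 1.25, and 1.33 come from the table above):

```python
import math

def disks_for_capacity(total_lun_gb, disk_gb, raid_factor, group_width):
    """Disks needed to hold the total LUN capacity after RAID
    overhead, rounded up to whole RAID groups."""
    disks = math.ceil(total_lun_gb / disk_gb * raid_factor)
    return math.ceil(disks / group_width) * group_width
```

disks_for_capacity(20600, 2794.5, 2, 8) reproduces the 16-disk RAID 1/0 result.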
97 Final Storage Calculation Results: Building Block Summary
Exchange databases:
- RAID 1/0 (4+4): 24 disks required for performance (IOPS), 16 disks required for capacity — best option
- RAID 5 (4+1): 35 disks required for performance, 10 disks required for capacity
- RAID 6 (6+2): 48 disks required for performance, 16 disks required for capacity
Exchange logs:
- RAID 1/0 (1+1): 4 disks required for performance, 2 disks required for capacity
Total disks per building block (taking the larger of each pair): 24 database disks + 4 log disks = 28 disks
98 Final Storage Calculation Results: Building Block Scalability
Total number of disks required for the entire 20,000-user solution in a DAG with two copies = 28 disks per building block × 8 building blocks = 224 disks total
99 Building Block Sizing and Scaling Process: Bandwidth Calculations
Array throughput (MB/s) validation for Exchange involves:
- Determining how many databases the customer will require
- Confirming that the database LUNs are evenly distributed among the back-end buses and storage processors
- Determining whether each bus can accommodate the peak Exchange database throughput
Use this calculation for the required throughput, then compare the result with the array bus throughput:
- Exchange DB throughput per bus = DB throughput × number of DBs per bus
- DB throughput = total transactional (user) IOPS per DB × 32 KB + BDM throughput per DB in MB/s
- Number of DBs per bus = the total number of active and passive databases per bus
100 Storage Bandwidth Requirements: The Process
The bandwidth validation process involves the following steps:
- Determine how many databases are in the Exchange environment
- Determine the bandwidth requirements per database
- Determine the required bandwidth per array bus
- Determine whether each bus can accommodate the peak Exchange database bandwidth; use DiskSizer for VNX or contact your local storage specialist to get the array and bus throughput numbers (DiskSizer is available through your local USPEED contact)
- Evenly distribute database LUNs among the back-end buses and storage processors; uniform distribution across the front end, back end, RAID groups/pools, and DAEs is key for best performance
- Distribute databases uniformly across the pools, and use the even-numbered SAS loops (0 and 2) for maximum performance
101 Storage Bandwidth Requirements: Calculations
Bandwidth per database (MB/s) = total transactional IOPS per database × 32 KB + estimated BDM throughput per database (MB/s)
Where:
- 32 KB is the Exchange page size
- Estimated BDM throughput per database is 7.5 MB/s for Exchange 2010 and 2.25 MB/s for Exchange 2013
Required throughput (MB/s) per bus = throughput (MB/s) per database × total number of active and passive databases per bus
102 Storage Bandwidth Requirements: Calculations
Transactional throughput per database = (500 users × 0.134 IOPS) × 32 KB = 2.1 MB/s
Throughput per database = 2.1 MB/s + 2.25 MB/s BDM = 4.35 MB/s
Required throughput per bus = 4.35 MB/s × 200 databases per bus = 870 MB/s
Example assumptions: 500 users at 0.134 IOPS per database; 200 databases per bus.
If the array supports a maximum throughput of 3,200 MB/s per bus, 200 databases can be supported from a throughput perspective.
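The bus check above, as a sketch (32 KB page size and 2.25 MB/s BDM per Exchange 2013 database, as stated earlier; names are illustrative):

```python
def required_bus_throughput_mb(users_per_db, iops_per_user, dbs_per_bus,
                               page_kb=32, bdm_mb_per_db=2.25):
    """Per-database throughput (transactional pages plus BDM) times the
    number of active and passive databases on the bus, in MB/s."""
    db_mb = users_per_db * iops_per_user * page_kb / 1024.0 + bdm_mb_per_db
    return db_mb * dbs_per_bus
```

required_bus_throughput_mb(500, 0.134, 200) gives roughly 869 MB/s (the slide rounds the per-database figure to 4.35 MB/s, giving 870), comfortably below a 3,200 MB/s bus limit.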
103 Final Design Example
- Configured dedicated storage pools for each mailbox server with 24 × 3 TB NL-SAS drives
- Each storage pool holds two database copies from different mailbox servers
- Separated Exchange log files into different storage pools
- For better storage utilization, created one storage pool with 16 × 3 TB NL-SAS drives for logs per four mailbox-server building blocks
105 Archiving – Native vs. EMC SourceOne: Archiving Capabilities (Exchange 2013 In-Place Archiving vs. EMC SourceOne)
- Comprehensive management capabilities (consistent, automated, policy-based, with proper retention and disposition): Exchange ✗, SourceOne ✓
- Full-text indexing of attachments: Exchange limited to a few file types; SourceOne supports many file types
- Single-instance storage, deduplication, and compression capabilities: SourceOne only
- Multiple content types (email, SharePoint, files): SourceOne only
- Offline access: Exchange in-place archiving is not accessible in offline mode
- Access from non-Microsoft devices: not available in Exchange
- SourceOne does not require an Exchange Enterprise CAL license
- SourceOne does not require an in-place Exchange migration
- Search results between online and offline: Exchange inconsistent, as archived emails are not searchable offline; SourceOne consistent
- Archiving policy: Exchange basic; SourceOne advanced, based on many other parameters (for example, a word in the subject field, a specific Outlook folder, with or without an attachment, attachment format, read/unread message, etc.)
106 Archiving – Native vs. EMC SourceOne: eDiscovery Capabilities (Exchange 2013 vs. EMC SourceOne)
- Comprehensive eDiscovery capabilities: Exchange limited; SourceOne comprehensive
- Legal hold: Exchange supports only a single level
- Assign custodians: Exchange ✗, SourceOne ✓
- Legal hold based on the time frame of content: SourceOne only
- Supported file types for indexing: Exchange 54 file types, which limits keyword search results; SourceOne 400 file types, for accurate keyword search results
- Proper eDiscovery workflow (assessment, culling, review and tagging, then holding): SourceOne only
- Message tagging: SourceOne only
- Message distribution among reviewers: SourceOne only
- Running complex search queries: SourceOne only
- BCC field search: SourceOne only