Microsoft Exchange Best Practices and Design Guidelines on EMC Storage


1 Microsoft Exchange Best Practices and Design Guidelines on EMC Storage
Exchange 2010 and Exchange 2013 VNX and VMAX Storage Systems EMC Global Solutions | Global Solutions Engineering October 2014 Update

2 Topics
- Exchange – What has changed
- Exchange Virtualization
- Exchange Storage Design
- Exchange DR options
- Exchange Validated Solutions
- ESI for VNX Pool Optimization Tool (fka SOAP)
- Building Block Design Example
- Exchange Archiving

3 Exchange – What has changed

4 Exchange…What has changed
Exchange 2007: 64-bit Windows; 32+ GB database cache; 8 KB block size; 1:1 DB read/write ratio; 70% reduction in IOPS from Exchange 2003
Exchange 2010: 100 GB database cache (DAG); 32 KB block size; 3:2 DB read/write ratio; 70% reduction in IOPS from Exchange 2007
Exchange 2013: 33% reduction in IOPS from Exchange 2010

5 Exchange User Profile Changes
Messages sent/received per mailbox per day | Exchange 2010 stand-alone IOPS per mailbox | Exchange 2010 mailbox resiliency IOPS per mailbox | Exchange 2013 IOPS per mailbox (active or passive)
50  | 0.050 | 0.060 | 0.034
100 | 0.100 | 0.120 | 0.067
150 | 0.150 | 0.180 | 0.101
200 | 0.200 | 0.240 | 0.134
250 | 0.250 | 0.300 | 0.168
300 | 0.300 | 0.360 | 0.201
350 | 0.350 | 0.420 | 0.235
400 | 0.400 | 0.480 | 0.268
450 | 0.450 | 0.540 | 0.302
500 | 0.500 | 0.600 | 0.335
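Both columns are linear in the message profile, so the per-mailbox figure can be sanity-checked in a couple of lines. A minimal sketch (Python; the per-message factors are inferred from the table's linear pattern, not stated in this deck):

```python
# Estimated database IOPS per mailbox, inferred from the table above.
IOPS_PER_MESSAGE = {
    "2010_standalone": 0.0010,   # e.g. 200 messages/day -> 0.200 IOPS
    "2010_resiliency": 0.0012,   # e.g. 200 messages/day -> 0.240 IOPS
    "2013":            0.00067,  # e.g. 200 messages/day -> 0.134 IOPS
}

def iops_per_mailbox(messages_per_day: int, profile: str) -> float:
    """Estimated transactional IOPS per mailbox (active or passive)."""
    return messages_per_day * IOPS_PER_MESSAGE[profile]

print(iops_per_mailbox(200, "2013"))  # 0.134
```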

6 Exchange Processor Requirements Changes
Messages sent or received per mailbox per day | Exchange 2010 active DB copy or standalone | Exchange 2013 active DB copy or standalone (MBX only) | Exchange 2013 active DB copy or standalone (multi-role) | Exchange 2010 passive DB copy | Exchange 2013 passive DB copy
50  | 1  | 2.13  | 2.66  | 0.15 | 0.69
100 | 2  | 4.25  | 5.31  | 0.30 | 1.37
150 | 3  | 6.38  | 7.97  | 0.45 | 2.06
200 | 4  | 8.50  | 10.63 | 0.60 | 2.74
250 | 5  | 10.63 | 13.28 | 0.75 | 3.43
300 | 6  | 12.75 | 15.94 | 0.90 | 4.11
350 | 7  | 14.88 | 18.59 | 1.05 | 4.80
400 | 8  | 17.00 | 21.25 | 1.20 | 5.48
450 | 9  | 19.13 | 23.91 | 1.35 | 6.17
500 | 10 | 21.25 | 26.56 | 1.50 | 6.85
(A multi-role megacycle figure is N/A for Exchange 2010.)
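As a rough illustration of how these figures feed CPU sizing (a sketch; the per-message factors below are read off the Exchange 2013 columns above, and the 2,000-megacycles-per-core rating is a hypothetical stand-in for the target CPU's adjusted rating):

```python
# Megacycles per message/day, taken from the Exchange 2013 columns above.
MCYCLES_ACTIVE_MBX = 0.0425   # active DB copy, MBX-only role (200 -> 8.50)
MCYCLES_PASSIVE    = 0.0137   # passive DB copy (200 -> 2.74)

def required_megacycles(active, passive, messages_per_day):
    """Total megacycles a server needs for its active and passive mailboxes."""
    return (active * messages_per_day * MCYCLES_ACTIVE_MBX
            + passive * messages_per_day * MCYCLES_PASSIVE)

# Example: 2,500 active + 2,500 passive mailboxes at 200 messages/day.
total = required_megacycles(2500, 2500, 200)   # 28,100 megacycles
print(total, "->", total / 2000, "cores")      # ~14 cores at 2,000 Mcycles/core
```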

7 Exchange I/O Characteristics
I/O type | Exchange 2007 | Exchange 2010 | Exchange 2013
Database I/O | 8 KB random I/O | 32 KB random I/O | 32 KB random I/O
Background Database Maintenance (BDM) I/O | N/A | 256 KB sequential read I/O | 256 KB sequential read I/O
Log I/O | Varies in size from bytes to the log buffer size (1 MB) | Varies in size from 4 KB to the log buffer size (1 MB) | Varies in size from 4 KB to the log buffer size (1 MB)

8 Exchange 2010/2013 mailbox database I/O read/write ratios
Messages sent/received per mailbox per day | Stand-alone databases | Databases participating in mailbox resiliency
50–250  | 1:1 | 3:2
300–500 | 2:3 | 3:2

9 Understanding Exchange I/O
Exchange 2010/2013 I/Os to the database (.edb) are divided into two types:
- Transactional I/O (aka user I/O): database volume I/O (database reads and writes) and log volume I/O (log reads and writes)
- Non-transactional I/O: Background Database Maintenance (BDM) checksumming
NOTE: Only database I/Os are measured when sizing storage and during Jetstress validation.
For more details, see "Understanding Database and Log Performance Factors" on Microsoft TechNet.

10 Background Database Maintenance (BDM)
BDM is the Exchange Server 2010/2013 database maintenance process that includes online defragmentation and online database scanning.
- Both active and passive database copies are scanned
- On the active copy, scanning can be scheduled to run during the online maintenance window (the default is 24 x 7)
- The passive copy is hardcoded to a 24 x 7 scan
- Jetstress has no concept of a passive copy; all copies are treated as active
Possible BDM-related issues (mostly for Exchange 2010): the bandwidth/throughput required for BDM and BDM IOPS can expose too few FE ports, too few BE ports, or a non-optimal RAID configuration.

 | Exchange 2010 | Exchange 2013
Read I/O size | 256 KB | 256 KB
Database scan completion | 1 week | every 4 weeks
IOPS per database | 30 | 9
Bandwidth | 7.5 MB/s* | 2.25 MB/s*

11 Exchange Content Index Considerations
Content indexing space considerations:
- In Exchange 2010, content index space is estimated at about 10% of the database size.
- In Exchange 2013, content index space is estimated at about 20% of the database size. An additional 20% must be added for content indexing maintenance tasks (such as the master merge process) to complete.
Reference: "Sizing Exchange 2013 Deployments" (Microsoft Exchange Team Blog; exchange-2013-deployments.aspx)

12 Exchange High Availability
Key terminology:
Active Manager — An internal Exchange component that runs inside the Microsoft Exchange Replication service and is responsible for failure monitoring and corrective action through failover within a database availability group (DAG).
AutoDatabaseMountDial — A property setting of a Mailbox server that determines whether a passive database copy automatically mounts as the new active copy, based on the number of log files missing from the copy being mounted.
Continuous replication - block mode — As each update is written to the active database copy's active log buffer, it is also shipped to a log buffer on each of the passive mailbox copies. When the log buffer is full, each database copy builds, inspects, and creates the next log file in the generation sequence.
Continuous replication - file mode — Closed transaction log files are pushed from the active database copy to one or more passive database copies.
Database availability group (DAG) — A group of up to 16 Exchange 2013 Mailbox servers that hosts a set of replicated databases.
Database mobility — The ability of an Exchange 2013 mailbox database to be replicated to and mounted on other Exchange 2013 Mailbox servers.
Datacenter Activation Coordination mode — A DAG property that, when enabled, forces the Microsoft Exchange Replication service to acquire permission to mount databases at startup.

13 Exchange High Availability
Key terminology (continued):
Disaster recovery — Any process used to manually recover from a failure, whether it affects a single item or an entire physical location.
Exchange third-party replication API — An Exchange-provided API that enables the use of third-party synchronous replication for a DAG instead of continuous replication.
High availability — A solution that provides service availability, data availability, and automatic recovery from failures that affect the service or data (such as a network, storage, or server failure).
Lagged mailbox database copy — A passive mailbox database copy that has a log replay lag time greater than zero.
Mailbox database copy — A mailbox database (.edb file and logs) that is either active or passive.
Mailbox resiliency — The name of the unified high availability and site resilience solution in Exchange 2013.
Managed availability — A set of internal processes, made up of probes, monitors, and responders, that incorporates monitoring and high availability across all server roles and protocols.

14 Exchange High Availability
Key terminology (continued):
Switchover — A manual activation of one or more database copies.
Failover — An automatic activation of one or more database copies after a failure.
Safety Net — Formerly known as the transport dumpster, a feature of the transport service that stores a copy of all messages for a configurable number of days (the default is 2 days).
Shadow redundancy — A transport server feature that provides redundancy for messages for the entire time they are in transit.
Site resilience — A configuration that extends the messaging infrastructure to multiple Active Directory sites to provide operational continuity for the messaging system if a failure affects one of the sites.

15 Database Availability Group
Database Availability Group (DAG)
- Base component of the high availability and site resilience framework built into Exchange 2010/2013
- A group of servers participating in a Windows failover cluster, with a limit of 16 servers and 100 databases
- Any server in a DAG can host a copy of any database in the DAG
- Each DAG member can hold one copy of each database (up to 16 copies); only one copy is active at a time, and the others are passive or lagged
- No manual configuration of cluster services is required; Exchange 2010/2013 handles the entire installation
- During site DR, manual work is needed and scripts must be run
- A DAG does not provide recovery from logical database corruption
[Diagram: three-server DAG (MBX1, MBX2, MBX3) hosting DB1–DB3; each database is active (A) on one server and passive (P) on the other two]

16 Exchange High Availability
Guidance for deploying DAGs
- Ensure all elements of the design have resilient components: storage processors, connectivity to the servers, storage spindles, and multiple arrays in DR scenarios
- DAG copies should be stored on separate physical spindles, provided all resiliency is achieved at the source site
- On SANs, consider the performance of the passive and active copies within one array

17 Exchange Virtualization

18 Exchange 2010/2013 virtualization
- Virtualizing Exchange is supported on Hyper-V, VMware, and other hypervisors
- Hypervisor vendors must participate in the Windows Server Virtualization Validation Program (SVVP)
- EMC recommends virtualizing Exchange for most deployments, depending on customer requirements

19 Exchange virtualization
VM placement considerations
- Deploy VMs with the same role across multiple hosts
- Do not deploy VMs from the same DAG on the same host
[Diagram: two hosts, each running one VM from DAG1 and one from DAG2]

20 Exchange virtualization
Configuration best practices
- Physical sizing still applies: hypervisor servers must accommodate the guests they will support
- DAG copies must be spread across physical hosts to minimize the outage caused by a physical server failure
- Know your hypervisor's limits: 256 SCSI disks per host (or cluster), processor limits (vCPUs per virtual machine), and memory limits
- Be aware of hypervisor CPU overhead: Microsoft Hyper-V ~10-12%, VMware vSphere ~5-7%
- Core Exchange design principles still apply: design for performance, high availability, and user workloads

21 Exchange virtualization
Configuration best practices – hypervisor and VMs
Hypervisor server:
- Have at least 4 paths (HBA/CNA/iSCSI) to the storage
- Install EMC PowerPath for maximum throughput, load balancing, path management, and I/O path failure detection
- Use multiple NICs; segregate management and client traffic from Exchange replication traffic
- Disable hypervisor-based auto-tuning features; no dynamic memory
CPU and memory:
- Dedicate/reserve CPU and memory for the Mailbox virtual machines and do not overcommit; a 2:1 pCPU-to-vCPU ratio is acceptable, 1:1 is best practice
VM migrations:
- Always migrate live or completely shut down virtual machines

22 Exchange virtualization
Configuration best practices - storage
- Exchange storage should be on spindles separate from guest OS physical storage
- Exchange storage must be block-level; network attached storage (NAS) volumes are not supported (no NFS, SMB other than SMB 3.0, or any other NAS technology)
- Storage must be fixed VHD/VHDX/VMDK, SCSI pass-through/RDM, or iSCSI

23 Exchange virtualization
Configuration best practices - SMB 3.0 support
- Supported only in virtualized configurations: VHDs can reside on SMB 3.0 shares presented to the Hyper-V host
- No support for UNC paths for Exchange database and log volumes (for example, \\server\share\db1\db1.edb)

24 Exchange virtualization
Supported SMB 3.0 Configuration Example

25 Exchange virtualization
Configuration best practices – Hyper-V storage
Virtual SCSI (pass-through or fixed disk):
- VHD on host – recommended for OS and program files
- Pass-through disk on host – recommended for Exchange database and log volumes
iSCSI:
- iSCSI direct from a guest virtual machine, or iSCSI initiator on the host with the disk presented to the guest as pass-through
- The iSCSI initiator from the guest performs well and is easier to configure
Multipathing: MPIO or EMC PowerPath – PowerPath recommended

26 Exchange virtualization
Configuration best practices – VMware: VMFS or RDM trade-offs
VMFS:
- A volume can host many virtual machines (or can be dedicated to one virtual machine)
- Increases storage utilization; provides better flexibility and easier administration and management
- Cannot be used for hardware-enabled VSS backups
- Not supported for shared-disk clustering
- Full support for VMware Site Recovery Manager
RDM:
- Maps a single LUN to one virtual machine; isolated I/O
- More LUNs make it easier to hit the limit of 256 LUNs that can be presented to an ESX server
- Required for hardware VSS and replication tools that integrate with Exchange databases
- Large third-party ecosystem with V2P products to aid in certain support situations; can help reduce physical-to-virtual migration time
- Required for shared-disk clustering

27 Exchange virtualization
References
For Hyper-V:
- Best Practices for Virtualizing Exchange Server 2010 with Windows Server 2008 R2 Hyper-V
- Best Practices for Virtualizing and Managing Exchange 2013
For VMware:
- Microsoft Exchange 2010 on VMware Best Practices Guide
- Microsoft Exchange 2010 on VMware Design and Sizing Examples
- Microsoft Exchange 2013 on VMware Best Practices Guide
- Microsoft Exchange 2013 on VMware Availability and Recovery Options
- Microsoft Exchange 2013 on VMware Design and Sizing Guide

28 Exchange Storage Design

29 Exchange Storage Options: DAS or SAN?
EMC offers both options:
- Small, low-cost: DAS
- Large-scale efficiency: SAN
- Best long-term TCO: SAN
- Virtualization ready: SAN
Platforms: VNXe, VNX, VMAX, XtremIO
Understand which storage type best meets the design requirements: physical or virtual? Dedicated to Exchange or shared with other applications? Follow EMC proven guidance for each platform.

30 Exchange Server IOPS Per Disk
Use the following table for IOPS-per-drive values when calculating disk requirements for Exchange 2010/2013*:

Disk type | Database IOPS per disk, VNX (random workload) | Database IOPS per disk, VMAX (random workload) | Log IOPS per disk, VNX and VMAX (sequential workload)
7.2k rpm NL-SAS/SATA | 65 | 60 | 180
10k rpm SAS/FC | 135 | 130 | 270
15k rpm SAS/FC | — | — | 450
Flash | 1250 | 2000 | —

*Recommendations may change based on future test results

31 I/O Characteristics for Various RAID Types
Characteristic | RAID 1/0 | RAID 5 | RAID 6
Random read | Excellent | Excellent | Excellent
Random write | Excellent | Moderate | Poor
Sequential I/O | Good | Good | Good
RAID write overhead | 2 | 4 | 6
Disk capacity utilization(1) | 1/2 | 4/5 (in 4+1 R5) | 4/6 (in 4+2 R6)
Minimum drives required(2) | 2 | 3 | 4

RAID 1/0 provides data protection by mirroring data onto another disk. This produces better performance and minimal or no performance impact in the event of a disk failure. In general, RAID 1/0 is the best choice for Exchange Server, especially if SATA or NL-SAS drives are used.
RAID 5 stripes data across disks in large stripe sizes, with parity information stored across all disks so that data can be reconstructed; this protects against a single-disk failure. Because of its high write penalty, RAID 5 is most appropriate in environments with predominantly read I/O and large databases. With flash drives this performance concern is largely eliminated, and most flash-based environments can be configured as RAID 5 to support high I/O requirements at very low disk latency.
RAID 6 also stripes data across disks in large stripe sizes, but stores two sets of parity information so that data can be reconstructed if required. RAID 6 can accommodate the simultaneous failure of two disks without data loss.
(1) Depends on the size of the RAID group
(2) Depends on the array

32 Exchange Server Design Methodology
Phase 1: Gather requirements
- Total number of users, number of users per server, user profile and mailbox size, user concurrency
- High availability requirements (DAG configuration)
- Backup and restore SLAs
- Third-party software in use (archiving, BlackBerry, etc.)
Phase 2: Design the building block and storage architecture
- Design the building block using Microsoft and EMC best practices
- Design the storage architecture using EMC best practices
- Use EMC Proven Solutions whitepapers and Exchange Solution Reviewed Program (ESRP) documentation
Phase 3: Validate the design
- Use the Microsoft Exchange validation tools: Jetstress for storage validation; LoadGen for user workload validation and end-to-end solution validation

33 Exchange Storage Design
Exchange building-block design methodology
What is a building block? A building block represents the amount of resources required to support a specific number of Exchange users on a single server or VM.
Building blocks are based on requirements and include:
- Compute requirements (CPU, memory, and network)
- Disk requirements (database, log, and OS)
Why use the building-block approach?
- It can easily be reproduced to support all users with similar user-profile characteristics
- It makes Exchange environment additions much more straightforward, which helps with future growth
- It has been very successful in many real-world customer implementations
See the appendix for the building-block design process.

34 Exchange Storage Design
Exchange sizing tool options
- Microsoft Exchange Server Role Requirements Calculator (separate Exchange 2010 and Exchange 2013 versions)
- EMC Exchange Designer: https://community.emc.com/docs/DOC-13037
- VSPEX Sizing Tool
- Array-specific sizing tools, e.g., VNX Disksizer
- Manual calculations (for advanced administrators)

35 Exchange Storage Design
Tools for performance and scalability evaluation
- Exchange Jetstress for storage validation (Exchange 2010 and Exchange 2013 versions)
- Exchange Load Generator (LoadGen) for end-to-end environment validation; must be used in an isolated lab only (Exchange 2010 and Exchange 2013 versions)

36 Storage Design Validation
Exchange Solution Reviewed Program (ESRP) results
- A Microsoft program for validating storage vendor designs with Exchange
- The vendor runs multiple Jetstress tests based on requirements for performance, stress, backup to disk, and log file replay
- Results are reviewed and approved by Microsoft

37 Exchange Storage Design
Tools for performance and scalability evaluation
Exchange Jetstress:
- Uses Exchange executables to simulate I/O load (use the same version as the target deployment)
- Initialized and executed during pre-production, before Exchange Server is installed
- Passing the throughput and mailbox profile tests gives confidence that the storage design will perform as designed
Exchange Load Generator (LoadGen) (optional):
- Validation must be performed in an isolated lab
- Produces a simulated client workload against a test Exchange deployment
- Used to estimate the number of users per server and validate the Exchange deployment
- LoadGen testing can take many weeks to configure and populate databases

38 Exchange Storage Design
Storage sizing guidance
- Do not rely solely on automated tools when sizing your Exchange environment
- Put time and effort into your calculations, and support your design with factual evidence rather than unverified numbers
- Size Exchange based on I/O, mailbox capacity, and bandwidth requirements
- Factor in other overhead such as archiving, snapshot protection, virus protection, mobile devices, and a risk margin
- Confirm Exchange storage requirements with array-specific sizing tools

39 Storage Design Guidance
Best practices
- Isolate the Exchange workload on its own set of spindles, away from other workloads, to guarantee performance
- When sizing, always calculate both I/O requirements and capacity requirements
- Separate the databases and logs onto different volumes
- Deploy DAG copies on separate physical spindles
- Databases up to 2 TB are acceptable when a DAG is used; the exact size should be based on customer requirements
- Ensure that your solution can support LUNs larger than 2 TB

40 Storage Design Guidance
Best practices - continued
- Consider backup and restore times when calculating the database size
- Spread the load as evenly as possible across array resources (VMAX engines, VNX SPs, back-end buses, etc.)
- Always format Windows NTFS volumes for databases and logs with a 64 KB allocation unit size
- Use an Exchange building-block design approach whenever possible

41 Storage Design Guidance - VNX
Pools and RAID groups
- Either method works well; thick pools and RAID groups provide the same performance
- RAID groups are limited to 16 disks per group; pools can support many more disks
- Pools are more efficient and easier to manage
- Use pools if you plan to use advanced features such as FAST VP or VNX Snapshots
- A storage pool can support a single building block or multiple building blocks, based on customer requirements
- Design and expand pools using the correct multiplier for best performance (R1/0 4+4, R5 4+1, R6 6+2)

42 Storage Design Guidance - VNX
Thick or thin LUNs?
- Both thick and thin LUNs can be used for Exchange storage (databases and logs)
- Thick LUNs are recommended for heavy workloads with high-IOPS user profiles
- Thin LUNs are recommended for light-to-medium workloads with low-IOPS user profiles; their benefit is a significantly reduced initial storage footprint
- With thin LUNs, use the VNX Pool Optimizer before formatting volumes, and consider enabling FAST Cache or FAST VP for fast metadata promotions

43 FAST VP and FAST Cache
FAST VP: a VMAX and VNX feature that automates the identification of data for allocation or reallocation across the various performance and capacity tiers (flash, FC/SAS, NL-SAS) within the storage array.
FAST Cache: a VNX performance optimization feature that boosts frequently accessed data by using flash drives to extend cache capacity.
[Diagram: virtualized Exchange, SQL Server, and SharePoint servers sharing a tiered array with FAST VP and FAST Cache]

44 FAST VP vs. FAST Cache on VNX Storage
FAST VP | FAST Cache
Leverages pools to provide sub-LUN tiering, enabling the use of multiple storage tiers simultaneously. | Enables flash drives to extend the existing caching capacity of the storage system.
Uses 1 GB chunks (256 MB in next-generation VNX). | Uses 64 KB chunks.
Local feature – per storage pool; assured performance per pool. | Global feature – per storage array; a shared resource, so performance for one pool is not guaranteed under FAST Cache contention.
Moves data between storage tiers based on a weighted average of access statistics collected over a period of time. | Copies data from hard disks to flash disks when it is accessed frequently.
Uses a relocation window to periodically make tiering adjustments (the default is an 8-hour window each day). | Adapts continuously to changes in workload.
Primarily designed to improve usability and reduce TCO, though it can also improve performance. | Designed primarily to improve performance.

45 VNX Considerations for FAST VP and FAST Cache
When the number of flash drives is limited, use them to create FAST Cache first:
- FAST Cache can benefit multiple pools in the storage system
- FAST Cache uses 64 KB chunks, much smaller than the 1 GB (or 256 MB) chunks in FAST VP, which yields higher performance benefits and faster reaction to changing usage patterns
Use flash drives to create a FAST VP performance tier for a specific pool when you need to guarantee the performance of certain mission-critical data:
- A FAST VP tier is dedicated to one storage pool and cannot be shared with other pools in the same array

46 Exchange Design with FAST VP
Configuration recommendations (VNX and VMAX)
- Separate databases from logs, because the workloads differ: database I/O is random with skew (high FAST VP benefit); log I/O is sequential without skew (no FAST VP benefit)
- Use dedicated pools: they provide a better SLA guarantee, act as fault domains, and give the most deterministic behavior
- Use thick pool LUNs for the highest performance on VNX; thin pool LUNs are acceptable with optimization
- Use thin LUNs on VMAX

47 Tools for FAST VP
- Tier Advisor for sizing: historical performance data from the storage arrays is needed
- Workload Performance Assessment Tool: shows a FAST VP heat map; for more information, refer to https://emc.mitrend.com
- VSPEX Sizing Tool (for VNX; see ebook/vspex-solutions.htm)
- EMC Professional Services and qualified partners can assist in properly sizing tiers and pools to maximize the investment

48 Exchange Design with FAST Cache
VNX only
FAST Cache usage:
- Pools with thin LUNs: recommended, for metadata tracking
- Pools with thin and thick LUNs where VNX Snapshots are used: required
- Pools with thick LUNs: not required, but not restricted either
FAST Cache sizing guidance:
- Rule of thumb: provision 1 GB of FAST Cache for every 1 TB of Exchange dataset
- Monitor and adjust the FAST Cache size; your mileage may vary
- Enable FAST Cache only on pools containing database LUNs

49 Exchange Storage design - VMAX
Design best practices
- Ensure that the initial disk configuration can support the I/O requirements
- A thin pool can support a single Exchange building block or multiple building blocks, depending on customer requirements
- Use Unisphere for VMAX to monitor thin pool utilization and prevent thin pools from running out of space
- Install the Microsoft hotfix KB on the Windows Server 2012 hosts in your environment

50 Exchange Storage design - VMAX
Design best practices
- Use Symmetrix Virtual Provisioning
- Database and log volumes can share the same disks, but separate them into different LUNs on the same hosts
- For optimal Exchange performance, use striped meta volumes

51 VMAX FAST VP with Exchange
Design best practices
When designing FAST VP for Exchange 2010/2013 on VMAX, follow these guidelines:
- Separate databases and logs onto their own volumes; database and log volumes can share the same disks
- Exclude transaction log volumes from the FAST VP policy, or pin all log volumes to the tier on which they were created
- Select "Allocate by FAST Policy" to let FAST VP use all tiers for new allocations based on performance and capacity restrictions (a feature introduced in the Enginuity™ 5876 code)
- When using FAST VP with an Exchange DAG, do not place copies of the same database in the same pool on the same disks

52 XtremCache with Exchange
Consider XtremCache for Exchange if:
- You have an I/O-bound Exchange solution
- You are unsure about the anticipated workload
- You need to guarantee high performance and low latency for specific users (VIP servers, databases, and so on)
XtremCache is proven to improve Exchange 2010 performance by:
- Reducing read latencies and RPC latencies
- Increasing I/O throughput
- Eliminating almost all high-latency spikes
- Providing more improvement as the workload increases
- Reducing writes to back-end storage through XtremCache deduplication

53 XtremCache with Exchange
Configuration recommendations
The XtremCache PCI flash card can be installed in:
- A physical Exchange Mailbox server
- A hypervisor server hosting Exchange Mailbox virtual machines (VMware or Hyper-V)
Enable XtremCache acceleration on database volumes only; do not enable it on log volumes (a sequential workload gains no performance benefit).
XtremCache sizing guidance: for a 1,000 GB working dataset, configure a 10 GB XtremCache device.

54 XtremCache with Exchange
Configuration recommendations
When implementing XtremCache with VMware vSphere, consider the following:
- The size of the PCI cache card to deploy
- The number of Exchange virtual machines on each vSphere host that will use XtremCache
- The Exchange workload characteristics (read:write ratio, user profile type)
- The Exchange dataset size
The greatest benefit is achieved when all reads from a working dataset are cached.

55 XtremCache with Exchange
Configuration recommendations
When adding an XtremCache device to an Exchange virtual machine:
- Set the cache page size to 64 KB and the maximum I/O size to 64 KB (so BDM I/O will not be cached)
- Use the VSI plug-in or the XtremCache CLI to set these values when adding the cache device to a virtual machine:
vfcmt add -cache <cache_device> -set_page_size 64 -set_max_io_size 64

56 XtremCache with Exchange
Configuration recommendations (deduplication)
- Evaluate your workload before enabling deduplication for accelerated Exchange LUNs, and consider the CPU overhead it adds
- Set the deduplication gain based on workload characteristics (see the helper sketch below):
  - If the observed deduplication ratio is less than 10%, EMC recommends turning it off (or setting it to 0%), which extends the life of the cache device
  - If the observed ratio is over 35%, EMC recommends raising the deduplication gain to match the observed ratio
  - If the observed ratio is between 10% and 35%, EMC recommends leaving the deduplication gain as it is
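The three bands above map to a simple decision helper (illustrative only; the observed ratio comes from XtremCache monitoring, not from this snippet):

```python
def dedup_gain_action(observed_ratio_pct: float) -> str:
    """Apply the EMC three-band recommendation for the deduplication gain."""
    if observed_ratio_pct < 10:
        return "turn deduplication off (set gain to 0%) to extend cache device life"
    if observed_ratio_pct > 35:
        return "raise the deduplication gain to match the observed ratio"
    return "leave the deduplication gain as it is"

print(dedup_gain_action(7))  # turn deduplication off ...
```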

57 Exchange DR Options

58 Exchange DR Options: EMC's Common Solutions for Exchange DR
Replication method | Technology | Best for
Stretched (cross-site) DAG | Native Exchange host-based replication | Small environments, local replication
Third-party replication with manual Exchange recovery (database portability) | EMC RecoverPoint or SRDF | Mixed environments with large workloads and strict SLAs
Third-party replication with automated Exchange recovery | EMC RecoverPoint or SRDF with VMware SRM | VMware shops with large workloads and strict SLAs

59 Replication Matrix
[Comparison matrix of SRDF/S, SRDF/A, MirrorView/S, MirrorView/A, RecoverPoint synchronous, RecoverPoint asynchronous, SYNC-API, and DAG replication, covering: replication technology (array, SAN, or host based); heterogeneous storage support; heterogeneous application support; active bi-directional replication; latency impact on applications (minimum, medium, high); replication type (sync or async); automated restart; coexistence with VSS replication; zero data loss (RPO = 0); time to recover (RTO in seconds or minutes); replication interval (continuous, continuous I/O pools, or log shipping); replication distance (data center, 200 km, unlimited); ability to resynchronize data incrementally; and link cost requirement]

60 Exchange Validated Solutions
- Exchange 2013 ESRP
- Exchange with XtremCache
- Cross-site DAG vs. RecoverPoint
- Automated DR with RecoverPoint and VMware SRM
- VSPEX Solutions
- Cloud Solutions with EMC Hybrid Cloud

61 Exchange 2013 ESRP on VNX5700

62 Exchange with XtremCache Solution

63 Cross-site DAG vs. RecoverPoint

64 Automated DR with RecoverPoint and SRM

65 VSPEX Solutions for Exchange

66 Exchange with EMC Hybrid Cloud

67 Exchange Resources
- Exchange Storage Best Practices and Design Guidelines for EMC Storage whitepaper: design-guid-emc-storage.pdf
- EMC Community Network: https://community.emc.com/community/connect/everything_microsoft
- EMC and partner Exchange 2010 tested virtualized solutions
- Exchange Solution Reviewed Program (ESRP) submissions
- Exchange Mailbox Server Storage Design (Microsoft TechNet)

68 Additional References
- Exchange virtualization supportability guidance
- Understanding Exchange Performance
- Server Virtualization Validation Program
- Exchange 2010 EMC-tested OEM solutions (on Hyper-V): 20,000 users on EMC storage with virtual provisioning; 32,400 users on EMC storage with EMC REE (us/library/hh145600(v=exchg.141).aspx)

69 ESI for VNX Pool Optimization Tool

70 ESI For VNX Pool Optimization Tool
ESI for VNX Pool Optimization (formerly known as the SOAP tool) improves thick and thin LUN performance by ensuring a balanced allocation of data across all available disks and by optimizing access to that data on an ongoing basis. It currently supports only the VNX next-generation series storage arrays; for older VNX platforms, customers are advised to use the original SOAP tool.
Product download: https://download.emc.com/downloads/DL50658_Storage-Integrator-(ESI)-for-VNX-Pool-Optimization-tool.zip
Release notes: https://support.emc.com/docu50668_EMC_Storage_Integrator_for_VNX_Pool_Optimization_Tool_Release_Notes.pdf?language=en_US

71 What Does This Tool Do?
The tool optimizes pool-based LUNs by pre-allocating slices in the pool evenly across all disks, private RAID groups, and LUNs. This is the best option for any application that requires deterministic high performance across all LUNs in the pool equally.

72 When To Use The Optimization Tool
Use the tool when maximum application performance is required:
- To achieve the best performance for pool-based LUNs (primarily thin LUNs)
- To eliminate performance issues and successfully pass Jetstress during Exchange pre-deployment storage validation
- To mitigate the "Jetstress effect"
Applications that benefit from the tool: Microsoft Exchange, Microsoft SQL Server, Oracle

73 Jetstress Initialization Process
How the Jetstress initialization phase works:
- Jetstress creates the first database
- It then creates the other databases by copying the first database to them concurrently

74 What is the "Jetstress effect"?
The Jetstress initialization process (data population) results in imbalances across the underlying virtual disks in a pool.
How does the issue surface? During Jetstress testing, the first database on the Exchange server will:
- Experience higher latencies than the others when the LUN is thick
- Experience lower latencies than the others when the LUN is thin

75 Looking under the covers… Slice Maps
[Screenshots: pool slice maps, without optimization vs. with optimization]

76 Prerequisites
- VNX OE for Block Release 33 SP1 or later
- Before using the tool, LUNs must be created through ESI PowerShell with specific parameters so that the tool can optimize them*
- All LUNs in the pool must be the same size
- All LUNs in the pool must be optimized at the same time
*It is also possible to create the LUNs with NaviSecCLI using undocumented switches

77 Options for Thick pool LUNs provisioning
VNX OE for Block Release 32 (Inyo) and later offers two options for thick pool LUN provisioning:
- For good, optimal performance – no SOAP is necessary; use default pool LUN provisioning via Unisphere, NaviSecCLI, EMC ESI, or EMC VSI
- For best performance (maximum IOPS) – use NaviSecCLI to turn off pre-provisioning, run SOAP, and then re-enable pre-provisioning

78 SOAP Utility for VNX R32 – Where and How?
- The old SOAP utility is available on the EMC Online Support site: enter "soap" in the search box and select "Support Tools"
- Must be used with CX4/VNX Inyo (OE 5.32) only
- Supports thick LUN optimization only
- The zip file contains the tool, step-by-step documentation, and a demo video

79 Building Block Design Example

80 Exchange Mailbox Server Storage Design Methodology
Phase 1: Gather requirements
- Total number of users, number of users per server, user profile and mailbox size, user concurrency
- High availability requirements (DAG configuration)
- Backup and restore SLAs
- Third-party software in use (archiving, BlackBerry, etc.)
Phase 2: Design the building block and storage architecture
- Design the building block using Microsoft and EMC best practices
- Design the storage architecture using EMC best practices
- Leverage EMC Proven Solutions whitepapers and Exchange Solution Reviewed Program (ESRP) documentation
Phase 3: Validate the design
- Use the Microsoft Exchange validation tools: Jetstress for storage validation; LoadGen for user workload validation and end-to-end solution validation

81 Requirements Example
Item | Value
Exchange version | Exchange 2013
Total number of active users (mailboxes) | 20,000
Site resiliency requirements | Single site
Storage infrastructure | SAN
Type of deployment (physical or virtual) | Virtual (VMware vSphere)
HA requirements | One DAG with two database copies
Mailbox size limit | 2 GB maximum quota
User profile | 200 messages per user per day (0.134 IOPS)
Target average message size | 75 KB
Outlook mode | Cached mode, 100 percent MAPI
Number of mailbox servers | 8
Number of mailboxes per server | 5,000 (2,500 active / 2,500 passive)
Number of databases per server | 10
Number of users per database | 500
Deleted items retention (DIR) period | 14 days
Log protection buffer (against log truncation failure) | 3 days
BDM configuration | Enabled 24 x 7
Database read/write ratio | 3:2 (60/40 percent) in a DAG configuration
User concurrency requirements | 100 percent
Third-party software that affects space or I/O | Storage snapshots for data protection
Disk type | 3 TB NL-SAS (7,200 rpm)
Storage platform | VNX

82 Building Block design Define and design Building Block In our example we are defining a building block as: A mailbox server that will support 5,000 users 2,500 users will be active during normal runtime and the other 2,500 users will be passive until a switchover from another mailbox server occurs. Each building block will support two database copies.

83 Building block sizing and scaling process
- Perform calculations for IOPS requirements
- Perform calculations for capacity requirements based on different RAID types
- Determine the best option
- Scale the building block: multiple building blocks may be combined to create the final configuration and storage layout (pools or RAID groups)

84 Building block sizing and scaling process
Front-end IOPS ≠ back-end IOPS:
- Front-end IOPS = total Exchange Mailbox server IOPS
- Back-end IOPS = storage array IOPS (including the RAID penalty)
Understand disk IOPS by RAID type: the front-end Exchange application workload is translated into a different back-end disk workload depending on the RAID type in use (see the sketch below).
- Reads are unaffected by RAID type: 1 application read I/O = 1 back-end read I/O
- For random writes like Exchange:
  - RAID 1/0: 1 application write I/O = 2 back-end write I/Os
  - RAID 5: 1 application write I/O = 4 back-end disk I/Os (2 reads + 2 writes)
  - RAID 6: 1 application write I/O = 6 back-end disk I/Os (3 reads + 3 writes)
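A minimal sketch of the translation (the write penalties are the RAID overheads listed above):

```python
# Translate front-end (application) IOPS into back-end (disk) IOPS.
RAID_WRITE_PENALTY = {"RAID 1/0": 2, "RAID 5": 4, "RAID 6": 6}

def backend_iops(frontend_iops: float, read_pct: float, raid: str) -> float:
    """Reads pass through 1:1; writes are multiplied by the RAID penalty."""
    reads = frontend_iops * read_pct
    writes = frontend_iops * (1 - read_pct)
    return reads + RAID_WRITE_PENALTY[raid] * writes

# The building block that follows: 965 front-end IOPS at a 3:2 (60/40) ratio.
print(backend_iops(965, 0.60, "RAID 1/0"))  # 1351.0
```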

85 Database IOPS requirements
Formula and calculations
Total transactional IOPS = IOPS per mailbox × mailboxes per server × (1 + Microsoft recommended overhead %)
Total transactional IOPS = 0.134 IOPS per user × 5,000 users × 1.20 (20% Microsoft recommended overhead) = 804 IOPS
Total front-end IOPS = total transactional IOPS × (1 + EMC required overhead %)
Total front-end IOPS = 804 × 1.20 (20% EMC required overhead) = 965 IOPS (rounded up from 964.8)

86 Database Disks requirements for Performance (IOPS)
Formula
Disks required for Exchange database IOPS = (total back-end database read IOPS + total back-end database write IOPS) / Exchange random IOPS per disk
Where:
Total back-end database read IOPS = total front-end IOPS × % read IOPS
Total back-end database write IOPS = RAID write penalty × total front-end IOPS × % write IOPS

87 Database Disks requirements for Performance (IOPS)
Calculations
RAID option | RAID penalty | Disks required
RAID 1/0 (4+4) | 2 | (965 × 0.60) + 2 × (965 × 0.40) = 579 + 772 = 1,351; 1,351 / 65 = 21 (round up to 24 disks)
RAID 5 (4+1) | 4 | (965 × 0.60) + 4 × (965 × 0.40) = 579 + 1,544 = 2,123; 2,123 / 65 = 33 (round up to 35 disks)
RAID 6 (6+2) | 6 | (965 × 0.60) + 6 × (965 × 0.40) = 579 + 2,316 = 2,895; 2,895 / 65 = 45 (round up to 48 disks)
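The same arithmetic scripted end to end (a sketch; 65 IOPS per disk is the 7.2k rpm NL-SAS value from the IOPS-per-disk table, and the final round-up is to whole RAID groups as in the table above):

```python
import math

# (write penalty, disks per RAID group) for each option in the table.
RAID = {"RAID 1/0 (4+4)": (2, 8), "RAID 5 (4+1)": (4, 5), "RAID 6 (6+2)": (6, 8)}

def db_disks_for_iops(frontend=965, read_pct=0.60, iops_per_disk=65):
    for name, (penalty, group) in RAID.items():
        backend = frontend * read_pct + penalty * frontend * (1 - read_pct)
        disks = math.ceil(backend / iops_per_disk)
        rounded = math.ceil(disks / group) * group
        print(f"{name}: {backend:.0f} back-end IOPS -> {disks} -> {rounded} disks")

db_disks_for_iops()
# RAID 1/0 (4+4): 1351 back-end IOPS -> 21 -> 24 disks
# RAID 5 (4+1):   2123 back-end IOPS -> 33 -> 35 disks
# RAID 6 (6+2):   2895 back-end IOPS -> 45 -> 48 disks
```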

88 Transactional logs IOPS requirements
Formula and calculations
Disks required for Exchange log IOPS = ((total back-end database write IOPS × 50%) + (total back-end database write IOPS × 10%)) / Exchange sequential IOPS per disk
Disks required for Exchange log IOPS = ((772 back-end write IOPS × 50%) + (772 × 10%)) / 180 sequential Exchange IOPS per disk = (386 + 77.2) / 180 = 2.57 (round up to 4 disks)

89 Storage capacity calculations
Formula
1. Calculate user mailbox size on disk
2. Calculate database size on disk
3. Calculate database LUN size
Mailbox size on disk = maximum mailbox size + whitespace + dumpster
Database size on disk = number of mailboxes per database × mailbox size on disk
Database LUN size = number of mailboxes × mailbox size on disk × (1 + index space + additional index space for maintenance) / (1 − LUN free space percentage)

90 Mailbox Size on Disk
Formula
Mailbox size on disk = maximum mailbox size + whitespace + dumpster
Where:
Estimated database whitespace per mailbox = per-user daily message profile × average message size
Dumpster = (per-user daily message profile × average message size × deleted item retention window) + (mailbox quota size × 0.012) + (mailbox quota size × 0.03)

91 Mailbox size on disk Calculations
Whitespace = 200 messages/day × 75 KB = 14.65 MB
Dumpster = (200 messages/day × 75 KB × 14 days) + (2 GB × 0.012) + (2 GB × 0.03) = 205.08 MB + 24.58 MB + 61.44 MB = 291.1 MB
Mailbox size on disk = 2 GB mailbox quota + 14.65 MB database whitespace + 291.1 MB dumpster = 2,354 MB (2.3 GB)
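The same arithmetic in script form (a sketch using this example's inputs; MB conversions use 1,024 KB per MB, matching the figures above):

```python
# Inputs from the requirements: 200 messages/day, 75 KB each,
# 14-day deleted item retention, 2 GB (2,048 MB) quota.
msgs_per_day, msg_kb, dir_days, quota_mb = 200, 75, 14, 2048

whitespace = msgs_per_day * msg_kb / 1024              # 14.65 MB
dumpster = (msgs_per_day * msg_kb * dir_days / 1024    # 205.08 MB
            + quota_mb * 0.012                         # 24.58 MB
            + quota_mb * 0.03)                         # 61.44 MB
mailbox_on_disk = quota_mb + whitespace + dumpster
print(round(mailbox_on_disk), "MB")                    # 2354 MB (2.3 GB)
```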

92 Database Size On Disk & LUN size
Calculations
Database size on disk = 500 users per database × 2,354 MB mailbox on disk = 1,177 GB (1.15 TB)
Database LUN size = 1,177 GB × (1 + 0.20 + 0.20) / (1 − 0.20) = 2,060 GB (~2 TB)
In this example:
- 20% added for the index
- 20% added for the index maintenance task
- 20% reserved as LUN free space protection

93 Logs space calculations
Formula and calculations
Log LUN size = ((log size × number of mailboxes per database × backup/truncation failure tolerance days) + space to support mailbox moves) / (1 − LUN free space percentage)
Log capacity to support 3 days of truncation failure = 500 mailboxes/database × 40 logs/day × 1 MB log size × 3 days = 58.59 GB
Log capacity to support 1% mailbox moves per week = 500 mailboxes/database × 0.01 × 2.3 GB mailbox size = 11.5 GB
Log LUN size = (58.59 GB + 11.5 GB) / (1 − 0.20) = 87.6 GB (round up to 88 GB)
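And the LUN sizing chain (a sketch; note the deck converts 500 × 2,354 MB to 1,177 GB using 1,000 MB per GB, and the 40 × 1 MB logs per mailbox per day figure comes from the calculation above):

```python
# Database LUN: 20% index + 20% index maintenance, 20% LUN free space.
db_size_gb = 500 * 2354 / 1000                     # 1,177 GB per database
db_lun_gb = db_size_gb * (1 + 0.20 + 0.20) / (1 - 0.20)
print(round(db_lun_gb), "GB")                      # 2060 GB (~2 TB)

# Log LUN: 3 days of truncation failure + 1% mailbox moves, 20% free space.
truncation_gb = 500 * 40 * 1 / 1024 * 3            # 58.59 GB
moves_gb = 500 * 0.01 * 2.3                        # 11.5 GB
log_lun_gb = (truncation_gb + moves_gb) / (1 - 0.20)
print(round(log_lun_gb), "GB")                     # 88 GB
```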

94 Total Capacity per Building Block
Total LUN capacity required per server = (database LUN size + log LUN size) × number of databases per server
LUN capacity type | Capacity required per server
Database LUN capacity | 2,060 GB per LUN × 10 LUNs per server = 20,600 GB
Log LUN capacity | 88 GB per LUN × 10 LUNs per server = 880 GB
Total LUN capacity per server | 20,600 + 880 = 21,480 GB

95 Total number of disks required
Database disks required for Exchange database capacity = total database LUN size / physical disk capacity × RAID multiplication factor
Log disks required for Exchange log capacity = total log LUN size / physical disk capacity × RAID multiplication factor

96 Disk requirements based on capacity
RAID option | Database disks required
RAID 1/0 (4+4) | 20,600 / 2,794.5 × 2 = 7.37 × 2 = 14.74 (round up to 16 disks)
RAID 5 (4+1) | 20,600 / 2,794.5 × 1.25 = 7.37 × 1.25 = 9.2 (round up to 10 disks)
RAID 6 (6+2) | 20,600 / 2,794.5 × 1.33 = 7.37 × 1.33 = 9.8 (round up to 16 disks)

RAID option | Log disks required
RAID 1/0 (1+1) | 880 / 2,794.5 × 2 = 0.63 (round up to 2 disks)
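Scripted (a sketch; 2,794.5 GB is the usable capacity per 3 TB NL-SAS disk used in the table above, and group sizes follow the listed RAID options):

```python
import math

db_capacity_gb, usable_per_disk_gb = 20600, 2794.5
for name, factor, group in [("RAID 1/0 (4+4)", 2.00, 8),
                            ("RAID 5 (4+1)",   1.25, 5),
                            ("RAID 6 (6+2)",   1.33, 8)]:
    raw = db_capacity_gb / usable_per_disk_gb * factor
    disks = math.ceil(math.ceil(raw) / group) * group
    print(f"{name}: {raw:.2f} -> {disks} disks")
# RAID 1/0 (4+4): 14.74 -> 16 disks
# RAID 5 (4+1):    9.21 -> 10 disks
# RAID 6 (6+2):    9.80 -> 16 disks
```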

97 Final storage calculation results
Building block summary
Volume type | RAID option | Disks required for performance (IOPS) | Disks required for capacity
Exchange databases | RAID 1/0 (4+4) | 24 disks | 16 disks
Exchange databases | RAID 5 (4+1) | 35 disks | 10 disks
Exchange databases | RAID 6 (6+2) | 48 disks | 16 disks
Exchange logs | RAID 1/0 (1+1) | 4 disks | 2 disks
Best option: RAID 1/0 (4+4) for databases, sized for performance. Total disks per building block: 28 (24 database + 4 log)

98 Final storage calculation results
Building block scalability
Total number of disks required for the entire 20,000-user solution in a DAG with two copies = 28 disks per building block × 8 building blocks = 224 disks total

99 Building block sizing and scaling process
Bandwidth calculations
Array throughput (MB/s) validation for Exchange involves:
- Determining how many databases the customer will require
- Confirming that the database LUNs are evenly distributed among the back-end buses and storage processors
- Determining whether each bus can accommodate the peak Exchange database throughput
Use this calculation for the required throughput, and compare the result with the array's bus throughput:
Exchange DB throughput per bus = DB throughput × number of DBs per bus
Where:
DB throughput = total transactional (user) IOPS per DB × 32 KB + BDM throughput per DB (MB/s)
Number of DBs per bus = total number of active and passive databases per bus

100 Storage Bandwidth Requirements
The process
The bandwidth validation process involves the following steps:
- Determine how many databases are in the Exchange environment
- Determine the bandwidth requirement per database
- Determine the required bandwidth per array bus
- Determine whether each bus can accommodate the peak Exchange database bandwidth (use DiskSizer for VNX, available through your local USPEED contact, or ask your local storage specialist for the array and bus throughput numbers)
- Evenly distribute database LUNs among the back-end buses and storage processors: uniform distribution across FE/BE ports, RAID groups/pools, and DAEs is key for best performance; distribute databases uniformly onto the pools, and use even-numbered SAS loops (0 and 2) for maximum performance

101 Storage Bandwidth Requirements
Calculations
Bandwidth per database (MB/s) = total transactional IOPS per database × 32 KB + estimated BDM throughput per database (MB/s)
Where:
- 32 KB is the Exchange page size
- Estimated BDM throughput per database is 7.5 MB/s for Exchange 2010 and 2.25 MB/s for Exchange 2013
Required throughput (MB/s) per bus = throughput (MB/s) per database × total number of active and passive databases per bus

102 Storage Bandwidth Requirements
Calculations
Transactional throughput per database = (500 × 0.134 IOPS) × 32 KB = 2.1 MB/s
Throughput per database = 2.1 MB/s + 2.25 MB/s BDM = 4.35 MB/s
Required throughput per bus = 4.35 MB/s × 200 databases per bus = 870 MB/s
Example assumptions:
- 500 users per database at 0.134 IOPS per mailbox
- 200 databases per bus
If the array supports a maximum throughput of 3,200 MB/s per bus, 200 databases can be supported from a throughput perspective.
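The whole check fits in a few lines (a sketch; the 3,200 MB/s bus limit is the example figure from this slide):

```python
users_per_db, iops_per_user, page_kb = 500, 0.134, 32
bdm_mb_s = 2.25                               # Exchange 2013 BDM per database

db_mb_s = users_per_db * iops_per_user * page_kb / 1024 + bdm_mb_s
print(round(db_mb_s, 2), "MB/s per database")       # ~4.34 (deck rounds to 4.35)

dbs_per_bus, bus_limit = 200, 3200
required = db_mb_s * dbs_per_bus                    # ~869 MB/s (deck: 870)
print(round(required), "MB/s; fits:", required <= bus_limit)  # fits: True
```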

103 Final Design Example
- Configured dedicated storage pools for each mailbox server with 24 x 3 TB NL-SAS drives; each storage pool holds two copies from different mailbox servers
- Separated Exchange log files into different storage pools: for better storage utilization, created one storage pool of 16 x 3 TB NL-SAS drives for the logs of every four mailbox-server building blocks

104 Exchange Archiving

105 Archiving – Native vs. EMC SourceOne
Archiving capability | Exchange 2013 (In-Place Archiving) | EMC SourceOne
Comprehensive management (consistent, automated, policy-based, with proper retention and disposition) | No | Yes
Full-text indexing of attachments | Limited to a few file types | Supports many file types
Single-instance storage, deduplication, and compression | No | Yes
Multiple content types (email, SharePoint, files) | No | Yes
Offline access | Limitation: the in-place archive is not accessible in offline mode | —
Access from non-Microsoft devices | Not available | —
Exchange enterprise CAL license | Required | Not required
In-place Exchange migration | — | Not required
Search results between online and offline | Inconsistent, because archived emails are not searchable offline | Consistent
Archiving policy | Basic | Advanced – driven by many additional parameters (for example, a word in the subject field, a specific Outlook folder, with or without an attachment, attachment format, read/unread status, and so on)

106 Archiving – Native vs. EMC SourceOne
eDiscovery capability | Exchange 2013 | EMC SourceOne
Comprehensive eDiscovery capabilities | Limited | Comprehensive
Legal hold | Up to a single (lower) level | —
Assign custodians | No | Yes
Legal hold based on a time frame of content | No | Yes
Supported file types for indexing | 54 file types – limits keyword search results | 400 file types – accurate keyword search results
Proper eDiscovery workflow (assessment, culling, review and tagging, then holding) | No | Yes
Message tagging | No | Yes
Message distribution among reviewers | No | Yes
Running complex search queries | No | Yes
BCC field search | No | Yes
