1
Microsoft Exchange Best Practices
Note to Presenter: The target audience for this presentation is those who need to understand EMC's general recommendations for storage in Exchange deployments. VERY IMPORTANT: EMC offers a wide variety of solutions for Exchange deployments. Be consultative while presenting this content and stay focused on the customer's business requirements. Consider the mindset of this audience, and be aware of who is technical and who is business oriented. Are they interested in total cost of ownership? Looking for best-of-breed products? Overwhelmed by technology options? Concerned about service-level agreements? Amy Styers – Commercial Microsoft Solutions Consultant
2
What are “Best Practices”?
Best practices are accepted truths and wisdom based on: manufacturer's recommendations, historical evidence, analytical data, lessons learned, and proof points. Best practices are general recommendations that provide guidance and considerations in the design stages. NOTE TO PRESENTER: Review "What are best practices?" Explain to the audience that these are general recommendations based on historical data, and that the bullets on this slide have been collected from the knowledge of subject matter experts (SMEs).
3
Best Practices Are Based on the Audience
Best practices depend on the audience, requirements, complexity, and sophistication. Some we understand intuitively; some we don't. What do these represent? Be flexible: what may be good for one implementation may not be good for another. Introduce the concept that "best practices" are audience sensitive. What makes sense for one client may not make sense for another. Keep in mind that early in the design phases there are many considerations to take into account, and in many cases there is more than one way to rotate the tires on a car. NOTE TO PRESENTER: Be knowledgeable of the customer environment, and don't position any products in this session. Realize the single most important goal is to understand the reason why you are sitting in this session. Don't be afraid to ask for help, and most importantly, don't allow your technical infatuation to carry your conversation. Know yourself, and keep in mind that there may be folks in the audience who know more about Exchange than you do. Don't be afraid to ask questions.
4
Rule 1 - Understand Exchange I/O Profiles
Understand the various user profiles and how this information can be gained. Also be aware of what else generates I/O that needs to be considered in the design. NOTE TO PRESENTER: This presentation is formatted into 10 easy rules. Clearly explain that you are taking them through various recommendations and the topics will be broken down into sections. Rule 1 - Explain that the single most important thing to know is the I/O profile of the environment. Explain the different user profiles based on send/receive ratios and that this chart comes from Microsoft. Aside from user activity, customers must also understand all the various activities that bring about I/O overhead, so these can be taken into account when sizing the environment. This is the foundation of properly sizing Exchange.
5
Rule 1 - Understand Exchange I/O Profiles
Use the System Monitor tool to monitor the Physical Disk\Disk Transfers/sec counter over the peak hours of server activity. This allows for handling of random and "bursty" moments; Monday mornings between 8:30 and 11 a.m. are a typical peak window. Be aware of online maintenance activity: in some cases it can be very I/O intensive and can generate as much activity as normal Exchange operations. Performance data should be collected at various cycles, but the peak times are the key targeted numbers. If the users in a company have diverse usage requirements, you may have to measure usage profiles separately for different groups of users. For example, the sales engineers may have a different usage profile than the local marketing group. Having separate measurements is helpful only if the groups of users are significantly different. Also bring awareness of the impact online maintenance has on overall I/O overnight.
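The arithmetic behind this rule can be sketched in a few lines. The counter name comes from the slide; the peak transfer rate and mailbox count below are hypothetical illustration numbers, not measurements from this deck:

```python
def iops_per_mailbox(peak_disk_transfers_per_sec, mailbox_count):
    """Derive a per-mailbox I/O profile from the peak-hour
    'PhysicalDisk\\Disk Transfers/sec' measurement."""
    return peak_disk_transfers_per_sec / mailbox_count

# Hypothetical example: a server hosting 2,000 mailboxes that peaks
# at 1,800 disk transfers/sec during the Monday-morning window.
print(iops_per_mailbox(1800.0, 2000))  # 0.9 IOPS per mailbox
```

Measuring this per-group (sales engineers vs. marketing, per the slide) just means running the same division over each group's servers separately.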
6
Rule 2 – Size Exchange based on both I/O requirement and mailbox capacity
Old way of sizing (capacity only): Number of Users x Mailbox Size x Growth = ~1234 GB; buy enough disk to satisfy the capacity requirement (1234 GB / disk size). With larger disk sizes, there may not be enough spinning disks for good performance. New way of sizing (capacity and performance): use enough physical disk spindles to satisfy both total user IOPS (Total IOPS required / Disk Spindle IOPS) and total user mailbox capacity. Also know this: be aware of how I/O read/write ratios in Exchange 2007 have changed vs. Exchange 2003, understand the tradeoff of large mailboxes, and apply as much detailed information as possible to your design. Measurements should be taken in two dimensions, capacity and performance, and both play equally important roles in sizing. What if there isn't enough capacity for these users? You need to do both for Exchange environments: performance first, and then capacity.
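The "size for both" rule amounts to taking the larger of two spindle counts. A minimal sketch, with hypothetical per-spindle ratings and user numbers (the 1234 GB figure is the slide's own example):

```python
import math

def spindles_required(total_iops, iops_per_spindle,
                      total_capacity_gb, usable_gb_per_spindle):
    """Spindle count that satisfies BOTH the performance and the
    capacity requirement (the new way of sizing)."""
    for_performance = math.ceil(total_iops / iops_per_spindle)
    for_capacity = math.ceil(total_capacity_gb / usable_gb_per_spindle)
    return max(for_performance, for_capacity)

# Hypothetical: 4,000 users at 0.5 IOPS each, ~1234 GB total capacity,
# 140 IOPS and 146 GB usable per spindle.
print(spindles_required(4000 * 0.5, 140, 1234, 146))  # 15 (IOPS-bound)
```

Sizing on capacity alone would have bought only 9 spindles here; the performance dimension drives the design, which is the slide's point.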
7
Rule 2 – Size Exchange based on both I/O requirement and mailbox capacity
Server memory configurations are key in Exchange 2007 design. Read/write ratios are typically 1:1 Be aware of your total server memory configuration requirements Review that server memory configurations have a direct impact on I/O reduction in Exchange Contrast the data points shown in relation to the total memory installed in the server. Bottom line, know the before and after and also explain the read/write ratios.
8
Rule 2 – Size Exchange based on both I/O requirement and mailbox capacity
Paying for more I/O than needed? This is a very complex slide, but it is designed to demonstrate how the changes made in Exchange 2007 have drastically changed the way we design today's environments. Make a case for the large-mailbox impact and how that drives more hardware. [Chart labels: Exchange 2003; Exchange 2007; 146 GB 15K RAID 1/0; 146 GB 10K RAID 5; 300 GB 10K RAID 1/0; required for capacity; required for IOPS; excess capacity.]
9
Rule 2 – Size Exchange based on both I/O requirement and mailbox capacity
Position archiving. Facilitate virtually unlimited storage capacity with archiving: store vast quantities of data, and keep it all readily accessible. Consider discussing WORM devices, such as Centera, for compliance requirements and compelling TCO. Note to Presenter: View this slide in Slide Show Mode for animation. Provide evidence that archiving has a much better TCO than having all content stored in your active mailbox databases; archiving brings a much better TCO than direct-attached storage (DAS) and offers a much better business solution. Once the messages have been compacted into container files, they can be moved to other storage devices. Xtender supports many storage devices, including EMC Centera. Moving messages to secondary storage devices will reduce your costs by reducing the storage necessary for the Exchange and archive servers. [Diagram: Xtender server archiving to EMC Centera.]
10
Rule 2 – Size Exchange based on both I/O requirement and mailbox capacity
IOPS: the number of Input/Output operations (I/Os) per second. %R: percentage of I/Os that are reads. %W: percentage of I/Os that are writes. WP: RAID write penalty multiplier (RAID 1 = 2, RAID 5 = 4). Factor in other variables such as archiving, journaling, virus protection, remote devices, and risk. Don't rely solely on automated tools when sizing your Exchange environment. Place detailed effort into your calculations and provide supporting factual evidence for your designs rather than fictional calculations.
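The slide defines the terms of the calculation without spelling out the formula. A common form of it, sketched here with hypothetical numbers, applies the RAID write penalty only to the write portion of the host load:

```python
def backend_iops(host_iops, read_pct, write_penalty):
    """Back-end (spindle-level) IOPS after applying the RAID write
    penalty: IOPS * %R + IOPS * %W * WP."""
    write_pct = 1.0 - read_pct
    return host_iops * read_pct + host_iops * write_pct * write_penalty

# 2,000 host IOPS at a 1:1 read/write ratio (typical for Exchange 2007):
print(backend_iops(2000, 0.5, 2))  # RAID 1   -> 3000.0 back-end IOPS
print(backend_iops(2000, 0.5, 4))  # RAID 5   -> 5000.0 back-end IOPS
```

The gap between 3,000 and 5,000 back-end IOPS for the same host load is exactly why the RAID protection choice (Rule 3) matters so much for spindle counts.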
11
Rule 3- RAID Protection Type
With Exchange 2007, technically any RAID type can satisfy the I/O, as long as there are enough spindles to deal with the increased write ratio. However, consider the following: understand what each type provides in performance and reliability, cost and practicality, risk and exposure. RAID 1 is the best practice: it always works, is predictable, and gives the best performance. RAID 5 and RAID 6: writes do impact reads, and rebuilds impact both reads and writes. All RAID types are able to address the pure I/O requirements of a given workload; one must, however, take into account reliability, performance, risk, exposure, and cost.
12
Rule 4- Spindle and Storage Best Practices
Avoid sharing physical spindles over multiple servers. Best: an Exchange server has its own physical spindles, i.e. the building block approach. Possible: Exchange servers share physical spindles. Possible, with care: Exchange servers share physical spindles with BCV volumes used for their own backup. Possible, but not good: Exchange servers share physical spindles with other applications with predictable, known, and similar workloads. Very bad: Exchange servers share physical spindles with applications with unpredictable and incompatible workloads, like data warehouses. If unable to dedicate spindles to Exchange, be aware of what else is running on the physical spindle, and account for that activity.
13
Rule 4 - Spindle and Storage Best Practices
Performance problems may arise from sharing physical disk resources with other I/O-intensive applications and databases. Exchange database writes are very random and small in size, so avoid sharing Exchange physical disks with other applications that have different I/O characteristics, such as Oracle. These recommendations hold for both DAS and SAN topologies. Understand the business requirements first, then the storage technology options available for your Exchange environment. NOTE TO PRESENTER: Take the audience through the reasoning for dedicated spindles. Explain that this is not a SAN weakness and that the same principles apply to DAS. Not dedicating physical disks to Exchange can bring unpredictable performance; dedicating physical disks to Exchange allows you to maintain predictable levels of performance.
14
Rule 4- Spindle and Storage Best Practices
Exchange on direct attached storage (DAS/JBOD): Exchange always ran on direct attached storage, but had reliability issues, a lack of write cache, no rebuild priorities, and no remote replication, while Exchange was becoming business critical. DAS and virtualization don't mix well, and virtualization is a key market trend. Enterprise storage options: messaging environments are mission-critical applications; treat them as such. Understand your technology options and your technology costs: acquisition cost and long-term cost of ownership (all related costs). Why does Microsoft focus on "cheaper" storage? Larger mailboxes coupled with poor DAS scalability allow Microsoft to increase the number of licenses sold.
15
Rule 5 – Design Methodology for Exchange
Iterate over all possible solutions until the right solution has been found. The design methodology for Exchange is platform independent and considers the total solution, including software. The parameters of an Exchange configuration determine the possibilities; list the accuracy of each possibility. Be aware of Exchange object requirements, such as ESG configuration requirements. Exchange should be properly planned before disks are laid out. EMC recommends the use of the Building Blocks outlined in our solutions and ESRP; this makes design, installation, backup, and troubleshooting much more predictable.
16
Rule 5 – Design Methodology for Exchange
Myths around sharing logs and data on the same physical disks: "Microsoft does not allow it!" FALSE. It is OK when you can recover from a double RAID 1 device failure. "Separate logs and data because of performance" is TRUE for solutions without write cache. Sometimes write cache is lost after a board failure, which also requires separating log and data volumes. This is the real value of a SAN. [Diagram: SG1 Logs, SG2 Logs, SG1 Data, SG2 Data.]
17
Rule 5 – Design Methodology for Exchange
Share logs and data on physical spindles (Symmetrix) for better usage of available spindles, but do NOT share logs and data on the same logical volume (META), and logs and DB for the same SG should not reside on the same spindles. Subsystem cache will not guarantee performance: the quantity or intelligence of cache does not override normal disk sizing and layout advice, so provide enough spindles. Optimizer is not a magic spell either; it will not automatically "create" a good configuration over time. Exchange, being such a random application, should be planned out properly before disks are laid out.
18
Rule 5 – Design Methodology for Exchange
Use striped metavolumes / MetaLUNs; they performed much better than concatenated volumes in performance tests. Symmetrix (metavolumes): use member counts in multiples of four, which are handled well by Symmetrix disk adapters (for example, a 4-member or 8-member meta); striped metavolumes need a BCV to expand. CLARiiON (MetaLUNs): great performance and simple to expand (for example, two RAID 1/0 (8+8) groups to make a MetaLUN). Host volume sets require Windows dynamic disks, which present limitations in clusters and replication; stay with basic disks. Dynamic disks present problems in clusters. Metadevices can be expanded with a combination of storage-system tools and DISKPART. Create metavolumes with enough members; this improves the amount of I/O done in parallel, and the Symmetrix volume size may need to be smaller to accomplish this. Start the metavolumes on different physical volumes; the first volume contains the NTFS log and gets extra I/O traffic. (Don't use concatenated metas unless you are striping from the host.) Create separate metavolumes for each log and for each mailbox database. Relocate the SMTP folder to a separate Symmetrix (meta) volume. Distribute heavy users over all Exchange servers.
19
Rule 6 – Consider All Elements in the Design
Server: align using a 64 KB offset with Diskpart (Windows 2003), set the allocation unit size on DB and log LUNs to 64 KB, set the HBA queue depth to 128, and use EMC PowerPath for load balancing. Array: keep the default 8 KB page size on CLARiiON; the 8 KB sector within the new DMX3 cache slot provides a better fit; don't change LUN offsets. HBA and FA ports: fan out two HBAs to at least four FA ports. The FA gives highest priority to queuing I/O requests from the host, and during a write burst an FA can be overloaded; more FAs avoid overload. It is NOT necessary to dedicate four FA ports: share FA ports with other Exchange servers, but do NOT share Exchange with high-bandwidth servers on the same FA slices/CPUs. Increase the queue depth to 128 (or higher when possible) on both the server and the HBA. Use PowerPath to give priority to log writes and to throttle writes to avoid starvation of reads. All variables need to be taken into account, not just the storage aspects of the design.
20
Rule 6 – Consider All Elements in the Design
Understand bottlenecks and the weakest link: Application (users, layout, tuning, AD, isolation); Volume Manager (format, alignment, stripe size); Host Bus (HBA queue depth, PowerPath); Disk (spindles, metavolumes, replication, archiving). Take all variables into consideration: your environment is only as fast as your slowest component.
21
Rule 7: Testing and Measuring Performance
Besides EMC Workload Analyzer/Performance Manager and NaviAnalyzer, use the following tools: ExBPA – understand where you are today and establish a baseline. JetStress – suggested by Microsoft for design testing; simulates the I/O workload. LoadGen – simulates load based on user profiles. Perfmon – helps you understand current I/O characteristics. TEST!!! Never go live without proper testing. Here are just a few performance tools you can use to measure and test performance.
22
Rule 7: Testing and Measuring Performance
JetStress generates the same throughput as Exchange in a production environment. Each thread generates a mix of transactions (insert, delete, replace, and seek) to mimic an Exchange workload, and increasing the thread count increases the throughput to match what is expected in production. Mailbox count and I/O profile are used to calculate the expected IO/s, which is only used to determine when a test has exceeded the expected IO/s and is rated "successful." To JetStress there is NO difference between 50,000 users at 0.1 IOPS (5,000 expected IOPS) and 5,000 users at 1.0 IOPS (also 5,000 expected IOPS); there is a difference in real life! Explain how JetStress is an I/O measurement only and you would want to test above and beyond what JetStress can do.
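The slide's equal-target example is just the product of mailbox count and profile; a one-line sketch makes the point that two very different deployments collapse to the same JetStress target:

```python
def expected_iops(mailboxes, iops_per_mailbox):
    """JetStress target throughput: mailbox count x I/O profile."""
    return mailboxes * iops_per_mailbox

# Both deployments below produce the same ~5,000-IOPS JetStress target,
# even though their real-world behaviour differs considerably.
print(expected_iops(50_000, 0.1))
print(expected_iops(5_000, 1.0))
```

This is why the deck recommends following JetStress with LoadGen, which models actual client behaviour rather than a single aggregate I/O number.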
23
Rule 7: Testing and Measuring Performance
Use JetStress to verify performance and stability. JetStress validates the performance of a disk subsystem prior to putting an Exchange server into production by simulating Exchange I/O. Watch out for very high user counts based upon JetStress. Mailbox size is used to create the initial databases, and the stroking distance is always 100%. Always run JetStress before deployment: any mistake or imbalance will be made visible, with the opportunity to correct it before implementing in production. Compare against EMC ESRP results, and watch out for caching effects. Test with LoadGen after JetStress testing: Microsoft Exchange Load Generator (LoadGen) is a simulation tool to measure the impact of MAPI, OWA, IMAP, POP, and SMTP clients on Exchange servers. LoadGen can require a lot of hardware, but it is able to reproduce results. Explain the difference between JetStress I/O testing and LoadGen-based testing.
24
Rule 7: Testing and Measuring Performance
Understand what the acceptable latency limits are. PhysicalDisk\Average Disk sec/Read: indicates the average time (in seconds) to read data from the disk; the average value should be below 16 ms, and spikes (maximum values) should not be higher than 20 ms. PhysicalDisk\Average Disk sec/Write: indicates the average time (in seconds) to write data to the disk. PhysicalDisk\Average Disk Queue Length: indicates the average number of both read and write requests that were queued for the selected disk during the sample interval; the average should be less than the number of spindles of the disk. If a SAN is being used, ignore this counter and concentrate on the latency counters: PhysicalDisk\Average Disk sec/Read and PhysicalDisk\Average Disk sec/Write. Here are just a few performance counters you can use to measure and test performance. Keep in mind the numbers outlined for acceptable latency; these numbers have been provided by Microsoft. Do validate the latest acceptable latency limits.
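As a small sketch of how these thresholds might be checked against collected Perfmon samples (the 16 ms / 20 ms limits are the ones quoted on this slide; the function name and sample values are illustrative, not part of any tool):

```python
def check_read_latency(avg_read_sec, max_read_sec):
    """Evaluate 'PhysicalDisk\\Average Disk sec/Read' samples (in seconds)
    against the slide's thresholds: 16 ms average, 20 ms spikes."""
    issues = []
    if avg_read_sec > 0.016:
        issues.append("average read latency above 16 ms")
    if max_read_sec > 0.020:
        issues.append("read latency spikes above 20 ms")
    return issues or ["latency within acceptable limits"]

print(check_read_latency(0.012, 0.018))  # healthy sample
print(check_read_latency(0.022, 0.035))  # both thresholds exceeded
```

Note that Perfmon reports these counters in seconds, so 16 ms appears as 0.016 in the collected data.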
25
Rule 7: Testing and Measuring Performance
Understand what the acceptable latency limits are. Here are just a few performance counters you can use to measure and test performance. Keep in mind the numbers outlined for acceptable latency; these numbers have been provided by Microsoft. Do validate the latest acceptable latency limits.
26
Rule 8: Help Plan the Entire Architecture
Storage is usually an afterthought. Getting into Exchange and Active Directory planning sessions will help you understand the environment better. A poorly planned Active Directory can manifest as Exchange performance problems. EMC offers services that can evaluate the quality of a customer's Active Directory deployment: Exchange Insight Workshop, Migration Assessment, Migration Design, and Migration Implementation. Make sure you are involved in the entire design; EMC Consulting can help you plan the entire architecture and tie it all together: business requirements, storage, sizing, performance, backup, restore, recovery, distance replication, security, management, and archiving.
27
Rule 9 - Backup Best Practices
Avoid unprotected clones and mirrored clones with VSS: recovery after a drive failure is VERY difficult, and the same is true when a drive fails while mirroring from M1BCV to M2BCV. RAID 5 clones are the best solution in combination with VSS: recovery after a drive failure is simple, and backup/restore operations can continue with a bad drive, although slowly. Protected clones are possible too, but they use two mirror positions. SNAPs are possible, but consider the change rate; the activity during ESEUTIL, online maintenance, and backup; and that sequential read is not optimal, with long-term consequences. Understand the difference between recovery and restore: backup and restore have the same granularity with hardware VSS. Begin by explaining that backups are very important, and it is not an option for EMC to recommend "no backups," as is sometimes recommended by other companies.
28
Rule 9 - Backup Best Practices
Complicated management: 50 SGs have 100 LUNs to manage, hence the trend toward fewer SGs (8 to 16). Map the backup order over time: avoid contention by carefully mapping out backup and ESEUTIL processes. This is very hard to do, and more SGs make it more complicated. One sequential stream per clone: a 1:1 map from source to clone. Concurrent backups: have as many backup threads as possible to reach the required throughput, sequential with random seeks to the clone drive; still fast enough to meet the windows, but this uses a big pool of clone devices. Make sure you understand the importance of backups: don't just recommend "no backups" due to technology limitations, because backups are subject to regulatory requirements.
29
Rule 9 - Backup Best Practices
Include data de-duplication as part of the backup strategy, and take advantage of VSS technology to off-load the backup mechanism from the production servers. Global data de-duplication at the source is the unique technology that drives all of Avamar's advantages and the real reason Avamar is able to solve key enterprise data-protection challenges. Enterprise data is highly redundant, with identical files and sub-file data segments stored within systems and across systems company-wide; traditional backup software magnifies the problem by storing redundant data over and over again. Avamar solves this problem by viewing primary data in chunks called data segments. The software generates and assigns each data segment a unique ID based on its content, which is used to compare it with data segments that have already been backed up. Only new, unique data segments are transferred during a backup operation, ensuring that only a single instance of each segment is stored in a central location. Avamar eliminates redundant backup data at the source, before that data is sent across the network, and it also de-duplicates across sites and servers, hence the term "global." Data is de-duplicated at the source via an Avamar software agent, which allows a dramatic reduction in network consumption for daily backup and recovery. De-duplication also happens at the storage device, so that when multiple backups are stored together they are stored very cost-effectively and efficiently. Break data into atoms (sub-file, variable-length segments of data) and send and store each atom only once, for up to 500 times daily data reduction. At the source: de-duplication before data is transported across the network. At the target: assures coordinated de-duplication across sites, servers, and over time. Granular: small, variable-length segments guarantee the most effective de-duplication.
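The segment-and-hash idea behind source-side de-duplication can be illustrated with a toy sketch. This is NOT Avamar's actual algorithm (Avamar uses variable-length segments and its own ID scheme); fixed-size segments and SHA-1 are used here purely for brevity:

```python
import hashlib

def dedupe(data, segment_size=8):
    """Toy de-duplication sketch: split data into fixed-size segments,
    identify each by a content hash, and store each unique segment once.
    A backup becomes a 'recipe' of segment IDs plus the unique segments."""
    store = {}    # unique segments, keyed by content hash
    recipe = []   # ordered segment IDs that reconstruct the original data
    for i in range(0, len(data), segment_size):
        segment = data[i:i + segment_size]
        digest = hashlib.sha1(segment).hexdigest()
        if digest not in store:       # only new, unique segments are stored
            store[digest] = segment
        recipe.append(digest)
    return store, recipe

# Highly redundant input: five segments referenced, only two stored.
store, recipe = dedupe(b"AAAAAAAA" * 4 + b"BBBBBBBB")
print(len(recipe), len(store))  # 5 2
```

In a source-side scheme, only the contents of `store` cross the network; the recipe is what makes each backup recoverable, which is why redundant daily backups shrink so dramatically.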
30
Rule 10 – Leverage Server Virtualization Technology
Server virtualization technology is a great fit for Exchange. Position VMware whenever possible: it provides great TCO for Exchange deployments, much better than DAS. Understand the latest testing on virtualization technology. Begin by outlining that Exchange is a great candidate for virtualization and that EMC has solutions for both VMware and Hyper-V. Server hardware changes: Exchange 2007 requires 64-bit servers, which now ship with multi-core 2/4/(soon 6) CPUs and up to 256 GB RAM, plus Intel/AMD hardware-assisted virtualization; there is huge potential for under-utilization and an opportunity to consolidate and reduce costs. Changes in ESX 3.5 and beyond: increased guest OS memory (64 GB), increased physical RAM on ESX (256 GB), network improvements that lower CPU utilization, NUMA optimizations that improve multiple-VM performance, and improved storage efficiency. Result: server hardware trends are driving Exchange virtualization, and ESX 3.5 is ready for Exchange 2007.
31
Rule 10 – Leverage Server Virtualization Technology: Virtualizing Microsoft Applications
Historically, some customers feared virtualization of enterprise applications because of high resource utilization limited by 32-bit systems, large amounts of I/O traffic, and server memory limitations. But hardware has changed: the move from dual-core to multi-core technology increases available resources, and servers have more memory. Applications like Exchange have changed too: Exchange makes better use of multiple cores and 64-bit, has reduced I/O, and has more server roles to consider. Microsoft applications are excellent candidates for virtualization! Benefits of virtualization, such as encapsulation/portability, hardware availability, and change control, are now accessible in the world of Microsoft applications. Walk the audience through the changes one by one, and ask questions in between.
32
Rule 10 – Leverage Server Virtualization Technology: Dispelling Myths About Performance
Many are still convinced that Exchange should never be virtualized; Exchange 2007 performance testing proves otherwise. What performance is important to Exchange administrators? Low latency within defined thresholds, an exceptional user experience, available headroom for future growth, constant latency while scaling to multiple concurrent mailbox servers, minimal storage latencies for large mailbox counts, and the flexibility to meet these requirements under changing workloads. Let's take a look!
33
Rule 10 – Leverage Server Virtualization Technology
16,000 "heavy" users on a single server: CLARiiON CX3-80, Replication Manager, Dell R900 (16 cores, 128 GB RAM). Virtual machines: 4,000 users each, 4 vCPU, 12 GB RAM. Whitepaper: Walk the audience through the configuration.
34
Rule 10 – Leverage Server Virtualization Technology: Leverage Dynamic Adaptability with Virtual LUN Technology
Unanticipated virtual machine growth occurs and performance needs change; Virtual LUN technology lets you alter the performance of the virtual machine file system without disruption by moving between disk or RAID types, non-disruptively to virtual machines and applications. Adjust for unplanned changes, tier virtual machine groups, make efficient usage of storage, and gain worry-free adaptability. Consider the following when discussing virtual provisioning: What are the current costs of placing Exchange data into CX thin pools? What benefits do we see in doing this? What perceived benefits or requirements are driving the need for Exchange to perform competitively within a CX virtual provisioning pool? Are there options other than thin pools that would be more effective and timely in meeting those requirements? How much improvement is needed to be competitive with Exchange in thin pools? What are the planned improvements coming in Libra? What additional testing would be useful? Are there any changes we should make to the current guidance? [Diagram: virtual machines on VMware ESX servers moving between FC and SATA II disks via Virtual LUN technology.]
35
Rule 10 – Leverage Server Virtualization Technology
In general, Exchange is not a good candidate for thin LUNs: it is a Tier 1 application with heavy, bursty I/O activity, it is latency intolerant, it prefers RAID 1/0, and it often reaches its maximum storage quickly. But some Exchange installations may qualify: smaller user counts, low IOPS per user, and mailboxes that won't reach their maximum for more than a year. VP configuration: a dedicated storage pool, periodic provisioning, and NQM "cruise control" with mixed use; this saves management and capital costs. [Chart: virtual provisioning candidates, plotted by mailbox size (small to large) against IOPS per user (low to high).] This is the value of virtualization and the advanced functionality a SAN provides. Provided by CX partner engineering.
36
Rule 10 – Leverage Server Virtualization Technology: Maximize Resources with QoS Manager and DRS
Concerned about introducing contention with consolidation? DRS balances host resources: it monitors CPU and memory utilization and leverages policy-based VMotion. NQM enforces application storage performance policies (throughput, bandwidth, response time). Together they offer end-to-end, policy-based performance management. This is advanced functionality you will not find in JBOD or DAS. Show the value of virtualization and the advanced functionality a SAN provides. [Diagram: high-, medium-, and low-priority virtual machines on a VMware ESX server sharing available performance.]
38
Core Storage Best Practices
39
Core Storage 1. Align all Microsoft Exchange-related disks using a value of 64. This aligns all of the Exchange-related NTFS partitions on a 64-KB boundary. With the release of Windows 2008 this issue has been addressed and corrected, so it is no longer necessary to perform this task on disks configured in Windows 2008.
40
Core Storage 2. Format all Microsoft Exchange-related NTFS partitions using 64-KB Allocation Unit (AU) cluster size. While this cluster size has been shown to have no effect on normal Microsoft Exchange database operations (transaction activity), studies have shown that a 64-KB cluster size increases performance with certain Microsoft Exchange and NTFS-related operations, such as Exchange backups and Exchange check-summing activities associated with VSS-related operations.
41
Core Storage 3. Isolate the Microsoft Exchange database workload from other I/O-intensive applications or workloads. This ensures the highest levels of performance for Microsoft Exchange and makes troubleshooting easier in the event of a disk-related Microsoft Exchange performance problem.
42
Core Storage 4. Separate logs and databases onto different disks and RAID groups. Combining them can pose performance issues, as database and log files have very different I/O characteristics. It can also be an issue to place log files and databases from the same ESG in a given volume group, as certain recovery options can be impaired. If desired, it is possible to combine logs and databases of separate Exchange storage groups within the same physical spindles (typically on the Symmetrix), but do NOT share logs and data on the same logical volume (META). Microsoft has acknowledged this recommendation and has updated this KB article.
43
Core Storage 5. Size and configure the environment with spindle performance as the primary consideration and spindle or storage capacity as a secondary concern. In other words, size for performance first, then validate the capacity requirements against the performance results. Microsoft has released a new Exchange 2010 sizing tool.
44
Core Storage 6. Tuning the array storage-system parameters is important for obtaining the best performance. The optimal parameters for Exchange are: a cache page size of 8 KB, a maximized write cache size, and read and write cache enabled for all LUNs.
45
Core Storage 7. Use a Building Block approach for planning storage for Exchange.