Successfully Virtualizing SQL Server on vSphere

1 Successfully Virtualizing SQL Server on vSphere
Deji Akomolafe, Staff Solutions Architect, VMware CTO Ambassador, Global Field and Partner Readiness, @Dejify

2 Is Your SQL Server Too Big To Virtualize?
VMware vSphere 6 maximums: IOPS 1,000,000 per VM; Network > 40Gb/s; Memory 4TB per VM; CPU 128 vCPUs per VM. 1 million IOPS validated by VMware Performance Engineering: as of vSphere 5.1, a single virtual machine can achieve 1 million+ I/O operations per second (adequate storage infrastructure required to meet the demand). With each version VMware has been increasing performance and scalability; any lingering performance concerns about VMware virtual machines come from a lagging perception of very early generations, which had limited capabilities.

3 Storage Scalability: 1M IOPS, <2ms latency, 8KB block, 32 OIOs (outstanding I/Os)
Reference: Figure shows the aggregate number of I/O operations per second achieved as the number of VMs were increased. Aggregate I/O operations per second scaled from 200 thousand to slightly above 1 million as the number of VMs was increased from one to six. The latency of I/O operations remained under 2 milliseconds throughout the test, increasing by only 10% as the I/O load increased on the host.

4 Doing SQL Right on vSphere Cheat sheet
Physical Hardware: VMware HCL, BIOS/Firmware, Power/C-States, Hyper-threading, NUMA. ESXi Host: Power, Virtual Switches, vMotion Portgroups. Virtual Machine: Resource Allocation (Storage, Memory, CPU/vNUMA, Networking), vSCSI Controller. Guest Operating System: CPU, Storage IO.

5 Architecting VMware vSphere for High-Performing SQL Servers

6 Everything Rides on the Physical Hardware
Hardware and Drivers MUST Be On VMware's HCL. Outdated drivers, firmware, and BIOS revisions adversely impact virtualization. Always disable unused physical hardware devices. Leave the memory scrubbing rate in BIOS at default. The default hardware power scheme is unsuitable for virtualization: change the power setting to "OS Controlled", enable Turbo Boost (or equivalent), and disable processor C-states / C1E halt state. Enable all CPU cores - don't let the hardware turn off cores dynamically. Enable Hyper-threading. Enable NUMA. Ask your hardware vendor for specifics. Hardware-assisted virtualization (HV): CPU virtualization (Intel VT-x / AMD-V); Memory Management Unit (MMU) virtualization (Intel Extended Page Tables (EPT) / AMD Rapid Virtualization Indexing (RVI)); I/O MMU virtualization (Intel VT-d / AMD-Vi/IOMMU).

7 Storage Optimization

8 Factors Affecting Storage Performance
The I/O path runs from the Application through the vSCSI adapter and VMkernel to FC/iSCSI/NAS storage, and each layer has its own limits. vSCSI adapter: virtual adapter queue depth, adapter type, number of virtual disks. VMkernel: VMkernel admittance (Disk.SchedNumReqOutstanding), per-path queue depth, adapter queue depth. Storage fabric and array: storage network (link speed, zoning, subnetting), HBA target queues, LUN queue depth, array SPs, number of disks (spindles).

9 Nobody Likes Long Queues
Queueing analogy: arriving customers line up at a checkout (input → queue → server → output). Response time = queue time + service time. Utilization = busy time at server / time elapsed.
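A quick illustrative calculation (assuming a simple single-server M/M/1 queue; not a figure from the slide): if the service time is 5 ms and the device is 80% utilized, the expected response time is roughly 5 ms / (1 - 0.80) = 25 ms, i.e. 20 ms of queue time on top of 5 ms of service time. Keeping utilization down, or spreading I/O across more queues, is what keeps response time close to service time.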

10 Additional vSCSI Controllers Improve Concurrency
(Diagram: guest device → vSCSI device → storage subsystem.)

11 Optimize for Performance – Queue Depth
vSCSI Adapter: Be aware of per device/adapter queue depth maximums (KB 1267): LSI Logic SAS = 32, PVSCSI = 64/254. Sometimes the default queue depths are NOT ENOUGH, even for PVSCSI. Use multiple PVSCSI adapters, at least for the Data, TempDB, and Log volumes. There are no native Windows drivers, so always update your VMware Tools. Windows requires a registry key to take advantage of the larger queues: Key: HKLM\SYSTEM\CurrentControlSet\services\pvscsi\Parameters\Device; Value: DriverParameter; Value Data: "RequestRingPages=32,MaxQueueDepth=254". Smaller or larger datastores? Datastores have queue depths, determined by the LUN queue depth. IP storage? Use jumbo frames, if supported by the physical network devices.
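A minimal PowerShell sketch of that registry change, run inside the Windows guest (key and value data are taken from the slide; a reboot is needed for the PVSCSI driver to pick it up):

    $key = 'HKLM:\SYSTEM\CurrentControlSet\services\pvscsi\Parameters\Device'
    # Create the key if it does not exist, then set the DriverParameter string value
    if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }
    New-ItemProperty -Path $key -Name 'DriverParameter' -PropertyType String `
        -Value 'RequestRingPages=32,MaxQueueDepth=254' -Force | Out-Null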

12 Optimizing Performance – Increase the Queues
VMkernel Admittance: the VMkernel admittance policy affects shared datastores (KB 1268). Use dedicated datastores for DB and Log volumes. VMkernel admittance changes dynamically when SIOC is enabled. Physical HBAs: follow vendor recommendations on max queue depth per LUN and on HBA execution throttling; these settings are global if the host is connected to multiple storage arrays. Consult the vendor for the right multi-pathing policy.

13 VMware vSphere Provides Advanced Resource Management
Storage I/O Prioritization (SIOC). Without Storage I/O Control, a non-critical workload such as data mining can crowd out the online store, Oracle, and print server VMs during periods of high I/O; with Storage I/O Control, the critical workloads are prioritized. vSphere has supported prioritization for storage traffic since vSphere 4.1.

14 VMFS or RDM?
Generally similar performance. vSphere 5.5 and later support up to 62TB VMDK files, so disk size is no longer a limitation of VMFS. VMFS: better storage consolidation (multiple virtual disks/virtual machines per VMFS LUN), though you can still assign one virtual machine per LUN; consolidating virtual machines in a LUN makes you less likely to reach the vSphere LUN limit of 255; manage performance so the combined IOPS of all virtual machines in a LUN stays below the IOPS rating of the LUN. RDM: enforces a 1:1 mapping between virtual machine and LUN; more likely to hit the vSphere LUN limit of 255; not impacted by the IOPS of other virtual machines. When to use raw device mapping (RDM): required for shared-disk failover clustering; required by the storage vendor for SAN management tools such as backup and snapshots. Otherwise use VMFS.

15 Strict Best Practices SQL Server VM Disk Layout Example
Characteristics: OS on shared datastore/LUN; 1 database with 4 equally-sized data files across 4 LUNs; 1 TempDB with 4 (1 per vCPU) equally-sized tempdb files across 4 LUNs; Data, TempDB, and Log files spread across 3 PVSCSI adapters; Data and TempDB files share PVSCSI adapters; virtual disks could be RDMs. Advantages: optimal performance; each Data, TempDB, and Log file has a dedicated VMDK/datastore/LUN; I/O spread evenly across PVSCSI adapters; log traffic does not contend with random Data/TempDB traffic. Disadvantages: you can quickly run out of Windows drive letters! More complicated storage management.

16 Realistic SQL Server VM Disk Layout Example
Characteristics: OS on shared datastore/LUN; 1 database with 8 equally-sized data files across 4 LUNs; 1 TempDB with 4 files (1 per vCPU) evenly distributed and mixed with data files to avoid "hot spots"; Data, TempDB, and Log files spread across 3 PVSCSI adapters; virtual disks could be RDMs. Advantages: fewer drive letters used; I/O spread evenly and TempDB hot spots avoided; log traffic does not contend with random Data/TempDB traffic.

17 Compute Optimization

18 Optimizing Performance – Know Your NUMA
Example: a two-socket server with 96 GB RAM has two NUMA nodes of 96/2 = 48 GB each; less roughly 4 GB for hypervisor overhead, about 45 GB is locally available per node. Keep VMs at 8 vCPUs or fewer with less than 45 GB RAM each so the ESXi scheduler can place every VM within a single NUMA node. If a VM is sized greater than 45 GB or more than 8 CPUs, NUMA interleaving and subsequent migration occur and can cause a 30% drop in memory throughput performance.

19 NUMA and vNUMA Why VMware Recommends Enabling NUMA
Windows is NUMA-aware. Microsoft SQL Server is NUMA-aware. vSphere benefits from NUMA. Use it, People. Enable host-level NUMA: disable "Node Interleaving" in BIOS on HP systems; consult your hardware vendor for SPECIFIC configuration. Virtual NUMA: beloved by ALL MS SQL Servers worldwide; auto-enabled on vSphere for any VM with > 8 vCPUs. Want to use it on smaller VMs? Set "numa.vcpu.min" to the # of vCPUs on the VM. Virtual NUMA and CPU Hot-Plug? Maybe later.
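A hedged PowerCLI sketch for exposing vNUMA to a smaller VM via the "numa.vcpu.min" setting mentioned above (assumes an existing Connect-VIServer session; the VM name is illustrative, and the VM needs a power cycle to pick up the change):

    $vm = Get-VM -Name 'sql-vm-01'   # hypothetical VM name
    # Lower the vCPU count at which vNUMA is exposed to the guest
    New-AdvancedSetting -Entity $vm -Name 'numa.vcpu.min' -Value 4 -Confirm:$false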

20 NUMA Best Practices: Avoid remote NUMA access. Size the # of vCPUs to be <= the # of cores on a NUMA node (processor socket); where possible, align VMs with physical NUMA boundaries; for wide VMs, use a multiple or even divisor of NUMA boundaries. Hyperthreading: for initial conservative sizing, set vCPUs equal to the # of physical cores. Allocate vCPUs by socket count and leave "Cores Per Socket" at the default value of "1". If doing vMotion, move between hosts with the same NUMA architecture to avoid a performance hit (until reboot).

21 Non-Wide VM Sizing Example (VM fits within NUMA Node)
1 vCPU per core with hyperthreading OFF: must license each core for SQL Server. 1 vCPU per thread with hyperthreading ON: 10%-25% gain in processing power, same licensing consideration (HT does not alter core-licensing requirements). Set "numa.vcpu.preferHT" to true to force a 24-way VM to be scheduled within a NUMA node.

22 Wide VM Sizing Example (VM crosses NUMA Node)
Extends NUMA awareness to the guest OS. Enabled through the multicore UI. On by default for 8+ vCPU multicore VMs; existing VMs are not affected through upgrade. For smaller VMs, enable by setting numa.vcpu.min=4. Do NOT turn on CPU Hot-Add. For wide virtual machines, confirm the feature is on for best performance.

23 Why Your SQL Lamborghini Runs Like a Pinto
The default "Balanced" power setting results in core parking. De-scheduling and re-scheduling CPUs introduces performance latency. It doesn't even save power - now (allegedly) changed in Windows Server 2012. How to check: Perfmon: if "Processor Information(_Total)\% of Maximum Frequency" < 100, core parking is going on. Command prompt: "Powercfg -list" (anything other than "High Performance"? You have core parking). Solution: set the power scheme to "High Performance", and do some other "complex" things per Microsoft's documentation.
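A small in-guest PowerShell sketch of the check and the fix described above (the counter path and the powercfg scheme alias are standard Windows; values below 100 suggest cores are being parked or throttled):

    # If this sits below 100, the active power plan is likely parking/throttling cores
    (Get-Counter '\Processor Information(_Total)\% of Maximum Frequency').CounterSamples |
        Select-Object InstanceName, CookedValue
    powercfg /list                   # show the active power scheme
    powercfg /setactive SCHEME_MIN   # SCHEME_MIN is the built-in High Performance plan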

24 Designing for Performance
The VM itself matters - in-guest optimization. Windows CPU core parking = BAD: set power to "High Performance" to avoid core parking. Windows Receive Side Scaling settings impact CPU utilization: RSS must be enabled at both the NIC and the Windows kernel level; use "netsh int tcp show global" to verify. Application-level tuning: follow the vendor's recommendation; virtualization does not change the considerations.

25 Memory Optimization

26 Memory Virtualization Concepts
(Diagram: memory virtualization concepts, including the guest memory free list.)

27 Large Pages in SQL Server Configuration Manager (Guest)
ON by default in SQL Server 2012/2014 (64-bit), Standard Edition and higher. Requires the "Lock Pages in Memory" user right for the SQL Server service account (sqlservr.exe). Implications, how to monitor, and mitigations: (1) Slow instance start due to memory pre-allocation; monitor via the ERRORLOG message; a memory reservation might help. (2) Impact to RTO for FCI and VMware HA; OK for AAG, as there is no instance restart during failover. (3) SQL allocates less than "max server memory", or even fails to start, due to memory fragmentation; monitor via ERRORLOG or sys.dm_os_process_memory; mitigate by dedicating the server to SQL use, starting SQL earlier than other services, or reverting back to standard pages. Slide self-explanatory.
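One way to confirm the instance is actually getting locked, large-page-backed memory is to query sys.dm_os_process_memory; a hedged sketch using the SqlServer PowerShell module (the instance name is illustrative):

    $q = 'SELECT physical_memory_in_use_kb, locked_page_allocations_kb, large_page_allocations_kb FROM sys.dm_os_process_memory;'
    Invoke-Sqlcmd -ServerInstance 'sql-vm-01' -Query $q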

28 Memory Reservations: Guarantees memory for a VM, even when there is contention. The VM is only allowed to power on if the CPU and memory reservation is available (strict admission). If Allocated RAM = Reserved RAM, you avoid swapping. If using resource pools, put lower-tiered VMs in resource pools. SQL supports "Memory Hot Add"; don't use it on ESXi versions lower than 6.0. You must run sp_configure if setting Max Memory for SQL instances (not necessary in SQL Server 2016). The virtual:physical memory allocation ratio should not exceed 2:1. Remember NUMA? It's not just about CPU; fetching remote memory is VERY expensive. Use "numa.vcpu.maxPerVirtualNode" to control memory locality.
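If memory is hot-added (or the allocation is resized), the instance will not use it until 'max server memory' is raised; a hedged sketch (instance name and the 12 GB value are illustrative):

    Invoke-Sqlcmd -ServerInstance 'sql-vm-01' -Query "EXEC sp_configure 'show advanced options', 1; RECONFIGURE;"
    Invoke-Sqlcmd -ServerInstance 'sql-vm-01' -Query "EXEC sp_configure 'max server memory (MB)', 12288; RECONFIGURE;"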

29 Network Optimization

30 Network Best Practices
Allocate separate NICs for different traffic types if possible; they can be connected to the same uplink/physical NIC on a 10Gb network. vSphere versions 5.0 and newer support multi-NIC, concurrent vMotion operations. Use NIC load-based teaming (route based on physical NIC load). Recommend the use of NICs that support: checksum offload, TCP segmentation offload (TSO); jumbo frames (JF), large receive offload (LRO); the ability to handle high-memory DMA (i.e. 64-bit DMA addresses); the ability to handle multiple scatter-gather elements per Tx frame; offload of encapsulated packets (with VXLAN). Check and update physical NIC drivers. Keep VMware Tools up to date - ALWAYS.

31 Network Best Practices (continued)
Use Virtual Distributed Switches for cross-ESX network convenience. Optimize IP-based storage (iSCSI and NFS): enable jumbo frames; use a dedicated VLAN for the ESXi host's vmknic and the iSCSI/NFS server to minimize network interference from other packet sources; exclude in-guest iSCSI NICs from WSFC use; be mindful of converged networks - storage load can affect the network and vice versa as they use the same physical hardware, so ensure there are no bottlenecks in the network between source and destination. Use the VMXNET3 paravirtualized adapter to increase performance; NEVER use any other vNIC type, unless for legacy OSes and applications; it reduces overhead versus vlance or E1000 emulation; VMware Tools must be installed to enable VMXNET3. Tune guest OS network buffers and maximum ports.
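A hedged PowerCLI sketch for converting non-VMXNET3 vNICs on a VM (the VM name is illustrative; change the adapter type only while the VM is powered off, and make sure VMware Tools is installed so the guest has the driver):

    $vm = Get-VM -Name 'sql-vm-01'
    # Find emulated adapters (e.g. E1000) and switch them to the paravirtualized VMXNET3 type
    Get-NetworkAdapter -VM $vm | Where-Object { $_.Type -ne 'Vmxnet3' } |
        Set-NetworkAdapter -Type Vmxnet3 -Confirm:$false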

32 A Word on Windows RSS: Default Behavior and the Solution
The default RSS behavior results in unbalanced CPU usage: CPU0 becomes saturated servicing network I/Os. The problem manifests as in-guest packet drops, and it is not seen in the vSphere kernel, making it difficult to detect. Solution: enable RSS in 2 places in Windows. At the NIC properties: Get-NetAdapterRss | fl name, enabled; Enable-NetAdapterRss -Name <AdapterName>. At the Windows kernel: netsh int tcp show global; netsh int tcp set global rss=enabled.
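The same commands gathered into one small elevated-PowerShell sequence (adapter selection is illustrative; verify against your NIC vendor's guidance):

    Get-NetAdapterRss | Format-List Name, Enabled                        # check per-NIC RSS state
    Get-NetAdapter -Physical | ForEach-Object { Enable-NetAdapterRss -Name $_.Name }   # enable at the NIC level
    netsh int tcp set global rss=enabled                                 # enable in the Windows kernel
    netsh int tcp show global                                            # verify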

33 To cluster or not to cluster?
Do you NEED SQL Clustering? It is purely a business and administrative decision; virtualization does not preclude you from doing so. vSphere HA is NOT a replacement for SQL clustering. Want AG? No "special" requirements on vSphere. Want FCI? MSCS? You MUST use Raw Device Mapping (RDM) disks, and the shared disks MUST be connected to vSCSI controllers in PHYSICAL mode bus sharing (wonder why it's called "Physical Mode RDM", eh?). In pre-vSphere 6.0, FCI/MSCS nodes CANNOT be vMotioned. Period. In vSphere 6.0, you have vMotion capabilities under the following conditions: clustered VMs are at Hardware Version 11, and the vMotion VMkernel portgroup is connected to a 10Gb network.

34 vCenter Server 6.0 - Cross vCenter & Long distance vMotion
Simultaneously changes compute, storage, and network; vCenter vMotion without shared storage; increased scale; pool resources across vCenter Servers. Targeted topologies: local, metro, intra-continental (up to 150 ms RTT). Expanding on the Cross vSwitch vMotion enhancement, we are also excited to announce support for Cross vCenter vMotion. vMotion can now perform the following changes simultaneously: change compute (vMotion) - migrates virtual machines across compute hosts; change storage (Storage vMotion) - migrates the virtual machine disks across datastores; change network (Cross vSwitch vMotion) - migrates a VM across different virtual switches; and finally, change vCenter (Cross vCenter vMotion) - moves the VM to a different vCenter Server. All of these types of vMotion are seamless to the guest OS. As with Cross vSwitch vMotion, Cross vCenter vMotion requires L2 network connectivity since the IP of the VM will not be changed. This functionality builds upon Enhanced vMotion, and shared storage is not required. Target support covers local (single site), metro (multiple well-connected sites), and cross-continental sites. Use cases: migrate from a VCSA to a Windows version of vCenter and vice versa; replace/retire a vCenter Server without disruption; pool resources across vCenters where additional vCenters were deployed due to vCenter scalability limits; migrate VMs across local, metro, and continental distances; public/private cloud environments with several vCenters. Business benefits: reduced cost - migration to a VCSA eliminates Windows and SQL licenses; increased reliability - migration to a Windows vCenter with a SQL cluster can increase availability of vCenter services; increased availability during planned maintenance activities - vCenters can be drained and upgraded without impacting managed virtual machines. (Diagram: vDS A and vDS B connected by a VM network with L2 connectivity.)

35 vMotion of Clustered SQL Nodes – Avoid the Common Pitfall
AG, FCI, and MSCS(!) all use Windows Server Failover Clustering (WSFC). WSFC has a default 5-second heartbeat timeout threshold, and vMotion operations MAY exceed 5 seconds (during VM quiescing), leading to unintended and disruptive database and resource failover events. Solutions (pick one, any one): See Section 4.2 (vMotion Considerations for Windows and SQL Clustering) of Microsoft SQL Server on VMware Availability and Recovery Options. Use MULTIPLE vMotion portgroups, where possible. Enable jumbo frames on all vmkernel ports, IF the PHYSICAL network supports it. Consider modifying the default WSFC behaviors; see Microsoft's "Tuning Failover Cluster Network Thresholds": (get-cluster).SameSubnetThreshold = 10; (get-cluster).CrossSubnetThreshold = 20; (get-cluster).RouteHistoryLength = 40. This behavior is NOT unique to VMware or virtualization: if your backup software quiesces your SQL Servers, you experience the same symptom.
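A hedged PowerShell sketch of those threshold changes, run on one of the cluster nodes (these are the values from the Microsoft guidance referenced on the slide; review them for your environment before a vMotion window):

    Import-Module FailoverClusters
    (Get-Cluster).SameSubnetThreshold  = 10   # heartbeats missed before failover, same subnet
    (Get-Cluster).CrossSubnetThreshold = 20   # heartbeats missed before failover, across subnets
    (Get-Cluster).RouteHistoryLength   = 40   # keep route history at 2x the cross-subnet threshold
    Get-Cluster | Format-List *SubnetThreshold, RouteHistoryLength   # verify the new values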

36 SQL Server Licensing

37 SQL Server Licensing Facts
Always refer to official Microsoft documentation: Microsoft SQL Server 2014 Licensing Guide; Microsoft SQL Server 2014 Virtualization Licensing Guide. Licensing models: Server/CAL (only for Business Intelligence or Standard Edition) requires a client access license (CAL) for every user or device connected. Per-core (SQL Server 2012 and 2014 Enterprise Edition): 1 VM per core limit without Software Assurance; unlimited VMs with Software Assurance (limited by resource requirements). When virtual machines move, licenses don't necessarily move with them; rules might vary depending on SQL Server version and edition. SQL Server 2014: eliminate vMotion accounting with the purchase of Software Assurance (SA); without SA, vMotion is possible, but the target host needs available licenses to accommodate the vMotion addition (no more than 1 VM per core). Previous versions: SQL Server 2008 R2 Licensing Quick Reference Guide; Microsoft SQL Server 2012 Licensing Guide; Microsoft SQL Server 2012 Virtualization Licensing Guide. You can license by VM in either model.

38 SQL Server 2014 – License by Virtual Machine
Core-based licensing: requires a core license for each virtual core; minimum of 4 core licenses per virtual machine, purchased in 2-core increments. Server licensing (Standard and Business Intelligence ONLY): requires one server license per virtual machine; CAL licenses need to be purchased separately. Examples: 2 vCPUs = 4 core licenses (the minimum); 5 vCPUs = 6 core licenses (rounded up to the 2-core increment); 6 vCPUs = 6 core licenses. Virtual machines can move freely within a server farm, third-party hoster, or cloud services provider with the purchase of Software Assurance.

39 SQL Server 2014 – High Density Virtualization
(Recommended for maximum savings.) License all physical cores on the host; deploy an unlimited number of virtual machines with the purchase of SA; limited to one virtual machine per core without SA. Virtual machines can move freely as long as the target server has valid licenses. Available with Enterprise Edition only. Core factor for AMD processors: EBC9AAFC49AD/SQL2012_CoreFactorTable_Mar2012.pdf. Example: 2 sockets x 8 cores (16 cores total) = 8 EE 2-core licenses + SA; deploy unlimited virtual machines.

40 SQL Server Licensing for VMware
SQL Server 2014 Consolidation Example. Physical SQL Servers: 10 servers, 2x8 (16 cores per server), 160 cores total, avg. utilization ~15%; 80 Enterprise Edition 2-core licenses + SA on 10 servers, roughly $1.3M. Virtualized: 2x8 (16 cores per server), 32 cores total, avg. host utilization ~75%; 16 EE licenses + SA, roughly $400K. That is about a 70% cost reduction. NOTE: While CONSOLIDATION may impress your bosses, your databases and applications may disagree. Choose WISELY.

41 Licensing for Disaster Recovery Environments
Table columns: VMware feature, SQL Server license required at the primary site, SQL Server license required at the secondary site. VMware HA: Yes / No (note 1). vMotion: Yes (note 2). VMware Fault Tolerance (FT), Site Recovery Manager: No at secondary. Notes: 1. A license is required for non-failure scenarios, such as planned host maintenance. 2. Sufficient licenses are required on the target host. Reference: Application Server License Mobility.

42 SQL Server Licensing - Cluster
Licensing a full vSphere cluster: maximize consolidation, maximize use of VMware tooling, potential cost savings. Sub-cluster licensing: for when there are insufficient instances to justify licensing the full vSphere cluster; restrict VM movement with DRS host affinity rules; potentially lower consolidation; keep a VM movement audit trail. (Diagram: two vCenter clusters containing SQL Server hosts and app hosts, with vMotion of SQL Server VMs restricted to the licensed hosts.)

43 When Things Go Sideways

44 Performance Needs Monitoring at Every Level
Application level: app-specific performance tools/stats. Guest OS: CPU utilization, memory utilization, I/O latency. Virtualization level: vCenter performance metrics/charts; limits, shares, virtualization contention. Physical server level: CPU and memory saturation, power saving. Connectivity level: network/FC switches and data paths; packet loss, bandwidth utilization. Peripherals level: SAN or NAS devices; utilization, latency, throughput. (Diagram: the stack from Application through Guest OS, ESXi, Physical Server, Connectivity, and Peripherals, with a START HERE marker at the application.)

45 Host Level Monitoring VMware vSphere Client™ resxtop
VMware vSphere Client: GUI interface, primary tool for observing performance and configuration data for one or more vSphere hosts; does not require high levels of privilege to access the data. resxtop: gives access to detailed performance data of a single vSphere host; provides fast access to a large number of performance metrics; runs in interactive, batch, or replay mode.

46 Key Metrics to Monitor for vSphere
Key metrics (resource, metric, host/VM scope, description). CPU: %USED (both) - CPU used over the collection interval (%); %RDY (VM) - CPU time spent in ready state; %SYS - percentage of time spent in the ESX Server VMkernel. Memory: Swapin, Swapout - memory the ESX host swaps in/out from/to disk (per VM, or cumulative over the host); MCTLSZ (MB) - amount of memory reclaimed from the resource pool by way of ballooning. Disk: READs/s, WRITEs/s - reads and writes issued in the collection interval; DAVG/cmd - average latency (ms) of the device (LUN); KAVG/cmd - average latency (ms) in the VMkernel, also known as "queuing time"; GAVG/cmd - average latency (ms) in the guest, GAVG = DAVG + KAVG. Network: MbRX/s, MbTX/s - amount of data received/transmitted per second; PKTRX/s, PKTTX/s - packets received/transmitted per second; %DRPRX, %DRPTX - dropped receive/transmit packets per second.

47 CPU Key Indicators Ready (%RDY) Co-Stop (%CSTP) Max Limited (%MLMTD)
% of time a vCPU was ready to be scheduled on a physical processor but couldn't be due to processor contention; investigation threshold: 10% per vCPU. Co-Stop (%CSTP): % of time a vCPU in an SMP virtual machine is "stopped" from executing, so that another vCPU in the same virtual machine can run to catch up and keep the skew between the virtual processors from growing too large; investigation threshold: 3%. Max Limited (%MLMTD): % of time the VM was ready to run but wasn't scheduled because it violated the CPU limit set; added to %RDY time. At the virtual machine level, also watch the processor queue length.
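A hedged PowerCLI sketch for turning the raw vCenter ready counter into a percentage (the realtime sample interval is 20 seconds, so ready milliseconds are divided by 20,000; the VM name is illustrative):

    $vm = Get-VM -Name 'sql-vm-01'
    # Aggregate ready time (ms) across all vCPUs for the latest 20-second realtime sample
    $sample = Get-Stat -Entity $vm -Stat 'cpu.ready.summation' -Realtime -MaxSamples 1 |
              Where-Object { $_.Instance -eq '' }
    '{0:N1}% CPU ready' -f ($sample.Value / 20000 * 100)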

48 Key Performance Indicators
Memory: Balloon driver size (MCTLSZ) - the total amount of guest physical memory reclaimed by the balloon driver; investigation threshold: 1. Swapping (SWCUR) - the current amount of guest physical memory that is swapped out to the ESX kernel VM swap file. Swap Reads/sec (SWR/s) - the rate at which machine memory is swapped in from disk. Swap Writes/sec (SWW/s) - the rate at which machine memory is swapped out to disk. Network: Transmit Dropped Packets (%DRPTX) - the percentage of transmit packets dropped; investigation threshold: 1. Receive Dropped Packets (%DRPRX) - the percentage of receive packets dropped.

49 Logical Storage Layers: from Physical Disks to vmdks
GAVG: tracks latency of I/O in the guest VM; investigation threshold: 15-20 ms. KAVG: tracks latency of I/O passing through the kernel; investigation threshold: 1 ms. DAVG: tracks latency at the device driver; includes round-trip time between the HBA and storage; investigation threshold: 15-20 ms, lower is better, some spikes okay. Aborts (ABRT/s): # of commands aborted per second; investigation threshold: 1. (Diagram: Guest OS disk → virtual machine .vmdk file → VMware datastore (VMFS volume) → storage LUN → physical disks in the storage array.)

50 Storage Key Indicators Kernel Latency Average (KAVG)
This counter tracks the latencies of I/O passing through the kernel; investigation threshold: 1 ms. Device Latency Average (DAVG): the latency seen at the device driver level; it includes the round-trip time between the HBA and the storage; investigation threshold: 15-20 ms, lower is better, some spikes okay. Aborts (ABRT/s): the number of commands aborted per second; investigation threshold: 1. Size storage arrays appropriately for total VM usage. Disk latency > 15-20 ms could be a performance problem; kernel latency > 1 ms could be a performance problem or an undersized ESX device queue.

51 Monitoring Disk Performance with esxtop
Watch for very large values of DAVG/cmd and GAVG/cmd. Rule of thumb: GAVG/cmd > 20ms = high latency! What does this mean? When the command reaches the device, latency is high, and the latency as seen by the guest is high; a low KAVG/cmd means the command is not queuing in the VMkernel.
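A quick illustrative reading (the numbers are made up): if esxtop shows DAVG/cmd = 18 ms and KAVG/cmd = 3 ms, then GAVG/cmd = 18 + 3 = 21 ms, which breaks the 20 ms rule of thumb; the high DAVG points at the device/array path rather than VMkernel queuing, so the LUN queue depth and the array are the place to look first.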

52 Resources

53 The Links are Free. Really
Virtualizing Business Critical Applications; Everything About Clustering Windows Applications on VMware vSphere; VMware Performance Technical Papers; Performance Best Practices; Something for the DBA in You.

