Describe the basics of the Hyper-V over SMB scenario, including the main reasons to implement it. Enumerate the most common performance bottlenecks in Hyper-V over SMB configurations.


3 Session Objectives: Describe the basics of the Hyper-V over SMB scenario, including the main reasons to implement it. Enumerate the most common performance bottlenecks in Hyper-V over SMB configurations. Outline a few Hyper-V over SMB configurations that can provide continuous availability, including details on networking and storage.

4 Agenda: Hyper-V over SMB - Overview; Performance Considerations; Basic Configurations; Sample Configurations


6 Hyper-V over SMB
What is it? Store Hyper-V files (including VM configuration, VHD files, and snapshots) in shares accessed over the SMB 3.0 protocol. Works with both standalone and clustered servers (file storage used as cluster shared storage).
Highlights: increases flexibility; eases provisioning, management and migration; leverages a converged network; reduces CapEx and OpEx.
Supporting features: SMB Transparent Failover (continuous availability), SMB Scale-Out (active/active file server clusters), SMB Direct (SMB over RDMA: low latency, low CPU use), SMB Multichannel (network throughput and failover), SMB Encryption (security), VSS for SMB File Shares (backup and restore), and SMB PowerShell (manageability).
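As a rough illustration of the scenario (a minimal sketch; the server, share, and account names below are made up), the Windows Server 2012 SMB and Hyper-V cmdlets can create a share and place a VM directly on it:

# On the file server: share a folder, granting the Hyper-V admin and the
# Hyper-V hosts' computer accounts full access (illustrative names).
New-SmbShare -Name VMS1 -Path D:\Shares\VMS1 -FullAccess CONTOSO\HVAdmin, CONTOSO\HV1$, CONTOSO\HV2$

# On the Hyper-V host: create the VHDX and the VM on the UNC path.
New-VHD -Path \\FS1\VMS1\VM1.vhdx -SizeBytes 50GB -Dynamic
New-VM -Name VM1 -Path \\FS1\VMS1 -MemoryStartupBytes 2GB -VHDPath \\FS1\VMS1\VM1.vhdx

Note that matching NTFS permissions on the shared folder are needed as well; the share-level grants alone are not sufficient.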

7 SMB Transparent Failover
Failover is transparent to the server application: zero downtime, with only a small IO delay during failover. Supports both planned failovers (hardware/software maintenance, load rebalancing) and unplanned failovers (hardware/software failures). Resilient for both file and directory operations.
Requires: file servers configured as a Windows Failover Cluster; Windows Server 2012 on both the servers running the application and the file server cluster nodes; shares enabled for "continuous availability" (the default configuration for clustered file shares). Works for both classic file server clusters (cluster disks) and scale-out file server clusters (CSV).
Failover sequence (from the slide diagram, against \\fs\share): 1. Normal operation. 2. Failover: connections and handles are lost, with a temporary stall of IO. 3. Connections and handles are auto-recovered, and application IO continues with no errors.
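A quick way to check and set this from PowerShell (the share name is illustrative; on clustered shares continuous availability is already the default):

# Is the share continuously available?
Get-SmbShare | Select-Object Name, ContinuouslyAvailable

# Enable it explicitly if needed.
Set-SmbShare -Name VMS1 -ContinuouslyAvailable $true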

8 SMB Scale-Out
Targeted at server application storage (for example, Hyper-V and SQL Server). Increase available bandwidth by adding nodes. Leverages Cluster Shared Volumes (CSV).
Key capabilities: active/active file shares; fault tolerance with zero downtime; fast failure recovery; CHKDSK with zero downtime; support for app-consistent snapshots; support for RDMA-enabled networks; optimization for server apps; simple management.
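A minimal sketch of standing up the Scale-Out File Server role (cluster, role, and share names are illustrative; this assumes the Failover Clustering feature and CSV-backed storage are already in place):

# Create the Scale-Out File Server role on an existing cluster.
Add-ClusterScaleOutFileServerRole -Name SOFS -Cluster FSCLUSTER

# Shares then live on Cluster Shared Volumes and are served active/active by all nodes.
New-SmbShare -Name VMS1 -Path C:\ClusterStorage\Volume1\VMS1 -FullAccess CONTOSO\HVAdmin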

9 SMB Direct (SMB over RDMA)
(The slide diagram contrasts the user-mode and kernel-mode layers of the network stack with and without RDMA offload.)
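To verify that RDMA is available and actually being used by SMB Direct, a few in-box cmdlets help (treat this as a sketch; the exact output properties vary by OS version):

# Do the NICs support RDMA, as seen by the network stack and by the SMB client?
Get-NetAdapterRdma
Get-SmbClientNetworkInterface

# After driving some traffic to the file server, confirm the paths in use are RDMA-capable.
Get-SmbMultichannelConnection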

10 SMB Multichannel
Full throughput: bandwidth aggregation with multiple NICs; multiple CPU cores are engaged when the NIC offers Receive Side Scaling (RSS).
Automatic failover: SMB Multichannel implements end-to-end failure detection; it leverages NIC teaming (LBFO) if present, but does not require it.
Automatic configuration: SMB detects and uses multiple paths.
(The slide diagram shows sample configurations.)
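Multichannel is on by default and needs no configuration; the sketch below (the interface alias is illustrative) shows how to inspect it and, if necessary, constrain which NICs SMB uses for a given server:

# See the paths SMB discovered (one row per client/server NIC pair in use).
Get-SmbMultichannelConnection

# Optionally restrict SMB traffic to specific interfaces for one server.
New-SmbMultichannelConstraint -ServerName FS1 -InterfaceAlias "SLOT 2 Port 1"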

11 SMB Encryption
End-to-end encryption of SMB data in flight. Protects data from eavesdropping or snooping attacks on untrusted networks. Zero new deployment costs: no need for IPsec, specialized hardware, or WAN accelerators. Configured per share or for the entire server. Can be turned on for a variety of scenarios where data traverses untrusted networks, such as application workloads over unsecured networks or branch offices over WAN links.
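Both scopes mentioned above map to one-liners (the share name is illustrative):

# Encrypt everything this server serves...
Set-SmbServerConfiguration -EncryptData $true

# ...or just one share.
Set-SmbShare -Name VMS1 -EncryptData $true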

12 VSS for SMB File Shares
Application-consistent shadow copies for server application data stored on Windows Server 2012 file shares. Covers backup and restore scenarios, with full integration into the VSS infrastructure.
(The slide diagram shows the flow: the backup server asks the application server for a shadow copy; the application server relays the shadow copy request to the file server, which snapshots the data volume behind \\fs\foo into a shadow copy share \\fs\foo@t1; the backup server then reads from the shadow copy share.)
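For a feel of the flow, DiskShadow on Windows Server 2012 can snapshot a UNC path, assuming the File Server VSS Agent Service role service is installed on the file server (a rough sketch, not a production backup script; the share is the one from the slide):

# Write a DiskShadow script that snapshots the UNC path, then run it.
@"
SET CONTEXT PERSISTENT
ADD VOLUME \\fs\foo
CREATE
"@ | Set-Content vss-smb.dsh

diskshadow /s vss-smb.dsh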


14 File Server Configurations
Single-node file server: lowest cost for shared storage, but shares are not continuously available.
Dual-node file server: low cost for continuously available shared storage; limited scalability (up to a few hundred disks). A deployment sketch follows below.
Multi-node file server: highest scalability (up to thousands of disks); higher cost, but still lower than connecting every Hyper-V host with Fibre Channel.
(The slide diagram labels the three options A, B and C, each showing shares, disks, and VM config/VHD files.)
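For the dual-node option, the building blocks are the file server role plus failover clustering (node and cluster names are illustrative; a minimal sketch, not a full deployment):

# On each node: add the file server and clustering features.
Install-WindowsFeature FS-FileServer, Failover-Clustering -IncludeManagementTools

# Validate the hardware, then create the two-node cluster.
Test-Cluster -Node FS1, FS2
New-Cluster -Name FSCLUSTER -Node FS1, FS2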

15 Network Configurations
(The slide diagrams, labeled A through D, cover 1GbE networks, mixed 1GbE/10GbE, and 10GbE or InfiniBand networks connecting the clients, Hyper-V hosts, and file servers.)

18 Important notes on Hyper-V over SMB

20 Typical Configuration for Hyper-V over SMB
(The slide diagram shows the end-to-end stack: clients reach a Hyper-V cluster; each Hyper-V host runs VMs whose virtual disks go through the SMB 3.0 client to file shares on a file server cluster; the file server nodes run the SMB 3.0 server on top of Storage Spaces and connect through SAS HBAs to JBODs, each with dual SAS modules and its own disks.)
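The Storage Spaces layer in this picture can be assembled with the in-box storage cmdlets (a sketch; the friendly names and column count are illustrative):

# Pool all disks in the JBODs that are eligible for pooling.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName Pool1 -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks $disks

# Carve a mirrored space out of the pool.
New-VirtualDisk -StoragePoolFriendlyName Pool1 -FriendlyName Space1 -ResiliencySettingName Mirror -NumberOfColumns 8 -UseMaximumSize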

21 Performance Considerations
Every layer of the stack (shown in the slide diagram) has knobs that can become the bottleneck:
Clients: number of clients, speed of client NICs.
Hyper-V hosts: number of hosts, cores per host, RAM per host, VMs per host, virtual processors per VM, RAM per VM, NICs and R-NICs per host and their speed.
File servers: R-NICs per file server and their speed, SAS HBAs per file server and SAS speed, number of Spaces, columns per Space, CSV cache configuration.
JBODs: disks per JBOD, disk speed, SAS ports per module, SAS speed.
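Of these, the CSV cache is the one most often left at its default of zero. On Windows Server 2012 it is sized at the cluster level and enabled per volume; a hedged sketch (these property and parameter names are the Windows Server 2012 ones and changed in later releases):

# Reserve 12GB of RAM per node for CSV read caching (as in the VDI sample later).
(Get-Cluster).SharedVolumeBlockCacheSizeInMB = 12288

# Enable the cache on a specific CSV.
Get-ClusterSharedVolume "Cluster Disk 1" | Set-ClusterParameter CsvEnableBlockCache 1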

22 Designing a solution

23 VDI Workload (sample only, your requirements may vary)
Clients: 500 clients, 1Gbps NICs.
Hyper-V hosts: 11 hosts, 16 cores and 128GB RAM each; 50 VMs per host (500 VMs total), 2GB RAM and a 50GB VHD per VM; 2 R-NICs @ 10Gbps per host, plus 2 NICs @ 10Gbps.
File servers: 2 R-NICs @ 10Gbps; 2 SAS HBAs @ 6Gbps with 2 SAS ports per HBA.
Storage: 8 mirrored Spaces, 16 columns per Space, 12GB CSV cache; 60 disks per JBOD (120 disks total), 900GB @ 10Krpm, 4 SAS ports @ 6Gbps per JBOD.
Resulting throughput ceilings from the slide: ~4.4 GB/sec on the network side (2 x 10GbE x 2) and ~8.8 GB/sec on the SAS side (2 x 6Gb SAS x4 x 2).
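As a sanity check on those ceilings, the arithmetic behind the slide's two numbers (per-link figures taken from the table on the next slide):

# Network side: 2 R-NICs @ 10GbE (~1.1 GB/sec each) x 2 file server nodes.
$netGBps = 2 * 1.1 * 2      # ~4.4 GB/sec

# Storage side: 2 HBAs x 2 ports each, 6Gb SAS x4 (~2.2 GB/sec per port).
$sasGBps = 2 * 2 * 2.2      # ~8.8 GB/sec

"{0} GB/sec network, {1} GB/sec SAS" -f $netGBps, $sasGBps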

24 Speeds and Feeds – Maximum Theoretical Throughput
NICs: 1Gb Ethernet ~0.1 GB/sec; 10Gb Ethernet ~1.1 GB/sec; 40Gb Ethernet ~4.5 GB/sec; 32Gb InfiniBand (QDR) ~3.8 GB/sec; 56Gb InfiniBand (FDR) ~6.5 GB/sec.
HBAs: 3Gb SAS x4 ~1.1 GB/sec; 6Gb SAS x4 ~2.2 GB/sec; 4Gb FC ~0.4 GB/sec; 8Gb FC ~0.8 GB/sec; 16Gb FC ~1.5 GB/sec.
Bus slots: PCIe Gen2 x4 ~1.7 GB/sec; Gen2 x8 ~3.4 GB/sec; Gen2 x16 ~6.8 GB/sec; PCIe Gen3 x4 ~3.3 GB/sec; Gen3 x8 ~6.7 GB/sec; Gen3 x16 ~13.5 GB/sec.
Memory: DDR2-400 (PC2-3200) ~3.4 GB/sec; DDR2-667 (PC2-5300) ~5.7 GB/sec; DDR2-1066 (PC2-8500) ~9.1 GB/sec; DDR3-800 (PC3-6400) ~6.8 GB/sec; DDR3-1333 (PC3-10600) ~11.4 GB/sec; DDR3-1600 (PC3-12800) ~13.7 GB/sec; DDR3-2133 (PC3-17000) ~18.3 GB/sec.
Intel QPI: 4.8 GT/s ~9.8 GB/sec; 5.86 GT/s ~12.0 GB/sec; 6.4 GT/s ~13.0 GB/sec; 7.2 GT/s ~14.7 GB/sec; 8.0 GT/s ~16.4 GB/sec.
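These line rates come from dividing the signaling rate by 8 and discounting the link encoding; a small helper makes the conversion explicit (the encoding factors are the standard ones; the table's figures differ slightly because of additional protocol overhead and rounding):

function Convert-GbpsToGBps {
    param(
        [double]$Gbps,
        [double]$Encoding = 1.0   # 8b/10b links (e.g. 6Gb SAS, 8Gb FC) use 0.8; 10Gb Ethernet uses 64b/66b, ~0.97
    )
    [math]::Round($Gbps * $Encoding / 8, 1)
}

Convert-GbpsToGBps -Gbps 10 -Encoding 0.97       # 10Gb Ethernet -> ~1.2 (table: ~1.1)
Convert-GbpsToGBps -Gbps (4 * 6) -Encoding 0.8   # 6Gb SAS x4    -> ~2.4 (table: ~2.2)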

25 Potential Variations
Regular NICs instead of RDMA NICs. Fibre Channel or iSCSI instead of SAS. A traditional SAN instead of JBODs. A third-party SMB 3.0 NAS instead of a Windows File Server cluster.

27 (Performance results; the slide diagram shows a host with multiple SAS HBAs, each cabled to a JBOD of SSDs.)
Workload | BW (GB/sec) | IOPS (IOs/sec) | %CPU privileged
Physical host, 512KB IOs, 100% read, 2t, 12o | ~16.8 | ~32K | ~16%
Physical host, 32KB IOs, 100% read, 8t, 4o | ~10.9 | ~334K | ~52%
12 VMs, 4VP, 512KB IOs, 100% read, 2t, 16o | ~16.8 | ~32K | ~12%
12 VMs, 4VP, 32KB IOs, 100% read, 4t, 32o | ~10.7 | ~328K | ~62%

29 Sample Configurations

30 Standalone, 1GbE, FC Array
Client and server: 2 sockets (8 cores total) @ 2.26 GHz; 24 GB RAM; 1 x 1GbE NIC (onboard).
Server only: storage adapter is 1 FC adapter with 2 x 4Gbps links; disks are 24 x 10Krpm HDDs (20 used for data, 2 used for log).

31 Dell Servers, Standalone, 10GbE, SAS, Storage Spaces
VMs | Local IOPS | Remote IOPS | Remote/Local
1 | 900 | 850 | 94.4%
2 | 1,750 | 1,700 | 97.1%
4 | 3,500 | 3,350 | 95.7%
6 | 5,850 | 5,600 | 95.7%
8 | 7,000 | 6,850 | 97.9%

32 Intel, Standalone, 56GbIB, FusionIO — SMB 3.0 + RDMA (InfiniBand FDR)
Configuration | BW (MB/sec) | IOPS (512KB IOs/sec) | %CPU privileged
Non-RDMA (Ethernet, 10Gbps) | 1,129 | 2,259 | ~9.8
RDMA (InfiniBand QDR, 32Gbps) | 3,754 | 7,508 | ~3.5
RDMA (InfiniBand FDR, 54Gbps) | 5,792 | 11,565 | ~4.8
Local | 5,808 | 11,616 | ~6.6

33 SuperMicro, Standalone, 2 x 56GbIB, SAS, LSI RAID
(The slide diagram shows four SAS RAID controllers, each attached to a JBOD of SSDs.)
Configuration | BW (MB/sec) | IOPS (512KB IOs/sec) | %CPU privileged | Latency
1 – Local | 10,090 | 38,492 | ~2.5% | ~3 ms
2 – Remote | 9,852 | 37,584 | ~5.1% | ~3 ms
3 – Remote VM | 10,367 | 39,548 | ~4.6% | ~3 ms

34 EchoStreams, Standalone, 3 x 56GbIB, Storage Spaces
(The slide diagram shows several SAS HBAs, each attached to a JBOD of SSDs.)
Workload | BW (MB/sec) | IOPS (IOs/sec) | %CPU privileged | Latency
512KB IOs, 100% read, 2t, 8o | 16,778 | 32,002 | ~11% | ~2 ms
8KB IOs, 100% read, 16t, 2o | 4,027 | 491,665 | ~65% | < 1 ms

35 X-IO, Standalone, FC SAN, 3 x 56GbIB

36 Wistron, Cluster-in-a-box, 10GbE, SAS, Storage Spaces

37 HP StoreEasy 5000, Cluster-in-a-box, 10GbE, SAS

38 Quanta, Cluster-in-a-box, 56GbIB, SAS, LSI HA-DAS

39 Violin Memory Prototype, Cluster-in-a-box, 56GbIB

40 VDI boot storm, Scale-Out (File Server test team)

41 holSystems: Dell Servers, Cluster, 10GbE, FC Arrays
(The slide diagram shows two failover clusters, Failover Cluster 1 and Failover Cluster 2.)

42 Microsoft IT: HP Servers, Cluster, 6 x 10GbE, FC Array
(The slide diagram shows redundant 10GbE switches, 3a and 3b.)

43 NTTX: Dell servers, Clusters, 10GbE, SAS JBODs

44 In Review: Session Objectives
Describe the basics of the Hyper-V over SMB scenario, including the main reasons to implement it. Enumerate the most common performance bottlenecks in Hyper-V over SMB configurations. Outline a few Hyper-V over SMB configurations that can provide continuous availability, including details on networking and storage.

45 Complete your session evaluations today and enter to win prizes daily. Provide your feedback at a CommNet kiosk or log on at www.2013mms.com. Upon submission you will receive instant notification if you have won a prize. Prize pickup is at the Information Desk located in Attendee Services in the Mandalay Bay Foyer. Entry details can be found on the MMS website.

46 Resources
