Jeff Woolsey Principal Group Program Manager Windows Server, Hyper-V WSV312.

1

2 Jeff Woolsey Principal Group Program Manager Windows Server, Hyper-V WSV312

3 Session Objectives And Takeaways Understand the storage options with Hyper-V as well as use cases for DAS and SAN Learn what’s new in Server 2008 R2 for storage and Hyper-V Understand different high availability options of Hyper-V with SANs Learn performance improvements with VHDs, Passthrough & iSCSI Direct scenarios

4 Storage Performance/Sizing Important to scale performance to the total workload requirements of each VM Spindles are still key Don’t migrate 20 physical servers with 40 spindles each to a Hyper-V host with 10 spindles Don’t use leftover servers as a production SAN
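The spindle warning above reduces to simple arithmetic. A rough sizing sketch; the per-spindle IOPS figure and the example workload are assumed placeholders, so substitute measured numbers for your hardware:

```python
import math

def spindles_needed(total_iops: int, iops_per_spindle: int = 150) -> int:
    """Rough spindle count for a consolidated workload.

    iops_per_spindle is an assumed figure (~150 for a 10K RPM SAS
    drive); measure your own disks before sizing for real.
    """
    return math.ceil(total_iops / iops_per_spindle)

# Hypothetical: 20 migrated servers averaging ~300 IOPS each at peak
print(spindles_needed(20 * 300))  # 40 -- far more than a 10-spindle host
```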

5 Windows Storage Stack Bus – Scan up to 8 buses (Storport) Target – Up to 255 targets LUNs – Up to 255 Support for up to 256TB volumes >2TB supported since Server 2003 SP1 Common Q: What is the supported maximum transfer size? Dependent on adapter/miniport (e.g. QLogic/Emulex)
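Multiplying the stack limits above gives the theoretical address space per Storport adapter, a back-of-the-envelope illustration rather than a practical configuration target:

```python
buses = 8             # Storport scans up to 8 buses
targets_per_bus = 255 # up to 255 targets per bus
luns_per_target = 255 # up to 255 LUNs per target

max_luns = buses * targets_per_bus * luns_per_target
print(max_luns)  # 520200 theoretically addressable LUNs

max_volume_tb = 256  # volumes up to 256 TB; >2 TB since Server 2003 SP1
```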

6 Hyper-V Storage Parameters VHD max size 2040GB Physical disk size not limited by Hyper-V Up to 4 IDE devices Up to 4 SCSI controllers with 64 devices Optical devices only on IDE
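The per-VM parameters above lend themselves to a quick sanity check before provisioning. A minimal sketch; the function and its message strings are illustrative, not any Hyper-V API:

```python
HYPERV_LIMITS = {
    "ide_devices": 4,       # up to 4 IDE devices per VM
    "scsi_controllers": 4,  # up to 4 SCSI controllers
    "disks_per_scsi": 64,   # up to 64 devices per SCSI controller
    "vhd_max_gb": 2040,     # VHD maximum size
}

def check_vm_storage(ide_devices=0, scsi_controllers=0, disks_per_scsi=0,
                     largest_vhd_gb=0, optical_on_scsi=False):
    """Return a list of limit violations for a proposed VM disk layout."""
    problems = []
    if ide_devices > HYPERV_LIMITS["ide_devices"]:
        problems.append("too many IDE devices")
    if scsi_controllers > HYPERV_LIMITS["scsi_controllers"]:
        problems.append("too many SCSI controllers")
    if disks_per_scsi > HYPERV_LIMITS["disks_per_scsi"]:
        problems.append("too many disks on one SCSI controller")
    if largest_vhd_gb > HYPERV_LIMITS["vhd_max_gb"]:
        problems.append("VHD exceeds 2040 GB; use a passthrough disk")
    if optical_on_scsi:
        problems.append("optical devices attach to IDE only")
    return problems

print(check_vm_storage(ide_devices=2, scsi_controllers=1,
                       disks_per_scsi=8, largest_vhd_gb=500))  # []
```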

7 Storage Connectivity From parent partition Direct Attached (SAS/SATA) Fibre Channel iSCSI Network attached storage not supported Except for ISOs Hot add and remove Virtual Disks to SCSI controller only

8 ISOs on network Shares Machine account access to share Constrained delegation

9 SCSI Support in VMs Supported In Windows XP Professional x64 Windows Server 2003 Windows Server 2008 & 2008 R2 Windows Vista & Windows 7 SuSE Linux Not Supported In Windows XP Professional x86 All other operating systems Requires integration services installed

10 Antivirus and Hyper-V Exclude VHDs & AVHDs (or directories) VM configuration directory VMMS.exe and VMWP.exe May not be required on core with no other roles Run Antivirus in virtual machines

11 Encryption and Compression Bitlocker on parent partition supported Encrypted File System (EFS) Not supported on parent partition Supported in Virtual Machines NTFS Compression (Parent partition) Allowed in Windows Server 2008 Blocked in Windows Server 2008 R2

12 Step by Step Instructions

13 Hyper-V Storage... Performance-wise from fastest to slowest… Fixed Disk VHDs/Pass Through Disks The same in terms of performance with R2 Dynamically Expanding VHDs Grow as needed Pass Through Disks Pro: VM writes directly to a disk/LUN without encapsulation in a VHD Cons: You can’t use VM snapshots; dedicates a whole disk to a VM

14 More Hyper-V Storage Hyper-V provides flexible storage options DAS: SCSI, SATA, eSATA, USB, Firewire SAN: iSCSI, Fibre Channel, SAS High Availability/Live Migration Requires block based, shared storage Guest Clustering Via iSCSI only

15 VM Setting No Pass Through

16 Computer Management: Disk

17 Taking a disk offline

18 Disk is offline…

19 Pass Through Configured

20

21 Disk type comparison (Read)

22 Hyper-V R2 Fixed Disks Fixed Virtual Hard Disks (Write) Windows Server 2008 R1: ~96% of native Windows Server 2008 R2: Equal to Native Fixed Virtual Hard Disks vs. Pass Through Windows Server 2008 R1: ~96% of pass-through Windows Server 2008 R2: Equal to Native

23 Hyper-V R2 Dynamic Disks Massive Performance Boost 64K Sequential Write Windows Server 2008 R2: 94% of native Equal to Hyper-V R1 Fixed Disks 4K Random Write Windows Server 2008 R2: 85% of native

24 Disk layout - FAQ Assuming Integration Services are installed: Do I use IDE or SCSI? One IDE channel or two? One VHD per SCSI controller? Multiple VHDs on a single SCSI controller? R2: Can hot add VHDs to virtual SCSI…

25 Disk layout - results

26 Differencing VHDs Performance vs chain length
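Why chain length matters for differencing VHDs: a read that misses in the child disk falls through to its parent, and so on down to the base disk, so worst-case reads walk the entire chain. A toy model of that lookup; this is a simplified dict-per-layer sketch, not the real VHD block allocation table format:

```python
class Vhd:
    def __init__(self, parent=None):
        self.blocks = {}      # block number -> data held in this layer
        self.parent = parent  # differencing VHDs point at a parent

    def write(self, block, data):
        self.blocks[block] = data  # writes always land in the child

    def read(self, block):
        """Return (data, parent_hops): walk up until the block is found."""
        disk, hops = self, 0
        while disk is not None:
            if block in disk.blocks:
                return disk.blocks[block], hops
            disk, hops = disk.parent, hops + 1
        return None, hops  # never written anywhere in the chain

base = Vhd()
base.write(0, b"golden image")
chain = base
for _ in range(5):            # five differencing disks / snapshots deep
    chain = Vhd(parent=chain)

data, hops = chain.read(0)
print(hops)  # 5 parent hops to satisfy a read from the base image
```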

27 Passthrough Disks When to use Performance is not the only consideration If you need support for Storage Management software Backup & Recovery applications which require direct access to disk VSS/VDS providers Allows VM to communicate via inband SCSI unfiltered (application compatibility)

28 Storage Device Ecosystem Storage device support maps to the same support as exists in physical servers Advanced scenarios: Live Migration requires shared storage Hyper-V supports both Fibre Channel & iSCSI SANs connected from parent Fibre Channel SANs still represent the largest install base for SANs and high usage with virtualization Live Migration is supported with storage arrays which have obtained the Designed for Windows Logo and which pass Cluster Validation

29 Storage Hardware that is qualified with Windows Server is qualified for Hyper-V Applies to running devices from Hyper-V parent Storage devices qualified for Server 2008 R2 are qualified with Server 2008 R2 Hyper-V No additional storage device qualification for Hyper-V Storage Hardware & Hyper-V R2

30 SAN Boot and Hyper-V Booting Hyper-V Host from SAN is supported Fibre Channel or iSCSI from parent Booting child VM from SAN supported using iSCSI boot with PXE solution (ex: emBoot/Doubletake) Must use legacy NIC Native VHD boot Boot physical system from local VHD is new feature in Server 2008 R2 Booting a VHD located on SAN (iSCSI or FC) not currently supported (considering for future)

31 iSCSI Direct Microsoft iSCSI Software initiator runs transparently from within the VM VM operates with full control of LUN LUN not visible to parent iSCSI initiator communicates to storage array over TCP stack Best for application transparency LUNs can be hot added & hot removed without requiring reboot of VM (2008 and 2008 R2) VSS hardware providers run transparently within the VM Backup/Recovery runs in the context of VM Enables guest clustering scenario

32 High Speed Storage & Hyper-V Larger virtualization workloads require higher throughput True for all scenarios VHD Passthrough iSCSI Direct Fibre Channel 8 gig & 10 Gig iSCSI will become more common As throughput grows, requirements to support higher IO to disks also grows

33 High Speed Storage & Hyper-V Customers concerned about performance should not use a single 1 Gig Ethernet NIC port to connect to iSCSI storage Multiple NIC ports & aggregate throughput using MPIO or MCS is recommended The Microsoft iSCSI Software Initiator performs very well at 10 Gig wire speed 10Gig Ethernet adoption is ramping up Driven by increasing use of virtualization Fibre Channel 8 gig & 10 Gig iSCSI becoming more common As throughput grows, requirements to support IO to disks also grows
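The multiple-NIC-port recommendation reduces to simple arithmetic: MPIO/MCS aggregates per-port bandwidth, so size the port count against the target throughput. A sketch with an assumed 90% usable-wire-speed efficiency factor; the example throughput figures are illustrative:

```python
import math

def ports_needed(target_gbps: float, port_gbps: float = 1.0,
                 efficiency: float = 0.9) -> int:
    """Ports to aggregate via MPIO or MCS for a target iSCSI throughput.

    efficiency is an assumed usable fraction of wire speed; real
    results depend on workload, offloads, and fabric configuration.
    """
    return math.ceil(target_gbps / (port_gbps * efficiency))

print(ports_needed(3.0))        # 4 x 1GbE ports for ~3 Gb/s
print(ports_needed(3.0, 10.0))  # a single 10GbE port suffices
```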

34 Jumbo Frames Offers significant performance gains for TCP connections including iSCSI Max frame size 9K Reduces TCP/IP overhead by up to 84% Must be enabled at all end points (switches, NICs, target devices) Virtual switch is defined as an end point Virtual NIC is defined as an end point
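The "up to 84%" figure falls out of the header-per-payload ratio: bigger frames amortize the same TCP/IP headers over roughly six times more payload. A sketch assuming 40 header bytes per packet (20 IP + 20 TCP, no options):

```python
HEADERS = 40  # assumed TCP+IP header bytes per packet (no options)

def header_overhead(mtu: int) -> float:
    """Header bytes carried per payload byte at a given MTU."""
    payload = mtu - HEADERS
    return HEADERS / payload

standard = header_overhead(1500)  # standard Ethernet frames
jumbo = header_overhead(9000)     # 9K jumbo frames

reduction = 1 - jumbo / standard
print(f"{reduction:.0%}")  # roughly 84% less per-byte header overhead
```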

35 Jumbo Frames in Hyper-V R2 Added support in virtual switch Added support in virtual NIC Integration components required How to validate that jumbo frames are configured end to end: ping -n 1 -l 8000 -f (hostname) -l (length) -f (don’t fragment packet into multiple Ethernet frames) -n (count)
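The validation ping from the slide can be scripted; this sketch just builds the Windows ping command line (run it with subprocess.run if desired; the hostname is a placeholder). An 8000-byte payload leaves headroom under a 9000-byte MTU for ICMP/IP headers, and the Don't Fragment flag makes the ping fail if any hop lacks jumbo support:

```python
def jumbo_ping_cmd(host: str, payload: int = 8000) -> list:
    """Build a Windows ping invocation to verify jumbo frames end to end.

    -n: packet count, -l: payload length in bytes,
    -f: set Don't Fragment so an undersized MTU on any hop fails the ping.
    """
    return ["ping", "-n", "1", "-l", str(payload), "-f", host]

# "iscsi-target" is an illustrative hostname
print(" ".join(jumbo_ping_cmd("iscsi-target")))
```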

36 Windows* 2008 Hyper-V Network I/O Path Data packets get sorted and routed to respective VMs by the VM Switch [Diagram: Ethernet NIC and miniport driver in the Management OS; the Virtual Machine Switch performs routing, VLAN filtering, and a data copy per port, delivering packets over VMBus to each VM's virtual NIC and TCP/IP stack]

37 Windows Server 2008 R2 VMQ Data packets get sorted into multiple queues in the Ethernet Controller based on MAC Address and/or VLAN tags Sorted and queued data packets are then routed to the VMs by the VM Switch Enables the data packets to DMA directly into the VMs Removes data copy between the memory of the Management OS and the VM's memory [Diagram: switch/routing unit in the NIC with a default queue plus per-VM queues (Q1, Q2) feeding the Virtual Machine Switch over VMBus]

38 Intel tests with Microsoft VMQ Quad core Intel® server, Windows* 2008 R2 Beta, ntttcp benchmark, standard frame size (1500 bytes) Intel® Gigabit Ethernet Controller Near line rate throughput with VMDq for 4 VMs Throughput increase from 5.4Gbps to 9.3Gbps Source: Microsoft Lab, Mar 2009 More than 25% throughput gain with VMDq/VMQ as VMs scale *Other names and brands may be claimed as the property of others.

39

40 Enterprise Storage Features Performance: Storport support for >64 cores; scale-up storage workloads; improved scalability for iSCSI & Fibre Channel SANs; improved solid state disk performance (70% reduction in latency); iSCSI digest offload; MPIO new load balancing algorithm Manageability & Automation: MPIO datacenter automation (automate setting the default load balance policy); iSCSI Quick Connect; improved SAN configuration and usability; Storage Management support for SAS Reliability: additional redundancy for boot from SAN (up to 32 paths) Diagnosability: Storport error log extensions; multipath health & statistics reporting; configuration reporting for MPIO; configuration reporting for iSCSI

41

42 High Availability with Hyper-V using MPIO & Fibre Channel SAN VHDs LUNs

43 MCS & MPIO with Hyper-V Provides High Availability to storage arrays Especially important in virtualized environments to reduce single points of failure Load balancing & failover using redundant HBAs, NICs, switches and fabric infrastructure Aggregates bandwidth for maximum performance MPIO supported with Fibre Channel, iSCSI, Shared SAS 2 options for multi-pathing with iSCSI: Multiple Connections per Session Microsoft MPIO (Multipathing Input/Output) Protects against loss of data path during firmware upgrades on storage controller

44 Configuring MPIO with Hyper-V MPIO Connect from parent Applies to: Creating VHDs for each VM Passthrough disks Additional sessions to target can also be added through MPIO directly from guest Additional connections can be added through MCS with iSCSI using iSCSI direct

45 iSCSI Perf Best Practices with Hyper-V Standard Networking & iSCSI best practices apply Use Jumbo Frames Use Dedicated NIC ports for iSCSI traffic (Server to SAN) Multiple to scale Client ↔ Server (LAN) Multiple to scale Cluster heartbeat (if using cluster) Hyper-V Management

46 Hyper-V Enterprise Storage Testing Performance Configuration Windows Server 2008 R2 Hyper-V Microsoft MPIO 4 Sessions 64K request size 100% read Microsoft iSCSI Software Initiator Intel 10 Gb/E NIC RSS enabled (applicable to parent only) Jumbo Frames (9000 byte MTU) LSO V2 (offloads packets up to 256K) LRO Hyper-V Server 2008 R2 NetApp FAS 3070

47

48 Hyper-V Networking Two 1 Gb/E physical network adapters at a minimum One for management One (or more) for VM networking Dedicated NIC(s) for iSCSI Connect parent to back-end management network Only expose guests to internet traffic

49 Hyper-V Network Configurations Example 1: Physical Server has 4 network adapters NIC 1: Assigned to parent partition for management NICs 2/3/4: Assigned to virtual switches for virtual machine networking Storage is non-iSCSI such as: Direct attach SAS or Fibre Channel

50 Hyper-V Setup & Networking 1

51 Hyper-V Setup & Networking 2

52 Hyper-V Setup & Networking 3

53 Windows Server 2008 Each VM on its own Switch… [Diagram: "Designed for Windows" server hardware running the Windows hypervisor; the parent partition keeps NIC 1 for management, while NICs 2–4 each back a virtual switch (VSwitch 1–3), one per child partition; child partitions (Windows and Linux kernels) reach their switch through VSCs over VMBus, with the VSP, VM Service, WMI Provider, and VM Worker Processes in the parent]

54 Hyper-V Network Configurations Example 2: Server has 4 physical network adapters NIC 1: Assigned to parent partition for management NIC 2: Assigned to parent partition for iSCSI NICs 3/4: Assigned to virtual switches for virtual machine networking

55 Hyper-V Setup, Networking & iSCSI

56 Windows Server 2008 Now with iSCSI… [Diagram: same layout as the previous slide, except NIC 2 is now dedicated to iSCSI in the parent partition; NIC 1 remains for management and NICs 3–4 back VSwitch 2 and VSwitch 3 for VM networking]

57 Networking: Parent Partition

58 Networking: Virtual Switches

59 New in R2: Core Deployment There’s no GUI in a Core Deployment, how do I configure which NICs are bound to switches or kept separate for the parent partition?

60 No Problem… Hyper-V R2 Manager includes option to set bindings per virtual switch…

61 Microsoft Confidential

62 Avanade NetApp ® Fabric-Attached Storage (FAS) System 4-Node Hyper-V Cluster Production VMs 1 Gbit/s LAN iSCSI SAN “Hyper-V allows us to provision new servers quickly and more efficiently utilize hardware resources. Using Hyper-V with our existing NetApp infrastructure provided a cost-effective and flexible solution without sacrificing performance.” — Andy Schneider, infrastructure architect, Avanade

63 Global IT – Windows Server 2008 Hyper-V iSCSI SAN SQL Server Windows Server 2008 Failover Cluster MS Exchange 2007 on Windows Server File Shares 300+ Hyper-V Virtual Machines iSCSI Fibre Channel SAN SAN Gateway iSCSI “Hyper-V has allowed us to consolidate 300+ servers to virtual machines. This configuration when combined with Microsoft’s iSCSI, Fibre Channel and multipathing support provides great flexibility in storage options. We chose FalconStor’s SAN Gateway which enables advanced storage features to be used with any SAN storage and our iSCSI based virtual machines” — Frank Smith, Sr. Systems Engineer

64 SQL Server on Windows Server 2008 Windows 2008 File Servers Hyper-V Hosts Fibre Channel Switch With 4GB Dual Path HBAs

65 Applications Used Exchange, SharePoint, Dynamics Windows Server 2003 / 2008 / Hyper-V Terminal Services Windows Server 2008 components Microsoft iSCSI Software Initiator Microsoft MPIO Pain Points High growth and change No disaster protection Poor storage utilization Complex storage management Solution Windows Server 2008 iSCSI hosts 30TB iSCSI SAN with MPIO load balancing Lefthand MPIO DSM Two storage pools: SAS and SATA Multi-site SAN between two sites Benefits High availability across sites Reduced storage management costs Increased flexibility in dealing with change and growth Multi-Site iSCSI SAN Highly Available Terminal Server Infrastructure SharePoint Server Farm Exchange Mail Servers Dynamics iSCSI SAN Switched Gb-Ethernet Terminal Server – SITE A Terminal Server – SITE B “When combining Hyper-V, and native Server 2008 technologies such as Microsoft MPIO and the Microsoft iSCSI software initiator, our administration was greatly simplified.” — Michael Johnston, VP of Information Technology

66 iStor iSCSI Disk Arrays Exchange Mail Server VM File Server VM Sales SQL Database VM iSCSI SAN Switched Gb-Ethernet Windows Server Hyper-V “An iSCSI SAN allowed us to control costs and deliver better services to our clients.” — Stephen Ames, Virtualization Performance

67

68 Microsoft Hyper-V Server V2 New Features Live Migration High Availability New Processor Support Second Level Address Translation Core Parking Networking Enhancements TCP/IP Offload Support VMQ & Jumbo Frame Support Hot Add/Remove virtual storage Enhancements to SCONFIG Enhanced scalability

69 Manage Remotely…

70 Hyper-V Server V1 vs. V2

71 Live Migration $$ Comparison For $500 add VMM 2008 R2 (Workgroup Edition) to manage MS Hyper-V Server R2: Physical to Virtual Conversion (P2V); Quick Storage Migration; Library Management; Heterogeneous Management; PowerShell Automation; Self-Service Portal and more…

72

73 Deployment Considerations Minimize risk to the Parent Partition Use Server Core Don’t run arbitrary apps, no web surfing Run your apps and services in guests Moving VMs from Virtual Server to Hyper-V FIRST: Uninstall the VM Additions Two physical network adapters at a minimum One for management (use a VLAN too) One (or more) for VM networking Dedicated iSCSI NICs Connect to back-end management network Only expose guests to internet traffic

74 Don't forget the ICs! Emulated vs. VSC

75 Cluster Hyper-V Servers

76 Live Migration/HA Best Practices Best Practices: Cluster Nodes: Hardware with Windows Logo + Failover Cluster Configuration Program (FCCP) Storage: Cluster Shared Volumes Storage with Windows Logo + FCCP Multi-Path IO (MPIO) is your friend… Networking: Standardize the names of your virtual switches Multiple Interfaces CSV uses separate network Use ISOs not physical CD/DVDs You can’t Live Migrate a VM that has a physical DVD attached!

77 More… Mitigate Bottlenecks Processors Memory Storage Don’t run everything off a single spindle… Networking VHD Compaction/Expansion Run it on a non-production system Use .ISOs Great performance Can be mounted and unmounted remotely Having them in the SCVMM Library is fast & convenient

78 Creating Virtual Machines Use SCVMM Library Steps: 1. Create virtual machine 2. Install guest operating system 3. Install integration components 4. Install anti-virus 5. Install management agents 6. SYSPREP 7. Add it to the VMM Library Windows Server 2003: create VMs using 2-way (two virtual processors) to ensure an MP HAL

79 Conclusions Significant performance gains between Server 2008 and Server 2008 R2 for enterprise storage workloads Performance improvements in Hyper-V, MPIO, iSCSI, Core storage stack & Networking stack For general workloads with multiple VMs, performance delta is minimal between SCSI passthrough & VHD iSCSI Performance especially with iSCSI direct scenarios is vastly improved

80 Additional Resources Microsoft MPIO: MPIO DDK MPIO DSM sample, interfaces and libraries will be included in Windows 7 DDK/SDK Microsoft iSCSI: iSCSI WMI Interfaces: Storport Website: Storport Documentation Windows Driver Kit MSDN: Microsoft Virtualization:

81 Additional Resources Hyper-V Planning & Deployment Guide Microsoft Virtualization Website

82 Partner References Intel: Emulex: Alacritech: NetApp: 3Par: iStor: Lefthand Networks Doubletake: Compellent: Dell/Equallogic: Falconstor:

83

84 Sessions On-Demand & Community Resources for IT Professionals Resources for Developers Microsoft Certification & Training Resources Session recordings are available at TechEd Online (TechEd 2009 is not producing a DVD).

85 Complete an evaluation on CommNet and enter to win!

86 © 2009 Microsoft Corporation. All rights reserved. Microsoft, Windows, Windows Vista and other product names are or may be registered trademarks and/or trademarks in the U.S. and/or other countries. The information herein is for informational purposes only and represents the current view of Microsoft Corporation as of the date of this presentation. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information provided after the date of this presentation. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS PRESENTATION.

