Module 2: Server Virtualization in Windows Server 2012 R2

1 Module 2: Server Virtualization in Windows Server 2012 R2
Rick Claus, Sr. Technical Evangelist, Microsoft
Corey Hynes, Lead Technical Architect, holSystems

2 Jump Start Target Agenda
Day 1
- Introducing Windows Server 2012 R2
- Server Virtualization in Windows Server 2012 R2 (this module)
- Cloud Optimized Networking in Windows Server 2012 R2
- Storage in Windows Server 2012 R2
Day 2
- Server Management & Automation with Windows Server 2012 R2
- VDI with Windows Server 2012 R2
- Access & Information Protection with Windows Server 2012 R2
- Web Application & Platform with Windows Server 2012 R2

3 System Center 2012 R2 Jumpstart July 15th - http://aka.ms/SCR2JS
- Talk with our experts in the chat tool, or on Twitter using the #WS2012R2JS hashtag
- Hands-On Labs are available
- DOWNLOAD Windows Server 2012 R2 Preview: aka.ms/ws2012r2
- DOWNLOAD System Center 2012 R2 Preview: aka.ms/sc2012r2

4 Agenda
- Introduction
- Scalability & Performance
- Security & Multitenancy
- Flexible Infrastructure
- High Availability & Resiliency
- Virtualization Innovation
- Summary & Wrap Up

5 Customer Needs and Challenges
NEEDS
- Bigger, faster, and more available virtual machines
- Greater flexibility and agility to deliver solutions
- Ability to handle complex storage and networking requests
- Removal of limits on virtual machine mobility
- Support for new hardware technologies
CHALLENGES
- Keep services up and running, and meet SLAs
- Decrease capital and operational costs of infrastructure
- Use bigger, more capable servers more effectively
- Protect and use existing investments and infrastructure
- Maintain separation of resources in multitenant environments
We have listened to what you need to deliver to your customers. Organizations need bigger, faster, and more available virtual machines. They need the flexibility to deliver these solutions without being locked into any particular stack, so they can easily meet the needs of whatever virtual machines are requested. They want to handle different storage and networking requirements, both for resources in place and for resources that may be purchased in the future. They want the freedom to move virtual machines wherever it is best to run them, whether on premises or at a service provider. And they want a virtualization solution that can take advantage of the new hardware technologies coming from different manufacturers.
With these needs come challenges. To meet customers' SLAs, they must keep servers and VMs up and running. They want to decrease the capital cost and lower the operational cost of managing their infrastructure. They want to fully leverage the raw power of new servers as they come out, while continuing to use the servers they already have in place. And they must securely run a common infrastructure that serves multiple groups or customers. Windows Server 2012 Hyper-V addresses these needs and challenges.

6 Before Windows Server 2012 R2 Preview
Timeline:
- June 2008 - Hyper-V introduced in Windows Server 2008
- October 2008 - Hyper-V Server 2008 launched
- October 2009 - Windows Server 2008 R2 Hyper-V & Hyper-V Server 2008 R2 launched (Live Migration, Cluster Shared Volumes, Processor Compatibility, Hot-Add Storage, performance & scalability improvements)
- February 2011 - SP1 for Windows Server 2008 R2 & Hyper-V Server 2008 R2 launched (Dynamic Memory, RemoteFX)
- September 2012 - Windows Server 2012 Hyper-V & Hyper-V Server 2012 launched (huge scalability, Storage Spaces, metering & QoS, migration enhancements, extensibility, hardware offloading, network virtualization, replication)
Let's first review the Hyper-V improvements that the earlier versions of Windows Server provide. Beginning with Windows Server 2008, server virtualization via Hyper-V technology has been an integral part of the operating system. A new version of Hyper-V is included as part of Windows Server 2008 R2, and this was further enhanced with Service Pack 1 (SP1). There are two manifestations of the Hyper-V technology: Hyper-V, the hypervisor-based virtualization feature of Windows Server 2008 R2; and Microsoft Hyper-V Server, the hypervisor-based server virtualization product that allows customers to consolidate workloads onto a single physical server, available as a free download.
With the launch of Windows Server 2008 R2 Hyper-V, Microsoft introduced a number of compelling capabilities to help organizations reduce costs whilst increasing agility and flexibility. Key features included:
- Live Migration - move virtual machines (VMs) with no interruption or downtime
- Cluster Shared Volumes - highly scalable and flexible use of shared storage (SAN) for VMs
- Processor Compatibility - increased flexibility for live migration across hosts with differing CPU architectures
- Hot-Add Storage - flexibly add or remove storage to and from VMs
- Improved virtual networking performance - support for Jumbo Frames and Virtual Machine Queue (VMQ)
With Service Pack 1 (SP1) for Hyper-V, Microsoft introduced two new key capabilities to help organizations realize even greater value from the platform: Dynamic Memory, for more efficient use of memory while maintaining consistent workload performance and scalability; and RemoteFX, which provides the richest virtualized Windows 7 experience for Virtual Desktop Infrastructure (VDI) deployments.
Hyper-V is an integral part of Windows Server and provides a foundational virtualization platform that lets customers transition to the cloud. With Windows Server 2008 R2, customers get a compelling solution for core virtualization scenarios: production server consolidation, dynamic data center, business continuity, Virtual Desktop Infrastructure (VDI), and test and development. Hyper-V provides better flexibility with features like live migration and cluster shared volumes, greater scalability with support for up to 64 logical processors, and improved performance with dynamic memory and enhanced networking support.

7 Scalability & Performance
Run the most demanding applications with the highest levels of performance & scalability:
- Massive host, cluster & virtual machine scalability
- Guest NUMA support
Ensure optimal resource availability for key applications & workloads:
- Enhanced Dynamic Memory capabilities
Provide guaranteed levels of service for the key applications and workloads:
- Network & Storage Quality of Service
Take advantage of hardware innovations, while still using existing hardware to maximum advantage:
- Hardware offloading & integration across compute, networking & storage

8 Physical & Virtual Scalability
Massive scalability for the most demanding workloads - enterprise-class scale for key workloads:
- Hosts: up to 320 logical processors & 4TB physical memory per host, and up to 1,024 virtual machines per host
- Clusters: up to 64 physical nodes & 8,000 virtual machines per cluster
- Virtual machines: up to 64 virtual processors and 1TB memory per VM

9 Generation 2 Virtual Machines
VMs built on optimized, software-based devices:
- Synthetic NIC with PXE boot
- Hot-add CD/DVD drive
- UEFI firmware with Secure Boot
- Boot from virtual SCSI
Ease of management & operations:
- PXE boot from the optimized vNIC
- Dynamic storage: hot-add CD/DVD drive
- UEFI firmware with support for GPT-partitioned OS boot disks larger than 2TB
- Faster boot from virtual SCSI, with online resize & increased performance
Security:
- Removal of emulated devices reduces the attack surface
- VM UEFI firmware supports Secure Boot
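As a rough sketch of how this surfaces in PowerShell, a Generation 2 VM is requested at creation time with -Generation 2; the VM name, VHDX path, and switch name below are placeholders:

    # Create a Generation 2 VM (UEFI firmware, synthetic devices only).
    New-VM -Name "Gen2-VM01" -Generation 2 -MemoryStartupBytes 2GB `
           -NewVHDPath "D:\VHDs\Gen2-VM01.vhdx" -NewVHDSizeBytes 127GB `
           -SwitchName "External-vSwitch"
    # Secure Boot is on by default for Generation 2 VMs; verify with:
    Get-VMFirmware -VMName "Gen2-VM01" | Select-Object VMName, SecureBoot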

10 New Virtual Hard Disk Format
VHDX provides increased scale, protection & alignment.
Features:
- Storage capacity up to 64 TB
- Corruption protection during power failures
- Optimal structure alignment for large-sector disks
Benefits:
- Increases storage capacity
- Protects data
- Helps to ensure quality performance on large-sector disks
[Figure: VHDX layout - header region; intent log; block allocation table (BAT); data region with user data blocks and sector bitmap blocks (large allocations, 1 MB aligned); metadata region with metadata table, user metadata, and file metadata (small allocations, unaligned).]
With the evolution of storage systems, and the ever-increasing reliance on virtualized enterprise workloads, the VHD format of Windows Server needed to evolve as well. The new format is better suited to address current and future requirements for running enterprise-class workloads, specifically: where the size of the VHD is larger than 2,040 GB; to reliably protect against issues for dynamic and differencing disks during power failures; and to prevent performance degradation on the new, large-sector physical disks.
Hyper-V in Windows Server 2012 contains an update to the VHD format, called VHDX, that has much larger capacity and additional resiliency. VHDX supports up to 64 terabytes of storage. It also provides additional protection from corruption during power failures by logging updates to the VHDX metadata structures, and it prevents performance degradation on large-sector physical disks by optimizing structure alignment.
Technical description. The VHDX format's principal new features are:
- Support for virtual hard disk storage capacity of up to 64 terabytes.
- Protection against corruption during power failures by logging updates to the VHDX metadata structures. The format contains an internal log that captures updates to the metadata of the virtual hard disk file before they are written to their final location. In case of a power failure, if the write to the final destination is corrupted, it is replayed from the log to restore consistency of the virtual hard disk file.
- Optimal structure alignment of the virtual hard disk format to suit large-sector disks. If unaligned I/Os are issued to these disks, an associated performance penalty is caused by the read-modify-write cycles required to satisfy them. The structures in the format are aligned to help ensure that no unaligned I/Os exist.
The VHDX format also provides the following features:
- Larger block sizes for dynamic and differencing disks, which lets these disks attune to the needs of the workload.
- A 4-KB logical sector virtual disk that results in increased performance when used by applications and workloads designed for 4-KB sectors.
- The ability to store custom metadata about the file that you might want to record, such as operating system version or patches applied.
- Efficiency (called trim) in representing data, which results in smaller files and lets the underlying physical storage device reclaim unused space. (Trim requires pass-through or SCSI disks and trim-compatible hardware.)
The figure above illustrates the VHDX hard disk format. Most of the structures are large allocations and are 1 MB aligned, which alleviates the alignment issue associated with virtual hard disks. The different regions of the VHDX format are as follows:
- Header region. The header region is the first region of the file and identifies the location of the other structures, including the log, block allocation table (BAT), and metadata region. The header region contains two headers, only one of which is active at a time, to increase resiliency to corruption.
- Intent log. The intent log is a circular ring buffer. Changes to the VHDX metastructures are written to the log before they are written to their final location. If corruption occurs during a power failure while an update is being written to the actual location, then on the subsequent open the change is applied again from the log, and the VHDX file is brought back to a consistent state. The log does not track changes to the payload blocks, so it does not protect data contained within them.
- Data region. The BAT contains entries that point to both the user data blocks and the sector bitmap block locations within the VHDX file. This is an important difference from the VHD format, because sector bitmaps are aggregated into their own blocks instead of being appended in front of each payload block.
- Metadata region. The metadata region contains a table that points to both user-defined metadata and virtual hard disk file metadata such as block size, physical sector size, and logical sector size.
Hyper-V in Windows Server 2012 also introduces support that lets VHDX files be more efficient in representing the data within them. Because VHDX files can be large, depending on the workload they are supporting, the space they consume can grow quickly. Previously, when applications deleted content within a virtual hard disk, the Windows storage stack in both the guest operating system and the Hyper-V host had limitations that prevented this information from being communicated to the virtual hard disk and the physical storage device. This prevented the Hyper-V storage stack from optimizing the space used and prevented the underlying storage device from reclaiming the space previously occupied by the deleted data. In Windows Server 2012, Hyper-V supports unmap notifications, which results in smaller file sizes and lets the underlying physical storage device reclaim unused space.
Benefits. VHDX, which is designed to handle current and future workloads, has a much larger storage capacity than the earlier format and addresses the technological demands of evolving enterprises. The VHDX performance-enhancing features make it easier to handle large workloads, protect data better during power outages, and optimize structure alignment of dynamic and differencing disks to prevent performance degradation on new, large-sector physical disks.
Requirements. To take advantage of the VHDX format, you need Windows Server 2012 or Windows 8 with the Hyper-V server role. To take advantage of the trim feature, you need VHDX-based virtual disks connected as virtual SCSI devices or as directly attached physical disks (sometimes referred to as pass-through disks), plus trim-capable hardware.
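For illustration, a VHDX exercising these capabilities can be created with the Hyper-V PowerShell module; the path and sizes below are placeholders, and the 4-KB logical sector size targets 4K-native workloads:

    # Create a dynamic VHDX well beyond the old 2,040 GB VHD ceiling,
    # with 4 KB logical sectors and a custom block size.
    New-VHD -Path "D:\VHDs\Data01.vhdx" -SizeBytes 10TB -Dynamic `
            -LogicalSectorSizeBytes 4096 -BlockSizeBytes 32MB
    Get-VHD -Path "D:\VHDs\Data01.vhdx" |
        Select-Object VhdFormat, VhdType, Size, LogicalSectorSize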

11 Expanded Virtual Disk & Volume without Downtime
Online VHDX Resize provides VM storage flexibility.
Expand virtual SCSI disks:
- Grow VHDX files whilst attached to a running virtual machine
- Then expand the volume within the guest
Shrink virtual SCSI disks:
- Reduce the volume size inside the guest
- Shrink the size of the VHDX file whilst the VM is running
[Figure: a 30 GB primary partition expanded to 40GB using 10 GB of unallocated space.]
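A minimal sketch of the expand workflow, assuming the VHDX is attached to a running VM's virtual SCSI controller; the path, drive letter, and sizes are placeholders:

    # On the host: grow the VHDX while the VM keeps running.
    Resize-VHD -Path "D:\VHDs\Data01.vhdx" -SizeBytes 40GB
    # Inside the guest: extend the volume into the new space.
    $max = (Get-PartitionSupportedSize -DriveLetter E).SizeMax
    Resize-Partition -DriveLetter E -Size $max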

12 Offloaded Data Transfer (ODX)
Token-based data transfer within the storage array.
Benefits:
- Rapid virtual machine provisioning and migration
- Faster transfers of large files
- Minimized latency
- Maximized array throughput
- Less CPU and network use
- Performance not limited by network throughput or server use
- Improved datacenter capacity and scale
[Figure: a copy between two virtual disks on an external intelligent storage array - the offload copy request returns a token, the write request presents the token, and the array returns a successful write result while the actual data moves inside the array.]
But how does it work? Offloaded Data Transfer (ODX) support is a feature of the storage stack of Hyper-V in Windows Server 2012. ODX, when used with offload-capable SAN storage hardware, lets a storage device perform a file copy operation without the main processor of the Hyper-V host actually reading the content from one storage location and writing it to another.
ODX uses a token-based mechanism for reading and writing data within or between intelligent storage arrays. Instead of routing the data through the host, a small token is copied between the source and destination. The token simply serves as a point-in-time representation of the data. For example, when you copy a file or migrate a virtual machine between storage locations (either within or between storage arrays), a token that represents the virtual machine file is copied, which removes the need to copy the underlying data through the servers.
In a token-based copy operation, the steps are as follows:
1. A user initiates a file copy or move in Windows Explorer, a command-line interface, or a virtual machine migration.
2. Windows Server automatically translates this transfer request into an ODX operation (if supported by the storage array) and receives a token representation of the data.
3. The token is copied between the source and destination systems.
4. The token is delivered to the storage array.
5. The storage array performs the copy internally and returns progress status.
ODX is especially significant in the cloud space when you must provision new virtual machines from virtual machine template libraries, or when virtual hard disk operations require large blocks of data to be copied, as in virtual hard disk merges, storage migration, and live migration. These copy operations are handled by a storage device that can perform offloads (such as an offload-capable iSCSI or Fibre Channel SAN, or a file server based on Windows Server 2012), freeing the Hyper-V host processors to carry more virtual machine workloads.
As you can imagine, having an ODX-compliant array provides a wide range of benefits: ODX frees the main processor to handle virtual machine workloads and lets you achieve native-like performance when your virtual machines read from and write to storage; it greatly reduces the time needed to copy large amounts of data; and because copy operations don't use processor time, virtualized workloads operate as efficiently as they would in a non-virtualized environment.
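There is nothing to call to use ODX - a copy in Explorer or with Copy-Item is offloaded automatically when the path and array support it. As a sketch, the documented FilterSupportedFeaturesMode registry value controls whether Windows attempts offloaded transfers; the value may be absent, in which case the default (enabled) applies:

    # 0 (or absent) = offloaded transfers enabled; 1 = disabled.
    Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" `
                     -Name "FilterSupportedFeaturesMode" -ErrorAction SilentlyContinue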

13 Virtual Fibre Channel in Hyper‑V
Access Fibre Channel SAN data from a virtual machine:
- Unmediated access to a storage area network (SAN)
- Hardware-based I/O path to the virtual hard disk stack
- N_Port ID Virtualization (NPIV) support
- Single Hyper-V host connected to different SANs
- Up to four virtual Fibre Channel adapters per virtual machine
- Multipath I/O (MPIO) functionality
- Supports live migration (Worldwide Name Set A and Set B alternate across hosts so Fibre Channel connectivity is maintained)
Current situation: you need your virtualized workloads to connect to your existing storage arrays with as little trouble as possible. Many enterprises have already invested in Fibre Channel SANs, deploying them in their data centers to address growing storage requirements. These customers often want to use this storage from within their virtual machines instead of having it accessible only from the Hyper-V host. Virtual Fibre Channel for Hyper-V, a new feature of Windows Server 2012, provides Fibre Channel ports within the guest operating system, which lets you connect to Fibre Channel directly from within virtual machines.
With Windows Server 2012, Virtual Fibre Channel support includes the following:
- Unmediated access to a SAN. Virtual Fibre Channel for Hyper-V provides the guest operating system with unmediated access to a SAN by using a standard World Wide Name (WWN) associated with a virtual machine. Hyper-V lets you use Fibre Channel SANs to virtualize workloads that require direct access to SAN logical unit numbers (LUNs). Fibre Channel SANs also allow you to operate in new scenarios, such as running the Windows Failover Cluster Management feature inside the guest operating system of a virtual machine connected to shared Fibre Channel storage.
- A hardware-based I/O path to the Windows software virtual hard disk stack. Mid-range and high-end storage arrays include advanced storage functionality that helps offload certain management tasks from the hosts to the SANs. Virtual Fibre Channel presents an alternative, hardware-based I/O path to the Windows software virtual hard disk stack. This path lets you use the advanced functionality of your SANs directly from Hyper-V virtual machines. For example, Hyper-V users can offload storage functionality (such as taking a snapshot of a LUN) to the SAN hardware simply by using a hardware Volume Shadow Copy Service (VSS) provider from within a Hyper-V virtual machine.
- N_Port ID Virtualization (NPIV). NPIV is a Fibre Channel facility that allows multiple N_Port IDs to share a single physical N_Port. This allows multiple Fibre Channel initiators to occupy a single physical port, easing hardware requirements in SAN design, especially where virtual SANs are called for. Virtual Fibre Channel for Hyper-V guests uses NPIV (a T11 standard) to create multiple NPIV ports on top of the host's physical Fibre Channel ports. A new NPIV port is created on the host each time a virtual host bus adapter (HBA) is created inside a virtual machine. When the virtual machine stops running on the host, the NPIV port is removed.
- A single Hyper-V host connected to different SANs with multiple Fibre Channel ports. Hyper-V allows you to define virtual SANs on the host to accommodate scenarios where a single Hyper-V host is connected to different SANs via multiple Fibre Channel ports. A virtual SAN defines a named group of physical Fibre Channel ports that are connected to the same physical SAN. For example, assume a Hyper-V host is connected to two SANs - a production SAN and a test SAN - through two physical Fibre Channel ports each. You might configure two virtual SANs: one named "Production SAN" with the two ports connected to the production SAN, and one named "Test SAN" with the two ports connected to the test SAN. You can use the same technique to name two separate paths to a single storage target.
- Up to four virtual Fibre Channel adapters on a virtual machine. You can configure as many as four virtual Fibre Channel adapters on a virtual machine and associate each one with a virtual SAN. Each virtual Fibre Channel adapter is associated with one WWN address, or two WWN addresses to support live migration. Each WWN address can be set automatically or manually.
- MPIO functionality. Hyper-V in Windows Server 2012 can use multipath I/O (MPIO) to help ensure optimal connectivity to Fibre Channel storage from within a virtual machine. You can use MPIO with Fibre Channel in the following ways: virtualize workloads that use MPIO, installing multiple Fibre Channel ports in a virtual machine and using MPIO to provide highly available connectivity to the LUNs accessible by the host; configure multiple virtual Fibre Channel adapters inside a virtual machine and use a separate copy of MPIO within the guest operating system to connect to the LUNs the virtual machine can access (this configuration can coexist with a host MPIO setup); and use different device-specific modules (DSMs) for the host or for each virtual machine, which allows live migration of the virtual machine configuration, including the configuration of DSM and connectivity between hosts, with compatibility with existing server configurations and DSMs.
- Live migration support. To support live migration of virtual machines across hosts running Hyper-V while maintaining Fibre Channel connectivity, two WWNs are configured for each virtual Fibre Channel adapter: Set A and Set B. Hyper-V automatically alternates between the Set A and Set B WWN addresses during a live migration. This helps ensure that all LUNs are available on the destination host before the migration, so minimal downtime occurs during the migration.
Requirements for Virtual Fibre Channel in Hyper-V:
- One or more installations of Windows Server 2012 with the Hyper-V role installed, on a computer with processor support for hardware virtualization.
- A computer with one or more Fibre Channel HBAs, each with an updated HBA driver that supports Virtual Fibre Channel (updated drivers are included in-box for some models).
- Windows Server 2008, Windows Server 2008 R2, or Windows Server 2012 as the guest operating system.
- Connection only to data LUNs. Storage accessed through a virtual Fibre Channel connection can't be used as boot media.
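A hedged sketch of the host-side setup: define a virtual SAN over the host's physical FC ports, then hand a VM a virtual HBA on it. The SAN name, VM name, and WWNs are illustrative placeholders:

    # Define a virtual SAN backed by the host's physical Fibre Channel port(s).
    New-VMSan -Name "ProductionSAN" -WorldWideNodeName "C003FF0000FFFF00" `
              -WorldWidePortName "C003FF5778E50002"
    # Give the VM a virtual HBA on that SAN (up to four per VM).
    Add-VMFibreChannelHba -VMName "SQL-VM01" -SanName "ProductionSAN"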

14 Dynamic Memory - Achieve higher levels of density for your Hyper-V hosts
Windows Server 2008 R2 SP1 introduced Dynamic Memory to enable automatic reallocation of memory between running virtual machines. Enhanced in Windows Server 2012 & R2:
- Minimum & startup memory
- Smart Paging
- Memory ballooning
- Runtime configuration
[Figure: an administrator increases a VM's maximum memory without a restart while other VMs draw from the same physical memory pool.]
As stated earlier, Dynamic Memory was introduced with Windows Server 2008 R2 SP1 and is used to reallocate memory between virtual machines running on a Hyper-V host. Improvements in Windows Server 2012 Hyper-V include:
- Minimum memory setting - the ability to set a minimum value for the memory assigned to a virtual machine that is lower than the startup memory setting.
- Hyper-V Smart Paging - paging used to enable a virtual machine to reboot while the Hyper-V host is under extreme memory pressure.
- Memory ballooning - the technique used to reclaim unused memory from a virtual machine and give it to another virtual machine that needs it.
- Runtime configuration - the ability to adjust the minimum and maximum memory settings on the fly while the virtual machine is running, without requiring a reboot.
Because a memory upgrade previously required shutting down the virtual machine, a common challenge for administrators was upgrading the maximum amount of memory for a virtual machine as demand increased. For example, consider a virtual machine running SQL Server and configured with a maximum of 8 GB of RAM. Because of an increase in the size of its databases, the virtual machine now requires more memory. In Windows Server 2008 R2 with SP1, you had to shut down the virtual machine to perform the upgrade, which required planning for downtime and decreased business productivity. With Windows Server 2012, you can apply that change while the virtual machine is running: as memory pressure on the virtual machine increases, an administrator can raise the maximum memory value without any downtime, and the hot-add memory process in the VM makes the additional memory available for the virtual machine to use.
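A minimal sketch of both settings, assuming a placeholder VM name and example sizes:

    # Enable Dynamic Memory with a minimum below the startup value.
    Set-VMMemory -VMName "SQL-VM01" -DynamicMemoryEnabled $true `
                 -MinimumBytes 512MB -StartupBytes 1GB -MaximumBytes 8GB
    # Later, raise the cap with no restart (runtime configuration):
    Set-VMMemory -VMName "SQL-VM01" -MaximumBytes 16GB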

15 Dynamic Memory | Smart Paging
Utilize disk as additional, temporary memory.
Hyper-V Smart Paging:
- A reliable way to keep a VM running when no physical memory is available
- Performance will be degraded, as disk is much slower than memory
- Used only in the following situation: a VM restart when no physical memory is available and no memory can be reclaimed from other virtual machines on that host
[Figure: during a VM restart under host memory pressure, a paging file temporarily provides the startup memory; the paged memory is reclaimed after startup.]
A further improvement in 2012 is the inclusion of Hyper-V Smart Paging, a memory management technique that uses disk resources as additional, temporary memory when more memory is required to restart a virtual machine. This approach has both advantages and drawbacks: it provides a reliable way to keep virtual machines running when no physical memory is available, but it can degrade virtual machine performance because disk access speeds are much slower than memory access speeds. To minimize the performance impact, Hyper-V uses Smart Paging only when all of the following occur:
- The virtual machine is being restarted.
- No physical memory is available.
- No memory can be reclaimed from other virtual machines running on the host.
Hyper-V Smart Paging is not used when:
- A virtual machine is being started from an off state (instead of a restart).
- Oversubscribing memory for a running virtual machine would result.
- A virtual machine is failing over in a Hyper-V cluster.
Hyper-V continues to rely on internal guest paging when host memory is oversubscribed, because it is more effective than Hyper-V Smart Paging. With internal guest paging, the paging operation inside virtual machines is performed by the Windows Memory Manager, which has more information than the Hyper-V host about memory use within the virtual machine and can therefore make better choices about which memory to page. Because of this, internal guest paging incurs less overhead to the system than Hyper-V Smart Paging.
In this example, multiple VMs are running and the last virtual machine is being restarted. Normally that VM would use some amount of memory between its minimum and maximum values, but the Hyper-V host is running fairly loaded and there isn't enough memory available to give the virtual machine the full startup value it needs to boot. When this occurs, a Hyper-V Smart Paging file is created for the VM to give it enough memory to start. After some time, the Hyper-V host uses Dynamic Memory techniques such as ballooning to reclaim memory from this or other virtual machines, freeing enough physical memory to bring the Smart Paging contents back off the disk.
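Smart Paging needs no opt-in, but the location of its paging file can be steered toward fast local disk. A sketch, with a placeholder VM name and path:

    # Point the Smart Paging file at fast local storage (used only during
    # restarts under memory pressure).
    Set-VM -Name "SQL-VM01" -SmartPagingFilePath "D:\FastLocal\SmartPaging"
    Get-VM -Name "SQL-VM01" | Select-Object Name, SmartPagingFilePath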

16 Resource Metering
Resource Metering features:
- Uses resource pools
- Compatible with all Hyper-V operations
- Unaffected by virtual machine movement
- Uses Network Metering Port ACLs
Metrics:
- Average CPU use
- Average memory use
- Minimum memory use
- Maximum memory use
- Maximum disk allocation
- Incoming network traffic
- Outgoing network traffic
- Incoming storage IOPS
- Outgoing storage IOPS
Benefits of Resource Metering:
- Easier to track virtual machine use
- Can be used to aggregate data for multiple virtual machines
- Can be used to build accurate showback and chargeback solutions
- Easier to obtain resource use data
[Figure: a two-tenant environment built with Hyper-V in Windows Server 2012 R2 Preview - each customer's VMs (VM 1-3 per customer) are grouped into a resource pool with metered Internet traffic.]
Your computing resources are limited, and you need to know how different workloads draw upon them - even when they are virtualized. In Windows Server 2012, Hyper-V introduces Resource Metering, a technology that helps you track historical data on the use of virtual machines. With Resource Metering, you can gain insight into the resource use of specific servers. You can use this data to perform capacity planning, to monitor consumption by different business units or customers, or to capture the data needed to redistribute the costs of running a workload. You could also use this information to build a billing solution, so that customers of your hosting services can be charged appropriately for resource usage.
Hyper-V in Windows Server 2012 lets providers build a multitenant environment in which virtual machines can be served to multiple clients in a more isolated and secure way, as shown in the figure. Because a single client may have many virtual machines, aggregating resource usage data can be a challenging task. Windows Server 2012 simplifies this task with resource pools, a feature available in Hyper-V: resource pools are logical containers that collect the resources of the virtual machines belonging to one client, permitting single-point querying of the client's overall resource use.
Hyper-V Resource Metering has the following features:
- Uses resource pools, logical containers that collect the resources of the virtual machines belonging to one client and allow single-point querying of the client's overall resource use.
- Works with all Hyper-V operations, helping ensure that movement of virtual machines between Hyper-V hosts (such as through live, offline, or storage migration) doesn't affect the collected data.
- Uses Network Metering Port ACLs to differentiate between Internet and intranet traffic, so providers can measure incoming and outgoing network traffic for a given IP address range.
Resource Metering can measure the following:
- Average CPU use: average CPU, in megahertz, used by a virtual machine over a period of time.
- Average memory use: average physical memory, in megabytes, used by a virtual machine over a period of time.
- Minimum memory use: lowest amount of physical memory, in megabytes, assigned to a virtual machine over a period of time.
- Maximum memory use: highest amount of physical memory, in megabytes, assigned to a virtual machine over a period of time.
- Maximum disk allocation: highest amount of disk space capacity, in megabytes, allocated to a virtual machine over a period of time.
- Incoming network traffic: total incoming network traffic, in megabytes, for a virtual network adapter over a period of time.
- Outgoing network traffic: total outgoing network traffic, in megabytes, for a virtual network adapter over a period of time.
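A sketch of per-VM metering, assuming a hypothetical "Cust1-" naming prefix for one tenant's VMs; the Select-Object property names follow the Measure-VM report object:

    # Turn on metering for one tenant's VMs.
    Get-VM -Name "Cust1-*" | Enable-VMResourceMetering
    # Read the counters back.
    Get-VM -Name "Cust1-*" | Measure-VM |
        Select-Object VMName, AvgCPU, AvgRAM, MaxRAM, TotalDisk
    # Zero the counters at the start of the next billing period.
    Get-VM -Name "Cust1-*" | Reset-VMResourceMetering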

17 Storage Quality of Service
Control allocation of storage IOPS between VM disks:
- Allows an administrator to specify a maximum IOPS cap per virtual disk
- Takes into account both incoming & outgoing IOPS
- Configurable on a VHDX-by-VHDX basis, for granular control whilst the VM is running
- Prevents VMs from consuming all of the available I/O bandwidth to the underlying physical resource
- Supports dynamic, fixed & differencing VHDX
[Figure: a Hyper-V host where a VM's OS VHDX and data VHDX are individually capped, e.g. at 500, 1,000, or 1,500 IOPS.]
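A minimal sketch, assuming the data VHDX sits at SCSI controller 0, location 1 of a placeholder VM:

    # Cap the data disk at 500 normalized IOPS while the VM keeps running.
    Set-VMHardDiskDrive -VMName "SQL-VM01" -ControllerType SCSI `
                        -ControllerNumber 0 -ControllerLocation 1 -MaximumIOPS 500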

18 Security & Multitenancy
Ensure workloads have the highest levels of security & isolation, with granular control capabilities:
- PVLANs, virtual port ACLs, port monitoring & port mirroring
- DHCP & Router Guard
Meet compliance requirements through encryption:
- BitLocker Drive Encryption
Integrate with new and existing software & hardware investments, and provide in-box hooks for in-house extensibility and customization:
- Hyper-V Extensible Switch
- Rich partner ecosystem extending the platform with powerful solutions

19 Isolated (Private) VLANs
Segregate traffic within VLANs. PVLANs enable:
- Isolation of virtual machines from other virtual machines, even within the same VLAN
- Creation of community groups of virtual machines that can exchange data packets
Three PVLAN port types: isolated, community, and promiscuous (plus trunk mode on the physical side).
[Figure: five guest OSes sharing primary VLAN ID 2, with secondary VLAN IDs 4 and 5 dividing them into communities; a promiscuous port carries 2, (4, 5).]
Multitenant security and isolation using the Hyper-V Extensible Switch is accomplished with private virtual LANs (PVLANs) (this slide) and other tools (next slide). VLAN technology is traditionally used to subdivide a network and provide isolation for individual groups that share a single physical infrastructure. Windows Server 2012 introduces support for PVLANs, a technique used with VLANs that can provide isolation between two virtual machines on the same VLAN. When a virtual machine doesn't need to communicate with other virtual machines, you can use PVLANs to isolate it from the other virtual machines in your data center. By assigning each virtual machine in a PVLAN one primary VLAN ID and one or more secondary VLAN IDs, you put the secondary PVLANs into one of three modes, which determine which other virtual machines on the PVLAN a virtual machine can talk to. If you want to isolate a virtual machine, put it in isolated mode.
In this example the primary VLAN ID is 2, and the two secondary VLAN IDs are 4 and 5. The three modes are:
- Isolated. Isolated ports cannot exchange packets with each other at layer 2; in fact, isolated ports can talk only to promiscuous ports.
- Community. Community ports on the same secondary VLAN ID can exchange packets with each other at layer 2. They can also talk to promiscuous ports, but not to isolated ports.
- Promiscuous. Promiscuous ports can exchange packets with any other port on the same primary VLAN ID (the secondary VLAN ID makes no difference).
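A sketch matching the example above (primary VLAN 2, secondary VLANs 4 and 5); the VM names are placeholders:

    # Isolated port: can talk only to promiscuous ports.
    Set-VMNetworkAdapterVlan -VMName "Tenant-VM01" -Isolated `
        -PrimaryVlanId 2 -SecondaryVlanId 4
    # Community port: can talk to its community and to promiscuous ports.
    Set-VMNetworkAdapterVlan -VMName "Tenant-VM02" -Community `
        -PrimaryVlanId 2 -SecondaryVlanId 5
    # Promiscuous port: can talk to every port on primary VLAN 2.
    Set-VMNetworkAdapterVlan -VMName "Gateway-VM01" -Promiscuous `
        -PrimaryVlanId 2 -SecondaryVlanIdList "4-5"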

20 BitLocker Drive Encryption
In-box disk encryption to protect sensitive data.
Data protection, built in:
- Supports Used Disk Space Only encryption
- Integrates with the TPM chip
- Network Unlock & Active Directory integration
Multiple disk type support:
- Direct Attached Storage (DAS), e.g. VHDX on F:\VM1
- Traditional SAN LUN, e.g. VHDX on E:\VM2
- Cluster Shared Volumes, e.g. VHDX on C:\ClusterStorage\Volume1\VM4
- Windows Server 2012 File Server share, e.g. VHDX on \\FileServer\VM3
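As a sketch, encrypting a data volume that holds VHDX files might look like the following; the drive letter is a placeholder:

    # Encrypt used space only and escrow a recovery password.
    Enable-BitLocker -MountPoint "E:" -EncryptionMethod Aes256 `
                     -UsedSpaceOnly -RecoveryPasswordProtector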

21 Flexible Infrastructure
Complete flexibility for migrating virtualized workloads without interruption or downtime:
- Simultaneous Live Migration
- Storage Live Migration
- Live Migration with Compression
- Live Migration over RDMA
- Live Migration over SMB
- Shared Nothing Live Migration
Upgrade to the latest version of Hyper-V without downtime for key workloads:
- Live Migration Upgrades
Duplicate virtual machines for testing & troubleshooting:
- Live Cloning
Enable a scalable, isolated, multitenant infrastructure without VLANs:
- Network Virtualization
Support for non-Microsoft guest operating systems

22 Linux Support on Hyper-V
Comprehensive feature support for virtualized Linux - significant improvements in interoperability:
- Multiple supported Linux distributions and versions on Hyper-V, including Red Hat, SUSE, OpenSUSE, CentOS, and Ubuntu
- 64 vCPU SMP
- Virtual SCSI, hot-add & online resize
- Full Dynamic Memory support
- Live backup
- Deeper Integration Services support
[Figure: Hyper-V architecture - the management OS (configuration store, worker processes, management service, WMI provider, Windows kernel, Virtual Service Provider, independent hardware vendor drivers) on Hyper-V server hardware, serving enlightened Windows and Linux guests whose Virtualization Service Clients use optimized synthetic devices for optimized performance.]

23 Virtual Machine Live Cloning
Duplication of a virtual machine whilst it is running.
Export a clone of a running VM:
- A point-in-time image of the running VM is exported to an alternate location
- Useful for troubleshooting a VM without downtime for the primary VM
Export from an existing checkpoint:
- Export a full cloned virtual machine from a point-in-time, existing checkpoint of a virtual machine
- Checkpoints are automatically merged into a single virtual disk
How it works:
1. User initiates an export of a running VM.
2. Hyper-V performs a live, point-in-time export of the VM, which remains running, creating the new files in the target location.
3. Admin imports the new, powered-off VM on the target host, finalizes its configuration, and starts the VM.
4. With Virtual Machine Manager, the admin can select the host as part of the clone wizard.
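A sketch of both paths; the VM name, checkpoint name, and export share are placeholders:

    # Export a point-in-time clone while the VM keeps running (2012 R2).
    Export-VM -Name "SQL-VM01" -Path "\\FileServer\Exports"
    # Or export a full clone from an existing checkpoint; checkpoints are
    # merged into a single virtual disk on export.
    Get-VMSnapshot -VMName "SQL-VM01" -Name "Pre-Patch" |
        Export-VMSnapshot -Path "\\FileServer\Exports"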

24 Live Migration - Faster, Simultaneous Migration of VMs Without Downtime
Faster live migrations, taking full advantage of the available network:
- Simultaneous live migrations
- Uses SMB Direct if the available network bandwidth is over 10 gigabits
- Supports flexible storage choices
- No clustering required if the virtual machine resides on an SMB 3.0 file share
Stages: live migration setup; memory pages transferred; modified pages transferred; storage handle moved.
To maintain optimal use of physical resources and to add new virtual machines easily, you must be able to move virtual machines whenever necessary - without disrupting your business. Windows Server 2008 R2 introduced live migration, which made it possible to move a running virtual machine from one physical computer to another with no downtime and no service interruption. However, this assumed that the virtual hard disk for the virtual machine remained consistent on a shared storage device such as a Fibre Channel or iSCSI SAN. In Windows Server 2012, live migrations are no longer limited to a cluster, and virtual machines can be migrated across cluster boundaries, including to any Hyper-V host server in your environment. Hyper-V builds on this feature, adding support for simultaneous live migrations, enabling you to move several virtual machines at the same time. When combined with features such as Network Virtualization, this even allows virtual machines to be moved between local and cloud hosts with ease.
This example shows how live migration works when the VM is connected to an SMB file share. With Windows Server 2012 and SMB3, you can store your virtual machine hard disk files and configuration files on an SMB share and live migrate the VM to another host, whether or not that host is part of a cluster. The stages are:
- Live migration setup: the source host creates a TCP connection with the destination host. This connection transfers the virtual machine configuration data to the destination host, a skeleton virtual machine is set up there, and memory is allocated to the destination virtual machine.
- Memory page transfer: the memory assigned to the migrating virtual machine - its "working set" - is copied over the network from the source host to the destination host. A page of memory is 4 KB. During this phase, the migrating virtual machine continues to run. Hyper-V iterates the memory copy process several times, with each iteration requiring a smaller number of modified pages to be copied.
- Modified page copy: the remaining modified memory pages are duplicated to the destination host, and the source host transfers the CPU and device state of the virtual machine. During this stage, the available network bandwidth between the source and destination hosts is critical to the speed of the live migration, so a 1-gigabit Ethernet (GbE) or faster connection is important. The number of pages transferred in this stage is determined by how actively the virtual machine accesses and modifies its memory pages: the more modified pages there are, the longer it takes to transfer them all to the destination host.
- Moving the storage handle from source to destination: control of the storage associated with the virtual machine, such as any virtual hard disk files or physical storage attached through a virtual Fibre Channel adapter, is transferred to the destination host. (Virtual Fibre Channel is also a new feature of Hyper-V; see "Virtual Fibre Channel in Hyper-V".)
- Bringing the virtual machine online on the destination server: the destination server now has the up-to-date working set for the virtual machine and access to any storage the VM uses, and the VM resumes operation.
- Network cleanup: the migrated virtual machine runs on the destination server, and a message is sent to the network switch, causing it to learn the new MAC addresses of the migrated virtual machine so that network traffic to and from the VM uses the correct switch port.
The live migration process completes in less time than the TCP time-out interval for the virtual machine being migrated; TCP time-out intervals vary based on network topology and other factors.
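A sketch of enabling and driving a migration between two standalone hosts whose VM files live on an SMB 3.0 share; host and VM names are placeholders:

    # Allow live migrations on this host (repeat on the destination host).
    Enable-VMMigration
    Set-VMHost -MaximumVirtualMachineMigrations 4 `
               -VirtualMachineMigrationAuthenticationType Kerberos
    # Move the VM; its disks stay on the SMB share, so no cluster is required.
    Move-VM -Name "SQL-VM01" -DestinationHost "HV-Host02"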

25 Live Migration Compression
Intelligently accelerates live migration transfer speed:
- Utilizes available CPU resources on the host to perform compression
- Compressed memory is sent across the network faster
- Operates on networks with less than 10 gigabits of available bandwidth
- Enables roughly a 2x improvement in live migration performance
Stages: live migration setup; memory pages compressed, then transferred; modified pages compressed, then transferred; storage handle moved.
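A one-line sketch of selecting this mode on a host (it is the default on most 2012 R2 installations):

    # Compress migration traffic using spare host CPU.
    Set-VMHost -VirtualMachineMigrationPerformanceOption Compression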

26 Live Migration over RDMA
Harness RDMA to accelerate live migration performance:
- Remote Direct Memory Access delivers a low-latency network, low CPU utilization & higher bandwidth
- Supports speeds up to 56 Gb/s
- Windows Server 2012 R2 Preview supports RoCE, iWARP & InfiniBand RDMA solutions
- Delivers the highest performance for live migrations
- Cannot be used together with compression
Stages: live migration setup; memory pages transferred at high speed; modified pages transferred at high speed; storage handle moved.
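A sketch of checking for RDMA-capable NICs and switching live migration to the SMB transport, which lets SMB Direct carry the traffic where the fabric supports it:

    # List adapters with RDMA enabled.
    Get-NetAdapterRdma | Where-Object Enabled
    # Use SMB (and therefore SMB Direct, when available) for migrations.
    Set-VMHost -VirtualMachineMigrationPerformanceOption SMB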

27 Storage Live Migration
Increased flexibility through live migration of VM storage - move virtual hard disks attached to a running virtual machine:
- Manage storage in a cloud environment with greater flexibility and control
- Move storage with no downtime
- Update the physical storage available to a virtual machine (such as SMB-based storage)
- Windows PowerShell cmdlets
Stages: reads and writes go to the source VHD; disk contents are copied to the new destination VHD; disk writes are mirrored and outstanding changes are replicated; reads and writes go to the new destination VHD.
Not only can we live migrate a virtual machine between two physical hosts: Hyper-V in Windows Server 2012 introduces live storage migration, which lets you move virtual hard disks attached to a running virtual machine without downtime. Through this feature, you can transfer virtual hard disks, with no downtime, to a new location for upgrading or migrating storage, performing backend storage maintenance, or redistributing your storage load. You can perform this operation by using a new wizard in Hyper-V Manager or the new Hyper-V cmdlets for Windows PowerShell. Live storage migration is available for both storage area network (SAN)-based and file-based storage.
When you move a running virtual machine's virtual hard disks, Hyper-V performs the following steps:
1. Throughout most of the move operation, disk reads and writes go to the source virtual hard disk.
2. After live storage migration is initiated, a new virtual hard disk is created on the target storage device. While reads and writes occur on the source virtual hard disk, the disk contents are copied to the new destination virtual hard disk.
3. After the initial disk copy is complete, disk writes are mirrored to both the source and destination virtual hard disks while outstanding disk changes are replicated.
4. After the source and destination virtual hard disks are synchronized, the virtual machine switches over to using the destination virtual hard disk.
5. The source virtual hard disk is deleted.
Just as virtual machines might need to be dynamically moved in a cloud data center, the allocated storage for running virtual hard disks might sometimes need to be moved for storage load distribution, storage device servicing, or other reasons.
Updating the physical storage available to Hyper-V is the most common reason for moving a virtual machine's storage. You also may want to move virtual machine storage between physical storage devices at runtime to take advantage of new, lower-cost storage supported in this version of Hyper-V, such as SMB-based storage, or to respond to reduced performance caused by bottlenecks in storage throughput. Windows Server 2012 provides the flexibility to move virtual hard disks both on shared storage subsystems and on non-shared storage, as long as a Windows Server 2012 SMB3 network shared folder is visible to both Hyper-V hosts. You can add physical storage to either a stand-alone system or a Hyper-V cluster and then move the virtual machine's virtual hard disks to the new physical storage while the virtual machines continue to run.
Storage migration, combined with live migration, also lets you move a virtual machine between hosts on different servers that are not using the same storage. For example, if two Hyper-V servers are each configured to use different storage devices and a virtual machine must be migrated between these two servers, you can use storage migration to move the VM's storage to a shared folder on a file server that is accessible to both servers, and then migrate the virtual machine between the servers (because they both have access to that share). Following the live migration, you can use another storage migration to move the virtual hard disk to the storage allocated for the target server.
Benefits: Hyper-V in Windows Server 2012 lets you manage the storage of your cloud environment with greater flexibility and control while you avoid disruption of user productivity. Storage migration gives you the flexibility to perform maintenance on storage subsystems, upgrade storage appliance firmware and software, and balance loads as capacity is used, all without shutting down virtual machines.
Requirements for live storage migration: Windows Server 2012, the Hyper-V role, and virtual machines configured to use virtual hard disks for storage.
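A minimal sketch of the cmdlet path, with placeholder VM name and destination share:

    # Move all of a running VM's storage to an SMB share, with no downtime.
    Move-VMStorage -VMName "SQL-VM01" `
                   -DestinationStoragePath "\\FileServer\VMs\SQL-VM01"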

28 Shared-Nothing Live Migration
Complete flexibility for virtual machine migrations:
- Increases flexibility of virtual machine placement & administrator efficiency
- Simultaneously live migrate a VM & its virtual disks between hosts
- Nothing shared but an Ethernet cable - no clustering or shared storage requirements
- Reduces downtime for migrations across cluster boundaries
Stages: reads and writes go to the source VHD; live migration begins and disk contents are copied to the new destination VHD; disk writes are mirrored and outstanding changes are replicated; live migration continues and completes.
With Windows Server 2012 Hyper-V, you can also perform a "shared nothing" live migration, moving a virtual machine, live, from one physical system to another even if they don't have connectivity to the same shared storage. This is useful, for example, in a branch office where you store virtual machines on local disk and want to move a VM from one node to another. It is also especially useful when you have two independent clusters and want to move a virtual machine, live, between them without having to expose their shared storage to one another. You can even use shared-nothing live migration to migrate a virtual machine from one datacenter to another, provided your bandwidth is large enough to transfer all of the data between the datacenters.
When you perform a live migration of a virtual machine between two computers that do not share an infrastructure, Hyper-V first performs a partial migration of the virtual machine's storage: it creates a virtual machine on the remote system and creates the virtual hard disk on the target storage device. While reads and writes occur on the source virtual hard disk, the disk contents are copied over the network to the new destination virtual hard disk, transferring the contents of the VHD over the IP connection between the Hyper-V hosts. After the initial disk copy is complete, disk writes are mirrored to both the source and destination virtual hard disks while outstanding disk changes are replicated. After the source and destination virtual hard disks are synchronized, the virtual machine live migration is initiated, following the same process used for live migration with shared storage. After the virtual machine's storage is migrated, the virtual machine migrates while it continues to run and provide network services. Once the live migration is complete and the virtual machine is successfully running on the destination server, the files on the source server are deleted.
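A sketch of the shared-nothing variant, combining the VM move with its storage; names and paths are placeholders:

    # Migrate a running VM plus its disks between two standalone hosts that
    # share only a network connection.
    Move-VM -Name "Branch-VM01" -DestinationHost "HV-Host02" `
            -IncludeStorage -DestinationStoragePath "D:\VMs\Branch-VM01"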

29 Live Migration Upgrades
Hyper-V cluster upgrade without downtime - a simplified upgrade process from 2012 to 2012 R2:
- Customers can upgrade from Windows Server 2012 Hyper-V to Windows Server 2012 R2 Preview Hyper-V with no VM downtime
- Supports Shared Nothing Live Migration for migrations that also change storage location
- If using an SMB share, the migration transfers only the VM running state, for faster completion
- Can be automated with PowerShell
- One-way migration only
[Figure: VMs on SMB storage are live migrated from 2012 cluster nodes to 2012 R2 cluster nodes.]

30 High Availability & Resiliency
Robust, reliable & resilient infrastructure foundation for running continuous services:
- Failover Clustering
- NIC Teaming
Provide flexibility for application-level resiliency:
- Guest Clustering
- Shared VHDX
- Failover Priority & Affinity Rules
Simplify infrastructure maintenance:
- Cluster Aware Updating
Provide granular solutions for enabling disaster recovery, with integration with cloud services:
- Hyper-V Replica with Extended Replication
- Hyper-V Recovery Manager
- Online Backup

31 Failover Clustering Cluster Dynamic Quorum Configuration
Integrated Solution for Resilient Virtual Machines
- Massive scalability, with support for 64 physical nodes and 8,000 VMs
- VMs automatically fail over and restart on a physical host outage
- Enhanced Cluster Shared Volumes
- Cluster VMs on SMB 3.0 storage
- Dynamic Quorum & Witness
- Reduced AD dependencies
- Drain Roles (maintenance mode) and VM Drain on Shutdown
- VM Network Health Detection
- Enhanced Cluster Dashboard

[Diagram: Cluster Dynamic Quorum Configuration. Node 1 through Node 64, each with a vote (V), attached to iSCSI, Fibre Channel, or SMB 3.0 storage; quorum models include Node Majority and Node & Disk Majority.]
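Drain Roles is exposed in PowerShell as well as in Failover Cluster Manager; a short sketch of a maintenance cycle, with hypothetical node and cluster names:

Import-Module FailoverClusters

# Drain: pause the node and live-migrate its VMs to other nodes.
Suspend-ClusterNode -Name "Node1" -Cluster "HVCluster" -Drain

# ...patch and reboot Node1 here...

# Resume the node and fail roles back to it.
Resume-ClusterNode -Name "Node1" -Cluster "HVCluster" -Failback Immediate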

32 Guest Clustering Complete Flexibility for Deploying App-Level HA
Complete Flexibility for Deploying App-Level HA
- Full support for running clustered workloads on a Hyper-V host cluster
- Guest clusters that require shared storage can utilize software iSCSI, virtual FC, or SMB
- Full support for Live Migration of guest cluster nodes
- Full support for Dynamic Memory on guest cluster nodes
- A guest cluster node restarts on physical host failure
- Restart Priority, Possible & Preferred Ownership, and AntiAffinityClassNames help ensure optimal operation

[Diagram: a guest cluster running across VMs on a Hyper-V host cluster, backed by iSCSI, Fibre Channel, or SMB storage.]
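The guest cluster itself is built inside the VMs with the same tooling as a physical cluster; a minimal sketch, run inside the guest OS, with hypothetical names and a hypothetical cluster IP:

# Run inside the guest VMs, not on the Hyper-V hosts.
Install-WindowsFeature Failover-Clustering -IncludeManagementTools

# Form a two-node guest cluster across the VMs.
New-Cluster -Name "GuestCluster1" `
            -Node "GuestVM1","GuestVM2" `
            -StaticAddress "192.168.10.50"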

33 Guest Clustering with Shared VHDX
Guest Clustering with Shared VHDX

Guest Clustering No Longer Bound to Storage Topology
- VHDX files can be presented to multiple VMs simultaneously, as shared storage
- The VM sees a shared virtual SAS disk
- An unrestricted number of VMs can connect to a shared VHDX file
- Utilizes SCSI persistent reservations
- Flexible placement: the VHDX can reside on a Cluster Shared Volume on block storage, or on file-based storage
- Supports both dynamic and fixed VHDX

[Diagram: two guest clusters on Hyper-V host clusters; one stores its shared VHDX file on a CSV on block storage, the other on an SMB share on file-based storage.]
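Sharing is a per-attachment property of the virtual disk; a sketch of wiring one VHDX into two guest-cluster nodes, with hypothetical VM names and path:

# Attach the same VHDX to both guest cluster nodes as shared storage.
# -SupportPersistentReservations marks the disk as shared; the guests
# then see a shared virtual SAS disk.
$shared = "C:\ClusterStorage\Volume1\GuestClusterData.vhdx"

Add-VMHardDiskDrive -VMName "GuestVM1" -ControllerType SCSI `
                    -Path $shared -SupportPersistentReservations
Add-VMHardDiskDrive -VMName "GuestVM2" -ControllerType SCSI `
                    -Path $shared -SupportPersistentReservations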

34 Virtual Machine Monitoring
Monitor Health of Applications Inside Clustered VMs
- Upon a service failure, the Service Control Manager inside the guest attempts to restart the service
- After 3 failures, the Cluster Service triggers event log entry 1250 and marks the VM state as "Application in VM Critical"
- The VM can be automatically restarted on the same node
- Upon a subsequent failure, the VM can be failed over and restarted on an alternative node
- Extensible by partners

[Diagram: a clustered VM with monitoring enabled, running on a Hyper-V cluster node.]
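Monitoring is switched on per service from a cluster node; a short sketch with a hypothetical VM name, using the in-box Print Spooler service as the monitored example:

# Monitor the Print Spooler service inside the clustered VM "AppVM01".
Add-ClusterVMMonitoredItem -VirtualMachine "AppVM01" -Service "spooler"

# List what is being monitored in that VM.
Get-ClusterVMMonitoredItem -VirtualMachine "AppVM01"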

35 Cluster Aware Updating
Integrated Patching Solution for Hyper-V Clusters
- Reduces server downtime and user disruption through orchestration of cluster node updates
- Maintains service availability without impacting cluster quorum
- Detects required updates and moves workloads off nodes for updates
- Uses the Windows Update Agent or an extensible plug-in: for example, Windows Server Update Services (WSUS), or a third-party plug-in pulling update files from an SMB share

[Diagram: an application client working against a Windows Server cluster while nodes are updated (U) one at a time, based on current workload.]

Cluster-Aware Updating (CAU) reduces server downtime and user disruption by allowing IT administrators to update clustered servers with little or no loss in availability during updates performed on cluster nodes. <click> When an update is available, CAU transparently takes one node of the cluster offline, installs the updates, performs a restart if necessary, brings the node back online, and moves on to the next node.

In the animation, you see a four-node cluster with varying loads. You have an application client, say SQL, that constantly accesses the cluster. The computer running the CAU process, also called the Orchestrator, identifies the node with the least amount of load, transfers the workload from that node to another node in the cluster, installs the updates, restarts the node, and then moves on to the next node. So without any manual intervention, the Orchestrator automatically talks to the WSUS server or other update servers and updates the nodes in the cluster. This feature is integrated into the existing Windows update management infrastructure and can be further extended and automated with Windows PowerShell for integration into larger IT automation initiatives.

Additional points:
- Can update GDRs, QFEs, BIOS, and any other application update; supports hotfixes as well
- Can be launched from the Server Manager UI or from the RSAT tool (the update coordinator runs outside the cluster)
- At the end of the update run, workloads come back to the same configuration
- The updating process itself is resilient to planned and unplanned failures
- No I/O failures, no downtime
- The entire process can be driven with PowerShell
- Plug-in APIs and reference documentation are available on MSDN

<next slide>
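The "drive the entire process with PowerShell" point can look like the following; a hedged one-off run with a hypothetical cluster name, using the in-box Windows Update plug-in:

# One-off CAU run, launched from an orchestrator machine outside the
# cluster; nodes are drained, updated, and restarted one at a time.
Invoke-CauRun -ClusterName "HVCluster" `
              -CauPluginName "Microsoft.WindowsUpdatePlugin" `
              -MaxFailedNodes 1 `
              -MaxRetriesPerNode 3 `
              -RequireAllNodesOnline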

36 Failover Priority, Affinity & Anti-Affinity
Failover Priority, Affinity & Anti-Affinity

Ensure Optimal VM Placement and Restart Operations
- Failover Priority ensures certain VMs start before others on the cluster; upon failover, VMs restart in priority order
- Affinity rules allow VMs to reside on certain hosts in the cluster
- Anti-affinity keeps related VMs apart: AntiAffinityClassNames helps keep virtual machines on separate physical cluster nodes
- AntiAffinityClassNames is exposed through VMM as an Availability Set

[Diagram: Hyper-V hosts in a cluster on iSCSI, FC, or SMB storage, with VMs on each node; on failover, VMs restart in priority order (1, then 2).]
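Both settings live on the cluster group for the VM; a sketch with hypothetical VM names (priority values: 3000 = High, 2000 = Medium, 1000 = Low, 0 = no auto start):

# Make SQLVM01 restart first after a host failure.
(Get-ClusterGroup "SQLVM01").Priority = 3000

# Keep the two SQL guest-cluster nodes on separate physical hosts.
$aa = New-Object System.Collections.Specialized.StringCollection
[void]$aa.Add("SQL Guest Cluster")
(Get-ClusterGroup "SQLVM01").AntiAffinityClassNames = $aa
(Get-ClusterGroup "SQLVM02").AntiAffinityClassNames = $aa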

37 Hyper‑V Replica Replicate Hyper‑V VMs from a Primary to a Replica site
Hyper‑V Replica

Replicate Hyper‑V VMs from a Primary to a Replica site
- Affordable in-box business continuity and disaster recovery
- Configurable replication frequencies of 30 seconds, 5 minutes, and 15 minutes
- Secure replication across the network
- Agnostic of the hardware on either site; no need for other virtual machine replication technologies
- Automatic handling of live migration
- Simple configuration and management
- Once Hyper-V Replica is enabled, VMs begin replication; after the initial replica, changes are replicated on the chosen frequency
- Upon site failure, VMs can be started on the secondary site

[Diagram: initial replica and replicated changes flowing from the primary site to the secondary site.]

Current situation: Business continuity is the ability to quickly recover business functions from a downtime event with minimal or no data loss. There are a number of reasons why businesses experience an outage, including power failure, IT hardware failure, network outage, human error, IT software failure, and natural disasters. Depending on the type of outage, customers need a high availability solution that simply restores the service. However, some outages that impact the entire data center, such as a natural disaster or an extended power outage, require a disaster recovery solution that restores data at a remote site in addition to bringing up the services and connectivity. Organizations need an affordable and reliable business continuity solution that helps them recover from a failure.

Before Windows Server 2012: Beginning with Windows Server 2008 R2, Hyper‑V and Failover Clustering could be used together to make a virtual machine highly available and minimize disruptions. Administrators could seamlessly migrate their virtual machines to a different host in the cluster in the event of an outage, or to load balance their virtual machines without impacting virtualized applications. While this protects virtualized workloads from a local host failure or scheduled maintenance of a host in a cluster, it does not protect businesses from the outage of an entire data center. While Failover Clustering can be used with hardware-based SAN replication across data centers, such solutions are typically expensive. Hyper‑V Replica fills an important gap in the Windows Server Hyper‑V offering by providing an affordable in-box disaster recovery solution.

Windows Server 2012 Hyper‑V Replica: Windows Server 2012 introduces Hyper‑V Replica, a built-in feature that provides asynchronous replication of virtual machines for the purposes of business continuity and disaster recovery. In the event of failures (such as power failure, fire, or natural disaster) at the primary site, the administrator can manually fail over the production virtual machines to the Hyper‑V server at the recovery site. During failover, the virtual machines are brought back to a consistent point in time, and within minutes they can be accessed by the rest of the network with minimal impact to the business. Once the primary site comes back, the administrators can manually revert the virtual machines to the Hyper‑V server at the primary site. Hyper‑V Replica is a new feature in Windows Server 2012. It lets you replicate your Hyper‑V virtual machines over a network link from one Hyper‑V host at a primary site to another Hyper‑V host at a Replica site, without reliance on storage arrays or other software replication technologies. The figure shows secure replication of virtual machines from different systems and clusters to a remote site over a WAN.
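End to end, enabling replica is a few cmdlets; a hedged sketch with hypothetical server names and paths (the -ReplicationFrequencySec parameter is the R2 addition; Windows Server 2012 always replicated on a 5-minute cycle):

# On the Replica server: accept inbound replication.
# (Also enable the "Hyper-V Replica HTTP Listener (TCP-In)" firewall rule.)
Set-VMReplicationServer -ReplicationEnabled $true `
    -AllowedAuthenticationType Kerberos `
    -ReplicationAllowedFromAnyServer $true `
    -DefaultStorageLocation "D:\Replicas"

# On the primary server: enable replication for one VM and start the
# initial replica (frequency in seconds: 30, 300, or 900).
Enable-VMReplication -VMName "AppVM01" -ReplicaServerName "HV-DR01" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos `
    -ReplicationFrequencySec 300
Start-VMInitialReplication -VMName "AppVM01"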
Benefits of Hyper‑V Replica: Hyper‑V Replica fills an important gap in the Windows Server Hyper‑V offering by providing an affordable in-box business continuity and disaster recovery solution.
- Failure recovery in minutes. In the event of an unplanned shutdown, Hyper‑V Replica can restore your system in just minutes.
- More secure replication across the network. Hyper‑V Replica tracks the write operations on the primary virtual machine and replicates these changes to the Replica server efficiently over a WAN. The network connection between the two servers uses the HTTP or HTTPS protocol and supports both integrated and certificate-based authentication. Connections configured to use integrated authentication are not encrypted; for an encrypted connection, you should choose certificate-based authentication.
- Close integration with Windows failover clustering, providing easier replication across different migration scenarios on the primary and Replica servers.
- No reliance on storage arrays or on other software replication technologies.
- Automatic handling of live migration.
- Simpler configuration and management: an integrated user interface (UI) with Hyper‑V Manager, a Failover Cluster Manager snap-in for Microsoft Management Console (MMC), an extensible WMI interface, and Windows PowerShell command-line scripting capability.

<next slide>

38 Hyper-V Replica | Extended Replication
Replicate to a 3rd Location for an Extra Level of Resiliency
- Chained replication: once a VM has been successfully replicated to the replica site, the replica can itself be replicated to a 3rd location
- Replication is configured from primary to secondary; replication can then be enabled on the 1st replica to a 3rd site
- Extended Replica contents match the original replication contents
- Extended Replica replication frequencies can differ from the original replica
- Useful for scenarios such as SMB (small or midsize business) -> Service Provider -> Service Provider DR site

[Diagram: a primary site replicating to a secondary site, which in turn replicates to a DR site on DAS storage.]
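A hedged sketch of the chaining step, assuming the R2 behavior that extended replication is configured by running Enable-VMReplication on the Replica server against the replica copy; server names are hypothetical:

# Run on the first Replica server, against the replica VM, to extend
# replication to a third site. Extended replication supports 5- or
# 15-minute frequencies (no 30-second option).
Enable-VMReplication -VMName "AppVM01" -ReplicaServerName "HV-DR02" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos `
    -ReplicationFrequencySec 300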

39 Hyper-V Recovery Manager
Orchestrate protection and recovery of private clouds
- Windows Azure Hyper-V Recovery Manager protects important services by coordinating replication and recovery of VMM-managed private clouds
- Automates replication of VMs within clouds between sites
- Hyper-V Replica provides the replication, orchestrated by Hyper-V Recovery Manager
- Can be used for planned, unplanned, and test failover between sites
- Integrates with scripts for customization of recovery plans

[Diagram: Datacenter 1 and Datacenter 2, each running System Center 2012 R2 Preview over Hyper-V hosts hosting an LOB and dev-test cloud; each datacenter has a communication channel to Hyper-V Recovery Manager in Windows Azure; failover and the replication channel run directly between the two datacenters.]

40 Enhanced Session Mode Enhancing VMConnect for the Richest Experience
Improved VMBus capabilities enable:
- Audio over VMConnect
- Copy & paste between host & guest
- Smart card redirection
- Remote Desktop over VMBus
Enabled for Hyper-V on both Server & Client
Fully supports Live Migration of VMs
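On the server SKU the mode is off by default; a one-line sketch to switch it on per host:

# Enhanced session mode is disabled by default on Windows Server;
# enable it on the Hyper-V host, and VMConnect sessions then get the
# RDP-style redirection (audio, clipboard, smart cards) over VMBus.
Set-VMHost -EnableEnhancedSessionMode $true

# Verify the setting:
(Get-VMHost).EnableEnhancedSessionMode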

41 Automatic Virtual Machine Activation
Simplifying Activation of Windows Server 2012 R2 Preview VMs
- Activate VMs without managing product keys on a VM-by-VM basis
- VMs are activated on start-up
- Reporting & tracking built in
- Activate VMs in remote locations, with or without internet connectivity
- Works with VM migration
- A generic AVMA key for VMs activates against a valid, activated Windows Server 2012 R2 Hyper-V host

How it works:
1. A Windows Server 2012 R2 Preview Datacenter host is activated with a regular license key.
2. A Windows Server 2012 R2 Preview VM is created, with an AVMA key injected in the build.
3. On start-up, the VM checks for an activated Windows Server 2012 R2 Preview Datacenter Hyper-V host.
4. The guest OS activates and won't recheck against the host until the next guest reboot, or after 7 days.
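If the AVMA key wasn't injected during the build, it can be installed inside the guest afterwards; a sketch using slmgr (the generic AVMA keys per edition are published in the Windows Server 2012 R2 documentation; the value below is a placeholder, not a real key):

# Inside the guest OS: install the generic AVMA key for the edition.
slmgr /ipk "<AVMA-key-for-guest-edition>"

# Check activation status afterwards.
slmgr /dlv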

42 Summary
Summary

Scalability & Performance | Security & Multitenancy | Flexible Infrastructure | High Availability & Resiliency | Virtualization Innovation

Virtualization technologies help customers lower costs and deliver greater agility and economies of scale. Either as a stand-alone product or as an integrated part of Windows Server, Hyper‑V is the leading virtualization platform for today and the transformational opportunity of cloud computing. With Hyper‑V, it is now easier than ever for organizations to take advantage of the cost savings of virtualization and make optimum use of server hardware investments by consolidating multiple server roles as separate virtual machines running on a single physical machine. Customers can use Hyper‑V to efficiently run multiple operating systems (Windows, Linux, and others) in parallel on a single server. Windows Server 2012 extends this with more features, greater scalability, and further built-in reliability mechanisms. In the data center, on the desktop, and now in the cloud, the Microsoft virtualization platform, led by Hyper‑V and its management tools, simply makes more sense and offers better value for money than the competition.

This module has covered Hyper-V across five key areas: scalability & performance, security & multitenancy, flexible infrastructure, high availability & resiliency, and virtualization innovation.

Hyper-V: A More Complete Virtualization Platform

43 System Center 2012 R2 Jumpstart July 15th - http://aka.ms/SCR2JS
#WS2012R2JS: talk with our experts in the Chat tool. Hands-On Labs.
DOWNLOAD Windows Server 2012 R2 Preview: aka.ms/ws2012r2
DOWNLOAD System Center 2012 R2 Preview: aka.ms/sc2012r2
Generic question and interactive reminder slide: What Twitter hashtag will we be using for this jumpstart to talk outside of the chat, besides our names?
System Center 2012 R2 Jumpstart, July 15th: http://aka.ms/SCR2JS

