Building the Foundation: Server Virtualisation and Management
Julius Davies, Datacenter Technology Specialist, Microsoft UK – Julius.Davies@microsoft.com
Clive Watson, Datacenter Technology Specialist, Microsoft UK – Clive.Watson@microsoft.com
Where are we today?
How can we optimise?
HYPER-V
Thin Provisioning
The Guest OS needs to see 100GB, but may only consume a fraction of that.
With fixed VHDs, a 100GB VHD consumes the full 100GB on the SAN.
With dynamic VHDs, the physical space consumed equals only what the Guest OS has actually written.
VHD Performance Whitepaper: link here
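A minimal sketch of creating a thin-provisioned VHD from PowerShell. Note the New-VHD cmdlet ships with the Hyper-V module of later Windows Server releases, not with R2 itself, and the path is hypothetical:

    # Create a 100GB dynamic VHD: the guest sees 100GB, but the file on
    # disk starts small and grows only as the Guest OS writes data.
    New-VHD -Path 'C:\VHDs\web01.vhd' -SizeBytes 100GB -Dynamic

    # For comparison, a fixed VHD allocates the full 100GB up front:
    # New-VHD -Path 'C:\VHDs\web01-fixed.vhd' -SizeBytes 100GB -Fixed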
Dynamic Storage
A flexible solution for adjusting available VM storage without downtime.
Utilises the SCSI controller for hot-add and hot-remove of VHDs/pass-through disks.
Each VM can have up to 4 SCSI controllers.
Each SCSI controller can have up to 64 disks attached.
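A sketch of the hot-add/hot-remove scenario, again using the Hyper-V PowerShell module from later releases (so the cmdlets are an assumption for the R2 timeframe; the VM and path names are hypothetical):

    # Hot-add a VHD to a running VM - works only on the SCSI controller,
    # not IDE, which is why SCSI is required for zero-downtime changes.
    Add-VMHardDiskDrive -VMName 'web01' -ControllerType SCSI `
                        -ControllerNumber 0 -Path 'C:\VHDs\web01-data.vhd'

    # Hot-remove it again later:
    # Remove-VMHardDiskDrive -VMName 'web01' -ControllerType SCSI `
    #                        -ControllerNumber 0 -ControllerLocation 0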
DEMO: Dynamic Storage
Hyper-V Networking
3 types of network: Private, Internal, External
– Private = VM to VM
– Internal = VM to VM, and VM to host
– External = VM to VM, VM to host, and VM to VM across hosts
Each VM can have up to 12 vNICs
– 8 synthetic and 4 legacy (PXE-capable)
– Each can carry a different VLAN ID
Teaming support is provided by the NIC vendor: Intel = PROSet, Broadcom = BACS, HP = NCU
Best practice: install/enable Hyper-V first, then install the networking utilities.
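A sketch of creating the three switch types (the New-VMSwitch cmdlet arrived with the Hyper-V module in later releases, so treat it as illustrative; the adapter name is hypothetical):

    # Private: VM-to-VM traffic only
    New-VMSwitch -Name 'Private1'  -SwitchType Private
    # Internal: VM-to-VM plus VM-to-host
    New-VMSwitch -Name 'Internal1' -SwitchType Internal
    # External: bound to a physical NIC, so VMs can also reach other hosts
    New-VMSwitch -Name 'External1' -NetAdapterName 'Ethernet 2'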
DEMO: A look at networking
Hyper-V Networking for Clusters
Great guide here: http://technet.microsoft.com/en-us/library/ff428137(WS.10).aspx
Best practice suggests:
– 1 network for host management
– 1 network for cluster heartbeat
– 1 network for Cluster Shared Volumes
– 1 network for Live Migration
– 1 network for virtual machine traffic
– If using iSCSI: 2 networks for iSCSI storage with MPIO
The above numbers represent networks, not ports; you may wish to team certain ports to provide resiliency.
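These roles can be scripted with the FailoverClusters PowerShell module; a minimal sketch (the network names are hypothetical; Role values are 0 = no cluster use, 1 = cluster traffic only, 3 = cluster and client traffic):

    Import-Module FailoverClusters

    # Host management carries both cluster and client traffic
    (Get-ClusterNetwork 'Mgmt').Role      = 3
    # Heartbeat, CSV and Live Migration are cluster-only networks
    (Get-ClusterNetwork 'Heartbeat').Role = 1
    (Get-ClusterNetwork 'CSV').Role       = 1
    (Get-ClusterNetwork 'LM').Role        = 1
    # iSCSI networks are kept out of cluster use; MPIO provides resiliency
    (Get-ClusterNetwork 'iSCSI-1').Role   = 0
    (Get-ClusterNetwork 'iSCSI-2').Role   = 0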
High Availability – Clustering
1. 2 Hyper-V R2 nodes in a failover cluster. Each node has 2 VMs running; the VMs are stored on the SAN.
2. Node 1 fails, bringing down its 2 VMs.
3. Failover Clustering in Hyper-V R2 ensures the VMs restart on Node 2 of the Hyper-V cluster.
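Making a VM highly available so the cluster can restart it elsewhere is a single FailoverClusters cmdlet; a sketch with a hypothetical VM name:

    Import-Module FailoverClusters

    # Register the VM as a clustered role; if its node fails,
    # the cluster restarts the VM on a surviving node.
    Add-ClusterVirtualMachineRole -VirtualMachine 'web01'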
Cluster Shared Volumes
Enables multiple nodes to concurrently access a single, truly shared LUN.
Provides VMs complete transparency with respect to which node actually owns a LUN.
Guest VMs can be moved without requiring any drive-ownership changes.
No dismounting and remounting of volumes is required.
Cluster Shared Volumes
1. We've set up a WS2008 R2 cluster and created 4 LUNs on the SAN.
2. We've made the LUNs available to the cluster.
3. In the Failover Clustering MMC, we mark the LUNs as CSVs.
4. Each node in the cluster then sees the same consistent namespace for the LUNs: C:\ClusterStorage\Volume1 through C:\ClusterStorage\Volume4. We can now drop as many VMs on each CSV as we like.
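Step 3 maps to one FailoverClusters cmdlet per LUN; a sketch with hypothetical disk names:

    Import-Module FailoverClusters

    # Promote each clustered disk to a Cluster Shared Volume.
    # Every node then sees it under C:\ClusterStorage\VolumeN.
    'Cluster Disk 1','Cluster Disk 2','Cluster Disk 3','Cluster Disk 4' |
        ForEach-Object { Add-ClusterSharedVolume -Name $_ }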
Live Migration
1. 2 Hyper-V R2 nodes in a failover cluster. Each node has 2 VMs running; the VMs are stored on the SAN.
2. We decide we'd like to live migrate a VM from Node 1 to Node 2.
3. Live Migration in Hyper-V R2 ensures the VM is migrated with no downtime.
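The same move can be scripted through the FailoverClusters module; a sketch with hypothetical group and node names (the -MigrationType parameter is assumed from later releases of the module):

    Import-Module FailoverClusters

    # Live-migrate the clustered VM to the other node with no downtime
    Move-ClusterVirtualMachineRole -Name 'web01' -Node 'Node2' `
                                   -MigrationType Live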
DEMO: Live Migration
Dynamic Memory
Automatic, dynamic balancing of memory between running VMs.
Understands the needs of the Guest OS.
Available as part of WS2008 R2 SP1 at no cost.
"On the hardware I was testing with, I saw an increase from 64 VMs (Windows 7 on Hyper-V R2) to 133 VMs (Windows 7 on Hyper-V R2 SP1). We also ran performance testing against this, so this wasn't a case of 'let's see how many VMs we can fire up'." – Matt Evans, Quest Software
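A sketch of enabling Dynamic Memory on a VM. In the SP1 timeframe this is configured through Hyper-V Manager; the Set-VMMemory cmdlet below is from the Hyper-V module of later releases, so treat it as illustrative (the VM name and sizes are hypothetical):

    # Let Hyper-V balance this VM's RAM between 512MB and 4GB,
    # based on what the Guest OS actually needs.
    Set-VMMemory -VMName 'web01' -DynamicMemoryEnabled $true `
                 -StartupBytes 512MB -MinimumBytes 512MB -MaximumBytes 4GB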
DEMO: Hyper-V
RemoteFX
Not a replacement for RDP! An enhancement to the graphical capabilities of RDP 7.1.
– vGPU (WDDM): a single GPU shared by multiple Hyper-V guests
– Host-side rendering: apps run at full speed on the host
– Intelligent screen capture & hardware-based encode: screen deltas sent to the client based on network/client availability
– Bitmap remoting & hardware-based decode: full range of client devices; HW and SW manifestations by design
Hyper-V R2 SP1 – Summary
Business continuity – High Availability & Live Migration
Host scalability – 64 cores & 1TB RAM
VM scalability – 64GB RAM & 4 vCPUs per VM
Density – Dynamic Memory included with SP1
Power efficiency – core parking & many power improvements
Dynamic Storage – add/remove disks with no downtime
Thin-provisioned VHDs – use less storage
Networking improvements – NIC teaming via the NIC vendor, jumbo frames, TCP offload, VMQ, VLANs, etc.
Familiarity – based on Windows; managed through Windows and System Center
Hardware optimised – takes advantage of the latest hardware innovations (e.g. SLAT)
Huge HCL – http://www.windowsservercatalog.com
OS support – in-lifecycle Windows Server/client releases & Linux (SUSE/RHEL/CentOS)
How can we better manage?
Self Service Portal 1.0 – components: SQL database, library server, VMM server, SCVMM 2008 R2 SP1 admin console.
SCVMM 2008 R2 SP1
– Multi-hypervisor
– P2V & V2V
– Live Migration support
– Quick Storage Migration
– OpsMgr integration: unlocks PRO capabilities
– Rapid provisioning
– Intelligent placement
– Library & web portal
– AD integration
– Granular management
– PowerShell
– Maintenance mode
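All of this is scriptable: VMM 2008 R2 exposes its operations through a PowerShell snap-in. A minimal sketch of intelligent placement driven from the shell (server, VM and property details are assumptions; check the exact parameters against the VMM documentation):

    Add-PSSnapin Microsoft.SystemCenter.VirtualMachineManager
    Get-VMMServer -ComputerName 'vmm01'

    # Intelligent placement: rate every managed host for this VM,
    # then migrate the VM to the best-rated host.
    $vm   = Get-VM -Name 'web01'
    $best = Get-VMHostRating -VM $vm -VMHost (Get-VMHost) |
            Sort-Object -Property Rating -Descending |
            Select-Object -First 1
    Move-VM -VM $vm -VMHost $best.VMHost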
Self Service Portal 1.0
DEMO: SCVMM 2008 R2 SP1
SCVMM 2012 – key pillars
Deployment – HA VMM server, upgrade, custom properties, PowerShell
Fabric – Hyper-V bare-metal deployment, Hyper-V/VMware/XenServer management, cluster creation, dynamic optimization, power management, network management, storage management, monitoring integration
Cloud – capacity & capability, delegation & quota, self-service
Services – service lifecycle, app deployment, image-based servicing, app owner usage
SCVMM 2012 - interface
SCVMM 2012 in action:
System Center Virtual Machine Manager 2012: Fabric Management for the Private Cloud – http://www.msteched.com/2010/Europe/MGT306
System Center Virtual Machine Manager 2012: Service Lifecycle Management for the Private Cloud – http://www.msteched.com/2010/Europe/MGT206
How can we provide SELF SERVICE? Self Service Portal V2
Datacenter and Line-of-Business Administrators – diagram: the datacenter administrator provides 24x7 infrastructure management, monitoring and security; the LOB administrator delivers focused solutions and flexible management to end users/consumers.
VMM Self-Service Portal 2.0
Step 1 – Configuration and extensibility: pool infrastructure assets in the toolkit; extend virtual machine actions through the extensibility UI.
Step 2 – Onboarding and infrastructure request: onboard a business unit; create an infrastructure request (i.e. request a sandbox).
Step 3 – Approval and provisioning: verify asset availability and capacity; assign assets; approve the infrastructure request and provision.
Step 4 – Self-service VM provisioning: manage the environment; manage VMs; access reports.
VMM Self-Service Portal 2.0
DEMO: SELF SERVICE PORTAL V2
Learn More
SYSTEM CENTER: http://www.microsoft.com/systemcenter
HYPER-V: http://www.microsoft.com/hyperv
PRIVATE CLOUD: http://microsoft.com/privatecloud
APPLICATION VIRTUALISATION: http://microsoft.com/appv
© 2008 Microsoft Corporation. All rights reserved. Microsoft, Windows, Windows Vista and other product names are or may be registered trademarks and/or trademarks in the U.S. and/or other countries. The information herein is for informational purposes only and represents the current view of Microsoft Corporation as of the date of this presentation. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information provided after the date of this presentation. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS PRESENTATION.