FusionCompute Computing Virtualization
Objectives

Upon completion of this course, you will be able to:
Understand the development background and working principles of computing virtualization technology.
Understand the working principles, deployment, and configuration of FusionCompute computing virtualization features.
Contents

Introduction to Computing Virtualization
  Development Background
  Working Principles
Introduction to FusionCompute Computing Virtualization Features
What Is Virtualization?
Virtualization converts physical resources into logical resources by adding a virtualization layer to the system that virtualizes the underlying resources.

Before virtualization: IT resources are isolated, and the OS is closely coupled with the hardware; each server runs a single OS (Windows or Linux) together with its applications.
After virtualization: server resources are virtualized to form shareable resource pools, the OS is decoupled from the hardware, and resources are allocated from the resource pools through the virtualization layer on each server.
Origin: History and Impetus

History:
The virtualization technology was first applied on mainframe computers in the 1960s. Logical partitioning was used on midrange computers in 1999. x86 platform virtualization technology began to appear in 2000 and was applied to servers in 2001.

Impetus:
The CPU processes data much faster than most software requires. Intel and AMD added virtualization instructions to their CPUs. Enterprises need to reduce costs, environmental protection is increasingly urgent, and service pressure keeps growing.

In the traditional system architecture, each physical server runs only one operating system (OS) and, in most cases, only one workload. Running multiple primary applications on one server is difficult because conflicts and performance problems may occur. The ideal solution is to run only one application per server, but this leads to low server utilization, wasting a large part of the purchased computing capacity. A balance must therefore be found between purchasing more hardware and pursuing higher resource utilization. As services grow, cost pressure rises, management efficiency drops, and resource consumption soars.

The core purpose of an enterprise virtualization strategy is to improve the working efficiency of the IT department, thereby saving costs. One of the essential missions of virtualization is to improve management efficiency, lower costs, and increase hardware utilization. Virtualization reduces the number of physical servers to be deployed and seamlessly moves the OSs and applications on physical servers to VMs, so that all these resources can be managed centrally.

In the 1990s, virtualization software vendors represented by VMware used a software solution centered on the virtual machine monitor (VMM) to virtualize the x86 platform. This solution is based on full virtualization: the resources obtained by each guest OS are managed and allocated by the VMM, so binary translation is required, which degrades performance. To resolve this problem, paravirtualization was introduced. It does not require binary translation but modifies the guest OS kernel to achieve high performance and scalability; however, modifying the guest OS causes instruction-set conflicts and compromises efficiency, so considerable optimization effort is needed. Virtualization has since entered the hardware-assisted stage. Hardware-assisted virtualization uses hardware circuits to implement functions previously provided by software-only solutions, reducing the VMM footprint and overhead and removing the need for either CPU paravirtualization or binary translation. With hardware assistance, the VMM design becomes simpler and can be developed in line with Common Criteria (CC). In addition to handling privileged instructions on processors, hardware-assisted virtualization supports I/O virtualization so that the whole platform can be virtualized. The development of x86 platform virtualization shows that the virtualization era has arrived and that this technology will be widely used.

Based on the technology architecture, virtualization can be classified into full virtualization, paravirtualization, and hardware-assisted virtualization.
Benefits and Characteristics

Benefits: Hardware utilization is improved, power consumption is reduced, IT O&M efficiency is enhanced (so fewer system administrators are required), and the OS is decoupled from the hardware.

Key characteristics: partitioning, isolation, encapsulation, and independence from hardware.

Partitioning: Partitioning refers to the VMM capability of allocating server resources to multiple VMs. Each VM can run an independent OS (the same OS or different ones), so multiple applications can coexist on one server. Each OS has access only to its own virtual hardware (virtual NICs, virtual memory, and virtual CPUs) provided by the VMM.

Isolation: VMs are isolated from one another. Even if one VM crashes or fails because of an OS failure, application breakdown, or driver failure, the other VMs keep running properly; each VM behaves as if it were on an independent physical machine. If a VM is infected with worms or viruses, they are confined to that VM. Resources can also be managed to provide performance isolation: users can specify the maximum and minimum resource usage for each VM so that one VM cannot consume all resources and starve the other VMs on the same system. Multiple workloads, applications, or OSs can run concurrently on one physical server, avoiding problems common on x86 servers, such as application or DLL conflicts.

Encapsulation: Encapsulation stores all VM-related content, including the hardware configuration, BIOS configuration, memory status, disk status, and CPU status, in a group of files that are independent of the physical hardware. Users can therefore clone, save, and migrate a VM by copying, saving, and migrating these files.

Independence from hardware: All data of a VM is saved in files, so a VM can be moved and replicated by moving and replicating the files, and VMs can be migrated between hosts without modification.

Before and after Huawei desktop cloud deployment:
Server: PCs before; 4,093 servers plus TCs after.
Resource usage: 5% before; more than 52% (NC+CI) after.
Power consumption per 24 hours (W):
Service server preparation duration: 3 months before; less than 3 days after.
Maintenance efficiency: 100/person before; more than 1,000/person after.
Concepts

Guest OS: the OS installed on a VM.
Guest machine: the virtual machine (VM) itself.
Hypervisor: the virtualization software layer, also called the virtual machine monitor (VMM).
Host OS: the OS installed on the physical server.
Host machine: the physical server.

Guest OS and host OS: If one physical machine is virtualized into multiple virtual machines, the physical machine is the host machine and the virtual machines are guest machines. The OS installed on the physical machine is the host OS, and the OS installed on a virtual machine is the guest OS.

Hypervisor: Through the virtualization performed by the virtualization layer, upper-layer software regards a VM as a real machine. This virtualization layer is called the virtual machine monitor (VMM) or hypervisor.
Mainstream Virtualization Types
The mainstream virtualization types are hosted virtualization, bare-metal virtualization, OS virtualization, and hybrid virtualization.

Hosted virtualization
Advantage: simple and easy to implement.
Disadvantage: installing and running applications depends on the host OS supporting the VMs; high management costs and performance overhead.
Typical product: VMware Workstation.
Description: The virtualization management software runs as an ordinary application on the underlying OS (Windows or Linux), and the VMs it creates share the underlying server resources.

Bare-metal virtualization
Advantage: VMs are independent of any host OS; multiple OSs and applications are supported.
Disadvantage: developing the virtualization layer kernel is difficult.
Typical products: VMware ESX Server, Citrix XenServer, Huawei FusionSphere.
Description: The hypervisor is a VM monitoring program that runs directly on the physical hardware. It implements two basic functions: identifying, capturing, and responding to CPU privileged or protected instructions issued by VMs, and handling VM queuing and scheduling and returning the physical hardware processing results to the corresponding VM.

OS virtualization
Advantage: very low management costs.
Disadvantage: poor isolation; multiple containers share one OS.
Typical product: Virtuozzo.
Description: No independent hypervisor layer is provided. Instead, the host OS allocates hardware resources to multiple virtual servers and isolates them from one another. A distinct difference from other virtualization types is that all virtual servers must run the same OS, although each instance has its own applications and user accounts.

Hybrid virtualization
Advantage: no redundant layers and high performance (compared with hosted virtualization); multiple OSs are supported.
Disadvantage: the underlying hardware must support virtualization extensions.
Typical product: Red Hat KVM.
Description: Like hosted virtualization, a host OS is used, but instead of deploying a management program on the host OS, a kernel driver is inserted into the OS kernel. This driver acts as a virtual hardware monitor (VHM) that coordinates hardware access between the VMs and the host OS. Hybrid virtualization therefore relies on the CPU scheduler and memory manager provided by the existing kernel; as with bare-metal and OS virtualization, avoiding a redundant memory monitor and CPU scheduler significantly improves system performance.

Bare-metal virtualization and hybrid virtualization are the development trends of virtualization architecture.
Contents

Introduction to Computing Virtualization
  Development Background
  Working Principles: CPU virtualization, memory virtualization, and I/O virtualization
Introduction to FusionCompute Computing Virtualization Features
CPU Virtualization: CPUs Shared by VMs

CPU virtualization uses a mechanism similar to that of a traditional OS, namely timer interrupts, to trigger privileged instructions to trap into the VMM. The system then schedules physical CPU resources among the VMs based on its scheduling mechanism.

The protection mode of the x86 processor has four privilege levels, Ring 0 to Ring 3, which serve different functions. Ring 0, used for the OS kernel, has the highest privilege. Ring 1 and Ring 2 are used for OS services and have the second-highest privilege. Ring 3 is used for applications and has the lowest privilege.

The classical virtualization method is privilege deprivileging with trap-and-emulation: the guest OS runs at a non-privileged level (deprivileging) while the virtual machine monitor (VMM) runs at the highest privilege level and fully controls system resources. After the guest OS is deprivileged, most of its instructions run directly on the hardware; only privileged instructions trap into the VMM for emulated execution (trap-and-emulation).
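The trap-and-emulate flow can be illustrated with a small, purely illustrative sketch (not hypervisor code; the opcode names and state fields are made up): unprivileged guest instructions run directly, while privileged ones trap into a VMM handler that emulates them on the guest's behalf.

```python
# Illustrative trap-and-emulate sketch: privileged guest instructions trap
# into the VMM, which emulates them; everything else is assumed to run
# directly on the physical CPU.

PRIVILEGED = {"HLT", "OUT", "WRITE_CR3"}  # hypothetical privileged opcodes

class Vmm:
    def __init__(self):
        self.vcpu_state = {"cr3": 0, "halted": False}

    def trap(self, instr, operand):
        # Emulate the privileged instruction instead of letting the guest
        # touch the physical hardware directly.
        if instr == "WRITE_CR3":
            self.vcpu_state["cr3"] = operand      # update shadow state only
        elif instr == "OUT":
            print(f"[vmm] emulated I/O write: {operand}")
        elif instr == "HLT":
            self.vcpu_state["halted"] = True      # deschedule the vCPU
        return self.vcpu_state

def run_guest(vmm, instructions):
    for instr, operand in instructions:
        if instr in PRIVILEGED:
            vmm.trap(instr, operand)              # trap into the VMM
        else:
            pass                                  # runs directly on the CPU

if __name__ == "__main__":
    guest = [("ADD", None), ("WRITE_CR3", 0x1000), ("OUT", 0x42), ("HLT", None)]
    vmm = Vmm()
    run_guest(vmm, guest)
    print(vmm.vcpu_state)
```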
Memory Virtualization
Memory virtualization centrally manages the physical memory of physical servers and allocates memory resources to multiple VMs.

An OS running on a physical machine makes two assumptions about its memory: the memory starts from physical address 0, and the memory addresses are consecutive. After virtualization is introduced, these assumptions cause problems: there is only one physical address 0, which cannot satisfy multiple concurrent VMs, and although consecutive physical addresses could be allocated to each VM, doing so results in low memory utilization and a lack of flexibility (for example, for memory sharing).

The core of memory virtualization is to introduce a new layer of address space, the guest physical address space. Each VM believes it is running in its own physical address space, but it actually accesses machine memory through the VMM, which maintains the mapping between the guest address space and the machine address space.
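A minimal sketch of the extra translation layer described above, assuming made-up page tables: the guest maps virtual pages to "guest physical" pages, and the VMM maps guest physical pages to machine pages, so every guest can believe its memory starts at address 0.

```python
PAGE = 4096  # hypothetical page size

# Guest page table: guest virtual page -> guest physical page (per VM).
guest_page_table = {0: 0, 1: 1}            # guest believes memory starts at 0

# VMM mapping: (vm_id, guest physical page) -> machine page.
p2m = {("vm1", 0): 7, ("vm1", 1): 3,
       ("vm2", 0): 9, ("vm2", 1): 4}       # both VMs "start at 0" safely

def translate(vm_id, guest_vaddr):
    """Translate a guest virtual address to a machine address in two stages."""
    gvpn, offset = divmod(guest_vaddr, PAGE)
    gppn = guest_page_table[gvpn]          # stage 1: guest page table
    mpn = p2m[(vm_id, gppn)]               # stage 2: VMM-maintained mapping
    return mpn * PAGE + offset

print(hex(translate("vm1", 0x10)))         # same guest address ...
print(hex(translate("vm2", 0x10)))         # ... lands on different machine pages
```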
I/O Virtualization

VMs share a limited set of peripheral resources: the VMM intercepts the access requests sent by guest OSs to devices and simulates the devices' behavior in software. The front-end driver forwards data to the back-end driver through an interface provided by the VMM, and the back-end driver processes the data from different VMs in separate time slices and channels.

In Xen, Domain U is an ordinary VM running on the Xen hypervisor. Domain 0, a modified Linux kernel, is a privileged VM running on the Xen hypervisor; it can access physical I/O resources and interworks with the other VMs running on the system. Domain 0 must start before any other domain.

I/O virtualization needs to resolve two issues:
Device discovery: control which devices a VM can access. Device information for all VMs is stored in the XenStore of Domain 0. The XenBus (a paravirtualized driver framework developed for Xen) in a VM obtains device information by communicating with the XenStore of Domain 0 and then loads the front-end driver corresponding to the device.
Access interception: VMs access devices through I/O ports, and the front-end and back-end drivers reimplement the driver path. The front-end driver forwards data to the back-end driver through the VMM's interface, and the back-end driver processes the VM data in separate time slices and channels.
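The front-end/back-end split can be sketched as a shared request queue, loosely modelled on the Xen approach described above (class names and the queue structure are illustrative, not the actual Xen ring protocol).

```python
from collections import deque

class BackendDriver:
    """Runs in Domain 0 and owns the real device driver."""
    def __init__(self):
        self.ring = deque()                  # shared request ring (simplified)

    def submit(self, request):
        self.ring.append(request)            # front end places a request

    def process(self):
        # Domain 0 drains the ring and talks to the physical device.
        while self.ring:
            req = self.ring.popleft()
            print(f"[dom0 backend] issuing {req['op']} on real device, "
                  f"sector={req['sector']}")

class FrontendDriver:
    """Runs in Domain U; it never touches the physical device."""
    def __init__(self, backend):
        self.backend = backend

    def read(self, sector):
        self.backend.submit({"op": "read", "sector": sector})

backend = BackendDriver()
frontend = FrontendDriver(backend)
frontend.read(128)      # guest I/O request
frontend.read(129)
backend.process()       # Domain 0 services queued requests
```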
Virtualization Supported by Intel Hardware (Extensions)
Intel provides the following hardware virtualization extensions:
VT-x: Virtualization Technology for IA-32 processors.
VT-d: Virtualization Technology for Directed I/O, used, for example, for GPU passthrough.
VT-c: Virtualization Technology for Connectivity, used, for example, for NIC passthrough.
TXT: Trusted Execution Technology.

VT-x adds two operating modes to IA-32 processors: VMX root operation and VMX non-root operation. The VMM runs in VMX root operation mode, and guest OSs run in VMX non-root operation mode. Both modes support the four privilege levels Ring 0 to Ring 3, so both the VMM and the guest OS can select their expected privilege levels. VMs are allowed to execute some instructions directly, reducing the load on the VMM and providing VMs with more stable performance at higher speed. This is implemented in the processor. VT-x is the VT technology provided by Xeon processors; VT-i is the VT technology provided by Itanium processors.

VT-d (VT for Directed I/O) is implemented in the chipset. It allows VMs to directly access I/O devices, reducing the load on the VMM and CPU.

VT-c (VT for Connectivity) is implemented on the NIC through two core technologies, VMDq and VMDc. VMDq classifies data packets for different VMs using hardware on the NIC and distributes the packets to each VM through the VMM, reducing the CPU overhead of packet classification in the VMM. VMDc allows VMs to directly access NIC devices. Single Root I/O Virtualization (SR-IOV), a PCI-SIG standard, allows one PCIe device to be presented to multiple VMs for direct access; GE controllers support SR-IOV.

Trusted Execution Technology (TXT) effectively prevents various security threats to user computers through the TXT module chip. The hardware kernel and subsystems control which computer resources can be accessed, greatly reducing risks from computer viruses, malicious code, spyware, and other security threats. Intel TXT can also protect data in the hypervisor, which is important for information managers adopting virtualization, and helps protect VMs from security threats.
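On a Linux host you can quickly check whether the CPU advertises hardware-assisted virtualization before planning a deployment. The sketch below simply looks for the "vmx" (Intel VT-x) or "svm" (AMD-V) flag in /proc/cpuinfo; it is a generic local check, not a FusionCompute tool, and assumes a Linux environment.

```python
def hw_virtualization_flags(path="/proc/cpuinfo"):
    """Return the hardware virtualization flags advertised by the CPU."""
    found = set()
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                found |= {"vmx", "svm"} & set(flags)
    return found

if __name__ == "__main__":
    flags = hw_virtualization_flags()
    if flags:
        print("Hardware-assisted virtualization available:", ", ".join(sorted(flags)))
    else:
        print("No vmx/svm flag found; check BIOS settings or the CPU model.")
```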
Contents

Introduction to Computing Virtualization
Introduction to FusionCompute Computing Virtualization Features
  Across-CPU VM Live Migration
  VM HA
  CPU QoS
  Memory Overcommitment
  Remote CD/DVD-ROM Drive Mounting
  Dynamic Resource Adjustment
  Computing Resource Scheduling
  FusionCompute V100R005C00 New Features
Working Principles of VM Live Migration
Definition: VM live migration moves a running VM to a specified host at the same site.

Working principles (see the sketch below):
1. Transmit VM configuration and device information to the target host.
2. Transmit VM memory: synchronize the initial memory to the target host, then iteratively synchronize the memory pages changed during the transfer.
3. Pause the original VM and transmit the VM status: pause the VM on the original host and transmit the latest memory changes to the target host.
4. Resume the target VM: resume the VM on the target host and stop the VM on the original host.
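The iterative pre-copy idea behind steps 2 and 3 can be sketched as follows (illustrative only; page counts, the dirty-page model, and the thresholds are made-up assumptions): memory is copied in rounds while the VM keeps running, and only the last small set of dirty pages is transferred after the VM is paused.

```python
import random

def live_migrate(total_pages=1000, dirty_rate=0.005, stop_threshold=10, max_rounds=30):
    """Sketch of pre-copy live migration: copy all pages, then repeatedly
    re-copy the pages dirtied in the meantime until the remaining set is small."""
    dirty = set(range(total_pages))          # round 0: everything must be copied
    for round_no in range(max_rounds):
        copied = len(dirty)
        # The VM keeps running, so a fraction of pages gets dirtied again.
        dirty = {p for p in range(total_pages) if random.random() < dirty_rate}
        print(f"round {round_no}: copied {copied} pages, {len(dirty)} dirtied again")
        if len(dirty) <= stop_threshold:
            break
    # Pause the VM, copy the final dirty pages and the CPU/device state,
    # then resume the VM on the target host and stop it on the source.
    print(f"pause VM, copy final {len(dirty)} pages + VM state, resume on target")

live_migrate()
```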
Across-CPU VM Live Migration
Background: Live migration normally requires that the CPUs of the source and target physical servers have the same features or be compatible with each other; otherwise the live migration fails. The Across-CPU VM Live Migration feature is introduced to resolve this compatibility issue.

Working principles: Based on the Intel Flex Migration technology, the hypervisor intercepts CPUID instructions and modifies the returned responses. As a result, a VM does not perceive CPU differences between hosts in the same cluster and can therefore be migrated to any host within the cluster.

Application suggestions: In the host list you can view the highest IMC mode level supported by each host. Users can specify the across-CPU migration mode in the cluster configuration, or try modes from the highest to the lowest level; the first mode level that passes verification is the highest level supported by the cluster, and the cluster can only be set to that level or a lower one. Therefore, do not mix hosts whose CPU generations differ too widely when planning a cluster; otherwise newer CPUs are limited to an old baseline, overall performance deteriorates, and resources are wasted.
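Conceptually, IMC works by reporting only the feature set of the chosen baseline to every VM in the cluster. The sketch below masks a host's CPU feature set down to a hypothetical baseline; the feature names and baseline contents are illustrative, not the real CPUID bit layout.

```python
# Hypothetical per-baseline feature sets (illustrative, not real CPUID bits).
BASELINES = {
    "Nehalem":      {"sse4_2"},
    "Westmere":     {"sse4_2", "aes"},
    "Sandy Bridge": {"sse4_2", "aes", "avx"},
}

def imc_visible_features(host_features, baseline):
    """Return the features a VM sees: host features masked to the baseline."""
    return host_features & BASELINES[baseline]

host_a = {"sse4_2", "aes", "avx", "avx2"}   # newer host
host_b = {"sse4_2", "aes"}                  # older host

# With the cluster baseline set to Westmere, both hosts expose the same set,
# so a VM can migrate between them without noticing a CPU change.
print(imc_visible_features(host_a, "Westmere"))
print(imc_visible_features(host_b, "Westmere"))
```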
Across-CPU VM Live Migration Application and Configuration
In the advanced settings of a cluster, an option for enabling the Incompatible Migration Cluster (IMC) mode is provided. The following baselines are supported: Merom, Penryn, Nehalem, Westmere, Sandy Bridge, and Ivy Bridge.

"Tick-Tock" is the model adopted by chip manufacturer Intel to follow every microarchitectural change with a die shrink of the process technology: every "tick" is a shrink of the previous microarchitecture's process technology, and every "tock" is a new microarchitecture. In ascending time order, the architectures are Merom, Penryn, Nehalem, Westmere, Sandy Bridge, Ivy Bridge, and Haswell.
Contents

Introduction to Computing Virtualization
Introduction to FusionCompute Computing Virtualization Features
  Across-CPU VM Live Migration
  VM HA
  CPU QoS
  Memory Overcommitment
  Remote CD/DVD-ROM Drive Mounting
  Dynamic Resource Adjustment
  Computing Resource Scheduling
  FusionCompute V100R005C00 New Features
VM HA

Definition: If a server or a VM fails, the system automatically restarts the VM on another available server.
Detectable physical server faults: host power-off, restart, and breakdown.
Detectable VM OS faults: Windows blue screen of death (BSOD) and Linux panic.

Working principles (see the sketch below):
1. A VM or a host becomes faulty.
2. The management node queries the VM status and detects that the VM is faulty.
3. The management node finds that the faulty VM has the HA function enabled and selects an available host on which to start the VM, based on the stored VM information (specifications and volumes).
4. After receiving the HA request, the selected host starts the VM based on the VM specifications and volume information and attaches the original VM disks (including user disks).
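A minimal sketch of the HA decision described above (the data structures and placement rule are illustrative assumptions, not the management-node implementation): when a host is reported faulty, every HA-enabled VM on it is restarted on another host with enough free resources, and its original disks are reattached.

```python
hosts = {
    "host1": {"free_mem_gb": 0,  "ok": False},   # failed host
    "host2": {"free_mem_gb": 16, "ok": True},
    "host3": {"free_mem_gb": 4,  "ok": True},
}
vms = [
    {"name": "vm1", "host": "host1", "ha": True,  "mem_gb": 8, "disks": ["vol1"]},
    {"name": "vm2", "host": "host1", "ha": False, "mem_gb": 2, "disks": ["vol2"]},
]

def handle_host_failure(failed_host):
    for vm in vms:
        if vm["host"] != failed_host or not vm["ha"]:
            continue                                  # only HA-enabled VMs restart
        candidates = [h for h, info in hosts.items()
                      if info["ok"] and info["free_mem_gb"] >= vm["mem_gb"]]
        if not candidates:
            print(f"{vm['name']}: no host has enough resources")
            continue
        target = max(candidates, key=lambda h: hosts[h]["free_mem_gb"])
        hosts[target]["free_mem_gb"] -= vm["mem_gb"]
        vm["host"] = target
        # Reattach the original volumes (including user disks) on the target.
        print(f"{vm['name']}: disks {vm['disks']} attached, restarted on {target}")

handle_host_failure("host1")
```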
Automatic Service Recovery
Process:
1. A physical server becomes faulty.
2. The system detects the fault at a scheduled time.
3. The system selects a host based on the scheduling algorithm and starts the VM on that node.
4. The system attaches the original storage volume to this node and starts the VM.

Application scenario: This feature suits unattended systems that run automatically, effectively preventing service interruption.
Notes:
If a server or a VM is faulty, the system automatically migrates the VM to another available server.
The detectable VM OS faults include the Windows blue screen of death, Linux panic, BUG_ON, and Oops.
The service interruption duration is the time it takes for the VM to restart.
Contents

Introduction to Computing Virtualization
Introduction to FusionCompute Computing Virtualization Features
  Across-CPU VM Live Migration
  VM HA
  CPU QoS
  Memory Overcommitment
  Remote CD/DVD-ROM Drive Mounting
  Dynamic Resource Adjustment
  Computing Resource Scheduling
  FusionCompute V100R005C00 New Features
CPU QoS

The hypervisor schedules CPUs in time-sharing mode based on the configured CPU QoS parameters. In this way it controls the share of physical computing resources allocated to each VM, ensuring service QoS.

CPU QoS parameters:
Limit (MHz): the maximum CPU resources that can be allocated to a VM. For example, if a VM has one CPU and this parameter is set to 2000, at most 2000 MHz of CPU resources can be allocated to the VM.
Reserved (MHz): the minimum CPU resources guaranteed for a VM. For example, if a VM has one CPU and this parameter is set to 1000, the VM is guaranteed at least 1000 MHz of CPU resources.
Quota: the CPU share a VM obtains during resource contention; the quota indicates the relative priority or importance of a VM. For example, if the CPU quota of one VM is twice that of another VM, the first VM is entitled to consume twice as many CPU resources as the other during contention.
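The combined effect of the three parameters under contention can be shown with a small calculation (the numbers and the simple allocation rule are illustrative assumptions, not the FusionCompute scheduler): reservations are honoured first, the spare capacity is divided in proportion to the quotas, and no VM exceeds its limit.

```python
def allocate_cpu(capacity_mhz, vms):
    """Split CPU capacity by CPU QoS: honour reservations first, then share
    the remainder by quota, never exceeding each VM's limit (illustrative)."""
    alloc = {vm["name"]: vm["reserved"] for vm in vms}
    spare = capacity_mhz - sum(alloc.values())
    total_quota = sum(vm["quota"] for vm in vms)
    for vm in vms:
        extra = spare * vm["quota"] / total_quota
        alloc[vm["name"]] = min(vm["reserved"] + extra, vm["limit"])
    return alloc

vms = [
    {"name": "vm_a", "reserved": 1000, "limit": 2000, "quota": 2000},
    {"name": "vm_b", "reserved": 1000, "limit": 2000, "quota": 1000},
]
print(allocate_cpu(3000, vms))   # vm_a gets twice vm_b's share of the spare 1000 MHz
```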
CPU QoS Application and Configuration
CPU QoS: upper limit, quota, and reservation
Contents

Introduction to Computing Virtualization
Introduction to FusionCompute Computing Virtualization Features
  Across-CPU VM Live Migration
  VM HA
  CPU QoS
  Memory Overcommitment
  Remote CD/DVD-ROM Drive Mounting
  Dynamic Resource Adjustment
  Computing Resource Scheduling
  FusionCompute V100R005C00 New Features
Definition of Memory Overcommitment
The Memory Overcommitment feature allows the VMs running on a host to use more memory in total than the physical server provides, by means of virtualization technologies such as memory ballooning, zero page sharing, and memory swapping. This improves VM density and reduces the cost of a single VM without adding physical memory. For different VMs, such as management VMs and service VMs, the feature provides different QoS policies to meet customers' service requirements.

Example: with memory overcommitment, a host with 8 GB of physical memory can run a 6 GB Windows VM and a 6 GB Linux VM at the same time.
Working Principle of Memory Overcommitment
Memory overcommitment uses the memory ballooning, zero page sharing, and memory swapping technologies with proper scheduling, so that a VM can still respond promptly to memory access requests and the performance overhead of enabling memory overcommitment stays low.

Memory ballooning: The hypervisor reclaims memory from idle VMs and gives it to VMs with high memory usage, improving memory utilization. Ballooning has little adverse impact on VM performance, although users can notice the reduced memory inside the VM.
Zero page sharing: Zero pages from multiple VMs are merged into a single page in physical memory, releasing more physical memory for VMs to use. Zero page sharing is a special case of transparent page sharing.
Memory swapping: When a VM is under high memory pressure, memory pages are swapped out to disk to release memory. After a VM's memory pages are swapped to disk, the VM's performance deteriorates noticeably.
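The ordering of these techniques can be illustrated with a small policy sketch (the thresholds, sizes, and the policy itself are illustrative assumptions, not the FusionCompute algorithm): ballooning is tried first because users merely see less free memory, then zero page sharing, and swapping only as a last resort because it visibly degrades performance.

```python
def reclaim_memory(needed_mb, idle_vm_mem_mb, zero_page_mb):
    """Illustrative reclamation order: ballooning, then page sharing, then swap."""
    plan, remaining = [], needed_mb

    ballooned = min(remaining, idle_vm_mem_mb)       # reclaim from idle VMs
    if ballooned:
        plan.append(("balloon", ballooned))
        remaining -= ballooned

    shared = min(remaining, zero_page_mb)            # merge zero pages
    if shared:
        plan.append(("zero-page sharing", shared))
        remaining -= shared

    if remaining:                                    # last resort: swap to disk
        plan.append(("swap", remaining))
    return plan

print(reclaim_memory(needed_mb=3000, idle_vm_mem_mb=2048, zero_page_mb=512))
```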
Memory Overcommitment Configuration
Memory overcommitment is configured per cluster and is disabled by default; you can enable it for a cluster. If memory overcommitment is disabled for a cluster, it does not take effect on any host in the cluster. Host memory overcommitment, HA resource reservation, and the VM startup policy are independent of one another; all three are advanced attributes of the cluster.
QoS Configuration for Memory Overcommitment
QoS configuration for VM memory includes the following parameters:
Reserved (MB): the minimum memory reserved for a VM. If Reserved (MB) is set to 0, the system reserves the minimum memory resources required by the VM.
Quota: the memory share a VM obtains during resource contention; the quota indicates the relative priority or importance of a VM. For example, if the memory quota of one VM is twice that of another VM, the first VM is entitled to consume twice as much contended memory as the other.
Size: the maximum memory resources required by a VM.

Notes: (1) A VM can start only when the host can reserve the required resources for it. (2) The memory quota takes effect only when host memory is insufficient. (3) The memory limit equals the VM's memory specification.

Example: A host has 4 GB of memory and runs two VMs, each with 3 GB of configured memory and 1 GB of reserved memory. At least 1 GB is reserved for each VM, and the remaining 2 GB of host memory is allocated to VM_1 and VM_2 as required. If VM_1 requires 1 GB and VM_2 requires 3 GB, the 1 GB reserved for VM_1 is sufficient, while VM_2 uses the remaining 2 GB of host memory. If both VMs require 3 GB, the remaining 2 GB is allocated according to the quotas. If the quota of VM_1 is 2000 and that of VM_2 is 1000, the allocation by quota weight is:
Memory for VM_1 = 1 GB (reserved) + [2000/(2000 + 1000)] x 2 GB ≈ 2.33 GB
Memory for VM_2 = 1 GB (reserved) + [1000/(2000 + 1000)] x 2 GB ≈ 1.67 GB
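The example above can be reproduced with a short calculation, using the 4 GB host and the quota weights 2000 and 1000 given in the text; this is an illustrative sketch, not FusionCompute code.

```python
def allocate_memory(host_gb, vms):
    """Reserved memory first, then split the remaining host memory by quota
    (mirrors the 4 GB host example above)."""
    remaining = host_gb - sum(vm["reserved_gb"] for vm in vms)
    total_quota = sum(vm["quota"] for vm in vms)
    return {vm["name"]: round(vm["reserved_gb"] + remaining * vm["quota"] / total_quota, 2)
            for vm in vms}

vms = [{"name": "VM_1", "reserved_gb": 1, "quota": 2000},
       {"name": "VM_2", "reserved_gb": 1, "quota": 1000}]
print(allocate_memory(4, vms))   # {'VM_1': 2.33, 'VM_2': 1.67}
```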
Relationship Between VM QoS and Host Memory
Relationship between VM memory QoS and host memory specifications:
The total memory reserved for the VMs on a host must be less than or equal to the host's memory specification.
The total memory limit (size) of the VMs on a host can be greater than the host's memory specification.
The quotas of the VMs on a host are not limited.

Example: a host with 10 GB of physical memory can run a 6 GB Windows VM with no reservation and a 6 GB Linux VM with 6 GB reserved (for example, for VIP users).
Contents

Introduction to Computing Virtualization
Introduction to FusionCompute Computing Virtualization Features
  Across-CPU VM Live Migration
  VM HA
  CPU QoS
  Memory Overcommitment
  Remote CD/DVD-ROM Drive Mounting
  Dynamic Resource Adjustment
  Computing Resource Scheduling
  FusionCompute V100R005C00 New Features
Remote CD/DVD-ROM Drive Mounting
With this feature, a local CD/DVD-ROM drive or ISO image can be mounted to a VM running on a server and accessed by the VM remotely over the network.
Working Principle of Remote CD/DVD-ROM Drive Mounting
Remote CD/DVD-ROM drive mounting provides a virtual USB CD/DVD-ROM drive and allows local media, such as a CD/DVD-ROM drive or an ISO file, to be accessed remotely over the network.
Contents

Introduction to Computing Virtualization
Introduction to FusionCompute Computing Virtualization Features
  Across-CPU VM Live Migration
  VM HA
  CPU QoS
  Memory Overcommitment
  Remote CD/DVD-ROM Drive Mounting
  Dynamic Resource Adjustment
  Computing Resource Scheduling
  FusionCompute V100R005C00 New Features
Working Principle of Dynamic Resource Adjustment
Dynamic resource adjustment allows the resources used by a VM to be changed without affecting VM services. Adjustable resources include the number of CPUs, the memory size, the number of NICs, and the number and size of disks.

Three adjustment modes exist: offline adjustment, online adjustment effective upon restart, and online adjustment effective immediately. Support differs by resource type: CPU: Supported, Supported in the competition tests; Memory; Common NIC: N/A; iNIC: Not supported; Number of disks; Disk size (used in virtualization scenarios, and the size can only be increased): Partially supported (for details, see the remarks below).

Remarks on online disk expansion:
The disk capacity can be expanded only when the data store type of the disk is virtualized local disk, virtualized SAN storage, NAS storage, or FusionStorage.
If the data store type is NAS storage and the disk configuration mode is common, the disk capacity cannot be expanded online.
If the data store type is FusionStorage, online disk expansion takes effect only after the VM restarts.
If the data store type is virtualized local disk, virtualized SAN storage, or NAS storage, online disk expansion takes effect immediately when the VM runs one of the following operating systems; for other operating systems, it takes effect after the VM restarts: Windows Server 2003, Windows Server 2008, Windows XP, or Windows 7.
Dynamic Resource Adjustment and Configuration
You can adjust VM resources on the Hardware tab under VM and Template.
Contents

Introduction to Computing Virtualization
Introduction to FusionCompute Computing Virtualization Features
  Across-CPU VM Live Migration
  VM HA
  CPU QoS
  Memory Overcommitment
  Remote CD/DVD-ROM Drive Mounting
  Dynamic Resource Adjustment
  Computing Resource Scheduling
  FusionCompute V100R005C00 New Features
Introduction to Computing Resource Scheduling
What is computing resource scheduling? Computing resource scheduling automatically adjusts VM placement on computing nodes by leveraging the VM live migration technology.

What functions does computing resource scheduling provide?
Load balancing across computing nodes.
Automated power management: the system automatically powers off hosts when the system has many idle resources and powers them back on when resources become insufficient.
Placement rules: VMs can be set to be mutually exclusive on a host or prohibited from running on certain hosts.

What policies can be configured for computing resource scheduling?
Load balancing, automated power management, and advanced scheduling rules.
Computing Resource Scheduling: Load Balancing
Based on the policies configured by users, the system uses VM live migration to balance the CPU and memory usage of the nodes that communicate properly with the management node and are not isolated. Load balancing is performed at an interval of 50 minutes.

Enabling computing resource scheduling: only after computing resource scheduling is enabled can the load-balancing policy, power-saving policy, and advanced scheduling rules be configured.

Automation level: manual or automatic. Manual: during load balancing, the system provides migration suggestions, and the administrator applies them on the portal. Automatic: the system automatically triggers VM migrations to achieve load balancing to the greatest extent.

Measured by: the system decides on VM migrations based on the CPU usage, the memory usage, or both.

Migration threshold: the horizontal axis indicates the time of day in hours, and the vertical axis indicates the aggressiveness level; different aggressiveness levels are shown in different colors, and white indicates that load-balancing scheduling is not performed in that time segment. The threshold is the initial threshold of the standard deviation of host usage for the cluster: conservative: not scheduled; medium: 0.282; radical: 0.07.
Computing Resource Scheduling: Load Balancing Workflow
Workflow:
1. Collect the CPU and memory usage of each node in the cluster (over 10 collection periods).
2. Check whether the standard load deviation of the nodes, for at least five sampling points including the latest one, is greater than the initial threshold. If not, finish.
3. Collect the CPU and memory usage of the VMs that can be scheduled in the cluster.
4. For each schedulable VM, calculate the change in the standard load deviation if the VM were migrated to each other node in the cluster.
5. If a migration reduces the standard deviation (the change is less than 0), keep it as a candidate; select the migrations with the largest reduction of the standard deviation and update the standard deviation accordingly.
6. Stop selecting migrations once their number exceeds 40; add the selected migrations to the migration queue, complete the scheduling of this cluster, and start the next one.

Thresholds: Medium threshold = Initial threshold x sqrt(N) x 2/3; Radical threshold = Initial threshold x sqrt(N) x 2/12.

Usage change caused by a migration: after a VM is migrated from host_A to host_B, the usage of host_A decreases by (CPU/memory usage of the VM x CPU/memory specification of the VM / total CPU/memory of host_A), and the usage of host_B increases by (CPU/memory usage of the VM x CPU/memory specification of the VM / total CPU/memory of host_B).
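The standard-deviation criterion can be sketched as follows (the usage figures and single-VM move are illustrative; the real algorithm also considers sampling history and caps the number of migrations per cycle): a migration is worth proposing only if it lowers the standard deviation of host usage.

```python
from statistics import pstdev

def usage_after_move(usages, vm_load, src, dst):
    """Host usages after moving a VM contributing vm_load from src to dst."""
    new = dict(usages)
    new[src] -= vm_load
    new[dst] += vm_load
    return new

def best_migration(usages, vm_load, threshold):
    """Propose the single move that most reduces the usage standard deviation."""
    base = pstdev(usages.values())
    if base <= threshold:
        return None                                   # cluster already balanced
    best, best_delta = None, 0.0
    for src in usages:
        for dst in usages:
            if src == dst:
                continue
            delta = pstdev(usage_after_move(usages, vm_load, src, dst).values()) - base
            if delta < best_delta:                    # negative delta = improvement
                best, best_delta = (src, dst), delta
    return best

usages = {"host1": 0.90, "host2": 0.20, "host3": 0.10}   # CPU usage ratios
print(best_migration(usages, vm_load=0.20, threshold=0.282))
```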
Computing Resource Scheduling: Automated Power Management
Example: before automated power management is enabled, VM 1, VM 2, and VM 3 are spread across Host 1 and Host 2; after it is enabled, all three VMs run on Host 2 and Host 1 is powered off.
1. If the ratio of each VM's CPU/memory usage to the host's total CPU/memory is 10%, the usage of Host 1 and Host 2 is 10% and 20%, respectively.
2. Under the automated power management policy, the VMs on Host 1 are migrated to Host 2, and Host 1 is powered off.
Computing Resource Scheduling: Automated Power Management
The system implements automated power management for hosts based on the policies configured by users. When the loads on the hosts in a cluster are light, the system automatically migrates all VMs off some hosts and powers those hosts off; when the loads are heavy, the system automatically powers some hosts back on. Automated power management is performed at an interval of 10 minutes.

Enabling automated power management: the administrator can set the power management threshold for each time segment only after automated power management is enabled.

Power management threshold: the horizontal axis indicates the time of day in hours, and the vertical axis indicates the aggressiveness level; different aggressiveness levels are shown in different colors, and the white color indicates automatic power-on and power-off.
Computing Resource Scheduling: Automated Power Management Workflow
Workflow:
1. Collect the CPU and memory usage of each node in the cluster (over 10 collection periods).
2. For each sampling point, determine from the CPU and memory usage whether the node is heavy-loaded, light-loaded, or normal, and calculate the light-loaded and heavy-loaded scores.
3. If the cluster is heavy-loaded, power on a host and select appropriate nodes from which to migrate VMs onto it.
4. If the cluster is light-loaded, select an appropriate node, migrate its VMs to other nodes, and power off the original node.

Light-loaded score: the sum over all nodes of the squared light-loaded values (if the CPU/memory usage is smaller than the light-loaded threshold, light-loaded value = light-loaded threshold - CPU/memory usage of the node; otherwise the value is 0).
Heavy-loaded score: the sum over all nodes of the squared heavy-loaded values (if the CPU/memory usage is greater than the heavy-loaded threshold, heavy-loaded value = CPU/memory usage of the node - heavy-loaded threshold; otherwise the value is 0).

Deciding between light and heavy loading:
1. If the light-loaded score equals the heavy-loaded score, no node is powered on or off.
2. If the light-loaded score is greater than the heavy-loaded score, the cluster is considered light-loaded.
3. If the light-loaded score is smaller than the heavy-loaded score, the cluster is considered heavy-loaded.

Power-off node selection principles:
1. The node has not been automatically powered off within the last hour.
2. Live migration can be performed for all VMs on the node.
3. The host is not connected to FusionStorage.

Power-on node selection principles:
1. The node has not been automatically powered on within the last hour.
2. Nodes with larger specifications are preferred.
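The light-loaded/heavy-loaded scoring can be reproduced with a short sketch using the sum-of-squares rule quoted above; the thresholds and usage values are illustrative assumptions.

```python
def power_scores(usages, light_threshold=0.2, heavy_threshold=0.8):
    """Sum of squared distances below/above the thresholds across all hosts,
    following the scoring rule described above (illustrative thresholds)."""
    light = sum((light_threshold - u) ** 2 for u in usages if u < light_threshold)
    heavy = sum((u - heavy_threshold) ** 2 for u in usages if u > heavy_threshold)
    return light, heavy

def power_decision(usages):
    light, heavy = power_scores(usages)
    if heavy > light:
        return "power on a host and migrate VMs onto it"
    if light > heavy:
        return "consolidate VMs and power off a light-loaded host"
    return "no power action"

print(power_decision([0.05, 0.10, 0.60]))   # mostly idle -> consolidate and power off
print(power_decision([0.95, 0.90, 0.85]))   # overloaded -> power on another host
```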
Computing Resource Scheduling: Advanced Scheduling Rules
What are advanced scheduling rules? After computing resource scheduling is enabled, the system schedules VMs based on the configured advanced scheduling rules. The scheduling interval is 50 minutes.

Which advanced scheduling rules are available?
Mutually exclusive: a group of VMs, any two of which must run on different hosts.
VMs to hosts: a group of VMs that must run on a specified group of hosts.

Relationship with load balancing and automated power management: the priority of advanced scheduling rules is higher than that of load balancing and lower than that of automated power management.
Computing Resource Scheduling: Advanced Scheduling Rule Configuration — Mutually Exclusive
Select a cluster, choose Set Computing Resource Scheduling > Rule Management, click Add, and choose Keep VMs mutually exclusive for Type. In the displayed dialog box, create the mutual exclusion rule for the VMs.
Computing Resource Scheduling: Advanced Scheduling Rules Configuration — VMs to Hosts
Preparations: a host group and a VM group have been created. Procedure: select a cluster, choose Set Computing Resource Scheduling > Rule Management, click Add, and choose VMs to hosts for Type. In the displayed dialog box, create the VMs to hosts rule.
Computing Resource Scheduling: Example of Advanced Scheduling Rules
Effect of combining the load balancing and power saving policies (example with VM 1 through VM 8 on Host 1, Host 2, and Host 3):
If the ratio of each VM's CPU/memory usage to the host's total resources (CPU/memory usage of the VM x VM specification / total resources on the host) is 10%, the usage of Host 1, Host 2, and Host 3 is 50%, 20%, and 10%, respectively.
Under the power management policy, the VMs on Host 3 are migrated to Host 1 or Host 2, and Host 3 is then powered off.
The load-balancing policy then distributes the VMs evenly across Host 1 and Host 2. VM 8 may end up on either Host 1 or Host 2, and the VMs newly migrated to Host 2 may be any two of VM 1 through VM 5 and VM 8.
Contents

Introduction to Computing Virtualization
Introduction to FusionCompute Computing Virtualization Features
  Across-CPU VM Live Migration
  VM HA
  CPU QoS
  Memory Overcommitment
  Remote CD/DVD-ROM Drive Mounting
  Dynamic Resource Adjustment
  Computing Resource Scheduling
  FusionCompute V100R005C00 New Features
VRM-Independent HA

Function description: A master node is elected in the cluster to monitor the status of the hosts in the cluster and maintain the VM list. If a host fails, the master node restarts all VMs that were running on that host on other hosts. Users can enable or disable the VRM-independent HA function: if it is disabled, the VRM node implements the HA function; if it is enabled, the elected master node implements the HA function.

Restriction: VRM-independent HA supports only virtualized storage, because VIMS is required for storing the storage heartbeats and configuration data.
VRM-Independent HA: Module Functions

VRM module: distributes configuration data, such as host configuration data, VM compatibility information, and heartbeat storage data; receives the topology reported by the HAD module; and coordinates and maintains the network rules saved on data stores for VM startup.

HAD module: elects the master node based on the node configuration. The master node monitors the management and storage heartbeats of all slave nodes to determine their status. If the master node detects that a slave node is faulty, it restarts all affected VMs on other nodes. The HAD module also identifies the current state of nodes (for example, network-partitioned or isolated) and restores the hosts after the network recovers. If the master node itself becomes faulty, another node is elected as the master node.

VNA module: saves VM configuration data and network configuration information on shared storage, starts VMs on nodes using this configuration data, and allows the HAD module to query the list of running VMs.
Enhanced DRS and DPM Functions
In FusionCompute V100R005C00, the DRS and DPM scheduling algorithms are optimized to improve the scheduling efficiency and reduce the number of scheduling operations required. The migration and power management thresholds include conservative, slightly conservative, medium, slightly radical, and radical. FusionCompute V100R005C00 adds support for DRS scheduling baseline configuration.
OVF Template

Open Virtualization Format (OVF) is a packaging standard introduced by the Distributed Management Task Force (DMTF) for publishing and deploying VMs and virtual applications (vApps). The OVF specification defines two ways of grouping files: the OVF package and the OVA package (an OVF package in TAR format).

A standard OVF package includes:
One OVF descriptor with the extension .ovf: an XML file that defines the components of a VM or vApp and the features and resource requirements of each component.
Zero or one OVF manifest with the extension .mf.
Zero or one OVF certificate with the extension .cert.
Zero or more disk image files with the extension .vmdk, .vhd, or others.
Zero or more additional resource files with the extension .iso.

A Huawei OVF template encapsulates only one VM and supports both the OVF and OVA packages. A Huawei OVF package includes:
One OVF descriptor with the extension .ovf.
One or more disk image files with the extension .vhd, in the compressed format used by Huawei for streaming transmission.
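As a quick illustration of the package layout, the sketch below lists the files and virtual systems referenced by an OVF descriptor using Python's standard XML parser. The file name is hypothetical, and matching elements by local name is a simplification; real descriptors declare the OVF namespaces explicitly.

```python
import xml.etree.ElementTree as ET

def list_ovf_contents(descriptor_path):
    """List the files and virtual systems referenced by an OVF descriptor.

    Matches elements by local name so it works regardless of the namespace
    prefixes used in the descriptor (illustrative helper only)."""
    tree = ET.parse(descriptor_path)
    files, systems = [], []
    for elem in tree.iter():
        local = elem.tag.rsplit("}", 1)[-1]          # strip the XML namespace
        if local == "File":
            # The href attribute points to a disk image or resource file.
            href = next((v for k, v in elem.attrib.items() if k.endswith("href")), None)
            files.append(href)
        elif local == "VirtualSystem":
            sid = next((v for k, v in elem.attrib.items() if k.endswith("id")), None)
            systems.append(sid)
    return files, systems

if __name__ == "__main__":
    files, systems = list_ovf_contents("template.ovf")   # hypothetical file name
    print("Referenced files:", files)
    print("Virtual systems:", systems)
```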
Enhanced Template Importing and Exporting Functions
FusionCompute V100R005C00 introduces support for the HTTP and HTTPS protocols during OVF template importing and exporting. Users can directly browse and select a local directory during template importing and exporting. They do not need to share the template and configure the username and password for accessing the template.
Antivirus Virtualization
Compared with the traditional antivirus approach, antivirus virtualization requires no antivirus software to be installed on user VMs and consumes no computing resources on them, leaving sufficient storage and computing resources for user VMs. Compared with online antivirus, it provides higher performance because the system uses host physical memory swapping to scan, monitor, and remove viruses and therefore does not require network resources on user VMs.
VNC Login Optimization
FusionCompute V100R005C00 introduces support for the HTML5-based noVNC client. Compared with the TightVNC client supported in earlier versions, noVNC offers the following benefits: no JRE plug-in is required in the browser, and mouse and keyboard operations are smoother, with the system responding quickly to them. Note that HTML5 support depends on the browser version. When a user clicks VNC Login and the system detects that noVNC is supported, it displays a dialog box for the user to select a VNC login mode; if noVNC is not supported, the user logs in to the VM directly using TightVNC.