Office of Technology Operations & Planning
Unlocking the Power of Server Virtualization
Rebecca Astin, Office of Technology Operations and Planning, National Computer Center
August 17, 2010
Topics
What is Server Virtualization?
How Does it Work?
Dell Study
EPA's Implementation
Physical to Virtual Conversion
Plans for the Future
What is Virtualization?
A method of partitioning one or more physical servers into multiple “virtual machines”
A virtual machine:
–Runs its own full-fledged operating system
–Can be independently rebooted
–Is isolated from other virtual machines running on the same physical hardware
–Is insulated from physical hardware changes
Virtualization is Not New
Hypervisor technology has been around since IBM popularized VM/370 on its mainframes in the 1970s
What is new is bringing this virtualization technology to low-cost, small- and medium-scale x86-based servers
Benefits of Virtualization
Improved service, faster innovation, greater efficiency
Faster deployment of new servers
Higher availability of servers
Higher service levels and reduced maintenance downtime
–Reduce security patch and software update headaches
–Adjust CPU and memory resources as needed
–More flexible disaster recovery options
–Less time spent administering servers
Consolidation of physical servers
Increased energy efficiency
Lower infrastructure costs
Energy Efficiencies / Savings
Process Efficiencies / Savings
Environmental Savings
Virtual Technology in Action
“Hello World”
At VMworld, VMware served 37,248 virtual desktops and servers on 776 physical servers:
Physical Servers              | Virtual Servers
37,248 physical machines      | 776 physical machines
25 megawatts                  | 540 kilowatts
Takes up to 3 football fields | Takes up only a football end zone
Months to set up              | 4 days to set up
Key Properties of Virtual Machines
Partitioning
–Run multiple operating systems on one physical machine
–Divide system resources between virtual machines
Isolation
–Fault and security isolation at the hardware level
–Advanced resource controls preserve performance
Encapsulation
–The entire state of a virtual machine can be saved to files
–Move and copy virtual machines as easily as moving and copying files
Hardware Independence
–Provision or migrate any virtual machine to any similar or different physical server
Creating Virtual Machines
Virtual Technology
Software manages the deployment and configuration of virtual machines, as well as the allocation of pooled resources
The following features enhance the reliability and manageability of a server deployment:
–VMotion: capability to move a running virtual machine from one ESX host to another
–Storage VMotion: capability to move a running virtual machine's storage from one storage device to another
–Distributed Resource Scheduler (DRS): automatic load balancing of an ESX cluster using VMotion
–High Availability (HA): in case of a hardware failure in a cluster, the virtual servers automatically restart on another host in the cluster
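As a rough illustration of how these management features can be scripted, the sketch below uses pyVmomi, the Python bindings for the vSphere API (the bindings postdate the vCenter 2.5 deployment described in this deck, but the underlying operations are the same in spirit). The vCenter hostname, credentials, and object names are placeholders, not values from EPA's environment.

```python
# Sketch only: connect to vCenter with pyVmomi and list the hosts and VMs it manages.
# Hostname, credentials, and object names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()  # lab convenience: skip certificate checks
si = SmartConnect(host="vcenter.example.gov", user="admin",
                  pwd="password", sslContext=context)
content = si.RetrieveContent()

# Walk the inventory for every ESX host and virtual machine vCenter manages.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem, vim.VirtualMachine], True)
for obj in view.view:
    kind = "host" if isinstance(obj, vim.HostSystem) else "vm"
    print(f"{kind}: {obj.name}")

Disconnect(si)
```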
VMotion
Move running virtual machines from one physical host to another with zero downtime
Automatically optimize virtual machines within resource pools
Perform hardware maintenance without scheduling downtime or disrupting business operations
Schedule migrations to happen at pre-defined times, without an administrator's presence
Maintain an audit trail with a detailed record of migrations
The current cluster has performed 417 vMotion migrations, all without interruption to end users
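A migration like the 417 counted above can also be requested programmatically. The following is a minimal sketch using pyVmomi, assuming hypothetical VM and host names and placeholder connection details.

```python
# Sketch only: request a VMotion (live migration) of one VM to another ESX host.
# Connection details and the VM/host names are hypothetical.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

def find_by_name(content, vim_type, name):
    """Return the first inventory object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim_type], True)
    return next((obj for obj in view.view if obj.name == name), None)

si = SmartConnect(host="vcenter.example.gov", user="admin",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

vm = find_by_name(content, vim.VirtualMachine, "intranet-web-01")   # placeholder VM
host = find_by_name(content, vim.HostSystem, "esx02.example.gov")   # placeholder host

# MigrateVM_Task moves the running VM; it stays powered on throughout.
task = vm.MigrateVM_Task(host=host,
                         priority=vim.VirtualMachine.MovePriority.defaultPriority)
WaitForTask(task)
Disconnect(si)
```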
VMware High Availability (HA)
In the event of a host failure, VMs automatically restart on another host within the cluster
Reduces planned downtime for common hardware maintenance or upgrades
Ensures rapid recovery from server failures with automatic restart of systems
VMware Distributed Resource Scheduler (DRS)
Provides dynamic load balancing and allocation of resources for virtual machines
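HA (previous slide) and DRS are both cluster-level settings. The sketch below shows one way they might be enabled through the vSphere API with pyVmomi; the cluster name and connection details are placeholders, and the bindings themselves postdate the vCenter 2.5 deployment discussed here.

```python
# Sketch only: enable HA and fully automated DRS on an existing cluster.
# Cluster name and connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.gov", user="admin",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "Intranet-Cluster")  # placeholder name

spec = vim.cluster.ConfigSpecEx(
    # HA: restart VMs on a surviving host if their host fails
    dasConfig=vim.cluster.DasConfigInfo(enabled=True),
    # DRS: let vCenter place and rebalance VMs automatically via VMotion
    drsConfig=vim.cluster.DrsConfigInfo(
        enabled=True,
        defaultVmBehavior=vim.cluster.DrsConfigInfo.DrsBehavior.fullyAutomated))

# modify=True merges this partial spec into the cluster's current configuration.
WaitForTask(cluster.ReconfigureComputeResource_Task(spec, modify=True))
Disconnect(si)
```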
VMware Storage VMotion
Capability to move a running virtual machine's storage from one storage device to another
Easy to migrate to a different class of storage
Optimize storage I/O by moving VMs to the best-performing LUNs and eliminating storage I/O bottlenecks
[Diagram: a running VM's storage migrating from off-lease Array A (LunA1, LunA2) to new Array B (LunB1, LunB2)]
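A Storage VMotion can likewise be requested through the API by relocating a running VM to a different datastore. This is a sketch only, using pyVmomi with hypothetical VM and datastore names.

```python
# Sketch only: Storage VMotion - move a running VM's disks to another datastore.
# VM and datastore names, plus connection details, are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

def find_by_name(content, vim_type, name):
    """Return the first inventory object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim_type], True)
    return next((obj for obj in view.view if obj.name == name), None)

si = SmartConnect(host="vcenter.example.gov", user="admin",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

vm = find_by_name(content, vim.VirtualMachine, "intranet-web-01")  # placeholder VM
target = find_by_name(content, vim.Datastore, "arrayB-lun1")       # placeholder datastore

# A RelocateSpec with only a datastore set moves storage while the VM keeps running.
spec = vim.vm.RelocateSpec(datastore=target)
WaitForTask(vm.RelocateVM_Task(spec))
Disconnect(si)
```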
Challenges to Virtualization
Challenge: A complex data center, with major investments in server infrastructure already made and increasing substantially in recent years
Need: To economize on the present server infrastructure
Challenge: Increase investments in applications and other server software in upcoming years, but do so economically
Need: A server platform that will scale and provide future server efficiencies
Challenge: Increase agility by decreasing time to deployment, and improve high-availability and disaster recovery capabilities
Need: An agile and highly available server infrastructure
Barriers to Entry
Barrier: Not disrupting daily business operations while shifting to a virtualized infrastructure
Solution: Careful planning of how to shift software from existing servers to virtual machines
Barrier: Assuring application owners that they will still “own” their servers and applications (even though they will share technology resources)
Solution: Pointing customers to others whose applications have shifted to virtual machines
Barrier: Knowing the capabilities and limitations of server hardware and software
Solution: Working closely with providers to know what is and is not technologically possible
Barrier: Creating a server infrastructure that will scale with growing business needs
Solution: Setting up the new virtualized infrastructure in a way that makes it easy to expand
Where to Start?
Planning and Preparation
Get comfortable with the technology: research, research, research
Consult with Gartner/Burton Group and VM industry leaders
Dell Virtualization Readiness Study
Training and briefings
Intranet pilot
Storage Area Network (SAN) upgrade
Telecommunications infrastructure upgrade
Zone network upgrade
Virtualization Path
Virtual Application Hosting
Dell R900s
–6-server Intranet cluster (expanding to 8)*
–128 computing cores (323 GHz of processing power)
–768 GB RAM
–10 TB SAN storage
VMware ESXi (v3.5, update 5)
–Hardware-based virtualization
–The VMkernel hypervisor runs directly on the hardware
vCenter (v2.5, update 6)
*As of May 2010
Intranet Cluster
154 virtual machines hosted in the NCC*
Approximately 35-50 VMs per physical host
P2V: converted 40 physical servers to virtual servers*
–Reduced hardware from 40 physical servers to six
–Reduced the number of network cables needed
–Increased capacity of the network connection (gigabit uplink)
*As of May 2010
Virtual Architecture
Basic VM Build
Windows
–W2K3 server
–Single core (most P2Vs utilize less than 5% CPU; allocate 1 CPU to guarantee processing; continuously monitor to see if it needs to expand)
–4 GB RAM
–20 GB C: drive for the OS (for P2V conversions, storage is figured at current usage + 10%)
Linux
–Red Hat 4 32-bit, Red Hat 4 64-bit, Red Hat 5 32-bit, and Red Hat 5 64-bit
–Dual core
–2 GB RAM
–20 GB rootvg for the OS (for P2V conversions, storage is figured at current usage + 10%)
–Max 500 GB data storage
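For illustration, a shell matching the basic Windows profile above could be provisioned through the vSphere API roughly as follows. This is a sketch under stated assumptions, not EPA's actual build process: the VM name, datastore, guest OS identifier, and the datacenter/resource-pool lookup are all hypothetical, and disks and NICs are omitted.

```python
# Sketch only: register a new VM shell matching the basic Windows build
# (1 vCPU, 4 GB RAM; the 20 GB system disk would be added separately).
# Names, datastore, and guest ID below are assumptions.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.gov", user="admin",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

datacenter = content.rootFolder.childEntity[0]        # first datacenter found
vm_folder = datacenter.vmFolder
cluster = datacenter.hostFolder.childEntity[0]        # first cluster found
resource_pool = cluster.resourcePool                  # the cluster's root resource pool

config = vim.vm.ConfigSpec(
    name="w2k3-basic-vm",                             # placeholder VM name
    numCPUs=1,                                        # single vCPU to start
    memoryMB=4096,                                    # 4 GB RAM
    guestId="winNetStandardGuest",                    # assumed Windows Server 2003 guest ID
    files=vim.vm.FileInfo(vmPathName="[san-datastore-1]"))  # placeholder datastore

# Disks and network adapters would be appended to config.deviceChange in a real build.
WaitForTask(vm_folder.CreateVM_Task(config=config, pool=resource_pool))
Disconnect(si)
```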
Security
VMware ESXi server host
–No “host” OS (a point of attack)
–No service console (no SSH login)
–No command line
Virtual machines
–Are secured in the same manner as physical machines: virus scans, patches, logging, Standard Configuration Documents (SCDs)
–Can be isolated by firewall software, either at the network level or at the VM level
–Cannot talk to other VMs unless part of the same virtual LAN or network
Dell Readiness Study
217 servers studied
–216 of the 217 servers were identified as good candidates for virtualization
–Could be consolidated onto 9 Dell PowerEdge R900s (4 x quad-core 2.93 GHz CPUs and 128 GB RAM each)
Product                               | # of Planned VMs | # of Hosts Required | Approximate Consolidation Ratio | Average CPU Utilization per Host | Average Memory Utilization per Host
VMware VI 3.5 Enterprise on Dell R900 | 216              | 9                   | 24:1                            | 33%                              | 56%
Benefits Identified in the Study
A 96% consolidation ratio would achieve a reduction in:
–Operating power (86 kW)
–Cooling power (108 kW)
–Data center space (466 sq. ft.)
69% cost savings over a 3-year period (with a 5-year server life schedule)
Hard cost avoidance over a 3-year period would be more than $4,000,000
The $450,000 hardware investment in virtualization would be recouped in 3 months
Monthly cost of no action: $62,122
Environmental Benefits
Plans for the Future
Stand up virtual technology in the DMZ
Stand up virtual technology in a “Development Zone”
Reduce deployment times for applications running on a dedicated machine
Self-service ordering / auto-provisioning (New Scale)
Offer Infrastructure and Platform as a Service
For More Information
Rebecca Astin, National Computer Center, astin.rebecca@epa.gov, 919-541-3074
David Pritchett, National Computer Center, pritchett.david@epa.gov, 919-541-2798