Giovanni Marchetti Sr. Technical Evangelist Microsoft Corp. WSV202
Session Objectives A brief intro to HPC clusters Why virtualize HPC clusters, and where do they make sense? How do we deploy and manage them? What configuration issues arise? How do we schedule jobs on them? Are they really so exotic?
What Is High Performance Computing? A branch of computer science that studies system designs and programming techniques to extract the best computational performance from microprocessors. It is usually associated with parallel computing, either on specialized machines or on clusters of commodity machines.
Industries Using HPC Education & Research Geological Services Manufacturing Financial Services Movies Medicine Archaeology???!!!
Real-world Application Example Courtesy of the University of Bologna, Italy: a CAT scan of a 3000-year-old cat. Data- and computation-intensive, with limited communications among nodes. A typical parametric sweep.
Possible Cluster Topologies (diagram): a head node (running AD, DNS, DHCP, WDS, the job scheduler, management services, and NAT) and compute nodes (node manager, MPI, and management services) connected by private and MPI networks, with a public network linking the compute cluster to the corporate IT infrastructure (clients, monitoring, systems management, admin/user consoles).
The Opportunity Low usage on physical servers (e.g., at night). Make computers compute for a living. Virtualized infrastructure: treat computing as just another function. Integrate with “real” HPC clusters. Maintain isolation for security and ease-of-management reasons; cycle scavenging does not.
The Applications Suitable for parametric sweeps (e.g., Monte Carlo simulations) or web services: many independent processes, possibly multi-threaded; a large set of inputs; not interactive. NOT suitable for MPI applications: virtualization communications overhead (reduced with R2), unpredictable latencies, limited topologies.
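To make the workload distinction concrete, here is a minimal sketch of a parametric-sweep task in the Monte Carlo style: every task is an independent process with its own input and no communication with its neighbors, which is exactly what tolerates virtualization overhead well. This is illustrative Python, not the deck's tooling; the function and names are hypothetical.

```python
# Hedged sketch: a parametric sweep, Monte Carlo style. Each parameter
# value is an independent task -- tasks never communicate, unlike MPI jobs.
import random
from multiprocessing import Pool

def estimate_pi(args):
    """One independent task: estimate pi from `samples` random points."""
    seed, samples = args
    rng = random.Random(seed)
    # Count points falling inside the unit quarter-circle.
    inside = sum(1 for _ in range(samples)
                 if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * inside / samples

if __name__ == "__main__":
    # A large set of inputs, one per task; ideal for independent (virtual) nodes.
    inputs = [(seed, 100_000) for seed in range(8)]
    with Pool() as pool:
        results = pool.map(estimate_pi, inputs)
    print(sum(results) / len(results))  # rough estimate of pi
```

On a real cluster each task would be a separate job step submitted to the HPC job scheduler rather than a local process pool, but the shape of the workload is the same.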
The Options We have three alternatives: install the HPC Pack on all the x64 servers; dual boot; or use Hyper-V and virtual machines with HPC Server 2008. Coming in R2 / v3: boot from network and boot from VHD.
HPC Pack on All Servers Pros: little overhead when not in use; easy to deploy and manage; always ready. Cons: no isolation at all (whoever can launch a job can run anything on your computer / x64 workstation); only works in the domain your computer is in, and the head node may not be in that domain; no control over resource consumption.
Dual Boot Pros: a “clean” solution, as long as you have a second partition; 100% of resources available in the designated time; no performance overhead. Cons: licenses: two per machine! 100% of resources available ONLY in the designated time; requires careful management of OS transitions (diskpart is powerful and dangerous); limited isolation: must manage access to partitions carefully.
Hyper-V Pros: complete isolation; control over resource allocation; rapid provisioning / de-provisioning; domains and VLANs for VMs can differ from the host; virtualization rights included, or free Hyper-V Server. Cons: performance and management overhead; must have a virtualization management solution; must work on your own scheduler integration.
Hyper-V: Quick Tips Works only on x64 servers with DEP and VT (or AMD-V) enabled; most x64 workstations can run it. Install the “Desktop Experience” feature. Once installed, everything is a virtual machine, including the management partition. Patch early, patch often, and be sure you know what to patch.
Hyper-V Management Requires Virtual Machine Manager. VMM will install the agent and the Hyper-V role on targeted servers in trusted domains, and can manage servers in different domains. Deploy these patches on Hyper-V hosts: KB (performance counters for synthetic storage); KB (BITS client handling of volume IDs).
VMM 2008 Architecture (diagram): management interfaces (administrator’s console, self-service web portal, operator’s and web consoles via Operations Manager, and Windows PowerShell) connect through connectors to the VMM server, which manages Virtual Server / Hyper-V hosts and their VMs, a VMM library server (VM templates, ISOs, scripts, VHDs), SAN storage, and VMware VI3 (Virtual Center server with ESX hosts).
VM Deployment Build a “prototype node” VM and create hardware and OS profiles based on it. Extract a template from it with SCVMM (this will run sysprep and make the node unavailable). Select target servers to run Hyper-V. Note: RDMA adapters (e.g., InfiniBand) will not be virtualized; they will be used as any other IPoIB interface. Deploy the node template on the target servers (GUI or PowerShell script).
VM Placement (diagram): a capacity-planning rating function scores each candidate host by combining host performance and configuration, the existing normalized host + VM load (CPU, network, and disk), the VM’s configuration (physical disk and memory requirements), and the result of a disk-capacity and memory check.
VM Placement Memory is the main constraint: a VM can be moved and its RAM allocation altered, but not dynamically. HPC VMs will most often run at either 100% or 0% CPU within the VM, which does not necessarily map to real CPU utilization; the HPC job scheduler does not understand “50% of a CPU”.
VM Placement Considerations Specify the number of CPUs in the VM and assume 100% utilization. Keep the total number of active logical CPUs ≤ the number of physical CPUs on the host; logical CPUs will be mapped to physical ones by the Hyper-V scheduler.
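The placement rule above can be sketched as a simple check: assume every vCPU runs flat out, and admit a VM only if the host still has enough physical CPUs and memory. This is an illustrative Python sketch, not the SCVMM rating function; `VmSpec` and `can_place` are hypothetical names.

```python
# Hedged sketch of the placement rule: total active logical CPUs must not
# exceed the host's physical CPUs, and the VM's RAM must fit. Illustrative
# only -- not the SCVMM capacity-planning API.
from dataclasses import dataclass

@dataclass
class VmSpec:
    name: str
    vcpus: int      # logical CPUs assigned to the VM
    memory_mb: int  # RAM allocation (fixed; cannot be changed while running)

def can_place(vm, host_cpus, host_free_mb, active_vms):
    """True if `vm` fits on the host under the CPU and memory constraints."""
    used_vcpus = sum(v.vcpus for v in active_vms)
    # HPC VMs are assumed to run at 100%, so vCPUs are counted in full.
    return (used_vcpus + vm.vcpus <= host_cpus
            and vm.memory_mb <= host_free_mb)

# Example: an 8-core host already running one 4-vCPU compute node.
active = [VmSpec("node-01", 4, 8192)]
print(can_place(VmSpec("node-02", 4, 8192), 8, 16384, active))  # True: 8 vCPUs total
print(can_place(VmSpec("node-03", 2, 4096), 8, 16384,
                active + [VmSpec("node-02", 4, 8192)]))          # False: 10 vCPUs > 8
```

In practice SCVMM's host ratings fold in network and disk load as well; memory is simply the binding constraint most of the time.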
Hyper-V Architecture (diagram): the Windows hypervisor runs directly on “Designed for Windows” server hardware. The parent partition runs Windows Server 2008 with the virtualization stack (VM service, WMI provider, VM worker processes) in user mode, and VSPs plus the VMBus in kernel mode. Child partitions run hypervisor-aware Windows (Windows Server 2003/2008 with VSCs over the VMBus, provided by Microsoft), Xen-enabled Linux kernels (Linux VSCs over a hypercall adapter, provided by MS / XenSource / Novell), or non-hypervisor-aware OSes through emulation (provided by ISV/IHV/OEM).
VM Placement Considerations SCVMM hardware profiles ≠ hardware constraints; they are an indication of expected hardware performance. Hyper-V always reports up to one quad-core CPU of the underlying kind, so co-located tasks in a VM may not actually be co-located on physical cores. Identical hardware profiles on the same kind of host simplify life for the scheduler and the admin, though this is not always practical. Place for performance, not density (in R2).
On the HPC Side Create host groups (rule of thumb: one per VM template deployed). Use job templates to enforce constraints for jobs on the VM host group (e.g., max runtime). Schedule jobs at the appropriate time: at job submit, or specify the host group and then activate the nodes in question at node resume.
Rapid Provisioning / De-provisioning Create several templates: different kinds of nodes for different kinds of jobs. More than one template can be deployed to the same physical host, with one active. Test scalability by deploying N, 2N, etc. VMs on the same hosts. No need to delete VMs: take nodes offline and shut them down; when required, turn them back on.
Provisioning on Demand One more layer of complexity. Use SCOM to track the number of tasks in the queue. Cause VM provisioning or activation when a high threshold is reached (run a PowerShell script); de-provision or shut down when a low threshold is reached. SCOM / SCVMM / PRO Tips may cause VMs to be moved, which may be irrelevant to a particular HPC workload: VMs always run at 100% or 0%.
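The threshold pattern above boils down to a small piece of decision logic. In the deck's setup this would live in a SCOM rule firing a PowerShell script; the Python sketch below just illustrates the decision, with hypothetical names and thresholds.

```python
# Hedged sketch of queue-driven scaling: provision nodes when the queue is
# deep, de-provision when it drains. Thresholds and the provision /
# de-provision hooks are illustrative placeholders, not SCOM/SCVMM APIs.
HIGH_WATER = 50   # queued tasks above this -> bring more nodes online
LOW_WATER = 5     # queued tasks below this -> shut idle nodes down

def scaling_action(queued_tasks, online_nodes, max_nodes):
    """Return 'provision', 'deprovision', or 'none' for the current queue."""
    if queued_tasks > HIGH_WATER and online_nodes < max_nodes:
        return "provision"      # e.g., fire a PowerShell script that starts VMs
    if queued_tasks < LOW_WATER and online_nodes > 0:
        return "deprovision"    # take nodes offline, then shut the VMs down
    return "none"

print(scaling_action(120, 4, 8))  # deep queue, capacity left -> provision
print(scaling_action(2, 4, 8))    # queue nearly empty -> deprovision
print(scaling_action(20, 4, 8))   # between the watermarks -> none
```

Keeping a dead band between the two watermarks avoids thrashing: since HPC VMs run at either 100% or 0%, a single threshold would flap nodes on and off as the queue hovers around it.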
Dynamically Grow / Shrink Dynamically grow or shrink the pool of resources for jobs in HPC Server 2008; this ties in with dynamic provisioning / de-provisioning. Provision servers to grow the pool available to a running job; offline servers to shrink the pool, then de-provision or pause the VMs. The job keeps running.
SCOM rules for on-demand VMs
Availability Advantages High availability of head nodes is quite complex; SCVMM takes care of HA for VMs on clusters. The head node can be a VM too; in that case you are protected against node failure only. What if time on a machine has run out, but you still need to compute? Shrink the pool, move the VM to an available node, and bring it back. With R2, live migration means you need not interrupt the computation either!
Moving Computing Nodes
Call to Action Just try it!
Sessions On-Demand & Community Resources for IT Professionals Resources for Developers Microsoft Certification & Training Resources TechEd 2009 is not producing a DVD; attendees can access session recordings at TechEd Online.
Related Content VIR208 Virtual Machine Manager 2008: Technical Overview and R2 Preview VIR209 How to Build an Efficient Application Infrastructure through Virtualization VIR302 Hyper-V & Virtual Machine Manager 2008: Tips, Tricks, and Best Practices VIR303 Hyper-V & System Center: Practical Orchestration & Process Automation VIR03-INT Virtual Computing Clusters VIR04-HOL Introduction to Hyper-V VIR05-HOL Introduction to Microsoft System Center Virtual Machine Manager 2008
Windows Server Resources Make sure you pick up your copy of Windows Server 2008 R2 RC from the Materials Distribution Counter. Learn more about Windows Server 2008 R2 in the Technical Learning Center (Orange Section), highlighting Windows Server 2008 and R2 technologies, with over 15 booths and experts from Microsoft and our partners.
Complete an evaluation on CommNet and enter to win!