VM CPU Benchmarking the HEPiX Way
Manfred Alef (FZK Karlsruhe), Ian Gable (University of Victoria)
HEPiX Spring 2009, Umeå, May 28, 2009

Ian Gable HEPiX Spring 2009, Umeå 1 VM CPU Benchmarking the HEPiX Way
Manfred Alef (FZK Karlsruhe), Ian Gable (University of Victoria)
May 28, 2009

Ian Gable HEPiX Spring 2009, Umeå 2 Overview
Motivation
Benchmark Testbed
Benchmarking Technique
Results: Xen, KVM, Amazon EC2
Conclusion
Other Virtualization Activities at the University of Victoria

Ian Gable HEPiX Spring 2009, Umeå 3 Motivation
Measure virtual machines as they would be used on HEP worker nodes.
There are plenty of VM benchmarks out there, but none of them had been set up specifically for HEP:
–Multiple 100% CPU-loaded VMs running on a single box
–Scientific Linux used
It is now possible to create your own Amazon-like cloud infrastructure.
–OpenNebula (seen at HEPiX), Nimbus (Globus project, UVic involved)
–Does turning your cluster into a cloud IaaS service cost you in lost CPU performance?
Focus on CPU benchmarking because it is less dependent on specific setups.
From working with the HEPiX CPU working group we were well positioned to get results quickly.

Ian Gable HEPiX Spring 2009, Umeå 4 Benchmark Testbed
Selected to form a representative set of CPUs commonly used at HEP sites:

Ian Gable HEPiX Spring 2009, Umeå 5 Selection of OS and Virtual Machine Monitors
Use HEP-SPEC06 as the benchmark. Obvious choice.
Selected Scientific Linux 5.2 with the distribution-provided Xen:
–Obvious choice for ease of use within the HEP community because of distribution integration
–Had been running Xen in production mode at UVic for 3 years: lots of experience.
–Test could be easily put together with ‘yum install kernel-xen’
Limited KVM testing:
–Selected one box at UVic for further examination with KVM
–Not a lot of experience with KVM
–More work to get operational on SL (not a lot more)
–Important to test because it appears to be Red Hat's new favorite

Ian Gable HEPiX Spring 2009, Umeå 6 Benchmark Technique
We wish to simulate the fully loaded HEP worker node used with virtualization.
The two scenarios we are most interested in:
1 VM per core: n VMs are booted, where n is the number of cores. Each VM has 1 VCPU.
1 VM per box: 1 VM is booted with VCPUs equal to the number of physical cores on the box.
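As a rough illustration of how the two scenarios could be scripted (not taken from the talk; the guest config path, VM names, and memory values are hypothetical), a sketch using Xen's xm tool:

```python
#!/usr/bin/env python
# Hedged sketch: boot the two benchmark scenarios with Xen's xm tool.
# The guest config template and VM names are illustrative assumptions; the
# per-VM memory would follow the allocation shown on the next slide.
import subprocess

CONFIG = "/etc/xen/slc5-benchmark.cfg"   # assumed config template that reads name/vcpus/memory

def one_vm_per_core(n_cores, mem_mb_per_vm):
    """Scenario 1: n guests, each with a single VCPU."""
    for i in range(n_cores):
        subprocess.check_call([
            "xm", "create", CONFIG,
            "name=bench-vm%d" % i,
            "vcpus=1",
            "memory=%d" % mem_mb_per_vm,
        ])

def one_vm_per_box(n_cores, mem_mb_total):
    """Scenario 2: one guest with VCPUs equal to the number of physical cores."""
    subprocess.check_call([
        "xm", "create", CONFIG,
        "name=bench-vm0",
        "vcpus=%d" % n_cores,
        "memory=%d" % mem_mb_total,
    ])

if __name__ == "__main__":
    one_vm_per_core(8, 2048)   # e.g. an 8-core Harpertown box
```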

Ian Gable HEPiX Spring 2009, Umeå 7 VM Memory Allocation
[Table of per-VM memory allocation, where n is the number of cores]
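The actual memory figures are not reproduced in the transcript; a minimal sketch, assuming the simple scheme of splitting host RAM evenly across the guests with a fixed reservation for dom0:

```python
# Hedged sketch: per-VM memory if host RAM is split evenly across n guests,
# with a reservation for the host/dom0. The numbers are illustrative only,
# not the allocation table from the slide.
def vm_memory_mb(host_mem_mb, n_cores, dom0_reserve_mb=512):
    usable = host_mem_mb - dom0_reserve_mb
    return {
        "1 VM per core": usable // n_cores,   # each of the n single-VCPU guests
        "1 VM per box":  usable,              # the single n-VCPU guest
    }

print(vm_memory_mb(16384, 8))   # e.g. a 16 GB, 8-core worker node
```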

Ian Gable HEPiX Spring 2009, Umeå 8 Xen Results i386 VMs

Ian Gable HEPiX Spring 2009, Umeå 9 Distribution of HEP-SPEC06 Results
This machine type (E5420): +/- %

Ian Gable HEPiX Spring 2009, Umeå 10 Xen i386 VM Relative performance

Ian Gable HEPiX Spring 2009, Umeå 11 Xen Results x86_64 VMs

Ian Gable HEPiX Spring 2009, Umeå 12 Xen per core variability

Ian Gable HEPiX Spring 2009, Umeå 13 Xen Results Wrap Up
Insignificant performance degradation in both the 1 VM per core and 1 VM per box cases.
Some counter-intuitive performance gains in the 1 VM per core case:
–Could be associated with pinning of each VM to a core, hence fewer cache misses
–Could be associated with the Xen Borrowed Virtual Time (BVT) scheduler vs. the regular SMP kernel.
Zero CPU performance impediment to running 1 VM per job on a multiprocessor worker node.
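If the gain does come from pinning, the per-core case could have been set up roughly as follows (a sketch, assuming guests named bench-vm0..n-1 as in the earlier sketch; xm vcpu-pin is the standard Xen 3.x command):

```python
# Hedged sketch: pin VCPU 0 of each single-VCPU guest to its own physical
# core, keeping each guest on one cache domain. Guest names follow the
# earlier (illustrative) sketch.
import subprocess

def pin_one_vm_per_core(n_cores):
    for core in range(n_cores):
        subprocess.check_call([
            "xm", "vcpu-pin",
            "bench-vm%d" % core,   # domain name
            "0",                   # VCPU 0 (the guest's only VCPU)
            str(core),             # physical CPU to pin it to
        ])

if __name__ == "__main__":
    pin_one_vm_per_core(8)
```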

Ian Gable HEPiX Spring 2009, Umeå 14 A first look at KVM
Selected the Xeon E5405 (Harpertown) box for further study.
Replaced the host kernel with a vanilla mainline x86_64 kernel:
–includes the kvm modules as part of mainline
–the stock SL 5.3 kernel has no kvm modules
Tested an i386 SL 5.3 VM under KVM-84 with virtio.
Exactly the same benchmark technique.
Results not strictly comparable with the earlier Xen results.
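A sketch of how an i386 SL 5.3 guest with virtio drivers might be started under KVM-84 (the disk image path, memory size, and tap networking are assumptions, not details from the talk; the binary name varies between builds, e.g. qemu-kvm vs qemu-system-x86_64):

```python
# Hedged sketch: start an i386 SL 5.3 guest under KVM with virtio block and
# network devices. Image path, memory size, and the pre-configured tap
# networking are illustrative assumptions.
import subprocess

def start_kvm_guest(image="/srv/vm/sl53-i386.img", mem_mb=2048, vcpus=1):
    return subprocess.Popen([
        "qemu-system-x86_64",
        "-m", str(mem_mb),
        "-smp", str(vcpus),
        "-drive", "file=%s,if=virtio" % image,   # virtio block device
        "-net", "nic,model=virtio",              # virtio network card
        "-net", "tap",                           # host tap interface (assumed configured)
        "-nographic",
    ])

if __name__ == "__main__":
    start_kvm_guest()
```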

Ian Gable HEPiX Spring 2009, Umeå 15 Preliminary KVM Results
KVM appears to outperform the earlier Xen setup in the 1 VM per core case.
KVM underperforms in the 1 VM per box case.
Looking for an explanation.

Ian Gable HEPiX Spring 2009, Umeå 16 Amazon EC2 Benchmark
Simply run HEP-SPEC06 on three different Amazon instance types:
Small Instance ($0.10/hour)
–1.7 GB memory, 1 VCPU, 32-bit
Large Instance ($0.40/hour)
–7.5 GB memory, 2 VCPUs, 64-bit
Extra Large Instance ($0.80/hour)
–15 GB memory, 4 VCPUs, 64-bit
All tests done with SL 5.3 Amazon Machine Images (AMIs) produced at UVic using the RHEL Xen kernels from Amazon.

Ian Gable HEPiX Spring 2009, Umeå 17 Utility Computing Throughput Measurement
Based on 3 runs on each instance type.
To get the same throughput as a Xeon L5420: $1.84/h
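The arithmetic behind a figure like the $1.84/h quoted above is simple to reproduce; the HEP-SPEC06 scores below are placeholders (the measured values are on the slide's chart, which is not in the transcript), and only the EC2 prices come from the previous slide:

```python
# Hedged sketch of the cost-to-match-throughput arithmetic. The HS06 scores
# are placeholders, not the measured values from the talk.
EC2_PRICE_PER_HOUR = {"small": 0.10, "large": 0.40, "xlarge": 0.80}

def cost_to_match(reference_hs06, instance_hs06, instance_price):
    """Hourly cost of enough instances to equal the reference box's HS06 throughput."""
    instances_needed = reference_hs06 / instance_hs06
    return instances_needed * instance_price

# Placeholder example: a dual L5420 worker node vs. an EC2 large instance.
reference_box_hs06 = 60.0    # hypothetical HS06 of the L5420 box
large_instance_hs06 = 15.0   # hypothetical HS06 of one large instance
print("$%.2f/h" % cost_to_match(reference_box_hs06, large_instance_hs06,
                                EC2_PRICE_PER_HOUR["large"]))
```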

Ian Gable HEPiX Spring 2009, Umeå 18 Conclusion
In terms of CPU performance, it is indeed very feasible to run highly CPU-loaded virtual machines on current-generation multi-core CPUs.
Established this using the HEP-SPEC06 benchmark, which has been proven to map to real HEP application performance.
No benchmark suffered more than a 5% decrease in performance, save for the preliminary KVM results.
Useful measure of utility computing cost effectiveness:
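The metric itself is not reproduced in the transcript; given the previous slide, it is presumably something along the lines of cost per unit of HEP-SPEC06 throughput, e.g.:

\[
\text{cost effectiveness} \;=\; \frac{\text{instance price per hour}}{\text{HEP-SPEC06 score of the instance}}
\qquad \left[\frac{\$}{\text{HS06}\cdot\text{h}}\right]
\]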

Ian Gable HEPiX Spring 2009, Umeå 19 Other Virtualization Activities at UVic
Developing a BaBar legacy data analysis system at UVic using a Nimbus cloud deployed to a full blade chassis.
–Nimbus is open-source IaaS software like OpenNebula (seen at HEPiX already)
Google Summer of Code student:
–develop code for the Nimbus project in cloud scheduling
–student gets a $4500 stipend paid by Google
–Ian to act as Google ‘Project Mentor’.

Ian Gable HEPiX Spring 2009, Umeå 20 Acknowledgements
HEPiX CPU Benchmarking working group.
Matt Vliet (UVic undergraduate)