CERN-IT Plans on Virtualization
Ian Bird, on behalf of CERN IT
WLCG Workshop, 9th July 2010

Multi-core jobs at CERN
Multi-core jobs can be run on the CERN batch service:
– Dedicated queue "sftcms", with dedicated resources: 2 nodes with 8 cores, one job slot each; can be extended on request
– Access restricted to pre-defined users; currently 12 registered users are defined to use the queue
– The service has been in production since February 2010, but has hardly been used yet
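
The slide does not show the submission syntax; below is a minimal sketch of what an 8-core submission to such a dedicated LSF queue could look like. The queue name "sftcms" comes from the slide; the bsub options are standard LSF syntax, and the payload script name is a hypothetical placeholder.

    # Sketch: submit an 8-slot job to the dedicated multi-core queue.
    import subprocess

    cmd = [
        "bsub",
        "-q", "sftcms",          # dedicated multi-core queue (from the slide)
        "-n", "8",               # request 8 processor slots
        "-R", "span[hosts=1]",   # keep all slots on a single 8-core node
        "./run_analysis.sh",     # hypothetical multi-threaded payload
    ]
    subprocess.check_call(cmd)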

Multi-core jobs Grid-wide
EGEE-III MPI WG recommendations:
– Current draft available; final version to appear soon
Next steps:
1. Validate the new attributes in the CREAM-CE:
   – Using a temporary JDL attribute (CERequirements), e.g. CERequirements = "WholeNodes = \"True\"";
   – Using direct job submission to CREAM
   – Validation starting with some user communities
2. Modify CREAM and BLAH so that the new attributes can be used as first-level JDL attributes, e.g. WholeNodes = True; coming with CREAM 1.7
3. Support for the new attributes in the WMS: coming with WMS 3.3, Q3 2010
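
To make step 1 concrete, here is a minimal sketch of a whole-node request submitted directly to a CREAM CE. The JDL attribute is quoted from the slide; the executable name, the CE endpoint, and the choice of glite-ce-job-submit with automatic delegation are illustrative assumptions.

    # Sketch: direct CREAM submission with the temporary whole-node attribute.
    import subprocess, tempfile

    jdl = r'''[
      Executable     = "run_analysis.sh";        // hypothetical payload
      CERequirements = "WholeNodes = \"True\"";  // temporary attribute (step 1)
      // With CREAM 1.7 this is expected to become a first-level attribute:
      // WholeNodes = True;
    ]
    '''

    with tempfile.NamedTemporaryFile("w", suffix=".jdl", delete=False) as f:
        f.write(jdl)
        jdl_path = f.name

    # -a: automatic proxy delegation; -r: CE endpoint (hypothetical host name)
    subprocess.check_call(["glite-ce-job-submit", "-a", "-r",
                           "ce01.example.org:8443/cream-lsf-grid", jdl_path])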

Virtualisation Activities
Activities in many areas, not all directly relevant for LHC computing. Three main areas:
– Service consolidation: "VOBoxes", VMs on demand, the LCG certification testbed
– "LX" services: worker nodes, pilots, etc. (the issue of "bare" WNs tba)
– Cloud interfaces
Rationale:
– Better use of resources: optimise cost, power, efficiency
– Reduce dependencies, especially between OS and applications (e.g. the SL4 to SL5 migration) and between grid software components
– Long-term sustainability/maintainability: can we move to something more "industry-standard"?
– Don't forget the WLCG issue of how to expand to other sites, which may have many other constraints (e.g. they may require virtualised WNs)
– The trust issue must be addressed from the outset

Service consolidation
VO Boxes (in the general sense of all user-managed services):
– IT runs the OS and hypervisor; the user runs the service and application
– Clarifies the distinction in responsibilities
– Simplifies management for the VOC: no need to understand system configuration tools
– Allows optimisation between heavily used and lightly used services
– (Eventually) transparent migration between hardware, improving service availability
VMs "on demand" (like requesting a web server today):
– Requested through a web interface
– A general service for relatively long-lived needs
– The user can request a VM from among a set of standard images
– E.g. ETICS multi-platform build and automated testing; the LCG certification testbed (today it uses a different technology, but will migrate once live checkpointing of images is provided)

CVI: CERN Virtualisation Infrastructure
Based on Microsoft's Virtual Machine Manager. Multiple interfaces available:
– 'Self-Service' web interface
– SOAP interface
– Virtual Machine Manager console for Windows clients
Integrated with the LANdb network database.
100 hosts running the Hyper-V hypervisor:
– 10 distinct host groups, with delegated administration privileges
– 'Quick' migration of VMs between hosts: ~1 minute, and sessions survive migration
Images for all supported Windows and Linux versions, plus PXE boot images.
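
The slide advertises a SOAP interface but gives no endpoint details. Purely as an illustration of how a client might drive such an interface, here is a hypothetical sketch: the WSDL URL, the method name and its parameters are all invented; only the suds client library usage is real.

    # Hypothetical sketch of requesting a VM through a SOAP interface.
    from suds.client import Client

    # Assumed WSDL location; the real CVI endpoint is not given in the slide.
    client = Client("https://cvi.example.cern.ch/service?wsdl")

    # "CreateVirtualMachine" and its parameters are invented for illustration.
    vm = client.service.CreateVirtualMachine(name="myvm01",
                                             image="SLC5-x86_64",
                                             memoryMB=2048)
    print(vm)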

CVI: some usage statistics
Today, CVI provides 340 VMs (70% Windows, 30% Linux) for different communities:
– Self-service portal: 230 VMs from 99 distinct users, a mixture of development and production machines
– Host groups for specific communities: Engineering Services 85 VMs, media streaming 12 VMs, 6 print servers, 8 Exchange servers, etc.

CVI: work in progress (II)
Consolidation of physics servers (IT/PES/PS), a.k.a. "VOBoxes":
– ~300 Quattor-managed Linux VMs (IT servers and VOBoxes)
– 3 enclosures of 16 blades, each with iSCSI shared storage
Failover cluster setup:
– Allows transparent live migration between hypervisors
– Need to gain operational experience: start with some IT services first (e.g. CAProxy), and verify that procedures work as planned
– Target date for the first VOBoxes is ~July 2010, gaining experience with non-critical VOBox services first
– Next step: virtual small disk servers (~5 TB of storage), by accessing iSCSI targets from the VMs

"LX" Services
LXCloud:
– See the presentation by Sebastien Goasguen and Ulrich Schwickerath at this workshop
– Management of a virtual infrastructure with cloud interfaces
– Includes the capability to run CernVM images
– Scalability tests are ongoing
– Tests ongoing with both OpenNebula and Platform ISF as potential solutions
LXBatch, with several flavours of virtualised worker node:
– A standard OS worker node as a VM: addresses the dependency problem
– A WN with a full experiment software stack: the user could choose from among a standard/certified set of images, which could e.g. be built using the experiment build servers
– As the previous case, but with the pilot framework embedded
– CernVM images
Eventually, LXBatch evolves into LXCloud.
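
For the OpenNebula branch of the evaluation, launching a worker-node VM boils down to a small template handed to the scheduler. A minimal sketch follows, assuming the standard OpenNebula template syntax and CLI; the image path, memory size, and network name are invented for illustration.

    # Sketch: describe and launch a CernVM-style worker node via OpenNebula.
    import subprocess, tempfile

    template = '''
    NAME   = "cernvm-wn"
    CPU    = 1
    MEMORY = 2048   # MB
    # Image path and network name below are illustrative assumptions.
    DISK   = [ source = "/srv/images/cernvm.img", target = "hda" ]
    NIC    = [ network = "public" ]
    '''

    with tempfile.NamedTemporaryFile("w", suffix=".one", delete=False) as f:
        f.write(template)
        tpl_path = f.name

    subprocess.check_call(["onevm", "create", tpl_path])  # standard ONE CLI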

Evolution
[diagram slide]

LXCloud
[diagram slide]

Ongoing work
– Integration with the existing management infrastructure and tools, including monitoring and alarm systems
– Evaluating VM provisioning systems, both open-source and commercial
– Image distribution mechanisms, using P2P tools
– Scalability tests: the batch system (how many VMs can be managed), the infrastructure (network database, etc.), image distribution, VM performance
To be understood:
– I/O performance, particularly for analysis jobs
– How to do accounting, etc.
– A virtualised infrastructure vs. allocating a "whole node" to the application

Summary
Ongoing work in several areas:
– Some benefits will be seen immediately (e.g. VOBoxes)
– Others will first gain experience in a test environment (e.g. LXCloud)
– We should be able to satisfy a wide range of resource requests, no longer limited to the model of a single job per CPU
In the broader scope of WLCG:
– Address the issues of trust and VM management, and integration with the AA framework, etc.
– Interoperability through cloud interfaces (in both directions) with other "grid" sites as well as public/commercial providers
– Can be implemented in parallel with the existing grid interfaces