CernVM for NA49/NA61
Dag Toppe Larsen, UiB/CERN
CERN, 16.06.2012

Why virtualisation?
- Data preservation
- Very flexible
- Avoids complexity of the grid
- Easy software distribution
- Processing not constrained to CERN
- Takes advantage of the new LXCLOUD
- Takes advantage of commercial clouds, e.g. Amazon EC2
- Can develop in the same VM the data will be processed on, which should reduce failing jobs

Data preservation: motivation
- Preserve the historic record
- Even after an experiment's end-of-life, data reprocessing might be desirable if future experiments reach incompatible results
- Many past experiments have already lost this possibility

Data preservation: challenges
- Two parts: data & software
- Data: preserve by migration to newer storage technologies
- Software: more complicated
  - Just preserving source code/binaries is not enough
  - Strongly coupled to the OS/library/compiler version (the software environment)
  - The software environment is strongly coupled to the hardware, and the platform will eventually become unavailable
  - Porting to a new platform requires a big effort

Data preservation: solution
- Possible solution: virtualisation
- “Freeze” the hardware in software
- Legacy analysis software can run in VMs on the legacy versions of Linux it was originally developed for
- The software environment is preserved, so there is no need to modify code
- Comes for “free” if processing is already done on VMs

CernVM: introduction
- Dedicated Linux distribution for virtual machines
- Currently based on SLC5
- Newer, updated versions will be made available for new software
- Old versions will remain available for legacy analysis software
- Supports all common hypervisors
- Supports Amazon EC2 clouds

CernVM: layout (diagram slide; figure not included in the transcript)

CernVM: use cases
- Two main use cases:
  - Computing centre: images for head and batch nodes; includes the Condor batch system
  - Personal computers: desktop (GUI) and basic (command-line) images for “personal” use
- Code can be developed (desktop image) in a similar environment/platform to the one it will be processed on (batch node image)

CernVM: contextualisation
- All CernVM instances are initially identical
- Experiment-specific software configuration/set-up is introduced via contextualisation
- Two types:
  - CD-ROM image: mainly site-specific configuration
  - EC2 user data: mainly experiment-specific configuration
- Executed during start-up of the VM
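As an illustration of the EC2 user-data route, the sketch below composes a small contextualisation payload in Python. The repository name and the commands in the script are assumptions for illustration only, not the official CernVM contextualisation format.

```python
# Minimal sketch of an EC2 user-data payload for contextualising a CernVM
# instance for NA61. The script body is illustrative; the exact set-up steps
# an experiment performs at start-up will differ.
user_data = """#!/bin/sh
# Hypothetical experiment-specific set-up executed during VM start-up:
# enable the NA61 CVMFS repository and reload the CVMFS configuration.
echo 'CVMFS_REPOSITORIES=na61.cern.ch' >> /etc/cvmfs/default.local
cvmfs_config reload
"""

# Save the payload so it can be passed to the cloud when the VM is launched
# (see the EC2 launch sketch further below).
with open("na61-context.sh", "w") as f:
    f.write(user_data)
```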

CVMFS: introduction
- Distributed file system based on HTTP
- Read-only
- Distributes binary files, so there is no need to compile and install locally
- All libraries and software that cannot be expected to be found on a “standard” Linux installation should be distributed this way
- Each experiment has one or more persons responsible for providing updates and resolving dependencies

CVMFS: software repositories
- Several repositories are mounted under /cvmfs/
- Each repository typically corresponds to one “experiment” (or other “entity”)
- Experiments have “localised” names, e.g. /cvmfs/na61.cern.ch/
- Common software lives in separate repositories, e.g. ROOT in /cvmfs/sft.cern.ch/
- Several versions of software may be distributed in parallel; the user can choose which version to run
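A minimal sketch of how a user could pick among parallel versions, assuming a hypothetical layout where each release lives in its own subdirectory of the repository (the directory structure and environment variable below are illustrative, not the actual NA61 conventions):

```python
import os

# Hypothetical layout: /cvmfs/na61.cern.ch/<version>/...
repo = "/cvmfs/na61.cern.ch"

# Versions published in parallel appear as subdirectories; pick one explicitly.
versions = sorted(os.listdir(repo))
print("available versions:", versions)

chosen = versions[-1]                      # e.g. the most recent release
software_root = os.path.join(repo, chosen)
os.environ["NA61_SOFTWARE_ROOT"] = software_root   # illustrative variable name
```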

CVMFS: design
- Compressed files on an HTTP server
- Downloaded, decompressed and cached locally on first use
- Once cached, software can be run without an Internet connection
- A hierarchy of standard HTTP proxy servers distributes the load
- Can also be used by non-VMs, e.g. LXPLUS/LXBATCH, other clusters, personal laptops
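The download, decompress and cache-on-first-use idea can be illustrated in a few lines of Python. This is a conceptual sketch only, not the actual CVMFS client; the server URL and cache directory are made up:

```python
import os
import zlib
import urllib.request

CACHE_DIR = "/tmp/cvmfs-cache-demo"      # illustrative local cache directory
SERVER = "http://example.org/repo"       # illustrative HTTP server

def fetch(path):
    """Return the contents of path, downloading and caching it on first use."""
    cached = os.path.join(CACHE_DIR, path.replace("/", "_"))
    if not os.path.exists(cached):                        # cache miss
        os.makedirs(CACHE_DIR, exist_ok=True)
        compressed = urllib.request.urlopen(SERVER + "/" + path).read()
        data = zlib.decompress(compressed)                # files stored compressed
        with open(cached, "wb") as f:
            f.write(data)
    with open(cached, "rb") as f:                         # cache hit: no network needed
        return f.read()
```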

Reference cloud: introduction
- Small CernVM reference private cloud
- Condor batch system
- OpenNebula management
- Amazon EC2 interface
- Reference installation for other clouds
- Detailed, simple step-by-step instructions for replication at other sites will be provided
- Attempt to make installations “uniform”
- Site customisation possible for monitoring, etc.

Reference cloud: virtual distributed Condor cluster
- Based on VMs in the cloud
- Can be distributed over several sites
- Even if nodes are at different sites, they will appear to be in the same cluster
- A Tier-1 can include VMs provided by Tier-2s in its virtual Condor cluster
- This can save a lot of work, as the Tier-2s do not need to set up job management themselves
- Other possibility: a local CernVM batch system running local jobs (like a normal cluster)
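From the user's point of view, submitting to such a virtual cluster looks like submitting to any Condor pool. A minimal sketch, assuming a hypothetical NA61 analysis job (the executable, arguments and file names are placeholders):

```python
import subprocess

# Hypothetical job description; executable, arguments and file names are made up.
submit_description = """\
universe   = vanilla
executable = analyse.sh
arguments  = run1234
output     = job.out
error      = job.err
log        = job.log
queue
"""

with open("na61.sub", "w") as f:
    f.write(submit_description)

# Submission is the same whether the worker nodes are local VMs or VMs
# contributed by another site to the same virtual cluster.
subprocess.run(["condor_submit", "na61.sub"], check=True)
```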

Reference cloud: OpenNebula framework
- Popular framework for management of virtual machines
- Supports most common hypervisors
- Choice here: KVM/QEMU, which is fast and does not require modifications to the guest OS
- Amazon EC2 interface
- Possible to include VMs from other clouds, and to provide hosts to other clouds
- Web management interface
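A sketch of what instantiating a CernVM batch node through OpenNebula might look like; the template attributes, image and network names are assumptions and may differ between OpenNebula versions:

```python
import subprocess

# Hypothetical OpenNebula VM template for a CernVM batch node.
template = """\
NAME   = cernvm-batch-node
CPU    = 1
MEMORY = 2048
DISK   = [ IMAGE = "cernvm-batch" ]
NIC    = [ NETWORK = "cloud-net" ]
"""

with open("cernvm-batch.tmpl", "w") as f:
    f.write(template)

# Ask OpenNebula to create the VM from the template file.
subprocess.run(["onevm", "create", "cernvm-batch.tmpl"], check=True)
```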

Reference cloud: Amazon EC2 interface
- EC2 is the commercial cloud offered by Amazon
- “EC2” also describes an interface for managing VMs
- It has become the de facto interface for clouds in general, including private ones
- Hence, using the EC2 interface allows great flexibility in launching VMs on both private and commercial clouds
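As a sketch of that flexibility, the same few lines can launch a VM on Amazon or, by pointing the connection at a site's own EC2-compatible endpoint, on a private cloud. This assumes the boto EC2 bindings; the credentials, image ID and instance type are placeholders:

```python
import boto

# Connect to EC2; for a private cloud the connection would instead be pointed
# at the site's own EC2-compatible endpoint.
conn = boto.connect_ec2(aws_access_key_id="ACCESS_KEY",
                        aws_secret_access_key="SECRET_KEY")

# Launch a CernVM image, passing the contextualisation script as user data
# (see the contextualisation sketch above).
with open("na61-context.sh") as f:
    user_data = f.read()

reservation = conn.run_instances("ami-00000000",          # placeholder image ID
                                 instance_type="m1.small",
                                 user_data=user_data)
print("started instance:", reservation.instances[0].id)
```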

Reference cloud: public vs. private clouds