WLCG Collaboration Workshop: Clouds and Virtualisation

Presentation transcript:

Clouds and Virtualisation
Ian Bird, CERN
WLCG Collaboration Workshop, DESY, 12th July 2011

Activities
- GDB: requested experiments to briefly explain their positions regarding virtualisation and clouds.
- HEPiX: working group (>1 year) to propose ways to enable trusted image sharing, in particular to allow the use of CernVM images.
- Here: What are sites doing? What are they planning? Can we clarify the likely use cases from both experiment and site viewpoints?

ATLAS
R&D:
- evaluate cloud technologies
- design a model for clouds interacting with ATLAS software; implement in DDM, PanDA, etc.
- activity on CVMFS + multicore
Use cases:
- MC on the cloud with stage-out to traditional grid storage or long-term cloud storage
- data reprocessing in the cloud (cost?)
- distributed analysis with data at remote grid sites, or moved to the cloud resource
- capacity bursting for urgent tasks
- prototype by end 2011 for simulation, reprocessing and analysis
Virtualisation (use of VMs at sites):
- no real position yet, but several unknowns, e.g. who builds and manages the VMs? who instantiates them at the site? what are the requirements on the VMs (cores, memory, etc.)?

CMS
Whole-node approach:
- execute processes in user space on a many-core host and manage the workload (see the sketch below)
- wait for the task force output
- current grid interfaces may be used to access whole nodes
Virtualisation:
- not interested per se: mainly a site business
- happy with real nodes (or virtual nodes); nothing against sites using VMs, as long as performance is OK and monitoring is not impacted
- prefer efficient local access to site storage, and not root access on a machine in a DMZ
Clouds:
- commercial clouds may be used for simulation (bursting), but are currently thought to be too expensive
- use of a cloud interface for LCG resources is possible, but depends on the implications
- could use commercial clouds if the cost was OK (for some tasks)
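The whole-node approach above amounts to handing the experiment the entire machine and letting its own software spread work across the cores. Below is a minimal sketch of that idea, assuming a simple one-payload-per-core model; the payload command is a placeholder, not CMS's actual workload manager.

```python
import multiprocessing
import subprocess

def run_payload(core_id):
    # Illustrative single-core payload; a real pilot would fetch work
    # from the experiment's central task queue instead of echoing.
    return subprocess.call(
        ["/bin/sh", "-c", "echo running payload %d on $(hostname)" % core_id]
    )

if __name__ == "__main__":
    n_cores = multiprocessing.cpu_count()      # whole node: use every core handed to us
    pool = multiprocessing.Pool(processes=n_cores)
    # One payload per core; a multi-core payload would instead be told
    # how many cores it may use.
    exit_codes = pool.map(run_payload, range(n_cores))
    pool.close()
    pool.join()
    print("payload exit codes:", exit_codes)
```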

LHCb
- Aim to replace the DIRAC pilot with a customised CernVM VM, using a CernVM certified image and contextualisation to install the pilot credentials and start the DIRAC job agent.
- Requirements: outbound connectivity to the central VOBoxes, storage, LFC, etc.
- Starting a machine with EC2 would be OK (see the sketch below); CVMDirac could run on Amazon, LXCloud or a similar institutional cloud, as an alternative to a batch system, or for opportunistic usage (e.g. BOINC).
- Can also be used with multicore; some development needed.
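As a rough illustration of the "start a machine with EC2" option, the sketch below launches a CernVM-style image on an EC2-compatible endpoint and passes a contextualisation script as user-data that would start the DIRAC job agent. The endpoint, image ID, instance type, script paths and agent command are placeholders, not LHCb's actual configuration; the only assumption is a standard EC2 API, accessed here via boto3.

```python
import boto3

# Placeholder contextualisation script: in practice this would install the
# pilot credentials and start the DIRAC job agent (both paths are hypothetical).
USER_DATA = """#!/bin/sh
/opt/dirac/scripts/install-pilot-proxy.sh   # hypothetical credential setup
/opt/dirac/scripts/dirac-start-job-agent    # hypothetical agent launcher
"""

# endpoint_url lets the same call target Amazon EC2, LXCloud or any other
# EC2-compatible institutional cloud.
ec2 = boto3.client("ec2",
                   endpoint_url="https://ec2.example.org",  # placeholder endpoint
                   region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-cernvm-placeholder",   # certified CernVM image ID (placeholder)
    InstanceType="m1.large",
    MinCount=1,
    MaxCount=1,
    UserData=USER_DATA,                 # contextualisation passed to the VM
)

print("started instance:", response["Instances"][0]["InstanceId"])
```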

ALICE
- Currently no interest.

HEPiX
WLCG expects that sites will not be prepared to instantiate random images in the way that Amazon does:
- site infrastructure may not be adequately protected against images that allow root access to unknown people;
- sites are required to maintain traces of activity (syslog, process accounting) to enable investigations in case of security incidents.
The HEPiX process documents a way to create "certified images" that address these concerns and should therefore be freely transmissible between sites (a sketch of a site-side check follows below).
Pilot job frameworks: there is no reason for end users to be concerned with image creation; the interface is with the central experiment task queue. Which images are instantiated where is the concern of the central experiment team.
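A site-side check in the spirit of the certified-image scheme might look like the sketch below: before an image is made available for instantiation, its checksum is compared against an endorsed image list published through the HEPiX process. The JSON list layout, file names and checksum field are purely illustrative assumptions; the real image-list format and its signature verification are defined by the working group's documentation.

```python
import hashlib
import json

def sha512_of(path, chunk_size=1 << 20):
    """Compute the SHA-512 of a (possibly large) image file in chunks."""
    h = hashlib.sha512()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def image_is_endorsed(image_path, image_list_path):
    """Return True if the image's checksum appears in the (already
    signature-verified) endorsed image list. The JSON layout used here
    is a simplified stand-in for the real image-list format."""
    with open(image_list_path) as f:
        endorsed = json.load(f)   # e.g. [{"title": ..., "sha512": ...}, ...]
    checksum = sha512_of(image_path)
    return any(entry.get("sha512") == checksum for entry in endorsed)

if __name__ == "__main__":
    ok = image_is_endorsed("cernvm-image.img", "endorsed-images.json")
    print("image endorsed:", ok)
```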

WLCG Site use cases?
Site summary:
- (many) sites will run virtualised infrastructures (some may be private clouds)
- many different implementations and prototypes; CERN is interested in "standard" implementations (e.g. OpenStack)
- interest in the ability to burst out to commercial clouds
- essentially this should be hidden from the applications
Future:
- how far can "standard" cloud interfaces supplement or replace grid job management?

Some discussion questions
- What are sites doing? What are they planning?
- Where does CernVM fit?
- Where does whole-node scheduling fit?
- Can we clarify the likely use cases from both experiment and site viewpoints?