Pete Gronbech GridPP Project Manager April 2017

Presentation transcript:

Tier-2 Site Survey
Pete Gronbech, GridPP Project Manager, April 2017

Object of the exercise
Try to establish the level of interaction between the Grid Tier-2 clusters and other computing infrastructures at sites, such as Tier-3 or University-wide services.

Batch Systems in Use on Grid Clusters
ARC + HTCondor – 8 sites (with 2 testing)
CREAM + SGE – 3 sites (various types of Grid Engine)
CREAM + Torque – 3 sites (2 testing ARC/HTCondor, 1 VAC)
ARC + SLURM – 1 site
CREAM + SLURM – 1 site
ARC + SGE – 1 site

Is the batch system shared? Can jobs be submitted without the Grid CE?
NO – 11 sites (2 sites with occasional other access)
YES – 6 sites (4 are part of a University high-end computing facility)
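At the sites that answered YES, local users can in principle submit work straight to the batch system rather than through the ARC or CREAM CE. For a shared HTCondor pool, a minimal sketch using the HTCondor Python bindings might look like the following; the script and file names are hypothetical, and the host is assumed to already be configured as a submit node for the pool.

import htcondor  # HTCondor Python bindings

# Describe the job; names here are illustrative only.
submit = htcondor.Submit({
    "executable": "analysis.sh",
    "arguments": "run42",
    "output": "analysis.out",
    "error": "analysis.err",
    "log": "analysis.log",
    "request_cpus": "1",
})

schedd = htcondor.Schedd()            # the local schedd
with schedd.transaction() as txn:     # classic (pre-v10) submission API
    cluster = submit.queue(txn)       # returns the new cluster id
print("Submitted cluster", cluster)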

Shared file system
NO – 9 sites
YES – 8 sites (2 Lustre, 5 NFS, 1 NFS + dCache)

Top non-LHC or GridPP DIRAC VOs supported (CPU)
ILC – 14 sites
Biomed – 9 sites
Pheno – 8 sites
DUNE – 3 sites
T2K – 3 sites
SNO+ – 3 sites
IceCube – 2 sites
NA62 – 2 sites
LZ – 2 sites
MICE, CEPC, enmr, uboone – mentioned by 1 site each
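Many of these smaller VOs reach the sites through the GridPP DIRAC service rather than by talking to the CEs directly. A minimal sketch of a submission with the DIRAC Python API is shown below; it assumes a sourced DIRAC client environment and a valid proxy for one of the supported VOs, and the script name is hypothetical.

# DIRAC client API sketch; requires a DIRAC client installation and a proxy.
from DIRAC.Core.Base.Script import parseCommandLine
parseCommandLine()  # initialise the DIRAC configuration

from DIRAC.Interfaces.API.Dirac import Dirac
from DIRAC.Interfaces.API.Job import Job

job = Job()
job.setName("tier2-survey-example")
job.setExecutable("my_analysis.sh", arguments="run42")  # hypothetical script
job.setOutputSandbox(["StdOut", "StdErr"])

result = Dirac().submitJob(job)
if result["OK"]:
    print("Submitted job", result["Value"])
else:
    print("Submission failed:", result["Message"])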

[Chart] Usage by the top 10 VOs over the last year, by site. Shares noted include SNO+ 0.55%, LZ 0.55%, DUNE 0.51%, Bio 0.36% and GridPP 0.34%.

Storage system
DPM – 12 sites
dCache – 2 sites
StoRM – 2 sites (the Lustre sites)
dmlite + HDFS – 1 site

Non-LHC storage supported
VOs listed as having storage:
T2K – 5 sites
SNO+ – 5 sites
Bio – 4 sites
Pheno – 2 sites
ILC – 2 sites
Comet, DUNE, Hyper-K, LSST, LZ, NA62 – 1 site each
From the Q4 2016 quarterly reports, total non-LHC storage was 5% of the ~20 PB total, i.e. ~1 PB.

Access to the SE for local storage
YES – 11 sites
NO – 6 sites
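Where local access to the SE is allowed, non-grid users at a site typically read and list data with the standard grid data-management tools rather than mounting the storage directly. Below is a minimal sketch using the gfal2 Python bindings; the DPM endpoint and namespace path are hypothetical and a valid grid proxy is assumed.

import gfal2  # gfal2 Python bindings

ctx = gfal2.creat_context()
# Hypothetical DPM endpoint and namespace path at an example site.
url = "root://se01.example.ac.uk//dpm/example.ac.uk/home/t2k.org/data/file.root"

info = ctx.stat(url)          # metadata for a single file
print("size:", info.st_size)

# List the parent directory.
for entry in ctx.listdir(url.rsplit("/", 1)[0]):
    print(entry)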

Conclusions
A very diverse set of responses across the 17 sites, with no set pattern across large sites vs small sites.
There are favourites for batch system and SE, i.e. HTCondor/ARC and DPM, but site preferences and compatibility with other university services may well prevent greater commonality.
https://www.gridpp.ac.uk/wiki/GridPP5_Tier2_plans