Status of the DESY Grid Centre. Volker Guelzow & the Grid Crew, DESY IT Hamburg, April 28th

Volker Guelzow | PRC | | Page 2 Outline
> The 2011 DESY Grid centre status
> The NAF
> Requests and pledges for WLCG
> dCache
> The LHCone project
> The German Tier 2 situation
> Conclusion

Volker Guelzow | PRC | | Page 3 Grid Centre Resources

                  | DESY-HH          | DESY-ZN         | NAF
CPU (#Slots)      |                  |                 |
CPU (HS06)        | 33.9 kHS06       | 8.6 kHS06       | 22 kHS06
Disk (ATLAS)      | 890 TB (525 TB)  | 750 TB (525 TB) | 135 TB
Disk (CMS)        | 1057 TB (640 TB) | -               | 58 TB
Disk (LHCb)       | -                | 180 TB          | 47 TB
Disk (other Grid) | 348 TB           | 500 TB          | 33 TB

Values in parentheses are the WLCG T2 pledges (shown in red on the original slide); the slot counts did not survive the transcript.

Volker Guelzow | PRC | | Page 4 DESY-Grid Centre Usage

Volker Guelzow | PRC | | Page 5 The German T2's (Atlas, CMS, LHCb)
HGF Alliance funding for Universities; pledges for Germany: es _15DEC2010.pdf
[Pledges table: CPU 2011 [HS], CPU 2012 [HS], Disk 2011 [TB], Disk 2012 [TB] for A DESY, C DESY, A Goettingen, C Aachen, A Munich, A FR/Wupp, A FR/W Freiburg, with sums for CMS, Atlas, DESY, non-DESY and the worldwide grand total over all T2s; the numeric values did not survive the transcript.]

Volker Guelzow | PRC | | Page 6 Tier 2 Production
> ATLAS production, worldwide: finished and failed jobs (DE-Cloud marked)

Volker Guelzow | PRC | | Page 7
> Summary of ATLAS production, stacked: share of the DE-Cloud

Volker Guelzow | PRC | | Page 8 German Tier 2 usage
From the Computing Resources Scrutiny Group, April 2011 (CERN-RRB; German representative: Martin Gasthuber, DESY). Panels: Atlas and CMS.

Volker Guelzow | PRC | | Page 9 Atlas delivered vs. pledged CPU

Volker Guelzow | PRC | | Page 10 CMS delivered vs. pledged computing

Volker Guelzow | PRC | | Page 11 Tier2 availability based on SAM tests
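
For reference, "availability" in these SAM plots is essentially the fraction of test intervals in which a site passes the tests, while "reliability" additionally discounts scheduled downtime. A toy calculation of both quantities, with an invented status sequence, might look like this:

```python
# Toy availability/reliability calculation in the spirit of WLCG SAM
# reporting: each hour a site is "OK", "DOWN", or in "SCHEDULED" downtime.
# The status sequence below is invented for illustration.

def availability(statuses):
    """Fraction of all intervals in which the site passed the tests."""
    return statuses.count("OK") / len(statuses)

def reliability(statuses):
    """Like availability, but scheduled downtime does not count against the site."""
    unscheduled = [s for s in statuses if s != "SCHEDULED"]
    return statuses.count("OK") / len(unscheduled) if unscheduled else 1.0

hours = ["OK"] * 20 + ["DOWN"] * 2 + ["SCHEDULED"] * 2  # one hypothetical day
print("availability: %.1f%%" % (100 * availability(hours)))  # 20/24 = 83.3%
print("reliability:  %.1f%%" % (100 * reliability(hours)))   # 20/22 = 90.9%
```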

Volker Guelzow | PRC | | Page 12 CMS Analysis Jobs since the beginning of 2011

Volker Guelzow | PRC | | Page 13 CMS MC production jobs since January 2011

Tier-2 Accounting, April 2011 (two slides of accounting tables; the values did not survive the transcript)

Volker Guelzow | PRC | | Page 16 The DESY GridLab
Idea:
> The DESY Grid is more than just a production site
> Have dedicated machines to develop and test new products and components, representative for and integrated into the production environment
> Reuse old WNs from the Grid, temporarily borrow WNs from the production Grid
> Purchase some dedicated HW (storage)
> Contacts: D. Ozerov, Y. Kemp
Activities:
> Taking part in the WLCG storage technology demonstrators: NFS v4.1 (pNFS) with dCache.org
> Benchmarking LHC applications on a dedicated setup (HEPiX, CHEP, EMI conference contributions), together with external physicists; a minimal throughput probe is sketched below
> Platform for Cloud & virtualization testing in a Grid context
> Platform for a DPHEP prototype
> Platform for external students' work
> …
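
To illustrate the flavour of such benchmarking, here is a minimal sequential-read throughput probe one might run against an NFS 4.1 (pNFS) mounted dCache area. This is a sketch only: the mount path and block size are hypothetical, and the real GridLab measurements used full LHC applications rather than raw reads.

```python
import time

MOUNTED_FILE = "/pnfs/desy.de/gridlab/testfile"  # hypothetical pNFS-mounted path
BLOCK_SIZE = 4 * 1024 * 1024                     # read in 4 MiB chunks

def read_throughput(path, block_size=BLOCK_SIZE):
    """One sequential pass over the file; returns (bytes read, MB/s)."""
    total = 0
    start = time.time()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(block_size)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.time() - start
    return total, total / elapsed / 1e6

if __name__ == "__main__":
    nbytes, rate = read_throughput(MOUNTED_FILE)
    print("read %d bytes at %.1f MB/s" % (nbytes, rate))
```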

Volker Guelzow | PRC | | Page 17 NAF usage shares 2011: Wall Clock and Lustre Space

Volker Guelzow | PRC | | Page 18 NAF usage by VOs

Volker Guelzow | PRC | | Page 19 NAF: Current open questions and discussions
> Some requests from the user community:
 Increase stability and reliability of NAF AFS
 Lustre management tools (group quotas, old-file deletion, synchronization with dCache); a minimal deletion-policy sketch follows below
 Interactive and graphical software stack on the NAF
 Faster X access (other protocols like NX)
 Install CernVM-FS for experiment software (NAF & Grid)
 Faster reaction to and fixing of problems, better user information
 …
> NAF provider view:
 Many of these requests make sense, and we think they could be useful
 Some requests are very difficult: e.g. the Lustre product has deficiencies and lacks features
 Other requests have to be rejected for lack of manpower, or because they are hard to integrate into the current NAF setup, which targets a national community
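
As an illustration of what the requested "old-file-deletion" tool could amount to, here is a minimal sketch of an age-based cleanup over a scratch area. Everything specific is assumed: the path /lustre/naf/scratch and the 90-day threshold are hypothetical, and a production tool would use Lustre's own scanning facilities and notify users instead of deleting silently.

```python
import os
import time

SCRATCH = "/lustre/naf/scratch"  # hypothetical scratch area
MAX_AGE_DAYS = 90                # hypothetical retention policy
DRY_RUN = True                   # only report; do not actually delete

def expired_files(root, max_age_days):
    """Yield files whose last access time is older than the threshold."""
    cutoff = time.time() - max_age_days * 86400
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.stat(path).st_atime < cutoff:
                    yield path
            except OSError:
                pass  # file vanished or is unreadable; skip it

if __name__ == "__main__":
    for path in expired_files(SCRATCH, MAX_AGE_DAYS):
        if DRY_RUN:
            print("would delete: %s" % path)
        else:
            os.remove(path)
```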

Volker Guelzow | PRC | | Page 20 NAF CPU Occupancy

Volker Guelzow | PRC | | Page 21 dCache, Status (dCache.org; contact: Patrick Fuhrmann, DESY)
dCache: a worldwide recognized storage technology. DESY provides the project infrastructure (headquarters):
> Development: web pages, wikis and documentation; central code repositories; organization of developers' meetings and workshops
> Services activities: test infrastructure & performance (Grid-Lab); quality management; release management
> Customer support: Tier 1 conferences, …
> Network of activities: DESY: dCache operations (DOT) and Grid-Lab; Germany: HGF, DGI, German support; Europe: EMI, NDGF, Swedish National Infrastructure; worldwide: WLCG, Open Science Grid, Open Grid Forum
> Main communities: WLCG; HERA; Photon Science (DESY); LOFAR (LOw Frequency ARray): Amsterdam & Jülich; Swedish academic storage e-infrastructure (SNIC)

Volker Guelzow | PRC | | Page 22 dCache, People and Funding (contact: Patrick Fuhrmann, DESY)
Partners (map: Copenhagen, Stockholm, DESY, Chicago):
> DESY (headquarters): DESY (lab): 2; HGF & DGI: 3; European Middleware Initiative: 2 (DESY Grid-Lab: 1)
> Nordics: Nordic Data Grid Facility: 1; Swedish National Infrastructure: 1
> FERMIlab: FERMI (lab) plus USCMS: 2
The European Middleware Initiative (EMI, 2010-2013): 12 million € total EU funding plus 12 million € from the 22 partners. DESY's share over 3 years funds 2 FTE plus a lot of travel.
> dCache is one of the 4 middlewares in EMI (gLite, ARC, UNICORE, dCache)
> dCache.org provides the EMI data area leader (Patrick Fuhrmann)
> EMI provides access to new communities
> EMI provides effort for standards in dCache (more non-HEP customers)
dCache provides a stable collaboration, important for HEP and strategically interesting for DESY. dCache's success has attracted third-party money.

Volker Guelzow | PRC | | Page 23 dCache, The Deployment (contact: Patrick Fuhrmann, DESY)
> 94 PB in total in WLCG: 8 Tier 1's, 60 Tier 2's
> USA 28 PB; Europe 44 PB; Germany 16 PB; East: 1 PB
[Map of dCache sites: Barcelona, Lyon, Athens, Pisa, Roma, NDGF/Sweden, London, Madrid, Amsterdam, KIT, DESY, Aachen, Wuppertal, Munich, Göttingen, Freiburg, Mainz, Dresden, FERMIlab, BNL, Florida, Purdue, Wisconsin, Madison, Cambridge MA; chart: WLCG storage per SE type, dCache vs. other storage systems.]

Volker Guelzow | PRC | | Page 24 LHCone: proposed architecture (I)
[Diagram: RENATER, DFN, GARR and other NRENs, ESNET/USLHCNet, GEANT, connecting T1 centres and T2 (or T3) sites.]
> EU-US capacity: several 10G links, demand oriented
> Capacity reserved for LHC and scaled according to demand

Volker Guelzow | PRC | | Page 25 LHCone: proposed architecture (II)
HEPPI prototype configuration: DESY 20G, RWTH 6G, KIT 10G, GSI 10G, FRA 10G international.
[Diagram nodes: DES, AAC, U FFM (GSI), FRA, FZK.]

Volker Guelzow | PRC | | Page 26 T2 requirements for Germany

Worldwide totals (columns: 2011 CPU [HS06], 2012 CPU [HS06] SRC, 2013 CPU [HS06] SRC, 2011 Disk [TB], 2012 Disk [TB] SRC, 2013 Disk [TB] SRC; rows: Atlas Total T2, CMS Total T2; the numeric values did not survive the transcript).

Requirements for Germany:

          | CPU 2011 | CPU 2012 | CPU 2013 | Disk 2011 | Disk 2012 | Disk 2013
Atlas     | 27800 HS | 29500 HS | 31500 HS | 3420 TB   | 4700 TB   | 5300 TB
CMS       | 23960 HS | 26300 HS |          | 1490 TB   | 1950 TB   |
Per centre:
Av. Atlas | 9300 HS  | 10000 HS | 10500 HS | 1240 TB   | 1560 TB   | 1800 TB
Av. CMS   | 12780 HS | 17600 HS |          | 1000 TB   | 1300 TB   |

Volker Guelzow | PRC | | Page 27 Required T2 Resources
> Basic factors: 1 HepSpec06, including rack space, network and infrastructure costs: 26 € invest (2011), estimated 20 € (2012)
> Storage: 400 $/TB (2011), including space, network, infrastructure
> Lifetime for CPU: 3 years; storage: 5 years
> For the calculation: increase + 3 (5) year depreciation; a cost-model sketch follows below
> Energy: 25 W/TB online; 1.5 W/HS06 CPU
[Cost table: rows Atlas cpu, Atlas disk, CMS cpu, CMS disk; columns 2011 [€], 2012 [€], 2013 [€]; the numeric values did not survive the transcript.]
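
These factors define a simple cost model. The sketch below puts them together for one year of operation; the electricity price of 0.15 €/kWh is my assumption (the slide gives only the power figures), and the example capacity is the 2011 ATLAS requirement for Germany from the previous slide.

```python
# Sketch of the T2 cost model implied by the factors above.
EUR_PER_HS06    = 26.0   # 2011 CPU invest per HS06, incl. rack space, network, infrastructure
EUR_PER_TB      = 400.0  # 2011 storage invest per TB (quoted as 400$/TB on the slide)
CPU_LIFETIME_Y  = 3      # depreciation period for CPU
DISK_LIFETIME_Y = 5      # depreciation period for storage
W_PER_HS06      = 1.5    # power per HS06 of CPU
W_PER_TB        = 25.0   # power per TB of online disk
EUR_PER_KWH     = 0.15   # assumed electricity price (not from the slide)

def annual_cost(hs06, tb):
    """Yearly cost: depreciated invest plus energy for a given capacity."""
    invest = (hs06 * EUR_PER_HS06 / CPU_LIFETIME_Y
              + tb * EUR_PER_TB / DISK_LIFETIME_Y)
    watts = hs06 * W_PER_HS06 + tb * W_PER_TB
    energy = watts / 1000.0 * 24 * 365 * EUR_PER_KWH
    return invest + energy

# Example: 2011 ATLAS T2 requirement for Germany (27800 HS06, 3420 TB)
print("%.0f EUR/year" % annual_cost(27800, 3420))
```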

Volker Guelzow | PRC | | Page 28 Required T2 Resources (closed session)
> For DESY: finance the NAF (~ k€, the size of a Tier 2)
> Available: basic funding and expansion investments ("Ausbauinvestitionen") from POF

           | 2011 [T€] | 2011 funding | 2012 [T€] | 2012 funding               | 2013 [T€] | 2013 funding
Atlas cpu  | 312       | ok           | 290       | half-year HA, DESY+MPI ok  | 220       | no HA funds, DESY+MPI ok
Atlas disk | 720       | ok           |           |                            |           |
CMS cpu    | 364       | ok           |           |                            |           |
CMS disk   | 376       | ok           |           |                            |           |

~150 T€/Uni, ~75 T€/Uni

Volker Guelzow | PRC | | Page 29 The German T2's
HGF Alliance funding for Universities; pledges for Germany: es _15DEC2010.pdf (repeat of the pledges table from Page 5; the numeric values did not survive the transcript).

Volker Guelzow | PRC | | Page 30 Contribution of Tier 2 and Tier 3 Centres to EGI

Volker Guelzow | PRC | | Page 31 Summary
> The DESY Grid Centre is one of the top centres in WLCG
> The NAF is very helpful and is used nationwide
> The NAF needs further (interactive) options
> dCache is the major workhorse for LHC data management, and DESY has the lead
> LHCone is a great chance for changing computing models
> DESY has taken the leading role for LHCone in Germany
> There is an open financing issue for the Universities' Tier 2s after mid 2012

Volker Guelzow | PRC | | Page 32 Additional slides, use still to be decided.

Evolution of pledges since the last C-RRB (April 2011): pledge per experiment and summary, status dated 04/10/10 (as shown at the last C-RRB), compared with the 2011 pledge per experiment and summary, status dated 08/04/11.

Volker Guelzow | PRC | | Page 34
> ATLAS analysis, per cloud, last year (DE-Cloud marked)

Volker Guelzow | PRC | | Page 35
> Comparison of production per cloud (not stacked; DE-Cloud marked)

Volker Guelzow | PRC | | Page 36
> ATLAS analysis in the DE-Cloud:
 FZK: major contribution
 DESY (HH+ZN): strong contribution

Volker Guelzow | PRC | | Page 37
> Production in the DE-Cloud (DESY HH+ZN marked)

Volker Guelzow | PRC | | Page 38 Transfers TO DESY / Transfers FROM DESY: rather good; some issues on dedicated connections

Volker Guelzow | PRC | | Page 39 Transfers TO DESY from T1s and T2s / Transfers FROM DESY to Tier 2s