Dagmar Adamova, NPI AS CR Prague/Rez

Presentation transcript:

Current status of the WLCG data management system, the experience from three years of data taking and the future role of Grids for LHC data processing
Dagmar Adamova, NPI AS CR Prague/Rez

The last step on the way to delivering the physics discoveries provided by the LHC is computing. The environment for LHC data management and processing is provided by the Worldwide LHC Computing Grid (WLCG). It has enabled reliable processing and analysis of the LHC data and fast delivery of scientific papers since the first LHC collisions in 2009; in 2012, for example, the total number of EP preprints with LHC results reached 352.

Activity snapshot on 01.01.2013:
- Running jobs: 246,791
- Transfer rates: ~14 GB/s
- CASTOR: close to a 100 PB archive; physics data 94.3 PB in October 2012, increasing at ~1 PB/week with the LHC on

Worldwide resources: spanning 6 continents

WLCG, by the numbers:
- More than 8,000 physicists use it
- Over 300,000 available cores
- On average, 2 million jobs processed every day
- Over 170 PB of disk available worldwide
- 10 Gigabit/s optical fiber links connect CERN to each of the 12 Tier 1 institutes
- Close to 100 PB of LHC data stored in the CERN tape system CASTOR, growing by close to 3.5 PB/month with the LHC on
- Global data export/transfers from CERN: > 15 GB/s in peaks (a rough cross-check of these figures is sketched below)

This is a truly worldwide undertaking: WLCG has computing sites on almost every continent and today provides significant levels of resources, i.e. computing clusters, storage (close to 100 PB of disk available to the experiments) and networking.

WLCG Collaboration current status: 1 Tier 0 (CERN); 12 Tier 1s (CERN, US-BNL, Amsterdam/NIKHEF-SARA, Taipei/ASGC, Bologna/CNAF, NDGF, UK-RAL, Lyon/CCIN2P3, Barcelona/PIC, De-FZK, US-FNAL, TRIUMF) plus 1 associate Tier 1 (KISTI, South Korea); 68 Tier 2 federations. In preparation: an extension of the CERN Tier 0 (Wigner Institute, Budapest) and a Tier 1 in Russia. 54 MoU signatories, representing 36 countries.
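As a rough cross-check of the figures above (a back-of-envelope estimate, not taken from the slides; it assumes full utilization of the ~300,000 cores and a 30-day month):

$$
\bar{t}_{\text{job}} \approx \frac{3\times 10^{5}\ \text{cores} \times 86400\ \text{s}}{2\times 10^{6}\ \text{jobs}} \approx 1.3\times 10^{4}\ \text{s} \approx 3.6\ \text{h},
\qquad
\frac{3.5\ \text{PB}}{30 \times 86400\ \text{s}} \approx 1.35\ \text{GB/s},
$$

i.e. the quoted job throughput corresponds to an average job length of a few hours, and the quoted tape growth corresponds to an average ingest rate of roughly 1.3 GB/s.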

The effect of the upcoming LHC upgrade

In 2012, the LHC running conditions resulted in a pile-up of up to 30 pp interactions per bunch crossing. The recorded events were very complex, and larger volumes of data were taken than originally anticipated (~30 PB). The upcoming high luminosity upgrade of the LHC (luminosity ~2×10^34 cm^-2 s^-1, intensity of 1.7×10^11 protons/bunch with 25 ns spacing) will produce higher pile-up and more complex events (the scaling is sketched below). It is essential to maintain adequate resource scaling so that the physics potential will not be limited by the availability of computing resources.

[Event displays: H -> ZZ -> μμee at luminosity 10^32 cm^-2 s^-1 vs. 10^35 cm^-2 s^-1]

Possible additional resources from Computing Clouds?
- The key technology, virtualization, is already widely used at the WLCG sites, but new standard interfaces to the existing Clouds are necessary.
- The costs of using commercial Cloud services for processing the LHC data are currently too high.
- The European project Helix Nebula, the Science Cloud: big science teams up with big business. It aims to enable the development and exploitation of a Cloud Computing infrastructure based on the needs of European IT-intense scientific research organizations. The scientific partners include the CERN ATLAS Collaboration, the European Molecular Biology Laboratory (EMBL) and the European Space Agency (ESA). Current phase: Proof of Concept.
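The pile-up scaling mentioned on the previous slide can be sketched with the standard relation for the mean number of pp interactions per bunch crossing (the parameter values below are illustrative assumptions, not taken from the slides):

$$
\mu = \frac{\mathcal{L}\,\sigma_{\text{inel}}}{n_b\, f_{\text{rev}}},
$$

where $\mathcal{L}$ is the instantaneous luminosity, $\sigma_{\text{inel}} \approx 70\ \text{mb}$ the inelastic pp cross section, $n_b$ the number of colliding bunches and $f_{\text{rev}} = 11245\ \text{Hz}$ the LHC revolution frequency. With typical 2012 values ($\mathcal{L} \approx 7\times 10^{33}\ \text{cm}^{-2}\text{s}^{-1}$, $n_b \approx 1380$ at 50 ns spacing) this gives $\mu \approx 30$, while the upgrade parameters ($\mathcal{L} \approx 2\times 10^{34}\ \text{cm}^{-2}\text{s}^{-1}$, $n_b \approx 2808$ at 25 ns spacing) give $\mu \approx 45$.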

[The effect of the upcoming LHC upgrade: illustration]