Dagmar Adamova, NPI AS CR Prague/Rez

Presentation on theme: "Dagmar Adamova, NPI AS CR Prague/Rez"— Presentation transcript:

1 Dagmar Adamova, NPI AS CR Prague/Rez
Current status of the WLCG data management system, the experience from three years of data taking, and the future role of Grids for LHC data processing
Dagmar Adamova, NPI AS CR Prague/Rez
The last step on the way to delivering the Physics discoveries of the LHC is Computing. The environment for LHC data management and processing is provided by the Worldwide LHC Computing Grid (WLCG). It has enabled reliable processing and analysis of the LHC data and fast delivery of scientific papers since the first LHC collisions. E.g. in 2012, the total number of EP preprints with LHC results reached 352.
Activity snapshot: Running Jobs; Transfer rates: ~14 GB/s. CASTOR: close to a 100 PB archive; Physics data: 94.3 PB in October. The archive increases at ~1 PB/week with the LHC on.
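The archive growth rate quoted above lends itself to a simple linear projection. A minimal sketch in Python, assuming only the figures from the slide (94.3 PB stored, ~1 PB/week added with beam); the 35 weeks of running per year is my own illustrative assumption, not a number from the talk:

```python
# Back-of-the-envelope projection of the CASTOR archive size,
# using the slide's figures: 94.3 PB stored, ~1 PB/week growth.

def projected_archive_pb(current_pb, growth_pb_per_week, weeks):
    """Linear projection of the archive size after `weeks` of LHC running."""
    return current_pb + growth_pb_per_week * weeks

# Assumption (not from the talk): ~35 weeks of beam in a year.
print(round(projected_archive_pb(94.3, 1.0, 35), 1))  # -> 129.3
```

A linear model is obviously crude (growth stops during shutdowns), but it matches the "1 PB/week with LHC on" phrasing of the slide.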

2 World Wide resources: spanning over 6 continents
WLCG, by the numbers:
More than 8000 physicists use it.
Over available cores.
On average, 2 million jobs processed every day.
Over 170 PB of disk available worldwide.
10 Gigabit/s optical-fiber links connect CERN to each of the 12 Tier 1 institutes.
Close to 100 PB of LHC data stored in the CERN tape system CASTOR; the increase is close to 3.5 PB/month with the LHC on.
Global data export/transfers from CERN: > 15 GB/s in peaks.
This is also a truly worldwide undertaking. WLCG has computing sites on almost every continent, and today provides significant levels of resources: computing clusters, storage (close to 100 PB of disk available to the experiments), and networking.
WLCG Collaboration current status: 1 Tier 0 (CERN); 12 Tier 1s (CERN, US-BNL, Amsterdam/NIKHEF-SARA, Taipei/ASGC, Bologna/CNAF, NDGF, UK-RAL, Lyon/CCIN2P3, Barcelona/PIC, De-FZK, US-FNAL, TRIUMF) + 1 associate Tier 1 (KISTI, South Korea); 68 Tier 2 federations. In preparation: extension of the CERN Tier 0 (Wigner Institute, Budapest) and a Tier 1 in Russia. 54 MoU signatories, representing 36 countries.
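A quick sanity check on what the quoted peak export rate implies for daily volume, assuming (unrealistically, since these are peaks) that ~15 GB/s were sustained for a full day:

```python
# What sustained transfers at the quoted ~15 GB/s peak rate would move
# in one day. Illustrative arithmetic only; real peaks are short-lived.

GB_PER_S = 15
SECONDS_PER_DAY = 86_400

pb_per_day = GB_PER_S * SECONDS_PER_DAY / 1_000_000  # 1 PB = 10^6 GB
print(f"{pb_per_day:.2f} PB/day")  # -> 1.30 PB/day
```

This is consistent in scale with the ~14 GB/s transfer rates and the multi-PB monthly archive growth quoted elsewhere in the talk.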

3 Current phase: Proof of Concept
[Event displays: H -> ZZ -> ... at luminosity 10^32 cm^-2 s^-1 vs. 10^35 cm^-2 s^-1]
The effect of the upcoming LHC upgrade
In 2012, the LHC running conditions produced a pile-up of up to 30 pp interactions per bunch crossing. The recorded events were very complex, and larger volumes of data were taken than originally anticipated (~30 PB). The upcoming high-luminosity upgrade of the LHC (luminosity ~2x10^34 cm^-2 s^-1, intensity of 1.7x10^11 protons/bunch with 25 ns spacing) will produce higher pile-up and even more complex events. It is essential to maintain adequate resource scaling so that the Physics potential is not limited by the availability of computing resources.
Possible additional resources from Computing Clouds?
The key technology, virtualization, is already widely used at the WLCG sites, but new standard interfaces to the existing Clouds are necessary. The costs of using commercial Cloud services for processing the LHC data are currently too high.
The European project Helix Nebula - the Science Cloud: big science teams up with big business. It aims to enable the development and exploitation of a Cloud Computing infrastructure based on the needs of European IT-intense scientific research organizations. The scientific partners include the CERN ATLAS Collaboration, the European Molecular Biology Laboratory (EMBL) and the European Space Agency (ESA). Current phase: Proof of Concept.
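The pile-up figures on this slide follow from the machine parameters via the standard relation ⟨μ⟩ = L · σ_inel / (n_bunch · f_rev). A hedged sketch; the inelastic cross section (~80 mb), the LHC revolution frequency, and the nominal bunch count at 25 ns spacing are public LHC parameters I have assumed, not numbers from this talk:

```python
# Rough estimate of the average pile-up <mu> per bunch crossing:
#   <mu> = L * sigma_inel / (n_bunch * f_rev)
# Assumed public LHC parameters (not from the talk):

SIGMA_INEL_CM2 = 8.0e-26   # ~80 mb inelastic pp cross section
F_REV_HZ = 11_245          # LHC revolution frequency
N_BUNCH = 2808             # nominal bunch count at 25 ns spacing

def mean_pileup(luminosity_cm2_s):
    """Average number of pp interactions per bunch crossing."""
    return luminosity_cm2_s * SIGMA_INEL_CM2 / (N_BUNCH * F_REV_HZ)

# At the upgrade luminosity of ~2x10^34 cm^-2 s^-1:
print(f"{mean_pileup(2e34):.0f}")  # roughly 50 interactions per crossing
```

The result, around 50 interactions per crossing at 2x10^34 cm^-2 s^-1, illustrates why the upgrade implies substantially more complex events than the pile-up of ~30 seen in 2012.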

4 The effect of the upcoming LHC upgrade

