
1 The LHC Computing Grid – February 2008: The Challenges of LHC Computing. Frédéric Hemmer, IT Department. 26 January 2010 – Visit of Michael Dell.

2 Collisions at the LHC: summary

3 (image slide, no transcript text)

4 pp collisions at 14 TeV at 10^34 cm^-2 s^-1. How to extract this (Higgs → 4 muons) from this: 20 proton-proton collisions overlap, and this repeats every 25 ns… A very difficult environment. (For comparison: a Z event at LEP, e+e−.)

5 The LHC Computing Challenge
– Signal/Noise: 10^-13 (10^-9 offline)
– Data volume: high rate × large number of channels × 4 experiments → 15 PetaBytes of new data each year
– Compute power: event complexity × number of events × thousands of users → 100k of (today's) fastest CPUs and 45 PB of disk storage
– Worldwide analysis & funding: computing funded locally in major regions & countries; efficient analysis everywhere → GRID technology
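The "15 PetaBytes per year" figure follows from simple arithmetic. Below is a minimal back-of-the-envelope sketch in Python; the per-event size, recording rate and live time are illustrative assumptions typical of a large LHC experiment around 2010, not numbers taken from the slides.

```python
# Back-of-the-envelope check of the "~15 PB of new data each year" figure.
# The event size, trigger rate and live time are illustrative assumptions,
# not numbers quoted in this presentation.

event_size_mb = 1.5        # assumed RAW event size, in megabytes
events_per_second = 250    # assumed rate of events written to storage
seconds_per_year = 1e7     # typical accelerator "live" seconds per year
experiments = 4            # ALICE, ATLAS, CMS, LHCb

raw_pb_per_year = (event_size_mb * 1e6          # bytes per event
                   * events_per_second
                   * seconds_per_year
                   * experiments) / 1e15        # bytes -> petabytes

print(f"Rough RAW data volume: {raw_pb_per_year:.0f} PB/year")
# With these assumptions the RAW volume alone lands in the ~15 PB/year
# range quoted on the slide; derived and simulated data come on top.
```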

6 Collision in an LHC detector

7 On-line data reduction
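To make "on-line data reduction" concrete: collisions occur every 25 ns (a 40 MHz crossing rate, as on slide 4), but only a few hundred events per second can be written out, so the trigger system must reject all but roughly one crossing in 10^5. The Python sketch below multiplies through a two-stage reduction chain; the stage names and output rates are typical published ballpark figures for a general-purpose LHC detector, not numbers from this slide.

```python
# Illustrative on-line data reduction chain for a general-purpose LHC
# detector.  The stage output rates are typical ballpark figures and are
# assumptions, not values taken from this presentation.

crossing_rate_hz = 40e6          # 25 ns bunch spacing -> 40 MHz

trigger_chain = [
    ("Level-1 hardware trigger", 100e3),     # ~100 kHz accepted
    ("High-level trigger (software)", 250),  # ~a few hundred Hz to storage
]

rate = crossing_rate_hz
print(f"Collision crossings: {rate:,.0f} Hz")
for name, out_rate in trigger_chain:
    print(f"{name}: {rate:,.0f} Hz -> {out_rate:,.0f} Hz "
          f"(reduction x{rate / out_rate:,.0f})")
    rate = out_rate

print(f"Overall on-line reduction: x{crossing_rate_hz / rate:,.0f}")
# ~40 MHz down to ~250 Hz is a rejection factor of order 10^5 before any
# event ever reaches the Tier 0 centre.
```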

8 Tier 0 at CERN: acquisition, first-pass processing, storage & distribution

9 Tier 0 – Tier 1 – Tier 2
Tier-0 (CERN): data recording, initial data reconstruction, data distribution
Tier-1 (11 centres): permanent storage, re-processing, analysis
Tier-2 (~130 centres): simulation, end-user analysis
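The tier roles on this slide map naturally onto a small lookup structure. The Python sketch below simply encodes what the slide says; the `describe_tier` helper is hypothetical, for illustration only, and is not part of any real WLCG tool.

```python
# The tier responsibilities exactly as listed on the slide, encoded as a
# small lookup table.  `describe_tier` is a hypothetical helper added for
# illustration; it is not taken from WLCG software.

WLCG_TIERS = {
    "Tier-0": {
        "sites": 1,     # CERN
        "roles": ["data recording",
                  "initial data reconstruction",
                  "data distribution"],
    },
    "Tier-1": {
        "sites": 11,
        "roles": ["permanent storage", "re-processing", "analysis"],
    },
    "Tier-2": {
        "sites": 130,   # approximate number of centres
        "roles": ["simulation", "end-user analysis"],
    },
}

def describe_tier(name: str) -> str:
    tier = WLCG_TIERS[name]
    return f"{name} ({tier['sites']} site(s)): " + ", ".join(tier["roles"])

if __name__ == "__main__":
    for tier_name in WLCG_TIERS:
        print(describe_tier(tier_name))
```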

10 (w)LCG – Project and Collaboration
LCG was set up as a project in 2 phases:
– Phase I – 2002–05 – Development & planning; prototypes. At the end of this phase the computing Technical Design Reports were delivered (1 for LCG and 1 per experiment)
– Phase II – 2006–2008 – Deployment & commissioning of the initial services; programme of data and service challenges
During Phase II, the WLCG Collaboration was set up as the mechanism for the longer term:
– Via an MoU – signatories are CERN and the funding agencies
– Sets out conditions and requirements for Tier 0, Tier 1, Tier 2 services, reliabilities etc. ("SLA")
– Specifies resource contributions – 3-year outlook

11 WLCG Today
Tier 0 (CERN); 11 Tier 1s: Ca-TRIUMF, US-BNL, US-FNAL, Taipei/ASGC, NDGF, UK-RAL, De-FZK, Lyon/CCIN2P3, Barcelona/PIC, Bologna/CNAF, Amsterdam/NIKHEF-SARA; 64 Tier 2 federations (124 Tier 2 sites).
Today we have 49 MoU signatories, representing 34 countries: Australia, Austria, Belgium, Brazil, Canada, China, Czech Rep, Denmark, Estonia, Finland, France, Germany, Hungary, Italy, India, Israel, Japan, Rep. Korea, Netherlands, Norway, Pakistan, Poland, Portugal, Romania, Russia, Slovenia, Spain, Sweden, Switzerland, Taipei, Turkey, UK, Ukraine, USA.

12 Preparation for accelerator start-up
Since 2004 WLCG has been running a series of challenges to demonstrate aspects of the system, with increasing targets for:
– Data throughput
– Workloads
– Service availability and reliability
Recent significant challenges:
– May 2008 – Combined Readiness Challenge: all 4 experiments running realistic work (simulating data taking); demonstrated that we were ready for real data
– June 2009 – Scale Testing: stress and scale testing of all workloads, including massive analysis loads
– Real data – cosmic rays for many months of 2008/9; collision data in Nov/Dec 2009: successful data acquisition, distribution and analysis at full data rates
In essence the LHC Grid service has been running for several years.

13 Data transfer
The full experiment rate needed is 650 MB/s. We want the capability to sustain twice that, to allow Tier 1 sites to shut down and recover. We have demonstrated far in excess of this: all experiments exceeded the required rates for extended periods, and simultaneously, and all Tier 1s have exceeded their target acceptance rates.
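The "twice the nominal rate" requirement is about catching up after an outage. A minimal Python sketch, using only the 650 MB/s figure from the slide plus an assumed outage length, shows why: with 2× headroom the backlog drains in roughly the time it took to build up.

```python
# Why sustain twice the nominal export rate?  Uses the 650 MB/s figure
# from the slide; the outage length is an assumed example value.

nominal_mb_s = 650                 # required aggregate Tier 0 -> Tier 1 rate
sustained_mb_s = 2 * nominal_mb_s  # the capability the slide asks for
outage_hours = 24                  # assumed Tier 1 downtime, for illustration

backlog_tb = nominal_mb_s * outage_hours * 3600 / 1e6   # data piling up at CERN
# While catching up, only the extra headroom (sustained - nominal) eats
# into the backlog, since new data keeps arriving at the nominal rate.
catchup_hours = backlog_tb * 1e6 / (sustained_mb_s - nominal_mb_s) / 3600

print(f"Backlog after {outage_hours} h outage: {backlog_tb:.1f} TB")
print(f"Catch-up time at 2x nominal: {catchup_hours:.1f} h")
# With 2x headroom the recovery takes about as long as the outage itself;
# with anything less, the backlog drains much more slowly.
```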

14 2009 Tier 0 – Tier 1 Data Transfers
(Chart spanning three periods: final readiness test (STEP'09), preparation for LHC startup, and LHC physics data.) More than 8 GB/s peak transfers from Castor fileservers at CERN.

15 Grid Activity – distribution of CPU delivered
The distribution of work across Tier 0 / Tier 1 / Tier 2 really illustrates the importance of the grid system:
– the Tier 2 contribution is ~50%
– >85% is external to CERN
(Chart legend: Tier 2 sites; Tier 0 + Tier 1 sites.)

16 First events

17 WLCG depends on two major science grid infrastructures:
– EGEE – Enabling Grids for E-sciencE
– OSG – US Open Science Grid
Interoperability & interoperation is vital; significant effort has gone into building the procedures to support it.

18 Impact of the LHC Computing Grid in Europe
LCG has been the driving force for the European multi-science Grid EGEE (Enabling Grids for E-sciencE). EGEE is now a global effort, and the largest Grid infrastructure worldwide. It is co-funded by the European Commission (cost: ~170 M€ over 6 years, of which ~100 M€ is funded by the EU).
EGEE is already used for >100 applications, including archaeology, astronomy, astrophysics, civil protection, computational chemistry, earth sciences, finance, fusion, geophysics, high energy physics, life sciences, multimedia, material sciences, …
Scale: >250 sites, 48 countries, >50,000 CPUs, >20 PetaBytes, >10,000 users, >150 VOs, >150,000 jobs/day.

19 Health-e-Child
Application areas shown: similarity search, temporal modelling, visual data mining, genetics profiling, treatment response, inferring outcome, biomechanical models, tumor growth modelling, semantic browsing, personalised simulation, surgery planning, RV and LV automatic modelling, measurement of pulmonary trunk.

20 OSG Today
– Contributing >30% of throughput to ATLAS and CMS in the Worldwide LHC Computing Grid
– Reliant on production and advanced networking from ESNET, LHCNET and Internet2
– Virtual Data Toolkit: common software developed between Computer Science & applications, used by OSG and others

21 Sustainability
Need to prepare for a permanent Grid infrastructure:
– ensure a high quality of service for all user communities
– independent of short project funding cycles
– infrastructure managed in collaboration with National Grid Initiatives (NGIs)
→ European Grid Initiative (EGI)

22 EGI Council
Oct 2009: 33 NGIs, 2 EIRO (CERN, EMBL) + observers (Armenia, Belarus, Georgia, Kazakhstan, Moldova, Russia, Ukraine)

23 NGIs in Europe – www.eu-egi.eu

24 CERN Computing – Tier 0 in numbers
Computing – CPU: 6300 systems / 39k cores (+ planned 2400 systems, 16k cores); used for CPU servers, disk servers, general services
Computing – disk: 14 PB on 42.5k disk drives (+ planned 19 PB on 20k drives)
Computing – tape: 34 PB on 45k tape cartridges; 56k tape slots in robots, 160 tape drives
Computer centre: 2.9 MW usable power, plus ~1.5 MW for cooling
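The raw counts above can be turned into per-unit averages with a few divisions. The Python sketch below re-uses only the numbers quoted on the slide; the arithmetic itself is the illustration, nothing extra is assumed.

```python
# Per-unit averages derived purely from the numbers quoted on the slide.

systems, cores = 6_300, 39_000
disk_pb, disk_drives = 14, 42_500
tape_pb, tape_cartridges = 34, 45_000

print(f"Cores per system:        {cores / systems:.1f}")
print(f"Usable disk per drive:   {disk_pb * 1e3 / disk_drives:.2f} TB")
print(f"Data per tape cartridge: {tape_pb * 1e3 / tape_cartridges:.2f} TB")
# ~6 cores per system, ~0.33 TB of usable disk per drive and ~0.76 TB per
# tape cartridge -- consistent with 2010-era servers and tape media.
```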

25 CERN openlab in brief
A science–industry partnership to drive R&D and innovation. Started in 2002, now in phase 3. Motto: "you make it – we break it". It evaluates state-of-the-art technologies in a very complex environment and improves them: test in a research environment today what will be used in industry tomorrow.
Training: openlab student programme, topical seminars, CERN School of Computing.

26 openlab phase III
Covers 2009–2011. Status – partners: HP, Intel, Oracle and Siemens.
New topics:
– Global wireless coverage for CERN (HP)
– Power-efficient solutions (Intel)
– Performance tuning (Oracle)
– Control systems and PLC security (Siemens)
– Advanced storage systems and/or global file system (partner to be identified)
– 100 Gb/s networking (partner to be identified)

27 openlab people: students in 2009 (Summer 2009)

28 (final slide, no transcript text)

