
1 Grid Computing for LHC – Jürgen Knobloch, 2 November 2001

2 LHC Computing Challenges
•Data volume
•Data rate
•Data complexity
•CPU power
•World-wide distributed analysis
•Long project lifetime
•Competition

3 Rare Phenomena - Huge Background
The Higgs signal lies about 9 orders of magnitude below the rate of all interactions.

4 CPU Requirements
•Complex events
–Large number of signals
–“Good” signals are buried in background
•Many events
–10^9 events/experiment/year
–1-25 MB/event raw data
–Several passes required
⇒ Need world-wide: 7×10^6 SPECint95 (3×10^8 MIPS)
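A quick back-of-envelope check of these figures, as a sketch in Python: all inputs come from the slide itself, and the MIPS-per-SPECint95 ratio is simply inferred from the two totals quoted rather than stated on the slide.

# Sanity check of the numbers on the "CPU Requirements" slide.
# All inputs come from the slide; the MIPS/SPECint95 ratio is derived, not quoted.

events_per_year = 1e9                        # events per experiment per year
mb_per_event_lo, mb_per_event_hi = 1, 25     # raw data per event, MB

raw_pb_lo = events_per_year * mb_per_event_lo / 1e9   # 1e9 MB = 1 PB
raw_pb_hi = events_per_year * mb_per_event_hi / 1e9
print(f"Raw data per experiment: {raw_pb_lo:.0f}-{raw_pb_hi:.0f} PB/year")

# Relating the two CPU totals quoted on the slide:
specint95_total = 7e6                        # world-wide need, SPECint95
mips_total = 3e8                             # the same need expressed in MIPS
print(f"Implied conversion: ~{mips_total / specint95_total:.0f} MIPS per SPECint95")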

5 Complex Data = More CPU Per Byte (les.robertson@cern.ch)
[Chart: estimated CPU capacity required at CERN, 1998-2010, in K SI95, split into LHC and other experiments; installed capacity in Jan 2000 was 3.5K SI95.]
Moore’s law – some measure of the capacity technology advances provide for a constant number of processors or investment.
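To make the chart's point concrete, a rough sketch follows; the 18-month doubling period and the constant-investment reading of Moore's law are assumptions for illustration, not data from the slide.

# Illustration only: project the Jan 2000 installed capacity forward assuming
# capacity per unit investment doubles every ~18 months (assumed, not from the slide).

baseline_ksi95 = 3.5      # CERN capacity in Jan 2000, in K SI95 (from the slide)
doubling_years = 1.5      # assumed Moore's-law doubling period

for year in range(2000, 2008):
    capacity = baseline_ksi95 * 2 ** ((year - 2000) / doubling_years)
    print(f"{year}: ~{capacity:5.1f}K SI95 from technology growth alone")

Even with steady doubling, a constant investment reaches only about 90K SI95 by 2007, while the chart's axis for the required capacity extends to 5,000K SI95; hence the interest in a distributed, multi-site solution.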

6 Data Volume
•Raw data storage: 7 PB/a
•Simulated data: 3 PB/a
•World-wide tape: 28.5 PB/a
For scale: 50 CD-ROMs hold 33 GB and stack 6 cm high, so 10 PB corresponds to a stack about 18 km tall.
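The CD-ROM comparison follows directly from the slide's own numbers; a small sketch of the unit conversion:

# Reproducing the slide's CD-ROM comparison (all inputs are the slide's numbers).

gb_per_stack = 33.0        # 50 CD-ROMs hold about 33 GB
cm_per_stack = 6.0         # and stand about 6 cm tall

data_pb = 10.0             # the 10 PB figure used on the slide
data_gb = data_pb * 1e6    # 1 PB = 1e6 GB

stacks = data_gb / gb_per_stack
height_km = stacks * cm_per_stack / 1e5    # cm -> km
print(f"{data_pb:.0f} PB on CD-ROM: a stack about {height_km:.0f} km tall")   # ~18 km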

7 The 4 LHC Experiments: ATLAS, CMS, ALICE, LHCb

8 On-line System
•Multi-level trigger
•Filter out background
•Reduce data volume
•24 x 7 operation
Trigger chain: 40 MHz (1000 TB/sec) → Level 1, special hardware → 75 kHz (75 GB/sec) → Level 2, embedded processors → 5 kHz (5 GB/sec) → Level 3, farm of commodity CPUs → 100 Hz (100 MB/sec) → data recording & offline analysis
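The rates quoted on the slide imply the per-level rejection factors below; this sketch uses only the slide's numbers.

# Rejection factors implied by the trigger chain (rates and bandwidths from the slide).

stages = [
    ("Collisions",     40e6,  "1000 TB/sec"),
    ("After Level 1",  75e3,  "75 GB/sec"),
    ("After Level 2",   5e3,  "5 GB/sec"),
    ("After Level 3",  100.0, "100 MB/sec"),
]

for (src, rate_in, _), (dst, rate_out, bw) in zip(stages, stages[1:]):
    print(f"{src} -> {dst}: factor ~{rate_in / rate_out:,.0f} ({bw} out)")

print(f"Overall: about 1 event in {stages[0][1] / stages[-1][1]:,.0f} is recorded")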

9 World-wide Collaboration
•Europe: 267 institutes, 4603 users
•Elsewhere: 208 institutes, 1632 users

10 Regional Centres - a Multi-tier Model (les.robertson@cern.ch)
•CERN at the top of the hierarchy
•Tier 1 regional centres: FermiLab (USA), Rutherford (UK), IN2P3/Lyon (France), CNAF/Bologna (Italy), NIKHEF (NL)
•Tier 2: labs and universities (Lab a, Uni b, Lab c, Uni n, ...)
•Physics department desktops
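A minimal sketch of the hierarchy the diagram shows, listing the Tier 1 centres named on the slide; the Tier 2 and desktop entries mirror the diagram's generic placeholders.

# Multi-tier model from the slide; Tier 2 names are the diagram's placeholders.

tiers = {
    "CERN":    ["CERN"],
    "Tier 1":  ["FermiLab (USA)", "Rutherford (UK)", "IN2P3/Lyon (France)",
                "CNAF/Bologna (Italy)", "NIKHEF (NL)"],
    "Tier 2":  ["Lab a", "Uni b", "Lab c", "Uni n"],
    "Desktop": ["Physics department desktops"],
}

for level, sites in tiers.items():
    print(f"{level}: {', '.join(sites)}")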

11 CERN Computer Centre Today

12 HEP Grid Initiatives
•DataGRID
–European Commission support; HEP, Earth Observation, Biology
•PPDG – Particle Physics Data Grid
–US labs – HEP data analysis
–High-performance file transfer, data caching
•GriPhyN – Grid Physics Network
–Computer science focus
–HEP applications as target
•Several national European initiatives
–Italy – INFN
–UK, France, Netherlands, ...

13 The EU DataGRID Project
•EU contract signed by all 21 partners for about 10 million euros of EU funding, mainly for personnel
•Project to start early 2001; duration 3 years
•Deliverables: middleware tried out with particle-physics, Earth-observation and biomedical applications
•Flagship project of the EU IST GRID programme
•Potentially important role with IST CPA and RN; EU calls for proposals in 2001

14 DataGRID Partners
Principal contractors: PPARC (UK), INFN (Italy), CNRS (France), NIKHEF (Holland), ESA/ESRIN (Italy/Europe), CERN (Fabrizio Gagliardi)
Industry: IBM (UK), CS-SI (F), Datamat (I)
Assistant contractors: Helsinki Institute of Physics & CSC, Swedish Natural Science Research Council (Parallelldatorcentrum–KTH, Karolinska Institute), Istituto Trentino di Cultura, Zuse Institut Berlin, University of Heidelberg, CEA/DAPNIA (F), IFAE Barcelona, CNR (I), CESNET (CZ), KNMI (NL), SARA (NL), SZTAKI (HU)

15 EU-Grid Project Work Packages

16 New opportunities
•Transatlantic test-bed (between GriPhyN and EU-DataGrid)
•Global Grid Forum organized by DataGrid in Amsterdam 3/2001; next: Vienna (Virginia) 7/2001
•Greek, Polish and Spanish (and other?) consortia (HEP and other sciences) want to extend middleware development and test-beds to their geographical areas and to other applications (meteorology, biology, etc.)
•Proposal from EBI/EMBL to join in a European Bio-Grid

17 Information Infrastructure for e-science (from Dr. John Taylor)
[Diagram: current situation – ad hoc client-server links between the scientist and separate experiment, computing, HPC, storage and analysis resources; example: CLRC Daresbury.]

18 Information Utilities: Infrastructure for e-science (from Dr. John Taylor)
[Diagram: the “one stop shop” – an information utility in which middleware connects the scientist to experiment, computing, storage and analysis resources; example: CLRC Daresbury.]

19 Technology domains for solutions
[Diagram: layered view – application, grid, fabric – seen from the user view and the developer view.]

20 Applications
•HEP
–The four LHC experiments
–Live testbed for the Regional Centre model
•Earth Observation
–ESA-ESRIN
–KNMI (Dutch meteo) climatology
–Processing of atmospheric ozone data derived from ERS GOME and ENVISAT SCIAMACHY sensors
•Biology
–CNRS (France), Karolinska (Sweden)
–Application being defined

21 GEANT4 Collaboration
Member institutes shown include the Budker Institute of Physics, IHEP Protvino, MEPHI Moscow and the University of Pittsburgh.

22 Geant4 Organization
•Collaboration based on an MoU, consisting of
–Laboratories: CERN, SLAC, KEK, TRIUMF, Jefferson, ...
–Experiments: ATLAS, CMS, LHCb, BaBar, HARP, ...
–Institutes, groups: IN2P3, IGD, TERA, Frankfurt, ...
for the production service, maintenance and development of Geant4
•About 80 collaborators worldwide – physicists and computer scientists
•Geant4 was developed by the worldwide RD44 project
–RD44 ended with the first production release in December 1998

23 GEANT4 Users - Space
•GLAST (Gamma-ray Large Area Space Telescope)
•ESA XMM X-ray telescope

24 GEANT4 Users - HEP
•ATLAS at LHC, CERN
•Borexino at Gran Sasso Laboratory
•BaBar at SLAC
•CMS at LHC, CERN

25 GEANT4 Users - Medical
•Low-energy photons: photon attenuation, brachytherapy treatment
[Plot courtesy of the Italian National Institute for Cancer Research; energy axis in MeV.]

26 LHC Computing Challenges
•Data volume
•Data rate
•CPU power
•Data complexity
•World-wide distributed analysis
•Long project lifetime
•Competition
Hardware cost: 240 MCHF, commodity pricing (1/3 at CERN); GRID SW components; open source; high-speed networks

27 Compilers & Systems
•Removing legacy UNIX systems
•Concentrate on LINUX on Intel
•Solaris on SUN as second platform
•Most new developments in C++
•Still legacy FORTRAN software to keep alive
•Debugging tools on LINUX not yet at the required level

