1 ATLAS DC2 ISGC-2005 Taipei 27th April 2005
Gilbert Poulard (CERN PH-ATC), on behalf of the ATLAS Data Challenges, Grid and Operations teams

2 Overview
Introduction: the ATLAS experiment and the ATLAS Data Challenges program
The ATLAS production system
Data Challenge 2
The 3 Grid flavors (LCG, Grid3 and NorduGrid)
ATLAS DC2 production
Conclusions

3 Introduction: LHC/CERN
[Aerial view of the LHC site at CERN, with Geneva and Mont Blanc (4810 m) indicated.]

4 The challenge of LHC computing
Storage: a raw recording rate of 0.1-1 GByte/s, accumulating at 5-8 PetaBytes/year, with ~10 PetaBytes of disk.
Processing: the equivalent of 200,000 of today's fastest PCs.

5 Introduction: ATLAS
ATLAS is a detector for the study of high-energy proton-proton collisions. The offline computing will have to deal with an output event rate of 200 Hz, i.e. 2 x 10^9 events per year with an average event size of 1.6 MByte. The collaboration's researchers are spread all over the world.
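
As a back-of-envelope cross-check, the yearly data volume follows directly from the event rate and event size quoted above. The sketch below assumes roughly 10^7 seconds of effective data taking per year, a round number that is not taken from the slides:

```python
# Back-of-envelope check of the ATLAS data volume quoted on this slide.
# Assumption (not from the slides): ~1e7 seconds of effective data taking
# per year, a commonly used round number for the LHC.

EVENT_RATE_HZ = 200          # offline output rate (slide 5)
EVENT_SIZE_MB = 1.6          # average event size (slide 5)
LIVE_SECONDS_PER_YEAR = 1e7  # assumed effective data-taking time per year

events_per_year = EVENT_RATE_HZ * LIVE_SECONDS_PER_YEAR
raw_volume_pb = events_per_year * EVENT_SIZE_MB / 1e9  # MB -> PB (1 PB = 1e9 MB)

print(f"events/year ~ {events_per_year:.1e}")   # ~2e9, as quoted on this slide
print(f"raw volume  ~ {raw_volume_pb:.1f} PB")  # ~3 PB, same order of magnitude as
                                                # the 5-8 PB/year accumulation on slide 4
```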

6 Introduction: the ATLAS experiment
~2000 collaborators, ~150 institutes, 34 countries.
Detector dimensions: diameter ... m, barrel toroid length 26 m, end-cap end-wall chamber span 46 m, overall weight ... tons.

7 Introduction: Data Challenges
Scope and goals of the Data Challenges (DCs): validate the Computing Model, the software and the Data Model.
DC1:
- Put in place the full software chain: simulation of the data, digitization, pile-up, reconstruction
- Production system and tools (bookkeeping, monitoring, ...)
- Intensive use of the Grid
- "Build" the ATLAS DC community
DC2 (2004):
- A similar exercise to DC1, BUT using the Grid middleware developed in several projects:
  - the LHC Computing Grid project (LCG), to which CERN is committed
  - Grid3 in the US
  - NorduGrid in the Scandinavian countries

8 ATLAS Production System
In order to handle the task of ATLAS DC2, an automated production system was developed. It consists of 4 components, sketched schematically below:
- the production database, which contains abstract job definitions;
- the Windmill supervisor, which reads the production database for job definitions and presents them to the different Grid executors in an easy-to-parse XML format;
- the executors, one for each Grid flavor, which receive the job definitions in XML format and convert them to the job description language of that particular Grid;
- Don Quijote, the ATLAS Data Management System, which moves files from their temporary output locations to their final destination on some Storage Element and registers the files in the Replica Location Service of that Grid.
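
To make the division of labour concrete, here is a minimal, hypothetical Python sketch of the supervisor/executor pattern described on this slide. It is not the actual Windmill, executor or Don Quijote code; the class names, the transformation name and the file name are invented, and the "job description language" emitted at the end is only a JDL-like stand-in:

```python
# Minimal, hypothetical sketch of the supervisor/executor split described on
# this slide. It is NOT the real Windmill / executor / Don Quijote code; all
# class names, the transformation name and the file names are invented.
import xml.etree.ElementTree as ET


class ProductionDB:
    """Stands in for the production database of abstract job definitions."""
    def __init__(self, jobs):
        self._jobs = list(jobs)

    def next_job(self):
        return self._jobs.pop(0) if self._jobs else None


class LCGExecutor:
    """Receives the XML job definition and converts it to this Grid's own
    job description language (a JDL-like stand-in here)."""
    def submit(self, job_xml):
        job = ET.fromstring(job_xml)
        jdl = (f'Executable = "{job.findtext("transformation")}";\n'
               f'InputData  = "{job.findtext("inputfile")}";')
        print("would submit to the Grid:\n" + jdl)


class Supervisor:
    """Reads job definitions and hands them to a Grid-specific executor as XML."""
    def __init__(self, prod_db, executor):
        self.prod_db, self.executor = prod_db, executor

    def run_once(self):
        job = self.prod_db.next_job()
        if job is None:
            return False
        root = ET.Element("job", id=str(job["id"]))
        ET.SubElement(root, "transformation").text = job["transformation"]
        ET.SubElement(root, "inputfile").text = job["input"]
        self.executor.submit(ET.tostring(root, encoding="unicode"))
        return True


db = ProductionDB([{"id": 1, "transformation": "simulate_events", "input": "evgen.pool.root"}])
Supervisor(db, LCGExecutor()).run_once()
```

The point of the design is that the supervisor and the production database stay Grid-agnostic; all Grid-specific knowledge is confined to the executor, so supporting a new Grid flavor only requires a new executor.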

9 DC2 production phases
[Task-flow diagram for the DC2 data: Pythia event generation (HepMC events + MCTruth) -> Geant4 detector simulation (hits + MCTruth) -> digitization, with or without pile-up (digits/RDO + MCTruth) -> byte-stream conversion and event mixing (byte-stream raw digits) -> reconstruction (ESD). Persistency: Athena-POOL. Quoted data volumes for 10^7 events: 20 TB, 5 TB, 20 TB, 30 TB and ~5 TB across the physics-event, min.-bias, piled-up, mixed and mixed-with-pile-up samples.]
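
Reading the diagram as a simplified, linear pipeline, the stages and their data products can be summarised as follows; this is a descriptive sketch using the product names from the slide, not ATLAS software:

```python
# Descriptive sketch of the DC2 task flow above as a simplified linear chain.
# The stage and data-product names follow the slide; this is not ATLAS software.
from collections import namedtuple

Stage = namedtuple("Stage", "name tool produces")

DC2_TASK_FLOW = [
    Stage("event generation",           "Pythia",              "HepMC events + MCTruth"),
    Stage("detector simulation",        "Geant4",              "hits + MCTruth"),
    Stage("digitization (+/- pile-up)", "Athena digitization", "digits (RDO) + MCTruth"),
    Stage("event mixing / byte-stream", "byte-stream maker",   "byte-stream raw digits"),
    Stage("reconstruction",             "Athena reco",         "ESD"),
]

# Each stage consumes the persistified (Athena-POOL) output of the previous one.
for upstream, downstream in zip(DC2_TASK_FLOW, DC2_TASK_FLOW[1:]):
    print(f"{upstream.tool}: {upstream.produces}  -->  {downstream.name}")
```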

10 DC2 production phases

Process                     | No. of events | Event size (MB) | CPU time/event (kSI2k-s) | Volume of data (TB)
Event generation            | 5 x 10^7      | 0.06            | --                       | 3
Simulation                  | 10^7          | 2               | 520                      | 30
Pile-up / Digitization      | 3 x 10^6      | 3               | 150 / 6                  | 15 / 20
Event mixing & Byte-stream  | --            | --              | --                       | 5.4

ATLAS DC2 started in July 2004. The simulation part was finished by the end of September and the pile-up and digitization parts by the end of November. 10 million events were generated, fully simulated and digitized, and ~2 million events were "piled-up". Event mixing and reconstruction were done for 2.4 million events in December. The Grid technology has provided the tools to perform this "massive" worldwide production.
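
A quick arithmetic check on the table: the simulation row alone already accounts for essentially all of the CPU quoted later in the talk. The conversion below assumes a month of about 2.6 x 10^6 seconds:

```python
# Cross-check of the CPU budget using the simulation row of the table above.
SIM_EVENTS = 1e7              # fully simulated events
CPU_PER_EVENT_KSI2K_S = 520   # kSI2k-seconds per simulated event (from the table)
SECONDS_PER_MONTH = 2.6e6     # ~30 days; conversion assumption, not from the slides

total_ksi2k_months = SIM_EVENTS * CPU_PER_EVENT_KSI2K_S / SECONDS_PER_MONTH
print(f"simulation CPU ~ {total_ksi2k_months:.0f} kSI2k-months")
# ~2000 kSI2k-months, i.e. ~2 MSI2k-months: simulation alone already accounts
# for essentially all of the CPU total quoted on slides 19 and 22.
```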

11 The 3 Grid flavors
LCG (http://lcg.web.cern.ch/LCG/): the job of the LHC Computing Grid project (LCG) is to prepare the computing infrastructure for the simulation, processing and analysis of LHC data for all four LHC collaborations. This includes both the common infrastructure of libraries, tools and frameworks required to support the physics application software, and the development and deployment of the computing services needed to store and process the data, providing batch and interactive facilities for the worldwide community of physicists involved in the LHC.
Grid3: the Grid3 collaboration has deployed an international Data Grid with dozens of sites and thousands of processors. The facility is operated jointly by the US Grid projects iVDGL, GriPhyN and PPDG and by the US participants in the LHC experiments ATLAS and CMS.
NorduGrid: the aim of the NorduGrid collaboration is to deliver a robust, scalable, portable and fully featured solution for a global computational and data Grid system. NorduGrid develops and deploys a set of tools and services, the so-called ARC middleware, which is free software.
Both Grid3 and NorduGrid take a similar approach, building on the same foundation (Globus) as LCG but with slightly different middleware.
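
To illustrate what "slightly different middleware" means for the executors, the sketch below renders one and the same abstract job in two of the dialects they had to target. The attribute sets shown are simplified approximations of LCG JDL and NorduGrid xRSL written from memory, and the job parameters are invented, so treat the output as indicative rather than authoritative:

```python
# Rough illustration of one abstract job rendered in two Grid dialects.
# The attribute names are simplified approximations of LCG JDL and NorduGrid
# xRSL (written from memory, not authoritative); the job values are invented.
job = {"executable": "run_simulation.sh", "arguments": "dc2.simul.0001", "cpu_minutes": 1200}

def to_lcg_jdl(j):
    # ClassAd-style JDL as used with the LCG-2 resource broker (simplified)
    return (f'Executable   = "{j["executable"]}";\n'
            f'Arguments    = "{j["arguments"]}";\n'
            f'Requirements = other.GlueCEPolicyMaxCPUTime >= {j["cpu_minutes"]};')

def to_nordugrid_xrsl(j):
    # Globus-RSL-derived xRSL as used by the NorduGrid/ARC middleware (simplified)
    return (f'&(executable="{j["executable"]}")'
            f'(arguments="{j["arguments"]}")'
            f'(cpuTime="{j["cpu_minutes"]} minutes")')

print(to_lcg_jdl(job))
print(to_nordugrid_xrsl(job))
```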

12 The 3 Grid flavors: LCG-2
The number of sites and the resources are evolving quickly.

13 The 3 Grid flavors: NorduGrid
NorduGrid is a research collaboration established mainly across the Nordic countries, but it includes sites from other countries as well. It contributed a significant part of DC1 (using the Grid in 2002). It supports production on several operating systems (non-RedHat 7.3 platforms).
More than 10 countries, 40+ sites, ~4000 CPUs, ~30 TB of storage.

14 The 3 Grid flavors: Grid3
As of September 2004: 30 sites with multi-VO shared resources, ~3000 (shared) CPUs.
The deployed infrastructure has been in operation since November 2003 and is currently running 3 HEP and 2 biology applications. Over 100 users are authorized to run in Grid3.

15 ATLAS DC2: countries (sites)
Australia (1), Austria (1), Canada (4), CERN (1), Czech Republic (2), Denmark (4), France (1), Germany (1+2), Italy (7), Japan (1), Netherlands (1), Norway (3), Poland (1), Slovenia (1), Spain (3), Sweden (7), Switzerland (1), Taiwan (1), UK (7), USA (19).
In total: 20 countries, 69 sites (sub-totals quoted on the slide: 13 countries / 31 sites and 7 countries / 19 sites).

16 ATLAS DC2 production (total)

17 ATLAS DC2 production

18 ATLAS Production (July 2004 - April 2005)
[Production plot covering three periods: DC2 (short jobs period), DC2 (long jobs period) and Rome production (mix of jobs).]

19 Jobs
Total, as of 30 November 2004: 20 countries, 69 sites, ~260,000 jobs, ~2 MSI2k-months of CPU.

20 Lessons learned from DC2
Main problems:
- The production system was still under development during DC2.
- The beta status of the Grid services caused trouble while the system was in operation; for example, the Globus RLS, the Resource Broker and the information system were unstable in the initial phase.
- Lack of a uniform monitoring system, especially on LCG.
- Mis-configuration of sites and site-stability problems.
But also:
- Human errors (for example an expired proxy, or bad registration of files).
- Network problems (connection lost between two processes).
- Data Management System problems (e.g. the connection with the mass storage system).

21 Lessons learned from DC2
Main achievements:
- A large-scale production was run on the Grid ONLY, using the 3 Grid flavors.
- An automatic production system was operated, making use of the Grid infrastructure.
- A few tens of TB of data were moved among the different Grid flavors using the Don Quijote (ATLAS Data Management) servers.
- ~260,000 jobs were submitted by the production system, the corresponding logical files were produced, and ~2,500 jobs were run per day.

22 Conclusions
The generation, simulation and digitization of events for ATLAS DC2 have been completed using 3 flavors of Grid technology (LCG, Grid3, NorduGrid).
They have been proven to be usable in a coherent way for a real production, and this is a major achievement.
This exercise has taught us that all the elements involved (Grid middleware, production system, deployment and monitoring tools, ...) need improvements.
From July to the end of November 2004, the automatic production system submitted ~260,000 jobs, which consumed ~2000 kSI2k-months of CPU and produced more than 60 TB of data.
If the on-going production is included, the totals reach more than 100 TB of data and ~500 kSI2k-years of CPU.

