Current Monte Carlo calculation activities in ATLAS (ATLAS Data Challenges) Oxana Smirnova LCG/ATLAS, Lund University SWEGRID Seminar (April 9, 2003, Uppsala)


1 Current Monte Carlo calculation activities in ATLAS (ATLAS Data Challenges) Oxana Smirnova LCG/ATLAS, Lund University SWEGRID Seminar (April 9, 2003, Uppsala)

2 ATLAS: preparing for data taking (2003-04-09, oxana.smirnova@hep.lu.se)

3 Currently @ Data Challenge 1 (DC1)
- Event generation completed during DC0
- Main goals of DC1:
  - Produce simulated data for the High Level Trigger & Physics Groups
  - Reconstruction & analysis on a large scale: learn about the data model and I/O performance, identify bottlenecks, etc.
  - Data management: use/evaluate persistency technology; learn about distributed analysis
  - Involvement of sites outside CERN
  - Use of Grid as and when possible and appropriate

4 DC1, Phase 1: Task Flow
Example: one sample of di-jet events
- Event generation: Pythia6 produces 1.5 × 10^7 di-jet events, split into partitions (read: ROOT files) of 10^5 events each (Athena-Root I/O, HepMC)
- Detector simulation: 20 jobs per partition, each running Atlsim/Geant3 + Filter on 5000 events (~450 pass the filter), writing Hits/Digits and MCTruth as ZEBRA output
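The bookkeeping implied by the task-flow slide can be sketched in a few lines; the per-partition size of 10^5 events is inferred from the slide's figures (20 jobs × 5000 events), so treat it as an assumption rather than a quoted number.

```python
# Sketch of the DC1 di-jet task-flow arithmetic (figures from the slide;
# PARTITION_SIZE is inferred as 20 jobs x 5000 events = 10^5).
TOTAL_EVENTS = 15_000_000        # 1.5 x 10^7 Pythia6 di-jet events
PARTITION_SIZE = 100_000         # events per partition (one ROOT file) -- assumption
JOBS_PER_PARTITION = 20          # detector-simulation jobs per partition
EVENTS_PER_JOB = PARTITION_SIZE // JOBS_PER_PARTITION  # 5000 events per job
FILTERED_PER_JOB = 450           # events surviving the filter (approximate)

n_partitions = TOTAL_EVENTS // PARTITION_SIZE          # number of ROOT files
n_sim_jobs = n_partitions * JOBS_PER_PARTITION         # total simulation jobs
filter_efficiency = FILTERED_PER_JOB / EVENTS_PER_JOB  # fraction kept by the filter

print(n_partitions, n_sim_jobs, EVENTS_PER_JOB, f"{filter_efficiency:.0%}")
```

Under these assumptions the sample decomposes into 150 partitions and 3000 simulation jobs, with the filter keeping roughly 9% of generated events.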

5 Piling up events

6 Future: DC2-3-4-…
- DC2: originally Q3/2003 – Q2/2004; will be delayed
  - Goals:
    - Full deployment of Event Data Model & Detector Description
    - Transition to the new generation of software tools and utilities
    - Test the calibration and alignment procedures
    - Perform large-scale physics analysis
    - Further tests of the computing model
  - Scale: as for DC1, ~10^7 fully simulated events
- DC3: Q3/2004 – Q2/2005; goals to be defined; scale: 5 × DC2
- DC4: Q3/2005 – Q2/2006; goals to be defined; scale: 2 × DC3
- Sweden can try to provide a ca. 3-5% contribution (?)

7 DC requirements so far
Integrated DC1 numbers:
- 50+ institutes in 20+ countries; Sweden enters together with the other Nordic countries via NorduGrid
- 3500 "normalized CPUs" (80 000 CPU-days); Nordic share: equivalent of 320 "normalized CPUs" (ca. 80 in real life)
- 5 × 10^7 events generated (no Nordic participation)
- 1 × 10^7 events simulated (Nordic: ca. 3 × 10^5)
- 100 TB produced (135 000 output files); Nordic: ca. 2 TB, 4600 files
More precise quantification is VERY difficult because of order-of-magnitude differences in complexity between physics channels and processing steps:
1. CPU time consumption: largely unpredictable, VERY irregular
2. OS: GNU/Linux, 32-bit architecture
3. Inter-processor communication: never a concern so far (no MPI needed)
4. Memory consumption: depends on the processing step/data set; so far 512 MB has been enough
5. Data volumes: vary from KB to GB per job
6. Data access pattern: mostly unpredictable, irregular
7. Databases: each worker node is expected to be able to access a remote database
8. Software: under constant development, will certainly exceed 1 GB, includes multiple dependencies on HEP-specific software, sometimes licensed
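A quick cross-check of the Nordic fractions implied by these integrated numbers (all figures taken from the slide) shows they land in the few-percent range, consistent with the 3-5% contribution discussed for future Data Challenges:

```python
# Nordic share of the integrated DC1 numbers quoted above.
shares = {
    "normalized CPUs":  (320, 3500),
    "simulated events": (3e5, 1e7),
    "data volume (TB)": (2, 100),
    "output files":     (4600, 135_000),
}
for name, (nordic, total) in shares.items():
    print(f"{name}: {nordic / total:.1%}")
```

This yields roughly 9% of the normalized CPU capacity but 2-3% of the simulated events and data volume, which illustrates the slide's point that CPU numbers alone quantify the contribution poorly.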

8 And a bit about Grid
- ATLAS DC has run on Grid since summer 2002 (NorduGrid, US Grid)
- Future DCs will be to a large extent (if not entirely) gridified
  - Allocated computing facilities must have all the necessary Grid middleware (but ATLAS will not provide support)
- Grids that we tried:
  - NorduGrid: a Globus-based solution developed in the Nordic countries; provides a stable and reliable facility and executes all of the Nordic share of the DCs
  - US Grid (iVDGL): basically Globus tools, hence missing high-level services, but still serves ATLAS well, executing ca. 10% of the US DC share
  - EU DataGrid (EDG): a far more complex solution (but also Globus-based); still in development and not yet suitable for production, though it can perform simple tasks; did not contribute to the DCs
- Grids that are coming:
  - LCG: will initially be strongly based on EDG, hence may not be reliable before 2004
  - EGEE: another continuation of EDG, still in the proposal-preparation stage
  - Globus is moving towards a Grid Services architecture, which may imply major changes both in existing solutions and in planning

