1 The ALICE Framework at GSI Kilian Schwarz ALICE Meeting August 1, 2005

2 Overview
 The ALICE framework
 Which parts of the ALICE framework are installed where at GSI, and how they can be accessed and used
 The ALICE computing model (Tier architecture)
 Resource consumption of the individual tasks
 Resources at GSI and GridKa

3 ALICE Framework (diagram by F. Carminati, CERN)
AliRoot is built on top of ROOT: the STEER module drives simulation and reconstruction, the Virtual MC interface gives uniform access to the transport codes G3, G4 and FLUKA, event generators (HIJING, MEVSIM, PYTHIA6, ISAJET, PDF, EVGEN) feed the simulation, the detector modules (ITS, TPC, TRD, TOF, PHOS, PMD, MUON, RICH, EMCAL, ZDC, FMD, CRT, START, STRUCT) implement the subdetectors, and the analysis packages (RALICE, HBTP, HBTAN) plus AliEn for Grid access complete the framework.

4 Software installed at GSI: AliRoot
 Installed at: /d/alice04/PPR/AliRoot
 Newest version: AliRoot v4-03-03
 Environment setup via:
   > . gcc32login
   > . alilogin dev/new/pro/version-number
 gcc295-04 is not supported anymore
 the corresponding ROOT version is initialized, too
 Responsible person: Kilian Schwarz
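A minimal sketch of a typical AliRoot session at GSI, assuming the "pro" version tag is wanted; that the login script exports ALICE_ROOT is an assumption, not stated on the slide:

  # select the gcc 3.2 toolchain and the production AliRoot version
  . gcc32login
  . alilogin pro            # also initializes the matching ROOT version
  # check what was set up (ALICE_ROOT being exported is an assumption)
  echo $ALICE_ROOT
  which aliroot
  aliroot -b -q             # start AliRoot in batch mode and quit again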

5 Software installed at GSI: ROOT (AliRoot is heavily based on ROOT)
 Installed at: /usr/local/pub/debian3.0/gcc323-00/rootmgr
 Newest version: 502-00
 Environment setup via: > . gcc32login, then . alilogin or . rootlogin
 Responsible persons: Joern Adamczewski / Kilian Schwarz
 See also: http://www-w2k.gsi.de/root
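A quick sketch for checking which ROOT version the login scripts picked up; rootlogin is assumed to take no arguments here, following the same scheme as the other login scripts:

  . gcc32login
  . rootlogin
  root-config --version     # should report 5.02/00 for the newest installation
  root -l -b -q             # start and immediately quit a batch ROOT session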

6 Software installed at GSI: geant3 (needed for simulation; accessed via the VMC)
 Installed at: /d/alice04/alisoft/PPR/geant3
 Newest version: v1-3
 Environment setup via gcc32login/alilogin
 Responsible person: Kilian Schwarz

7 Software at GSI: geant4/Fluka (simulation; accessed via the VMC)
 Both are so far not heavily used by ALICE
 Geant4: standalone versions up to G4.7.1
 Newest VMC version: geant4_vmc_1.3
 Fluka: not installed so far by me
 Environment setup via: > . gsisimlogin [-vmc] dev/new/prod/version
 See also: http://www-linux.gsi.de/~gsisim/g4vmc.html
 Responsible person: Kilian Schwarz
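A minimal sketch of selecting the Geant4 VMC environment, assuming the "new" version tag; only the -vmc flag is documented on the slide, everything else is an assumption:

  . gcc32login
  . gsisimlogin -vmc new    # select Geant4 together with the VMC libraries
  # the geant4_vmc libraries should now be on the library path (assumption)
  echo $LD_LIBRARY_PATH | tr ':' '\n' | grep -i geant4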

8 Software at GSI: event generators (task: simulation)
 Installed at: /d/alice04/alisoft/PPR/evgen
 Available: Pythia5, Pythia6, Venus
 Responsible person: Kilian Schwarz

9 Software at GSI: AliEn, the ALICE Grid Environment
 Currently being set up in version 2 (AliEn2)
 Installed at: /u/aliprod/alien
 Idea: global production and analysis
 Environment setup via: > . alienlogin
 Copy certs from /u/aliprod/.globus or register your own certs
 Usage: /u/aliprod/bin/alien (proxy-init/login)
 Then: register files and submit Grid jobs
 Or: directly from ROOT!
 Status: a global AliEn2 production testbed is currently being set up; it will be used for LCG SC3 in September
 Individual analysis of globally distributed Grid data at the latest during LCG SC4 (2006) via AliEn/LCG/PROOF
 Non-published analysis is possible already now:
   - create an AliEn-ROOT collection (an xml file readable via AliEn)
   - analyse via ROOT/PROOF: TFile::Open("alien://alice/cern.ch/production/…")
   - a web frontend is being created via ROOT/QT
 Responsible person: Kilian Schwarz
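A minimal sketch of opening a catalogued file from ROOT through the AliEn back end; TGrid::Connect is the standard ROOT entry point, while the file name AliESDs.root and the path after the truncated "production/…" are only placeholders:

  . alilogin pro
  . alienlogin
  /u/aliprod/bin/alien proxy-init     # authenticate with the Grid certificate
  root -l -b <<'EOF'
    // connect to AliEn from ROOT
    TGrid::Connect("alien://");
    // open a catalogued file; the path is a placeholder, not a real production file
    TFile *f = TFile::Open("alien:///alice/cern.ch/production/.../AliESDs.root");
    if (f && !f->IsZombie()) f->ls();
    .q
  EOF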

10 AliEn2 services (see http://alien.cern.ch)
Diagram: the ALICE VO central services (central task queue for job submission, file catalogue, configuration, accounting, user authentication) talk to the AliEn site services at each site (cluster monitor, computing element with workload management and job monitoring, storage element(s) with data transfer and DB), which in turn integrate the existing site components (local scheduler, disk and MSS).

11 Software at GSI: Globus
 Installed at: /usr/local/globus2.0 and /usr/local/grid/globus
 Versions: globus2.0 and 2.4
 Idea: can be used to send batch jobs to GridKa (far more resources available than at GSI)
 Environment setup via: > . globuslogin
 Usage:
   > grid-proxy-init (Grid certificate needed!)
   > globus-job-run/submit alice.fzk.de Grid/Batch job
 Responsible persons: Victor Penso / Kilian Schwarz
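A sketch of sending a simple job to the GridKa gatekeeper with the Globus 2 tools named above; /bin/hostname is just a stand-in for a real batch executable:

  . globuslogin
  grid-proxy-init                          # asks for the Grid certificate pass phrase
  # run a command synchronously on the GridKa gatekeeper
  globus-job-run alice.fzk.de /bin/hostname
  # or submit it asynchronously and check on it later
  JOBID=$(globus-job-submit alice.fzk.de /bin/hostname)
  globus-job-status "$JOBID"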

12 GermanGrid CA. How to get a certificate is described in detail at http://wiki.gsi.de/Grid/DigitalCertificates

13 Software at GSI: LCG
 Installed at: /usr/local/grid/lcg
 Newest version: LCG2.5
 Idea: global batch farm
 Environment setup via: > . lcglogin
 Usage:
   > grid-proxy-init (Grid certificate needed!)
   > edg-job-submit <jdl-file> (submits a batch/Grid job)
 See also: http://wiki.gsi.de/Grid
 Responsible persons: Victor Penso, Anar Manafov, Kilian Schwarz
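A minimal submission sketch; the JDL attributes are the standard EDG/LCG ones, and hello.sh is a hypothetical script that must exist in the current directory:

  . lcglogin
  grid-proxy-init
  cat > hello.jdl <<'EOF'
  Executable    = "hello.sh";
  StdOutput     = "hello.out";
  StdError      = "hello.err";
  InputSandbox  = {"hello.sh"};
  OutputSandbox = {"hello.out", "hello.err"};
  EOF
  edg-job-submit hello.jdl                 # prints the job identifier (an https URL)
  # edg-job-status <job-id> and edg-job-get-output <job-id> follow the job up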

14 LCG: the LHC Computing Grid project (with ca. 11k CPUs the world's largest Grid testbed)

15 Software at GSI: PROOF
 Installed at: /usr/local/pub/debian3.0/gcc323-00/rootmgr
 Newest version: ROOT 502-00
 Idea: parallel analysis of larger data sets for quick/interactive results
 A personal PROOF cluster at GSI, integrated into the batch farm, can be set up via:
   > prooflogin (options: e.g. number of slaves, data to be analysed, -h for help)
 See also: http://wiki.gsi.de/Grid/TheParallelRootFacility
 Later a personal PROOF cluster spanning GSI and GridKa via Globus will be possible
 Later a global PROOF cluster via AliEn/D-Grid will be possible
 Responsible persons: Carsten Preuss, Robert Manteufel, Kilian Schwarz
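A sketch of starting a personal PROOF session; the prooflogin options are site specific (see its -h output), and the connect call mirrors the session shown on the next slide:

  . rootlogin
  prooflogin -h                            # list the available options (slaves, data set, ...)
  # once prooflogin has started the personal cluster, connect from a ROOT session:
  root -l <<'EOF'
    gROOT->Proof("remote");   // "remote" stands for the master host assigned by prooflogin
    .q
  EOF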

16 Parallel Analysis of Event Data (diagram)
Local PC: the user runs ana.C from a plain ROOT session; stdout and result objects come back to the local session.
Remote PROOF cluster: one proof master server and several proof slave servers (node1..node4, listed in #proof.conf), each reading the *.root event files via TFile/TNetFile.
Typical session:
  $ root
  root [0] tree.Process("ana.C")          // local processing
  root [1] gROOT->Proof("remote")         // connect to the remote PROOF master
  root [2] dset->Process("ana.C")         // process the data set in parallel on the cluster

17 LHC Computing Model (MONARC and cloud)
 One Tier 0 site at CERN for data taking; ALICE (Tier 0+1) in 2008: 500 TB disk (8%), 2 PB tape, 5.6 MSI2k (26%)
 Multiple Tier 1 sites for reconstruction and scheduled analysis: 3 PB disk (46%), 3.3 PB tape, 9.1 MSI2k (42%)
 Tier 2 sites for simulation and user analysis: 3 PB disk (46%), 7.2 MSI2k (33%)

18 ALICE Computing model in more detail:
 T0 (CERN): long-term storage of raw data, calibration, and first reconstruction
 T1 (5 sites, in Germany GridKa): long-term storage of a second copy of the raw data, 2 subsequent reconstructions, scheduled analysis tasks, reconstruction of MC Pb-Pb data, long-term storage of data processed at T1s and T2s
 T2 (many sites, in Germany GSI): generate and reconstruct simulated MC data, chaotic analysis
 T0/T1/T2: short-term storage of multiple copies of active data
 T3 (many sites, in Germany Münster, Frankfurt, Heidelberg, GSI): chaotic analysis

19 CPU requirements and event sizes

CPU requirements (kSI2k x s/event):
                                       p-p     Heavy ion
  Reconstruction                       5.4     68
  Scheduled analysis                   15      230
  Chaotic analysis                     0.5     7.5
  Simulation (ev. creation and rec.)   350     15000 (2-4 hours on a standard PC)

Event sizes (MB):
              Raw     ESD     AOD     Raw MC   ESD MC
  p-p         1       0.04    0.004   0.4      0.04
  Heavy ion   12.5    2.5     0.25    300      2.5
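A quick sanity check of the "2-4 hours on a standard PC" figure, under the assumption (not stated on the slide) that a 2005-era PC delivers roughly 1 to 1.5 kSI2k:

  # 15000 kSI2k*s per simulated heavy-ion event, divided by the CPU power and 3600 s/h
  echo "scale=2; 15000/1.5/3600" | bc      # prints 2.77 (hours at 1.5 kSI2k)
  echo "scale=2; 15000/1.0/3600" | bc      # prints 4.16 (hours at 1.0 kSI2k)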

20 ALICE Tier resources
                           Tier0    Tier1s   Tier2s   Total
  CPU (MSI2k)              7.5      13.8     13.7     35.0
  Disk (PB)                0.1      7.5      2.6      10.2
  Tape (PB)                2.3      7.5      -        9.8
  Bandwidth in (Gb/s)      10       2        0.01
  Bandwidth out (Gb/s)     1.2      0.02     0.6

21 GridKa (1 of 5 T1s)
T1 sites: IN2P3, CNAF, GridKa, NIKHEF, (RAL), Nordic, USA (effectively ~5)
Ramp-up: due to shorter runs and reduced luminosity at the beginning, not all resources are needed at once: 20% in 2007, 40% in 2008, 100% by the end of 2008

                Status 2005     2006   2007   2008   2009   Total 2009
  CPU (kSI2k)   243             57     300    600    1800   3000
  Disk (TB)     28 (50% used)   12     160    200    600    1000
  Tape (TB)     56              24     220    300    900    1500
(the yearly columns are additions; the Total column is their sum)

22 GSI + T3 (support for the ~10% German share of ALICE members)

                Status 2005                                       2006  2007  2008  2009  Total 2009
  CPU (kSI2k)   64 Dual P4, 20 Dual P3 (80 Dual Opteron newly bought)  -     -     400   130   530 (800) + 500 T3
  Disk (TB)     2.23 (0.3 free), 15 TB new                         -     -     200   30    230 + 100 T3
  Tape (TB)     190 (100 used)                                     -     -     500   500   1000
(the yearly columns are additions; the Total column is their sum)
T3: Münster, Frankfurt, Heidelberg, GSI

