
1 Computing Resources for the AIACE Experiment
Raffaella De Vita, INFN – Sezione di Genova
CCR2007, Rimini, 8 May 2007
Outline: the AIACE experiment; on-site computing (JLab); off-site computing (Genova, LNF); future perspectives and summary; computing models/needs for other nuclear physics experiments.

2 The AIACE Experiment
AIACE (Attività Italiana A CEBAF): Laboratori Nazionali di Frascati and INFN & Università di Genova.
Physics goal: study of the hadron structure and of the properties of the strong interaction in terms of quarks and gluons, using electromagnetic probes.
Experimental setup: Jefferson Laboratory (Newport News, VA, USA), with the CEBAF accelerator (electrons and photons up to 6 GeV) and the CLAS detector.
International collaboration (CLAS): 35 institutes from 7 countries, ~150 members (~9% AIACE).
Italian collaboration: scientists 9.4 FTE, technical staff 2.6 FTE.
Activity: data taking started in 1998.
Publications: 55 physics papers published in refereed journals (PRL, PRC, PRD), 20 detector-related papers (NIM, IEEE); 21 papers (28%) with AIACE leading authors.

3 JLab and the CLAS detector
Electron or photon beam up to 6 GeV on fixed targets (proton, deuteron, nuclear targets).
Luminosity L = 10^34 cm^-2 s^-1.
CEBAF Large Acceptance Spectrometer (CLAS): ~40 000 electronic channels, best suited for multiparticle final states.

4 Data Acquisition
[Block diagram of the DAQ chain, from the front end (high-resolution TDCs, ADCs, scalers, drift-chamber TDCs, ROCs) to the back end (UNIX cluster, network, RAID, tape silo) over an 8 Gb/s link; replaced/upgraded in 2004-2006.]
Readout: commercial electronics widely used; VME/FastBus ADCs; newly installed VME pipeline TDCs (CAEN); VME Motorola controllers equipped with dual-CPU boards (OS: VxWorks); Fast/Gigabit Ethernet.
Run control based on 3 quad-Opteron servers (OS: Solaris/Linux) + Sun UltraSparc servers.
Storage: 2.5 TB RAID array + 2000 TB tape silo.
Courtesy of S. Boiarinov, Jefferson Lab

5 DAQ performance
8 kHz event rate, with a maximum limit of 10 kHz set by front-end electronics dead time.
35 MB/s data rate in routine operation (50 MB/s in non-beam tests), with < 15% dead time.
Raw data volume per year ~300 TB.
[Plot of the accumulated data volume in GB, with the CLAS12 projection indicated.]
Courtesy of S. Boiarinov and I. Bird, Jefferson Lab
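Taken at face value, the quoted rates are mutually consistent: at the routine 35 MB/s, the yearly raw-data volume of ~300 TB corresponds to roughly 100 days of live data taking (a back-of-envelope check, not in the original slide):

$$
\frac{300\ \text{TB}}{35\ \text{MB/s}} \approx 8.6 \times 10^{6}\ \text{s} \approx 100\ \text{days per year}.
$$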

6 Data Production
Data taking is subdivided into Run Periods: few weeks/months in duration, different run conditions (beam/target type, beam energy, detector setup), different physics goals.
Example, EG4 (Measurement of Proton Spin Structure Functions): data taking in February-June 2006; 20×10^9 triggers on tape; 70 TB of raw data (35 000 files); 5 TB estimated data volume for compressed DSTs.
Statistics needed for MC simulation are comparable to the real events: 10^10 events at 0.1 s/event, i.e. ~32 years on 1 CPU.
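Restating the CPU-time estimate above as a one-line check (the numbers are those quoted on the slide):

$$
10^{10}\ \text{events} \times 0.1\ \tfrac{\text{s}}{\text{event}} = 10^{9}\ \text{s} \approx 32\ \text{years on a single CPU}.
$$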

7 On-site data processing
Detector calibration and raw data processing, including DST production, are performed using on-site computing facilities, i.e. the JLab farm:
o 166 dual-processor systems
o ~360 000 SPECint2000
o software: Red Hat Enterprise Linux 3, LSF 6.0
80% of the computing resources are available for CLAS data processing:
o 70% reserved for processing of raw data
o 30% available for all CLAS users (~80 CPUs for ~150 people)
Limited resources are left for physics analysis and MC simulations, so these tasks are performed off-site with alternative computing facilities (a batch-submission sketch follows below).
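The slide quotes LSF 6.0 as the batch system on the JLab farm. A minimal sketch of how reconstruction jobs might be submitted to LSF from Python; the queue name, executable and file names are hypothetical illustrations, not taken from the presentation.

```python
import subprocess

# Hypothetical example: submit one reconstruction job per raw-data file
# through LSF's bsub (LSF 6.0 is the batch system quoted on the slide).
raw_files = ["run12345.A00", "run12345.A01"]  # placeholder file names

for raw in raw_files:
    subprocess.run(
        [
            "bsub",
            "-q", "production",        # assumed queue name
            "-J", f"recon_{raw}",      # job name
            "-o", f"logs/{raw}.log",   # stdout/stderr log file
            "recon_exe", raw,          # hypothetical reconstruction executable
        ],
        check=True,
    )
```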

8 AIACE@Genova Computing
[Diagram of the Genova setup: CPU farm and switch, central server (applications, user accounts, ...), user workstations, 4 TB NAS, 14 TB storage on a fibre-channel link.]
Network: Fast/Gigabit Ethernet, private subnet.
Central Linux server: user accounts, application software.
Storage: 4 TB NAS, plus a 14 TB storage area with fibre-channel technology.
Scientific computing: Linux farm with 16 CPUs, PBS queuing system (a submission sketch follows below).
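According to the slides, both the Genova and LNF farms use a PBS queuing system. A minimal sketch, under assumed queue, path and executable names (none of which appear in the presentation), of how a Monte Carlo job could be generated and submitted with qsub.

```python
import subprocess
import textwrap

# Hypothetical PBS job for a Monte Carlo production run on the local farm.
# Queue name, directories and executable are illustrative assumptions.
job_script = textwrap.dedent("""\
    #!/bin/bash
    #PBS -N mc_eg4_0001
    #PBS -q aiace
    #PBS -l nodes=1:ppn=1
    #PBS -j oe
    #PBS -o /data/mc/logs/mc_eg4_0001.log
    cd /data/mc
    ./mc_exe --events 100000 --config eg4.cfg
    """)

# qsub accepts the job script on standard input
subprocess.run(["qsub"], input=job_script, text=True, check=True)
```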

9 AIACE@LNF Computing
Network: Gigabit Ethernet, private subnet.
Storage: NAS-NAS mirror (7.4 TB, 40% for AIACE usage), tape backup (30 slots).
Scientific computing: Linux farm (30 CPUs, 60% for AIACE usage), PBS queuing system.
Intranet services: directory services, Windows cluster, Windows-Linux integration.
Electrical and environment: 18 kVA redundant UPS, temperature monitoring, remote management.
[Diagram of the LNF layout: PBS/public head node and worker CPUs on the private subnet, mirrored RAID5 volumes and tape on the public subnet.]
Courtesy of F. Ronchetti, Laboratori Nazionali di Frascati

10 Off-Site Data Processing
Ongoing analysis projects:
LNF: 4 physics analyses of hadronic reactions, 1 data set (2 TB of DSTs), 4 different sets of MC simulations.
Genova: 5 analyses, 3 data sets (15 TB of DSTs), 5 different sets of MC simulations.
Data are transferred from JLab through the network connection.
Present storage resources seem adequate for local computing.
The available computing power (Genova = 20 CPUs + LNF = 18 CPUs) is not sufficient (see the estimate below).
Possible solution: usage of Tier1 computing facilities (preferably through GRID); the requested 50 000 SPECint2000 + local storage are currently not available.
Other solutions: Tier2?
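For scale (an extrapolation not made in the slides, taking the EG4 Monte Carlo estimate of slide 6 as representative): even a single ~32 CPU-year simulation campaign would saturate the combined local farms for most of a year,

$$
\frac{32\ \text{CPU-years}}{20 + 18\ \text{CPUs}} \approx 0.84\ \text{years} \approx 10\ \text{months},
$$

which is why Tier1-scale resources (~50 000 SPECint2000) were requested.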

11 Summary
The activity of the AIACE Collaboration continues at JLab, with intensive data collection and analysis efforts.
The data volume is almost comparable with that of high-energy experiments (LHC).
The current model is based on small local storage and farms for physics analysis and simulations.
The presently available computing resources are not sufficient.
Usage of the GRID and related computing resources may solve the problem; the possibility is being considered (not easy!).

12 GRID Map of the nodes of the PANDA GRID http://panda.physics.gla.ac.uk/Alien/

13 PANDAGRID
PANDA Grid management is done by the Glasgow group.
Grid infrastructure and know-how are expanding inside PANDA; latest additions: Pavia, ScotGrid.
PandaRoot (the PANDA simulation software) installs and runs well on all platforms; the only errors encountered were due to user misconfiguration or to access restrictions (firewalls).
PandaRoot/CBM developers are now acquainted with the gridware.
The PANDA Grid is already used and always ready for simulations!
http://panda.physics.gla.ac.uk/Alien/

14 Latest work
Improved PandaRoot installation scripts, retested on x86_64.
Debugged PBS and LSF problems.
Established the full simulation/reconstruction chain with catalogue triggers.
Jobs run in a one-week test: ~15 000.
Need to link to other GRID activities.
http://panda.physics.gla.ac.uk/Alien/

15 Fazia

16

17 Jefferson Lab
E_max ~ 6 GeV, I_max ~ 200 µA, duty factor ~ 100%, beam polarization P ~ 80%, Eγ ~ 0.8-5.7 GeV.
CLAS

18 JLab Linux Farm
1000 jobs running simultaneously: full usage of the available resources.
50% of the resources are used by CLAS for raw data reconstruction.
Average number of jobs per user ~10-20.

19 Network Layout
Gigabit Ethernet, split into a private and a public subnet, with a 1+1 Gbit/s uplink and a 1 Gbit/s uplink to the LNF network.
[Diagram of the layout: Cisco CAT6000 and CAT2950 switches, RAID5 storage, and the cluster systems: 7 dual-CPU, 4 quad-CPU, 2 dual-CPU, 1 single-CPU, 1 Xeon NAS, 1 P3 NAS.]
Courtesy of F. Ronchetti, Laboratori Nazionali di Frascati

20 Storage System
All user and physics data reside on the NAS systems; the computing nodes have only the core OS.
RAID5 /users volumes on two NAS heads, Procom 1750 (2.2 TB SCSI) and Procom 1800 (5.2 TB SCSI), kept as a NAS-NAS volume mirror over a 1 Gbit/s link and exported via NFS and CIFS.
Tape: 30-slot LTO2 tape robot.
Courtesy of F. Ronchetti, Laboratori Nazionali di Frascati

21 Linux Scientific Computing
Nodes running PBS jobs.
Courtesy of F. Ronchetti, Laboratori Nazionali di Frascati

22 Intranet Services
4 servers running Win2k3:
Common Windows/Linux directory services: NIS/Active Directory (the same credentials for both worlds).
Common logical file system: DFS (Windows) and NFS (Linux) have the same structure (reversing slashes), e.g. \\ed22\dfs\users ↔ //ed22/dfs/users (a small path-translation sketch follows below).
Common user areas: the user finds the same files and environment under any OS.
Tight integration: the Windows desktop is accessible from Linux; the Windows and Linux HOME are the same; password changes are synchronized; printing goes through the AD servers.
Clients/thin clients: Linux PCs in the NIS domain use the cluster file system and settings; Windows PCs in AD use the Windows cluster file system and settings; thin clients can open X or RDP (Windows) sessions directly on the cluster.
Courtesy of F. Ronchetti, Laboratori Nazionali di Frascati
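The UNC-to-POSIX correspondence quoted on the slide ("reversing slashes") can be illustrated with a small helper; the host and share names are just the example given on the slide.

```python
def windows_to_unix(unc_path: str) -> str:
    """Map a backslash-separated Windows UNC path to the slash-separated
    Unix-style equivalent by reversing the slashes, as on the slide."""
    return unc_path.replace("\\", "/")


def unix_to_windows(posix_path: str) -> str:
    """Inverse mapping: turn forward slashes back into backslashes."""
    return posix_path.replace("/", "\\")


# Example taken from the slide:
assert windows_to_unix(r"\\ed22\dfs\users") == "//ed22/dfs/users"
assert unix_to_windows("//ed22/dfs/users") == r"\\ed22\dfs\users"
```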

