Presentation on theme: "COMPASS off-line computing"— Presentation transcript:

1 COMPASS off-line computing
- the COMPASS experiment
- the analysis model
- the off-line system
  - hardware
  - software

2 The COMPASS Experiment (Common Muon and Proton Apparatus for Structure and Spectroscopy)
- Fixed-target experiment at the CERN SPS
  - approved in February 1997
  - commissioning from May 2000
  - data taking for at least 5 years
- Collaboration: about 200 physicists from Europe and Japan
- Diversified physics programme
  - muon beam: gluon contribution to the nucleon spin, quark spin distribution functions
  - hadron beam: glueballs, charmed baryons, Primakoff reactions
- All measurements require high statistics

3 Experimental Apparatus
- Two-stage spectrometer (LAS, SAS)
- Several new detectors: GEMs, Micromegas, straw trackers, scintillating fibers, RICH, silicon detectors, calorimeters, drift and multi-wire proportional chambers (440 K electronic channels)
- Not an easy geometry: highly inhomogeneous magnetic field (SM1, PTM)

4 Expected Rates
- beam intensity: 10^8 muons/s, with a duty cycle of 2.4 s / 14 s
- RAW event size: ~30 kB
- trigger rate: 10^4 events/spill (DAQ designed for 10^5 events/spill for the hadron programme, with on-line filtering)
- continuous data acquisition; flux: 35 MB/s
- data-taking period of ~100 days/year: ~10^10 events/year, ~300 TB/year of RAW data
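
As a cross-check (our arithmetic, not the slide's), the quoted rates, flux, and yearly volume are mutually consistent, and the ~30 kB event size above follows from the yearly totals:

```latex
\frac{10^{4}\ \text{events/spill}}{\sim 14\ \text{s/cycle}} \approx 700\ \text{events/s (average)},
\qquad
35\ \text{MB/s} \times 100\ \text{days} \times 86400\ \tfrac{\text{s}}{\text{day}}
  \approx 3\times 10^{14}\ \text{B} \approx 300\ \text{TB/year},
\qquad
\frac{300\ \text{TB/year}}{10^{10}\ \text{events/year}} \approx 30\ \text{kB/event}.
```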

5 COMPASS analysis model
- The RAW data
  - will be stored at CERN (no copy foreseen) and must remain accessible for the entire lifetime of the experiment
  - will be processed at CERN, in parallel with and at the same speed as data acquisition
- Assuming
  - no pre-processing for calibrations
  - ~1 reprocessing of the full data set
  - a processing time of 2 SPECint95·s/event
  - "on-line" calibrations, powerful on- and off-line monitoring, and reprocessing of small data subsets if raw data access is fast
- the needed CPU power is 2000 SPECint95 (~ CU); see the estimate below
- Physics analysis, as well as specific studies and MC production, will be performed at the home institutes
  - the relevant data sets must be of much smaller size
  - remote and concurrent access to raw data is important (the "PANDA" model…)
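
A one-line sizing estimate (our arithmetic, assuming the sustained rate of ~1000 events/s quoted on slide 11):

```latex
2\ \text{SPECint95}\cdot\text{s/event} \times 1000\ \text{events/s} = 2000\ \text{SPECint95}.
```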

6 General choices
In 1997 COMPASS decided to
- build a completely new software system
- use OO techniques, with C++ as the programming language
- use an ODB to store the data

Given the short time scale, the "small" collaboration, the novelty, and the well-known difficulty of the tasks, it was mandatory to
- collaborate with the IT division
- foresee the use of LHC++ and commercial products (HepODBMS, Objectivity/DB)
- look at other developments (ROOT)

7 Off-line system
Hardware
- central data recording
- COMPASS Computing Farm (CCF) (see M. Lamanna's presentation, Feb. 7, session E)

Software
- data structures and access
- CORAL (COmpass Reconstruction and AnaLysis) program

8 Central data recording (CDR)
- updated version of the CERN Central Data Recording (CDR) scheme
- the on-line system
  - performs the event building (and filtering) with the ALICE DATE system
  - writes RAW data on local disks, as files in byte-stream format (10-20 parallel streams), keeping a "run" structure (typical sizes of 50 GB)
- the Central Data Recording system transfers the files to the COMPASS Computing Farm at the computer center (at a rate of 35 MB/s)
- the COMPASS Computing Farm (CCF)
  - formats the data into a federated database (Objectivity/DB), converting the RAW events into simple persistent objects (see the sketch below)
  - performs fast event tagging or clusterisation (if necessary)
  - sends the DB to the HSM for storage
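
A minimal sketch of the formatting step; all names and the record layout are illustrative assumptions, since the real code wraps DATE buffers in Objectivity/DB persistent classes:

```cpp
#include <cstdint>
#include <fstream>
#include <iostream>
#include <vector>

// Stand-in for the persistent RAW event class (the real one derives
// from an Objectivity/DB persistent base class).
struct RawEvent {
    uint32_t triggerTag;              // fast-selection tag set at formatting time
    std::vector<uint8_t> dateBuffer;  // on-line (DATE) event buffer, stored as-is
};

// Assumed byte-stream record layout: 4-byte length followed by the payload.
bool readEvent(std::istream& in, std::vector<uint8_t>& buf) {
    uint32_t size = 0;
    if (!in.read(reinterpret_cast<char*>(&size), sizeof size)) return false;
    buf.resize(size);
    return static_cast<bool>(in.read(reinterpret_cast<char*>(buf.data()), size));
}

int main(int argc, char** argv) {
    if (argc < 2) { std::cerr << "usage: format <stream-file>\n"; return 1; }
    std::ifstream stream(argv[1], std::ios::binary);
    std::vector<uint8_t> buf;
    std::size_t n = 0;
    while (readEvent(stream, buf)) {
        RawEvent ev{0u, buf};  // a real tag would be extracted from the buffer
        // here: commit 'ev' to the federated DB, then queue the DB for the HSM
        ++n;
    }
    std::cout << n << " events formatted\n";
    return 0;
}
```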

9 COMPASS Computing Farm (CCF)
- Beginning of 1998, the IT/PDP Task Force studied computing farms for high-rate experiments (NA48, NA45, and COMPASS)
- Proposed model for the CCF: hybrid farm with
  - about 10 proprietary Unix servers ("data servers")
  - about 200 PCs ("CPU clients"), 2000 SPECint95 (0.2 s/ev)
  - 3 to 10 TB of disk space
- Present model: farm with PCs as both "data servers" and "CPU clients"
  - order of 100 dual PIII machines
  - standard PCs running CERN-certified Linux (now: RH5.1 with kernel /12)

10 CCF
[figure-only slide: CCF layout]

11 COMPASS Computing Farm (cont.)
- The data servers will
  - handle the network traffic from the CDR, format the RAW events into a federated DB, and send them to the HSM
  - receive the data to be processed back from the HSM, if needed
  - distribute the RAW events to the PCs for reconstruction
  - receive back the output (persistent objects) and send it to the HSM
- The CPU clients will process the RAW events (reconstruction of different runs/files has to run in parallel)
- A real challenge: 1000 ev/s to be stored and processed by 100 dual PCs (see the estimate below)
- Tests with prototypes have been going on for two years, with good results
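
For scale (our arithmetic, consistent with the 0.2 s/ev figure on slide 9):

```latex
\frac{1000\ \text{ev/s}}{100\ \text{dual PCs} \times 2\ \text{CPUs}}
  = 5\ \text{ev/s per CPU}
  \;\Longrightarrow\; 0.2\ \text{s/ev}.
```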

12 Off-line system
Software
- Data structures
  - Event DB
  - Experimental conditions DB
  - Reconstruction quality control monitoring DB
  - MC data
- CORAL: COmpass Reconstruction and AnaLysis program

13 Data structures
Event DB
- event header containers: small dimensions (on disk), basic information like tag, time, …
- RAW event containers: just one object with the event (DATE) buffer
- reconstructed-data containers: objects for physics analysis
- direct access to objects; the run/file structure is not visible
- associations to avoid duplications (see the layout sketch below)
  - direct: raw ↔ reconstructed data
  - via "time": raw ↔ monitoring events
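
A minimal sketch of the container layout just described, using plain C++ stand-ins; in the real event DB these are Objectivity/DB persistent objects and the links are ODB associations, not raw pointers:

```cpp
#include <cstdint>
#include <vector>

struct EventHeader {      // small, disk-resident container: fast to scan
    uint32_t run, event;
    uint32_t tag;         // trigger/physics tag used for selection
    uint64_t time;        // also links RAW data to monitoring events
};

struct RawEvent {         // exactly one object: the unmodified DATE buffer
    EventHeader* header;  // stands in for the header <-> raw association
    std::vector<uint8_t> buffer;
};

struct RecoEvent {        // objects used directly in physics analysis
    RawEvent* raw;        // direct raw <-> reco association (no duplication)
    std::vector<float> trackParameters;  // illustrative payload only
};
```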

14 Data structures (cont.)
Experimental conditions DB
- includes all information needed for processing and physics analysis (on-line calibrations, geometrical description of the apparatus, …)
- based on the CERN porting of the BaBar Conditions Database package (included in HepODBMS)
- versioning of objects; access to the valid information using the event time (see the sketch below)

Reconstruction quality control monitoring data
- includes all quantities needed to monitor the stability of the reconstruction and of the apparatus performance
- stored in Objectivity/DB

Monte Carlo data
- we are using Geant3 (Geant4: under investigation, not in the short term)
- ntuples, Zebra banks
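
A sketch of the access pattern described above (validity intervals plus versioning, lookup by event time); illustrative only, as the real implementation is the BaBar Conditions Database package shipped with HepODBMS:

```cpp
#include <cstdint>
#include <map>
#include <optional>
#include <string>

struct Condition {
    uint64_t validFrom, validTo;  // validity interval in event time
    int version;                  // later insertions get higher versions
    std::string payload;          // e.g. serialized calibration constants
};

class ConditionsFolder {
    std::multimap<uint64_t, Condition> byStart_;  // keyed on validFrom
public:
    void insert(const Condition& c) { byStart_.emplace(c.validFrom, c); }

    // Return the highest-version condition valid at the given event time.
    std::optional<Condition> find(uint64_t eventTime) const {
        std::optional<Condition> best;
        for (auto it = byStart_.begin();
             it != byStart_.end() && it->first <= eventTime; ++it) {
            const Condition& c = it->second;
            if (eventTime < c.validTo && (!best || c.version > best->version))
                best = c;
        }
        return best;
    }
};
```

In this scheme, inserting a recalibrated object for the same interval with a higher version number makes it the one returned for that period, without removing the old one.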

15 Status
- Event DB: version 1 ready
- Experimental conditions DB: in progress, implementation started
- Reconstruction quality control monitoring data: starting
- Monte Carlo data: ready

16 CORAL COmpass Reconstruction and AnaLysis program
- fully written in C++, using OO techniques
- modular architecture, with a framework providing all basic functionalities
- well-defined interfaces for all components needed for event reconstruction
- insulation layers for all "external" packages
  - access to the experimental conditions and event DBs (reading and writing persistent objects) through HepODBMS
  - to assure flexibility in changing both reconstruction components and external packages (see the interface sketch below)
- components for event reconstruction developed in parallel
  - detector decoding, pattern recognition in geometrical regions, track fit, RICH and calorimeter reconstruction, …
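
A sketch of the insulation-layer idea; all names are illustrative, not CORAL's actual interfaces. Reconstruction code and the framework see only abstract interfaces, so the persistency back-end and individual components can be swapped independently:

```cpp
#include <memory>
#include <vector>

struct Event { /* transient event model seen by the reconstruction */ };

// Insulation layer hiding the persistency package (HepODBMS/Objectivity)
// from the rest of the program.
class EventStore {
public:
    virtual ~EventStore() = default;
    virtual bool readNext(Event& ev) = 0;     // read persistent objects
    virtual void write(const Event& ev) = 0;  // write persistent objects
};

// Interface implemented by each reconstruction component (decoding,
// pattern recognition, track fit, RICH, calorimeters, ...).
class RecoComponent {
public:
    virtual ~RecoComponent() = default;
    virtual void process(Event& ev) = 0;
};

// The framework drives the chain through the interfaces alone.
void processEvents(EventStore& store,
                   const std::vector<std::unique_ptr<RecoComponent>>& chain) {
    Event ev;
    while (store.readNext(ev)) {
        for (const auto& component : chain) component->process(ev);
        store.write(ev);
    }
}
```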

17 CORAL
[figure-only slide: CORAL schematic]

18 CORAL status
- development and tests on Linux; we try to keep portability to other platforms (Solaris)
- framework: almost ready
  - work going on to interface new reconstruction components and the access to the experimental conditions DB
- reconstruction components
  - integrated inside CORAL and tested: MC event reading and decoding, track pattern recognition, track fit, …
  - integration foreseen soon: RICH pattern recognition, calorimeter reconstruction, vertex fit, …
  - under development: detector (DATE buffer) decoding, in parallel with on-line, …
- Goal: version 1 ready at the end of April 2000
  - all basic functionalities, even if not optimised, as for all other off-line system components

19 General comments
- Most of the problems we had are related to the fact that we are still in a transition period
  - no stable environment, either for the available software (LHC++) or for the OS (Linux)
  - lack of standard "HEP-made" tools and packages; commercial products seem not always to be a solution
  - too few examples of HEP software systems using the new techniques
- Expertise and resources
  - having a large number of physicists who know the new programming language (and techniques) requires time
  - all the work has been done by a very small, enthusiastic team (3 to 10 FTE in 2 years)
- Still, we think we made the right choice

20 FOCUS
From the minutes of the 16th meeting of FOCUS, held on December 2, 1999: “FOCUS …. recognises the role that the experiment has as a 'test-bed' for the LHC experiments.”


