Proposal for the MEG Offline System, Assisi 9/21/2004, Corrado Gatto: General Architecture, Computing Model, Organization & Responsibilities, Milestones.

1 Proposal for the MEG Offline System
Assisi, 9/21/2004, Corrado Gatto
General Architecture, Computing Model, Organization & Responsibilities, Milestones

2 Dataflow and Reconstruction Requirements
- 100 Hz L3 trigger, event size: 1.2 MB
- Raw data throughput: (10+10) Hz × 1.2 MB/phys evt × 0.1 + 80 Hz × 0.01 MB/bkg evt = 3.5 MB/s (average event size: 35 kB)
- Total raw data storage: 3.5 MB/s × 10^7 s = 35 TB/yr

3 Monarc Analysis Model Baseline: Event Sizes and Storage
Event sizes:
- Raw data: 1.2 MB/event
- ESD: 10 kB/event
- AOD: 2 kB/event
- TAG or DPD: 1 kB/event
Storage:
- Raw data: 35 TB
- Reconstructed data: 13 TB/reprocessing
- MC generated events: 40 TB
- MC reconstructed events: 25 TB/reprocessing
Assuming 10^9 generated MC events. Very rough estimate.

4 Compare to the others

5 Requirements for the Software Architecture/Framework
- Geant3 compatible (at least at the beginning)
- Easy interface with existing packages: Geant3, Geant4, external (Fortran) event generators
- Scalability
- Simple structure, usable by non-computing experts
- Written and maintained by a few people
- Portability
- Use a framework accepted world-wide
→ Use ROOT + an existing offline package as the starting point

6 The project has started
- 5 existing offline packages under evaluation
- AliRoot is winning: all aspects of the offline are implemented
- Support offered by Carminati and Brun

7 Raw Performance
Pure Linux setup, 20 data sources, Fast Ethernet local connection

8 MEG Computing Model: Work in Progress
- Analysis = reconstruction -> need O(1) farm per analysis
- Analysis policy not yet established (one centralized analysis or several competing analyses)
- Very CPU-demanding processing: MC generation: 0.5 Hz; reconstruction: 1 Hz
- Final design within a few months

9 Manpower Estimate (framework only) Available at Lecce (Estimate)

10 Responsibilities & Tasks (all software)
Detector experts:
- LXe: Signorelli, Yamada, Savada
- DC: Schneebeli (hit), Hajime (pattern), Lecce
- TC: Pavia/Genova
- Trigger: Nicolo’ (Pisa)

11 Starting-up: Tasks and Objectives
- Set up a system to test the offline code: DAQ, prompt calibration, reco farm, online disk server, staging to tape
- Prototype the main program: container classes, steering program, FastMC
- Test functionalities & performance
- Core offline system development
- RDBMS

12 Milestones
1. Start-up: September 2004
2. Choice of the prototype Offline system: end October 2004
3. Organize the reconstruction code: December 2004
   a. per subdetector (simulation, part of reconstruction)
   b. central tasks (framework, global reconstruction, visualisation, geometry database, …)
4. Start the development system HW: January 2005
5. Write down the Offline structure (container classes, event class, etc.): February 2005
6. MDC: 4th quarter 2005
7. Keep the existing MC in the Geant3 framework; form a panel to decide if and how to migrate to ROOT: 4th quarter 2005

13 Conclusions
- MEG’s Offline project approved by the collaboration
- Offline group is consolidating (mostly in Lecce)
- Work is starting
- Software framework and architecture have been frozen
- Computing Model will be chosen soon
- Merging with existing software within 1 year

14 Proposed Architecture
Fully OO. Each detector executes a list of detector actions/tasks and produces/posts its own data.
All the functionality is implemented in the Detector class:
- Both sensitive modules (detectors) and non-sensitive ones are described by this base class.
- Supports the hit and digit trees produced by the simulation.
- Supports the objects produced by the reconstruction.
- This class is also responsible for building the geometry of the detectors.
The Run Manager coordinates the detector classes:
- Executes the detector objects in the order of the list.
- Global trigger, simulation and reconstruction are special services controlled by the Run Manager class.
Ensures a high level of modularity (for ease of maintenance):
- The structure of every detector package is designed so that static parameters (like geometry and detector response parameters) are stored in distinct objects.
The data structure is built up as ROOT TTree objects.
Offline services (geometry browser, event display, RDBMS interface, tape interface, etc.) are based on ROOT built-in services.

15 Computing Model

16 What Does MEG Need?
- WAN file access
- Parallel/remote processing
- Robotic tape support
- RDBMS connectivity
- Event display
- GEANT3 interface
- Geometric modeler
- UI
- DQM
- Histogramming

17 What Does ROOT Offer?
- Extensive CERN support (a bonus for small collaborations)
- Unprecedentedly large contributing HEP community (Open Source project)
- Multi-platform
- Supports multi-threading and asynchronous I/O (vital for a reconstruction farm)
- Optimised for different access granularity: raw data, DSTs, NTuple analysis

18 Experiments Using ROOT for the Offline

Experiment | Max evt size  | Evt rate | DAQ out  | Tape storage | Subdetectors | Collaborators
STAR       | 20 MB         | 1 Hz     | 20 MB/s  | 200 TB/yr    | 3            | >400
Phobos     | 300 kB        | 100 Hz   | 30 MB/s  | 400 TB/yr    | 3            | >100
Phenix     | 116 kB        | 200 Hz   | 17 MB/s  | 200 TB/yr    | 12           | 600
Hades      | 9 kB (compr.) | 33 Hz    | 300 kB/s | 1 TB/yr      | 5            | 17
Blast      | 0.5 kB        | 500 Hz   | 250 kB/s |              | 5            | 55
Meg        | 1.2 MB        | 100 Hz   | 3.5 MB/s | 70 TB/yr     | 3            |

19 From GEANT3 to ROOT
Migration schema #1: a conversion program from Geant3 to ROOT (g2root) exists to convert a GEANT RZ file into a ROOT C++ program.
- All the components are translated: geometry, materials, kinematics, etc.
- However, the output still needs to be written in the format required by the reconstruction code.
- The new code is integrated with the VMC schema and can be run with any desired MC.
Migration schema #2: call the Fortran code from a C++ program.
- The calls to Geant3 are intercepted and the ROOT components are fully usable (geometry browser, geometric modeler, etc.).
- However, the output in Zebra format needs to be interfaced with the TTree format required by the ROOT-based reconstruction code.

20 ROOT vs Fortran
ROOT:
- Several months of start-up
- An investment for the future
- All HEP components built into a unique framework
- All libraries still maintained
- Plenty of compilers
- Built-in full 3D event display with navigator
- Interactive remote analysis
- Fully platform independent (can even mix Unix and Windows)
- 3D histograms
- Automatic file compression (a ROOT file is 2-3 times smaller than ZEBRA)
Fortran:
- Immediate start-up
- Hard to attract younger people
- Need to merge many libraries
- CERNLIB no longer maintained (last release is for the Itanium)
- Hard to find free compilers
- No event display available in Fortran (naive in G3)
- No interactive remote analysis
- No compression

21 Conclusions
- ROOT has all the features MEG needs already built in
- Migration from Geant3 to ROOT is a non-issue
- The real issue is the reconstruction routines: they have to be written in C++
- The decision between an offline in Fortran or in C++ should be based on how much reconstruction code, rather than Monte Carlo code, has already been written

