The MEG Offline Project: General Architecture, Offline Organization, Responsibilities, Milestones. PSI 2/7/2004, Corrado Gatto, INFN.

Estimated Event Size and Storage
Event sizes:
– Raw data: 1.2 MB/event
– ESD: 10 KB/event
– AOD: 2 KB/event
– TAG or DPD: 1 KB/event
Storage:
– Raw data: 35 TByte
– Reconstructed data (DST): 13 TByte
– MC generated events: 40 TByte
– MC reconstructed events: 25 TByte
Assuming 10^9 generated MC events; sizes obtained by scaling BaBar & KLOE by multiplicity.

General Architecture: Guidelines
Computing & data model: MONARC
Completely based on ROOT
Ensure a high level of modularity (for ease of maintenance)
Absence of code dependencies between different detector modules (to avoid C++ header problems)
The structure of every detector package is designed so that static parameters (such as geometry and detector response parameters) are stored in distinct objects
The data structure is built up as ROOT TTree objects (Folders)

Data Access: ROOT + RDBMS Model
– ROOT files (Event Store): trees, histograms
– RDBMS (Oracle, MySQL): calibrations, geometries, run/file catalog

Offline Tasks
Offline Framework:
– ROOT installation and maintenance
– Main program implementation
– Container classes
– Control room user interface (run control, DQM, etc.)
I/O Development:
– Interface to DAQ
– Distributed computing (PROOF)
– Interface to tape
– Data challenges
Database Development (catalogue and conditions):
– Installation
– Maintenance of constants
– Interface
Contribution to Event Display
DQM:
– At sub-event level, responsibility lies with the detector experts
– The full event is the responsibility of the Offline team
Documentation

Offline Tasks (2)
Offline Software Coordination:
– DAQ integration
– Coordinate the work of the detector software subgroups: reconstruction, calibration, alignment, geometry DB, histogramming
– Monte Carlo integration: data structure, geometry DB
– Supervise the production of collaborating classes
– Receive, test and commit the code from the individual subgroups
Computing Coordination:
– Hardware/software installation
– User and queue coordination
– Supervise the Tier-0 and Tier-1/2 computing facilities

Manpower Estimate

Responsibilities & Tasks
Detector experts:
– LXe: Giovanni, Shuei, Ryu
– DC: Matthias (hits), Hajime (pattern recognition), Lecce
– TC: Pavia/Genova
– Trigger: Donato (Pisa)

Milestones
Start-up: October 2004
Choice of the prototype Offline system: October 2004
Organize the reconstruction code: December 2004
– per subdetector (simulation, part of reconstruction)
– central tasks (framework, global reconstruction, visualisation, geometry database, …)
Write down the Offline structure (container classes, event class, etc.): February 2005
MDC: 4th quarter 2005
Keep the existing MC in the Geant3 framework; form a panel to decide if and how to migrate to ROOT: 4th quarter 2005