Computing for Alice at GSI (Proposal) (Marian Ivanov)

Outline
● Priorities
● Assumptions
● Proposal
  – GSI Tier2 – special role
  – Focus on calibration and alignment – TPC and TRD

Priorities
● Detector calibration and alignment (TPC-ITS-TRD)
  – First test – cosmic and laser – October 2007
  – To be ready for the first pp collisions
● First paper
  – Time scale depends on the success of the October tests
  – Goal: ~1 week (statistics of about 10^4-10^5 events)
● ==> Calibration and alignment has the TOP priority

Assumptions
● CPU requirements (relative):
  ● Simulation ~ 400 a.u.
  ● Reconstruction ~ 100 a.u.
  ● Alignment ~ 1 a.u.
  ● Calibration ~ 1 a.u.
● Several passes through the data are necessary to verify and improve the calibration and alignment; since calibration and alignment are ~100-400 times cheaper than reconstruction, iterating them on their own is affordable
● The time scale for one iteration should be ~ minutes to hours ==>
  ● The calibration and alignment algorithms should be decoupled from simulation and reconstruction
  ● The reconstruction should be repeated after retuning of the calibration

Assumptions – data volume to process and accessibility
● pp event:
  ● ESD size ~ 0.03 MB/ev
  ● ESDfriend ~ 0.45 MB/ev
  ● ~0.5-5 TB per 10^6 events (0.03 + 0.45 ≈ 0.5 MB/ev gives ~0.5 TB without overlaps, ~5 TB with 10 overlapped events)
● Events accessible per environment (ESD + friends / raw data, zero suppressed):
  – Local:  ~10^5-10^6 pp / 10^4 pp
  – Batch:  ~10^6-10^7 pp / 10^5 pp
  – PROOF:  ~10^6-10^7 pp / –
  – Grid:   >10^7 pp / 10^6 pp

Assumptions
● Type of analysis (requirements) – first priority:
  – Calibration of the TPC – 10^4-10^5 pp events
  – Validation of the reconstruction – 10^4-10^5 pp events
  – Alignment of the TPC and TPC-ITS – 10^5 pp + 10^4-10^5 cosmic events

Assumptions
● AliEn and PROOF are in the test phase
  ● Improving with time, but still fragile
  ● It is difficult to distinguish between software bugs in the analysis/calibration code and internal AliEn and PROOF problems
● Chaotic (user) analysis can lead to chaos
  ● Quotas and priorities do not exist yet
  ● Some restrictions are already implemented, e.g. on AliEn jobs can only be submitted for staged files
  ● Requirements for staging files are officially defined by the PWGs
  ● What are the requirements from the detectors?

Assumptions
● ALICE test in October (in one month)
  ● Full stress test of the system
  ● Significant data volume – for comparison, the 2006 test of 2 TPC sectors produced ~20 TB of raw data
  ● Bottleneck in 2006: processing time was dominated by data access; CPU time was negligible
● We should be prepared for different scenarios
  ● We would like to start with the data copied to GSI and reconstruct/calibrate/align locally, later switching to the GRID (as we did in 2006)
  ● This approach enables several fast iterations over the data

Proposal
● The algorithmic part of our analysis and calibration software should be independent of the running environment
  – The TPC calibration classes (components) are an example: run and tuned OFFLINE, used in HLT, DAQ and Offline
● Analysis and calibration code should be written following a component-based model
  – The TSelector (for PROOF) and the AliAnalysisTask (on GRID/AliEn) are just simple wrappers

Example – Component
● Component:

  class AliTPCcalibTracks : public TNamed {
    ...
  public:
    ...
    virtual void ProofSlaveBegin(TList *output);
    virtual void ProcessTrack(AliTPCseed *seed);
    void Merge(TCollection *);
    // histograms, fitters, ...
  };

● Selector – wrapper:

  class AliTPCSelectorTracks : public TSelector {
    ...
    AliTPCcalibTracks *fCalibTracks; //! calib tracks object
  };

  Bool_t AliTPCSelectorTracks::Process(Long64_t entry)
  {
    ...
    fCalibTracks->ProcessTrack(seed);
    ...
  }
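For illustration, a slightly fuller sketch of the wrapper pattern, assuming the AliRoot classes above; the GetSeed() helper and the event-loop details are hypothetical placeholders, not the actual AliTPCSelectorTracks implementation:

  #include <TSelector.h>
  #include <TList.h>
  #include <TTree.h>
  // #include "AliTPCcalibTracks.h"  // the environment-independent component (AliRoot)

  class AliTPCSelectorTracks : public TSelector {
  public:
    void SlaveBegin(TTree * /*tree*/) {
      // the component books its own histograms and fitters on each worker
      fCalibTracks = new AliTPCcalibTracks;
      fCalibTracks->ProofSlaveBegin(fOutput);
    }
    Bool_t Process(Long64_t entry) {
      AliTPCseed *seed = GetSeed(entry);           // hypothetical helper: read the seed for this entry
      if (seed) fCalibTracks->ProcessTrack(seed);  // all algorithmic work lives in the component
      return kTRUE;
    }
    // at the end of a PROOF query the per-worker components are merged
    // via AliTPCcalibTracks::Merge(TCollection *)
  private:
    AliTPCseed *GetSeed(Long64_t entry);  // hypothetical helper, implementation omitted
    AliTPCcalibTracks *fCalibTracks;      //! calibration component (not streamed)
  };

The same AliTPCcalibTracks object can be driven by an AliAnalysisTask on the GRID or called directly in HLT/DAQ code, since it knows nothing about the selector that hosts it.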

Software development
● Write the component
● Software validation sequence:
  1) Local environment (first filter)
     1) Stability – debugger
     2) Memory consumption – valgrind, memstat (ROOT)
     3) CPU profiling – callgrind, VTune
     4) Output – rough, quantitative checks if possible
  2) Batch system (second filter)
     1) Memory consumption – valgrind, memstat
     2) CPU profiling
     3) Output – better statistics
  3) PROOF (see the sketch after this list)
     1) For rapid development – fast user feedback
     2) Iterative improvement of algorithms, selection criteria, ...
     3) Improved statistics
  4) Be ready for GRID/AliEn
     1) Improved statistics
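A minimal sketch of how the same selector can be exercised in both the local and the PROOF step without modification; the input file name, the tree name and the PROOF cluster name ("gsiaf") are assumptions for illustration:

  #include <TChain.h>
  #include <TProof.h>

  void runValidation(Bool_t useProof = kFALSE)
  {
    TChain chain("esdTree");
    chain.Add("AliESDs.root");                   // small local sample for the first filter
    if (useProof) {
      TProof::Open("gsiaf");                     // assumed name of the GSI PROOF cluster
      chain.SetProof();                          // same chain, now processed on the workers
    }
    chain.Process("AliTPCSelectorTracks.cxx+");  // identical selector code in both modes
  }

For the memory and CPU steps the same macro can simply be run under the profiler on a local or batch node, e.g. inside valgrind or callgrind.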

What should be done
● Test data transfer to GSI (~TBytes)
  – Using AliEn – Kilian
  – Using xrdcp (see the sketch below)
    ● A file-system toolkit on top of xrd is to be developed
● Write the components
● Test and use PROOF at GSI
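As an illustration of the xrd-based transfer, a minimal sketch using ROOT's built-in copy (a stand-in for the xrdcp command line, not the actual GSI setup; the server name and paths are hypothetical):

  #include <TFile.h>

  void copyFromXrd()
  {
    // same effect as: xrdcp root://<server>//<path> <local path>
    TFile::Cp("root://xrd.example.gsi.de//alice/data/AliESDs.root",  // hypothetical source
              "/data/alice/AliESDs.root");                           // local destination
  }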

Conclusion
● An analysis/calibration schema has been proposed
● It is not harder to implement than a regular TSelector/AliAnalysisTask
● The TPC calibration has already been partly implemented using the proposed schema
● This schema allows much better testing of the code before proceeding to the next step (Local -> Batch/PROOF -> GRID)
● Since the same components are used in each step, we are not blocked if one of the distributed systems doesn't work properly