HLT/DAQ Status report Valerio Vercesi CSN1 April 2005.

1 HLT/DAQ Status report Valerio Vercesi CSN1 April 2005

2 Outline
- New TDAQ Organization: Italian activities and roles
- Pre-series procurements: status, deployment; documentation; activities
- Combined Test Beam results: monitoring and ROD Crate DAQ; algorithms development
- Planning and outlook: systems commissioning; cosmic data taking

3 ATLAS TDAQ system
[Diagram: pipeline memories feed the Readout Drivers (RODs) of the calorimeter, muon and inner-detector systems; ~1600 Readout Buffers (ROBs) feed the event-builder network; storage at ~300 MB/s; LVL2 farm and EF farm of ~1000 CPUs]
- LEVEL-1 TRIGGER: hardware-based; coarse granularity from the calorimeter and muon systems
- LEVEL-2 TRIGGER: "seeded" by Regions-of-Interest (RoIs); full granularity for all subdetector systems; fast rejection; "steering"
- EVENT FILTER: "seeded" by the Level-2 result; full event access; algorithms inherited from offline
- Rates: 40 MHz at input, ~75 kHz after LVL1, ~2 kHz after LVL2, ~200 Hz after the EF, with latencies of ~2 µs, ~10 ms and ~1 s respectively (see the cross-check below); 1 selected event every million
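As a quick cross-check of the rates quoted above, the implied per-level rejection factors are:

```latex
% Rejection factors implied by the rates quoted on the slide
\begin{aligned}
R_{\text{LVL1}} &= \frac{40\ \text{MHz}}{75\ \text{kHz}} \approx 530, \qquad
R_{\text{LVL2}} = \frac{75\ \text{kHz}}{2\ \text{kHz}} \approx 38, \qquad
R_{\text{EF}} = \frac{2\ \text{kHz}}{200\ \text{Hz}} = 10, \\
R_{\text{total}} &= R_{\text{LVL1}}\, R_{\text{LVL2}}\, R_{\text{EF}}
 = \frac{40\ \text{MHz}}{200\ \text{Hz}} = 2 \times 10^{5}.
\end{aligned}
```

The overall factor of 2 x 10^5 counts bunch crossings; the slide's "one event in a million" presumably counts the several proton-proton interactions per crossing at design luminosity.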

4 TDAQ Steering Group
- The role of the TDSG in the next two years will be more focused on project planning and progress monitoring (strategic, financial), relying more on the three coordination structures for detailed technical follow-up
- Ex-officio presence according to the agenda (includes links to offline, databases and commissioning)
- Experts and coordinators of system-wide activities invited as appropriate
- This is a proposal for 2005; we reserve the possibility to propose modifications if needed

5 Italian roles
- S. Falciano (Roma1): HLT Commissioning Coordinator
- A. Negri (Pavia): Event Filter Dataflow Coordinator
- A. Nisati (Roma1): TDAQ Institute Board chair and PESA Muon Slice Coordinator
- F. Parodi (Genova): PESA b-tagging Coordinator
- V. Vercesi (Pavia): Deputy HLT leader and PESA (Physics and Event Selection Architecture) Coordinator
- And many people who acted as a driving force and a point of reference for several activities during the Combined Test Beam
Italian activities
- Level-1 barrel muon trigger (Napoli, Roma1, Roma2)
- Level-2 muon trigger (Pisa, Roma1)
- Level-2 pixel trigger (Genova)
- Event Filter Dataflow (Pavia, LNF)
- Event Filter muon algorithms (Lecce, Pavia, Roma1)
- DAQ (LNF, Pavia, Roma1)
- Monitoring (Pavia, Pisa, Cosenza, Napoli)
- CTB DAQ (TDAQ + detector groups)

6 Outline
- New TDAQ Organization: Italian activities and roles
- Pre-series procurements: status, deployment; documentation; activities
- Combined Test Beam results: monitoring and ROD Crate DAQ; algorithms development
- Planning and outlook: systems commissioning; cosmic data taking

7 Pre-series: "Module-0" of the final system (7 racks, ~10% of the final dataflow), in USA15 and SDX1
- One ROS rack (TC rack + horizontal cooling): 11 ROS with 44 ROBINs
- One central switch (DAQ rack): 128-port GbE for L2+EB
- One full L2 rack (DAQ rack): 32 high-end PCs
- One L2-misc rack (DAQ rack): 50% of the RoIB, 3 low-end PCs (1 pROS, 2 L2SVs)
- Part of an EFIO rack (DAQ rack): 10 high-end PCs (6 SFI, 2 SFO, 2 DFM)
- Part of an EvFilt rack (DAQ rack): 12 high-end PCs
- Part of an ONLINE rack (DAQ rack): 4 high-end PCs (monitoring), 2 low-end PCs (control)
- All racks: one or more local file servers, one or more local switches

8 Accounting
- The CERN-driven Market Survey, meant to understand current costs versus technical specifications, has taken longer than expected
- Some delay also came from the definition of the specs themselves, re-worked as a follow-up of the CTB experience concerning reliability
- The approved INFN contribution is shared as:
  - Read-Out Systems: 51 kCHF (ROS racks)
  - Online Computing System: 40 kCHF (monitoring, operations)
  - Online Network System: 44 kCHF (switches, file server)
- Descriptions of the components and their specifications are now available on EDMS
- Together with the deployment experience gained in 2005, this will form the basis for the procurement of items in 2006 and onwards

9 Outline
- New TDAQ Organization: Italian activities and roles
- Pre-series procurements: status, deployment; documentation; activities
- Combined Test Beam results: monitoring and ROD Crate DAQ; algorithms development
- Planning and outlook: systems commissioning; cosmic data taking

10 H8: from drawings ... to G4 simulations ... to reality
[Photographs and renderings of the H8 beam line: beam line, electromagnetic calorimeter, hadronic calorimeter, Transition Radiation Tracker, first muon chambers]

11 2004 ATLAS Combined Test Beam
- Main scope: runs with combinations of detectors; a full ATLAS barrel slice and a muon end-cap on H8
- Four important aspects:
  - Calibrate the calorimeters over a wide range of energies (1-350 GeV)
  - Finalize the trigger studies with the LVL1 muon and calorimeter triggers
  - Study commissioning aspects and gain experience with final elements of the readout
  - Study the detector performance of an ATLAS barrel slice
- Pre-commissioning activity: shorter time to commission; learn to integrate and operate the system; find problems in advance
- Executive summary:
  - All TDAQ systems have been integrated with the detectors, with the other parts of TDAQ, with the databases and with the offline software
  - TDAQ spent far more time as a service than as a client: the setup was really big, and the detectors needed more time than expected to debug their own elements and functionality
  - An impressive amount of information and experience was collected
- The Italian TDAQ community wishes to thank the CSN1 and our referees for the support given to this activity

12 TDAQ @ CTB
- TDAQ at the ATLAS test beam used the latest prototypes to support the ATLAS activity, for a duration of eight months (on-call 24x7)!
- The same software releases are used for the test beam, for performance measurements in test beds, and as a base for further development
- The CTB showed how complex a system it is and measured its level of development: it always required TDAQ experts to set it up
- Many Italians in the support teams; TDAQ went to the beam test with its experts
- The support effort was a key element of the CTB operations: all the infrastructure and general-purpose PCs were supported by TDAQ (network boot, DHCP, etc.)

13 TDAQ setup in the CTB
[Diagram: one ROS per detector (Pixel, SCT, TRT, LAr, Tile, MDT, CSC, TGC, RPC, LVL1mu, LVL1calo) on a GbE data network; LVL1; local LVL2 farm; pROS holding the LVL2 result that steers/seeds the EF processing; Event Builder (DFM, SFIs, gateway); local EF farm plus an EF farm at Meyrin (a few km away); remote farms in Poland, Canada and Denmark for infrastructure tests only; SFO writing to CASTOR (IT); monitoring and run control]
Compared to ATLAS: ~10% of the DAQ, ~2% of the HLT (just counting PCs)

14 Integration of software
- Components developed by different groups, often separately, are exercised together:
  - Detector DAQ, using the ROD Crate DAQ skeleton provided by TDAQ
  - Online SW (control, configuration, user interface, monitoring tools)
  - Data Flow (RCD, ROS, flow of data to the LVL2 processors, Event Building, flow to the EF, storage)
  - Detector monitoring (detector-specific, using the DAQ infrastructure)
  - High Level Trigger (selection algorithms, developed in the offline environment, run on the LVL2 and EF processors)
  - Offline analysis (Athena framework, unpacking of raw data, analysis algorithms)
  - Conditions database, link from the Detector Control System to offline
- Heavy dependence in many corners on the availability of offline software components
- The online and offline systems are tightly coupled at various levels: a revised assessment of the cost-benefit ratio is needed
  - E.g., only the next Athena release, 10.0.1, will be the "consolidated" release for CTB analysis

15 ROD Crate DAQ
The ROD Crate DAQ (RCD) provides data-acquisition functionality at the level of the Read-Out Drivers, satisfying the detectors' need for centralized and uniform support of the ROD crates (local processing, configuration, event sampling, ...). A sketch of the fragment hierarchy follows below.
[Diagram: front-end electronics feed the ROD crates (RODs plus a ROD Crate Processor on the VME bus); a ROD crate workstation on the GbE LAN handles configuration and control, event sampling and calibration data; ROD fragments travel over the ROLs to the ROBINs on the PCI bus of the ROS PCs, which build ROB fragments, ROS fragments and event fragments for the L2 and Event Builder networks via the NIC]
Totals: 90 ROD crates and 144 ROS PCs, all in USA15 (underground).
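As an illustration of the fragment hierarchy named above (ROD fragments wrapped into ROB fragments, bundled into ROS fragments), here is a minimal sketch. The real event format is defined by the TDAQ eformat library; all types and names below are hypothetical.

```cpp
#include <cstdint>
#include <vector>

// One fragment per Read-Out Driver, produced in the ROD crate and shipped
// over a Read-Out Link (ROL) to a ROBIN on the PCI bus of a ROS PC.
struct RODFragment {
    uint32_t sourceId;             // which ROD produced this
    uint32_t level1Id;             // which event this fragment belongs to
    std::vector<uint32_t> data;    // detector payload
};

// A ROB fragment wraps one ROD fragment inside a ROBIN buffer.
struct ROBFragment {
    RODFragment rod;
};

// On request from the LVL2 or Event Builder networks, the ROS PC bundles
// all its ROB fragments for a given level-1 ID into one ROS fragment.
struct ROSFragment {
    uint32_t level1Id;
    std::vector<ROBFragment> robs;
};

ROSFragment buildRosFragment(uint32_t l1Id,
                             const std::vector<ROBFragment>& buffered) {
    ROSFragment out{l1Id, {}};
    for (const auto& rob : buffered)
        if (rob.rod.level1Id == l1Id) out.robs.push_back(rob);
    return out;
}
```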

16 Event Filter Dataflow design
- The EFD function is divided into specific tasks that can be dynamically interconnected to form a configurable EF dataflow network
- The internal dataflow is based on reference passing: only a pointer to the event (stored in the shared heap) flows among the different tasks (sketched below)
- Tasks that implement interfaces to external components are executed by independent threads (multi-threaded design), in order to absorb communication latencies and enhance performance
- This has proven to be a solid and versatile programming paradigm, coupling effectively to modern SMP PC architectures
[Diagram: one EFD node with SFI input tasks; monitoring, sorting and ExtPTs tasks handing events to processing tasks PT #1...#n through PTIO; output, calibration, trash and debugging channels; SFOs for the main output stream and the calibration data]
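A minimal sketch of the reference-passing idea, not the actual EFD code: events live once in a shared heap, and only pointers move between tasks, each running in its own thread so a slow consumer does not stall a producer. Task names (input, pt, sfo) are hypothetical stand-ins.

```cpp
#include <condition_variable>
#include <cstdio>
#include <memory>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct Event { int id; std::vector<char> payload; };  // lives in the "shared heap"

// A thread-safe queue of event *pointers*: only references flow between tasks.
class RefQueue {
    std::queue<std::shared_ptr<Event>> q_;
    std::mutex m_;
    std::condition_variable cv_;
public:
    void push(std::shared_ptr<Event> e) {
        { std::lock_guard<std::mutex> l(m_); q_.push(std::move(e)); }
        cv_.notify_one();
    }
    std::shared_ptr<Event> pop() {
        std::unique_lock<std::mutex> l(m_);
        cv_.wait(l, [this] { return !q_.empty(); });
        auto e = q_.front(); q_.pop();
        return e;  // nullptr acts as an end-of-run marker
    }
};

int main() {
    RefQueue toPT, toSFO;
    // Input task: allocates each event once, then hands out pointers only.
    std::thread input([&] {
        for (int i = 0; i < 5; ++i)
            toPT.push(std::make_shared<Event>(Event{i, std::vector<char>(1024)}));
        toPT.push(nullptr);
    });
    // Processing task (stand-in for a PT), in its own thread so that
    // communication latencies on one side do not stall the other.
    std::thread pt([&] {
        while (auto e = toPT.pop()) toSFO.push(e);
        toSFO.push(nullptr);
    });
    // Output task (stand-in for the SFO).
    std::thread sfo([&] {
        while (auto e = toSFO.pop()) std::printf("event %d written\n", e->id);
    });
    input.join(); pt.join(); sfo.join();
    return 0;
}
```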

17 GNAM Monitoring
[Diagram: event samplers on ROD/ROS/SFI/SFO and a file sampler feed the GNAM monitoring process (core plus detector user libraries, driven by transitions from users or the controller), which publishes to the Online Histogramming Service, an interactive presenter and files on disk; responsibilities are split between the DAQ/Online SW group, the GNAM-Monitoring group and the detector groups]
- Starting from the experience of previous test beams, a group of people developed a complete monitoring chain (the GNAM monitoring tool): P. Adragna, M. Della Pietra, A. Dotti, R. Ferrari, C. Roda, W. Vandelli, P.F. Zema
- GNAM has been used since the first day of the CTB to monitor the beam detectors
- During the CTB, several detector groups provided their specific libraries (TileCal, MDT, Pixels, RPC)
- GNAM was a useful tool, especially at the beginning, for understanding detector behaviour, finding faulty states and obtaining electronics calibrations

18 Monitoring: the Gatherer
[Diagram: gatherers for subdetector monitoring, reconstruction and calibration collect histograms from the readout system (ROB/ROS/SFI/SFO), LVL1, LVL2/EF, Tier-0 and the calibration farm, with dynamic allocation of links; intelligent monitoring and data-quality assessment feed alarms, status displays and displays for the shift crew and experts, plus an archiver; slow-control, reference, monitoring, data-quality and configuration databases]
- About 10 monitoring algorithms were publishing between 800 and 1000 histograms concurrently, including detector standalone, correlations, and EF performance (the summing step is sketched below)
- The latency overhead induced by the monitoring steps is at present acceptable (needs more validation)
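The core of the gathering step is a bin-by-bin sum of identically binned histograms published by many nodes. A minimal sketch with hypothetical types, not the real Gatherer or Online Histogramming Service API:

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// A histogram reduced to its bin contents; all nodes are assumed to
// publish the same binning. Hypothetical stand-in for the OH histogram type.
using Histo = std::vector<double>;

// Bin-by-bin sum of the per-node histograms into one gathered view.
Histo gather(const std::vector<Histo>& perNode) {
    Histo sum(perNode.front().size(), 0.0);  // assumes at least one node
    for (const auto& h : perNode)
        for (std::size_t b = 0; b < sum.size(); ++b) sum[b] += h[b];
    return sum;
}

int main() {
    // Two nodes publishing 3-bin histograms; the shift crew sees the sum.
    std::vector<Histo> nodes = {{1, 0, 2}, {0, 3, 1}};
    for (double bin : gather(nodes)) std::printf("%.0f ", bin);  // 1 3 3
    std::printf("\n");
    return 0;
}
```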

19 PESA
- Physics and Event Selection Architecture: in the HLT, the selection strategy is built around the identification of physics objects
- PESA Core SW is responsible for the implementation of the Steering and Control, built around standard Athena components
- PESA Algorithms evolves and develops the HLT algorithmic tools using realistic data access and handling: LVL2 specialized algorithms, EF algorithms adapted from offline; important deployments in the HLT testbeds
- PESA Validation and Performance applies these tools in a structured way to data samples, to extract efficiencies, rates, rejection factors and physics coverage
- Builds on past experience from the TP and the TDR (CERN/LHCC 2000-17 and CERN/LHCC 2003-022)
- Stems from an established structure, laid out in parallel with the organization of the Combined Performance working groups, in five main lines ("vertical slices"): electrons and photons; muons; jets/taus/ETmiss; b-tagging; B-physics

20 Muon slice
- The LVL2 and EF muon algorithms have been extensively tested on simulated ATLAS data
- LVL2: μFast
  - Task: confirm the LVL1 trigger with a more precise pT estimate within a Region of Interest (RoI)
  - Global pattern recognition, track fit, and a fast pT estimate via a look-up table, with no use of time-consuming fit methods (see the sagitta relation below)
- Event Filter: TrigMoore
  - Based on the offline reconstruction algorithm Moore
  - Can run seeded (reconstruction starting from the RoIs of the previous levels)
  - Precise pT determination
  - Moore (offline version) was already successfully tested as the EF during the 2003 test beam
- The 2004 test beam was a fundamental step forward in testing the complete muon trigger slice, including HLT steering and seeding
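For orientation, a look-up-table pT estimate of this kind rests on the standard sagitta relation; the numbers below are purely illustrative, not the actual μFast calibration:

```latex
% Sagitta s of a track with transverse momentum p_T (GeV) bending in a
% field B (T) over a lever arm L (m); inverting it gives the fast,
% fit-free LUT-style estimate.
s = \frac{0.3\, B\, L^{2}}{8\, p_T}
\quad\Longrightarrow\quad
p_T \approx \frac{0.3\, B\, L^{2}}{8\, s}
```

With illustrative values B = 0.5 T and L = 5 m, a 100 GeV muon gives s ≈ 4.7 mm; the field and geometry dependence is presumably encoded in the look-up table rather than recomputed with a fit at run time.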

21 Muon Level-2 partition
[Screenshots: DAQ Run Control showing the L2 partition up and running with L1, RPC and MDT; beam profiles on MDT and RPC; MDT hit clusters displayed by the online presenter]
- Further integration during (and after...) the combined 25 ns run
- Code stable: Level-2 with μFast introduced in the standard DAQ partition
- Communication between μFast and TrigMoore was correct
- The muon sagitta was reconstructed at Level-2, but the correlation with the EF is incomplete
- Nevertheless, all HLT functionalities have been successfully tested

22 MuFast
- μFast pattern recognition and data preparation both work very well, at the test beam and in the testbed
- Data-preparation time is one of the most problematic issues in PESA: μFast is today the only algorithm compliant with the LVL2 latency (10 ms)
- Work in progress to assess rate evaluations and efficiencies
- The big plan for this year is the extension to the endcap, in collaboration with the Israeli and US groups
- A better assessment of Detector Description compliance (GeoModel) is also needed

23 TrigMoore
- Huge activity to study TrigMoore performance in the presence of cavern background (safety factors 1 to 10) and pile-up at luminosities of 1 and 2 x 10^33 cm^-2 s^-1
- The fake-muon rate may become particularly important when the algorithm is applied at the EF "unseeded" by LVL2
- Good performance of the seeded version today (latency)
- Extension to the endcap is needed
- A better evaluation of the physics performance is also needed

24 LVL1
- The LVL1 simulation is of course an integral part of measuring the full muon-slice performance
- A lot of work done in the past: cabling, efficiency, robustness
- Next steps (with the available manpower):
  - Efficiency studies with cavern background using DC1 data; careful evaluation of the needed statistics (signal and background): a big load on the Italian farms
  - Building of a "horizontal slice" including the end-caps, to assess LVL1 trigger rates over the full eta range
- New topics (with manpower still to be defined...):
  - Efficiency studies with signal samples from the DC2 production, starting up at CERN with Geant4 and the latest spectrometer layout
  - Background studies with Geant4
  - Detailed study of the LVL1 timing (cabling, time-of-flight)
  - Cosmic trigger
  - Physics rates and efficiencies

25 b-tagging selection
- Identify variables that discriminate between b-jets and u-jets:
  - d0/σ(d0) as a function of pT (σ(d0) ~ 25 µm at high pT)
  - z0: needs primary-vertex reconstruction after track reconstruction; using the same algorithm as in the seed formation we get 200 µm (precise enough, considering the (similar) z0 resolution of the tracks)
  - Number of tracks in the RoI
  - Energy fraction of the b candidate
- For each variable, compute the weight variable W and the discriminant variable X (see the sketch below), then evaluate the rejection at LVL2 and the tagging efficiency
- Combining the two most effective variables (d0/σ(d0) and z0) using 2D pdfs accounts for the full correlation between the variables
- New results: rejection R(εb = 50%) = 12.0, R(εb = 70%) = 4.5; old results (d0 only): R(50%) = 7.0, R(70%) = 3.0
- The B-physics implications are under study
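The slide does not define W and X, but they are presumably the usual likelihood-ratio construction; a sketch under that assumption, with P_b and P_u the b-jet and u-jet pdfs of the chosen variable (a 2D pdf for the combined d0/σ(d0) and z0 tagger):

```latex
% Likelihood-ratio weight over the measurements x_i in the RoI, and the
% discriminant X mapped into [0,1]; b-like candidates pile up near X = 1.
W = \prod_{i} \frac{P_b(x_i)}{P_u(x_i)},
\qquad
X = \frac{W}{1+W}
```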

26 Outline
- New TDAQ Organization: Italian activities and roles
- Pre-series procurements: status, deployment; documentation; activities
- Combined Test Beam results: monitoring and ROD Crate DAQ; algorithms development
- Planning and outlook: systems commissioning; cosmic data taking

27 PESA Validation & Performance
- Building of trigger menus
  - Evolve and complement the work done in the present slices; slices will always be part of the PESA validation process, since people developing and trying algorithms will necessarily apply them to some sample to extract information about their behaviour
  - "Slices", however, are only ingredients of the recipe needed in the runtime phase of ATLAS, where the complete menu is the only global element that can be optimized against "environmental" conditions (detector knowledge, machine background, etc.)
  - Operate the steering on multiple combinations of objects (a hypothetical menu sketch follows after this slide)
- Physics validation use-cases
  - A list of items of increasing complexity, moving from the simple processes used now (like Z → 2e or Z → 2μ) to others capable of addressing more complex menus (like H → 2e2μ, or top, or ...); feedback and help are needed to select the most interesting ones
  - Study the feasibility of an exercise similar to the Athens one for physics, where a mixture of signal samples (plus some background) is produced and the trigger menu is tested (blindly) against those data
- PESA selection commissioning
  - On a time scale even earlier than that of the "final" trigger menu: we need to be ready for cosmic data taking
  - Prepare modified algorithms if needed (e.g. non-pointing tracks)
  - Understand the detector needs and collect the corresponding requirements in advance
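To make the menu-versus-slice distinction concrete, here is a hypothetical sketch of a menu as a data structure: each item is one "slice" chain, while the menu is the set that gets optimized as a whole. This is illustrative only, not the actual ATLAS trigger configuration schema, and the item names are made up in the style of the period.

```cpp
#include <string>
#include <vector>

// One trigger-menu entry: a LVL1 seed, the HLT algorithm chain, a prescale.
struct MenuItem {
    std::string name;                   // e.g. "e25i" or "2mu6" (hypothetical)
    std::string l1Seed;                 // LVL1 item that seeds the HLT chain
    std::vector<std::string> hltSteps;  // LVL2 and EF selection sequence
    int prescale;                       // 1 = keep every accepted event
};

// The menu is the global object optimized against "environmental"
// conditions (luminosity, backgrounds, detector state), not the single slice.
using TriggerMenu = std::vector<MenuItem>;

TriggerMenu earlyPhysicsMenu() {
    return {
        {"e25i", "L1_EM25I", {"L2_e25i", "EF_e25i"}, 1},
        {"2mu6", "L1_2MU6",  {"L2_2mu6", "EF_2mu6"}, 1},
    };
}
```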

28 ATLAS Commissioning Phases
- Commissioning means bringing the ATLAS systems from "just installed" to "operational". It is broken into four phases:
  - Subsystem standalone commissioning: DCS, LV, HV, cooling, gas, safety, DB recording and retrieval; DAQ: pedestal runs, electronics calibration, writing and analyzing data
  - Integrate the subsystems into the full detector: a skeleton TTC needs to be available
  - Cosmic rays: recording data, analyzing and understanding them, distributing them to remote sites; ad-hoc DAQ, trigger and algorithms will be needed
  - Single beam, first collisions, increasing rates, etc... Wow...
- A sizeable part of the commissioning activities will be done during the installation itself
- The phases will overlap, since different systems may be in different phases at the same time
  - For the barrel calorimeter, electronics commissioning will start soon
  - The Tile calorimeter will start cosmics data taking this fall

29 HLT Commissioning
- Commissioning is a set of activities spanning the time interval from the installation of the HLT racks and nodes...
  - A rack is the elementary unit for commissioning: cooling, power and network cables are connected; the OS, Dataflow and Online software are installed
- ... to the phase when the HLT is filtering physics data and recording them
  - The HLT selection algorithms are installed and running stably
  - The complete trigger menu (at least for early physics) is configured
  - The trigger selection efficiencies and background rejection rates are understood and can serve as input for physics measurements
- It is also clear that the time scales are "shifted" with respect to the rest of the detectors: installation will happen later than for the other systems
- Defining Phase-1 commissioning is the most urgent task: heavily use the pre-series to exercise the procedures for installation and commissioning
- Important steps will cover the integration of the detectors into the full system
  - This involves operations with a very strong coupling to the offline commissioning activities
  - Development of specific algorithms looking at simple data decoding (cabling, ...)
- The final commissioning phases extend far beyond the data-taking startup (interface with the run coordination team): good coordination with the physics groups is needed
- We need to think of the trigger as a whole object to be commissioned (including LVL1)

30 Cosmic muons in ATLAS
[ATLAS Geant simulation of the initial detector: rock modeled as silicon (2.33 g/cm^3), 600 m x 600 m x 200 m deep; air, concrete, surface building; PX14/16 shaft shielding (2.5 g/cm^3); PX14 (18.0 m inner diameter) and PX16 (12.6 m inner diameter) shafts]

31 Cosmic trigger issues
- How to trigger on cosmics? RPC, TGC (?)
- From preliminary full simulations of LVL1: up to ~100 Hz of cosmics pass the low-pT RPC LVL1
- How to increase the cosmic trigger acceptance? An exciting last opportunity! After this, one will only ever be asked to reduce trigger rates...
- Muon system
  - The requirement for cosmic (and beam-halo) triggers was included in the design: e.g., the trigger ASICs include programmable delays to compensate for the time-of-flight of down-going cosmic-ray muons in the barrel
  - Projectivity constraints result from the cabling between planes of trigger chambers
  - A lot of flexibility in the system: timing adjustments, open L1 roads, relaxed coincidence requests
  - At LVL2, modified trigger algorithms can help in selecting non-pointing muons
- Tile Calorimeter system
  - The RPCs will be commissioned later than foreseen; in June 2005 the TileCal in the pit will be equipped with electronics, so commissioning with cosmics can start, and a self-triggering scheme is needed while waiting for the RPCs
  - Consider back-to-back trigger towers (Δη x Δφ = 0.1 x 0.1, full calorimeter depth) and ask for E > 1.5 GeV in both towers (a sketch of this coincidence follows below)
  - Expected rate from full simulation: ~130 µ/hr for 16 top + 16 bottom modules
- Ongoing studies to refine the present understanding, soon to be checked with real measurements
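A minimal sketch of the self-triggering coincidence described above. The tower indexing and function name are hypothetical; the real trigger would be formed in the TileCal electronics, not in offline-style code.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Cosmic self-trigger for the Tile calorimeter: a tower in a top module and
// the back-to-back tower in the bottom module must both exceed the threshold.
bool backToBackCosmicTrigger(const std::vector<double>& topTowerE,     // GeV
                             const std::vector<double>& bottomTowerE,  // GeV
                             double threshold = 1.5)  // GeV, from the slide
{
    const std::size_t n = std::min(topTowerE.size(), bottomTowerE.size());
    for (std::size_t eta = 0; eta < n; ++eta)
        if (topTowerE[eta] > threshold && bottomTowerE[eta] > threshold)
            return true;  // candidate down-going muon crossing both modules
    return false;
}
```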

32 Milestones and Finance
- 30/06/2005: TDAQ - installation, test and use of the "pre-series" (~10% TDAQ slice)
- 24/12/2005: TDAQ - installation and test of the Pixel, LAr, Tile and Muon ROSes (interfacing to the ROD crates and integration into the DAQ)
- The CORE budget allocated for 2005 is 214 k€
- The TDAQ Resource Committee (in which V.V. participates) is currently planning the details of the financial commitments and of the sharing
- INFN is committed to the Read-Out System and Online components; no substantial changes to the sharing plan are foreseen
- We will proceed with the purchases (most likely, as always, through CERN orders) as soon as possible, after informing the referees

33 Cost Profile (kCHF)

                      2004   2005   2006   2007   2008   2009   Total
  Pre-series           140      0      0      0      0      0     140
  Detector R/O           0    275    275      0      0      0     550
  LVL2 Proc              0      0     65    195    230    160     650
  Event Builder          0      0     50     50    110     70     280
  Event Filter           0      0    170    180    570    380    1300
  Online                 0     45    135      0      0      0     180
  Infrastructure         0      0     80     80     20     20     200
  INFN Total           140    320    775    505    930    630    3300
  TDR Total           1048   3357   4087   4544   7522   4543   25101
  INFN Percentage (%) 13.4    9.5   19.0   11.1   12.4   13.9    13.1

34 Conclusions
- The current status of the HLT/DAQ project is well aligned with the milestones foreseen for 2005
- Many small details certainly need careful evaluation, because the project is extremely complex and the Italian responsibilities also span several areas
- It would be extremely positive to have larger contributions to the workforce, now that the construction effort is over
- The Combined Test Beam activities were a success: many Italians in highly visible roles; we learned a great many things, and we must find the time to stop and reflect on them
- Algorithm development is proceeding well; growing emphasis will be placed on complex physics-performance measurements
- The new organizational structure of the project is defined: it will evolve further as the data-taking period approaches; the Italian component is well represented, in recognition of all the commitments successfully carried through by our researchers over these years

