HLT/DAQ Status report Valerio Vercesi CSN1 April 2005

Outline
- New TDAQ Organization
  - Italian activities and roles
- Pre-series procurements
  - Status, deployment
  - Documentation
- Activities
  - Combined Test Beam results
  - Monitoring and ROD Crate DAQ
  - Algorithms development
- Planning and outlook
  - Systems commissioning
  - Cosmic data taking

ATLAS TDAQ system (schematic: Muon, Calo and Inner detectors → pipeline memories → Readout Drivers → ~1600 Readout Buffers (ROBs) → Event builder network → High-Level Trigger → storage at ~300 MB/s)
- LEVEL-1 TRIGGER: hardware-based, coarse granularity from the calorimeter and muon systems; 40 MHz → ~75 kHz, latency ~2 µs
- LEVEL-2 TRIGGER: "seeded" by Regions-of-Interest (RoI); full granularity for all subdetector systems; fast rejection and "steering"; LVL2 farm; ~75 kHz → ~2 kHz, latency ~10 ms
- EVENT FILTER: "seeded" by the Level-2 result; full event access; algorithms inherited from offline; EF farm of ~1000 CPUs; ~2 kHz → ~200 Hz, latency ~1 s
- Overall: roughly 1 selected event per million (rates and bandwidth worked out below)
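For orientation, the per-level rejection factors implied by the quoted rates, and the resulting storage bandwidth (assuming a nominal event size of roughly 1.5 MB, which is not stated on this slide), work out as:

```latex
\frac{40\,\mathrm{MHz}}{\sim 75\,\mathrm{kHz}} \approx 530\;(\mathrm{LVL1}),\qquad
\frac{\sim 75\,\mathrm{kHz}}{\sim 2\,\mathrm{kHz}} \approx 40\;(\mathrm{LVL2}),\qquad
\frac{\sim 2\,\mathrm{kHz}}{\sim 200\,\mathrm{Hz}} \approx 10\;(\mathrm{EF}),\qquad
\sim 200\,\mathrm{Hz} \times \sim 1.5\,\mathrm{MB} \approx 300\,\mathrm{MB/s}.
```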

TDAQ Steering Group
- The role of the TDSG in the next two years will be more focused on project planning and progress monitoring (strategic, financial)
  - Relying more on the 3 coordination structures for detailed technical follow-up
- Ex-officio presence according to agenda (includes links to offline, DB & commissioning)
- Experts and coordinators of system-wide activities invited as appropriate
- This is a proposal for 2005
  - Reserve the possibility to propose modifications if needed

Italian roles
- S. Falciano (Roma1): HLT Commissioning Coordinator
- A. Negri (Pavia): Event Filter Dataflow Coordinator
- A. Nisati (Roma1): TDAQ Institute Board chair and PESA Muon Slice Coordinator
- F. Parodi (Genova): PESA b-tagging Coordinator
- V. Vercesi (Pavia): Deputy HLT leader and PESA (Physics and Event Selection Architecture) Coordinator
- And many people who acted as a driving force and reference point for several activities during the Combined Test Beam
Italian activities
- Level-1 barrel muon trigger (Napoli, Roma1, Roma2)
- Level-2 muon trigger (Pisa, Roma1)
- Level-2 pixel trigger (Genova)
- Event Filter Dataflow (Pavia, LNF)
- Event Filter Muon Algorithms (Lecce, Pavia, Roma1)
- DAQ (LNF, Pavia, Roma1)
- Monitoring (Pavia, Pisa, Cosenza, Napoli)
- DAQ CTB (TDAQ + detector groups)


Pre-series: "Module-0" of the final system, 7 racks (~10% of the final dataflow), split between USA15 and SDX1
- One central switch: DAQ rack, GEth ports for L2+EB
- One ROS rack: TC rack with horizontal cooling, 11 ROS hosting 44 ROBINs
- One full L2 rack: DAQ rack, 32 HE PCs
- One L2-misc rack: DAQ rack, 50% of the RoIB, 3 LE PCs (1 pROS, 2 L2SVs)
- Part of an EFIO rack: DAQ rack, 10 HE PCs (6 SFI, 2 SFO, 2 DFM)
- Part of an EvFilt rack: DAQ rack, 12 HE PCs
- Part of an ONLINE rack: DAQ rack, 4 HE PCs (monitoring), 2 LE PCs (control)
- All racks: one or more Local File Servers and one or more Local Switches

Accounting
- The CERN-driven Market Survey to understand current costs versus technical specifications has taken longer than expected
  - Some delay also came from the definition of the specifications themselves, re-worked as a follow-up of the CTB experience concerning reliability
- The INFN-approved contribution is shared as:
  - Read-Out Systems: 51 kCHF (ROS racks)
  - Online Computing System: 40 kCHF (monitoring, operations)
  - Online Network System: 44 kCHF (switches, file servers)
- A description of the components and specifications is now available on EDMS
  - Together with the deployment experience gained in 2005, this will form the basis for the procurement of items in 2006 and onwards


H8: from drawings, to G4 simulations, to reality (photos of the beam-line setup: Beam Line, Electromagnetic Calorimeter, Hadronic Calorimeter, Transition Radiation Tracker, first Muon Chambers)

ATLAS Combined Test Beam
- Main scope: runs with a combination of detectors
  - Full ATLAS barrel slice and muon end-cap on H8
- Four important aspects
  - Calibrate the calorimeters over a wide range of energies (1-350 GeV)
  - Finalize the trigger studies with LVL1 Muon and Calorimeter
  - Study commissioning aspects and gain experience with final elements of the readout
  - Study the detector performance of an ATLAS barrel slice
- Pre-commissioning activity
  - Shorter time to commission
  - Learn to integrate and operate the system
  - Find problems in advance
- Executive summary
  - All TDAQ systems have been integrated with the detectors, with other parts of TDAQ, with the databases and with the offline software
  - TDAQ has spent much more time acting as a service than as a client
  - The setup was really big, and the detectors needed more time than expected to debug their own elements and functionalities
  - An impressive amount of information and experience was collected
- The TDAQ Italian community wishes to thank the CSN1 and our referees for the support given to this activity

CTB
- TDAQ at the ATLAS test beam used the latest prototypes to provide support for the ATLAS activity
  - for a duration of eight months (on-call 24x7)!
- The same software releases are used for the test beam, for performance measurements on test beds, and as a base for further development
- It showed how complex the system is and gave a measure of its level of development
  - Setting it up always required TDAQ experts
  - Many Italians in the support teams
- TDAQ went to the beam test with the experts
  - The support effort has been a key element of the CTB operations
  - All the infrastructure and general-purpose PCs were supported by TDAQ (network boot, DHCP, etc.)

TDAQ setup in the CTB (diagram): the LVL1 and detector ROSes (LVL1calo, LVL1mu, RPC, TGC, CSC, MDT, Tile, LAr, TRT, SCT, Pixel) feed the Event Builder (DFM, SFI) over the GbE data network; the pROS contains the LVL2 result that steers/seeds the EF processing; a local LVL2 farm and a local EF farm run the selection; the SFO writes to CASTOR at CERN IT (Meyrin, a few km away) through a gateway, which also links to remote EF farms in Poland, Canada and Denmark (infrastructure tests only); monitoring and run control complete the setup. Compared to ATLAS this is ~10% of the DAQ and ~2% of the HLT, just counting PCs.

Integration of software
- Components developed by different groups, often separately, are exercised together
  - Detector DAQ using the ROD Crate DAQ skeleton provided by TDAQ
  - Online SW (control, configuration, user interface, monitoring tools)
  - Data Flow (RCD, ROS, flow of data to LVL2 processors, Event Building, flow to EF, storage)
  - Detector monitoring (detector specific, using the DAQ infrastructure)
  - High Level Trigger (selection algorithms, developed in the Offline environment, run on LVL2 and EF processors)
  - Offline analysis (Athena framework, unpacking of raw data, analysis algorithms)
  - Conditions Data Base, link from the Detector Control System to Offline
- Huge dependencies in many corners on the availability of offline software components
  - Online and offline systems are tightly coupled at various levels: a revised assessment of the cost-benefit ratio is needed
  - E.g. only the next Athena release will be the "consolidated" release for CTB analysis

ROD Crate DAQ
- The ROD Crate DAQ (RCD) provides Data Acquisition functionality at the level of the Read-Out Drivers
  - It satisfies the detectors' need for centralized and uniform support at the ROD-crate level for local processing, configuration, event sampling, ...
- Data path (diagram): front-end electronics → ROD crates (RODs and a ROD Crate Processor on the VME bus, controlled by a ROD Crate Workstation over the GbEth LAN, which also carries configuration & control and event sampling & calibration data) → Read-Out Links (ROLs) → ROS PCs (ROBIN cards on the PCI bus, NIC to the GbEth LAN) → L2 & Event Builder networks
- Fragment hierarchy: ROD fragments (detector specific) → ROB fragments → ROS fragments → event fragments (illustrated in the sketch below)
- Total number of ROD crates: 90; total number of ROS PCs: 144; all in USA15 (underground)
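To make the fragment hierarchy concrete, a minimal C++ sketch of how ROD fragments nest into ROB, ROS and event fragments; the type and field names are hypothetical and do not reproduce the actual ATLAS event-format library.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical nesting of read-out fragments, mirroring the RCD data path:
// ROD fragment (detector-specific payload) -> ROB fragment (one read-out link)
// -> ROS fragment (one ROS PC) -> event fragment shipped to L2/Event Builder.
struct RODFragment {
    std::uint32_t source_id;            // which Read-Out Driver produced it
    std::vector<std::uint32_t> payload; // detector-specific data words
};

struct ROBFragment {
    std::uint32_t rob_id;               // one Read-Out Link / ROBIN channel
    RODFragment rod;                    // one ROD fragment per ROB
};

struct ROSFragment {
    std::uint32_t ros_id;               // one of the 144 ROS PCs
    std::vector<ROBFragment> robs;      // all ROBINs hosted by this ROS PC
};

struct EventFragment {
    std::uint32_t l1_id;                // Level-1 event identifier
    std::vector<ROSFragment> roses;     // contributions gathered by the Event Builder
};
```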

Event Filter Dataflow design
- The EFD function is divided into different specific tasks that can be dynamically interconnected to form a configurable EF dataflow network
  - Example task chain (diagram): SFI input → monitoring, sorting, ExtPTs (external Processing Tasks PT #1, #2, #3, ... connected through PTIO) → output streams (main output to the SFO, calibration data, debugging channel) and trash
- The internal dataflow is based on reference passing (see the sketch below)
  - Only the pointer to the event (stored in the shared heap) flows among the different tasks
- Tasks that implement interfaces to external components are executed by independent threads (multi-threaded design)
  - In order to absorb communication latencies and enhance performance
  - Proven to be a solid and versatile programming paradigm that couples effectively to modern PC architectures (SMP)
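A minimal sketch (not the real EFD code) of the two ideas above: only a pointer to an event allocated once in a shared heap travels between tasks, and a task interfacing to an external component (here a stand-in for the SFI input) runs in its own thread so that communication latency is absorbed by a queue.

```cpp
#include <condition_variable>
#include <memory>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct Event { std::vector<char> data; };   // allocated once, lives in the "shared heap"

// Thread-safe queue of event pointers: this is all that flows between tasks.
class PointerQueue {
    std::queue<std::shared_ptr<Event>> q_;
    std::mutex m_;
    std::condition_variable cv_;
public:
    void push(std::shared_ptr<Event> e) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(e)); }
        cv_.notify_one();
    }
    std::shared_ptr<Event> pop() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        auto e = q_.front(); q_.pop(); return e;
    }
};

int main() {
    PointerQueue from_input;

    // "Input" task: talks to an external component, so it runs in its own
    // thread; its communication latency is hidden behind the queue.
    std::thread input([&] {
        for (int i = 0; i < 10; ++i)
            from_input.push(std::make_shared<Event>());  // event built once
    });

    // Downstream tasks (monitoring, selection, output) only see the pointer.
    for (int i = 0; i < 10; ++i) {
        auto ev = from_input.pop();   // no copy of the event data
        // ... hand the pointer to the processing tasks, then to the output task ...
    }
    input.join();
    return 0;
}
```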

GNAM Monitoring
- Architecture (diagram): event samplers on ROD/ROS/SFI/SFO and a file sampler feed the GNAM monitoring process (CORE plus detector user libraries), which publishes histograms to the Online Histogramming Service; histograms are viewed with the interactive presenter or written to file; state transitions come from users or from the run controller (components provided by the DAQ/Online SW group, the GNAM-Monitoring group and the detector groups)
- Starting from the experience at previous test beams, a group of people developed a complete monitoring chain (the GNAM Monitoring Tool)
  - P. Adragna, M. Della Pietra, A. Dotti, R. Ferrari, C. Roda, W. Vandelli, P.F. Zema
- GNAM has been used since the first day of the CTB to monitor the beam detectors
- During the CTB, several detector groups provided their specific libraries (TileCal, MDT, Pixels, RPC)
- GNAM was a useful tool, especially at the beginning, to understand the detector behaviour, to find faulty states and to obtain electronics calibrations

Monitoring: the Gatherer
- Architecture (diagram): Gatherer instances collect monitoring output from the readout system (ROB, ROS, SFI, SFO, ...), LVL1, LVL2/EF, Tier-0 and the calibration farm (sub-detector, reconstruction and calibration Gatherers); the summed information feeds intelligent monitoring and data-quality assessment, displays for the shift crew and the experts, an archiver, alarms & status displays, and the slow-control, monitoring, data-quality and configuration databases, with dynamic allocation of links
- About 10 monitoring algorithms were publishing between 800 and 1000 histograms concurrently
  - Including detector standalone, correlations, and EF performance
- The latency overhead induced by the monitoring steps is at present acceptable (needs more validation)
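As an illustration of the Gatherer's core operation, a minimal sketch of the bin-by-bin summing of identical histograms published by many nodes; the real Gatherer works through the Online Histogramming Service, which is not modelled here.

```cpp
#include <cstddef>
#include <vector>

// Toy fixed-binning histogram: the Gatherer's core task is a bin-by-bin sum
// of identical histograms published by many farm nodes.
struct Histogram {
    std::vector<double> bins;
    explicit Histogram(std::size_t n) : bins(n, 0.0) {}
};

Histogram gather(const std::vector<Histogram>& per_node) {
    Histogram sum(per_node.front().bins.size());
    for (const auto& h : per_node)
        for (std::size_t i = 0; i < sum.bins.size(); ++i)
            sum.bins[i] += h.bins[i];
    return sum;   // single summed histogram handed to the shift-crew display
}
```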

PESA
- Physics and Event Selection Architecture
  - In the HLT, the selection strategy is built around the identification of physics objects
- PESA Core SW is responsible for the implementation of the Steering and Control
  - Built around standard Athena components
- PESA Algorithms evolves and develops the HLT algorithmic tools, using realistic data access and handling
  - LVL2 specialized algorithms, EF algorithms adapted from offline
  - Important deployment in the HLT testbeds
- PESA Validation and Performance applies the tools in a structured way to data samples to extract efficiencies, rates, rejection factors and physics coverage
  - Builds on past experience from the TP and the TDR (CERN/LHCC documents)
- The work stems from the established structure, laid out in parallel with the organization of the Combined Performance working groups, in 5 main lines ("vertical slices")
  - Electrons and photons
  - Muons
  - Jets / Taus / ETmiss
  - b-tagging
  - B-physics

Muon slice
- LVL2 and EF muon algorithms have been extensively tested on simulated ATLAS data
- LVL2: µFast
  - Task: confirm the LVL1 trigger with a more precise pT estimate within a Region of Interest (RoI)
  - Global pattern recognition, track fit, fast pT estimate via a Look-Up Table, with no use of time-consuming fit methods (see the sketch below)
- Event Filter: TrigMoore
  - Based on the offline reconstruction algorithm Moore
  - Can run seeded (reconstruction starting from the RoI of the previous levels)
  - Precise pT determination
  - Moore (offline version) was already successfully tested as EF during the 2003 test beam
- The 2004 test beam has been a fundamental step forward to test the complete muon trigger slice, including HLT steering and seeding
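A minimal illustration of the look-up-table idea behind µFast: once pattern recognition has measured the sagitta of the muon trajectory, the pT is read from a pre-computed table rather than fitted; the binning, the pT = a/s + b form and all numbers below are invented for the example.

```cpp
#include <algorithm>
#include <array>

// Toy µFast-style pT estimate: pattern recognition provides the sagitta s of
// the muon trajectory from the MDT superpoints; instead of a time-consuming
// fit, the conversion s -> pT reads coefficients pre-computed per (eta, phi)
// region from a look-up table. Binning and functional form are illustrative.
constexpr int kEtaBins = 30;
constexpr int kPhiBins = 32;
constexpr double kPi = 3.14159265358979;

struct LutCell { double a; double b; };
std::array<std::array<LutCell, kPhiBins>, kEtaBins> lut{};  // filled at configuration time

double fastPt(double sagitta_mm, double eta, double phi) {
    // Map (eta, phi) onto the barrel LUT binning (|eta| < 1.5 assumed here).
    int ie = static_cast<int>((eta + 1.5) / 3.0 * kEtaBins);
    int ip = static_cast<int>((phi + kPi) / (2.0 * kPi) * kPhiBins);
    ie = std::max(0, std::min(ie, kEtaBins - 1));
    ip = std::max(0, std::min(ip, kPhiBins - 1));
    const LutCell& c = lut[ie][ip];
    return c.a / sagitta_mm + c.b;   // pT estimate, no iterative fit needed
}
```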

Muon Level-2 partition
- Screenshots: the DAQ Run Control showing the L2 partition up and running with L1, RPC and MDT; beam profiles on MDT and RPC; MDT hit clusters displayed by the online presenter
- Further integration during (and after...) the combined 25 ns run
- Code stable: Level-2 with µFast introduced in the standard DAQ partition
- Communication between µFast and TrigMoore was correct
- The muon sagitta was reconstructed at Level-2, but the correlation with the EF was incomplete
- However, all HLT functionalities have been successfully tested

µFast
- µFast pattern recognition and data preparation both work very well, in the test beam as well as on the testbed
  - Data preparation time is one of the most problematic issues in PESA
  - µFast is today the only algorithm compliant with the LVL2 latency (10 ms)
- Work in progress to assess rate evaluation and efficiencies
- The big plan for this year is the extension to the endcap
  - In collaboration with the Israeli and US groups
- Also need a better assessment of Detector Description compliance (GeoModel)

TrigMoore
- Intense activity to study TrigMoore performance in the presence of cavern background (safety factors 1 to 10) and pile-up events at 1x and 2x
- The fake-muon rate may become particularly important when the algorithm is applied at the EF "unseeded" by LVL2
- Good performance of the seeded version today (latency)
- Need the extension to the endcap
- Also need a better evaluation of the physics performance

LVL1
- The LVL1 simulation is of course an integral part of the measurement of the full muon-slice performance
- A lot of work done in the past
  - Cabling, efficiency, robustness
- Next steps (with available manpower)
  - Efficiency studies with cavern background using DC1 data
  - Careful evaluation of the needed statistics (signal and background): a big load on the Italian farms
  - Building of a "horizontal slice" including the end-caps to assess LVL1 trigger rates over the full eta range
- New topics (with manpower still to be defined...)
  - Efficiency studies with signal samples from the DC2 production
    - Production starting up at CERN with Geant4 and the latest spectrometer layout
  - Background studies with Geant4
  - Detailed study of the LVL1 timing (cabling, time-of-flight)
  - Cosmic trigger
  - Physics rates and efficiency

b-tagging selection
- Identify variables to discriminate between b-jets and u-jets
  - d0/σ(d0) as a function of pT (σ(d0) ~ 25 µm at high pT)
  - z0: needs primary-vertex reconstruction after track reconstruction. Using the same algorithm as in the seed formation we get ~200 µm (precise enough considering the (similar) z0 resolution of the tracks)
  - Number of tracks in the RoI
  - Energy fraction of the b candidate
- For each variable compute the weight variable W and the discriminant variable X
- Evaluate the rejection at LVL2 and the efficiency for tagging
- Combination of the two most effective variables (d0/σ(d0) and z0) using 2D pdfs, which accounts for the full correlation between the variables (see the sketch below)
  - New results: rejection 12.0 at 50% efficiency, 4.5 at 70% efficiency
  - Old results (d0 only): rejection 7.0 at 50% efficiency, 3.0 at 70% efficiency
- B-physics implications under study
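A sketch of one standard way to build the weight W and the discriminant X from the two most effective variables with 2D pdfs (a likelihood ratio of the b-jet and u-jet hypotheses); the actual PESA implementation may differ in detail, and the binned-pdf helper below is hypothetical.

```cpp
#include <utility>
#include <vector>

// Toy 2D pdf: probability per (d0 significance, z0) bin, normalised to 1.
struct Pdf2D {
    int nx, ny;
    double xmin, xmax, ymin, ymax;
    std::vector<double> p;                       // nx * ny bin contents
    double eval(double x, double y) const {
        int ix = static_cast<int>((x - xmin) / (xmax - xmin) * nx);
        int iy = static_cast<int>((y - ymin) / (ymax - ymin) * ny);
        if (ix < 0 || ix >= nx || iy < 0 || iy >= ny) return 1e-9;
        return p[ix * ny + iy] + 1e-9;           // avoid division by zero
    }
};

// W = product over RoI tracks of P_b/P_u; X = W/(1+W) lies in [0,1].
// Cutting on X trades b-tagging efficiency against u-jet rejection.
double discriminant(const std::vector<std::pair<double, double>>& tracks,
                    const Pdf2D& pdf_b, const Pdf2D& pdf_u) {
    double w = 1.0;
    for (const auto& t : tracks)
        w *= pdf_b.eval(t.first, t.second) / pdf_u.eval(t.first, t.second);
    return w / (1.0 + w);
}
```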


PESA Validation & Performance
- Building of Trigger Menus
  - Evolve and complement the work done in the present slices
  - Slices will always be part of the PESA validation process
    - People developing and trying algorithms will necessarily apply them to some sample in order to extract information about their behaviour
  - "Slices", however, are only ingredients of the recipe we need in the runtime phase of ATLAS, where the complete Menu is the only global element that can be optimized against "environmental" conditions (detector knowledge, machine background, etc.)
  - Operate the steering on multiple combinations of objects
- Physics validation use-cases
  - A list of items of increasing complexity, moving from the simple processes used now (like Z → 2e or Z → 2µ) to others capable of addressing more complex menus (like H → 2e2µ, or top, or ...)
  - Need feedback and help to select the most interesting ones
  - Study the feasibility of an exercise similar to the Athens one for physics, where a mixture of signal samples (plus some background) is produced and the Trigger Menu is tested (blindly) against those data
- PESA Selection commissioning
  - On a time scale even earlier than the "final" Trigger Menu
  - Need to be ready for the cosmic data taking
    - Prepare modified algorithms if needed (e.g. non-pointing tracks)
  - Understand detector needs and collect the corresponding requirements in advance

ATLAS Commissioning Phases
- Commissioning means bringing the ATLAS systems from "just installed" to "operational". It is broken into 4 phases
  - Subsystem standalone commissioning
    - DCS, LV, HV, cooling, gas, safety, DB recording and retrieving
    - DAQ: pedestal runs, electronics calibration, write and analyze data
  - Integrate subsystems into the full detector
    - A skeleton TTC needs to be available
  - Cosmic rays: recording data, analyze/understand, distribute to remote sites
    - Ad-hoc DAQ, trigger and algorithms will be needed
  - Single beam, first collisions, increasing rates, etc...
    - Wow...
- A sizeable part of the commissioning activities will be done during the installation itself
- The phases will overlap, since different systems may be in different phases
  - For the barrel calorimeter, electronics commissioning will start soon
  - The Tile calorimeter will start cosmics data taking this fall

HLT Commissioning
- Commissioning is a set of activities spanning the time interval from the installation of the HLT racks and nodes ...
  - A rack is the elementary unit for commissioning
  - The cooling, power and network cables are connected
  - OS, Dataflow and Online software are installed
- ... to the phase when the HLT is filtering physics data and recording them
  - HLT selection algorithms are installed and running stably
  - The complete trigger menu (at least for early physics) is configured
  - The trigger selection efficiencies and background rejection rates are understood and can serve as input for physics measurements
- It is also clear that the time scales are "shifted" with respect to the rest of the detectors
  - Installation will happen later than for other systems
- The definition of Phase-1 Commissioning is the most urgent
  - Heavily use the Pre-series to exercise the procedures for installation and commissioning
- Important steps will cover the integration of the detectors into the full system
  - They involve operations with a very strong coupling to the offline commissioning activities
  - Development of specific algorithms looking at simple data decoding (cabling, ...)
- The final commissioning phases extend far beyond the data-taking startup (interface with the run coordinator team)
  - Need good coordination with the physics groups
  - Need to think of the trigger as a whole object to be commissioned (including LVL1)

Cosmic muons in ATLAS (Geant simulation of the initial detector, with the overburden modelled: rock approximated as silicon, 600 m x 600 m x 200 m deep, 2.33 g/cm³; air; concrete; surface building; PX14/16 shielding, 2.5 g/cm³; PX14 shaft, 18.0 m inner Ø; PX16 shaft, 12.6 m inner Ø)

Cosmic trigger issues
- How to trigger on cosmics? RPC, TGC (?)
  - From preliminary full simulations of LVL1: up to ~100 Hz of cosmics pass the low-pT RPC LVL1
- How to increase the cosmic trigger acceptance?
  - An exciting last opportunity! After that, one will only be asked to reduce trigger rates...
- Muon system
  - The requirement for cosmics (and beam-halo) triggers was included in the design:
    - e.g. the trigger ASICs include programmable delays to compensate for the time-of-flight of down-going cosmic-ray muons in the barrel
    - Projectivity constraints result from the cabling between planes of trigger chambers
  - A lot of flexibility in the system
    - Timing adjustments
    - Open L1 roads
    - Relaxed coincidence requests
  - At LVL2, modified trigger algorithms can help in selecting non-pointing muons
- Tile Calorimeter system
  - The RPCs will be commissioned later than foreseen
  - June 2005: Tilecal in the pit equipped with electronics → commissioning with cosmics can start
  - A self-triggering scheme is needed while waiting for the RPCs (a sketch follows below)
    - Consider back-to-back trigger towers (Δη x Δφ = 0.1 x 0.1, full calo depth)
    - Require E > 1.5 GeV in both towers
    - Expected rate from full simulation: ~130 µ/hr for 16 top + 16 bottom modules
  - Ongoing studies to refine the present understanding
    - Soon to be checked with real measurements
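A sketch of the Tile self-triggering scheme described above: look for a pair of towers, one in the top and one in the bottom half of the barrel, roughly back-to-back and both above 1.5 GeV; the tower structure and the back-to-back tolerances are assumptions for illustration.

```cpp
#include <cmath>
#include <vector>

// Hypothetical trigger-tower summary: energy summed over the full calorimeter
// depth in a Δη x Δφ = 0.1 x 0.1 tower.
struct TileTower { double eta, phi, energy_GeV; };

// Self-trigger for cosmics: a tower in the top half of the barrel and a tower
// in the bottom half, roughly back-to-back, both with E > 1.5 GeV.
bool cosmicSelfTrigger(const std::vector<TileTower>& towers,
                       double threshold_GeV = 1.5) {
    constexpr double pi = 3.14159265358979;
    for (const auto& top : towers) {
        if (top.energy_GeV < threshold_GeV || std::sin(top.phi) <= 0) continue;
        for (const auto& bot : towers) {
            if (bot.energy_GeV < threshold_GeV || std::sin(bot.phi) >= 0) continue;
            double dphi = std::fabs(std::remainder(top.phi - bot.phi, 2.0 * pi));
            if (std::fabs(dphi - pi) < 0.2 &&        // back-to-back in phi
                std::fabs(top.eta - bot.eta) < 0.2)  // same tower column in eta
                return true;                         // fire the cosmic trigger
        }
    }
    return false;
}
```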

Milestones and Finance
- 30/06/2005: TDAQ - installation, test and use of the "Pre-series" (~10% TDAQ slice)
- 24/12/2005: TDAQ - installation and test of the Pixel, LAr, Tile and Muon ROSes (interfacing to the ROD Crate and integration into the DAQ)
- The CORE budget allocated for 2005 is 214 k€
  - The TDAQ Resource Committee (in which VV participates) is currently planning the details of the financial commitments and of the sharing
  - INFN is committed on the Read-Out System and Online components
  - No substantial changes to the sharing plan are foreseen
- We will proceed with the purchases (most likely, as usual, through CERN orders), after informing the referees, as soon as possible

Cost Profile (kCHF): yearly cost-profile table covering Pre-series, Detector R/O, LVL2 Proc, Event Builder, Event Filter, Online and Infrastructure, with INFN Total, TDR Total and INFN Percentage (%) per year (numerical values in the original slide table)

Conclusions
- The current status of the HLT/DAQ project is well aligned with the milestones foreseen for 2005
  - Many small details certainly need careful evaluation, because the project is extremely complex and the Italian responsibilities also cover several areas
  - It would be extremely positive to have larger contributions to the workforce now that the construction effort is over
- Success of the Combined Test Beam activities
  - Many Italians in highly visible roles
  - We have learned many things; we must find the time to stop and reflect on them
- Algorithm development is proceeding well; increasing emphasis will progressively be placed on complex physics-performance measurements
- The new organizational structure of the project is defined
  - It will evolve further as the data-taking period approaches
  - The Italian component is well represented
  - This recognizes all the commitments successfully carried out by our researchers over these years