Snowmass 2001 - P. Le Dû: Next generation of Trigger/DAQ. Where are we today? Evolution 2005-2010. On/off-line boundaries. What next? 2010-2020. LC triggers.

Snowmass 2001 - P. Le Dû
Next generation of Trigger/DAQ
- Where are we today?
- Evolution 2005-2010
- On/off-line boundaries
- What next? 2010-2020
- LC triggers
- Technologies
- What about standards?
- Technology transfer to other fields
Patrick Le Dû

Snowmass 2001 - P. Le Dû
General comments about Trigger/DAQ
- From the physics: NO loss
- From the detector: deadtimeless
- From the machine: use 100%
- From T/DAQ people: maximum efficiency and minimum maintenance
Can we achieve the ultimate T/DAQ system?

Snowmass 2001 - P. Le Dû
Tevatron selection scheme
[Figure: three-level selection scheme, production rate vs. available time. Input: 7 MHz crossing rate (396/132 ns bunch spacing), physics from QCD through W/Z and top to Higgs(?). Level 1: hardwired processors (FPGA), coarse dedicated data, ~4 µs, output ~10 kHz. Level 2: RISC processors and DSPs, optimized code, output ~1 kHz. Level 3: standard PC farm running "off-line" code, ~50 Hz of recorded events.]

Snowmass 2001 - P. Le Dû
D0 dataflow
[Figure: D0 detector at 7.6 MHz crossing rate → L1 buffer (pipeline, 4.2 µs) → L1 trigger accept at 10 kHz → L2 buffer & digitization → L2 global + preprocessing farm (100 µs latency), accept/reject at < 1 kHz → L3 farm (48 input nodes, 50 ms latency) → < 50 Hz to mass storage.]
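
These latencies and rates directly set how much buffering each level needs. A quick back-of-the-envelope check (our arithmetic, not on the slide; depth = input rate x decision latency):

    # Rough buffer-depth arithmetic for the D0 numbers quoted above.
    levels = [
        # (stage, input rate [Hz], decision latency [s])
        ("L1 pipeline", 7.6e6, 4.2e-6),   # crossings held while L1 decides
        ("L2 buffer",   10e3,  100e-6),   # L1 accepts waiting for L2
        ("L3 farm",     1e3,   50e-3),    # L2 accepts waiting for L3
    ]
    for stage, rate, latency in levels:
        depth = rate * latency
        print(f"{stage}: {rate:.2e} Hz x {latency:.1e} s ~ {depth:.0f} events buffered")
    # L1 pipeline ~32 crossings, L2 buffer ~1 event, L3 farm ~50 events
    # (real systems buffer more to absorb rate fluctuations).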

Snowmass 2001 - P. Le Dû
LHC multilevel selection scheme
[Figure: production rate vs. available time at 25 ns bunch spacing, physics from QCD through W/Z, top and Higgs to Z'. Level 1: hardwired processors (ASIC, FPGA), few µs. Levels 2 and 3 (the High Level Trigger, HLT): standard processor farms & networks, ~ms to > s, down to the recorded-event rate.]

Snowmass 2001 - P. Le Dû
Evolution: LHC (ATLAS & CMS)
- Two-level trigger:
  - L1 = physics objects (e/γ, jet, µ...) using dedicated data
  - L2 + L3 = high-level "software" triggers using digitized data
- Complex algorithms like displaced vertices are moving downstream:
  - CDF/D0: L2 vertex trigger
  - LHCb/BTeV: L0/L1 b trigger
- Use commodity products as much as possible (HLT):
  - No more "physics" buses (VME, PCI...)
  - Off-the-shelf technology: processor farms, network switches (ATM, GbE)
  - Common OS and high-level languages

Snowmass 2001 - P. Le Dû
"Logical strategy" for event selection
- L1, Prompt Trigger (few µs; MHz collision rate → > kHz): "identification of objects" on coarse, dedicated data; local identification of energy clusters, track segments, missing energy.
- L2, High-Level Trigger selection (few ms): confirm "L1 objects"; particle signatures and global topology against a trigger menu; final digitized data, optimized code; refine Et and Pt cuts, detector matching, mass calculation, vertex & impact parameters.
- L3, Event Filter / on-line processing (few s; → > Hz): partial to full event, "off-line"-type code; classification into physics/calibration processes; full or partial reconstruction; calibration & monitoring; "hot stream" physics and "gold plated" events; final formatting; output to data streams S1...Sn for storage & analysis.
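
As an illustration of this staged logic, a toy sketch in Python; every threshold, object type and menu entry below is invented for the example, not taken from the talk:

    # Toy three-stage selection mirroring the logical steps above.
    # All thresholds and menu entries are illustrative placeholders.

    def level1(coarse_objects):
        """Prompt trigger: local object identification on dedicated data."""
        found = [o for o in coarse_objects if o["et"] > 10.0]  # coarse Et cut
        return found or None                                   # None = reject

    def high_level_trigger(event, l1_objects):
        """Confirm L1 objects on full-granularity data against a menu."""
        confirmed = [o for o in l1_objects
                     if event["refined_et"][o["id"]] > 15.0]   # refined Et cut
        menu = {
            "single_e": any(o["kind"] == "e" for o in confirmed),
            "dijet": sum(o["kind"] == "jet" for o in confirmed) >= 2,
        }
        fired = [name for name, ok in menu.items() if ok]
        return fired or None

    def event_filter(fired):
        """'Off-line' style step: classify the event into a data stream."""
        if "single_e" in fired:
            return "S1 (electron stream)"
        if "dijet" in fired:
            return "S2 (jet stream)"
        return "Sn (other)"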

Snowmass 2001 - P. Le Dû
On/off-line boundaries
- Detectors are becoming more stable and less faulty
- On-line processing power is increasing and uses similar hw/sw components (PC farms...)
- On-line calibration and correction of data are possible
- More complex analysis is moving on-line: event filtering, sorting of data streams...
- The boundaries become flexible

Snowmass 2001 - P. Le Dû
Trigger strategy & event analysis
[Figure: HLT dataflow. HLT algorithms select "physics tools", select objects and compare them to menus, then do partial/full event building, reducing 5-10 kHz to 1-2 kHz, with ~100 Hz reaching final storage. Simple signatures (e/γ, µ, taus, jets): refine Et and Pt cuts, detector matching. Complex signatures: missing Et, scalar Et, invariant and transverse mass separation, primary and displaced vertices, topology. Selection: thresholding, prescaling, "intelligent formatting". Event candidates are classified into a fast analysis stream and physics streams S1, S2...Sn (sampled, prescaled, compressed). On timescales from ms and s to hours and days, temporary storage feeds monitoring, calibration, alignment and physics monitoring; outputs include calibration constants, sub-detector performance, event background, information to the LHC, "gold plated" events and physics samples for the "analysis" farm; "garbage" is discarded before final storage.]
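
Of the selection tools named here, prescaling is the simplest to spell out. A minimal counter-based sketch (the trigger names and factors are made up):

    # Counter-based prescaler: keep 1 event in N for a given trigger line.
    class Prescaler:
        def __init__(self, factor):
            self.factor = factor   # keep 1 out of `factor` accepts
            self.count = 0

        def accept(self):
            self.count += 1
            if self.count >= self.factor:
                self.count = 0
                return True
            return False

    # Illustrative menu: tame a high-rate jet line, keep all electrons.
    prescales = {"jet_low_et": Prescaler(1000), "single_e": Prescaler(1)}

    def record(trigger_name):
        return prescales[trigger_name].accept()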

Snowmass 2001 - P. Le Dû
Summary of T/DAQ architecture evolution
Today:
- Tree structure and partitions
- Processing farms at the very highest levels
- Trigger and DAQ dataflow are merging
Near future:
- Centered on data and control networks
- Processing farms already at L2
- More complex algorithms are moving on-line
- Boundaries between on-line and off-line are flexible
- Commodity components move towards L1
[Figure: L1 / L2 / L3-HLT hardware on-line; Pass 1 / Pass 2 / analysis farm off-line.]

Snowmass 2001 - P. Le Dû
What next?
- Next generation of machines:
  - LC (TESLA, NLC, JLC): concept of "software trigger"
  - VLHC: like LHC
  - CLIC: < ns collision time!
  - Muon collider: not investigated yet!
- Next generation of detectors:
  - Pixel trackers: e.g. 800 M channels (TESLA)
  - Si-W calorimeters: 32 M channels (TESLA)
- Challenges:
  - Very high luminosity > 10^34
  - High or continuous collision rate (< ns)
  - Multimillion Si read-out channels

Snowmass 2001 - P. Le Dû
LC beam structure
TESLA:
- Relatively long time between bunch trains: 199 ms (5 Hz)
- Rather long time between bunches: 337 ns
- Rather long bunch trains (same order as detector read-out time): 1 ms (2820 bunches)
JLC (NLC):
- Relatively long time between bunch trains (same order as read-out time): 6.6 ms (150 Hz)
- Very short time between bunches: 2.8 ns
- Rather short pulses: 238 ns (85 bunches)
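
These numbers are what make a trigger-less, software-selected read-out affordable: the beam duty cycle is tiny. A quick check we added, using the slide's parameters:

    # Beam duty cycle from the bunch-train parameters above.
    machines = {
        #            train length [s], gap between trains [s]
        "TESLA":     (1e-3,   199e-3),
        "JLC (NLC)": (238e-9, 6.6e-3),
    }
    for name, (train, gap) in machines.items():
        duty = train / (train + gap)
        print(f"{name}: duty cycle {duty:.4%}, "
              f"{gap:.3g} s between trains for read-out and selection")
    # TESLA: ~0.5% duty cycle; JLC (NLC): ~0.0036%.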

Snowmass 2001 - P. Le Dû
LC basic trigger concept: NO hardware trigger
- Read out and store the front-end digitized data of a complete bunch train into buffers: deadtime-free, no data loss
- DAQ triggered by every train crossing: build the event and perform zero suppression and/or data compression; full event data of the complete bunch train available
- Software selection between trains ("software trigger"), using "off-line" algorithms
- Classify events according to physics, calibration and machine needs
- Store events: partial or everything!
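
A minimal sketch of this train-by-train flow; the structure follows the slide, but the function bodies and thresholds are placeholders we assume for illustration:

    # Train-by-train 'software trigger' loop, as described above.

    def zero_suppress(train_data, threshold=0.0):
        """Keep only channels above threshold (simple zero suppression)."""
        return {ch: v for ch, v in train_data.items() if v > threshold}

    def classify(hits):
        """'Off-line' style selection into physics/calibration/machine."""
        total = sum(hits.values())
        if total > 100.0:      # placeholder physics selection
            return "physics"
        if total > 10.0:       # placeholder calibration window
            return "calibration"
        return "machine"       # background events, useful for beam tuning

    def process_train(train_data):
        # 1. The complete train arrives dead-time free from front-end buffers.
        hits = zero_suppress(train_data)          # and/or data compression
        # 2. Software selection runs between trains (199 ms at TESLA).
        stream = classify(hits)
        # 3. Store partial data or everything, according to stream policy.
        return stream, hits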

Snowmass 2001 - P. Le Dû
Advantages
- Flexible:
  - Fully programmable
  - Unforeseen backgrounds and physics rates easily accommodated
  - Machine people can adjust the beam using background events
- Easy maintenance and cost effective:
  - Commodity products: off-the-shelf technology (memories, switches, processors)
  - Common OS and high-level languages
  - On-line computing resources usable for "off-line" work
- Scalable: modular system

Snowmass 2001 - P. Le Dû
Consequences for the detector concept
- Constraints on detector read-out technology:
  - TESLA: read 1 ms continuously; VTX digitizing during the pulse to keep VTX occupancy small; TPC with no active gating
  - JLC/NLC: 7 ms pulse separation, detector read out in 5 ms, veto trains; 3 ns bunch separation, off-line bunch tagging
- Efficient/cheap read-out of millions of front-end channels should be developed: silicon detectors (VTX and Si-W calorimeters)

Snowmass 2001 - P. Le Dû
Conclusions about LC triggers
- The software trigger concept remains the 'baseline': T/DAQ for the LC is NOT an issue!
- Looks like the 'ultimate trigger': satisfies everybody, no loss and fully programmable
- Feasible (almost) today, and affordable: less demanding than LHC
- Consequence for the detector design: constraints on the detector read-out electronics (trackers)
- Consequence for the software environment: on-line and off-line are merging; we need a complete, integrated computing model with common resources across calibration, selection (algorithms and filter) and analysis/processing paths

Snowmass 2001 - P. Le Dû
Technology forecast: fast logic & hardware triggering (L1)
- Move to digital & programmable
- ASICs are no longer developed
- Use of FPGAs is growing, and they can embed complex algorithms

Snowmass 2001 - P. Le Dû
Technology forecast: software trigger
- Processors and memories: continuous increase of computing power
  - Moore's law still true until 2010! (x 64; then doubling every 3 years: x 256 by 2016)
  - Memory size quasi unlimited! Today: 64 MB; 2004: 256 MB; 2010: > 1 GB
- Networks: commercial telecom/computer standards
  - Multi (10-100) Gb Ethernet
  - But: software overhead will limit the performance...
- Systematic use of off-the-shelf commodity products
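
The x 64 and x 256 figures are consistent with simple doubling arithmetic, assuming (our reading) an 18-month doubling time up to 2010 and a 2001 baseline:

    # Check of the slide's computing-power projection.
    growth_2010 = 2 ** ((2010 - 2001) / 1.5)              # 6 doublings -> 64
    growth_2016 = growth_2010 * 2 ** ((2016 - 2010) / 3)  # 2 more -> 256
    print(growth_2010, growth_2016)                       # 64.0 256.0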

Snowmass 2001 - P. Le Dû
About standards
- Evolution of standards: no more HEP-specific ones!
  - HEP: NIM (60s), CAMAC (70s), FASTBUS (80s)
  - Commercial OTS: VME (90s), PCI (2000) → CPCI?
- Looking ahead, today's commercial technologies:
  - No wide parallel data buses in crates
  - Backplanes used for power distribution, serial I/O, special functions
  - High-speed Gb/s fiber & copper serial data links
  - Wireless data links emerging
  - Higher densities for micros and memories; standard commercial parts with hundreds of pins per package

Snowmass 2001 - P. Le Dû
Transfer to other fields
- Last year's IEEE NSS-MIC conference showed great and common interest
- Medical imaging (PET) has requirements similar to ours for diagnostics:
  - Large data movement and on-line treatment
  - Fast selection and reconstruction

Snowmass 2001 - P. Le Dû
Final conclusions
- Trigger/DAQ should not be an issue for the next generation of machines like LCs
- Fully commercial OTS commodity components
- Programmable & software triggers
- On-line and off-line boundaries become very flexible: need a new "computing model"
- Challenges for 2020:
  - Very high luminosity > 10^34
  - High or continuous collision rate (< ns)