Computing and communication at LHC
- Event selection and readout
- Online networks and architectures
- Online event filter
- Technologies and trends
European Laboratory for Particle Physics

The Large Hadron Collider (LHC)
Two superconducting magnet rings in the LEP tunnel (the CERN accelerator complex: PS, SPS, LEP/LHC; the LEP experiments were Aleph, Delphi, L3 and Opal).

Experiments at LHC:
- ATLAS: A Toroidal LHC ApparatuS (study of proton-proton collisions)
- CMS: Compact Muon Solenoid (study of proton-proton collisions)
- ALICE: A Large Ion Collider Experiment (study of ion-ion collisions)
- LHCb: study of CP violation in B-meson decays at the LHC collider
The LHC challenges: Summary
Measurement and event selection
Event selection: The trigger system
Trigger levels at LHC (the first second)
Particle identification
CMS Level-1: calorimeters and muons
Level-1 trigger systems
Level-1 trigger summary
- Time needed for decision: tdec ≈ 2-3 µs; bunch-crossing time: tevt ≈ 25 ns
- Need pipelines to hold data
- Need fast response (e.g. dedicated detectors)
- Backgrounds are huge: rejection factor ≈ 10,000
- Algorithms run on local, coarse data: only calorimeter & muon information
- Special-purpose hardware (ASICs, etc.)
- Rates are a steep function of the thresholds, which ultimately determine the physics
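The two times quoted above fix the depth of the front-end pipelines: every channel must buffer all bunch crossings that occur while the trigger decision is pending. A minimal sketch of that arithmetic, taking the upper end (3 µs) of the quoted latency:

```python
# Sketch of the Level-1 pipeline-depth arithmetic from the figures above.
# Each front-end channel buffers every bunch crossing that arrives while
# the trigger decision (t_dec) is still being computed.
BUNCH_CROSSING_NS = 25.0   # t_evt: time between collisions
LATENCY_NS = 3000.0        # t_dec: ~3 us, upper end of the quoted 2-3 us

depth = int(LATENCY_NS / BUNCH_CROSSING_NS)
print(depth)  # 120 pipeline cells per channel
```

This is why the slide insists on pipelines: with no dead time allowed at 40 MHz, roughly a hundred crossings are in flight per channel at any moment.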
Bunch crossing times: LEP, Tevatron & LHC
Trigger and readout structure at LHC: ATLAS and CMS
CMS front-end readouts
LHC experiments trigger and DAQ summary
Evolution of DAQ technologies and architectures
LHC trigger and data acquisition systems
LHC DAQ: a computing & communication network (Alice, ATLAS, LHCb). A single network cannot satisfy all the LHC requirements at once; present LHC DAQ designs are therefore implemented as multiple (specialized) networks.
CMS trigger and data acquisition
Structure option: Two physical stages (CMS)
Structure option: Three physical stages (ATLAS)
Data communication and processing at LHC
- 40 MHz collision rate; detectors deliver charge, time and pattern (energy, tracks)
- Level-1 trigger: 100 kHz, million channels, 1 Terabit/s (50,000 data channels)
- 500 readout memories, 3 Gigacell buffers, 200 Gigabyte buffers
- 1 Megabyte event data; Gigabit/s service LAN; Petabyte archive; computing services
- 5 TeraIPS event filter

EVENT BUILDER. A large switching network ( ports) with a total throughput of approximately 500 Gbit/s forms the interconnection between the sources (Readout Dual-Port Memories) and the destinations (Switch-to-Farm Interfaces). The Event Manager collects the status and requests of the event filters and distributes event-building commands (read/clear) to the RDPMs.

EVENT FILTER. It consists of a set of high-performance commercial processors organized into many farms convenient for on-line and off-line applications. The farm architecture is such that a single CPU processes one event.
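The Level-1 accept rate and event size quoted above fix the order of magnitude of the event-builder traffic; a back-of-the-envelope sketch (the slide's ~500 Gbit/s figure is of this order):

```python
# Sketch: order-of-magnitude event-builder throughput from the slide's figures.
L1_ACCEPT_RATE_HZ = 100_000    # 100 kHz Level-1 accept rate
EVENT_SIZE_BYTES = 1_000_000   # ~1 Megabyte per event

gbit_per_s = L1_ACCEPT_RATE_HZ * EVENT_SIZE_BYTES * 8 / 1e9
print(gbit_per_s)  # 800.0 -> hundreds of Gbit/s across the switching network
```

Hundreds of Gbit/s aggregate is exactly the regime where a single link or bus fails and a switching network with many parallel ports becomes mandatory.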
Data communication at LHC
Event building
Event builder switch technologies
Reconstruction at LHC: CMS central tracking
Parallel processing by farms
CMS trigger levels
Online event selection (rejection): ~99.999% of events rejected, 0.001% accepted
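The quoted acceptance can be turned into rates, assuming the nominal 40 MHz collision rate from the earlier slides:

```python
# Sketch: rates implied by a 0.001% online acceptance (assumed 40 MHz input).
COLLISION_RATE_HZ = 40e6     # LHC bunch-crossing rate (earlier slides)
ACCEPTED_FRACTION = 1e-5     # 0.001% accepted, as quoted on this slide

stored_rate_hz = COLLISION_RATE_HZ * ACCEPTED_FRACTION
rejection_factor = COLLISION_RATE_HZ / stored_rate_hz
print(round(stored_rate_hz), round(rejection_factor))  # 400 events/s, 100000x
```

The overall factor of ~10^5 is shared between Level-1 hardware and the software filter farms described above.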
CMS final system: Summary
Trigger and data acquisition trends
Technology ansatz (Moore's law)
Technology trends (Moore's law)
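The projections on these slides rest on a Moore's-law extrapolation; a minimal sketch, assuming the commonly used 18-month doubling period:

```python
# Sketch: Moore's-law extrapolation behind such technology projections.
# Assumption: capability doubles every 18 months.
def project(value_now: float, years: float, doubling_months: float = 18.0) -> float:
    """Projected capability after `years` at the assumed doubling period."""
    return value_now * 2 ** (12.0 * years / doubling_months)

# Six years (e.g. a design frozen in 1999 for 2005 running) = four doublings:
print(project(1.0, 6.0))  # 16.0 -> a 16x gain from technology alone
```

This is why a DAQ designed years before start-up can reasonably target throughputs far beyond what was commercially available at design time.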
Internet growth
- Millions of new users expected online by 2001
- Internet traffic doubled every 100 days
- 5,000 domain names added every day
- 1999: last year of voice dominance; start of the data era
- Need more bandwidth -> Terabit switches at LHC
(Chart: voice vs. data traffic load over time, with milestones marked for the CMS experiment technical proposal, the CMS Data Acquisition technical proposal and CMS Data Acquisition commissioning.)
Performance required by an LHC experiment (in 2005) versus high-technology expectations. Accelerated Strategic Computing Initiative (ASCI), 1997
ASCI PathForward Program Overview
NEC 32 Tera, Compaq HPCC
Processor farms: the 90's supercomputer
After commodity farms, what next? Fusion of data communication, data processing and data archiving into global resources: a Grid approach?
CMS data flow and on(off)line computing
- Raw data: 1000 Gbit/s (5 TeraIPS)
- Events: 10 Gbit/s (10 TeraIPS)
- Controls: 1 Gbit/s
- To regional centers: 622 Mbit/s
- Remote control rooms
Event selection and computing stages
Computing and communication perspectives at LHC
Short history of computing and new frontiers
- The origin: counting, the abacus
- 1600: numeric techniques, logarithms, calculator engines
- 1800: punched cards, automata; Babbage's difference engine driven by a program
- 1900: punched-card electromechanical Hollerith tabulator; the vacuum tube
- 1940: first electronic digital stored-program computers
- First generation: commercial computers (UNIVAC); FORTRAN; the first transistor
- Second generation: general-purpose computers (IBM, DEC); COBOL, BASIC; integrated circuits
- Third generation: Arpanet; first microprocessor chip; PASCAL, C and UNIX
- Fourth generation:
  - 1975: minicomputer, microcomputer; window and mouse; Cray supercomputer
  - 1980: personal computer (Apple, Microsoft); vector processors
  - 1984: parallel computing, farms, OO/C; massively parallel computing and massively parallel storage; LANs and WANs; Internet; the Web
  - 1995: commodities and network/bandwidth explosion; network computing; High Performance Computing, ASCI initiative
- 2000 (present): fusion of computing, communication and archiving; the Grid…