

1 The ATLAS Data Acquisition & Trigger: concept, design & status
Kostas Kordas, INFN Frascati
10th Topical Seminar on Innovative Particle & Radiation Detectors (IPRD06), Siena, 1-5 Oct. 2006

2 ATLAS Trigger & DAQ: concept
p-p collisions at 40 MHz; full info per event: ~1.6 MB every 25 ns, i.e. ~60 TB/s off the detector.
LVL1 (hardware based, no dead time): 40 MHz → 100 kHz (160 GB/s).
LVL2 and Event Filter (algorithms on PC farms, seeded by the previous level: decide fast, work with minimum data volume): 100 kHz → ~3.5 kHz (~3+6 GB/s) → ~200 Hz (~300 MB/s to storage).
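The staged rate reduction above is simple arithmetic. As a back-of-the-envelope check, the following sketch (with the rates and the ~1.6 MB event size taken from the slide) prints the bandwidth each stage would carry if full events flowed at its output rate:

```python
# Bandwidth each trigger stage would carry if full events flowed at
# its output rate (numbers from the slide; illustrative check only).
EVENT_SIZE_MB = 1.6  # full ATLAS event size

stages = [
    ("collisions", 40e6),   # 40 MHz bunch crossings
    ("LVL1",       100e3),  # hardware trigger output
    ("LVL2",       3.5e3),  # RoI-based software trigger output
    ("EF",         200.0),  # Event Filter output, to storage
]

def bandwidth_mb_s(rate_hz, event_size_mb=EVENT_SIZE_MB):
    """Data volume per second for full events at this rate."""
    return rate_hz * event_size_mb

for name, rate in stages:
    print(f"{name:10s} {rate:12.0f} Hz  {bandwidth_mb_s(rate):14.1f} MB/s")

# 40 MHz x 1.6 MB = 64e6 MB/s, i.e. ~60 TB/s off the detector, while
# the EF output (200 Hz x 1.6 MB = 320 MB/s) matches the quoted
# ~300 MB/s to mass storage.
```

The overall rejection from collisions to storage is a factor of 40 MHz / 200 Hz = 200,000.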

3 From the detector into the Level-1 Trigger
The calorimeters, muon trigger chambers and other detectors feed the Level-1 trigger; front-end pipelines hold the data at 40 MHz during the 2.5 μs LVL1 latency.

4 Upon LVL1 accept: buffer data & get RoIs
On a LVL1 accept (100 kHz), event fragments flow from the Read-Out Drivers (RODs) over the Read-Out Links (S-LINK) into the Read-Out Buffers (ROBs) of the Read-Out Systems (ROSs), at 160 GB/s total.

5 Region of Interest Builder
As above, plus the Region of Interest Builder (ROIB): on average, LVL1 finds ~2 Regions of Interest (in η-φ) per event and passes them to the ROIB.

6 LVL2: work with "interesting" ROSs/ROBs
The LVL2 Supervisor (L2SV) receives the RoIs from the ROIB and assigns each event to a LVL2 Processing Unit (L2P); the L2P requests the RoI data (~2% of the full event) from the ROBs over the LVL2 network (~3 GB/s).
A much smaller read-out network, at the cost of higher control traffic.
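A quick check (an illustrative sketch; the ~2% RoI fraction is the figure quoted on the slide) of why pulling only RoI data keeps the LVL2 network small:

```python
# If LVL2 pulled full events at the LVL1 accept rate it would need the
# full read-out bandwidth; pulling only RoI data (~2%) shrinks this.
l1_accept_rate_hz = 100e3   # LVL1 accept rate
event_size_mb = 1.6         # full event size
roi_fraction = 0.02         # LVL2 requests ~2% of each event

full_readout_gb_s = l1_accept_rate_hz * event_size_mb / 1e3
roi_readout_gb_s = full_readout_gb_s * roi_fraction

print(f"full events  : {full_readout_gb_s:.1f} GB/s")  # 160.0 GB/s
print(f"RoI data only: {roi_readout_gb_s:.1f} GB/s")   # 3.2 GB/s, the slide's ~3 GB/s
```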

7 After LVL2: Event Builder makes full events
On a LVL2 accept (~3.5 kHz), the Dataflow Manager (DFM) assigns the event to a Sub-Farm Input (SFI), which collects all fragments from the ROSs over the Event Builder network and assembles the full event (~3+6 GB/s combined LVL2 and EB traffic).

8 Event Filter: deals with full events
Built events are handed to the Event Filter (EF), a farm of PCs connected by the Event Filter network; Event Filter Processors (EFPs) spend ~1 s per event and reduce the rate from ~3.5 kHz to ~200 Hz.

9 From Event Filter to local (TDAQ) storage
Events accepted by the EF (~0.2 kHz) go to the Sub-Farm Outputs (SFOs) and are written to local storage at ~300 MB/s.

10 TDAQ, High Level Trigger & DataFlow
The complete picture: LVL2 and the Event Filter together form the High Level Trigger, while the ROSs, Event Builder and SFOs form the Dataflow (40 MHz → 100 kHz → ~3.5 kHz → ~200 Hz; 160 GB/s → ~3+6 GB/s → ~300 MB/s).

11 High Level Trigger (HLT)
Algorithms developed offline (with the HLT in mind).
HLT infrastructure (TDAQ job):
– "steers" the order of algorithm execution
– alternates steps of "feature extraction" & "hypothesis testing" → fast rejection (min. CPU)
– reconstruction in Regions of Interest → min. processing time & network resources
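The steering pattern in the bullets above can be sketched as follows (hypothetical Python, not the ATLAS steering code; the step names and thresholds are invented for illustration):

```python
# Toy sketch of HLT "steering": alternate feature-extraction and
# hypothesis-testing steps, rejecting the event as soon as one
# hypothesis fails, so minimal CPU is spent on rejected events.

def run_steering(event, steps):
    """steps: list of (extract, test) callables. Returns True if accepted."""
    features = {}
    for extract, test in steps:
        features.update(extract(event))   # feature extraction in the RoI
        if not test(features):            # hypothesis testing
            return False                  # early rejection: stop immediately
    return True

# Invented example: a calorimeter-cluster step, then a track-match step.
steps = [
    (lambda ev: {"cluster_et": ev["et"]},   lambda f: f["cluster_et"] > 20.0),
    (lambda ev: {"has_track": ev["track"]}, lambda f: f["has_track"]),
]

print(run_steering({"et": 35.0, "track": True}, steps))   # accepted
print(run_steering({"et": 5.0,  "track": True}, steps))   # rejected at step 1
```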

12 High Level Trigger & DataFlow: PCs (Linux)
~150 ROS nodes, ~500 LVL2 nodes, ~100 Event Builder nodes and ~1600 Event Filter nodes, plus infrastructure for control, communication and databases.

13 TDAQ at the ATLAS site
UX15 (cavern): the ATLAS detector, the first-level trigger and the Read-Out Drivers (RODs), connected by dedicated links and Timing Trigger Control (TTC).
USA15 (underground counting room): ~150 Read-Out Subsystem (ROS) PCs fed by 1600 Read-Out Links, plus the RoI Builder (Regions of Interest arrive over VME). Data of events accepted by the first-level trigger are pushed at ≤ 100 kHz, as 1600 fragments of ~1 kB each.
SDX1 (surface): Gigabit Ethernet switches connect the LVL2 supervisors, the LVL2 farm (~500 dual-CPU nodes), a pROS (stores the LVL2 output), the DataFlow Manager, the Event Builder Sub-Farm Inputs (SFIs, ~100), the Event Filter (EF, ~1600 dual-CPU nodes) and the Sub-Farm Outputs (SFOs, ~30) to local storage. Event data are pulled (event data requests, requested event data, delete commands): partial events at ≤ 100 kHz, full events at ~3 kHz; events leave at ~200 Hz to data storage and on to the CERN computer centre.

14 TDAQ testbeds
"pre-series" DataFlow: ~10% of the final TDAQ → used for realistic measurements, assessment and validation of the TDAQ dataflow & HLT.
Large-scale system tests (on PC clusters with ~700 nodes) demonstrated the required system performance & scalability for the online infrastructure.

15 August 2006: first combined cosmic ray run
Muon section at the feet of ATLAS + Tile (HAD) calorimeter, triggered by the muon trigger chambers: a muon + HAD calorimeter cosmics run with LVL1.
LVL1: the calorimeter, muon and central trigger logics are in the production and installation phases, for both hardware & software.

16 ReadOut Systems: all 153 PCs in place
A ROS unit is a PC housing 12 Read-Out Buffers on 4 custom PCI-X cards (ROBINs), with input from the detector Read-Out Drivers.
All 153 ROSs are installed and standalone commissioned.

17 ReadOut Systems: detector commissioning
44 ROSs are connected to detectors and fully commissioned:
– the full LAr barrel (EM),
– half of Tile (HAD), and the Central Trigger Processor,
– taking data with the final DAQ (Event Building at the ROS level).
Commissioning of the other detector read-outs: expect to complete most of it by the end of 2006.

18 EM + HAD calorimeter cosmics run using the installed ROSs

19 Event Building needs: bandwidth decides
Read-Out Subsystems (ROSs) feed the Event Builder (SFIs) through network switches over Gbit links, steered by the DFM.
Throughput requirement: a LVL2 accept rate of 3.5 kHz into the EB at an event size of 1.6 MB → 5.6 GB/s total input.

20 We need ~100 SFIs for full ATLAS
Network limited (the CPUs are fast enough): event building uses 60-70% of the Gbit network → ~70 MB/s into each Event Building node (SFI). With 5600 MB/s total input, that means ~100 SFIs for full ATLAS.
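The SFI count follows from the numbers above (an illustrative sketch; the ~70 MB/s per node is the slide's figure for 60-70% utilisation of a Gbit link):

```python
import math

# Event-building input: LVL2 accept rate times event size.
lvl2_accept_hz = 3.5e3
event_size_mb = 1.6
total_input_mb_s = lvl2_accept_hz * event_size_mb   # 5600 MB/s

per_sfi_mb_s = 70.0   # usable bandwidth into one SFI, per the slide

n_sfi_min = math.ceil(total_input_mb_s / per_sfi_mb_s)
print(n_sfi_min)  # 80 SFIs at the bare minimum; ~100 with headroom
```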

21 For the HLT, CPU power is important
At the TDR we assumed:
– a 100 kHz LVL1 accept rate,
– 500 dual-CPU PCs for LVL2,
– each CPU therefore has to handle 100 Hz,
– i.e. a 10 ms average latency per event in each CPU.
This assumed 8 GHz per CPU at LVL2.
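The TDR numbers above are self-consistent, as a quick check shows (a sketch using only the slide's assumptions):

```python
# TDR assumptions: 100 kHz into LVL2, shared over 500 dual-CPU PCs.
l1_accept_hz = 100e3
n_pcs = 500
cpus_per_pc = 2

rate_per_cpu_hz = l1_accept_hz / (n_pcs * cpus_per_pc)  # events/s per CPU
latency_budget_ms = 1e3 / rate_per_cpu_hz               # time budget per event

print(rate_per_cpu_hz, latency_budget_ms)  # 100.0 Hz per CPU, 10.0 ms per event
```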

22 For the HLT, CPU power is important (cont.)
Test: preloaded the ROSs with muon events and ran muFast at LVL2, on a dual-core dual-CPU AMD @ 1.8 GHz with 4 GB total memory.
8 GHz per CPU will not come (soon), but dual-core dual-CPU PCs show scaling → we should reach the necessary performance per PC (the longer we wait, the better the machines we'll get).

23 DAQ / HLT commissioning
Online infrastructure: a useful fraction has been operational since last year, growing according to need. The final network is almost done.

24 DAQ / HLT commissioning
~300 machines on the final network; the first DAQ/HLT-I slice of the final system within weeks:
– 153 ROSs (done),
– 47 Event Building + HLT-infrastructure PCs,
– 20 Local File Servers, 24 local switches,
– 20 operations PCs.
Might add the pre-series L2 (30 PCs) and EF (12 PCs) racks.

25 DAQ / HLT commissioning (cont.)
– First 4 full racks of HLT machines (~100) early in 2007.
– Another 500 to 600 machines can be procured within 2007.
– The rest, not before 2008.

26 DAQ / HLT commissioning (cont.)
TDAQ will provide significant trigger rates (LVL1, LVL2, EF) in 2007:
– LVL1 rate 40 kHz,
– EB rate 1.9 kHz,
– physics storage rate up to 85 Hz,
– final bandwidth for storage and calibration.

27 Summary
ATLAS TDAQ design:
– 3-level trigger hierarchy,
– LVL2 works with Regions of Interest: small data movement,
– feature extraction + hypothesis testing: fast rejection → min. CPU power.
The architecture has been validated via the deployment of testbeds.
We are in the installation phase of the system: cosmic runs with the central calorimeters + muon system.
An initial but fully functional TDAQ system will be installed, commissioned and integrated with the detectors by the end of 2006.
TDAQ will provide significant trigger rates (LVL1, LVL2, EF) in 2007.

28 Thank you

29 ATLAS Trigger & DAQ: RoI concept
In this example LVL1 finds 4 Regions of Interest (2 muons, 2 electrons), whose η-φ addresses are passed on to LVL2.

30 ATLAS event size and Read-Out Links
ATLAS total event size ≈ 1.5 MB; total no. of ROLs = 1600.
Per detector (channels / no. of ROLs / fragment size in kB):
Muon system:
– MDT: 3.7x10^5 / 192 / 0.8
– CSC: 6.7x10^4 / 32 / 0.2
– RPC: 3.5x10^5 / 32 / 0.38
– TGC: 4.4x10^5 / 16 / 0.38
Calorimetry:
– LAr: 1.8x10^5 / 764 / 0.75
– Tile: 10^4 / 64 / 0.75
Trigger:
– LVL1: 56 ROLs / 1.2
Inner detector:
– Pixels: 0.8x10^8 / 120 / 0.5
– SCT: 6.2x10^6 / 92 / 1.1
– TRT: 3.7x10^5 / 232 / 1.2
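The table's totals can be cross-checked by summing its rows (a sketch with the numbers transcribed from the table above):

```python
# (detector, number of ROLs, fragment size in kB), from the table.
rols = [
    ("MDT", 192, 0.8), ("CSC", 32, 0.2), ("RPC", 32, 0.38), ("TGC", 16, 0.38),
    ("LAr", 764, 0.75), ("Tile", 64, 0.75), ("LVL1", 56, 1.2),
    ("Pixels", 120, 0.5), ("SCT", 92, 1.1), ("TRT", 232, 1.2),
]

total_rols = sum(n for _, n, _ in rols)
total_kb = sum(n * size for _, n, size in rols)

print(total_rols)       # 1600, matching the slide's ROL count
# Summing fragment sizes gives ~1.3 MB, in the ballpark of the quoted
# ~1.5 MB total (the per-link fragment sizes are averages).
print(round(total_kb))  # 1306
```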

31 Scalability of the LVL2 system
The L2SV gets the RoI info from the RoIB, assigns an L2PU to work on the event, and load-balances its L2PU sub-farm.
Can the scheme cope with the LVL1 rate? Test: preload RoI info into the RoIB, which then triggers the TDAQ chain, emulating LVL1.
The LVL2 system is able to sustain the LVL1 input rate:
– a 1-L2SV system for a LVL1 rate of ~35 kHz,
– a 2-L2SV system for a LVL1 rate of ~70 kHz (50%-50% sharing).
The rate per L2SV is stable within 1.5%; ATLAS will have a handful of L2SVs → can easily manage the 100 kHz LVL1 rate.
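The 50%-50% sharing between two supervisors amounts to dealing LVL1 accepts round-robin over the L2SVs; a toy sketch (illustrative only, not the ATLAS implementation):

```python
from itertools import cycle
from collections import Counter

def dispatch(n_events, supervisors):
    """Deal events round-robin over supervisors; return the load on each."""
    load = Counter()
    for _event_id, sv in zip(range(n_events), cycle(supervisors)):
        load[sv] += 1   # each supervisor then assigns a free L2PU
    return load

load = dispatch(100_000, ["L2SV-1", "L2SV-2"])
print(load)  # 50,000 events each: the 50%-50% sharing seen in the test
```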

32 Tests of LVL2 algorithms & RoI collection
Di-jet, μ and e simulated events preloaded on the ROSs; RoI info on the L2SV.
Setup: 1 L2SV, 1 L2PU, 1 pROS, 1 DFM and 8 emulated ROSs, plus 1 online server and 1 MySQL database server.
Results (data file / LVL2 latency / process time / RoI coll. time / RoI coll. size / requests per event):
– μ: 3.4 ms / 2.8 ms / 0.6 ms / 287 bytes / 1.3
– di-jet: 3.6 ms / 3.3 ms / 0.3 ms / 2785 bytes / 1.2
– e: 17.2 ms / 15.5 ms / 1.7 ms / 15820 bytes / 7.4
Observations:
1) The majority of events are rejected fast.
2) Processing takes ~all of the latency: the RoI data-collection time is small.
3) The RoI data request per event is small (the electron sample is pre-selected).
Note: neither the trigger menu nor the data files are a representative ATLAS mix (that is the aim of a late-2006 milestone).

33 ATLAS Trigger & DAQ: need
p-p collisions at 40 MHz; full info per event: ~1.6 MB every 25 ns, i.e. ~60 TB/s.
We need high luminosity to observe the (rare) very interesting events, and on-line selection to write to disk mostly the interesting events (~200 Hz, ~300 MB/s).

34 ATLAS Trigger & DAQ: LVL1 concept
LVL1: hardware based, no dead time; uses calo & muon info at coarse granularity; identifies Regions of Interest for the next trigger level. 40 MHz → 100 kHz (160 GB/s).

35 ATLAS Trigger & DAQ: LVL2 concept
LVL2: software (specialized algorithms); uses the LVL1 Regions of Interest; all sub-detectors at full granularity; emphasis on early rejection. 100 kHz → ~3.5 kHz (~3+6 GB/s).

36 ATLAS Trigger & DAQ: Event Filter concept
Event Filter: offline algorithms, seeded by the LVL2 result; works with the full event and the full calibration/alignment info. ~3.5 kHz → ~200 Hz (~300 MB/s).

37 ATLAS Trigger & DAQ: concept summary
Rates and latencies through the chain:
– LVL1 (hardware based: FPGA, ASIC; calo/muon at coarse granularity): 40 MHz → ~100 kHz, 2.5 μs latency, with front-end pipeline memories.
– LVL2 (software, specialised algorithms; uses the LVL1 Regions of Interest; all sub-detectors, full granularity; emphasis on early rejection): → ~3 kHz, ~10 ms, with RoI requests to the Read-Out Subsystems hosting the Read-Out Buffers.
– Event Filter farm (offline algorithms, seeded by the LVL2 result; full event; full calibration/alignment info): → ~200 Hz, ~1 s, fed by the Event Builder cluster.
LVL2 + Event Filter = High Level Trigger. Local storage: ~300 MB/s.

38 ATLAS Trigger & DAQ: design
The full dataflow diagram, annotated with the rates: 40 MHz → 100 kHz (160 GB/s) → ~3.5 kHz (~3+6 GB/s) → ~200 Hz (~300 MB/s); full info per event: ~1.6 MB every 25 ns.

39 High Level Trigger & DataFlow: recap
40 MHz → ~100 kHz (LVL1, 2.5 μs latency) → ~3.5 kHz (LVL2, ~10 ms) → ~200 Hz (EF, ~1 s); bandwidths: 160 GB/s → ~3+6 GB/s → ~300 MB/s.

40 TDAQ at the ATLAS site (backup: the same layout as slide 13, from the RODs through the ROSs, LVL2, Event Builder and Event Filter to the SFOs).

41 The pre-series testbed
Figure: Read-Out Subsystems (ROSs) with Timing Trigger Control (TTC) input over S-link, the RoI Builder, L2SVs, L2PUs, the DFM, the Event Builder (SFIs), EFDs, a pROS and SFOs, connected through network switches over Gbit Ethernet.

