BTeV Trigger Erik Gottschalk, Fermilab (for the BTeV Trigger Group)


1 BTeV Trigger Erik Gottschalk, Fermilab (for the BTeV Trigger Group)

2 Overview
Introduction
Brief overview of the BTeV detector
Level-1 trigger algorithm
Trigger architecture
Level-1 trigger hardware
–Original baseline design (digital signal processors – DSPs)
–New baseline design (commodity hardware)
–Proposed change to the new baseline design (upstream event builder & blade servers)
Not covered: Level-1 muon trigger, Level-2/3 trigger (high-level trigger – “HLT”), Real Time Embedded Systems (RTES) Project

3 Introduction
The challenge for the BTeV trigger and data acquisition system was to reconstruct particle tracks and interaction vertices for EVERY proton-antiproton interaction in the BTeV detector, and to select interactions with B decays.
The trigger system was designed with 3 levels, referred to as Levels 1, 2, and 3:
“L1” – look at every interaction and reject at least 98% of minimum-bias background
“L2” – use L1 computed results & perform more refined analyses for data selection
“L3” – reject additional background and perform data-quality monitoring
Overall goal: reject > 99.9% of background and keep > 50% of B events.
The data acquisition system was designed to save all of the data in memory for as long as was necessary to analyze each interaction, and to move data to the L2/3 processors and to archival data storage.
The key ingredients that made it possible to meet this challenge:
–BTeV pixel detector with its exceptional pattern-recognition capabilities
–Rapid development in technology – FPGAs, processors, networking

4 BTeV – a hadron-collider B-physics experiment
[Figure: Tevatron ring at Fermi National Accelerator Laboratory, with proton and antiproton beams; BTeV located at the C0 interaction region, CDF and D0 at the other interaction regions.]

5 BTeV Detector in the C0 Collision Hall

6 BTeV Detector
[Figure: detector layout with labeled components – muon detector, EM calorimeter, straw tubes & Si strips, dipole magnet, RICH, and the 30-station pixel detector.]

7 Silicon Pixel Detector
Pixel sensors: 50 μm × 400 μm pixels.
Sensor module: 5 FPIX readout chips (ROCs), each reading out 128 rows × 22 columns; 14,080 pixels per module (128 rows × 110 columns).
380,160 pixels per half-station; total of ~23 million pixels in the full pixel detector.
[Figure: multichip module and pixel detector half-station, showing the sensor module, HDI flex circuit, wire bonds, readout module, bump bonds, and TPG substrate; scale annotations of 1 cm, 5 cm, 6 cm, and 10 cm.]
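These pixel counts are mutually consistent. A short check, assuming two half-stations per station and the 30 stations quoted elsewhere in the talk (which together imply 27 sensor modules per half-station):

\[
5 \times (128 \times 22) = 14{,}080 \ \text{pixels per module}, \qquad
27 \times 14{,}080 = 380{,}160 \ \text{pixels per half-station},
\]
\[
30 \times 2 \times 380{,}160 \approx 22.8 \times 10^{6} \approx 23 \ \text{million pixels in total.}
\]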

8 Pixel Data Readout & 1st Part of L1 Trigger
[Figure: data flow from the collision hall to the counting room. Pixel stations feed pixel data combiner boards over 1,542 channels @ 140 Mbps from 3 pixel stations (257 per half-plane); optical links (12 channels @ 2.5 Gbps) carry the data to pixel processors, which feed an FPGA segment tracker (ST) connected to the neighboring FPGA segment trackers.]
Pixel processor functions: time-stamp ordering, pixel clustering, conversion to xy coordinates.
FPIX2 read-out chip: 128 × 22 pixel array, end-of-column logic, registers & DACs, command interface, data output interface, LVDS drivers & IO pads, internal bond pads for chip ID, debugging outputs.
Pixel data word: Row (7 bits), Column (5 bits), BCO (8 bits), ADC (3 bits), sync (1 bit), Chip ID (13 bits).
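The slide gives only the field widths of the pixel data word, not the bit ordering or link packing. As a purely illustrative sketch, the decoder below assumes the fields are packed least-significant-bit first in the order listed (37 bits total); the struct and function names are hypothetical.

```c
#include <stdint.h>

/* Hypothetical decoder for the FPIX2 pixel data word.  Field widths
 * come from the slide (row 7, column 5, BCO 8, ADC 3, sync 1,
 * chip ID 13 bits = 37 bits); the bit ordering assumed here is an
 * illustration only. */
typedef struct {
    uint8_t  row;      /* 7 bits : pixel row within the FPIX2 chip      */
    uint8_t  column;   /* 5 bits : pixel column within the chip         */
    uint8_t  bco;      /* 8 bits : beam-crossing (BCO) time stamp       */
    uint8_t  adc;      /* 3 bits : pulse-height (ADC) value             */
    uint8_t  sync;     /* 1 bit  : synchronization flag                 */
    uint16_t chip_id;  /* 13 bits: identifies the readout chip          */
} PixelHit;

static PixelHit decode_pixel_word(uint64_t word)
{
    PixelHit h;
    h.row     = (uint8_t) ( word        & 0x7F);    /* bits  0-6   */
    h.column  = (uint8_t) ((word >>  7) & 0x1F);    /* bits  7-11  */
    h.bco     = (uint8_t) ((word >> 12) & 0xFF);    /* bits 12-19  */
    h.adc     = (uint8_t) ((word >> 20) & 0x07);    /* bits 20-22  */
    h.sync    = (uint8_t) ((word >> 23) & 0x01);    /* bit  23     */
    h.chip_id = (uint16_t)((word >> 24) & 0x1FFF);  /* bits 24-36  */
    return h;
}
```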

9 BTeV Trigger and Data Acquisition System
Input: 2.5 MHz crossing rate, 500 GB/s (200 KB/event)
L1 rate reduction: ~50x → 50 kHz, 12.5 GB/s (250 KB/event)
L2/3 rate reduction: ~20x → 2.5 kHz, 200 MB/s (250 KB / 3.125 = 80 KB/event)
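The quoted rates form a consistent chain; written out:

\[
2.5\,\mathrm{MHz} \times 200\,\mathrm{KB} = 500\,\mathrm{GB/s}, \qquad
\frac{2.5\,\mathrm{MHz}}{50} = 50\,\mathrm{kHz}, \qquad
50\,\mathrm{kHz} \times 250\,\mathrm{KB} = 12.5\,\mathrm{GB/s},
\]
\[
\frac{50\,\mathrm{kHz}}{20} = 2.5\,\mathrm{kHz}, \qquad
2.5\,\mathrm{kHz} \times \frac{250\,\mathrm{KB}}{3.125} = 2.5\,\mathrm{kHz} \times 80\,\mathrm{KB} = 200\,\mathrm{MB/s}.
\]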

10 Simulated B Event

11 Simulated B Event

12 L1 Vertex Trigger Algorithm
Two-stage trigger algorithm:
1. Segment tracking
2. Track/vertex finding
1) Segment tracking stage: use pixel hits from 3 neighboring stations to find the beginning and ending segments of tracks. These segments are referred to as triplets.
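A minimal sketch of the triplet-finding idea: extrapolate a straight line through hits in the two outer stations of a group of three and look for a matching hit in the middle station. The real BTeV segment tracker ran in FPGAs; the data structures, geometry, and tolerance here are illustrative assumptions, not the actual implementation.

```c
#include <stddef.h>
#include <math.h>

typedef struct { double x, y, z; } Hit;          /* pixel hit in space      */
typedef struct { Hit first, middle, last; } Triplet;

/* Form triplets from hits in three neighboring stations s0, s1, s2.
 * 'tol' is an illustrative match window in cm. */
static size_t find_triplets(const Hit *s0, size_t n0,
                            const Hit *s1, size_t n1,
                            const Hit *s2, size_t n2,
                            double tol, Triplet *out, size_t max_out)
{
    size_t count = 0;
    for (size_t a = 0; a < n0 && count < max_out; ++a)
        for (size_t c = 0; c < n2 && count < max_out; ++c)
            for (size_t b = 0; b < n1 && count < max_out; ++b) {
                /* Interpolate the line (s0[a] -> s2[c]) to the z of the
                 * candidate middle hit. */
                double dz = s2[c].z - s0[a].z;
                if (fabs(dz) < 1e-9) continue;
                double f  = (s1[b].z - s0[a].z) / dz;
                double xp = s0[a].x + f * (s2[c].x - s0[a].x);
                double yp = s0[a].y + f * (s2[c].y - s0[a].y);
                if (fabs(s1[b].x - xp) < tol && fabs(s1[b].y - yp) < tol) {
                    out[count].first  = s0[a];
                    out[count].middle = s1[b];
                    out[count].last   = s2[c];
                    ++count;
                }
            }
    return count;
}
```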

13 Segment Tracker: Inner Triplets
1a) Segment tracking stage, phase 1: start with inner triplets close to the interaction region. An inner triplet represents the start of a track.

14 Segment Tracker: Outer Triplets
1b) Segment tracking stage, phase 2: next, find the outer triplets close to the boundaries of the pixel detector volume. An outer triplet represents the end of a track.

15 Track/Vertex Finding
2a) Track finding phase: match inner triplets with outer triplets to find complete tracks.
2b) Vertex finding phase: use reconstructed tracks to locate interaction vertices, and search for tracks detached from those vertices.
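An illustrative sketch of the two steps above: an inner triplet (start of a track) is matched to an outer triplet (end of a track) when their straight-line parameters agree, and a completed track is flagged as "detached" when its distance of closest approach to an interaction vertex exceeds a cut. The structures, the 2-D projection, and the tolerances are simplifying assumptions, not the BTeV implementation.

```c
#include <math.h>
#include <stdbool.h>

/* A track segment parameterized as a line x = x0 + slope_xz * (z - z0). */
typedef struct { double x0, z0, slope_xz; } Segment;

/* Two segments belong to the same track if their slopes agree and the
 * outer segment extrapolates back onto the inner one. */
static bool segments_match(const Segment *inner, const Segment *outer,
                           double slope_tol, double pos_tol)
{
    if (fabs(inner->slope_xz - outer->slope_xz) > slope_tol)
        return false;
    double x_pred = outer->x0 + outer->slope_xz * (inner->z0 - outer->z0);
    return fabs(x_pred - inner->x0) < pos_tol;
}

/* Detachment test: perpendicular distance of the track from the vertex
 * (vx, vz) in the x-z projection, compared to an impact-parameter cut. */
static bool is_detached(const Segment *track, double vx, double vz, double b_cut)
{
    double dx = track->x0 + track->slope_xz * (vz - track->z0) - vx;
    double b  = fabs(dx) / sqrt(1.0 + track->slope_xz * track->slope_xz);
    return b > b_cut;
}
```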

16 L1 Trigger Decision
Execute trigger: generate a Level-1 accept if “detached” tracks going into the instrumented arm of the BTeV detector pass cuts on transverse momentum squared (in (GeV/c)²) and on detachment from the interaction vertex (in cm).
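A hedged sketch of this accept logic: count tracks in the instrumented arm that pass a transverse-momentum cut and a detachment cut, and issue an accept if enough tracks qualify. PT2_CUT_GEV2, B_CUT_CM, and MIN_DETACHED are placeholders only; the actual BTeV cut values are not reproduced in this transcript.

```c
#include <stdbool.h>
#include <stddef.h>

typedef struct {
    double pt2;          /* transverse momentum squared, (GeV/c)^2          */
    double b;            /* impact parameter w.r.t. the vertex, cm          */
    bool   forward_arm;  /* true if the track enters the instrumented arm   */
} L1Track;

#define PT2_CUT_GEV2  0.25   /* placeholder threshold, not the BTeV value   */
#define B_CUT_CM      0.01   /* placeholder threshold, not the BTeV value   */
#define MIN_DETACHED  2      /* placeholder multiplicity requirement        */

/* Return true if the crossing should receive a Level-1 accept. */
static bool level1_accept(const L1Track *trk, size_t ntrk)
{
    size_t n_detached = 0;
    for (size_t i = 0; i < ntrk; ++i)
        if (trk[i].forward_arm &&
            trk[i].pt2 > PT2_CUT_GEV2 &&
            trk[i].b   > B_CUT_CM)
            ++n_detached;
    return n_detached >= MIN_DETACHED;
}
```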

17 BTeV Trigger Architecture

18 ~0.5 TB of L1 Buffers Based on Commodity DRAM
[Figure: L1 buffer module – optical receivers (2 × 12 channels × 2.5 Gbps), FPGA, L1 buffer memory (DDR DRAM), and an output buffer (filled after an L1 accept) – hosted on a PC motherboard acting as the L1 buffer server, with Gigabit Ethernet to the L2/3 switch & farm.]

19 Prototype L1 Buffer
PCI-card-based L1 buffer board with commodity DRAM; PC motherboard with Gigabit Ethernet acting as the L1 buffer server.

20 Original Baseline: DSP-Based L1 Track/Vertex Hardware
Block diagram of the prototype L1 track/vertex farm hardware.

21 DSP-Based Track/Vertex Hardware Prototype

22 New L1 Baseline Using Commodity Hardware (1 highway)
Track/vertex farm: 33 “8 GHz” Apple Xserve G5’s with dual IBM 970’s (two 4 GHz dual-core 970’s); the Infiniband switch host is an Xserve identical to the track/vertex nodes.
Level 1 switch: 56 inputs at ~45 MB/s each, 33 outputs at ~76 MB/s each.
[Figure: front ends, L1 buffers, L1 muon, and other detectors feed the Level 1 switch and the track/vertex nodes (Trk/Vtx node #1 … #N); results go to Global Level 1 (GL1); separate L2/3 switch, L2/3 farm, Ethernet switch, and PTSM network.]
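The switch bandwidths quoted here balance, and they agree with the aggregate rate estimated in the backup slides (slide 37) once it is split over the 8 highways:

\[
56 \times 45\,\mathrm{MB/s} \approx 2.5\,\mathrm{GB/s} \approx 33 \times 76\,\mathrm{MB/s},
\qquad
\frac{20\,\mathrm{GB/s}}{8\ \text{highways}} = 2.5\,\mathrm{GB/s\ per\ highway}.
\]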

23 Prototype L1 Track/Vertex Hardware
16 Apple Xserves and an Infiniband switch (front and rear views).

24 Proposed Upstream Event Builder
Current baseline architecture vs. new integrated upstream event builder architecture (described in BTeV-doc-3342).

25 L1 Farm Transformation
ST: Segment Tracker; TV: Track/Vertex Finder.
With the upstream event builder, the Segment Trackers have complete events, so there is no need for a single large switch for event building.
[Figure: Segment Trackers (ST), Track/Vertex (TV) nodes, the L1 event-building switch, and the path to Global Level 1.]

26 L1 Farm Transformation
However, we may still need a switching function to deal with TV node failures. This can be handled by smaller switches.
[Figure: each ST feeds its TV nodes through a small switch; results go to Global Level 1.]

27 L1 Farm Transformation
Or, the switching function can be handled by a “Buffer Manager,” as was done in our DSP prototype, which could possibly be integrated into the Segment Trackers (ST).
[Figure: STs feeding TV nodes directly; results go to Global Level 1.]

28 Blade Server Platform
In the proposed architecture we replace the Apple Xserves with blade servers (Intel/IBM blade server chassis).
Each 7U crate can hold up to 14 dual-CPU blades, which are available with Intel Xeons (>3.0 GHz) or IBM 970 PPCs.
Six of these crates fit into a standard 42U rack, for a total of 168 CPUs (on dual-CPU blades) per rack.
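The per-rack CPU count follows directly:

\[
14\ \text{blades/crate} \times 2\ \text{CPUs/blade} \times 6\ \text{crates/rack} = 168\ \text{CPUs per rack}.
\]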

29 Blade Server Platform Features
Two network interfaces are provided on the mid-plane of the chassis (front and rear views):
- primary/base network: each slot connects the blade’s on-board gigabit-ethernet ports to a switch module in the rear
- secondary hi-speed network: each slot also connects ports on the blade’s optional I/O expansion card to an optional hi-speed switch module (Myrinet, Infiniband, etc.)
- the I/O expansion card might even be a custom card with an FPGA to implement a custom protocol.

30 L1 Hardware for 1 Highway with Blade Servers
Complete segment-tracking and track/vertex hardware for 1 highway, housed in 4 blade server crates.

31 L1 Trigger Hardware in a Blade Server Crate
[Figure: segment tracker boards fed from the pixel pre-processor, alongside dual-CPU track & vertex blades. Data rates: 156 MB/s out of each ST board (using 2 pairs), 78 MB/s into each L1tv node (using 1 pair), 0.5 MB/s out of each L1tv node to GL1 (using 1 pair), 4 MB/s to GL1.]

32 Level 1 Hardware Rack Count
Complete L1 trigger hardware for one highway fits in one 42U rack: segment finder & track/vertex hardware in 4 blade server crates, plus the upstream event builder/pixel pre-processor.
The entire L1 trigger for all 8 highways fits in eight 42U racks.

33 Summary
The BTeV trigger evolved from an original baseline design that satisfied all BTeV trigger requirements to a new design that was easier to build, required less labor, and had lower cost and lower risk.
When BTeV was canceled we were on the verge of proposing a new design to the Collaboration. This design was expected to:
–reduce the cost of the trigger system
–reduce the amount of space needed for the hardware
–improve the design of the system architecture
–improve system reliability (redundancy & fault tolerance)
–improve integration of the L1 & L2/3 trigger systems
Ideas that we developed for the BTeV trigger are likely to be used in future high-energy physics and nuclear physics projects.

34 End

35 Level 1 Vertex Trigger
[Figure: data flow from the 30-station pixel detector through the FPGA segment trackers and a merge stage to the event-building switch (which sorts by crossing number), then into the track/vertex farm (~528 processors); the trigger decision goes to Global Level 1.]

36 Backup slides

37 Data Rate into L1 Farm
Using the triplet format proposed in BTeV-doc-1555-v2: internal triplets: 128 bits = 16 Bytes; external triplets: 74 bits = 10 Bytes.
Assumed 35 internal and 35 external triplets per event at 2 interactions/crossing: 35 x 26 Bytes = 910 Bytes.
Extrapolating linearly to 9 interactions/crossing and applying a safety factor of 2: 2 x (4.5 x 910 Bytes = 4,095 Bytes) = 8,190 Bytes.
Total rate going into the L1 track/vertex farm in all 8 highways: 2.5 MHz x 8,190 Bytes ≈ 20 GBytes/s.
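A compact restatement of this arithmetic, making the intermediate factors explicit (the 2 interactions/crossing baseline is implied by the 4.5× extrapolation to 9 interactions/crossing):

\[
35 \times (16 + 10)\,\mathrm{B} = 910\,\mathrm{B}, \qquad
2 \times \frac{9}{2} \times 910\,\mathrm{B} = 8190\,\mathrm{B}, \qquad
2.5\,\mathrm{MHz} \times 8190\,\mathrm{B} \approx 20\,\mathrm{GB/s}.
\]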

38 Required L1 Farm Computing Power
Assume an “8.0 GHz” IBM 970:
–L1 trk/vtx code (straight C code, without any hardware enhancements like the hash-sorter or FPGA segment-matcher) takes 379 μs/crossing on a 2.0 GHz IBM 970 (Apple G5) for minimum bias events with interactions/crossing
–assume the following budget – L1 code: 50%, RTES: 10%, spare capacity: 40%: 379 μs + 76 μs + 303 μs = 758 μs
–758 μs ÷ 4 = 190 μs on an 8.0 GHz IBM 970
–include an additional 10% of processing for L1-buffer operations in each node: 190 μs + 19 μs = 209 μs
Number of “8.0 GHz” IBM 970’s needed for the L1 track/vertex farm:
–209 μs / 396 ns = 528 CPUs for all 8 highways, 66 CPUs per highway
–33 dual-CPU Apple Xserve G5’s per highway
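The scaling chain written out (the slide assumes processing time scales inversely with clock speed from 2.0 GHz to the hypothetical 8.0 GHz part, and 396 ns is the beam-crossing interval at 2.5 MHz):

\[
\frac{758\,\mu\mathrm{s}}{4} = 190\,\mu\mathrm{s}, \qquad
1.10 \times 190\,\mu\mathrm{s} \approx 209\,\mu\mathrm{s}, \qquad
\frac{209\,\mu\mathrm{s}}{396\,\mathrm{ns}} \approx 528\ \text{CPUs} = 8 \times 66.
\]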

39 L1 Segment Tracker on a PTA Card
Modified version of the PCI Test Adapter (PTA) card developed at Fermilab, used for testing a hardware implementation of the 3-station segment tracker (a.k.a. the “Super PTA”).
Uses an Altera APEX EP20K1000 instead of the EP20K200 on the regular PTA.

40 Prototype L2/3 Farm
Prototype L2/3 farm using nodes from retired FNAL farms.

41 Real Time Embedded Systems (RTES)
RTES: an NSF ITR (Information Technology Research) funded project.
Collaboration of computer scientists, physicists & engineers from the Universities of Illinois, Pittsburgh, Syracuse, Vanderbilt & Fermilab.
Working to address the problem of reliability in large-scale clusters with real-time constraints.
The BTeV trigger provides a concrete problem for RTES on which to conduct their research and apply their solutions.

