
1 LHCb future upgrades – Electronics R&D – Vertex trigger. J. Christiansen, LHCb electronics coordination

2 LHC experiments upgrade programs (J.Christiansen)
The other LHC experiments (CMS/ATLAS) are now considering future upgrade development programs, in principle for Super LHC.
– They (and we) do not want to run experiments for 10+ years just to get slightly better statistics.
– Increased luminosity of the LHC machine (this in fact seems quite optimistic).
– 80 MHz bunch crossing?
– Use of trackers in the first-level trigger (for B physics?).
– To get some detectors to work as initially hoped for.
– Sub-detectors themselves are in many cases too expensive to exchange, and there is not much to be gained (except the central trackers: silicon strip and pixel).
Main subjects defined: radiation tolerance, front-end electronics, trigger and readout.
– Radiation levels will increase: central trackers and readout electronics.
– Front-end ASICs take a very long time to design, test, qualify and produce; only then can front-end boards be prototyped. ASICs are also getting very expensive to develop (NRE in 0.13 um = ~500 k$).
– Distributed power systems (power distribution and its related cooling problems have caused major difficulties in the LHC experiments, less so in LHCb as an open detector).
– We want to keep R&D people attached to the experiments before they start jumping to other future experiments (e.g. ILC).
Possible LHCb interest:
– We do not need/want increased LHC luminosity (especially not 80 MHz BX). We can increase LHCb luminosity without an increase of LHC luminosity.
– What LHCb could profit from is an improved L0 trigger: a vertex/track trigger. What technologies can allow us to make (and pay for) such a trigger?

3 LHCb interest in an R&D program
– A factor ~50 LHCb luminosity increase is possible with nominal LHC luminosity; a factor 3–10 seems more realistic.
– We are "statistics" limited in our physics results.
– We currently do not "like" multiple interactions per bunch crossing, as we do/did not know how to handle them in the L0 trigger. The pile-up veto in fact removes these on purpose.
  Do we know how to handle multiple interactions in L1 and HLT?
  What if LHC is forced to stay with the scheme of 1 in 3 bunches filled (electron cloud effect)?
– The vertex detector is sufficient to see primary and B decay vertices.
– The vertex detector needs exchange/upgrade after 3 years of nominal LHCb running (radiation damage to the silicon detector close to the beam).
– Massive use of modern optical links and FPGAs could allow us to make a vertex trigger in ~2010 (maybe even today).
– But this does not give Pt information. Would inclusion of TT be needed? Expensive, as TT is not planned to be exchanged.

4 BTeV
BTeV assume(d) they can get such a trigger to work in ~2009, with ~6 interactions per BX. We would have to decide for which luminosity we would like such a trigger to work.
– Low bunch crossing rate (2.5 MHz): they have plenty of time to read out the data for the trigger, can do a lot more processing per BX period, and have no spill-over problems. We would have to use very fast "hardwired" processing in modern FPGAs.
– Pixel detector: if silicon strips with R and φ are OK, then why change? We could also convert to pixel or "strixel" if needed. There is a problem of reading out all these channels at 40 MHz.
– Magnetic field in the vertex detector region (Pt):
  1. We would possibly have to include TT information?
  2. Could we possibly add a bit of magnetic field in the vertex region?
  3. Can we manage without Pt information with our 1 MHz accept rate? LHCb and BTeV actually need similar rejection ratios: 1 MHz/40 MHz ≈ 50 kHz/2.5 MHz.
  4. Can we get some useful information from the Cal/muon L0 trigger?
– Use of a long (~1 ms = 2500 event buffer) first-level trigger latency, with out-of-order extraction. We cannot change our 4 us L0 trigger latency unless we change the front-end electronics of all sub-detectors (very expensive).
– First-level trigger accept rate: 50 kHz. They do L0 (hardwired) and L1 (CPU farm) in one level. Can we make this work as a split two-level trigger?
– Use of FPGAs and DSPs (or a CPU farm) at the first level. We would need to be fully "FPGA-wired" to keep the latency short at L0.
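The rejection-ratio comparison in point 3 can be checked with a couple of lines (a minimal sketch of the slide's own numbers; the two ratios are similar but not identical, 2.5% versus 2%):

```python
# Fraction of bunch crossings each experiment's first trigger level keeps.
# LHCb: 1 MHz accept rate out of a 40 MHz bunch-crossing rate.
# BTeV: 50 kHz accept rate out of a 2.5 MHz bunch-crossing rate.
lhcb_ratio = 1.0e6 / 40.0e6    # = 0.025, i.e. 1 crossing in 40
btev_ratio = 50.0e3 / 2.5e6    # = 0.020, i.e. 1 crossing in 50

print(f"LHCb keeps {lhcb_ratio:.1%} of crossings, BTeV keeps {btev_ratio:.1%}")
```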

5 Basic problems / questions
Is Pt information needed in the trigger?
– Inclusion of TT?
– Magnetic field in the vertex region?
Will we need a pixel or strixel vertex detector?
– The R projection seems very convenient for fast processing.
– Channel occupancy problems?
– Spill-over from the previous BX may give "real fake" tracks. Faster shaping, automatic masking function needed?
Is φ information needed/useful for the L0 trigger?
– Yes, if TT is needed for Pt.
Can the current LHCb detectors work in a high-rate environment?
– Survive increased radiation levels? We may have to "eat" into our safety factors. We also have LHC cryogenics equipment in our cavern.
– Increased occupancy; spill-over; pile-up:
  Occupancy of the RICH L0 buffer using zero-suppressed data.
  Calorimeter baseline taken as the lowest of the two preceding samples.
  Others: OT, muon.
Keeping within the L0 latency will be a major challenge in processing!
Can we find an L0 algorithm that can do the job?
– Simple enough to be "FPGA-wired".
Can we handle multiple interactions in L1 and HLT?
– Currently we do not seem to know how to do this well.
Is the current calorimeter/muon trigger of any use with multiple interactions?
– Upgrade it, or stop using it.
Such a study/development must not disturb finalization of the LHCb detector for 2007!
– Start low key (basic feasibility study).
– Attract new collaborators (and PhD students) for a detailed study.
– Possible collaborators from BTeV (if really cancelled?).

6 Possible vertex trigger implementation
Only upgrade the vertex detector:
– Keep the same vertex vacuum tank and mechanics.
– Keep the same number of vertex stations. This ensures that all tracks of interest are seen in a minimum of three stations.
New front-end ASIC with binary trigger output at 40 MHz using a high-speed on-chip serializer:
– 128 ch × 40 MHz × 10/8 = 6.4 Gbit/s (or 2 × 3.2 Gbit/s, which is supported by today's FPGAs). A 1.6 Gbit/s serializer exists in 0.25 um (GOL); in 0.13 um CMOS technology 3.2 Gbit/s will be possible (already prototyped by CERN MIC).
– The readout path after the L0 trigger can be 8-bit digital if required: 128 × 8 × 10/8 × 1.1 MHz ≈ 1.4 Gbit/s (NOT analog on long-distance cables any more!).
– Could the current Beetle be OK for a poor man's vertex trigger? OR of 4 channels for the L0 trigger; external serializer; external ADC and serializer.
New detector? and new hybrid:
– Same number of channels and stations as now: 21 × 16 × 4 = 1344 (2688) optical links to the counting room (including φ). Fibre-ribbon cables with 8 × 8-way ribbons = 21 (42) cables.
– R (and φ) detectors should make the processing as simple as possible. Main tracking in R (with confirmation in φ?).
– How to handle alignment and still keep the processing simple and fast?
Track processing divided into 8 (possibly 16 or 32) sectors.
Processing using boards with ~16 high-end FPGAs:
– Built-in deserializers to receive data.
– Built-in serializers/deserializers for inter-board communication.
– No other components needed on the board (possibly ECS via a CPU in the FPGA).
– Fully synchronous processing. Asynchronous processing will be very difficult with the short latency available: 4 us (~2 us for real processing). Can we possibly extend the latency to 256 cycles (who has limitations: ST, RICH, others?).
– Processing split:
  Track segments within three-station R sectors (triplet finding).
  Linking and reduction of identified track segments (confirmation in φ).
  (Pt from TT (or muon L0 or Cal)).
  Identification of vertices and classification (impact parameter, etc.).
  Final decision.
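The bandwidth and link-count arithmetic on this slide can be reproduced directly (a minimal sketch; the 10/8 factor is the 8b/10b line-code overhead, and the grouping of 21 × 16 × 4 is taken from the slide as given):

```python
ENCODING = 10 / 8   # 8b/10b line-code overhead: 10 line bits per 8 data bits

# Binary trigger output of one 128-channel front-end chip at 40 MHz:
trigger_rate = 128 * 40e6 * ENCODING       # 6.4 Gbit/s, or 2 x 3.2 Gbit/s

# 8-bit digital readout at the 1.1 MHz L0 accept rate:
readout_rate = 128 * 8 * ENCODING * 1.1e6  # ~1.4 Gbit/s per chip

# Optical links to the counting room, R projection only (x2 including phi):
links_r_only = 21 * 16 * 4                 # 1344 links
links_with_phi = 2 * links_r_only          # 2688 links

# 8 x 8-way fibre ribbons give 64 fibres per cable:
cables_r_only = links_r_only // 64         # 21 cables (42 including phi)

print(trigger_rate, readout_rate, links_r_only, cables_r_only)
```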

7 Front-end ASIC outline
6.4 Gbit/s serializer or 2 × 3.2 Gbit/s; ADC; 1.6 Gbit/s serializer.
128 channels:
– Anything gained by going to 256 or 512 channels?
– Faster shaping if possible (noise versus spill-over).
Binary L0 trigger data on optical links: 6.4 Gbit/s or 2 × 3.2 Gbit/s (now possible).
– Is binary readout reliable in a detector with beam pick-up and other problems? The pile-up veto detector will give us experience with this.
8-bit digital readout:
– Binary readout is probably not good enough for detailed event reconstruction.
– Do we want to keep the analog memory, or have a simple ADC before the L0 pipeline (power and chip area)? Very similar to the current Beetle architecture.
– Could the current Beetle be sufficient for a poor man's vertex trigger?
Modern 0.13 um CMOS or alike (makes digital fast but does not ease analog). Expensive: ~500 k$ NRE.
We could possibly collaborate with CMS/ATLAS, as they also want to do triggering with their trackers.
If pixel, then everything is obviously very different (BTeV).

8 Detector and hybrid outline
R and φ detectors:
– Do we need a factor 2, 4 or 8 increase in channel count? (occupancy, resolution)
– Strixel versus pixel?
– Is φ information needed in the L0 trigger?
– Increased radiation hardness required (factor 10).
Hybrid simplified by direct use of optical links out of the vacuum tank:
– Only power and the ECS interface in copper.
– Optical vacuum feed-throughs to be verified for single fibre or fibre ribbon.
Is wire bonding the best way to connect the FE ASIC?
– TAB bonding.
– Bump bonding (didn't we have enough problems with the HPD?).

9 FPGA processing block
Large (huge) reprogrammable FPGAs are now available.
GHz serial interfaces (2.5–11 Gbit/s):
– Receive binary detector data; communication between processing elements and boards.
½ GHz parallel DDR local buses.
Large logic resources:
– Needed for track-segment finding and reduction.
On-chip memories:
– Calibration or alignment data.
– On-line monitoring and histogramming.
DSP blocks:
– Useful for data correction (alignment), beam-line projection, impact-parameter calculation, and more?
CPU:
– Useful for control and monitoring (but not for real L0 processing).
Not cheap (500–1000 $ per high-end FPGA, the same cost as a PC!).
Current examples: Altera Stratix GX or Xilinx Virtex-4.
– New, improved versions of these will be available in ~2 years.

10 Optical links
The trigger needs a lot of interconnectivity (all triggers have this problem):
– The only feasible interconnection scheme with sufficient speed and distance (80 m to the counting house) is optical links.
– An affordable price will be key to implementation.
Link speed:
– 10 Gbit/s Ethernet links are now available; 40 Gbit/s is under development.
– Possible use of wavelength-division multiplexing (4, 8, …) on the same fibre. The first versions of 10 Gbit/s Ethernet were made with 4 × 2.5 Gbit/s. Extensively used on very long-distance telecom links.
Parallel links:
– 12-way fibre-ribbon links (currently max 2.5 Gbit/s); ~500 $ for a 12-way optical module.
– Cables with 8 × 12 fibres (or more).
Radiation:
– A radiation-hard serializer and laser will be required. GOL = 1.6 Gbit/s; 3.2 Gbit/s seems obtainable.
– VCSELs have been seen to be quite radiation hard.
– For the vertex detector we will need vacuum feed-throughs.

11 FPGA processing board outline
Input data received on 8 fibre-ribbon (8-way) links at 3.2 Gbit/s:
– An 8-way ribbon fits one R sector per station.
FPGAs interconnected with high-speed (½ GHz) parallel links.
Boards interconnected via fibre-ribbon links (or serial copper links on the backplane).
Output data generated on 2 (4, 6) fibre-ribbon links.
This will be a general high-performance processor board with optical inputs/outputs:
– It will be an expensive board: 20–40 kCHF (let's hope this gets cheaper with time).
– It may even happen that a similar module will be made available by industry.
(Board diagram: 8 FPGAs, ECS and timing interfaces, an 8-way optical receiver, an 8-way optical transmitter, and an optional 8-way optical transmitter in a SNAP12 socket.)

12 System configuration outline
R projection only; two halves with 4 sectors per half (8 sectors):
– 35 processing modules
– 2 crates
– 2200 optical links
– 36 multi-ribbon cables (8 × 8)
Processing tasks, all in the counting house: clustering & triplet finding & merging; track identification & filtering; track merging; vertex identification; impact-parameter calculation; final vertex trigger decision.

13 Track segment finding (triplet finder)
Clustering of neighbouring hits.
Identify track segments (triplets) between three neighbouring stations:
1. Binary algorithm: use the middle row as a binary string seed. Shift row 1 up and row 3 down, one position at a time. Detect hits in line as a triplet and store it (the maximum number of triplets is to be defined). Can be parallelized/serialized as needed (time-multiplexed at high speed, as the logic is very simple).
2. Hit coordinates: extract the hit coordinates (max ~32 hits per row) and correlate rows 1 and 3 with the hits in row 2 (the seed) in a time-multiplexed fashion.
3. Probably many other alternatives.
Ignore triplets that are of no interest to us (not pointing to the interaction region):
– This can simplify and speed up the implementation, and will depend on the station location.
– Simple criteria based on the triplet slope and X0.
Extract basic triplet parameters for a limited number of triplets: 8-bit slope + 8-bit X0. Max number of local triplets in a sector: 32, 64?
Possible problems:
– Ghosts (noise hits, real ghosts from high occupancy, leftovers from the previous BX, etc.). At our L0 rate we can possibly accept some level of ghosts.
– Multiple scattering: extended match region (±1, 2, 3, 4 search region).
– Alignment: use corrected hit patterns with lookup tables or an algorithm. Is a simple shift enough? If there is significant misalignment, the R projection is not valid.
– Max triplet number exceeded: too busy an event, which we can probably not reconstruct anyway.
– Plus a few others?
(Figure: clustering, the A = B (±1) in-line condition, and a track not pointing to the interaction region.)
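The binary algorithm in point 1 can be sketched in software (an illustrative model only, not the FPGA implementation: three stations are binary hit strings, the middle row is the seed, and rows 1 and 3 are shifted in opposite directions one position per step so that aligned hits form a triplet; the geometry, pointing cut and parameter encoding are simplified away):

```python
def find_triplets(row1, row2, row3, max_shift=4, max_triplets=32):
    """Binary triplet finder over three equally spaced stations.

    row1/row2/row3 are lists of 0/1 hit flags, one per R strip.
    For each shift s (the track slope in strip units), a triplet is a
    strip i where the middle (seed) station has a hit, station 1 has a
    hit at i - s and station 3 has a hit at i + s (straight-line model).
    Returns (slope, seed_position) pairs, bounded by max_triplets as
    the slide requires for synchronous processing.
    """
    n = len(row2)
    triplets = []
    for s in range(-max_shift, max_shift + 1):   # rows 1 and 3 shifted in opposite directions
        for i in range(n):
            j, k = i - s, i + s
            if 0 <= j < n and 0 <= k < n and row2[i] and row1[j] and row3[k]:
                triplets.append((s, i))
                if len(triplets) >= max_triplets:
                    return triplets              # "max triplet number exceeded" case
    return triplets

# One straight track crossing the three stations with slope +1 strip/station:
r1 = [0] * 16; r2 = [0] * 16; r3 = [0] * 16
r1[4] = r2[5] = r3[6] = 1
print(find_triplets(r1, r2, r3))   # -> [(1, 5)]
```

In hardware each shift step would be a fixed wiring of AND gates across the three shifted bit strings, which is why the slide notes the logic is simple enough to time-multiplex at high speed.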

14 Track segment merging
Individual triplets coming from the same track must be merged into a single track:
– This reduces the amount of information to send (and process) further on in the system.
– Based on the triplet parameters with some defined uncertainty limits.
– When merging triplets, improve the track parameters and add a quality factor (e.g. total number of hits, variance, etc.).
– Missing hits (and triplets) in a track must be allowed.
– A single triplet that does not match triplets in neighbouring stations can/must be deleted under a well-defined set of conditions (ghost triplet, noise, etc.).
Possible problems:
– The maximum number of tracks must be defined at the different levels of the system to ensure synchronous processing. Max track number in a sector between the processing of 8 stations: 16 links × 3.2 Gbit/s × 8/10 encoding / 40 MHz / 32 bits per track = 32 tracks (optionally 64 or 96).
– How to ensure efficient merging without the risk of merging two close tracks (local merging without global information)? Do we care if we merge two close tracks?
(Figure: a missing hit means 3 missing triplets; single isolated triplets to remove or keep; a wrongly pointing track removed already in triplet finding.)
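The per-sector track budget quoted above follows directly from the link bandwidth (a minimal sketch of the same arithmetic):

```python
# How many 32-bit track words fit in one bunch-crossing period on the
# inter-stage links: 16 links at 3.2 Gbit/s, 8b/10b encoded, 40 MHz BX.
links          = 16
line_rate      = 3.2e9    # bit/s per link
encoding       = 8 / 10   # useful payload fraction after 8b/10b
bx_rate        = 40e6     # bunch crossings per second
bits_per_track = 32

payload_per_bx = links * line_rate * encoding / bx_rate   # 1024 bits per BX
max_tracks = payload_per_bx / bits_per_track
print(max_tracks)   # -> 32.0
```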

15 Vertex identification (histogramming) (à la pile-up veto and BTeV)
Project the tracks onto the beam line (Z):
– This only works well if the beam location in X–Y stays within a small, stable zone.
– What about B decay or other decay vertices not necessarily on the beam line? B decay vertices have a strong forward tendency (the basis for LHCb being a forward-only detector).
Histogram the points where tracks cross the beam line to identify vertex locations:
– This is normally a time-consuming operation (a fast FPGA-wired implementation must be found).
Assign tracks to the identified vertex locations:
– Maximum vertex candidates: 8, 16?; maximum tracks per vertex: 8, 16, 32?
Calculate vertex parameters from the assigned tracks:
– Vertex position and variance.
– Number of tracks (and their parameters).
– Etc.
(Figure: primary vertex and a B vertex candidate.)
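The histogramming step above can be modelled as follows (an illustrative sketch: each track is reduced to the z position where it crosses the beam line, the positions are binned, and bins above a threshold become vertex candidates; the bin size, z range and threshold are arbitrary illustrative choices, and an FPGA version would update the bins in parallel rather than loop):

```python
def find_vertices(track_z0s, z_min=-100.0, z_max=100.0, bin_mm=2.0, min_tracks=3):
    """Histogram track/beam-line crossing points to find vertex candidates.

    track_z0s: z positions (mm) where each track crosses the beam line.
    Returns (z_center_mm, n_tracks) for every bin with >= min_tracks entries.
    """
    nbins = int((z_max - z_min) / bin_mm)
    hist = [0] * nbins
    for z in track_z0s:                 # fill the beam-line histogram
        b = int((z - z_min) / bin_mm)
        if 0 <= b < nbins:
            hist[b] += 1
    # Peak search: any bin over threshold is a vertex candidate.
    return [(z_min + (b + 0.5) * bin_mm, n)
            for b, n in enumerate(hist) if n >= min_tracks]

# Tracks from a primary vertex near z = 0 and a displaced vertex near z = 10 mm:
z0s = [0.1, 0.3, 0.4, 0.7, 10.2, 10.5, 10.9]
print(find_vertices(z0s))   # -> [(1.0, 4), (11.0, 3)]
```

Real tracks from one vertex can straddle a bin boundary, which is one reason the slide calls the FPGA-wired version non-trivial; a sliding-window sum over adjacent bins is a common refinement.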

16 Impact parameter and final decision
That's the hard, physics-related part!
Classify primary and potential B vertices:
– Based on a simple track count?
– Use of Pt from? (TT, muon, cal).
Determine primary and B vertex pairs:
– Minimum distance?
– Try multiple combinations?
– Any other possible fast way?
Calculate the impact parameter:
– What information is really required for this, and what precision is needed?
Make the final decision.

17 Inclusion of Pt information?
Use of TT:
– The Velo φ detectors must be included to get the projection to the TT station, so 3D tracking must be done in the Velo detector. Do we have sufficient time (latency) for this, as it is an additional complicated step after the R-projection tracking? An additional 2200 optical links and 35 processor modules.
– The TT station must be included. Do we have sufficient time (latency) to perform the tracking from the Velo to TT with Pt extraction? A new TT detector would be required (not planned to be exchanged after 3 years). An additional ~4400 optical links and 70 processor modules.
– Summary: the system would cost 2–4× more than the Velo R projection alone: ~10k optical links and 140 processor modules. It will be very difficult to perform all this within the L0 latency.
Alternative: add some magnetic field to the vertex region:
– How much field would be needed?
– How would this affect the required processing?
– Effect on RICH1?
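The totals in the summary can be cross-checked against the per-step counts above (a minimal sketch; the sum of links comes to 8800, which the slide rounds to "~10k"):

```python
# (optical links, processor modules) for each stage of the TT-based Pt option:
velo_r   = (2200, 35)   # Velo R projection only (the baseline system)
velo_phi = (2200, 35)   # adding the Velo phi detectors for 3D tracking
tt       = (4400, 70)   # adding the TT station itself

links   = sum(stage[0] for stage in (velo_r, velo_phi, tt))   # 8800 ("~10k")
modules = sum(stage[1] for stage in (velo_r, velo_phi, tt))   # 140
print(links, modules)
```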

18 How to possibly progress
1. Verification of the basic physics: can a vertex L0 trigger allow us to run at higher luminosity? Is Pt information required?
2. Verify the effects on other parts of LHCb: radiation, buffer overflows, pile-up, etc.
3. Develop the system architecture.
4. Implementation studies.
5. Real implementation, 2007–2010: detector, ASIC, hybrid, FPGA board, VHDL code, plus new L1 trigger and HLT code.
6. Installation: 2010 or later.

19 Optimization scenario
Two worlds have to meet around a common system architecture: the physics world and the electronics world.
Physics world: studies physics and trigger performance without being overloaded with implementation details.
– Basic physics, physics performance, silicon detector, L1 & HLT.
– Simple physics models and performance metrics.
– Monte Carlo model (C++).
Electronics world: studies and optimizes implementation aspects without being overloaded with detailed physics.
– System implementation, local FPGA algorithms, basic system performance.
– Detailed system model (VHDL).
We want the two models to be 99.9% equivalent; use a specific tool for this (Confluence or alike).
Compare/discuss performance and changes to the system architecture between the two worlds.

20 Possible work plan scenario
Physics:
1. Need for Pt information?
2. Estimate feasibility from a first system outline.
3. Extract basic physics parameters to allow system optimization by electronics engineers:
– Characteristics of the primary vertex: number of tracks and their distribution.
– Characteristics of the B vertex: number of tracks and their distribution.
– Other vertices/decays?
– Distance from the primary to the B vertex.
– Background tracks: number and basic characteristics.
– Scattering.
– Hit detection efficiency.
– Noise hits.
– Spill-over from the previous bunch.
– Metrics to measure system efficiency: correctly found tracks, misidentified tracks, ghost tracks, vertex identification and vertex parameters (tracks, etc.), primary and B vertex identification.
4. Full-blown Monte Carlo simulations:
– Detailed algorithm in C(++).
– Supply data sets for detailed VHDL simulations.
– Calculate trigger efficiencies, etc.
5. How to handle events in L1 and HLT.
6. Detector (strixel, radiation tolerance, etc.).
System (electronics):
1. General feasibility: links, FPGAs, cost.
2. Basic algorithms fit for fast synchronous implementation in FPGAs.
3. General simulation framework to estimate the efficiency of the basic algorithms: hit-generator model based on the simple physics model; local algorithms; system; extraction of system performance parameters.
4. Full simulation model.
5. Start development of the front-end ASIC.
6. Prototype hybrid.
7. Prototype processor board and test system.
8. Final system.
This must NOT disturb the LHCb startup in 2007.

