Algorithms and TP
Y. Ermoline et al.
Level-1 Calorimeter Trigger Joint Meeting, Heidelberg, January 11-13, 2010
1/11 Algorithm to implement
- Exclusive hit multiplicities trigger:
  - Use spatial overlap of em, tau and jet RoIs to identify overlaps
    - 0.2 x 0.2 em/tau RoI coordinate precision is sufficient
- Test vectors for VHDL simulation from MC
  - Output of the current L1Calo algorithm, ignoring data transfer limitations
  - RoI maps with 0.2 x 0.2 precision from CPMs/JEMs
    - 32 (phi) x 32 (eta) = 1024 bins
    - x 16 bits for CP (em/tau thresholds)
    - x 8 bits for JP (jet thresholds)
  - Simple text files
    - 32 lines: first line for phi = 0, last line for phi = 31
    - each line: 32 hex numbers, first for eta = 0, last for eta = 31
  - Read into data arrays in the VHDL testbench
  - The new topology algorithm is applied to the test vectors both in MC and in VHDL, and the results are compared.
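The test-vector file layout above (32 phi lines of 32 hex words, one word per eta bin) can be sketched as a small Python parser. This is an illustrative software model only, not the VHDL testbench; the function name `read_roi_map` and the dummy map are assumptions for the example.

```python
def read_roi_map(lines):
    """Parse 32 text lines (phi = 0..31), each holding 32 hex words
    (eta = 0..31), into roi[phi][eta] threshold bitmaps."""
    roi = [[int(word, 16) for word in line.split()] for line in lines]
    assert len(roi) == 32, "expected 32 phi lines"
    assert all(len(row) == 32 for row in roi), "expected 32 eta bins per line"
    return roi

# Dummy test vector: only bin (phi = 3, eta = 5) fires em/tau threshold 2
# (bit 2 set => hex word 0004), every other bin is empty.
lines = []
for phi in range(32):
    words = ["0000"] * 32
    if phi == 3:
        words[5] = "0004"
    lines.append(" ".join(words))

roi = read_roi_map(lines)
```

A CP map would use 16-bit words (em/tau thresholds) and a JP map 8-bit words (jet thresholds), as on the slide.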
2/11 Realistic input data
- 160 MHz (CPM/JEM -> CMM): 96 bits (24 x 4) + 1 line clock
  - JEM RoIs: 8 11-bit words (precision 0.2 x 0.2) = 88 bits (OK)
    - 2 bits RoI position, 8 bits threshold + 1 status bit (saturation)
  - CPM RoIs: 16 8-bit words (precision 0.2 x 0.2) = 128 bits (not OK)
    - No 2-bit RoI fine position (0.1 x 0.1), no 2 status bits
- Possible solution for CPM (to be proved by MC):
  - Define a limited number of "topological processing" thresholds and transfer only bit-maps for those thresholds
    - e.g. 4 thresholds => 16 4-bit words (precision 0.2 x 0.2) = 64 bits + 24 bits of inclusive hit multiplicities
- CMM(CPM) to TP (Topological Processor) data transfer
  - The CMM will receive data at 160 Mb/s from up to 16 modules
    - 24 lines for data/parity + clock
    - 384 bits @ 160 MHz => 61.44 Gbit/s
    - 12-fibre ribbon optical link driven by 6.5 Gbit/s GTX serial transceivers
- Muon data from 16 MIOCTs (Muon Interface OCTant modules)
  - 2 fibers / MIOCT
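The bandwidth figures above can be cross-checked with a back-of-envelope calculation: 16 modules x 24 lines x 160 Mb/s gives the quoted 61.44 Gbit/s, which fits within the raw capacity of a 12-fibre ribbon at 6.5 Gbit/s per GTX lane. A minimal sketch (note this compares raw line rates and ignores any serial encoding overhead, e.g. 8b/10b, which the slide does not specify):

```python
# Aggregate CMM -> TP input bandwidth, figures taken from the slide.
modules = 16            # max number of source modules per CMM
lines_per_module = 24   # data/parity lines per module
line_rate_mbps = 160    # Mb/s per backplane line

total_gbps = modules * lines_per_module * line_rate_mbps / 1000.0

# Raw capacity of the 12-fibre ribbon optical link with 6.5 Gbit/s GTX lanes.
ribbon_gbps = 12 * 6.5

print(total_gbps, ribbon_gbps)  # 61.44 vs 78.0: the ribbon has headroom
```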
3/11 Algorithm
- Count inclusive hit multiplicities per jet threshold
  - Output: 8 3-bit counts (more bits?)
- Overlap identification algorithm
  - Which RoI map overlaps to identify?
    - 4+4 possible overlaps per jet threshold
    - 8 enable bits, one enable bit per em/tau threshold
  - Jet window size 4 x 4
    - Look for em/tau RoIs in a 3 x 3 window around the jet RoI
    - If an em/tau RoI is found, set the jet RoI overlap bit
  - Muon data?
  - Output: 8 JP overlap RoI maps, 8 overlap bits per bin
- Count exclusive hit multiplicities per jet threshold
  - Output: 8 counts (same as in the current design)
[Diagram: jet window with overlapping em/tau RoI]
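The windowed overlap search above can be sketched in software as a simple reference model (an illustrative Python model, not the VHDL firmware; it handles a single jet threshold bit against one em/tau enable mask, assumes phi wraps around while eta does not, and ignores the 4 x 4 jet window granularity):

```python
def overlap_map(jet_roi, em_roi, jet_thr_bit, em_enable_mask):
    """For each bin where the jet RoI fires threshold `jet_thr_bit`,
    scan the 3 x 3 window of em/tau bins around it; set the overlap
    bit if any enabled em/tau threshold fired there."""
    out = [[0] * 32 for _ in range(32)]
    for phi in range(32):
        for eta in range(32):
            if not (jet_roi[phi][eta] >> jet_thr_bit) & 1:
                continue  # no jet RoI above this threshold in this bin
            for dphi in (-1, 0, 1):
                for deta in (-1, 0, 1):
                    p = (phi + dphi) % 32   # phi is periodic
                    e = eta + deta          # eta has hard edges
                    if 0 <= e < 32 and (em_roi[p][e] & em_enable_mask):
                        out[phi][eta] = 1
    return out

# Usage: a jet RoI at (10, 10) passing threshold 0, and an em/tau RoI
# at the neighbouring bin (11, 11) passing (enabled) threshold 1.
jet = [[0] * 32 for _ in range(32)]
em = [[0] * 32 for _ in range(32)]
jet[10][10] = 0b1
em[11][11] = 0b10
result = overlap_map(jet, em, jet_thr_bit=0, em_enable_mask=0b10)
```

The full algorithm would run this per jet threshold (8 maps) and per enabled em/tau threshold (8 enable bits), producing the 8 overlap bits per bin listed above.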
4/11 A few assumptions on TP development
- TP hardware will probably not be used in Phase II
  - but firmware (algorithms) may be used
  - concentrate on algorithm MC study and implementation
- The TP shall be developed relatively quickly, to be used in Phase I
  - minimize HW/SW design effort
  - (re)use existing knowledge, experience and "IP" parts
- The TP will be deployed together with the current L1Calo infrastructure
  - (re)use the existing HW (modules, crates, backplanes) & SW parts
- The TP may be commissioned and tested in parallel with the running L1Calo system
  - this may have implications for the development of the new modules, and may also require some backward-compatible modifications of the current system, including data formats.
5/11 Simplified TP system block diagram
- Get data out of the L1Calo trigger
  - CMM++ / upgraded CMM
- Distribute L1Calo and muon data to multiple TP modules
  - Multiple CMM outputs / GOLD
- Collect data processing results
  - CP/JP crate backplane
- Send data to the CTP at 80 MHz
  - CMM++ / upgraded CMM
  - CTPIN+
- L1Calo legacy:
  - CP/JP crate with backplane
  - TCM, VME SBC, ROD+
[Block diagram, labels: L1Calo, TP, CTP, Muon, CMM+, ROD+]
6/11 Upgraded CMM development scenarios
- CMM++ (fully new design)
  - Replacement for the existing merger modules
  - A single, large FPGA collects, processes and distributes RoI data
  - Topological algorithms on the CMM++ and/or a new topological processor subsystem
    - Ideas for a staged upgrade path using the CMM++
  - 2-year development time (components and manpower)
- CMM+ (minimal re-design, backward compatible)
  - Provides all the functionality and interfaces needed to replace the current CMM
    - Needs old G-Link chips
  - Able to feed all the backplane data onwards to the topological processor
    - New firmware in the Crate Merging Logic for incoming data serialization + SNAP12(s) on board
    - Optionally: one large FPGA instead of two
  - Faster development: RAL (CMM), Mainz (BLT/GOLD) and Stockholm (10Gb) experience
7/11 CMM+ development
- Design of the new hardware, based on the schematics of the current CMM: a PCB with all present interfaces and connectors, plus SNAP12 sockets (for the new topological processor) and one large FPGA replacing the one/two current FPGAs
- Adaptation of the current CMM firmware to the new hardware, to provide full backward compatibility; test with the current system and in the test rig
- In parallel: development of the new firmware for the CPM and JEM modules and for the CMM+, covering the new functionality (including the new data format); test in the test rig
- Merging of the two firmware builds into one; test in the test rig and in the current system. The CMM+ will supply data to the topological processor running in parallel with the current L1Calo system
8/11 Simplified TP module block diagram
- Housed in the L1Calo CP/JP crate
- Optical ribbon links bring all L1 RoI data to each TP module
- Multiple modules can run in parallel on individual algorithm(s)
- Receive and preprocess data at quadrant level; global processing of selected data
- Results transferred via the backplane to two merger modules (interfaces to the CTP)
9/11 TP module (TPM) development
- Housed in a "standard" L1Calo CP/JP crate with backplane
  - Up to 16 TP modules
  - Can also house optical fan-out modules
- Several parts of the current L1Calo modules (e.g. the VME interface, TCM interface, ROD interface, Xilinx System ACE), both schematics and firmware, can be reused to speed up the development of the TP module
  - the common part may be specified as schematics and firmware, while the actual PCB layout will differ between the CMM+ and the TP module
  - equally valid for the CMM+
- Processing results are sent to the 2 merger modules in the crate via existing lines at 160/320 MHz
  - Two CMM+ can be used as merger modules, with the legacy interfaces to the CTPIN+ upgraded to 80 MHz (4 connectors)
10/11 TP system cabling
[Cabling diagram, labels: CMM+ on 4 CP crates, 2 JP crates and the TP crate; TP modules (TPM); Muon RoIs via optical splitter; link rates 6.5 Gbit/s, 160 Mbit/s, 160/320 Mbit/s, and 80 Mbit/s to the CTPIN+]
11/11 Conclusions
- Be backward compatible and keep the L1Calo trigger system running
- Develop the TP in parallel
- Minimize effort
- Use existing modules, crates, backplanes, knowledge, experience and "IP" parts as much as possible
- Minimal modifications of the existing interfaces to the CPM and ROD
- Minimal upgrade of the CMM to provide data for the TP
- Use a CPM/JEM crate with backplane, SBC and TCM to house the TP modules and merge individual module results
- Use the upgraded CMM as the interface to the CTP
- Don't be too ambitious
- Aim new technology developments at Phase II