L1Calo Requirements on the DPS

Murrough Landon
24 April 2012

- Number and content of fibres
- Difficult regions
- Physical layer
- Timescales
- Backup

Number and Content of Fibres (1)
- Bad news: the fibre count depends on options and uncertainties
  - We cannot give final numbers yet, but we can show the worst case
  - Whatever the options, we would like a lot of outputs!
- Options:
  - Using 1141 or 1441 for the eFEX: we now prefer 1441 (Alan)
    https://indico.cern.ch/getFile.py/access?contribId=0&resId=0&materialId=slides&confId=187524
  - Basic granularity of the jFEX: prefer 0.1*0.1 (may drop to 0.2)
  - Link speeds: baseline of 6.4 Gbit/s (may go to 9.6???)
  - Link contents fairly well known; may squeeze a few bits
- Optical link duplication
  - Two copies of every signal (separate stream from the FPGA)

Number and Content of Fibres (2)
- Links to the eFEX (before duplication of signals)
- Minimum link contents (plus quality bits if space):
  - 10-bit Et + 1-bit BCMUX flag = 11 bits per BCMUX pair of supercells
  - For 1141: 7*11 = 77 bits per tower pair
  - For 1441: 10*11 = 110 bits per tower pair
- Consider a group of 8 towers (56 or 80 supercells):
  - For 1141: 4*77 = 308 bits per 8 towers
  - For 1441: 4*110 = 440 bits per 8 towers
- Link speed (assuming 8b/10b encoding):
  - Have 128 (or 192) bits per BC at 6.4 Gbit/s (or 9.6 Gbit/s)
  - For 1141 at 6.4 Gbit/s: need 3 fibres for 8 towers
  - For 1441 at 9.6 Gbit/s: need 3 fibres for 8 towers
  - For 1441 at 6.4 Gbit/s: need 4 fibres for 8 towers
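
As a sanity check, a minimal Python sketch of the fibre arithmetic above (helper names are illustrative; a 40 MHz bunch-crossing rate and 8b/10b encoding are assumed, as on the slide):

    import math

    BC_RATE_HZ = 40e6   # LHC bunch-crossing rate (assumed)
    ENCODING = 8 / 10   # 8b/10b encoding efficiency

    def payload_bits_per_bc(line_rate_gbps):
        """Usable payload bits per bunch crossing on one fibre."""
        return line_rate_gbps * 1e9 * ENCODING / BC_RATE_HZ

    def efex_fibres_per_8_towers(supercells_per_tower, line_rate_gbps):
        # 11 bits (10-bit Et + 1-bit BCMUX flag) per BCMUX pair of supercells;
        # a group of 8 towers is 4 tower pairs.
        bits = 4 * supercells_per_tower * 11
        return math.ceil(bits / payload_bits_per_bc(line_rate_gbps))

    print(efex_fibres_per_8_towers(7, 6.4))    # 1141 @ 6.4 Gbit/s -> 3
    print(efex_fibres_per_8_towers(10, 9.6))   # 1441 @ 9.6 Gbit/s -> 3
    print(efex_fibres_per_8_towers(10, 6.4))   # 1441 @ 6.4 Gbit/s -> 4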

Number and Content of Fibres (3)
- Links to the jFEX (before duplication of signals)
- Link contents (for the moment assume 0.1*0.1 granularity):
  - 12 bits Et per tower
  - SumE for the whole DPS mezzanine: 14 bits?
  - Some quality bits per tower?
- Consider a group of 8 towers: 8*12 + 14 = 110 bits, plus a few quality bits?
- Link speed (assuming 8b/10b encoding):
  - Have 128 (or 192) bits per BC at 6.4 Gbit/s (or 9.6 Gbit/s)
- Expect 1 fibre per 8 towers, irrespective of link speed
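
Continuing the same back-of-envelope arithmetic, the jFEX payload fits comfortably on one fibre even at the baseline speed (the 14-bit SumE is the tentative figure from this slide):

    # jFEX payload check (same 40 MHz / 8b/10b assumptions as above).
    payload_64 = 6.4e9 * (8 / 10) / 40e6       # 128 usable bits per BC per fibre
    bits_jfex = 8 * 12 + 14                    # 8 towers x 12-bit Et + 14-bit SumE
    print(bits_jfex, bits_jfex <= payload_64)  # 110 True: 1 fibre per 8 towers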

Number and Content of Fibres (4)
- Would like factor-2 optical duplication at the FPGA source
- => Totals for 1441 eFEX and 0.1-granularity jFEX:
  - 6.4 Gbit/s: (4+1)*2 = 10 fibres for 8 towers
  - 9.6 Gbit/s: (3+1)*2 = 8 fibres for 8 towers
- IF(?) the DPS has mezzanines taking in 32 towers:
  - 6.4 Gbit/s: need four 12-fibre minipods for output
  - 9.6 Gbit/s: need three 12-fibre minipods for output
  - Plus the minipods for the input data!
- Groupings larger than 8 towers may save a few fibres
  - But they also have an impact on sliding-window environments
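
A quick tally for a hypothetical 32-tower DPS mezzanine (four 8-tower groups, factor-2 duplication at source), reproducing the minipod counts above:

    import math

    for rate, efex_fibres in [(6.4, 4), (9.6, 3)]:   # eFEX fibres per 8 towers
        per_group = (efex_fibres + 1) * 2            # +1 jFEX fibre, x2 duplication
        total = 4 * per_group                        # four 8-tower groups
        print(rate, total, math.ceil(total / 12))    # 6.4 -> 40 fibres, 4 minipods
                                                     # 9.6 -> 32 fibres, 3 minipods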

Difficult Regions
- Overlap region (1.4 < |eta| < 1.5)
  - Better to handle the overlap in the DPS (before multiple fanout)
  - Barrel component provides 1001 (PS + "barrel end")
  - Endcap component provides 0140
  - Can fibres from the dTBBs be regrouped to make 1141 in the DPS?
- 1341 region (1.8 < |eta| < 2.0)
  - 24 strips means 6 shaper sums
  - Hucheng's spreadsheet suggests 3 equal-width supercells
  - Is it possible to have 4 unequal-width supercells instead?
    E.g. 0.016, 0.033, 0.033, 0.016 in eta (approx)

Physical Layer & Optical Duplication Baseline minipods at each end? Currently offers best optical power Our real requirement: Sufficient optical power at the end of our links Not sure of the exact numbers (to be provided) NB we may need to do extra passive optical splitting of some signals, even with 100% duplication at source Other link requirements Links active continuously (no dead periods for resync) Checksums (over groups of links) to double check for errors Design for 10 Gbit/s even though the baseline is 6.4 Gbit/s Low latency link protocol

Timescales
- UK high-speed demonstrator: this summer
  - Tests link options, speeds and PCB routing, but has only 2 ribbons
- First prototype eFEX module: end 2014
  - Tests many links per module
- Only then could we be confident about running many 10 Gbit/s links per module

Backup (from L1Calo discussions)

TBB Layout
- TBB: up to 32 towers, various geometries
- dTBB: assume a 1:1 map?
- LAr like mezzanines:
  - 4 mezzanines/dTBB
  - 8 towers per mezzanine
  - 80 supercells (1441)
- ADC: max 12 bits? 80*12 = 960 bits
- LAr aim for 10 Gbit/s
  - Need 5 out of 12 fibres
  - Say 6, with one redundant: SNAP12 half used
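
The "5 out of 12 fibres" follows from the same payload arithmetic as the earlier sketches (assuming the 40 MHz BC rate and 8b/10b encoding):

    import math

    bits_per_bc = 80 * 12                    # 80 supercells x 12-bit ADC = 960 bits
    payload = 10e9 * (8 / 10) / 40e6         # 200 usable bits per BC per fibre
    print(math.ceil(bits_per_bc / payload))  # 5 fibres (say 6 with one redundant)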

Overlap Region (1.4 < |eta| < 1.5)
- Barrel component: 1001
  - Presampler
  - "Barrel end": sum of ~3 front + 1 middle
  - "Barrel end" handled by the back-layer FEB
- Endcap component: 0140
  - Front: low granularity => 1 supercell
  - Middle: 4 supercells, as elsewhere
- Merging to 1141? Can the 5 fibres per dTBB mezzanine be grouped as
  1 fibre for PS + back, 2 fibres front, 2 fibres middle?
  - If so, we may be able to merge the overlap region as 1141 in the DPS
  - Assumes there could be fibre patching between the dTBB and DPS
  - And we might lose any redundancy in the fibres from the dTBB?

1341 Region (1.8 < |eta| < 2.0)
- Two eta bins per endcap have 24 strips per tower
  - Gives six sums of four cells from the shaper
- LAr suggests providing 3 equal-width supercells here
  - Excel spreadsheet from Hucheng Chen [CHECK]
- I wonder if we would prefer four unequal-width supercells?
  - E.g. 0.016, 0.033, 0.033, 0.016
  - If that's feasible in the layer sum board...
  - (Assuming we want the 1441 option in general)
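
The quoted widths would follow from grouping the six shaper sums 1+2+2+1 across the 0.1-wide tower (my inference from the numbers above, not stated on the slide); a sketch of that arithmetic:

    # Four unequal-width supercells from six shaper sums (1+2+2+1 grouping).
    tower_eta = 0.1
    shaper_sum = 4 * tower_eta / 24          # each sum spans 4 of the 24 strips
    widths = [n * shaper_sum for n in (1, 2, 2, 1)]
    print([round(w, 3) for w in widths])     # [0.017, 0.033, 0.033, 0.017] (approx)
    print(round(sum(widths), 3))             # 0.1, the full tower width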

DPS Layout?
- LAr still like mezzanines...
- Suggested design has 4 mezzanines on one ATCA carrier
- Mezzanine (toy layout?) has four minipods and one FPGA
- Inputs: two 12-fibre ribbons?
  - Regroup two half-used dTBB SNAP12s to one DPS SNAP12?
  - One DPS mezzanine handles four dTBB mezzanines?
- If so, one mezzanine handles 32 towers, e.g. a 4*8 or 8*4 shape

DPS Output?
- Bandwidth: output >= input
  - BCMUX halves the output bandwidth (to the eFEX)
  - But duplication at source doubles it, and we also need output to the jFEX
- DPS mezzanine: two input and two output minipods?
  - If 100% duplication, have one "core" output minipod (12 fibres)
- Output to eFEX
  - "Sam scheme" has 8 towers on 3 fibres
  - Assumes 1141 if 6.4 Gbit/s links, or 1441 if 10 Gbit/s links
  - Need 12 fibres for output to the eFEX: no room for the jFEX!
  - Unless we have less than 100% duplication: can we find a geometry that works?
- Output to jFEX
  - "Sam scheme": 16 towers/fibre => 2 "core" fibres per mezzanine
  - Any way to group jFEX outputs at the module level?
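
A minimal check of why the "Sam scheme" packing overflows a single 12-fibre core minipod on a 32-tower mezzanine:

    towers = 32
    efex_fibres = (towers // 8) * 3          # 8 towers on 3 fibres -> 12
    jfex_fibres = towers // 16               # 16 towers per fibre  -> 2
    print(efex_fibres + jfex_fibres <= 12)   # False: no room for the jFEX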

Assumptions and Open Questions (1)
- Assume we will never want EM triggers for |eta| > 2.5
  - Never want finer than 0.2 granularity in the inner EMEC?
- Link speeds between the DPS and eFEX/jFEX
  - Looks like 1441 is now preferred over 1141 on physics grounds
  - Impact on the DPS design of 6.4 vs 10 Gbit/s links
- Environment size for the eFEX?
  - 0.3*0.3 reduces the required optical fanout, but feels too small
- Data content of the links?
  - Unclear if we really need more than Et
  - Precise time, pulse quality, SumE

Assumptions and Open Questions (2)
- Granularity of the jFEX?
  - 0.1*0.1 vs 0.2*0.2 has a big impact on the DPS output to the jFEX
  - When do we need to make a decision on this?
- Maximum granularity from the present FCAL for the jFEX?
  - Allowance for inputs from a future new FCAL
- Eta-phi geometry of the Phase 2 Tile RODs
  - Affects possible geometries of the hadronic fibres already now