HLT Collaboration; www.ti.uni-hd.de/HLT

High Level Trigger: HLT PRR Computer Architecture
Volker Lindenstruth
Kirchhoff Institute for Physics, Chair of Computer Science
University of Heidelberg, Germany
WWW: www.ti.uni-hd.de
Trigger hierarchy: L0, L1, L2, HLT
3 ALICE HLT / DAQ operation modes foreseen

A1: DAQ mode – the front-end processors run the event-building data recorders (LDCs), interfacing directly to the raw ingress data stream. No HLT functionality is present in the system. This mode can be used for debugging the event-builder system. In this mode the aggregate data rate is still limited to 1.25 GByte/s, which makes it cost effective to terminate several DDLs (for example four) in one processor.
B1: HLT pass-through mode – full HLT architecture, accepting all events unmodified.
B2: Full HLT mode – full processing of the input data. The readout type is selected on an event-by-event basis, including full readout of any specific event.

Mode B1 is a subset of B2 and can be selected dynamically at any time for any kind of event. All HLT functionality discussed in this document is part of this mode.
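The relationship between the three modes can be sketched as an event-by-event readout decision. This is a hypothetical illustration only; the names and the `accept_event` callback are not taken from the PRR.

```python
from enum import Enum

class OperationMode(Enum):
    A1_DAQ = "A1"           # event building only, no HLT functionality
    B1_PASS_THROUGH = "B1"  # full HLT chain, all events accepted unmodified
    B2_FULL_HLT = "B2"      # full processing, per-event readout selection

def readout_decision(mode: OperationMode, accept_event) -> str:
    """Return the readout type for one event (illustrative sketch)."""
    if mode in (OperationMode.A1_DAQ, OperationMode.B1_PASS_THROUGH):
        return "full"  # no selection applied in A1 or B1
    # B2: decide per event. B1 is the subset of B2 in which every
    # event is accepted with full readout.
    return "full" if accept_event() else "reject"
```

The sketch makes the "B1 is a subset of B2" statement concrete: B1 behaves like B2 with an `accept_event` that always returns true.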
5 HLT logical dataflow

[Dataflow diagram: zero-suppressed raw data arrives sector-parallel from the detectors; time/causality runs through the HLT from input to trigger decision.]
Inputs: zero-suppressed TPC raw data (216 DDL, 83 MB/event), zero-suppressed TRD raw data + TRD tracklets (18 DDL, 8 MB/event), zero-suppressed dimuon raw data (10 DDL, 500 kB/event), zero-suppressed ITS raw data + ITS vertex. Further event sizes in the diagram: 80 MB/event, 4-40 MB/event.
Processing chain: cluster finder and on-line tracker per sector (space points, track segments); TRD seeds and regions of interest (RoI) feed the tracking; primary-vertex locator; verification of the e+e- hypothesis (fine-grain RoI, e+e- tracks); dimuon HLT refining the L0/L1 trigger.
Outputs: general HLT trigger decision (reject event / enable readout), readout RoI definition, on-line data reduction (RoI readout, vector quantization, tracklet readout), and binary lossless data compression (RLE, Huffman, LZW, etc.).
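The diagram names RLE, Huffman and LZW as candidate lossless compressors. As a minimal sketch of the simplest of these, run-length encoding (illustrative only, not the HLT implementation):

```python
def rle_encode(data: bytes) -> list[tuple[int, int]]:
    """Run-length encode a byte string as (byte value, run length) pairs."""
    runs: list[tuple[int, int]] = []
    for b in data:
        if runs and runs[-1][0] == b:
            runs[-1] = (b, runs[-1][1] + 1)  # extend the current run
        else:
            runs.append((b, 1))              # start a new run
    return runs

def rle_decode(runs: list[tuple[int, int]]) -> bytes:
    """Inverse of rle_encode: expand each run back into bytes."""
    return bytes(b for b, n in runs for _ in range(n))
```

Lossless compression like this is attractive here because long runs of identical values (e.g. baseline samples in raw detector data) collapse to a single pair, while the decode side reproduces the input exactly.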
6 Receiving detector data into the cluster

- The cheapest memory available is arguably that of commodity computers.
- Use that memory as event and latency buffer.
- Perform first-level processing (e.g. the cluster finder) where the data already resides.
- Avoid unnecessary copying of data.
- Use a programmable FPGA to off-load the host processors (FPGA co-processor).
- Make PCI(-X) the baseline bus, enabling use of all commodity computers.
- Use the CMC form factor for the optical detector-data-link circuitry on a mezzanine (commercialized prototype).
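The event/latency buffer in host memory can be sketched as a bounded FIFO that absorbs rate fluctuations and signals back-pressure when full. The class and method names here are hypothetical, not taken from the HLT framework.

```python
from collections import deque

class EventBuffer:
    """Bounded FIFO used as an event/latency buffer in host memory (sketch)."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.events: deque = deque()

    def push(self, event) -> bool:
        """Accept one event; return False (back-pressure) when the buffer is full."""
        if len(self.events) >= self.capacity:
            return False
        self.events.append(event)
        return True

    def pop(self):
        """Hand the oldest buffered event to the consumer, or None if empty."""
        return self.events.popleft() if self.events else None
```

The point of the design is that buffering happens once, in cheap host memory, and downstream stages consume events from the same memory without additional copies.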
7 ALICE TPC sector readout

- ~16000 analog channels per sector
- 512 time bins per analog channel
- 10-bit datum per time bin
- 128 channels per front-end board
- 32 front-end boards per readout controller (optical link)
- up to 4k channels per optical link
- de-randomizing buffer for 8 full events in the front-end
- zero suppression in the front-end
- ship the zero-suppressed raw event: 2.3 MB per central event and sector
- readout rate up to 200 Hz, i.e. 460 MB/s per sector
- average link throughput 92 MB/s
- use the host memory of the receiver processor as elasticity buffer
- only off-the-shelf hardware except for the PCI receiver card
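The numbers above can be cross-checked with a back-of-the-envelope calculation (only figures stated on the slide are used):

```python
# Raw data volume per sector before zero suppression.
channels_per_sector = 16_000
time_bins = 512
bits_per_sample = 10
raw_bytes = channels_per_sector * time_bins * bits_per_sample // 8
# ~10 MB of raw data per sector per event.

# Zero-suppressed bandwidth per sector at the maximum readout rate.
zero_suppressed_mb = 2.3      # MB per central event and sector
readout_rate_hz = 200
sector_rate_mb_s = zero_suppressed_mb * readout_rate_hz  # 460 MB/s

# Channel capacity of one optical link.
channels_per_link = 32 * 128  # 32 front-end boards x 128 channels = 4096
```

So a central event shrinks from roughly 10 MB of raw samples to 2.3 MB after zero suppression, and 200 Hz readout of those events yields the 460 MB/s per-sector figure quoted on the slide.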
8 HLT topology, TPC only (Timm Steinbeck, Heidelberg)

- Front-end processors: cluster finder ( nodes)
- Sector processors: track segments ( nodes)
- Tracking processors: track merger (72+36 nodes)
- Event processors: global HLT, 12 nodes
- Assumption: 40 Hz coincidence trigger plus 160 Hz TRD pretrigger with 4 sectors per trigger
- aggregate (MB/sec): ?
- ? links, 200 Hz; 11 spare; ?? TRD
- All data rates in MB/sec (readout not included here)
- DiMuon, ITS
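The slide leaves the aggregate rate open. Purely as a rough estimate, combining the trigger assumptions above with the 2.3 MB per central event and sector from the previous slide, and assuming (our assumption, not stated here) that a coincidence trigger reads all 36 TPC sectors while a TRD pretrigger reads only its 4 sectors:

```python
# All inputs below except the 36-sector full readout are stated on the slides;
# the full-readout assumption and the resulting totals are our own estimate.
mb_per_sector = 2.3      # zero-suppressed central event, per sector
full_rate_hz = 40        # coincidence trigger rate
full_sectors = 36        # assumed: full TPC readout on a coincidence trigger
partial_rate_hz = 160    # TRD pretrigger rate
partial_sectors = 4      # sectors read out per TRD pretrigger

full_bw = full_rate_hz * full_sectors * mb_per_sector        # ~3312 MB/s
partial_bw = partial_rate_hz * partial_sectors * mb_per_sector  # ~1472 MB/s
total_bw = full_bw + partial_bw                              # ~4784 MB/s
```

Note that the combined trigger rate, 40 Hz + 160 Hz, matches the 200 Hz figure quoted for the links on this slide.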