
1 Summary of XFEL DAQ and Control for photon beam systems workshop, 10–11 March 2008
Agenda and presentations are available on:
The workshop was a pre-XFEL project partially funded by the European Commission under the 7th Framework Programme.

2 47 registered participants
POLINI, Alessandro, ANGELSEN, BILLICH, BOUROV, CHELKOV (JINR), CLAUSEN, Matthias, CLAUSTRE, COPPOLA, Nicola, COUGHLAN, John, DECKING, Winfried, DIMPER, Rudolf, DUVAL, Philip, ESENOV, Sergey, FURUKAWA, Yukito, GRAAFSMA, Heinz, GUELZOW, Volker, GÖTTLICHER, Peter, HALSALL, Rob, HATSUI, Takaki, HENRICH, Beat, HOMS PURON, Alejandro, HOTT, Thomas, JALOCHA, Pawel, KLEESE VAN DAM, KLORA, Jörg, KRACHT, Thorsten, KUGEL, Andreas, MANT, Geoffrey, MURIN, Pavel, NICHOLLS, Tim, PERAZZO, Amedeo, PLECH, Anton, POTDEVIN, Guillaume, REHLICH, Kay, RYBNIKOV, Vladimir, SCHWARZ, Andreas, SCHÖPS, Andreas, STEFFEN, Lothar, STEPHENSON, Paul, TIEDTKE, Kai, TRUNK, Ulrich, VAN BAKEL, Niels, VAN BEUZEKOM, Martin, VISSCHERS, WINTER, Graeme, YOUNGMAN, Christopher, ZIMMER, Manfred; ALBA, DESY(20), Daresbury(4), ESRF(3), ITEP, JINR, NIKHEF(2), PSI(3), RAL(4), SLAC(2), Spring8(2), Bologna, Heidelberg, Konstanz, Slovakia, …

3 Aims
Review ALL areas of WP 76 = 8 sections & 20 talks & 529 slides:
Machine parameters and timing
Photon beam line instruments and detectors
Control systems
Archiving and data processing
DAQ and control at other labs
Infrastructure requirements
Perspectives for data rejection and size reductions
2D pixel detectors
Aims:
meet other groups, exchange ideas, etc.
produce a list of work and milestones required = any fires?
clarify, if possible, work with other WPs
identify regions of in-kind contribution
is sufficient manpower and other resources available?

4 How to summarize 20 talks or 529 slides?
Talks reviewed:
sometimes performed individually (Decking, Rehlich, …)
sometimes as part of a related group (2D-pixel detectors, …)
comments/highlights added to slides are in green
Talks not reviewed – lack of time:
Large scale data retrieval and processing
Analysis tools
Fast 2D data acquisition at ESRF beam lines
Other simplifications:
my talks not reviewed – instead I have built my comments into the slides

5 XFEL machine parameters relevant to beam lines – W. Decking
talk = understand the machine parameters seen by instruments
explain allowed bunch structure – walk through e + γ machine
find "new" facts every time reviewed:
e-beam dumps limit pulses per beam line to 1500 (new) at full E
train-to-train variation of pulses delivered possible
energy variation pulse to pulse and train to train
DAQ frontend+backend readout configuration driven by timing parameters
Facts needed for frontend systems – need definition document!
Will need a tool to decide which beamline gets what
Parameter | Nominal | Max variation range
beam energy (E) | 17.5 GeV | ? ≤ E ≤ ?
train repetition rate | 10 Hz | 30 Hz at ½ E
pulses per train (N) | 3000 | 0 ≤ N ≤ 3000
pulse separation | 200 ns | 200 ns ≤ separation ≤ few μs
ΔE pulse in train (sweep/chirps/steps) | | ±1.5% over delivered train
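To make these numbers concrete, here is a minimal sketch (my own arithmetic, not a workshop result) deriving the bunch-train length and the inter-train readout gap from the nominal parameters in the table; the constant names are invented for illustration:

```cpp
#include <cstdio>

// Nominal machine parameters from the table above (constant names are mine).
constexpr double kTrainRateHz      = 10.0;     // train repetition rate
constexpr int    kPulsesPerTrain   = 3000;     // nominal pulses per train (N)
constexpr double kPulseSeparationS = 200e-9;   // nominal pulse separation, 200 ns

int main() {
    const double train_length_s = kPulsesPerTrain * kPulseSeparationS;  // 3000 * 200 ns = 0.6 ms
    const double train_period_s = 1.0 / kTrainRateHz;                   // 100 ms at 10 Hz
    const double readout_gap_s  = train_period_s - train_length_s;      // ~99.4 ms left for readout
    std::printf("train length %.1f ms, inter-train readout gap %.1f ms\n",
                train_length_s * 1e3, readout_gap_s * 1e3);
    return 0;
}
```

The 0.6 ms train / 99.4 ms gap split derived here is the same time structure the HPAD time-sliced DAQ slide (slide 49) builds on.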

6 Timing & data synchronisation for machine and experiments – K. Rehlich
talk = how do the instruments get triggered / synchronized
Kay's vision: femto-second to pico-second stable clocks, event triggers, XFEL-wide train# and pulse# as unique tag of experiment and machine data
XFEL timing = FLASH development = more than 1 system
few pico-sec jitter for experiment clocks and events (≤ ½ ns required)
~10 femto-sec jitter for demanding users (RF feedback, laser)
Experiment interfaced to system via TCA board
development of Tönsch IP module at FLASH
distributes received timing events (e.g. train start) and datagram, delayed by a programmable value
datagram = train number, bunch pattern, energies, etc.
input from users required: datagram definitions (definition of data required, LAN requirement?)
prototype expected end 2008
Need a final definitions document (connectors, datagrams, etc.)
Need review of usage at 2D, 1D, 0D instruments (sharing, etc.)
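Purely as an illustration of what such a per-train datagram might carry, a sketch follows; the field names, types and encoding are my own guesses, since the definitions document does not exist yet and still needs user input:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical contents of the per-train timing datagram (train number,
// bunch pattern, energies, etc.); names and widths are placeholders only.
struct TrainDatagram {
    uint64_t train_number;               // XFEL-wide unique train number
    std::vector<uint8_t> bunch_pattern;  // which of the up-to-3000 pulse slots are filled
    std::vector<float>   pulse_energy;   // per-pulse energy values (units to be defined)
    uint32_t flags;                      // machine status, vetoes, … (to be defined)
};

int main() {
    TrainDatagram d{};
    d.train_number = 1;
    d.bunch_pattern.assign(3000, 1);     // example: all pulse slots filled
    d.pulse_energy.assign(3000, 17.5f);  // example: nominal beam energy per pulse
    return 0;
}
```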

7 Photon beam line and diagnostic systems – K. Tiedtke
talk = what's there and what does DAQ and control look like?
Beam line systems: design and prototyping for XFEL ongoing
systems: shutters, mirrors, …
use beam line systems at FLASH as baseline
control system = HASYLAB (DORIS and PETRA)
train-sensitive timing (~0.1 s)
Diagnostic systems: design and prototyping for XFEL ongoing
systems: pulse intensity, wave front, …
must redesign detectors and DAQ for higher rates, lower cross-sections, etc.
control system = use FLASH solution
pulse-sensitive, need connection to timing system
Still missing basic LISTS of what & where, responsibilities for readout, requirements on other groups!! Not easy, but must be done.

8 Experiment systems 0D, 1D and 2D detectors – H. Graafsma
Startup scenario from June 2007 foresees 3 beamlines (in red), T. Tschentscher et al.
[Figure: distribution of the 5 (3) beamlines, with 2 experimental stations each]

9 Experiment systems 0D, 1D and 2D detectors – H. Graafsma
talk = what experiment detectors are expected
detectors in various stages of prototyping/design
0D: little work done
1D: initial work started (128 pixel strips)
collaboration with FLA started and possibly NIKHEF
DAQ and control look like a slice of 2D: sensor / ASIC / ~512(?) deep pipeline / frontend
data transfer during train gap (~1 MB/s)
more pixels by adding slices together
2D: three proposals in, DAQ and control being developed (see 2D pixel detector talks later)
DAQ and control: sensor / ASIC / ≤512 deep pipeline / frontend
data transfer during train gap (~5 GB/s)
1 Mpixel detectors now, but 2, 4, 8, 16, … later

10 Layout of 1D detector array (1D-DA) – H. Schlarb
[Block diagram: PD array → gain & shaping → ADC (N × 10 bit) → data processing FPGA (Rocket IO) in a µTCA crate, with ASIC control and controls/archiving; N channels = 128, … 1024?; 5 MHz sampling, >50 MHz processing; machine clock & trigger input; feedback paths]
Two readout streams: 1. fast feedback (LVDS), 2. IP(?) readout to storage needed

11 Experiment systems 0D, 1D and 2D detectors – H. Graafsma
talk = what experiment detectors are expected?
1D and 2D detectors should share solutions:
generic backend DAQ and control
generic clock & control hardware
generic (slow) control systems

12 Other boundary conditions:
We need to modify the experiment: add, remove (auxiliary) detectors within a day → flexibility
We want to be able to use different (new) detectors → flexibility and standardization
We want to be able to use larger detectors in the future (4, 9, 25, … Mpixels) → modular approach
We need to CONTROL the experiment (see SR talks); means "move and count" → (part of) the data needs to be visible "immediately"
We need to store other data (machine and experiment) with the images
We want to store only "useful" images (fast veto)
We will have single module prototypes by 2010 (LCLS, PETRA, …)
Heinz's wish list!

13 Control systems at DESY– P. Duval
talk = what control systems are in use at DESY
non-legacy control systems used at DESY:
in house: DOOCS, TINE, SPECTRA (HASYLAB)
external: EPICS (cryogenics + utilities), TANGO (HASYLAB)
commercial: PVSS (H1), D3 (cryogenics)
Grand unification of control systems effort underway: TINE–DOOCS, EPICS–DOOCS, etc.
but further diversification (TANGO) = more interfaces
no agreement likely for using one, so use the most convenient
For the uninitiated – explains why there is not one light source control package.

14 Control Systems (one way or another) have to deal with … - P. Duval
Distributed end points and processes
Data acquisition (front end hardware)
Real-time needs (where necessary)
Process control (automation, feedback)
Central services (archive, alarm, name resolution, …)
Security (who's allowed to do what from where?)
States (finite state machines, sequencing, automation, …)
Time synchronization (time stamps, cycle ids, etc.)
Databases (configuration, machine data, post-mortem data, …)
Statistics (control system itself, operation, …)
Logging (central, local, application, …)
Data transport (data flow, control system protocol, scalability)
DAQ: focus on data acquisition and central services!
Agrees with our requirements if experiment algorithm software is added
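Since run-control states come up again for the FLASH/DOOCS DAQ below, here is a purely illustrative finite-state-machine sketch of the kind meant by the "States" item above; the states and transitions are my own choice, not a DOOCS or XFEL definition:

```cpp
#include <stdexcept>

// Illustrative run-control state machine (states, sequencing, automation);
// the state set and transitions are invented for this sketch.
enum class RunState { Unconfigured, Configured, Running, Error };

class RunControl {
public:
    RunState state() const { return state_; }

    void configure() { expect(RunState::Unconfigured); state_ = RunState::Configured; }
    void start()     { expect(RunState::Configured);   state_ = RunState::Running; }
    void stop()      { expect(RunState::Running);       state_ = RunState::Configured; }
    void fault()     { state_ = RunState::Error; }      // error-handling hook from any state

private:
    void expect(RunState s) {
        if (state_ != s) throw std::logic_error("illegal run-control transition");
    }
    RunState state_ = RunState::Unconfigured;
};
```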

15 Using FLASH as a prototype for XFEL – V. Rybnikov
talk = explains why we start with DOOCS for the control software
design outwardly complete:
user interface layer: GUIs (XView panels look old – replace with Java)
middle layer: FSM, name service, EVB, archive, web services
hardware interface layer: DOOCS servers (one per instrument type)
implementation outwardly complete:
implementation: C++, Java, MATLAB, LabVIEW, Oracle
applications: DOOCS Data Display, MATLAB, ROOT, eLog
DAQ functionality: run control, error handling
used by FLASH and XFEL machines and some FLASH experiments
Archive file formats: ROOT insufficient performance measured
move to something else – RAW format being tested
what about LCLS experience (NeXus)?
Workshop result: starting DAQ+control from DOOCS is sensible

16 V. Rybnikov: Ongoing development = we profit from their work and can potentially get our requirements inserted.

17 DESY site data archiving – V. Guelzow (+M.Gasthuber)
talk – storage, data management and the GRID
Five components of analysis chain:
network = 10, later 40/100 GE (Gbit/s Ethernet) – OK
computer resources: use GRID solutions = available, tuned by LHC, worldwide access and computing model
workgroup computing (on- and off-site): data management and access using Grid tools; expect increased CPU performance via multi-core technology
storage = use disk cache and storage silos – OK; XFEL requirements look OK, 2013 ~3×5 GB/s, 2016 more; costs money
software – needs effort: OS (Linux) and compiler optimized for multi-core CPUs
support = money and manpower (operation and software)
Urgently need Computing Model/TDR document

18 V. Guelzow + M. Gasthuber

19 V. Guelzow + M. Gasthuber: This is data management and offline computing requirements = new WP

20 V. Guelzow + M. Gasthuber: This is a fire!

21 DAQ and control at ESRF – L. Claustre
talk = useful parallel developments?
ESRF beamlines: BLISS (Beamline Instrument Software Support group) covers low-level drivers to analysis and visualization software; large group, 18 people
Control system development:
TACO for accelerator control
SPEC as main control program
TANGO (TACO compatible, collaboration with DESY+)
Future challenges addressing many XFEL-type problems:
visualization, on- and offline analysis tools
automated sample, exposure and result handling
new beamline and experiment detectors
Future developments might be of interest to XFEL.

22 DAQ and control at Diamond – T. Nicholls
talk = useful parallel developments?
Diamond, some numbers:
Phase 1 – 7 beamlines, 2007
Phase 2 – 15 beamlines, 2012
1000 experiment proposals per year (8-hour to many-day operations)
data management per year: 10³ TB, 10⁶ files, …
Control and DAQ:
EPICS used for accelerator and beamline control
GDA – Generic Data Acquisition – sits above EPICS and detector interfaces (non-EPICS)
GDA is the Diamond equivalent of DOOCS: Java, XML, CORBA service broker, …; similar three-layer structure to DOOCS
developed by STFC (RAL, Daresbury, …)
DOOCS could profit from GDA cross developments (visualization, …); contacts made

23 DAQ and control at LCLS – A. Perazzo
talk = asked to concentrate on hardware developments
Data system architecture:
readout chain: detector > frontend > acquisition (L1) > processing (L2) > archive
control is L0 (L = level)
120 Hz timing system, EVG+EVR system (Event Generator and Receiver)
L1 can veto and L2 reject pulses
L1 common interface to all frontend systems
Control system architecture: EPICS based because used by machine (only reason)
Networking: highly partitioned: separate frontend-through-L2 slice, user networks, beamline systems, etc.

24 DAQ and control at LCLS – A. Perazzo
talk – part 2
RCE = Reconfigurable Cluster Element
L1 acquisition node, custom module, ATCA based, System-on-Chip technology – Xilinx Virtex-4
small footprint
Pretty Good Protocol (PGP) used for point-to-point connections, many features (reliable, deterministic low latency, …)
PGP used to interface RCE to frontend
CIM = Cluster Interconnect Module
switch module, custom built, 24-port Fulcrum switch ASIC
interconnects RCE and L2 networked nodes
LCLS solutions very close to our baselines – but presently more thought through.

25 Data System Architecture – A. Perazzo
Photon Control Data Systems (PCDS)
[Diagram: detector + ASIC, FEE, timing, L0: Control, L1: Acquisition, L2: Processing, L3: Data Cache, beam line data; detector-specific and experiment-specific parts marked]
Front-End Electronics (FEE):
may be bump-bonded to ASIC or integrated with ASIC
provide local configuration registers and state machines
provide ADC if ASIC has analog outputs
FEE uses FPGA to transmit to DAQ system

26 Register Command Data Interface – A.Perazzo
Interface defined between FEE and L1; common interface among different experiments
Provides data, command and register interfaces
Custom point-to-point protocol (Pretty Good Protocol, PGP) implemented as FPGA IP core
FEE FPGA assumed to be Xilinx Virtex-4 FX family with Multi Gigabit Transceivers (MGT)
[Diagram: detector-specific FEE FPGA (register block, command block, data block) connected via MGT and fibre to the PCDS L1 node PGP block; register signals: RegAddr[23:0], RegDataOut[31:0], RegReq, RegOp, RegAck, RegFail, RegDataIn[31:0]; command signals: CmdCtxOut[23:0], CmdOpcode[6:0], CmdEn; data signals: FrameTxEnable, FrameTxSof, FrameTxEof, FrameTxEofe, FrameTxDataWidth, FrameTxData[15:0]]
Statement: define a common interface for 2D, 1D and other detectors
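To make the register handshake explicit, here is a behavioural C++ rendering inferred from the signal names above; the real interface is an FPGA IP core running over PGP, so this class is only an illustration and the write/read semantics are an assumption:

```cpp
#include <cstdint>
#include <map>

// Behavioural model of the FEE <-> L1 register interface, inferred from the
// slide's signal names (RegAddr[23:0], RegDataOut[31:0], RegReq, RegOp,
// RegAck, RegFail, RegDataIn[31:0]). Not real firmware: the RegOp meaning
// and the ack/fail behaviour are assumptions made for this sketch.
class FeeRegisterBlock {
public:
    // One transaction: returns true and fills data_in on success (RegAck),
    // false on failure (RegFail), mimicking the RegReq handshake.
    bool transact(uint32_t reg_addr, bool write_op,
                  uint32_t data_out, uint32_t& data_in) {
        reg_addr &= 0x00FFFFFF;                  // 24-bit address space
        if (write_op) { regs_[reg_addr] = data_out; data_in = data_out; return true; }
        const auto it = regs_.find(reg_addr);
        if (it == regs_.end()) return false;     // unknown register -> RegFail
        data_in = it->second;                    // RegDataIn[31:0]
        return true;                             // RegAck
    }
private:
    std::map<uint32_t, uint32_t> regs_;          // configuration register file
};
```

Usage would look like `FeeRegisterBlock fee; uint32_t rb; fee.transact(0x10, true, 42, rb);` while in the real system the same handshake runs over PGP between the L1 node and the FEE FPGA.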

27 Back End Zone Network Diagram – A. Perrazo
DMZ SLAC Domain Service Traffic Science bulk data User Accelerator Domain NFS, DNS, NTP, AAA Control & DAQ nodes Data cache machines Service CDS DSS XFEL will have similar network partitioning Front End Zone

28 DAQ and control infrastructure requirements – T. Hott
talk = when and what infrastructure is required?
Planning status for experimental hall and tunnels: little planned so far!
requirements for: space, power, air/water cooling, network connections
Milestones – areas accessible for installation:
XTD ~ Apr. 2011
XTD6-10 ~ Feb. (photon beamlines)
Expt. Hall underground ~ May (hutches)
Expt. Hall surface ~ Dec. 2012
Current first dates for:
1st beam injector ~ end 2012
1st beam linac ~ end 2013
1st SASE(1) at 0.2 nm ~ end 2014
Needed now (Thursday: 1st meeting with IT on hall + tunnel IT infrastructure):
space requirements catalogue for Expt. Hall – soon
power (UPS, etc.), air/water cooling, networks – June 2008
cables and fibre routing – end 2008
define budget requirement – soon

29 Perspectives for data rejection and size reductions – G.Potdevin
talk – first results from simulated pictures on large 2D pixel detectors
Data rejection and reduction needed to reduce archive rate!!
Frame rejection ideas:
veto frames if no fluorescence light seen in (e.g.) PM
system assumed to work 100%, what about noise – needs simulation
online reduction in backend – orientation varies, physics varies – need sophisticated analysis software
Data reduction ideas:
zero suppression, assume no pedestal
approach: simulate photons, add flat noise, add fluorescence, add detector imperfections
use jpeg compression algorithm to estimate reduction
once noise is added, little gain in data size
feature extraction: store difference between consecutive frames
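As a minimal sketch of the zero-suppression idea (my own illustration, assuming 2-byte pixels and no pedestal, as in the study above), keeping only (index, value) pairs above a threshold makes clear why the gain disappears once flat noise fills the frame:

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Zero suppression in the spirit of the study above: keep only
// (pixel index, value) pairs for pixels above threshold. With noiseless
// single-molecule images most pixels are blank and this wins; once flat
// noise is added almost every pixel fires and the "compressed" form can
// even exceed the raw 2-bytes-per-pixel frame.
std::vector<std::pair<uint32_t, uint16_t>>
zero_suppress(const std::vector<uint16_t>& frame, uint16_t threshold) {
    std::vector<std::pair<uint32_t, uint16_t>> hits;
    for (uint32_t i = 0; i < frame.size(); ++i) {
        if (frame[i] > threshold) hits.emplace_back(i, frame[i]);  // 6 bytes per kept pixel
    }
    return hits;  // pays off only if well under ~1/3 of the pixels are above threshold
}
```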

30 Data Reduction: zero suppression (2)
Lame attempt to simulate compression: conversion to jpeg with best quality (~lossless)
single molecule, noiseless image: 99.8% blank pixels, reduced to 6% of original size
large crystal, noiseless image: 92.4% blank pixels, reduced to 80% of original size
large crystal, noised image: 42.8% blank pixels, no gain

31 Conclusion - G.Potdevin
Data rejection with veto: what the proportion of rejection will be is unknown
Data reduction possible with zero suppression:
what the data will look like we don't know
how strong the background will be we don't know
but preliminary simulations tend to show that not much can be gained in this direction
Early days – more work needed, initial results not encouraging:
large set of images for different experiments
improve background simulation
cross check results with experiment

32 XFEL 2D pixel detector status
Review as a single block: HPAD, LPD, LSDD – now DEPFET
Similarities:
initially 1 Mpixel detector, later 2, 4, 8, 16, 25, …
similar geometrical tile design
sensor > ASIC > frontend > backend readout
modular design, e.g. 32 modules = full detector
similar ASIC: 50 MHz ADC digitizes data into pipeline
similar readout: pipeline processing/readout during inter-train gap
similar control requirements

33 XFEL 2D pixel detector status
Differences:
pixel sizes (LPD 500 × 500 μm, HPAD & LSDD 200 × 200 μm)
gain handling: HPAD switches dynamically, LPD has 3 gains and picks the best, LSDD is DEPFET specific
layout: HPAD – sensor & ASIC & frontend on detector head; LPD – sensor & ASIC on detector head, frontend O(10) cm away; LSDD – sensor & ASIC on detector head, frontend O(10) m away

34 Modular frontend
[Diagram: sensitive DEPFET array, bump and wire bonds, readout ASICs, auxiliary ASIC and passive components, optional heat spreader, flex hybrid]
Three building blocks: sensor+ASIC tile, module, full detector
LSDD and HPAD: sensor and ASIC bump bonded
LPD: sensor and ASIC separated
Modular design = allows increase in final full detector size!!

35 2D pixel detector sensor/ASIC/frontend parameters
Parameter | HPAD | LPD | LSDD
technology | Si, … | Si | DEPFET
pixel size | 200 × 200 μm | 500 × 500 μm | 200 × 200 μm
topology | tile | roof tile |
single γ sensitive | yes
soft x-ray sensitive | no
max. digitizing rate | 5 MHz
gain control | switched (1-fold) | 3-fold | DEPFET
dynamic range (γ count) | 0 to 5×10⁴ | 0 to 10⁵ | 0 to 10⁴
ADC | 14 bit (12 eff.) | 14 bit (12 eff.) | 10 bit (eff.?)
pixel data size | 2 bytes
frame pipeline depth | ≤ 400 | 512 | 500
module count | 32 | 16 |
readout IO channels | 128 × 1 GE | 96 × 1 GE | 64 × 1 Gbit/s LVDS
startup pixel count | 1k × 1k
startup frame size | 2 Mbytes
heat load sensor/total | ~1.2 / 2.4 kW | ~1.0–2.0 kW | ?
Detector 2D frontend DAQ parameters very similar = suit a generic backend
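A back-of-envelope check of what these startup parameters imply for the raw per-detector rate (my arithmetic, for illustration only; the earlier 2D slide quoted ~5 GB/s transferred during the train gap, i.e. with fewer frames kept and after rejection):

```cpp
#include <cstdio>

// Raw data rate for one 1 Mpixel detector from the startup parameters above.
int main() {
    const double frame_bytes      = 1024.0 * 1024.0 * 2.0;  // 1k x 1k pixels, 2 bytes each = 2 MB
    const double frames_per_train = 512.0;                   // upper end of the pipeline depths quoted
    const double trains_per_sec   = 10.0;                    // 10 Hz train rate
    const double rate = frame_bytes * frames_per_train * trains_per_sec;   // bytes per second
    std::printf("raw rate ~ %.1f GB/s per 1 Mpixel detector\n", rate / 1e9); // ~10.7 GB/s upper bound
    return 0;
}
```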

36 Basic approach
[Diagram: detector-specific frontend and generic backend blocks, with timing and slow info inputs]
DAQ and control implementation:
frontend = LSDD, HPAD, LPD specific: sensor, ASIC, pipeline buffers, …
backend = generic readout, control and data archiving and management
Baseline solution agreed to by the participating groups

37 2D pixel backend – C. Youngman
At least three backend solutions have existed:
HPAD 2007: PC farm backend
LPD EoI: ATCA backend
LSDD proposal: ePCI backend
XFEL advisory committee recommended having a generic backend design used by all detectors

38 HPAD – PC farm backend
Two PC layers L1 and L2: collect complete frames in L1, collect bunch train frames in L2
1 Gbit/s links, UDP, Rocket IO; use 10 GE as standard
Possible to build trains in 1 layer with 10 GE links: need more memory at frontend, obviously fewer links
[Diagram: modules ×8 → first-layer (FL) switches ×12 → FL PCs ×108 (UDP) → second-layer (SL) switches ×12 → SL PCs ×9 (TCP) → archiver; wire time / available time (ms) annotations: 66/100, 9×66/900, 7128/8100]
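To illustrate the two-layer event building idea (complete frames in L1, bunch trains in L2), here is a minimal sketch of the bookkeeping involved; the container layout, key types and completeness check are my own invention, not the HPAD design:

```cpp
#include <cstddef>
#include <cstdint>
#include <map>
#include <utility>
#include <vector>

// Illustrative bookkeeping for two-layer event building: L1 assembles complete
// frames from per-module fragments, L2 assembles frames into bunch trains,
// keyed by the train and pulse numbers delivered by the timing system.
struct Fragment {
    uint16_t module_id;
    std::vector<uint8_t> data;
};

using Frame = std::map<uint16_t, Fragment>;  // module_id -> fragment
using Train = std::map<uint16_t, Frame>;     // pulse number -> frame

class TrainBuilder {
public:
    void add(uint64_t train, uint16_t pulse, Fragment fragment) {
        const uint16_t id = fragment.module_id;
        trains_[train][pulse][id] = std::move(fragment);
    }

    // A train is complete when every expected pulse has a frame and every
    // frame holds a fragment from every module.
    bool complete(uint64_t train, std::size_t pulses, std::size_t modules) const {
        const auto it = trains_.find(train);
        if (it == trains_.end() || it->second.size() < pulses) return false;
        for (const auto& entry : it->second)
            if (entry.second.size() < modules) return false;
        return true;
    }

private:
    std::map<uint64_t, Train> trains_;       // train number -> train under construction
};
```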

39 Front End Readout 2nd Stage – J.Coughlan
Close to detector: FEM, flexible layer
Concentrator: buffering, DMA controller, calibration corrections (pedestals, …), XFEL formatting (headers, id), serial optical output (GEthernet), traffic shaping + reject bunches, MSCs, FPGA
Interface to DAQ
Interface to CTR (fast timing and controls system): receive clocks, synch, event nr, bunch fill information, vetos?
Configuration / user controls via DOOCS

40 LPD backend Straw Man Event Builder – J.Coughlan
10 GbE event builder and processor, assuming data processing with data reduction
Need access to complete frames?
Data processing engines: FPGA vs FPNA vs microprocessor? FPt??
Implementation: AMC mezzanines on commercial ATCA carrier board, or AMCs in MicroTCA
Large scale data switching problem -> ATCA serial mesh fabric sharing between 1 Mpix cards
1 Mpix line card: receiver in, PC backend farm out; 1 board = 1 Mpixel

41 LSDD backend DAQ box – A.Kugel
Similar to ATLAS "ROS" PC: DAQ software framework, global calibration, control
Intermediate layer with existing hardware (FPGA):
serial link interface (8 links/card)
1 GB local memory to assemble images
online calibration, offset/gain compensation
(simple) data processing
10 GbE output to BE-DAQ using UDP
Frontend/backend plans: What are the plans for the frontend-to-backend readout? When I saw you in Munich you were looking at a development of the ROBIN. I'm interested in trying to get a generic frontend-to-backend solution between all the 2D pixel detectors. You should describe what you're thinking w.r.t. the ROBIN. At the workshop I'd like to have a discussion of possible generic frontend-to-backend solutions, e.g. an intermediate layer between the detector-specific frontend and the backend (PC farm), possibly using TCA modules. Defining a generic data transfer protocol from the frontend to either the intermediate or backend PC layer would be a result of the workshop, plus any agreement on a generic intermediate layer.

42 How to progress on backend
Need agreement on a generic solution for all detectors
Agree on protocol from frontend – assume UDP – need test measurements of rates/errors (from HPAD developers and WP 76); a sketch of such a test follows below
Scalability issues for >1 Mpixel detectors
Intermediate layer between frontend and backend PC farm:
sub group formed to look at other intermediate layer solutions: M. French, J. Coughlan, T. Nicholls, R. Halsall, P. Goettlicher, M. Zimmer, A. Kugel, J. Visschers, M. v. Beuzekom, C. Youngman
produce an on-paper design of an ATCA intermediate layer to investigate feasibility:
assume 10 GE inputs, 1 board per 1 Mpixel input
build frames and ordered trains
send result to backend PC farm
LPD had looked at other solutions: iWARP, …
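A minimal sketch of the kind of UDP rate/error measurement meant above, assuming (purely for illustration) that each test datagram starts with a 32-bit network-order sequence number; the port number and framing are invented for this example, not an agreed XFEL convention:

```cpp
// Counts received UDP datagrams, bytes and sequence-number gaps (losses).
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
    const int fd = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(32100);                      // arbitrary test port
    if (fd < 0 || bind(fd, reinterpret_cast<const sockaddr*>(&addr), sizeof(addr)) != 0) {
        std::perror("socket/bind");
        return 1;
    }

    char buf[9000];                                    // jumbo-frame sized buffer
    uint64_t datagrams = 0, bytes = 0, lost = 0;
    uint32_t expected = 0;
    while (datagrams < 1000000) {                      // fixed-length test run
        const ssize_t n = recv(fd, buf, sizeof(buf), 0);
        if (n < static_cast<ssize_t>(sizeof(uint32_t))) continue;
        uint32_t seq;
        std::memcpy(&seq, buf, sizeof(seq));
        seq = ntohl(seq);
        if (datagrams > 0 && seq != expected) lost += seq - expected;  // counts gaps, ignores reordering
        expected = seq + 1;
        ++datagrams;
        bytes += static_cast<uint64_t>(n);
    }
    std::printf("%llu datagrams, %llu bytes, %llu lost\n",
                static_cast<unsigned long long>(datagrams),
                static_cast<unsigned long long>(bytes),
                static_cast<unsigned long long>(lost));
    close(fd);
    return 0;
}
```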

43 Other 2D pixel issues: clock and control interface to the XFEL timing system
Need definitive definition of machine parameters; starting point is W. Decking's talk at the workshop
Aim at generic design for all 1D and 2D detectors
[Diagram: XFEL timing system → Clock&Control unit (train config (LAN?), bunch clk (5 MHz), train start) → frontend, with bunch veto and busy signals; frontend → intermediate layer → backend PC farm over 10 Gbit/s and 1 or 10 GE links]

44 Aims revisited – results and fires
meet other groups, exchange ideas, etc. – done
useful input from all: LCLS particularly interesting w.r.t. DAQ and control implementation
DOOCS–GDA contact being established
NIKHEF interested in in-kind work
Slovak group interested in backend/data management/offline
how to handle multiple requests (Uppsala)?
produce a list of work and milestones required = any fires? (yes)
need a computing TDR, see V. Guelzow's talk
hall and tunnel infrastructure requirements
need definitive documents: milestones and full work list
machine parameters, experiment clock&control requirements
timing system interface implementation

45 Aims revisited (cont.) – results and fires continued
clarify, if possible, work with other WPs
will need a data management and offline WP soon
no significant progress defining beamline and diagnostic interaction
identify regions of in-kind contribution
sub group for intermediate layer founded – potentially in-kind
NIKHEF have submitted a participation outline: detector development (1D frontend, 2D cooling, …), DAQ and control (trigger, FPGA, DSP) expertise
is sufficient manpower and other resources available?
No – needs finalizing, but could be: to do the control work needed for HPAD (and probably LSDD) we need 1 person to select and integrate control systems and 1 technical person for control circuitry work (possibly clock&control)

46 Acknowledgements
Thanks to:
Imke Gembalis, the secretary
the international organizing committee
Heinz Graafsma and Andreas Schwarz for their suggestions
the speakers for their presentations
the participants for their attention
Thomas Hott for leading the visit to FLASH and Hall 3

47 Spare slides

48 Nomenclature
Nomenclature used here:
pulse = packet of photons or electrons; sometimes called a bunch (e-beam context)
train = consecutive group of pulses or bunches; sometimes called a bunch train (e-beam), a pulse train (γ-beam) or a macro-bunch (?)
train number = unique incremented number for each train generated by XFEL; ~10¹⁰ after 20 years of 30 Hz operation; sometimes called the event number

49 HPAD = time-sliced DAQ operation
[Timeline: 0.6 ms bunch train, 99.4 ms bunch gap]
frontend: capture data to pipeline
frontend: format data for transfer
frontend+backend: build frames
backend: do something – analyze?
backend: build bunch trains
Time slicing the data transfer and processing simplifies the conceptual design
Using the bunch train gap also fixes the design – 30 Hz operation gives a lower frame count

