MIMOSA Analysis Framework (MAF) used in test beam, and what test beam analysis software should be able to do (personal point of view). Short overview.


MIMOSA Analysis Framework (MAF) used in test beam, and what test beam analysis software should be able to do (personal point of view)
 Short overview of MAF / test beam
 Shopping list
 Alignment issues
 Data analysis issues
Auguste Besson (IPHC-Strasbourg)

EUDET, Geneva, January 25th 2007 – Auguste Besson
Strasbourg telescope
Telescope
– 8 reference planes  silicon microstrips, 256 x 50 μm
– Trigger: 2 scintillator planes (2x4 mm2 and 7x7 mm2)
– Spatial resolution  ~2 μm/plane
– Resolution in the DUT plane  ~1 μm
– Cooling system
– DUT rotation possible
Data taking
– Online monitoring
– Rate  ~ evts/hour (M9, 8k pixels)  ~2500 evts/h (M5, 500k pixels)  ~ runs/week  ~100s of GB/week
– Offline analysis  first results available within a few hours
Acquisition and monitoring PC / analysis PC

General structure of MAF (not optimal)
Originally a program developed by the RD42 collaboration (for strips)
– ~6 different authors  structure not optimized, but…  take advantage of all the mistakes already made!
1) Generate eta functions of the telescope planes
2) Alignment of telescope planes (from 2 fixed planes)
3) Reconstruction (hits in DUT and track selection)
4) Generate eta function of DUT
5) Alignment of DUT
6) Analysis (efficiency, noise, S/N, etc.)
Store:
– Alignment parameters of the telescope planes
– Eta function parameters of telescope planes
– Alignment parameters for DUT
– Eta function parameters for DUT ?

Eta functions for optimised resolution
1/ CoG is not the optimal way to obtain a position: a bias is introduced by the non-linearity of charge collection vs the diode-impact distance
2/ The eta method flattens the distributions (the probability density is expected to be flat)
3/ It improves the resolution significantly (see back-up slides for details)
[Plots: eta function, CoG value (input) vs eta value (output); impact position given by CoG inside a 20x20 µm2 pixel vs impact position given by the eta function inside the pixel]
– CoG(5x5) residual:  = 2.30 µm
– Eta(3x3) residual:  = 1.84 µm
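As a toy illustration of the starting point of the eta correction (not MAF code; function name, cluster values and coordinates are invented), the biased centre-of-gravity estimate could be computed like this:

```python
# Illustrative sketch: centre-of-gravity position of a 3x3 cluster,
# the biased position estimate that the eta correction improves on.

def centre_of_gravity(charges, positions):
    """Charge-weighted mean of pixel positions along one axis.

    charges   -- 9 pixel charges Q_i of the 3x3 cluster
    positions -- 9 pixel-centre coordinates on that axis (µm)
    """
    total = sum(charges)
    if total <= 0:
        raise ValueError("cluster has no positive charge")
    return sum(q * x for q, x in zip(charges, positions)) / total

# Symmetric cluster on a 20 µm pitch: CoG falls on the seed pixel centre.
charges = [1, 2, 1, 2, 10, 2, 1, 2, 1]
xs = [-20.0, 0.0, 20.0] * 3      # column coordinates of the 3x3 pixels
print(centre_of_gravity(charges, xs))  # 0.0 for this symmetric cluster
```

The bias appears for asymmetric clusters: the CoG pulls the estimate toward the pixel centre, which is exactly the non-flatness the eta function removes.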

What should test beam analysis software be able to do? (1)
The goal is not only to determine the standard performances (efficiency, S/N, resolution, etc.) but also, and mainly, to determine and understand quickly any unexpected behaviour of the DUT (and there will be some, since we are dealing with prototypes!)
Absolutely crucial:
– Online monitoring ( other software)
  rough alignment
  trigger rates, beam profile / flux setting
  rough baseline, pedestal, noise, hit selection of DUT
  detect anomalies quickly
– Event display / scanning
  jump to a particular event and analyse it in detail (whole matrix, noise, pedestal, etc.)
– Filters
  map / mask of bad pixels, lines, etc., to tag / exclude them easily from the analysis
– Flexibility = complete datacard/input system
  all possible parameters, even radiation dose, temperature, etc., so that these parameters are set in only one place and are accessible in the analysis
  keep track of all the information in an unambiguous way
  change the telescope configuration easily (positions, angles, number of planes, etc.)
  all DUTs have different formats and different numbers of sub-structures, which may or may not be analysed separately  adapt the reconstruction easily to any DUT particularity, e.g. add a common-mode correction at the reconstruction level in a particular region of a given matrix for events taken between 2:12 AM and 3:47 AM
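The map/mask filter above could be sketched as follows (a minimal toy structure; the class, its fields and the `(col, row)` hit convention are assumptions, not MAF's actual implementation):

```python
# Illustrative sketch of a bad-pixel / bad-line mask used to tag and
# exclude known-bad channels from the analysis.

class PixelMask:
    def __init__(self, bad_pixels=(), bad_lines=()):
        self.bad_pixels = set(bad_pixels)   # {(col, row), ...}
        self.bad_lines = set(bad_lines)     # {row, ...}: whole lines excluded

    def is_good(self, col, row):
        return (col, row) not in self.bad_pixels and row not in self.bad_lines

    def filter_hits(self, hits):
        """Keep only hits on good pixels; hits are (col, row) tuples."""
        return [h for h in hits if self.is_good(*h)]

mask = PixelMask(bad_pixels=[(3, 7)], bad_lines=[12])
hits = [(3, 7), (4, 7), (0, 12), (5, 5)]
print(mask.filter_hits(hits))  # [(4, 7), (5, 5)]
```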

What should test beam analysis software be able to do? (2)
Not crucial, but very convenient for users:
– Complete analysis fast enough compared to the data-taking time, to react quickly
– Most users will want to compare different runs  some tool/framework to compare different runs easily, e.g. 5 consecutive runs at 5 different temperatures  study noise vs temperature
– Most users will need some calibration / noise runs  same framework to analyse these runs  necessary to compute a fake hit rate
– Most users will want to optimize their analysis  run different hit algorithms / sets of cuts in the same reconstruction to compare performances
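The fake-hit-rate idea above can be sketched numerically: in a beam-off noise run, count how often a pixel fluctuates above the hit threshold. This toy assumes pure Gaussian noise with unit sigma; all names and numbers are illustrative.

```python
# Sketch of a fake-hit-rate estimate from a noise run (no beam):
# the fraction of pixel readings whose S/N exceeds the hit threshold.

import random

def fake_hit_rate(n_events, n_pixels, threshold_sn, seed=1):
    """Fraction of pixel readings above threshold in beam-off data."""
    rng = random.Random(seed)
    above = 0
    for _ in range(n_events):
        for _ in range(n_pixels):
            # pure noise assumption: S/N is ~ Gaussian with sigma = 1
            if rng.gauss(0.0, 1.0) > threshold_sn:
                above += 1
    return above / (n_events * n_pixels)

rate = fake_hit_rate(n_events=200, n_pixels=500, threshold_sn=3.0)
print(f"fake hit rate at S/N > 3: {rate:.2e}")  # Gaussian tail: expect ~1.4e-3
```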

The dream of the user…
Datacard options:
– Define matrices
– Define several regions (A, B, etc.)
– For each region, define:
  noise algorithm A, B, etc.
  pedestal algorithm A, B, etc.
  hit selection algorithm x, y, z
  S/N thresholds
  cluster size (1, 2x2, 3x3, 5x5, 7x7, etc.)
  simulated output chain (ADC, etc.)
  filters (hot pixels, dead lines, etc.)
– 1 ROOT branch per region
Example, M9: 4 outputs, 2 submatrices, study of a temperature gradient effect (regions A and B within 1 matrix, plus a mask)
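A minimal sketch of such a per-region datacard (the text format, keys and values are entirely hypothetical, not MAF's real syntax):

```python
# Toy datacard: each region of a matrix carries its own algorithms,
# thresholds and cluster size, configured in one single place.

DATACARD = """
matrix  M9
region  A  rows 0:255    noise_algo A  seed_sn 5.0  cluster 3x3
region  B  rows 256:511  noise_algo B  seed_sn 4.0  cluster 5x5
"""

def parse_datacard(text):
    regions = {}
    for line in text.strip().splitlines():
        tokens = line.split()
        if tokens[0] != "region":
            continue
        name = tokens[1]
        # remaining tokens come in key/value pairs
        regions[name] = dict(zip(tokens[2::2], tokens[3::2]))
    return regions

cfg = parse_datacard(DATACARD)
print(cfg["A"]["cluster"], cfg["B"]["seed_sn"])  # 3x3 4.0
```

The point of the design is that the analysis code only ever reads `cfg`, so changing a threshold or a cluster size for one region never requires touching the reconstruction code.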

Alignment issues (1)
In principle:
– Alignment of the telescope is done when necessary
In practice:
– Alignment of all planes is done for each run  we are dealing with microns, so any change in temperature or any access to the test beam area can modify the alignment
– 2 alignments:
  telescope alignment: relatively stable
  DUT alignment: done for each run, and possibly for each submatrix individually!
To get the best alignment:
– the algorithm needs to minimize the distances between extrapolated tracks and the associated hits in the DUT
– BUT:
  you don't know a priori which hits can be associated to a given track; to know this, you need to know the correct alignment parameters…  need some maximum track-hit distance cut (large before alignment)
  some tracks don't go through the DUT itself (if the DUT is smaller than the track acceptance, for instance)  need to know the alignment to select the « good » tracks (= good acceptance of the telescope)
  it may be necessary to do it for each submatrix individually
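The chicken-and-egg logic above (associate hits within a cut, estimate the alignment, tighten the cut) can be sketched in one dimension. This is a toy with a single translational offset; real alignment fits more parameters (see next slide).

```python
# Sketch of iterative alignment: associate each track to its closest hit
# within a track-hit distance cut, estimate the offset from the mean
# residual, then re-associate with a tighter cut.

def align(track_x, hit_x, cuts=(500.0, 100.0, 20.0)):
    """Estimate a 1-D DUT offset (µm) from track and hit positions."""
    offset = 0.0
    for cut in cuts:               # large cut first: alignment unknown
        residuals = []
        for t in track_x:
            # associate the closest hit, if within the current cut
            best = min(hit_x, key=lambda h: abs(h - offset - t))
            if abs(best - offset - t) < cut:
                residuals.append(best - t)
        if residuals:
            offset = sum(residuals) / len(residuals)
    return offset

# Toy data: hits are track impacts shifted by a 120 µm misalignment.
tracks = [0.0, 1000.0, 2500.0, 4000.0]
hits = [t + 120.0 for t in tracks]
print(round(align(tracks, hits), 1))  # 120.0
```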

Alignment issues (2)
« Recursivity » is unavoidable  to compute the optimised alignment, you need to already know it pretty well
– Angles in the fit are very useful (to study hits with a large incidence angle, for instance): 5 parameters (and not only 2) if the z position is correctly known
– Alignment of the telescope before the reconstruction
New alignment needed for each configuration change (temperature, prototype, etc.)
Track quality monitoring is mandatory  control the χ2, check that the track selection doesn't introduce a bias, etc.  monitor the track-hit distance and the χ2/NdF
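The χ2/NdF monitoring above can be sketched with a simple straight-line fit to telescope hits (a pure-Python toy with one coordinate and a common hit resolution; a real fit would use the full 5 parameters mentioned on the slide):

```python
# Sketch of track-quality monitoring: fit x = a + b*z to the telescope
# hits by least squares and report chi^2 per degree of freedom.

def line_chi2_ndf(z, x, sigma):
    """Straight-line fit residual chi^2 / NdF (sigma = hit resolution)."""
    n = len(z)
    sz, sx = sum(z), sum(x)
    szz = sum(v * v for v in z)
    szx = sum(a * b for a, b in zip(z, x))
    b = (n * szx - sz * sx) / (n * szz - sz * sz)   # slope
    a = (sx - b * sz) / n                           # intercept
    chi2 = sum(((xi - (a + b * zi)) / sigma) ** 2 for zi, xi in zip(z, x))
    return chi2 / (n - 2)   # 2 fitted parameters

# 8 telescope planes; hits exactly on a line give chi2/NdF = 0
z = [float(i) for i in range(8)]
x = [2.0 + 0.5 * zi for zi in z]
print(line_chi2_ndf(z, x, sigma=2.0))  # 0.0
```

A hit displaced by several sigma drives χ2/NdF well above 1, which is exactly the anomaly this monitoring is meant to flag.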

Is the resolution homogeneous? How Mimosa breathes at CERN…
[Plot: horizontal and vertical residuals (µm) vs event number (k); offset ~4-5 µm]
 Temperature monitoring should be coupled to the analysis software

Acceptance
Remarks on triggers:
– Use a large trigger to see where the DUT actually is
– Having a trigger acceptance adapted to the DUT is the easiest way to reduce the data-taking time
[Plot: all track extrapolation positions in the DUT plane that were not associated to a hit in the DUT, showing the trigger acceptance (2 x 4 mm2) and the DUT acceptance]
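The acceptance plot described above can be mimicked numerically: the bounding box of the matched track extrapolations reveals where the DUT sits inside the larger trigger window. Function name and geometry are a toy, not the real setup.

```python
# Sketch: locate the DUT inside the trigger window from the track
# extrapolations that WERE associated to a DUT hit.

def dut_window(track_xy, matched):
    """Bounding box of matched extrapolations = apparent DUT acceptance."""
    inside = [p for p, m in zip(track_xy, matched) if m]
    xs = [p[0] for p in inside]
    ys = [p[1] for p in inside]
    return (min(xs), max(xs)), (min(ys), max(ys))

# Toy geometry (mm): trigger window 2 x 4, DUT covering only a corner.
tracks = [(0.1, 0.2), (0.9, 0.8), (1.8, 3.5), (0.5, 3.9)]
matched = [True, True, False, False]
print(dut_window(tracks, matched))  # ((0.1, 0.9), (0.2, 0.8))
```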

Data issues
The raw data can be very different from one DUT to another:
– The user should only have to specify headers, trailers, data encoding (binary, hexadecimal, etc.), number of ADC bits, number of submatrices, etc.
Somehow need to reduce the amount of data via reconstruction:
– Select hit and track objects  keep only a small amount of data
– Example: Mimosa 9, 4 matrices (with different pitches)  to get a few 1000s of events in a given matrix  ~10-20 GB runs
Somehow need to be able to monitor everything during the run:
– Example 1: pedestal and noise of a given number of pixels versus event number  huge amount of data if done for all the pixels
– Example 2: some common mode in a given region
Study inefficiency:
– Suppose you reconstruct a track but have no corresponding hit in the DUT; to know what happened you need access to the DUT signal AFTER the alignment is done!
Analysis software should allow:
– Some standard reconstruction
– Some « monitoring » reconstruction = event display/scan
– Some 2nd access to the raw data after reconstruction
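The "user only specifies the format" idea above could look like the following sketch. The byte layout (16-bit header word, then 32-bit words packing pixel index and ADC) is invented purely for illustration; only the principle of a format-driven decoder is taken from the slide.

```python
# Sketch of a format-driven raw-data decoder: the user declares the
# header word and the ADC width instead of writing per-DUT parsing code.

import struct

def decode(raw, header=0xCAFE, n_adc_bits=12):
    """Return (pixel_index, adc) pairs from one event's raw bytes."""
    (magic,) = struct.unpack_from(">H", raw, 0)
    if magic != header:
        raise ValueError("bad event header")
    adc_mask = (1 << n_adc_bits) - 1
    out = []
    for off in range(2, len(raw), 4):
        (word,) = struct.unpack_from(">I", raw, off)
        out.append((word >> n_adc_bits, word & adc_mask))  # (index, adc)
    return out

# Two pixels: index 5 with ADC 100, index 9 with ADC 2047
raw = struct.pack(">HII", 0xCAFE, (5 << 12) | 100, (9 << 12) | 2047)
print(decode(raw))  # [(5, 100), (9, 2047)]
```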

Some examples of monitoring
Event display
Efficiency vs fake rate
Noise: vs regions, time, etc.
Resolution: vs method, impact position in the pixel, etc.
S/N: in seed, neighbours, etc.
Clusters: size, charge sharing, etc.
Double hit separation (time stamping?)
Matrix uniformity
Radiation, temperature, read-out frequency, etc.
Incidence angle
Digitisation, ADC, etc.
Edge effects

Conclusion
The software will have to deal with many different configurations  it's not as simple as a « select hits and tracks » software
Users will want to study everything and, in particular, things you didn't foresee
 The architecture has to be carefully discussed at the beginning (very bad experience with software made of hundreds of patches)
 User input is crucial

Back-up

Eta function principle
1/ Compute the centre-of-gravity (CoG) position from the 3x3 cluster charge (Qi) information
2/ Plot the CoG distance from the centre of the pixel, U_CoG - U_dig, over [-pitch/2, +pitch/2]
3/ Integrate this distribution to get the f_eta distribution function; if there were no bias, the distribution would be flat
[Plot: N_entries vs U_CoG - U_dig (µm) over [-pitch/2, +pitch/2], with pixel axes u, v and digital position U_dig, V_dig]

4/ Normalize by the number of entries (N_entries), multiply by the pitch and shift by -pitch/2
5/ Get a flat distribution of all hits within [-pitch/2, +pitch/2]; each output bin width is proportional to the number of entries in the corresponding input bin
Consequences/issues:
 Need to generate these eta functions…
 Bin size versus available statistics (N_entries)
 The CoG can fall outside the range [-pitch/2; +pitch/2] (happens ~1/1000)
 The number of distinct CoG values (from the ADC, so from distinct charge values) cannot be lower than the number of bins; example: 2-bit ADC ~ 17 values; 3-bit ADC ~ 89 values; 4-bit ADC ~ 400 values
 Low statistics « in the corners of the phase space »
 Assumes no correlation between the U and V directions (not completely true)
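Steps 1-5 can be sketched numerically: integrate the histogram of CoG - pixel-centre distances, normalise by N_entries, scale by the pitch and shift by -pitch/2. The histogram contents below are invented; only the construction follows the slides.

```python
# Numerical sketch of the eta-function construction: a cumulative,
# normalised, pitch-scaled and -pitch/2-shifted CoG histogram.

def eta_function(cog_hist, pitch):
    """Map each histogram bin to its eta-corrected position (µm)."""
    n_entries = sum(cog_hist)
    eta, running = [], 0
    for count in cog_hist:
        running += count                              # integrate
        eta.append(running / n_entries * pitch - pitch / 2.0)
    return eta

# Toy histogram of U_CoG - U_dig over [-pitch/2, +pitch/2], pitch = 20 µm.
# The CoG piles up near the pixel centre (middle bins): that is the bias.
hist = [1, 3, 6, 6, 3, 1]
print(eta_function(hist, pitch=20.0))  # [-9.0, -6.0, 0.0, 6.0, 9.0, 10.0]
```

Note how densely populated central bins map onto wide output intervals (from 0 to 6 µm) while sparse edge bins map onto narrow ones, spreading the hits flat across the pixel as step 5 requires.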

Objectives
 Test the sensors with m.i.p.s
 Reconstruct the particle's passage thanks to a telescope
 Alignment with respect to the chip under test
 Characterise the sensors via:
– signal-to-noise ratio
– detection efficiency
– collected charge (seed pixel and cluster)
– spatial resolution
 Different parameters:
– temperature
– irradiated chips (X-rays, e-, n)
– exploration of different fabrication processes
– exploration of geometrical parameters: pitch; surface of the charge-collection diode; thickness of the epitaxial layer
 Complementary studies:
– selection criteria / efficiency
– incidence angle
– double-hit separation power
– mapping of the matrices (uniformity of the characteristics)
– uniformity between prototypes
– digitisation effects, etc.
 Precise characterisation of sensor performance necessarily requires beam tests