
1 Autopsy of Measurements with the ATLAS Detector at the LHC
Pierre-Hugues Beauchemin, Tufts University
Workshop on the Epistemology of the LHC, USC, 04/25/2014

2 Outline
- Philosophical methodology adopted and objectives of the presentation
- Critical study of measurement components:
  - Event selections
  - Background estimates
  - Unfolding corrections
  - Systematic uncertainties
  - Combination of compatible results
- Interpretation of the experimental results
- Discovery perspectives
- Philosophical implications of the analysis of the measurement components

3 Philosophical Approach Adopted

4 Studying the facts…
- Much discussion concerns the meaning or the elaboration of theories, their relation to experiments, the corroboration of ideas, etc.
- Crucial to these discussions is what experimental results, the "facts", entail for them
  - E.g., theory-ladenness
- I want to focus on a critical analysis of the "facts" in HEP, to trigger philosophical discussion of the implications that the observations obtained from these studies have for epistemological conceptions
- Obtain observations of philosophical interest from concrete examples of measurements performed with the ATLAS detector at the LHC
⇒ Somehow experimental philosophy…

5 Disclaimer
- Analyzing scientific statements from a critical perspective generally implies some philosophical a prioris about conceptions of science
- I will try to outline, using examples, on which grounds experimental results are established, what they mean, how they are interpreted, and how they are used in practice in HEP
- As such, the bias toward specific epistemological options will be minimized, yielding raw material to be used in various elements of philosophical discussion

6 Philosophical questions
- Examples of philosophical questions relevant to the analysis I will present:
  - To what extent does theory impact experimental results?
  - What is the role of simulation and modeling in the meaning of data?
  - Which criteria establish a result as robust and accepted?
  - How do uncertainties affect the meaning of results?
  - How quantitative and how qualitative are result interpretations?
  - On what basis are claims of discovery made?
  - How are theories corroborated by experimental results?
- Some of these have broader philosophical reach than others; some will be covered less than others. The aim of my talk is to do the spadework on such topics from an LHC experiment perspective.

7 Measurements in HEP

8 What is a measurement in HEP?
- A measurement consists in taking a set of data obtained from the output of a detector and applying a number of corrections that transform the instrumental outcome into a quantity endowed with a theoretical meaning, and thus comparable to predictions.
- Typical measurements are of a few different types:
  - Properties of particles
    o W mass, top-quark charge, Higgs width, etc.
  - Parameters of the theory
    o Strong coupling constant, weak mixing angle, etc.
  - Cross sections and differential cross sections of given processes
    o dσ(W+jets)/dp_T^jet, forward-backward asymmetry, etc.
  - Discrepancies with respect to known (and thus expected) phenomena
    o Searches for new physics in jets + E_T^miss events, etc.
- There are no large epistemological differences between them…

9 How are measurements obtained? (I)
- I simply want to outline here the elements to be discussed below
- I will mostly use one example of the "cross-section measurement" type to drive the discussion, although the discussion and conclusions apply essentially equally to other types of measurements
- Situation: let us suppose we want to measure the momentum of the leading jet in W(→eν)+jets events
- The measurement result will be attributed to the corresponding theory expression [from Campbell and Ellis: hep-ph/]

10 How are measurements obtained? (II)
- To perform such a measurement we:
  - Collect electronic signals from the detector to form events, and apply selection cuts to the reconstructed objects in events (N_i^data)
  - Correct the raw jet distributions for background (N_i^bkg)
  - Correct for efficiency and resolution effects (U_i(O))
  - Apply acceptance corrections if needed (A_i)
  - Combine results from comparable channels
    o E.g., the W→eν and W→μν channels
- Assuming that the results are binned into a histogram, the measurement can be summarized for bin i as shown below
- This is what is equated to the theory quantity of the previous page
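The formula itself did not survive the transcript. For a cross-section measurement it conventionally takes the form below, where L_int denotes the integrated luminosity (my addition; the slide lists the other symbols explicitly):

\sigma_i \;=\; \frac{\left(N_i^{\mathrm{data}} - N_i^{\mathrm{bkg}}\right)\, U_i(O)}{A_i \, L_{\mathrm{int}}}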

11 Events and Selections

12 What is an event?
- Measurements are performed on a dataset, which is a list of reconstructed events obtained from the detector output
- An event is a set of empirical objects (tracks, calorimeter clusters, etc.), identified from a set of electronic signals and traced back to a given bunch crossing (25 ns time window)
  - All signals are synchronized by the Event Builder
- These empirical objects are then used to identify physics objects (electrons, muons, jets, etc.) and to reconstruct their kinematics (energy, momentum, charge, etc.)
- An event is thus interpreted as the realization of a physics process, represented by Feynman diagrams, at a point in phase space
- We will show that this is a naïve and inadequate interpretation

13 Events can be visualized
[Event display with a labeled jet and muon]

14 Calorimeter signal (I)
- The analysis performed here is clearest for calorimeter signals and jets, but applies equally well to other detectors and objects
- Let us assume "Nature" inputs a π+ of 50 GeV into a calorimeter
  - Sloppy physicist language, just for the sake of the argument…
- The pion produces a hadronic shower
  - E.g., light in scintillators produces an electronic signal via a photomultiplier tube
- The energy of the cells is obtained from the signal shape

15 Calorimeter signal (II)
- The interactions of the π+ or secondary particles, and the material response, are probabilistic in nature
- The particle content in the active material and the signal fluctuate randomly, for the same input, in repeated detection
[Plots from AMS-02 and ATLAS]

16 Calorimeter signal (III)
- The kinematics of an object in an event is thus an instance randomly picked from the resolution distribution
  - Approximate the resolution distribution by a Gaussian
  - Calibration and binning choices also affect the relation to the "input"
- Due to this stochastic ambiguity, there is no way to tell, in a given event, what the reconstructed kinematics of a particle/object is referring to
  o E.g., is a measured energy of 50 GeV the central value of a 50 GeV input, or a 3σ fluctuation of a 15 GeV input?
- The reconstruction of the kinematics in one event is not a measurement of the kinematics of a process, since no inference can be made to the input from one event
- Only a large sample of events can be used to give meaning to the reconstructed kinematics and connect it to the "input particles"
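A minimal toy of this point, a sketch rather than the ATLAS reconstruction: Gaussian smearing with an assumed stochastic resolution term, showing that a single smeared reading cannot be traced back to its input while an ensemble can.

```python
import numpy as np

rng = np.random.default_rng(0)

def smear(e_true, n, stochastic=0.5):
    """Toy calorimeter response: Gaussian smearing with an assumed
    stochastic term, sigma/E = stochastic/sqrt(E)."""
    sigma = stochastic * np.sqrt(e_true)
    return rng.normal(e_true, sigma, n)

# One event tells us almost nothing: a single reading is compatible
# with many different inputs.
print(smear(50.0, 1))

# Only an ensemble constrains the input: the sample mean converges.
sample = smear(50.0, 100_000)
print(sample.mean(), sample.std())   # ~50 GeV, ~3.5 GeV
```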

17 Backgrounds

18 What are backgrounds?
- Various physics processes can generate the same experimental signature as the process of interest
1) Irreducible background: the experimental signature is due to the same final state as the underlying physics process
2) Reducible background: the experimental signature is due to detector effects that mimic the experimental signature of the process of interest
[Illustration: Z(→νν)+1-jet vs. a QCD dijet event reconstructed as 1 jet + E_T^miss]

19 Assignment to a process?
- A given selected event cannot be assigned to one process or another: only the probability that it comes from a specific process can be estimated
- Not only can the kinematics and properties of the particles in an event not be observed and assigned to one underlying set of particles, but the event itself cannot even be assigned to a specific process and particle content

20 Philosophical implications
- Quantities and objects reconstructed in one event are NOT an observation of the corresponding quantities or particles coming from an underlying process
  - An event is an instrumental artifact
- Only statistical inferences can be made, which implies that a lot of theoretical input is needed to make such inferences
- If underlying objects must be defined from their observations, it is hard to see how properties of individual particles can be defined, given that they cannot be observed
- This is not a big surprise: theory makes no sense of single events
  - Theory predicts laws of probability, and this is what can be tested
- To avoid confusion, we refer to measurements rather than observations

21 Event selections
- The sensitivity of an experiment to the physics of interest can be compromised by backgrounds
- The idea is to keep, in the dataset on which the measurement is performed, only those events that have a high probability of coming from the process of interest, using cuts on discriminating distributions:
  - Choice based on purity vs. efficiency
  - Criterion: minimize uncertainties
- The figure of merit can be improved by multivariate techniques (see the toy scan below)
- The residual background contribution must be estimated
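A toy sketch of the purity-vs-efficiency trade-off behind such cut choices; the shapes, the per-event weights, and the use of s/sqrt(s+b) as figure of merit are illustrative assumptions, not the ATLAS procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
sig = rng.normal(60.0, 10.0, 10_000)     # toy discriminating variable, signal
bkg = rng.exponential(15.0, 100_000)     # toy steeply falling background

w_sig, w_bkg = 0.01, 0.1                 # assumed normalizations per event
for cut in (40.0, 60.0, 80.0):
    s = w_sig * (sig > cut).sum()
    b = w_bkg * (bkg > cut).sum()
    eff = (sig > cut).mean()             # signal efficiency of the cut
    purity = s / (s + b)
    # Tightening the cut trades efficiency against purity; neither
    # extreme maximizes the figure of merit.
    print(f"cut>{cut}: eff={eff:.2f} purity={purity:.3f} "
          f"s/sqrt(s+b)={s / np.sqrt(s + b):.2f}")
```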

22 Comments on event selections
- There is a danger that the wanted signal gets sculpted by tuned cuts, and thus becomes an artifact of the selections made to eliminate the background rather than a physics effect to measure
  - Robustness studies and tests must be done
  - Philosophical implications discussed by Karaca
- Blind analyses in searches for new physics reduce the impact of human bias on the results
- Multivariate techniques often use algorithms trained to "recognize" a signal, and as such add an extra layer of statistical relationship between empirical data and measurement results
- I want to focus the discussion on background estimation procedures

23 Background estimation techniques
Three different techniques are typically used to estimate backgrounds:
1. Simulation-based estimates: apply the event selections to samples of events obtained from simulations
2. Fit to data and extrapolate: take a guessed functional form and fit it to a certain range of a distribution obtained from data; extrapolate the function into the probed phase space
3. Data-driven background estimates: fit a distribution template, obtained from an uncorrelated data set representative of the background to be estimated, together with a template for the signal, to the data sample of interest
We will see that these estimates embed theoretical inputs, various types of modeling, or arbitrary choices in the experimental results

24 Simulation-based estimates
- Large dependence on theory and modeling from various sources:
  - Theory cross sections
    o sometimes even the one to be measured
  - Parton distribution functions:
    o multi-parameter empirical functions fitted to some data and supplemented by theoretical extrapolation equations
  - Hadronization and underlying event:
    o phenomenological models of non-calculable effects expected from theory, with parameters tuned to data
  - Soft radiation (parton shower):
    o theoretical calculations performed under simplifying assumptions
  - Detector effects from simulation of:
    o the geometry of the detector, the theory of the interaction of particles with matter, and empirical laws developed in dedicated experiments or tuned to test-beam data

25 Fit and extrapolation
- The functional form comes neither from first principles nor from experimental measurement, but from a guess that "works" according to goodness-of-fit criteria, i.e. a statistical evaluation of the quality of the fit to data
- The actual function cannot be chosen solely on the basis of quantitative statements, because higher-order polynomial functions yield better fits, but also fit the fluctuations
  - The choice is thus rather appreciative
- It is assumed that the function fitted on the control-region data is a valid proxy for the background in the signal region, with no bias…
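A minimal sketch of the method, under an assumed exponential form for the background; the ranges and numbers are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

# Toy "mass" spectrum: a falling background observed over the full range
x = np.linspace(100.0, 200.0, 50)
y_true = 1000.0 * np.exp(-x / 40.0)
y_obs = rng.poisson(y_true).astype(float)      # statistical fluctuations

def model(x, n, tau):
    """Guessed functional form; nothing guarantees it holds outside the fit range."""
    return n * np.exp(-x / tau)

ctrl = x < 150.0                               # control region: low mass
popt, _ = curve_fit(model, x[ctrl], y_obs[ctrl], p0=(1000.0, 40.0))

# Extrapolate into the "signal region" (x >= 150) and compare
sr = ~ctrl
print("predicted bkg in SR:", model(x[sr], *popt).sum())
print("actual bkg in SR:   ", y_obs[sr].sum())
```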

26 Data-driven estimates
- As with the "fit and extrapolate" method, it is assumed that no biases are introduced by the "sideband" selections used to get the template
  - This is virtually impossible to test, and biases are unavoidable
  - Experimentalists deal with this in the systematic uncertainties
- Many other caveats can affect the adequacy of the estimate:
  - All the MC effects described above generally affect the signal templates
  - The choice of fit range is arbitrary
  - Statistical fluctuations in the data can create biases in the estimate, which can thus be smeared out by hand when judged relevant
  - The estimate could be biased by the trigger
- Data-driven background estimates are thus not fundamentally different from the other approaches regarding their impact on the meaning of an experimental result. They do not involve fewer assumptions.
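A sketch of such a template fit with invented templates and yields, using a simple binned Poisson likelihood; real analyses use more elaborate machinery, but the structure is the same: the data sample is described as a mixture of a signal shape and a sideband-derived background shape.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
bins = np.linspace(0.0, 100.0, 21)

# Templates (normalized shapes): signal from MC, background from a sideband
sig_t, _ = np.histogram(rng.normal(60, 8, 50_000), bins=bins, density=True)
bkg_t, _ = np.histogram(rng.exponential(25, 50_000), bins=bins, density=True)

# "Data": a mixture with yields unknown to the fit
data, _ = np.histogram(np.concatenate([rng.normal(60, 8, 2_000),
                                       rng.exponential(25, 8_000)]),
                       bins=bins)
width = np.diff(bins)

def nll(pars):
    """Binned Poisson negative log-likelihood for the yields (n_sig, n_bkg)."""
    n_sig, n_bkg = pars
    mu = np.clip((n_sig * sig_t + n_bkg * bkg_t) * width, 1e-9, None)
    return np.sum(mu - data * np.log(mu))

res = minimize(nll, x0=(5_000.0, 5_000.0), method="Nelder-Mead")
print(res.x)   # fitted yields, close to the injected 2000 and 8000
```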

27 Summary of implications
- The experimental measurement results, obtained by subtracting background estimates from data events, thus intrinsically contain:
  - Theory elements that are supposed to be tested by the measurement
  - Empirical inputs that implicitly relate the obtained results to many other measurements
  - Simplifying assumptions and approximations that are not even faithful to the theoretical framework
  - Some ambiguities and arbitrary choices
  - Various fits or tunings to data, sometimes correlated with those used in the measurement
- These are concrete examples of the theory-ladenness of experimental "facts"
- To account for all this in a scientific way, the SM is assumed, and the residual ambiguities on these effects are added to the systematic uncertainties, which become an intrinsic component of measurements

28 Unfolding

29 What is unfolding?
- A procedure used to correct distortions caused by detector effects on the law of probability (differential cross section) to be inferred from data
- The strategy consists in using simulations to model the detector response R on an input distribution f_theo, and applying the inverse of this response to data
- The simplest method is the bin-by-bin correction, where the ratio of the theory distribution to the simulated distribution is used to correct the data (sketched below)
  - Problem: for large resolution effects, the results can depend significantly on the theory distribution used, leading to biases toward the theory to be tested
- More sophisticated methods use response matrices for the distribution to be measured, which account for migration between bins
  - Statistical methods are however needed to deal with the amplification of statistical noise affecting the solutions
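A few lines illustrating the bin-by-bin correction with made-up numbers; note that it ignores migrations between bins entirely, which is why the result inherits the shape of the MC truth spectrum.

```python
import numpy as np

# MC gives, per bin, a truth-level and a reco-level (detector-smeared) spectrum
mc_truth = np.array([100.0, 300.0, 150.0])
mc_reco  = np.array([110.0, 270.0, 160.0])

data_reco = np.array([95.0, 310.0, 140.0])

# One correction factor per bin; any dependence of the MC truth shape
# on the theory used propagates directly into the "measured" result.
c = mc_truth / mc_reco
print(c * data_reco)
```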

30 An example: Bayesian unfolding
- First the response matrix is formed
- Then the input distribution is used to form a prior
- Bayes' theorem is used to perform the matrix inversion
- The procedure is iterated to reduce the dependence on the prior
- The result is applied to the data distribution
- This accounts for efficiency and residual calibration corrections
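A compact sketch of the iterative procedure, in the style of D'Agostini's method, which I take this slide to describe; the toy response matrix and counts are invented.

```python
import numpy as np

def bayes_unfold(data, R, prior, n_iter=4):
    """Iterative Bayesian unfolding.

    data  : observed counts per reco bin, shape (n_reco,)
    R     : response matrix, R[j, i] = P(reco bin j | true bin i)
    prior : initial guess for the true-level spectrum, shape (n_true,)
    """
    eff = R.sum(axis=0)                # detection efficiency per true bin
    p = prior / prior.sum()
    for _ in range(n_iter):
        folded = R @ p                 # expected reco-level shape
        M = (R * p).T / folded         # Bayes' theorem: P(true i | reco j)
        unfolded = (M @ data) / eff    # efficiency-corrected estimate
        p = unfolded / unfolded.sum()  # becomes the prior of the next step
    return unfolded

# Toy: 2 true bins, 2 reco bins, 20% bin-to-bin migration, full efficiency
R = np.array([[0.8, 0.2],
              [0.2, 0.8]])
data = np.array([120.0, 180.0])
print(bayes_unfold(data, R, prior=np.array([0.5, 0.5])))
# approaches the exact inversion [100, 200] as iterations increase
```

Truncating the iterations acts as a regularization: the result retains some memory of the prior, which is precisely the dependence the iteration is meant to reduce.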

31 Implications
- The distributions thus obtained are a statistical inference of the targeted distributions, and must be interpreted as such
  - They are not observed distributions from which detector effects would have been removed (a misconception)
- The actual result is intrinsically not a unique distribution, but a posterior probability density defined on the space of possible spectra. The best estimator is typically quoted as the unfolded result
- This explicitly uses simulation, with all the caveats mentioned above, to form the estimate of the theory-level distribution from data
  - Once again, the doubts about these effects are encoded in the uncertainty
- The connection to the underlying physics is reconstructed, guided by theory assumptions and statistical tools, but not observed

32 Uncertainties and Combination of Compatible Results

33 Being intrinsic to empirical facts
- We already saw that uncertainties account for theoretical dependences, ambiguous assumptions, empirical modeling, etc.
  - Estimate how much the results vary when these effects are varied
  - They are roughly the variance of the estimate constituting the "central value"
- Experimental results make no sense without uncertainties
  - There is no such thing as an empirical fact in the naïve sense of the term
- They also provide the repeatability criteria essential to science, and are thus at the very foundation of discovery
  - Under-estimating uncertainties leads to artificial inconsistencies; over-estimating them can ruin the sensitivity to the physics to be discovered
- In order to understand the results of measurements in HEP and to interpret them, we must analyze how uncertainties are estimated and what they mean

34 Analysis of uncertainty estimates (I)
- Consider a measurement result R = X ± ΔX
- This means that if the experiment were repeated many times in compatible conditions, the results would be distributed around X with a spread of ΔX
  - X is like the most probable value given the experimental outcome, and ΔX is usually seen as the range in which the "real" value lies
- This interpretation is subject to revision:
  - The results are typically assumed to be distributed as a Gaussian around X, but this applies only to statistical uncertainties, not to systematics
  - Often systematics are obtained as the difference in results between two assumptions. The ΔX thus obtained is NOT an estimate of the width of a Gaussian around X
  - The law of distribution of X is not known; a 3σ deviation will not necessarily correspond to a p-value of 0.3% if systematics are important
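For the record, the Gaussian tail probabilities referred to here and in the discovery discussion later; they hold only if the distribution of results really is Gaussian, which is exactly what systematics-dominated uncertainties do not guarantee.

```python
from scipy.stats import norm

print(2 * norm.sf(3))   # two-sided 3 sigma: ~0.0027, the "0.3%" p-value
print(norm.sf(5))       # one-sided 5 sigma: ~2.9e-7, the discovery convention
```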

35 Analysis of uncertainty estimates (II)
- An experimental result is one instance of all the possible results that can be obtained in repeated experiments. It is possible that this result does not correspond to the central one, shifting the uncertainty band
- The usage of uncertainty bands is misleading, and a proper interpretation of the results must not be confounded by these bands:
  - A band doesn't reveal anything about the assumed shape of the distribution of results; it is thus hard to say it is a 1σ confidence interval
  - There are a lot of correlations that make the picture more complex, but bands lose this information, which does change the results
    o Correlations between bins, observables, etc.
    o Careful experimentalists account for the full set of correlations, but results are still often presented as X ± ΔX, which can then be used inadequately afterwards
- This impacts the interpretation made of the experimental results

36 Example of information lost in a band
- Example: the uncertainty on the jet energy calibration, showing that different sources affect the shape of a distribution differently; the band is a simplification that can affect results
- This is the question of adding in quadrature first and propagating after, or vice versa

37 Why discuss combination?
- To enhance the precision or the sensitivity of a measurement to the reported phenomena, independent experimental results are often combined
  - E.g., m_top, the Higgs observation, σ(W+jets), etc.
- Some caveats concerning combination are fundamental to a proper understanding of the resulting values, making combined results often complementary to, but not necessarily superseding, the individual results
  - Combination can make systematics relatively even more important compared to statistical uncertainties
- We will analyze a combination procedure to outline these caveats and account for them in the interpretation of experimental results

38 Some examples
[Combining data sets for the Higgs discovery; combining top-quark mass measurement results]

39 Caveats (I)
- The combination is performed with statistical methods from the experimental results, and is thus an estimate obtained using assumptions that often have no empirical origin, such as:
  - The combination can be linear -> the combination is a weighted average
  - The weights are obtained under specific objectives, such as minimizing the uncertainty or maximizing the sensitivity
- These objectives are built into the new combined result, which thus constitutes, in a way, a theoretical modeling from experimental inputs
- Sometimes combined results are counter-intuitive
  - Input measurements with negative weights (see the sketch below)
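A sketch of a linear, minimum-variance weighted average (the BLUE prescription, which I believe is what such combinations use), with invented numbers, showing how a strong correlation between the inputs produces a negative weight.

```python
import numpy as np

def blue(values, cov):
    """Best Linear Unbiased Estimate: minimum-variance weighted average."""
    cinv = np.linalg.inv(cov)
    ones = np.ones(len(values))
    var = 1.0 / (ones @ cinv @ ones)
    w = var * (cinv @ ones)
    return w @ values, np.sqrt(var), w

# Two measurements: sigma1 = 1, sigma2 = 2, correlation rho = 0.8
vals = np.array([10.0, 11.0])
cov = np.array([[1.0, 1.6],
                [1.6, 4.0]])
mean, err, w = blue(vals, cov)
print(w)            # -> [ 1.333 -0.333]: a negative weight
print(mean, err)    # the combination can even lie outside both inputs
```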

40 Caveats (II)
- Essential to combination is the compatibility of the input measurements
- Data or background fluctuations can apparently ruin compatibility
  ⇒ A decision must be taken as to whether these fluctuations are genuine or not. This cannot be done mechanically by a simple statistical test
- Combination loses information on individual channels that could be essential to the physics message resulting from the measurement
- Results could be theoretically incompatible, but empirically compatible within uncertainties
  o E.g., the top-mass definition in top-mass measurements
  ∴ Compatibility is defined with respect to a given experimental context and might not be valid in another one
- Combination must make simplifying assumptions about correlations
- Combination is driven by our understanding of the input measurements, and is not a purely formal and quantitative operation

41 Interpretation of the results

42 Comparison to predictions
- To obtain a physics interpretation, measurement results are compared to theoretical predictions
- This produces the "physics message" of the measurement, i.e. it determines what the measurement allows one to conclude with respect to the objectives and to possible unexpected observations
- From this comparison, the value of the measurement itself will be judged
  - Sometimes it will point to a deficiency not in the theory, but in the measurement
  - This typically involves comparison to other measurements too
- The physics meaning of experimental results depends on the assumptions used in them, on the theories they are compared to, and on other physically correlated but experimentally independent measurement results

43 A few examples of results
[Sensitivity to the SM Higgs; apparent incompatibility between the Higgs mass in two resonant channels; which pQCD prediction better describes the φ* results?]

44 Other types of comparison
- Example of results where many experimental results are used in the predictions, testing the consistency of the overall theory

45 Analysis of the comparison (I)
- Meaningless or incomplete statement: "We have an agreement between data and predictions"
  - "Agreement" is vague and indecisive without reference to the criteria used
- A better statement would be: "We have a 10% or better agreement, at 68% CL, between this theory prediction and the reported measurement results."
- However, this explicitly refers to elements of a measurement for which we raised caveats earlier. Let us see what they imply for the interpretation of the results

46 Analysis of the comparison (II)
- The lack of knowledge of the distribution of results underlying the uncertainty makes quantitative statements like the above often inadequate or approximate when uncertainties are systematics-dominated
- Since results and uncertainties are only statistical in nature, quantitative statements give only an appreciation, not a verdict
  - A 10% deviation with a 10% uncertainty is not enough for a discovery, but is annoying in precision measurements
- Deviations are not uniform across a distribution, and multiple comparison statements must be made
- Deviations can be attributed to a statistical fluctuation when other measurement results are also considered
- It is very hard to determine how much a theory must be changed to accommodate the data. Stabilization occurs with collective results
- Theory-experiment comparison is an adjustment procedure

47 Discovery results (I)
- The interpretation of experimental results is highly biased toward known physics
- Most observed deviations will be considered as "agreement" with the SM
  - Embarrassing disagreements are required for new physics to even be considered
- The discovery of never-before-observed phenomena that are expected from the SM is established more easily
  o Smaller deviations from the null hypothesis, less corroboration by independent results, etc.
  o E.g., single top, B-Bbar oscillation, rare decays, etc.

48 Discovery results (II)
- The oft-quoted 5σ criterion for a discovery is in fact only a guideline
  - The rationale: no statistical fluctuation would plausibly produce such a deviation
  - 4.9σ is just as fine, so 5σ is only used, arbitrarily, for sensitivity studies
- Actual 5σ results do NOT receive an immediate discovery interpretation
  - Overlooked systematic effects are generally first proposed and assumed
  - Corroboration from other results is needed and looked for
  - Contradicting results are given more weight than "discovery results"
- Establishing a discovery or a theory is a kind of holistic process that includes theories, many experiments, various iterations, etc.

49 Example: Direct Dark Matter Searches
- DAMA/LIBRA (2010) had an 8.9σ signal, and yet there was no claim of discovery
  - Annual modulation of a signal expected from the DM "wind"
  - Uses scintillator technology
- But KIMS didn't find any signal (2012) with similar technology…
- CoGeNT and CRESST also observed signals in 2012, with solid-state cryogenic detectors
- But not EDELWEISS (2011) and CDMS (2010) for similar masses, nor XENON100 with noble-liquid technology

50 A special case: resonances (I)
- The discovery of new resonant states suffers from fewer caveats, making the interpretation and the claim of discovery easier
  - That was the case for the Higgs boson
- The quantum-mechanical signature of massive unstable particles is a resonance, and this is directly observable from a sufficiently large dataset when the decay products are detectable
[Plot: observed position of the peak = mass; observed width -> lifetime]
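The textbook relations behind these labels: up to detector effects, the line shape is a Breit-Wigner whose peak position gives the mass M and whose width Γ fixes the lifetime τ:

\frac{dN}{dm} \;\propto\; \frac{1}{\left(m^2 - M^2\right)^2 + M^2\Gamma^2},
\qquad \tau = \frac{\hbar}{\Gamma}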

51 A special case: resonances (II)
- Resolution and calibration issues affect the measurement of the resonance characteristics (M, Γ, etc.), but not the observation of the resonance
  - Only the quantified probability that the resonance doesn't come from a background fluctuation is slightly affected by this
- The resonance is observable on top of a smooth background
  - We still can't say which events come from which process
- Systematic uncertainties and combination will also only affect statements about the significance of the observed bump
  - Few corrections are applied for the observation
- Direct comparisons can be made across channels and experiments, even if detector effects are not estimated
- Telling what is responsible for it is another story

52 Philosophical conclusions

53 Discussion of underlying physics objects
- Particles and their properties cannot be individually observed (signaled); only their postulated probability laws inside given processes can be statistically inferred from data
  - Events are instrumental artifacts related to electronic signals consistent with a specific time window around a collision
- Inputs to a detector are not defined empirically; causality defines them
  - This is also true on the theory side
- At best, we can infer the collective existence of physics processes, of the particles they involve, and of their characteristics
  - That applies even to the stable ones
- How can this impact the discussion on the ontological status of abstract entities?

54 Theory-ladenness
- Experimental results are inferences about some underlying physics that intrinsically use many theoretical inputs to acquire a physics meaning
  - Guided choice of selections
  - Theory, assumptions, various types of modeling, arbitrary choices, etc. are used to obtain the background estimates used to modify the collected data
    o Regardless of the technique used
  - The unfolding procedure again uses similar a prioris to obtain, from a set of data, an estimate of the theory quantity to be tested
  - Combination implies a qualitative compatibility judgment
  - Comparison to theory is central to the "physics message"
- The connection to the underlying physics is (re)constructed, guided by theory assumptions and statistical tools, but not observed
- This influences discussions about the "rightness" of theories…

55 Establishing theories
- Uncertainty estimates are intrinsic to any experimental result, but leave unknowns and ambiguities concerning the expected distribution of results in repeated experiments
  - Quantitative statements about theory-experiment agreement are indicative
- The meaning of a result involves accepted theories and other, independent experimental results
  - There are no decisive experimental results (well known)
- It is the steering of the large bulk of theory and experimental results altogether that leads to establishing theories or models
  - Discoveries proceed from such a lengthy exercise, and not from a 5σ p-value for the null hypothesis

56 Other topics
- These conclusions have been drawn in the context of the questions raised on slide 6. Many other philosophical questions could also take input from similar analyses of experimental results and measurement procedures
- A few examples of questions to investigate later:
  - The concept of model is used in very different ways in HEP. Studying them might help understand what a model is (or models are) and its (their) epistemological status in the sciences
  - How instrumental signals are obtained
  - The role of fundamental ideas like causality in measurement results
  - How planning affects results (e.g., in searches)
  - Many other exciting things I forgot when writing this…

57 Acknowledgements
- I cannot end this presentation without offering my warm thanks to Michael for the organization of this very interesting workshop, and to his sponsors (the Provost's office)
- I would also like to thank all of you for very fruitful discussions
  - IMO, this is very useful not only for philosophical discussions, but for physics as well
- I hope we'll have the chance to interact frequently in the near future

