Methods of Experimental Particle Physics, Alexei Safonov, Lecture #24


1 Methods of Experimental Particle Physics, Alexei Safonov, Lecture #24

2 The Likelihood Function

3 Parameter Extraction
- The likelihood function discussed earlier is the basis for all further interpretations
- While the absolute value of L is not very telling on its own (you need to calculate the p-value), relative changes in L are
- If the hypothesis has a free parameter, you can check how L changes as a function of that parameter (with the "data" fixed to what you actually observed)
- Typical example: the likelihood function versus the normalization of the signal S
  - Some values of S lead to better agreement with the data than others
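
The scan described above can be sketched in a few lines. This is a toy example, not part of the lecture: a Poisson counting experiment with expected mean S + B, where the background b = 50 and the observed count n_obs = 60 are made-up numbers.

```python
import math

def nll(n_obs, s, b):
    """-log L for Poisson(n_obs; s + b), constant terms dropped."""
    mu = s + b
    return mu - n_obs * math.log(mu)

# Made-up inputs: 50 expected background events, 60 observed.
n_obs, b = 60, 50.0
# Scan the signal normalization S from 0 to 30 in steps of 0.1,
# with the "data" (n_obs) held fixed at what was seen.
scan = [(s / 10.0, nll(n_obs, s / 10.0, b)) for s in range(0, 301)]
s_best = min(scan, key=lambda p: p[1])[0]
print(s_best)  # -> 10.0: the scan prefers S = n_obs - b
```

The minimum of -log L sits exactly where the expected mean matches the observed count, which is the "better agreement" the slide refers to.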

4 Bayesian Interpretation
- While the easiest way to pass along all the information about your measurement is the likelihood function itself, one usually wants a simple way to quantify the result
- Conventionally, people report 68% C.L. intervals for the parameters being measured
  - There are many ways to choose 68%, so a convention is needed
  - One example is the highest posterior density interval: the probability density everywhere outside the interval is lower than anywhere inside it
- C.L. stands for:
  - Confidence Level in the frequentist approach
  - Credibility (credible) Level in the Bayesian approach
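
One way to make the highest-posterior-density convention concrete is a grid scan: sort the grid points by posterior density and keep the densest ones until they hold 68% of the probability. A minimal sketch, assuming (purely for illustration) a unit Gaussian posterior, where the interval should come out close to the familiar +/-1 sigma:

```python
import math

# Toy posterior: standard Gaussian on a fine grid from -5 to 5.
xs = [i * 0.001 for i in range(-5000, 5001)]
dens = [math.exp(-x * x / 2) for x in xs]
total = sum(dens)

# Keep the highest-density points until they contain 68% of the probability.
order = sorted(range(len(xs)), key=lambda i: -dens[i])
acc, chosen = 0.0, []
for i in order:
    acc += dens[i] / total
    chosen.append(xs[i])
    if acc >= 0.68:
        break
lo, hi = min(chosen), max(chosen)
print(round(lo, 2), round(hi, 2))  # close to the familiar (-1, 1)
```

For an asymmetric posterior the same recipe gives an asymmetric interval, which is exactly the "probability outside is less than on the inside" property of the slide.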

5 Frequentist Limits
- Suppose the true value of θ is θ0: θ0 lies between θ1(x) and θ2(x) if and only if x lies between x1(θ0) and x2(θ0)
- The two events therefore have the same probability: P(θ1(x) ≤ θ0 ≤ θ2(x)) = P(x1(θ0) ≤ x ≤ x2(θ0)) = 1 − α, and this holds for any value θ0
- This is the Neyman procedure, constructed to ensure the desired coverage
  - Coverage is the probability that the true parameter is within the interval you report
  - If you say that at 68% C.L. z = 1 +/- 0.1, it had better be that in 68% of experiments the true value of z is within the interval you report

6 Setting Frequentist Limits
- Once you have constructed the likelihood, you can run pseudoexperiments
- Find what the limits should be to ensure the desired coverage for any outcome x
- Report the limits for the x you actually saw in your data
- You still need to decide how to define the borders of the confidence belt
  - Also a convention; one option is to require α/2 of probability on each side
  - Or you can follow a different procedure (e.g. Feldman-Cousins)
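
The belt construction and its inversion can be sketched directly for a Poisson observable. This is an illustrative implementation of the central (α/2 per tail) convention named on the slide, not the Feldman-Cousins ordering; all numbers are made up.

```python
import math

def pois_cdf(k, mu):
    """P(X <= k) for a Poisson distribution with mean mu."""
    return sum(math.exp(-mu) * mu**i / math.factorial(i) for i in range(k + 1))

def central_acceptance(mu, alpha=0.32):
    """Central acceptance interval [x1, x2] for true mean mu:
    at most alpha/2 probability in each tail (one convention)."""
    x1 = 0
    while pois_cdf(x1, mu) <= alpha / 2:
        x1 += 1
    x2 = x1
    while 1.0 - pois_cdf(x2, mu) > alpha / 2:
        x2 += 1
    return x1, x2

def neyman_interval(n_obs, mu_grid):
    """Invert the belt: keep every mu whose acceptance contains n_obs."""
    accepted = []
    for mu in mu_grid:
        x1, x2 = central_acceptance(mu)
        if x1 <= n_obs <= x2:
            accepted.append(mu)
    return min(accepted), max(accepted)

grid = [i * 0.01 for i in range(1, 2001)]   # scan mu from 0.01 to 20
lo_mu, hi_mu = neyman_interval(5, grid)
print(round(lo_mu, 2), round(hi_mu, 2))     # 68% C.L. interval for n = 5
```

Because each tail holds at most α/2 by construction, the reported interval covers the true mean at least 68% of the time, which is the coverage guarantee discussed on the previous slide.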

7 Multi-Parameter Likelihoods
- It is not too difficult to extend what we did to a multi-dimensional parameter space
- Your reported intervals will become ellipses
- The conventional way to report numeric results "on paper" is to "overcover" by quoting rectangular box limits
- But you can also report the correlation matrix, or make a 2D plot when prudent, which happens often
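
A small sketch of where the ellipse and the correlation matrix come from: for a straight-line fit y = a + b*x with known Gaussian errors, the inverse of the Hessian of -log L is the covariance matrix of (a, b). The model is linear in the parameters, so the Hessian depends only on the x positions and sigma, which are made-up numbers here.

```python
import math

# Made-up measurement positions and per-point Gaussian error.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
sigma = 0.2

n = len(xs)
Sx = sum(xs)
Sxx = sum(x * x for x in xs)
# Hessian of -log L in the parameters (a, b) of y = a + b*x
H = [[n / sigma**2, Sx / sigma**2],
     [Sx / sigma**2, Sxx / sigma**2]]
# Invert the 2x2 Hessian to get the covariance matrix
det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
cov = [[H[1][1] / det, -H[0][1] / det],
       [-H[1][0] / det, H[0][0] / det]]
err_a = math.sqrt(cov[0][0])
err_b = math.sqrt(cov[1][1])
rho = cov[0][1] / (err_a * err_b)
print(round(err_a, 3), round(err_b, 3), round(rho, 2))  # -> 0.155 0.063 -0.82
```

Quoting only err_a and err_b (the "rectangular box") overcovers; the sizeable negative correlation is what tilts the 2D ellipse, and reporting rho alongside the errors preserves that information.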

8 One-Sided Limits
- It is typical in HEP to look for things at the edge of your sensitivity
- You frequently can't "see" the signal, but you still want to say that the cross-section for that new process can't be larger than X
- This is also very useful information for theorists and model builders, as your result can rule out a whole class of models or a region of their parameter space
- Can be done with either Bayesian or frequentist methods
  - Most of the time it is fairly straightforward: either construct a one-sided interval with known coverage, or integrate the posterior from 0 to x in the Bayesian case

9 Bayesian vs Frequentist
- Practical differences usually only become relevant when the measured parameter is near a physical boundary
- Example: you are looking for a signal by counting events of some type during one year. You estimated that background processes should yield B = 100 events on average, so any "excess" in data (D = S + B) would indicate a potential signal
  - If you observe 300 events, you clearly see a signal and can easily do the interpretation
  - If you see 100 events, you are still okay
  - But what if you see 30 events, or even zero events?
  - What would be the upper 95% C.L. limit on the signal in each case? Does it make sense?
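
The Bayesian answer to the slide's question can be sketched numerically: with a flat prior restricted to the physical region S >= 0, the posterior is proportional to the Poisson likelihood of D observed events with mean S + B, and the 95% upper limit is where its integral from 0 reaches 0.95. This is a toy grid integration, not a standard tool.

```python
import math

def bayes_upper_limit(d, b, cl=0.95, s_max=300.0, ds=0.01):
    """95% credible upper limit on S: flat prior for S >= 0,
    posterior proportional to Poisson(d; S + b)."""
    def logl(s):
        # Poisson log-likelihood, constant terms dropped
        return d * math.log(s + b) - (s + b)
    s_grid = [i * ds for i in range(int(s_max / ds) + 1)]
    m = max(logl(s) for s in s_grid)           # for numerical stability
    w = [math.exp(logl(s) - m) for s in s_grid]
    total = sum(w)
    acc = 0.0
    for s, wi in zip(s_grid, w):
        acc += wi / total
        if acc >= cl:
            return s
    return s_max

b = 100.0
for d in (300, 100, 30, 0):
    print(d, round(bayes_upper_limit(d, b), 1))
# Even for d = 30 or d = 0 (a deficit below the expected background),
# the prior S >= 0 keeps the limit positive; with d = 0 it comes out
# near 3 events, matching the rule of thumb later in the lecture.
```

This is exactly the boundary behavior the slide is probing: a naive frequentist interval for a large deficit can end up entirely in the unphysical S < 0 region, while the Bayesian limit stays physical by construction.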

10 Bayesian vs Frequentist

11 Practical Notes
- You will usually calculate -log(L) instead of L
- Assuming you are doing a measurement, the minimum of -log(L) is the maximum of L, so it gives the most likely parameter value
- Changing -log(L) by +1/2 gives the 1 sigma deviation of the parameter (68% C.L.): think of taking the log of a Gaussian distribution, which gives (x - x0)^2 / (2 sigma^2), so shifting x by one sigma changes it by 1/2
- With the least-squares (chi-squared) method, you vary it by 1 instead
- MINUIT is the most widely used minimization package in HEP (it is part of ROOT); it is easy to use in simple cases, though some experience is required for more complex ones
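
The Delta(-log L) = 1/2 rule can be checked on the simplest possible case: n Gaussian measurements of a mean mu with known sigma, where the 1 sigma error should come out as sigma/sqrt(n). The data values below are made up for illustration.

```python
import math

# Made-up measurements of one quantity, each with the same known error.
data = [4.9, 5.2, 4.7, 5.1, 5.3, 4.8]
sigma = 0.3

def nll(mu):
    """-log L up to constants: sum of (x - mu)^2 / (2 sigma^2)."""
    return sum((x - mu)**2 / (2 * sigma**2) for x in data)

mu_hat = sum(data) / len(data)        # analytic minimum: the sample mean
target = nll(mu_hat) + 0.5            # the Delta(-log L) = 1/2 rule
up = mu_hat
while nll(up) < target:               # scan upward to the crossing point
    up += 1e-4
print(round(mu_hat, 2), "+/-", round(up - mu_hat, 4))
```

The scanned error agrees with the analytic sigma/sqrt(n), and this is essentially what MINUIT's error analysis does on the profiled -log L, with ERRORDEF = 1/2 for a likelihood and 1 for a chi-squared.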

12 Some Numbers Worth Remembering
- Upper 90% and 95% C.L. limits on the rate of a Poisson process in the absence of background, when you observe n events
- If you observe no events, the 95% upper limit is about 3 events
- In the Bayesian case, this would also be true for any expected background rate B
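
These numbers are easy to regenerate: the classical upper limit for n observed events is the mean mu at which P(X <= n; mu) = 1 - C.L., found here by bisection.

```python
import math

def pois_cdf(n, mu):
    """P(X <= n) for a Poisson distribution with mean mu."""
    return sum(math.exp(-mu) * mu**k / math.factorial(k) for k in range(n + 1))

def upper_limit(n, cl=0.95):
    """Smallest mu with P(X <= n; mu) = 1 - cl, by bisection.
    pois_cdf is monotonically decreasing in mu, so bisection converges."""
    lo, hi = 0.0, 50.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if pois_cdf(n, mid) > 1 - cl:
            lo = mid
        else:
            hi = mid
    return lo

for n in range(4):
    print(n, round(upper_limit(n, 0.90), 2), round(upper_limit(n, 0.95), 2))
# For n = 0 this gives about 2.30 (90%) and 3.00 (95%):
# exp(-mu) = 0.05 solves to mu = -ln(0.05) = 3.0, the slide's "3 events".
```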

13 Un-Binned Likelihood
- A binned likelihood is easy to interpret and gives consistent, predictable outcomes
- But there is a choice of bin size, and strictly speaking you are losing information by lumping events together
- Can you avoid that?
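
The un-binned alternative is to let every event's measured value enter the likelihood directly, with no histogram at all. A toy sketch for an exponential decay-time pdf (1/tau) * exp(-t/tau), where -log L = sum over events of (log tau + t_i/tau) and the maximum-likelihood estimate of tau is simply the sample mean; the "data" are generated events, not anything from the lecture.

```python
import math
import random

# Generate toy decay times with a known true lifetime.
random.seed(1)
tau_true = 2.0
times = [random.expovariate(1.0 / tau_true) for _ in range(5000)]

def nll(tau):
    """Un-binned -log L: each event contributes individually."""
    return sum(math.log(tau) + t / tau for t in times)

tau_hat = sum(times) / len(times)     # analytic minimizer of -log L
print(round(tau_hat, 2))              # should land close to tau_true = 2.0
```

No binning choice was needed, and no information was discarded by lumping nearby times into one bin; that is the trade the slide asks about, at the cost of needing the pdf in analytic (or otherwise evaluable) form.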

14 Next Time
- Take the Higgs results from CMS as a use case and walk through the various plots
- Upper limits, various definitions of tests, central values, confidence intervals, etc.

