Sensitivity of searches for new signals and its optimization


1 Sensitivity of searches for new signals and its optimization
Giovanni Punzi SNS & INFN - Pisa

2 Outline of this talk
Unavoidable preamble: what is a 'search'
Issues with the definition of "sensitivity"
A better definition of sensitivity
Application to counting experiments
Optimizing sensitivity
Real-life applications
Giovanni Punzi - PHYSTAT2003 9/8/2003

3 What is a 'search'?
Search = test of hypothesis + limits.
I have an observable x.
I have a default theory H0 ("no signal").
I have a set of alternatives Hm, depending on a parameter m (e.g. a mass, or the cross section of a new process).
I will choose a significance level α and define an appropriate critical region. I don't care how exactly you do it.
If x is in the critical region, I have a discovery. I ignore what you do after (parameter estimation?).
A useful classical concept, the test power: 1 − β(m), where β(m) = probability of missing the discovery when Hm is true.
If I didn't discover anything, I will set limits on m.
[Figure: p(x) under H0 and Hm, with acceptance and critical regions; power 1 − β as a function of m.]
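The size α and power 1 − β(m) of such a test can be sketched in a toy one-dimensional setting (the Gaussian model and all numbers below are illustrative assumptions, not from the talk):

```python
from statistics import NormalDist

# Toy model (assumed): under H0, x ~ N(0, 1); under Hm, x ~ N(m, 1).
# One-sided critical region {x > c}, with c chosen so the test size is alpha.
alpha = 0.0027  # roughly a one-sided "3 sigma" test
c = NormalDist(0.0, 1.0).inv_cdf(1.0 - alpha)

def power(m):
    """1 - beta(m): probability of landing in the critical region when Hm is true."""
    return 1.0 - NormalDist(m, 1.0).cdf(c)
```

By construction power(0) equals α, and the power rises toward 1 as m moves Hm away from H0.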

4 Plan B: setting limits
I consider Neyman confidence limits, at a (previously) chosen CL.
[Figure: confidence band for Hm, giving limits m1, m2 as a function of x.]
Can use a variety of methods; in general there is no connection with the test.
Making the two agree may be difficult (limits may exclude H0 while no discovery was claimed...). A difficult issue, but outside the scope of this talk.
I will make only a minimal assumption (wait for more).

5 How sensitive is a search experiment?
You usually have many possibilities regarding the sample to use for the measurement (cuts, observables...): how do you make the optimal choice?
Apart from optimization, you sometimes need to be able to quote a figure.
Several approaches, based on two alternative views:
Assume no signal, and ask how tight your limits will be.
Suppose a signal exists, and look at some measure of "significance": S/√B, S/√(S+B), S/(√B + √(B+S)), ... or large power.

6 Troubles with sensitivity definitions
No unified view: leads to quoting two separate figures (can adoption of a particular viewpoint produce a bias?).
Limit optimization is hard to define except in the 1-D, one-sided case; use the mean or median limit.
Problems with significances:
They are "expected", not actual, significances.
S/√B diverges for small B (a tiny expected B with S = 0.1 gets called good).
The others are not obviously what you need.
Also, they depend on the cross section of the signal being sought (which may be unknown).
In principle, what you want is maximum power. But:
In general you can't maximize 1 − β(m) simultaneously for every m.
You don't really care about turning 90 into 100, or 0.01 into 0.1.

7 A different definition of sensitivity of a search
Sensitivity is not given by a number, but by a REGION in parameter space.
Define as the sensitivity region of the experiment the set of m values such that: 1 − β(m) > CL.
The following two statements hold simultaneously:
If the true value of m is anywhere inside the sensitivity region, then you have a probability > CL of claiming discovery.
If you don't get a discovery result, then you will be able* to exclude (at least) the sensitivity region at the chosen C.L. (N.B. this holds independently of the true value of m!).
[Figure: sensitivity region and an exclusion region in the (m1, m2) plane.]
*green covers blue (1 − CL > α); yields tighter limits for free.

8 Advantages of the proposed definition of sensitivity
Makes your experiment predictable.
Good for planning: if one experiment (or a set of experiments) covers the whole parameter space of a theory, you know you have it covered.
"Unified" view of sensitivity.
Good candidate for optimization, whether or not you expect a signal: in both cases you want the sensitivity region to be as large as possible.
No dependence on metric or priors (purely frequentist).
Works in any number of dimensions.
Independent of the choice of a limit-setting algorithm.

9 Application to Poisson ("counting experiment")
The sensitivity region takes the simple form: S(m) > Smin(B).
[Plot: Smin as a function of B, for several choices of (a, CL): (0.95, 0.95), ("3σ", 0.95), ("5σ", 0.90).]
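A minimal numerical sketch of Smin(B) for the counting experiment, under the setup described above: the critical region is {n ≥ n_crit} with size ≤ α, and Smin is the smallest signal expectation giving power ≥ CL (the bisection bounds are arbitrary assumptions):

```python
import math

def poisson_sf(k, mu):
    """P(N >= k) for N ~ Poisson(mu), via the complement of the CDF up to k-1."""
    term = math.exp(-mu)
    cdf = 0.0
    for n in range(k):
        cdf += term
        term *= mu / (n + 1)
    return max(0.0, 1.0 - cdf)

def s_min(B, alpha, cl, s_max=200.0, tol=1e-4):
    """Smallest S with power P(N >= n_crit | B + S) >= cl,
    where n_crit is the smallest count with P(N >= n_crit | B) <= alpha.
    Assumes the answer lies below s_max."""
    n_crit = 0
    while poisson_sf(n_crit, B) > alpha:
        n_crit += 1
    lo, hi = 0.0, s_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if poisson_sf(n_crit, B + mid) >= cl:
            hi = mid
        else:
            lo = mid
    return hi
```

As expected from the plot described on the slide, Smin grows with the background expectation B.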

10 Use in optimization - compare to "significance"
S depends on the cuts t: S(m, t) = ε(t) · L · σ(m) > Smin
⇒ σ(m) > Smin / (ε(t) · L)
⇒ maximize ε(t) / Smin.
Very useful feature: independent of the cross section expected for the signal (like S/√B, but it does not diverge).
[Plot: 1/Smin compared with 1/√B and 1/√(S+B) as functions of B.]

11 Approximate formulas
(a, b) = number of sigmas corresponding to (α, CL).
Tail-improved Gaussian approximation.
Simplest: approximate b ≈ a.
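The b ≈ a simplification leads to the S/(a/2 + √B) form quoted in the later slides; as a figure of merit for cut optimization it becomes ε/(a/2 + √B). A sketch of using it to rank selections (the candidate efficiency/background pairs are made-up numbers):

```python
import math

def punzi_fom(eff, bkg, a=3.0):
    """Approximate figure of merit eff / (a/2 + sqrt(B)),
    with a = number of sigmas of the chosen significance level."""
    return eff / (a / 2.0 + math.sqrt(bkg))

# Hypothetical selections: (signal efficiency, expected background).
cuts = [(0.90, 25.0), (0.60, 4.0), (0.30, 0.2)]
best = max(cuts, key=lambda c: punzi_fom(*c))
```

Unlike ε/√B, this stays finite as B → 0, and it requires no assumed signal cross section.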

12 A real-life example: initial search for charmless B decays @ CDF
Optimizing S/(a/2 + √B): the optimal formula eliminates the fake solution found with tight cuts.
Signal found (summer 2002).

13 Another real-life example: search for the rare decay D0 → µ+µ- @ CDF
Expect B = 1.8.
No signal found; improves on the current best limit (hep-ex/ , recently submitted to PRD).

14 Conclusions
S/(a/2 + √B): a pretty good idea.
