
1 Retraining Kaon Neural Net
Kalanand Mishra, University of Cincinnati

2 Motivation
This exercise is aimed at improving the performance of the KNN selectors. Kaon PID control samples are obtained from the D* decay: D*+ → D0[K− π+] πs+. The track selection and cuts used to obtain the control sample are described in detail in BAD 1056 (author: Sheila Mclachlin). The original kaon neural net (KNN) training was done by Giampiero Mancinelli & Stephen Sekula circa 2000 (analysis 3) using MC events (they did not use a PID control sample). They used 4 neural net input variables: the likelihoods from SVT, DCH, and DRC (global), and the kaon momentum. I intend to use two additional input variables: the track-based DRC likelihood and the polar angle (θ) of the kaon track. I have started the training with the PID control sample (Run 4). I will repeat the same exercise for the MC sample and also for truth-matched MC events. Given the higher statistics and better resolution of the control sample available now, I started with a purer sample (obtained by applying tighter cuts). Many thanks to Kevin Flood and Giampiero Mancinelli for helping me get started and explaining the steps involved.

3 K− π+ invariant mass in control sample
No P* cut: purity within 1σ = 96%. With P* > 1.5 GeV/c: purity within 1σ = 97%.
Conclusion: the P* cut improves the signal purity. We will go ahead with this cut. Other cuts: K− π+ vertex probability > 0.01 and require DIRC acceptance.
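For concreteness, a minimal sketch of applying the control-sample cuts quoted above to an event collection; the field names (p_star, vtx_prob, in_dirc) are hypothetical placeholders, only the cut values come from the slide.

```python
def select_control_sample(events):
    """Apply the control-sample cuts quoted above to a structured array
    (or DataFrame) of candidates. Field names are hypothetical; the cut
    values are the ones listed on the slide."""
    mask = (
        (events["p_star"] > 1.5)       # P* > 1.5 GeV/c cut from the slide
        & (events["vtx_prob"] > 0.01)  # K-pi vertex probability > 0.01
        & (events["in_dirc"])          # kaon track inside DIRC acceptance
    )
    return events[mask]
```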

4 |m(D*) − m(D0)| distribution in control sample
Conclusion: the P* cut does not affect the ∆m resolution.

5 Momentum and cos θ distributions
Panels: kaon P, pion P, kaon cos θ, pion cos θ.
The distributions are almost identical for K and π.

6 P_lab vs cos θ distribution
Panels: kaon, pion.
Conclusion: almost identical distributions for kaon and pion, except on the vertical left edge, where soft pions make a slightly fuzzy boundary.

7 Purity as a function of kaon momentum
Purity values shown in the plot: 93%, 97%, 98%.

8 NN input variables
Panels: scaled P, scaled θ, one scaled variable that is not an input variable, and the SVT likelihood.
Input variables are: P, θ, svt-lh, dch-lh, glb-lh, trk-lh.

9 NN input variables
Panels: DCH likelihood, DRC-global likelihood, DRC-track likelihood.
Input variables are: P, θ, svt-lh, dch-lh, glb-lh, trk-lh.
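As an illustration of how the six inputs could be assembled and scaled: the slides only say the kinematic inputs are "scaled", so the min-max scaling below is an assumption, not necessarily what the KNN training code does.

```python
import numpy as np

def scale_inputs(p, theta, svt_lh, dch_lh, glb_lh, trk_lh):
    """Stack the six KNN inputs into an (n_events, 6) array and scale each
    column to [0, 1]. Min-max scaling is an assumption made here for
    illustration."""
    x = np.column_stack([p, theta, svt_lh, dch_lh, glb_lh, trk_lh])
    lo = x.min(axis=0)
    hi = x.max(axis=0)
    # Guard against constant columns to avoid division by zero.
    return (x - lo) / np.where(hi > lo, hi - lo, 1.0)
```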

10 NN output at the optimal point
A sample of 120,000 events with inputs: svt-lh, dch-lh, glb-lh, trk-lh, P, and θ.

11 Signal performance
A sample of 120,000 events with inputs: svt-lh, dch-lh, glb-lh, trk-lh, P, and θ.

12 Background performance
A sample of 120,000 events with inputs: svt-lh, dch-lh, glb-lh, trk-lh, P, and θ.

13 Performance vs number of hidden nodes
A sample of 120,000 events with inputs: svt-lh, dch-lh, glb-lh, trk-lh, P, and θ.
The performance saturates at around 18 hidden nodes.
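A rough sketch of how such a hidden-node scan could be reproduced. The original training used its own neural-net package; scikit-learn's MLPClassifier and ROC AUC are stand-ins chosen here only to illustrate the scan, with x and y standing for the scaled inputs and kaon/pion labels of the control sample.

```python
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def scan_hidden_nodes(x, y, node_counts=(4, 8, 12, 16, 18, 20, 24, 32)):
    """Train one single-hidden-layer network per node count and return a
    separation figure of merit (ROC AUC) for each, so the saturation point
    can be read off."""
    x_train, x_test, y_train, y_test = train_test_split(
        x, y, test_size=0.3, random_state=0)
    results = {}
    for n in node_counts:
        net = MLPClassifier(hidden_layer_sizes=(n,), max_iter=500,
                            random_state=0)
        net.fit(x_train, y_train)
        results[n] = roc_auc_score(y_test, net.predict_proba(x_test)[:, 1])
    return results
```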

14 Summary
- I have set up the machinery and started training the kaon neural net.
- One way to proceed is to include P and θ as input variables after flattening the sample in the P-θ plane (to get rid of the built-in kinematic bias spread across this plane); a minimal weighting sketch follows this list.
- The other way is to do the training in bins of P and cos θ. This approach seems more robust, but it carries more overhead and requires more time and effort. It also may or may not have a performance advantage over the first approach.
- By analyzing the performance of the neural net on a sample with both of these approaches, we will decide which way to go.
- The performance of the neural net will be analyzed in terms of kaon efficiency vs. pion rejection [and also kaon efficiency vs. pion rejection as a function of both momentum and θ].
- Stay tuned!
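A minimal sketch of one way to compute the flattening weights mentioned in the second bullet, assuming a simple equal-width 2D binning of the P-θ plane; the actual training may weight the sample differently.

```python
import numpy as np

def flatten_weights(p, theta, n_p_bins=20, n_theta_bins=20):
    """Per-event weights that flatten the sample in the P-theta plane:
    each event gets weight 1 / (population of its 2D bin), so every
    occupied bin contributes equally to the training. The binning
    granularity is an arbitrary choice here."""
    counts, p_edges, t_edges = np.histogram2d(
        p, theta, bins=[n_p_bins, n_theta_bins])
    ip = np.clip(np.digitize(p, p_edges) - 1, 0, n_p_bins - 1)
    it = np.clip(np.digitize(theta, t_edges) - 1, 0, n_theta_bins - 1)
    return 1.0 / counts[ip, it]
```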

