MCMC reconstruction of the 2 HE cascade events
Dmitry Chirkin, UW Madison

Markov Chain Monte Carlo
1. Start with some initial values, e.g., x,y,z = COG; θ,φ = 0,0; E = 1e5, t = 0, or even x,y,z = 0,0,0 (as in the example on the first slide).
2. Simulate an EM cascade with these parameters (point 1).
3. Compute the likelihood L1 quantifying the difference between this simulation and the data (here: the same likelihood as in the SPICE fits, or one of your choice, e.g., χ2).
4. Sample the next point 2 from a proposal distribution, e.g., a Gaussian centered on the cascade parameters at point 1.
5. Calculate the likelihood L2 at the new point 2. If L2 > L1, jump to the new point 2; otherwise stay at point 1.
6. Repeat from step 2 until the chain converges to its stationary state (a minimal sketch of this loop follows).
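A minimal sketch of this loop in Python, assuming a toy log_likelihood() that stands in for the simulate-and-compare steps 2, 3, and 5 (in the real fit this runs ppc and evaluates the SPICE-fit likelihood). It uses the standard Metropolis acceptance rule, which also jumps to a worse point with probability L2/L1 instead of staying at point 1 unconditionally:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_likelihood(params):
    # toy stand-in: the real step simulates an EM cascade with ppc at
    # `params` and compares the resulting hit pattern to the data
    return -0.5 * np.sum(params ** 2)

def metropolis(start, step, n_samples=1000):
    x = np.asarray(start, dtype=float)
    logl = log_likelihood(x)
    chain = []
    for _ in range(n_samples):
        y = x + step * rng.standard_normal(x.size)  # Gaussian proposal (step 4)
        logl_y = log_likelihood(y)
        # Metropolis acceptance (step 5): always if L2 > L1,
        # with probability L2/L1 otherwise
        if np.log(rng.random()) < logl_y - logl:
            x, logl = y, logl_y
        chain.append(x.copy())
    return np.array(chain)

# seven parameters: x, y, z, theta, phi, E, t; in practice the chain
# would start from the COG seed described on the next slide
chain = metropolis(np.zeros(7), step=0.5)
```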

Configuration choices
Simulation: ppc with SPICE Lea.
Likelihood: as in the SPICE fit; it depends on the number of data events n_d and of simulated events n_s; 25 ns time bins are combined with the Bayesian blocks algorithm.
Start with the COG; after each simulation, calculate the best time offset t and scale the energy to maximize the likelihood (this reduces the number of parameters that are varied in the MCMC). The energy scaling is achieved by fitting for n_s (see the sketch below).
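A sketch of that per-simulation nuisance fit, under the simplifying assumption of an independent Poisson likelihood in each 25 ns time bin (the actual SPICE-fit likelihood and the Bayesian-blocks rebinning differ in detail). For a Poisson likelihood the best energy scale at a fixed time offset is analytic, f = Σ data / Σ sim, which is what "fitting for n_s" exploits:

```python
import numpy as np

def fit_offset_and_scale(data, sim, max_shift=40):
    """data, sim: (n_doms, n_bins) arrays of binned photon counts.

    Scans integer time-bin offsets; at each offset applies the
    Poisson-optimal energy scale f = data.sum() / sim.sum().
    Returns (best_shift, best_scale, best_log_likelihood).
    """
    best = (0, 1.0, -np.inf)
    f = data.sum() / max(sim.sum(), 1e-12)  # analytic energy scale
    for shift in range(-max_shift, max_shift + 1):
        # np.roll wraps around the time axis; a real fit would pad instead
        lam = np.clip(f * np.roll(sim, shift, axis=1), 1e-12, None)
        logl = np.sum(data * np.log(lam) - lam)  # Poisson, factorial term dropped
        if logl > best[2]:
            best = (shift, f, logl)
    return best
```

With t and E profiled out at each step this way, the chain itself only has to vary the remaining parameters (position and direction).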

Uncertainties and systematics
The result of the MCMC is a set of points distributed near the best-fit set of parameters. The spread of the MCMC points characterizes the parameter uncertainties.
This is also how systematic uncertainties, e.g., the ice model, can be included: before running the simulation at points 1 and 2, first pick the model with pre-determined probabilities, e.g., 60% SPICE Lea and 40% WHAM, or sample directly from the error ellipse in a and e.
Only statistical uncertainties are presented here.
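A sketch of the two options just described, with the model names and weights taken from the slide's example; drawing the ice model (or the a, e pair) anew before each simulation folds the systematic into the spread of the chain:

```python
import numpy as np

rng = np.random.default_rng()

# option 1: discrete mixture of ice models with pre-determined weights
ICE_MODELS = ["SPICE Lea", "WHAM"]
WEIGHTS = [0.6, 0.4]

def pick_ice_model():
    return ICE_MODELS[rng.choice(len(ICE_MODELS), p=WEIGHTS)]

# option 2: draw the absorption/scattering coefficients from the fitted
# error ellipse (mean and covariance would come from the ice-model fit;
# they are not given on the slide)
def sample_ice_params(mean_ae, cov_ae):
    return rng.multivariate_normal(mean_ae, cov_ae)
```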

Results: xy
[one scatter panel per event; blue: samples 1…500, red: samples 501…1000; plots not reproduced]

Results: θφ
[one scatter panel per event; blue: samples 1…500, red: samples 501…1000; plots not reproduced]

Results: E
[one panel per event; blue: samples 1…1000, red: samples 501…1000; plots not reproduced]

Results: x
[one panel per event; blue: samples 1…1000, red: samples 501…1000; plots not reproduced]

Results: y
[one panel per event; blue: samples 1…1000, red: samples 501…1000; plots not reproduced]

Results: z
[one panel per event; blue: samples 1…1000, red: samples 501…1000; plots not reproduced]

Results: θ
[one panel per event; blue: samples 1…1000, red: samples 501…1000; plots not reproduced]

Results: φ
[one panel per event; blue: samples 1…1000, red: samples 501…1000; plots not reproduced]

Summary
Event 1: x = …, y = …, z = …, θ = …, φ = …, E = … (±…%)
Event 2: x = …, y = …, z = …, θ = …, φ = …, E = … (±…%)
[numerical values not preserved in the transcript]
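A sketch of how such summary values could be read off the chain, assuming the (n_samples × n_params) layout of the metropolis() sketch above and discarding the first half of the samples as burn-in (consistent with the blue/red split on the result plots):

```python
import numpy as np

def summarize(chain, burn_in=500, names=("x", "y", "z", "th", "ph", "E", "t")):
    post = chain[burn_in:]              # keep only post-burn-in samples
    for name, col in zip(names, post.T):
        mu, sig = col.mean(), col.std()
        if name == "E":
            # energy uncertainty quoted as a percentage, as on this slide
            print(f"{name} = {mu:.4g} +- {100 * sig / abs(mu):.1f}%")
        else:
            print(f"{name} = {mu:.4g} +- {sig:.2g}")
```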

Simulation at best point vs. data
[a series of per-event comparison plots for the two events, not reproduced in the transcript]

Concluding remarks
These results are preliminary; the fit will be run for other ice models. The likelihood value at the minimum can be used to rank ice models.
More plots at