Using car4ams, the Bayesian AMS data-analysis code
V. Palonen, P. Tikkanen, and J. Keinonen
Department of Physics, Division of Materials Physics

AMS data

In AMS, several measurements are made of each cathode. Each measurement has intrinsic uncertainty from the 14C counting, and additional (instrumental) error is possible. What is a reliable uncertainty estimate?

AMS data analysis: four ways

- Counting statistical uncertainty. Usually the main component of measurement uncertainty, but additional instrumental error is possible, so using this alone is too optimistic.
- Standard deviation of the mean (SDOM). Has negative bias and significant random scatter from sampling, leading to >5σ errors; not Gaussian.
- Combination of the above. Better, but not optimal.
- Bayesian CAR model. Small scatter, best detection of instrumental error, accurate; applies the same kind of instrumental error to all cathodes.
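The contrast between the first two estimators can be sketched with a small simulation (the run count and count rate below are hypothetical, chosen only for illustration): the counting-statistics uncertainty of the mean is stable, while the SDOM from only 10 runs scatters strongly from cathode to cathode.

```python
import numpy as np

rng = np.random.default_rng(0)

n_runs = 10        # measurements per cathode (hypothetical)
true_rate = 400.0  # expected 14C counts per measurement (hypothetical)

# Simulate one cathode: Poisson counts per run, no instrumental error.
counts = rng.poisson(true_rate, size=n_runs)

# Counting-statistics uncertainty of the mean count:
# var(sum) = sum for Poisson data, so sd(mean) = sqrt(sum) / n_runs.
sigma_counting = np.sqrt(counts.sum()) / n_runs

# SDOM: sample standard deviation divided by sqrt(n_runs).
sdom = counts.std(ddof=1) / np.sqrt(n_runs)

print(sigma_counting, sdom)
```

Repeating this for many simulated cathodes shows the SDOM sometimes far too small and sometimes far too large, which is the source of the heavy-tailed z-scores discussed on the next slides.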

SDOM: the random scatter

Sampling causes random scatter in the SDOM; it may be too small or too large. [Figure: z-scores of simulated results, 10 runs per cathode]

SDOM: deviations not normally distributed

The random scatter of the SDOM leads to a heavier-tailed distribution for the z-scores (true error divided by the uncertainty estimate).

Combination 1: max(sampling, counting)

Takes the larger of the sampling (SDOM) and counting-statistics uncertainties. Overestimates when there is no instrumental error; may underestimate when instrumental error is present.

Combination 2: χ² test (NEC)

Good when there is no instrumental error; underestimates the uncertainty when instrumental error is present.

The CAR model

- Measurement uncertainties are known from the Poisson distribution of the 14C counts.
- Main assumption: the unknown (instrumental) error is described by a continuous autoregressive (CAR) process.
- The process can describe both white noise and random-walk noise (trend).
- The model adapts to the most probable magnitude and type of instrumental error.
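The white-noise and random-walk limits of a CAR(1) process can be illustrated with a short simulation (a sketch of the general process family, not of car4ams' internal implementation; the rate parameter `lam` and the time grid are illustrative). Sampled at irregular times, a CAR(1) process is an AR(1) sequence with coefficient exp(-λ·Δt): large λ gives nearly independent (white) noise, small λ gives a slowly drifting, trend-like random walk.

```python
import numpy as np

def car1_noise(times, lam, sigma, rng):
    """Sample a CAR(1) (Ornstein-Uhlenbeck) process at irregular times.

    lam is the mean-reversion rate: lam -> infinity gives white noise,
    lam -> 0 gives random-walk-like drift (a trend).
    """
    x = np.empty(len(times))
    x[0] = rng.normal(0.0, sigma)  # start from the stationary distribution
    for i in range(1, len(times)):
        dt = times[i] - times[i - 1]
        phi = np.exp(-lam * dt)             # autocorrelation over the gap dt
        sd = sigma * np.sqrt(1.0 - phi**2)  # innovation sd keeping variance sigma^2
        x[i] = phi * x[i - 1] + rng.normal(0.0, sd)
    return x

rng = np.random.default_rng(1)
t = np.cumsum(rng.uniform(0.1, 0.5, size=200))  # measurement times in hours
white_like = car1_noise(t, lam=50.0, sigma=1.0, rng=rng)
walk_like = car1_noise(t, lam=0.01, sigma=1.0, rng=rng)
```

The lag-1 autocorrelation of `white_like` is near zero, while `walk_like` is strongly correlated from point to point, which is what lets the model distinguish the two error types from data.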

CAR results

- Small scatter and (usually) Gaussian results.
- Much better control of instrumental error: uncertainties increase continuously with increasing additional error.
- Slightly more accurate.

The usage

car4ams, a Linux/Unix/Windows implementation of the CAR model for AMS, is available, along with a preprint of an article on its usage, at: beam.acclab.helsinki.fi/~vpalonen/car4ams/

The usage:
1. δ13C-correct each measured ratio prior to CAR analysis.
2. Make a Ratios.in file of the data.
3. Run car4ams to get an MCMC chain.
4. Summarize the car4ams output with cAnalyze.R (an R script). Summaries are given in a spreadsheet file, all graphs in a .pdf file.

δ13C correction

Prior to CAR analysis, the measured ratios and the stable-isotope currents are corrected for isotopic fractionation using the measured δ13C; the correction differs between 14C/13C and 14C/12C measurements.
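The correction formulas on the original slide are not reproduced here, but the conventional δ13C normalization (to -25‰) has the form below. This is a sketch of the usual radiocarbon convention, not necessarily the exact expression car4ams applies: the measured ratio is multiplied by (0.975 / (1 + δ13C/1000))^n, with n = 2 for 14C/12C and n = 1 for 14C/13C, because 14C fractionates roughly twice (respectively once) as strongly as the 13C/12C ratio that defines δ13C.

```python
def fractionation_factor(delta13c_permil, rare_to_stable="14/12"):
    """Conventional delta-13C fractionation-correction factor.

    Normalizes a measured ratio to delta13C = -25 permil. A sketch of
    the standard convention; the exponent n depends on which stable
    isotope is in the measured ratio.
    """
    n = 2 if rare_to_stable == "14/12" else 1
    return (0.975 / (1.0 + delta13c_permil / 1000.0)) ** n

# A sample at exactly -25 permil needs no correction:
assert abs(fractionation_factor(-25.0) - 1.0) < 1e-12
```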

Form of the Ratios.in file

Each measured 14C/13C or 14C/12C ratio is given on one line with four columns:

n(i)  t_i  R_i  I_i·τ_i

n(i)     Cathode number.
t_i      Time of the measurement of the ratio, in hours from an arbitrary
         starting point (0:00 of the first day is convenient).
R_i      The ion-current ratio. The number of 14C counts is converted to ion
         current; R_i = rare-isotope current / stable-isotope current.
I_i·τ_i  The product of the stable-isotope current and the duration of the
         14C counting, given in coulombs (A·s).
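A minimal sketch of producing such a file, assuming whitespace-separated columns as described above (the numerical values are hypothetical placeholders, not real measurement data):

```python
# Hypothetical measurements: (cathode, time [h], current ratio, I*tau [C]).
rows = [
    (1, 0.00, 1.18e-12, 9.0e-5),
    (1, 0.25, 1.21e-12, 8.8e-5),
    (2, 0.50, 1.02e-12, 9.1e-5),
]

# Write one whitespace-separated line per measured ratio.
with open("Ratios.in", "w") as f:
    for cathode, t_h, ratio, q_coulomb in rows:
        f.write(f"{cathode} {t_h:.3f} {ratio:.6e} {q_coulomb:.6e}\n")

# Reading the four columns back:
parsed = []
with open("Ratios.in") as f:
    for line in f:
        n, t, r, q = line.split()
        parsed.append((int(n), float(t), float(r), float(q)))
```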

Run car4ams

car4ams outputs an MCMC chain: a set of parameter-space points distributed according to the posterior pdf. (For example, the histogram of the values of the parameter O_1 is the probability density function of the 14C concentration of cathode 1.) The chain is written to stdout, which is directed to a file by

$ car4ams > c.txt
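The point that "the histogram of a chain column is the posterior pdf" means summaries come directly from the sample. A sketch with a synthetic chain standing in for one parameter column (the mean and width below are made up for illustration):

```python
import numpy as np

# Synthetic MCMC draws standing in for one parameter column, e.g. O_1.
rng = np.random.default_rng(2)
chain = rng.normal(loc=1.05, scale=0.01, size=50_000)

# Point estimate and central 95% credible interval straight from the sample.
posterior_mean = chain.mean()
lo, hi = np.quantile(chain, [0.025, 0.975])
```

With a real c.txt one would instead load the column for the parameter of interest and apply the same quantile summary.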

Check the run and summarize the results

Start R, the free environment for statistical computing, and run the analysis script:

> source('cAnalyze.R')

Check convergence from the trace plots. If the trace plots are OK, use the outputs; if not, run car4ams again. [Figure: example trace plots, one without convergence and one with convergence OK]
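Visual trace-plot inspection can be complemented by a numeric check. The sketch below implements a simple split-chain Gelman-Rubin statistic (an assumption of mine; the automatic convergence analysis planned for car4ams may use a different diagnostic): values near 1 mean the two halves of the chain agree, while a drifting, unconverged trace gives values well above 1.

```python
import numpy as np

def split_rhat(chain):
    """Split-chain Gelman-Rubin statistic; ~1 indicates convergence."""
    half = len(chain) // 2
    halves = np.array([chain[:half], chain[half:2 * half]])
    n = half
    within = halves.var(axis=1, ddof=1).mean()          # within-half variance
    between = n * halves.mean(axis=1).var(ddof=1)       # between-half variance
    var_est = (n - 1) / n * within + between / n        # pooled variance estimate
    return np.sqrt(var_est / within)

rng = np.random.default_rng(3)
stationary = rng.normal(0.0, 1.0, size=10_000)          # well-mixed chain
drifting = stationary + np.linspace(0.0, 5.0, 10_000)   # trending chain
```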

Output

Numerical results are given in a spreadsheet file, plots in a .pdf file.

Thank you for your attention

The program is available at beam.acclab.helsinki.fi/~vpalonen/car4ams/

Future plans:
- Include automatic convergence analysis and outlier detection in the code.
- Improve the user interface.