Presentation transcript:

Byron Smith December 11, 2013

1. What is Quantum State Tomography?
2. What is Bayesian Statistics?
   1. Conditional Probabilities
   2. Bayes’ Rule
   3. Frequentist vs. Bayesian
3. Example: Schrödinger’s Cat
   1. Interpretation
   2. Analysis with a Non-informative Prior
   3. Analysis with an Informative Prior
4. Sources of Error in Tomography
5. Error Reduction via Bayesian Analysis
6. Adaptive Tomography
7. Conclusion
8. References
9. Supplementary Information

 Tomography comes from the Greek tomos, meaning section or slice.
 Classically, tomography refers to reconstructing a 3-dimensional object from 2-dimensional slices.
 Quantum State Tomography refers to identifying a particular wave function (more generally, a density matrix) using a series of measurements.

Bayes’ Rule:

$P(\theta \mid x) = \dfrac{P(x \mid \theta)\, P(\theta)}{P(x)}$

 $P(x \mid \theta)$ is the Likelihood
 $P(\theta)$ is the Prior Probability (or just prior)
 $P(\theta \mid x)$ is the Posterior Probability (or just posterior)
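As a tiny worked sketch of the rule (the numbers are hypothetical, not from the talk): with two candidate hypotheses, the posterior is the normalized product of likelihood and prior.

```python
import numpy as np

prior = np.array([0.5, 0.5])          # P(theta) for two hypotheses
likelihood = np.array([0.9, 0.3])     # P(x | theta) for the observed data x

posterior = likelihood * prior        # numerator of Bayes' rule
posterior /= posterior.sum()          # divide by the evidence P(x)
print(posterior)                      # [0.75 0.25]
```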

Frequentist:
 Inference on probability arises from the frequency with which some outcome is measured, and that measurement is random.
 In other words, a measurement is random only due to our ignorance.

Bayesian:
 Inference on probability arises from a prior probability assumption weighted by empirical evidence.
 Measurements are inherently random.

Frequentist: Given N cats, some are alive and some are dead. The probability that a sampled cat is alive is associated with the fact that we sample randomly and a·N of the cats are alive.

Bayesian: The probability a is itself a random variable (unknown), and therefore there is an inherent probability associated with each particular cat.

[Table: frequentist and Bayesian estimates of a for increasing sample size N; true value a = 0.3.]
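As a minimal sketch of the kind of calculation behind this table (the sample sizes and simulated counts are hypothetical), compare the frequentist estimate k/N with the posterior mean under a non-informative uniform prior, Beta(1, 1):

```python
import numpy as np

rng = np.random.default_rng(0)
a_true = 0.3  # true probability that a cat is alive (from the slide)

for N in [10, 100, 1000]:
    k = rng.binomial(N, a_true)      # number of cats observed alive
    freq = k / N                     # frequentist estimate: sample frequency
    # Uniform prior Beta(1, 1) -> posterior Beta(k + 1, N - k + 1)
    bayes = (k + 1) / (N + 2)        # posterior mean
    print(f"N = {N:5d}: frequentist = {freq:.3f}, Bayesian = {bayes:.3f}")
```

Both estimates converge to a = 0.3 as N grows; they differ most at small N, where the prior still carries weight.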

 Fluorine-18, half-life ≈ 1.83 hours.
 The cat has been in the box for 3 hours.

[Table: frequentist and Bayesian estimates of a for increasing N under the informative prior.]
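A sketch of how an informative prior could be built from the decay physics (my own reconstruction: the prior strength of 10 pseudo-counts and the example counts are assumptions, but the resulting Beta(3.2, 6.8) is close to the Beta(α = 3.1, β = 6.9) prior quoted on the next slide):

```python
t_half = 1.83          # F-18 half-life in hours
t = 3.0                # time the cat has been in the box (from the slide)

# Survival probability if death is triggered by a single decay:
p_alive = 2.0 ** (-t / t_half)   # ~0.32

# Encode this as a Beta prior with mean p_alive; the effective sample size
# (alpha + beta = 10) is an assumed choice, giving roughly Beta(3.2, 6.8).
strength = 10.0
alpha, beta = strength * p_alive, strength * (1.0 - p_alive)

# Posterior update after observing k alive cats out of N (hypothetical data):
N, k = 20, 7
post_alpha, post_beta = alpha + k, beta + (N - k)
post_mean = post_alpha / (post_alpha + post_beta)
print(f"prior ~ Beta({alpha:.1f}, {beta:.1f}), posterior mean = {post_mean:.3f}")
```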

 Error in the measurement basis.
 Error in the counting statistics.
 Error associated with stability:
◦ Detector efficiency
◦ Source intensity
 Model error.

[Table: Var(a) as a function of N for the frequentist estimator and for the Bayesian estimator with prior Beta(α = 3.1, β = 6.9).]
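These are the standard variance formulas that presumably underlie the comparison, a sketch assuming a binomial model with k cats observed alive out of N and a conjugate Beta(α, β) prior:

```latex
% Frequentist: variance of the sample proportion \hat{a} = k/N
\mathrm{Var}(\hat{a}) = \frac{a(1-a)}{N}

% Bayesian: variance of the Beta(\alpha + k,\; \beta + N - k) posterior
\mathrm{Var}(a \mid k) =
  \frac{(\alpha + k)(\beta + N - k)}
       {(\alpha + \beta + N)^2 \,(\alpha + \beta + N + 1)}
```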

 There are several measures of “lack of fit”:
 Likelihood
 Infidelity
 Shannon entropy
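As a minimal sketch (my own implementation, not code from the talk) of how these three quantities can be computed for a candidate density matrix: the log-likelihood of observed counts, the infidelity with respect to a reference state, and the entropy of the estimate.

```python
import numpy as np
from scipy.linalg import sqrtm

def log_likelihood(rho, projectors, counts):
    """Log-likelihood of measurement counts: sum_i n_i * log Tr(P_i rho)."""
    probs = np.array([np.real(np.trace(P @ rho)) for P in projectors])
    return float(np.sum(counts * np.log(probs)))

def infidelity(rho, sigma):
    """1 - F(rho, sigma), with F = (Tr sqrt(sqrt(rho) sigma sqrt(rho)))^2."""
    s = sqrtm(rho)
    F = np.real(np.trace(sqrtm(s @ sigma @ s))) ** 2
    return 1.0 - F

def entropy(rho):
    """Von Neumann entropy -Tr(rho log rho), i.e. the Shannon entropy
    of the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]      # drop numerical zeros
    return float(-np.sum(evals * np.log(evals)))
```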

 The number of measurements required to identify ρ to a given accuracy can be reduced by measuring in a basis which diagonalizes ρ.
 To do so, measure N₀ particles first, use the result to choose a new measurement basis, then finish the total measurement.
 Naturally this can be improved in the Bayesian framework by using the first measurements to form a prior.
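A schematic sketch of this two-step adaptive scheme for a single qubit (the true state, the shot counts, and the use of simple linear inversion for the first-stage estimate are my illustrative choices):

```python
import numpy as np

# Pauli matrices (sigma_0 = identity)
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def measure(rho, op, n, rng):
    """Simulate n projective measurements of a +/-1-valued operator; return the mean."""
    p_up = np.real(np.trace(((I2 + op) / 2) @ rho))   # prob. of the +1 outcome
    k = rng.binomial(n, p_up)
    return 2 * k / n - 1                              # estimate of <op>

rng = np.random.default_rng(1)
rho_true = 0.5 * (I2 + 0.6 * sx + 0.3 * sz)           # hypothetical true state

# Step 1: spend N0 shots per axis on a rough estimate in the fixed x, y, z basis.
N0 = 100
S = [measure(rho_true, op, N0, rng) for op in (sx, sy, sz)]
rho_est = 0.5 * (I2 + S[0] * sx + S[1] * sy + S[2] * sz)

# Step 2: switch to the eigenbasis of the rough estimate, where the state is
# (approximately) diagonal, and spend the remaining shots there.
evals, evecs = np.linalg.eigh(rho_est)
adapted_op = evecs @ np.diag([1, -1]) @ evecs.conj().T
refined = measure(rho_true, adapted_op, 10_000, rng)
print("rough Stokes estimate:", np.round(S, 3), " refined <adapted>:", round(refined, 3))
```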

 Quantum State Tomography is a tool used to identify a density matrix.
 There are several metrics for how well the density matrix has been identified.
 Bayesian statistics can improve efficiency while providing a new interpretation of quantum states.

1. J. B. Altepeter, D. F. V. James, and P. G. Kwiat, “Qubit Quantum State Tomography,” in Quantum State Estimation (Lecture Notes in Physics), M. Paris and J. Řeháček (eds.), Springer (2004).
2. D. H. Mahler, L. A. Rozema, A. Darabi, C. Ferrie, R. Blume-Kohout, and A. M. Steinberg, Phys. Rev. Lett. 111 (2013).
3. F. Huszár and N. M. T. Houlsby, Phys. Rev. A 85 (2012).
4. R. Blume-Kohout, New J. Phys. 12 (2012).

 For density operators which are not diagonal, we use a basis of spin (Pauli) matrices:

$\rho = \frac{1}{2} \sum_{i=0}^{3} S_i \sigma_i, \qquad \sigma_0 = I$

 If we choose an instrument orientation (normalization) such that the first parameter is $S_0 = 1$, we are left with the Stokes parameters $S_i$, $i = 1, 2, 3$.
 The goal is then to identify the $S_i$.
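A minimal sketch of this expansion in code (the example Stokes vector is hypothetical):

```python
import numpy as np

# sigma_0 = I plus the three Pauli matrices
sigma = [np.eye(2, dtype=complex),
         np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def rho_from_stokes(S1, S2, S3):
    """rho = (1/2) * sum_i S_i sigma_i, with S_0 fixed to 1."""
    S = [1.0, S1, S2, S3]
    return 0.5 * sum(s * m for s, m in zip(S, sigma))

rho = rho_from_stokes(0.5, 0.0, 0.5)   # hypothetical partially polarized state
print(np.round(rho, 3), "trace =", np.trace(rho).real)
```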

 The Poincaré sphere is used to visualize a superposition of photon polarizations.
 The x-axis is 45-degree polarizations.
 The y-axis is horizontal or vertical polarizations.
 The z-axis is circular polarizations.

 Using a matrix basis similar to that for the Stokes parameters, one can exactly identify a polarization with three measurements.
 Note that an orthogonal basis is not necessary.
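A sketch of how the three Stokes parameters can be estimated from photon counts in three measurement bases, using one common convention (horizontal/vertical, diagonal/anti-diagonal, right/left circular); the slide's axis assignment may order these differently, and the counts below are hypothetical:

```python
# Each Stokes parameter is a difference of probabilities in one basis:
#   S1 = p(H) - p(V),  S2 = p(D) - p(A),  S3 = p(R) - p(L)
def stokes_from_counts(n_H, n_V, n_D, n_A, n_R, n_L):
    S1 = (n_H - n_V) / (n_H + n_V)
    S2 = (n_D - n_A) / (n_D + n_A)
    S3 = (n_R - n_L) / (n_R + n_L)
    return S1, S2, S3

# Hypothetical photon counts from three measurement settings:
print(stokes_from_counts(n_H=700, n_V=300, n_D=520, n_A=480, n_R=510, n_L=490))
```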

 Errors in measurement and instrumentation will manifest themselves as a disc on the Poincaré sphere.

[Figure: (a) errors in the measurement basis; (b) errors in intensity or detector stability.]

 Because there is a finite sample size, states are not characterized exactly.  Each measurement constrains the sample space.

 Optimization can lead to states which lie outside the Poincaré sphere.
 Unobserved values are registered as zeros in the density matrix optimization. This can be unrealistic.
 There is no direct measure of uncertainty within maximum likelihood estimators.
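A small sketch of the first point (the finite-sample counts are hypothetical, and simple linear inversion stands in for the optimization step): with few shots, the estimated Stokes vector can have length greater than 1, i.e. lie outside the sphere, giving a ρ with a negative eigenvalue.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Hypothetical estimates from 10 shots per basis (9/10, 8/10, 9/10 "+1" outcomes):
S = np.array([2 * 9/10 - 1, 2 * 8/10 - 1, 2 * 9/10 - 1])   # (0.8, 0.6, 0.8)
print("|S| =", np.linalg.norm(S))            # ~1.28 > 1: outside the sphere

rho = 0.5 * (np.eye(2) + S[0] * sx + S[1] * sy + S[2] * sz)
print("eigenvalues:", np.round(np.linalg.eigvalsh(rho), 3))  # one is negative
```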

 Parameter space can be constrained through the prior.
 Unobserved values can still have some small probability through the prior.
 Uncertainty analysis can come directly from the variance of the posterior distribution, even when the posterior has no analytical form.