
1
Tutorial on Bayesian Techniques for Inference
A. Asensio Ramos, Instituto de Astrofísica de Canarias

2
Outline
General introduction. The Bayesian approach to inference. Examples. Conclusions.

3
The Big Picture
[Diagram: a testable hypothesis (theory) produces predictions by deductive inference; observation produces data; statistical inference feeds back into the theory through hypothesis testing and parameter estimation.]

4
The Big Picture
Available information is always incomplete, so our knowledge of nature is necessarily probabilistic. Cox and Jaynes demonstrated that a probability calculus obeying the Cox axioms can be used to do statistical inference.

5
Probabilistic inference
H1, H2, H3, …, Hn are hypotheses that we want to test. The Bayesian way is to estimate p(Hi|…) and select a hypothesis by comparing these probabilities. But… what are the p(Hi|…)?

6
What is probability? (Frequentist)
In the frequentist approach, probability describes "randomness": if we carry out the experiment many times, what is the distribution of outcomes? p(x) is the histogram of the random variable x.

7
What is probability? (Bayesian)
In the Bayesian approach, probability describes "uncertainty": p(x) gives how probability is distributed among the possible choices of x, one of which is the value we observe. Everything can be a random variable, as we will see later.

8
Bayes theorem
It is trivially derived from the product rule. Hi is a proposition asserting the truth of a hypothesis, I is a proposition representing prior information, and D is a proposition representing the data.
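Spelled out, the derivation from the product rule is one line: write the joint probability of Hi and D in both factorization orders and solve for the posterior.

```latex
p(H_i, D \mid I) = p(H_i \mid I)\, p(D \mid H_i, I) = p(D \mid I)\, p(H_i \mid D, I)
\quad\Longrightarrow\quad
p(H_i \mid D, I) = \frac{p(D \mid H_i, I)\, p(H_i \mid I)}{p(D \mid I)}
```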

9
Bayes theorem - Example
Model M1 predicts a star at d = 100 ly; model M2 predicts a star at d = 200 ly. The uncertainty in the measurement is Gaussian with σ = 40 ly, and the measured distance is d = 120 ly. The Gaussian gives the likelihood of each model, and Bayes theorem gives the posteriors.
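A minimal numerical sketch of this example (numbers from the slide; equal prior probabilities for the two models are assumed):

```python
import math

def gaussian(x, mu, sigma):
    """Gaussian density N(x; mu, sigma)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

d_obs, sigma = 120.0, 40.0             # measured distance and its uncertainty (ly)
like1 = gaussian(d_obs, 100.0, sigma)  # likelihood under M1 (d = 100 ly)
like2 = gaussian(d_obs, 200.0, sigma)  # likelihood under M2 (d = 200 ly)

# With equal priors p(M1) = p(M2) = 1/2, the posteriors are the
# normalized likelihoods.
p1 = like1 / (like1 + like2)
p2 = like2 / (like1 + like2)
print(p1, p2)  # M1 is favoured by a factor like1/like2 = exp(1.875) ≈ 6.5
```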

10
Bayes theorem – Another example
A medical test has a 1.4% false-negative rate (98.6% sensitivity) and a 2.3% false-positive rate.

11
Bayes theorem – Another example
H: you have the disease. H̄: you don't have the disease. D: your test is positive. You take the test and it comes back positive. What is the probability that you have the disease if the incidence is 1:10000?

12
Bayes theorem – Another example
p(H|D) = (0.986 × 10⁻⁴) / (0.986 × 10⁻⁴ + 0.023 × 0.9999) ≈ 0.4%
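The arithmetic of this example as a short script (rates and incidence from the slides):

```python
# Numbers from the slides.
p_H = 1e-4              # incidence 1:10000
p_D_given_H = 0.986     # sensitivity (1.4% false negatives)
p_D_given_notH = 0.023  # false-positive rate

# Evidence: total probability of a positive test.
p_D = p_D_given_H * p_H + p_D_given_notH * (1.0 - p_H)

# Bayes theorem.
p_H_given_D = p_D_given_H * p_H / p_D
print(p_H_given_D)  # ≈ 0.0043: a positive test still means only ~0.4% probability
```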

13
What is usually known as inversion
One proposes a model to explain the observations. All inversion methods work by adjusting the parameters of the model so as to minimize a merit function that compares the observations with the synthesis from the model. The least-squares (maximum-likelihood) solution is taken as the solution to the inversion problem.

14
Defects of standard inversion codes
The solution is given as a single set of model parameters (the maximum likelihood), which is not necessarily the optimal solution and is sensitive to noise. Error bars or confidence regions are scarce, assume Gaussian errors, and are not easy to propagate. Ambiguities, degeneracies and correlations are not detected. Assumptions are not explicit, and models cannot be compared.

15
Inversion as a probabilistic inference problem
Observations = Model(parameter 1, parameter 2, parameter 3, …) + Noise. Use Bayes theorem to propagate information from the data to our final state of knowledge: Posterior = Likelihood × Prior / Evidence.

16
Priors
Priors contain the information about the model parameters that we have before seeing the data. Typical priors: a top-hat function (flat prior) between θi,min and θi,max, or a Gaussian prior when we know that some values are more probable than others. Assuming statistical independence of all parameters, the total prior is the product of the individual priors.
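A sketch of these two priors and of the independence product; the parameter ranges below (a field strength in gauss, an inclination in degrees) are illustrative choices, not values from the slides:

```python
import math

def flat_prior(x, xmin, xmax):
    """Top-hat prior: constant inside [xmin, xmax], zero outside."""
    return 1.0 / (xmax - xmin) if xmin <= x <= xmax else 0.0

def gaussian_prior(x, mu, sigma):
    """Gaussian prior: some values are a priori more probable than others."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def total_prior(theta, priors):
    """Assuming statistical independence, the joint prior is the product."""
    p = 1.0
    for x, prior in zip(theta, priors):
        p *= prior(x)
    return p

priors = [lambda x: flat_prior(x, 0.0, 2000.0),      # e.g. field strength (G), flat
          lambda x: gaussian_prior(x, 90.0, 30.0)]   # e.g. inclination (deg), Gaussian
print(total_prior([500.0, 90.0], priors))
```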

17
Likelihood
Assuming normal (Gaussian) noise, the likelihood can be calculated as L ∝ exp(−χ²/2), where the χ² function is defined as usual. In this case, the χ² function is specific to the case of Stokes profiles.
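The generic χ² and Gaussian likelihood can be sketched as follows (toy observed and synthetic values; in the talk the residuals would be Stokes profiles):

```python
import math

def chi2(obs, model, sigma):
    """chi^2: sum of squared residuals normalized by the noise."""
    return sum(((o - m) / s) ** 2 for o, m, s in zip(obs, model, sigma))

def likelihood(obs, model, sigma):
    """Gaussian likelihood up to a normalization constant: exp(-chi^2 / 2)."""
    return math.exp(-0.5 * chi2(obs, model, sigma))

obs = [1.0, 0.8, 0.5]     # toy observed values
model = [0.9, 0.85, 0.55]  # toy synthesis from the model
sigma = [0.1, 0.1, 0.1]    # noise standard deviation per point
print(chi2(obs, model, sigma), likelihood(obs, model, sigma))
```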

18
Visual example of Bayesian inference

19
Advantages of the Bayesian approach
"Best-fit" values of the parameters are, e.g., the mode or median of the posterior. Uncertainties are credible regions of the posterior. Correlations between the model variables are captured. Error propagation is generalized (not only Gaussian, and including correlations). Nuisance parameters can be integrated out (marginalization).
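Marginalization over a nuisance parameter can be sketched on a grid; the correlated two-parameter posterior below is a toy example, not one of the talk's models:

```python
import math

# Toy joint posterior p(a, b) with correlation between a and b.
def joint(a, b):
    return math.exp(-0.5 * (a * a + b * b + a * b))

a_vals = [0.1 * i for i in range(-30, 31)]
b_vals = [0.1 * j for j in range(-30, 31)]
da = db = 0.1

# Marginal p(a) = integral of p(a, b) over the nuisance parameter b,
# then normalized so it integrates to one.
marginal = [sum(joint(a, b) for b in b_vals) * db for a in a_vals]
norm = sum(marginal) * da
marginal = [m / norm for m in marginal]

mode = a_vals[marginal.index(max(marginal))]
print(mode)  # the marginal peaks at a = 0
```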

20
Bayesian inference – an example Hinode

21
Beautiful posterior distributions: field strength, field inclination, field azimuth, filling factor.

22
Not so beautiful posterior distributions – degeneracies in the field inclination.

23
Inversion with local stray-light – be careful
But… what happens if we propose a model, like Orozco Suárez et al. (2007), with a stray-light contamination obtained from a local average of the surrounding pixels? The stray light then comes from the observations themselves, and σi is the variance of the numerator.

24
It is usual to carry out inversions with a stray-light contamination obtained from a local average of the surrounding pixels, but then the variance becomes dependent on the stray-light contamination.

25
Spatial correlations: use global stray-light
It is usual to carry out inversions with a stray-light contamination obtained from a local average of the surrounding pixels. If the average is taken over M → ∞ pixels, the correlations tend to zero.

26
Spatial correlations

27
Lesson: use global stray-light contamination

29
But… the most general inversion method is…
[Diagram: the same observations confronted with Model 1, Model 2, Model 3, Model 4 and Model 5.]

30
Model comparison
Choose among the candidate models the one that is preferred by the data, using the posterior for model Mi. The model likelihood is just the evidence.
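A toy numerical sketch of comparing evidences (the datum, noise level and prior range below are invented for illustration; the talk's worked example uses Gaussian line profiles instead):

```python
import math

def gaussian(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

# One datum d with unit noise.
d = 0.3

# M0 has no free parameter (it predicts 0), so its evidence is just the likelihood.
evidence0 = gaussian(d, 0.0, 1.0)

# M1 has a free offset mu with a flat prior 1/10 on [-5, 5]; its evidence
# marginalizes the likelihood over mu (rectangle rule).
step = 0.01
mus = [-5.0 + step * i for i in range(1001)]
evidence1 = sum(gaussian(d, mu, 1.0) * (1.0 / 10.0) for mu in mus) * step

bayes_factor = evidence0 / evidence1
print(bayes_factor)  # > 1: the evidence penalizes M1's unused freedom (Occam factor)
```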

31
Model comparison (compare evidences)

32
Model comparison – a worked example
H0: a simple Gaussian. H1: two Gaussians of equal width but unknown amplitude ratio.


35
Model H1 is 9.2 times more probable than H0.

36
Model comparison – an example
Model 1: 1 magnetic component. Model 2: 1 magnetic + 1 non-magnetic component. Model 3: 2 magnetic components. Model 4: 2 magnetic components with v2 = 0, a2 = 0.

37
Model comparison – an example
Model 1: 1 magnetic component (9 free parameters). Model 2: 1 magnetic + 1 non-magnetic component (17 free parameters). Model 3: 2 magnetic components (20 free parameters). Model 4: 2 magnetic components with v2 = 0, a2 = 0 (18 free parameters). Model 2 is preferred by the data: the "best fit with the smallest number of parameters".

38
Model averaging – one step further
The models {Mi, i = 1..N} share a common subset of parameters of interest, but each model depends on a different set of remaining parameters or has different priors over them. The posterior that includes all models tells us what all the models together have to say about the shared parameters: each model casts a "weighted vote".
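The "weighted vote" can be sketched with invented numbers (the model probabilities and per-model posterior means below are hypothetical, chosen only to show the mechanics):

```python
# Model averaging: each model votes on the shared parameter theta,
# weighted by its posterior probability p(M_i | D).
weights = {"M1": 0.1, "M2": 0.7, "M3": 0.2}          # p(M_i | D), must sum to 1
posterior_mean = {"M1": 0.9, "M2": 1.1, "M3": 1.5}   # E[theta | D, M_i] per model

theta_avg = sum(weights[m] * posterior_mean[m] for m in weights)
print(theta_avg)  # model-averaged estimate of the shared parameter
```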

39
Model averaging – an example

40
Hierarchical models
In the Bayesian approach, everything can be considered a random variable. [Diagram: data enter through the likelihood; the model carries nuisance parameters with their priors, which are removed by marginalization before inference.]

41
Hierarchical models
In the Bayesian approach, everything can be considered a random variable. [Same diagram, with an extra level: the parameters of the priors themselves receive priors.]

42
Bayesian weak-field
Apply Bayes theorem under the weak-field approximation. Advantage: everything is close to analytic.

43
Bayesian Weak-field – Hierarchical priors
The priors depend on some hyperparameters, over which we can again set priors and marginalize.

44
Bayesian Weak-field - Data IMaX data

45
Bayesian Weak-field - Posteriors Joint posteriors

46
Bayesian Weak-field - Posteriors Marginal posteriors

47
Hierarchical priors - Distribution of longitudinal B

48
Hierarchical priors – Distribution of longitudinal B
We want to infer the distribution of the longitudinal field B from many observed pixels, taking uncertainties into account. Parameterize the distribution in terms of a vector: mean + variance if Gaussian, or the heights of the bins in the general case.

49
Hierarchical priors – Distribution of longitudinal B

50
We generate N synthetic profiles with noise, with the longitudinal field sampled from a Gaussian distribution with standard deviation 25 Mx cm⁻².
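The spirit of this test can be sketched with a simplified hierarchical model: each pixel's true field is drawn from N(0, σ) and observed with Gaussian noise, and we infer the hyperparameter σ from the marginal likelihood. The noise level, sample size and zero mean below are assumptions for the sketch, not values from the talk:

```python
import math
import random

random.seed(1)
sigma_true, noise = 25.0, 10.0   # true width of the B distribution; per-pixel noise
n = 2000
# Observed longitudinal fields: true value plus measurement noise.
obs = [random.gauss(0.0, sigma_true) + random.gauss(0.0, noise) for _ in range(n)]

def log_like(sigma):
    """Marginal log-likelihood of the hyperparameter sigma: after integrating
    out each pixel's true field analytically, every observation is distributed
    as N(0, sqrt(sigma^2 + noise^2))."""
    s2 = sigma * sigma + noise * noise
    return sum(-0.5 * (x * x / s2 + math.log(2.0 * math.pi * s2)) for x in obs)

# Grid search over the hyperparameter.
sigmas = [1.0 * k for k in range(5, 61)]
best = max(sigmas, key=log_like)
print(best)  # recovers a width close to the true 25
```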

51
Hierarchical priors – Distribution of any quantity

52
Bayesian image deconvolution

53
PSF blurring using a linear expansion: the image is sparse in a suitable basis. The maximum-likelihood solution is obtained (phase diversity, MOMFBD, …).
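As a minimal illustration of maximum-likelihood deconvolution, here is a 1D Richardson-Lucy iteration (a standard ML scheme for Poisson noise; the methods named in the talk, such as MOMFBD, are considerably more elaborate):

```python
def convolve(x, psf):
    """Linear 1D convolution/correlation, same-size output, zero padding.
    For the symmetric PSF used below the two operations coincide."""
    half = len(psf) // 2
    n = len(x)
    out = []
    for i in range(n):
        s = 0.0
        for j, p in enumerate(psf):
            k = i + j - half
            if 0 <= k < n:
                s += x[k] * p
        out.append(s)
    return out

def richardson_lucy(data, psf, iters=50):
    """Maximum-likelihood deconvolution: multiplicative RL updates."""
    est = [1.0] * len(data)
    psf_mirror = psf[::-1]
    for _ in range(iters):
        blurred = convolve(est, psf)
        ratio = [d / max(b, 1e-12) for d, b in zip(data, blurred)]
        corr = convolve(ratio, psf_mirror)
        est = [e * c for e, c in zip(est, corr)]
    return est

psf = [0.25, 0.5, 0.25]
true = [0.0, 0.0, 5.0, 0.0, 0.0]   # a point source
data = convolve(true, psf)          # blurred observation
est = richardson_lucy(data, psf)
print(est)  # the flux re-concentrates at the central pixel
```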

54
Inference in a Bayesian framework
The solution is given as a probability distribution over the model parameters. Error bars or confidence regions are easily obtained, including correlations, degeneracies, etc. Assumptions are made explicit in the prior distributions. Model comparison and model averaging are easily accomplished. Hierarchical models are powerful for extracting information from data.

55
Hinode data: continuum and total polarization. Asensio Ramos (2009); observations of Lites et al. (2008).

56
How much information? – Kullback-Leibler divergence
The KL divergence measures the "distance" between the posterior and prior distributions. For the field strength, 37% of the values are larger than 1; for the field inclination, 34%.
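The KL divergence between posterior and prior can be computed on a grid; the four-bin distributions below are toy numbers for illustration, not the Hinode results:

```python
import math

def kl_divergence(p, q):
    """KL(p || q) for discrete distributions on the same grid, in nats."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

prior = [0.25, 0.25, 0.25, 0.25]      # flat prior over 4 bins
posterior = [0.7, 0.2, 0.05, 0.05]    # the data concentrated the probability

gain = kl_divergence(posterior, prior)
print(gain)  # information gained from the data; zero if the data taught us nothing
```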

57
Posteriors: field strength, field azimuth, field inclination, stray-light.

58
Field inclination – Obvious conclusion
Linear polarization is fundamental to obtain reliable inclinations.

59
Field inclination – Quasi-isotropic
[Comparison between an isotropic field and our prior.]

60
Field inclination – Quasi-isotropic

61
Representation
Show the marginal distribution for each parameter, or sample N values from the posterior; all such samples are compatible with the observations.

62
Field strength – Representation All maps compatible with observations!!!

63
Field inclination All maps compatible with observations!!!

64
In a galaxy far far away… (the future)
[Diagram: raw data from instruments with systematics, combined with model priors, yield a posterior; marginalization over non-important parameters leads to inference.]

65
Conclusions
Inversion is not an easy task and has to be treated as a probabilistic inference problem. Bayesian theory gives us the tools for inference. We should expand our view of inversion into a model comparison/averaging problem (no model is the absolute truth!).

66
Thank you and be Bayesian, my friend!
