

Slide 1: Bayesian Inference
Lee Harrison, York Neuroimaging Centre, 14/05/2010

Slide 2: Where Bayesian methods enter SPM
Classical pipeline: realignment, smoothing, normalisation (to a template), general linear model, Gaussian field theory, statistical inference at p < 0.05.
Bayesian components:
- Bayesian segmentation and normalisation
- Spatial priors on activation extent
- Posterior probability maps (PPMs)
- Dynamic Causal Modelling

Slide 3: Attention to Motion (Büchel & Friston 1997, Cereb. Cortex; Büchel et al. 1998, Brain)
Paradigm:
- fixation only
- observe static dots (+ photic: V1)
- observe moving dots (+ motion: V5)
- task on moving dots (+ attention: V5 + parietal cortex)
Results: the Attention minus No-attention contrast shows V5+, SPC, V3A.

Slide 4: Dynamic Causal Models: Attention to Motion
Paradigm: fixation only; observe static dots; observe moving dots; task on moving dots.
Network: regions V1, V5, SPC with photic, motion, and attention inputs.
Model 1 (forward): attentional modulation of the V1 -> V5 (forward) connection.
Model 2 (backward): attentional modulation of the SPC -> V5 (backward) connection.
Bayesian model selection: which model is optimal?

Slide 5: Responses to Uncertainty
(Figure contrasting long-term and short-term memory.)

Slide 6: Responses to Uncertainty
Paradigm: the stimuli are a sequence of randomly sampled discrete events (e.g. 1 4 3 2 ... over 40 trials).
Model: a simple computational model of an observer's response to uncertainty, based on the number of past events retained (the extent of memory).
Question: which regions are best explained by the short-term versus the long-term memory model?

Slide 7: Overview
- Introductory remarks
- Some probability densities/distributions
- Probabilistic (generative) models
- Bayesian inference
- A simple example: Bayesian linear regression
- SPM applications: segmentation; dynamic causal modelling; spatial models of fMRI time series

Slides 8-14: Probability distributions and densities
(A sequence of figures illustrating probability distributions and densities, each using the example k = 2.)

Slide 15: Generative models
(Figure contrasting generation and estimation of data over time and space.)

Slide 16: Bayesian statistics
posterior ∝ likelihood × prior
Bayes' theorem allows one to formally incorporate prior knowledge into statistical inference: the posterior probability of the parameters given the data is an optimal combination of prior knowledge and new data, weighted by their relative precision.

Slide 17: Bayes' rule
Given data y and parameters θ, their joint probability can be written in two ways:
p(y, θ) = p(y | θ) p(θ) = p(θ | y) p(y)
Eliminating p(y, θ) gives Bayes' rule:
p(θ | y) = p(y | θ) p(θ) / p(y)
i.e. Posterior = Likelihood × Prior / Evidence.
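Bayes' rule can be checked numerically on a toy discrete problem (a minimal sketch; the prior and likelihood values below are invented for illustration):

```python
# Toy discrete Bayes' rule: posterior = likelihood * prior / evidence.
# All numbers are illustrative only.
prior = {"theta1": 0.3, "theta2": 0.7}        # p(theta)
likelihood = {"theta1": 0.9, "theta2": 0.2}   # p(y | theta) for the observed y

# Evidence p(y) = sum over theta of p(y | theta) p(theta)
evidence = sum(likelihood[t] * prior[t] for t in prior)

# Posterior p(theta | y)
posterior = {t: likelihood[t] * prior[t] / evidence for t in prior}

print(posterior)  # posterior masses sum to 1
```

Note how the data (a high likelihood for theta1) overturn the prior, which favoured theta2.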

Slide 18: Principles of Bayesian inference
1. Formulation of a generative model: a likelihood p(y | θ) and a prior distribution p(θ).
2. Observation of data y.
3. Update of beliefs based upon the observations, given the prior state of knowledge: p(θ | y) ∝ p(y | θ) p(θ).

Slide 19: Normal densities: the univariate Gaussian
With a Gaussian prior p(θ) = N(θ; μ_p, λ_p⁻¹) and a Gaussian likelihood p(y | θ) = N(y; θ, λ_e⁻¹), the posterior is Gaussian with precision λ = λ_e + λ_p and mean μ = (λ_e y + λ_p μ_p) / λ.
The posterior mean is a precision-weighted combination of the prior mean and the data mean.
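The precision-weighted combination can be written out directly (a minimal sketch; the numbers are invented for illustration):

```python
# Univariate Gaussian posterior: precisions add, and the posterior mean is the
# precision-weighted average of the prior mean and the datum.
# All numbers are illustrative.
mu_p, lam_p = 0.0, 1.0   # prior mean and precision
y, lam_e = 2.0, 4.0      # datum and observation (error) precision

lam_post = lam_e + lam_p                          # posterior precision
mu_post = (lam_e * y + lam_p * mu_p) / lam_post   # posterior mean

print(mu_post, lam_post)  # the mean lies between the prior mean and the datum
```

Because the data are four times as precise as the prior here, the posterior mean sits much closer to the datum than to the prior mean.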

Slide 20: Normal densities: Bayesian GLM, univariate case
Model: y = xθ + e, with noise precision λ_e and prior θ ~ N(μ_p, λ_p⁻¹). The posterior precision is λ = x²λ_e + λ_p, with mean μ = (x λ_e y + λ_p μ_p) / λ.

Slide 21: Normal densities: Bayesian GLM, multivariate case
Model: y = Xβ + e, with noise covariance C_e and prior β ~ N(μ_p, C_p). The posterior covariance and mean are
C_β⁻¹ = Xᵀ C_e⁻¹ X + C_p⁻¹,  μ_β = C_β (Xᵀ C_e⁻¹ y + C_p⁻¹ μ_p).
One step if C_e and C_p are known; otherwise iterative estimation.
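The one-step posterior for the multivariate case, assuming C_e and C_p are known as the slide states, can be sketched as follows (the design, noise level, and prior values are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative Bayesian GLM with known noise covariance C_e and prior covariance C_p.
n, p = 50, 2
X = np.column_stack([np.ones(n), np.linspace(0, 1, n)])  # design matrix
beta_true = np.array([1.0, 2.0])
C_e = 0.25 * np.eye(n)          # noise covariance
C_p = 10.0 * np.eye(p)          # prior covariance (zero prior mean assumed here)
y = X @ beta_true + rng.multivariate_normal(np.zeros(n), C_e)

# One-step posterior (C_e and C_p known):
#   C_post^{-1} = X' C_e^{-1} X + C_p^{-1}
#   mu_post     = C_post X' C_e^{-1} y
Ce_inv = np.linalg.inv(C_e)
C_post = np.linalg.inv(X.T @ Ce_inv @ X + np.linalg.inv(C_p))
mu_post = C_post @ X.T @ Ce_inv @ y

print(mu_post)  # close to beta_true
```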

Slide 22: Approximate inference as optimization
The true posterior is approximated by a simpler distribution (e.g. under a mean-field approximation); the parameters of the approximate posterior are adjusted to iteratively improve the free energy, which serves as the objective function.

Slides 23-26: A simple example: linear regression
Ordinary least squares: figures show the data, the basis set (explanatory variables), the resulting model fit, and the sum of squared errors used as the cost function.

Slide 27: A simple example: linear regression (continued)
Over-fitting: the model fits the noise. The sum of squared errors is an inadequate cost function because it is blind to overly complex models. Solution: include uncertainty in the model parameters.
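The over-fitting point can be reproduced in a few lines: under ordinary least squares, the training sum of squared errors never increases as basis functions are added, so it cannot penalize complexity (a minimal sketch; the sine curve, noise level, and polynomial bases are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Noisy data from a simple underlying curve (illustrative).
x = np.linspace(0, 1, 15)
y = np.sin(2 * np.pi * x) + 0.3 * rng.standard_normal(x.size)

# Ordinary least squares with polynomial bases of increasing order.
def sse(order):
    coeffs = np.polyfit(x, y, order)      # OLS fit
    resid = y - np.polyval(coeffs, x)
    return float(resid @ resid)           # sum of squared errors

errors = [sse(k) for k in range(1, 9)]
print(errors)  # training SSE never increases with model order
```

A higher-order model always scores at least as well on the training data, even when it is fitting noise, which is exactly the blindness the slide describes.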

Slides 28-33: Bayesian linear regression: priors and likelihood
Model: y = Xβ + e.
Prior: p(β) = N(β; μ_p, C_p). Sample curves drawn from the prior (before observing any data) illustrate the range of functions the model deems plausible, along with the mean curve.
Likelihood: p(y | β) = N(y; Xβ, C_e).
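Sampling curves from the prior, before observing any data, can be sketched as follows (a zero prior mean, identity prior covariance, and a polynomial basis are all assumptions made only for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Draw sample curves from the prior p(beta) = N(0, C_p) over an illustrative basis set.
x = np.linspace(0, 1, 100)
X = np.column_stack([x**k for k in range(4)])  # polynomial bases
C_p = np.eye(4)                                # prior covariance; prior mean is zero

samples = rng.multivariate_normal(np.zeros(4), C_p, size=5)  # 5 draws of beta
curves = samples @ X.T                  # each row is one sample curve over x
mean_curve = np.zeros_like(x)           # the prior mean curve (zero here)

print(curves.shape)  # (5, 100)
```

Plotting the rows of `curves` reproduces the slide's picture: a spread of plausible functions around the mean curve.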

Slides 34-37: Bayesian linear regression: posterior
Combining the model, prior, and likelihood via Bayes' rule, p(β | y) ∝ p(y | β) p(β), gives a Gaussian posterior:
C_β⁻¹ = Xᵀ C_e⁻¹ X + C_p⁻¹,  μ_β = C_β (Xᵀ C_e⁻¹ y + C_p⁻¹ μ_p).
Successive slides illustrate the resulting posterior fit to the data.

Slide 38: Posterior Probability Maps (PPMs)
Posterior distribution: the probability of the effect given the data (mean: size of effect; precision: variability).
A posterior probability map is an image of the probability (confidence) that an activation exceeds some specified threshold s_th, given the data y.
Two thresholds are used:
- an activation threshold s_th, expressed as a percentage of the whole-brain mean signal (a physiologically relevant size of effect);
- a probability threshold p_th that voxels must exceed to be displayed (e.g. 95%).
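The two-threshold logic can be sketched with a Gaussian posterior per voxel (the voxel means, standard deviations, and threshold values below are invented for illustration):

```python
import math

# Posterior probability that an effect exceeds the activation threshold s_th,
# assuming a Gaussian posterior per voxel (illustrative numbers).
def prob_exceeds(mu, sd, s_th):
    # P(beta > s_th) under N(mu, sd^2), via the standard normal CDF
    z = (s_th - mu) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))

s_th, p_th = 1.0, 0.95  # effect-size and display thresholds

# Voxel-wise posterior means and standard deviations (made up):
voxels = [(2.0, 0.5), (1.1, 0.4), (0.5, 0.2)]
displayed = [prob_exceeds(mu, sd, s_th) > p_th for mu, sd in voxels]
print(displayed)  # [True, False, False]
```

Note the second voxel: its mean exceeds s_th, but its uncertainty is too large for the 95% confidence criterion, so it is not displayed.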

Slide 39: Bayesian linear regression: model selection
Bayes' rule for a model m: p(β | y, m) = p(y | β, m) p(β | m) / p(y | m).
The normalizing constant is the model evidence:
p(y | m) = ∫ p(y | β, m) p(β | m) dβ,
which scores how well model m as a whole explains the data.
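For the linear-Gaussian model the evidence integral is analytic: assuming a zero prior mean, p(y | m) = N(y; 0, X C_p Xᵀ + C_e). A minimal sketch comparing two models (design, prior, and noise values invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Log model evidence for a linear-Gaussian model: integrating out beta gives
#   p(y | m) = N(y; 0, X C_p X' + C_e)   (zero prior mean assumed)
def log_evidence(X, y, C_p, C_e):
    S = X @ C_p @ X.T + C_e              # marginal covariance of y
    _, logdet = np.linalg.slogdet(S)
    quad = y @ np.linalg.solve(S, y)
    n = y.size
    return -0.5 * (n * np.log(2 * np.pi) + logdet + quad)

# Compare a 1-basis and a 2-basis model on data generated from the 2-basis model.
n = 40
x = np.linspace(0, 1, n)
X2 = np.column_stack([np.ones(n), x])
y = X2 @ np.array([0.5, 3.0]) + 0.3 * rng.standard_normal(n)

X1 = X2[:, :1]
C_e = 0.09 * np.eye(n)
le1 = log_evidence(X1, y, 4.0 * np.eye(1), C_e)
le2 = log_evidence(X2, y, 4.0 * np.eye(2), C_e)
print(le2 > le1)  # the evidence favours the generating model
```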

Slide 40: aMRI segmentation
PPMs of each voxel belonging to grey matter, white matter, or CSF.

Slide 41: Dynamic Causal Modelling: generative model for fMRI and ERPs
A neural state equation describes neural activity driven by experimental inputs.
fMRI: neural model with 1 state variable per region, a bilinear state equation, and no propagation delays; a hemodynamic forward model maps neural activity to BOLD.
ERPs: neural model with 8 state variables per region, a nonlinear state equation, and propagation delays; an electric/magnetic forward model maps neural activity to EEG/MEG/LFP.
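The bilinear neural state equation used in DCM for fMRI, dx/dt = (A + Σ_j u_j B_j) x + C u, can be sketched with a toy two-region network (all connection values, region labels, and inputs below are invented for illustration; real DCM estimates these parameters and adds the hemodynamic forward model):

```python
import numpy as np

# Bilinear neural state equation:  dx/dt = (A + sum_j u_j B_j) x + C u
# Toy two-region network; all parameter values are made up for illustration.
A = np.array([[-1.0, 0.0],
              [ 0.4, -1.0]])      # intrinsic connections (self-decay on the diagonal)
B = [np.zeros((2, 2)),
     np.array([[0.0, 0.0],
               [0.3, 0.0]])]      # input 2 modulates the region-1 -> region-2 connection
C = np.array([[1.0, 0.0],
              [0.0, 0.0]])        # input 1 (e.g. photic) drives region 1

def simulate(u, dt=0.01, n_steps=1000):
    """Euler integration of the bilinear state equation.
    u maps a step index to the input vector."""
    x = np.zeros(2)
    traj = np.zeros((n_steps, 2))
    for t in range(n_steps):
        ut = u(t)
        J = A + sum(ut[j] * B[j] for j in range(len(B)))
        x = x + dt * (J @ x + C @ ut)
        traj[t] = x
    return traj

# Driving input on throughout; modulatory (attention-like) input on in the second half.
traj = simulate(lambda t: np.array([1.0, 1.0 if t >= 500 else 0.0]))
print(traj[-1])  # region-2 activity is higher once modulation is on
```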

Slide 42: Bayesian Model Selection for fMRI (Stephan et al., NeuroImage, 2008)
Four candidate models m1-m4 (networks over V1, V5, and PPC with stimulus and attention inputs) differ in which connections attention modulates. A bar plot of the marginal likelihoods favours m4, and the estimated effective synaptic strengths are shown for this best model (values ranging from 0.10 to 1.25).

Slide 43: fMRI time series analysis with spatial priors (Penny et al., 2005)
Generative model: observations Y; GLM coefficients β with a prior precision governed by a spatial precision matrix (controlling the degree of smoothness); AR coefficients modelling correlated noise, with their own prior precision; and a prior precision on the data noise.
Figures compare the ML estimate of β, the VB estimate of β under the spatial prior, and simply smoothing Y (as in RFT analyses), overlaid on the aMRI.

Slide 44: Spatial priors: posterior probability maps
The posterior density q(β_n) gives the probability of an effect given the data: its mean is the size of the effect (Cbeta_*.img) and its covariance the uncertainty (std dev: SDbeta_*.img).
The probability mass p_n above the activation threshold gives the PPM (spmP_*.img); display only voxels that exceed e.g. 95%.

Slide 45: Spatial priors: Bayesian model selection
Compute the log-evidence for each model (1 ... K) and each subject (1 ... N), producing per-subject log-evidence maps.

Slide 46: Spatial priors: Bayesian model selection, continued (Joao et al, 2009)
From the log-evidence maps for models 1 ... K and subjects 1 ... N, BMS maps are computed: a PPM and an EPM for model k, i.e. the probability that model k generated the data at each voxel.

Slide 47: Reminder
Responses to uncertainty: long-term vs short-term memory models.

Slide 48: Comparing the two models
The short-term and long-term memory models are built from the stimulus onsets (missed trials excluded). Information-theoretic (IT) indices: H = entropy, h = surprise, I = mutual information, i = mutual surprise. As the figure shows, the IT indices are smoother.
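An illustrative version of the observer model from slide 6: surprise h = -log p(event), with p estimated either from a limited window of past events (short-term memory) or from the full history (long-term memory). The window length, add-one smoothing, and event sequence are all invented for illustration:

```python
import math
from collections import Counter

# Illustrative observer model: surprise h = -log p(event), where p is estimated
# from past events within a memory window (counts with add-one smoothing).
# A short window plays the role of short-term memory; the full history, long-term.
def surprise_series(events, window, n_symbols=4):
    out = []
    for t, e in enumerate(events):
        past = events[max(0, t - window):t]
        counts = Counter(past)
        p = (counts[e] + 1) / (len(past) + n_symbols)  # add-one smoothing
        out.append(-math.log(p))
    return out

events = [1, 4, 3, 2, 1, 1, 2, 3, 4, 1, 2, 1, 3, 1, 4, 2, 1, 3, 2, 1]
h_short = surprise_series(events, window=3)
h_long = surprise_series(events, window=len(events))
print(h_short[:3], h_long[:3])
```

These trial-by-trial surprise series are the kind of regressors that can be compared against fMRI responses to ask which extent of memory best explains each region.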

Slide 49: Group data: Bayesian Model Selection maps
Regions best explained by the short-term memory model include primary visual cortex; regions best explained by the long-term memory model include frontal cortex (executive control).

Slide 50
Thank you.
