Mixture Models with Adaptive Spatial Priors


1 Mixture Models with Adaptive Spatial Priors
Will Penny and Karl Friston. Acknowledgments: Stefan Kiebel and John Ashburner. The Wellcome Department of Imaging Neuroscience, UCL.

2 Data transformations: the standard SPM pipeline
[Figure: image time-series → realignment → normalisation (template) → smoothing (kernel) → general linear model (design matrix, parameter estimates) → statistical inference with Gaussian field theory, thresholded at p < 0.05 → statistical parametric map (SPM)]
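For orientation, the mass-univariate model behind this pipeline is the standard GLM; a sketch in conventional notation (these symbols are our own, not taken from the slides):

    y_i = X \beta_i + \varepsilon_i, \qquad \varepsilon_i \sim \mathcal{N}(0, \sigma_i^2 I)

where y_i is the (smoothed) time series at voxel i and X the design matrix; a contrast of the estimated \beta_i is tested at every voxel and the resulting statistic map is thresholded at p < 0.05 using Gaussian field theory.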

3 Data transformations: the same pipeline without smoothing
[Figure: image time-series → realignment → normalisation (template) → general linear model (design matrix, parameter estimates) → statistical inference with Gaussian field theory, thresholded at p < 0.05 → statistical parametric map (SPM)]

4 Data transformations: replacing the single GLM with mixtures of GLMs
[Figure: image time-series → realignment → normalisation (template) → mixtures of general linear models (design matrix/matrices; cluster size, position and shape) → statistical inference with Gaussian field theory, thresholded at p < 0.05 → statistical parametric map (SPM)]

5 Data transformations: the proposed pipeline
[Figure: image time-series → realignment → normalisation (template) → mixtures of general linear models (design matrix/matrices; cluster size, position and shape) → posterior probability map (PPM)]

6 Overview
Overall framework - generative model
Parameter estimation - EM algorithm
Inference - Posterior Probability Maps (PPMs)
Model order selection - how many clusters?
Auditory and face processing data

7 Cluster-Level Analysis
The fundamental quantities of interest are the properties of spatial clusters of activation.

8 Generative Model
We have ACTIVE components, which describe spatially localised clusters of activity whose temporal signature is correlated with the activation paradigm. We have NULL components, which describe spatially distributed background activity that is temporally uncorrelated with the paradigm. At each voxel and time point, the fMRI data are a mixture of ACTIVE and NULL components.
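Written out, the model described above is a mixture at every voxel; a sketch in our own notation (not the slide's):

    p(y_{it}) = \sum_{k=0}^{K} \lambda_{ik} \, p(y_{it} \mid \theta_k), \qquad \sum_{k=0}^{K} \lambda_{ik} = 1

where y_{it} is the datum at voxel i and scan t, k = 0 indexes the NULL component, k = 1..K the ACTIVE components, and \lambda_{ik} is the probability that voxel i belongs to component k.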

9 Generative Model
[Figure: spatial layout of the components, annotated S1, r0, m1, S2, r1, m2, r2 (m: cluster centres, S: spatial covariances)]

10 Generative Model At each voxel i and time point t
1. Select component k with probability

11 Generative Model At each voxel i and time point t
1. Select component k with probability given by the spatial prior.

12 Generative Model At each voxel i and time point t
1. Select component k with probability given by the spatial prior. 2. Draw a sample from component k's temporal model.

13 Generative Model At each voxel i and time point t
1. Select component k with probability given by the spatial prior. 2. Draw a sample from component k's temporal model, a general linear model.

14 Generative Model At each voxel i and time point t
1. Select component k with probability given by the spatial prior. 2. Draw a sample from component k's temporal model, a general linear model.
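One plausible way to make these two steps concrete (a sketch under our own assumptions about the parameterisation; the talk's exact forms may differ) is to take the spatial prior of an ACTIVE component to be a normalised Gaussian blob over voxel positions and the temporal model to be a GLM:

    \pi_{ik} \propto \mathcal{N}(v_i; m_k, S_k) \quad \text{(plus a spatially flat term for the NULL component)}, \qquad \sum_k \pi_{ik} = 1
    y_{it} \mid k \sim \mathcal{N}(x_t^\top w_k, \sigma_k^2)

where v_i is the position of voxel i, m_k and S_k are component k's centre and spatial covariance, x_t is row t of the design matrix and w_k holds component k's regression weights.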

15 Generative Model Scan 3

16 Generative Model Scan 4

17 Generative Model Scan 8

18 Generative Model Scan 9

19 Parameter Estimation
Expectation-Maximisation (EM) algorithm

20 Parameter Estimation
Expectation-Maximisation (EM) algorithm
E-Step

21 Parameter Estimation
Expectation-Maximisation (EM) algorithm
E-Step

22 Parameter Estimation
Expectation-Maximisation (EM) algorithm
E-Step: temporal likelihood, spatial prior, posterior normalizer
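In that notation, the E-step named on the slide computes a posterior responsibility for every voxel and component: the temporal likelihood times the spatial prior, divided by a normalising constant (again a sketch, not the slide's exact equation):

    q_{ik} = \frac{\pi_{ik} \prod_t \mathcal{N}(y_{it}; x_t^\top w_k, \sigma_k^2)}{\sum_{k'} \pi_{ik'} \prod_t \mathcal{N}(y_{it}; x_t^\top w_{k'}, \sigma_{k'}^2)}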

23 Parameter Estimation
Expectation-Maximisation (EM) algorithm
M-Step: prototype time series for component k

24 Parameter Estimation
Expectation-Maximisation (EM) algorithm
M-Step: prototype time series for component k; a variant of Iteratively Reweighted Least Squares

25 Parameter Estimation
Expectation-Maximisation (EM) algorithm
M-Step: prototype time series for component k; a variant of Iteratively Reweighted Least Squares; mk and Sk updated using a line search
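For concreteness, here is a hypothetical numpy sketch of one EM pass for such a mixture of GLMs (our own code and variable names, not the authors' implementation; the line-search update of the spatial parameters m_k and S_k is only noted in a comment):

import numpy as np

def em_step(Y, X, pi, W, sigma2):
    # Y: (T, V) fMRI data, X: (T, P) design matrix, pi: (V, K) spatial priors (rows sum to 1),
    # W: (P, K) regression weights, sigma2: (K,) noise variances.
    T, V = Y.shape
    K = pi.shape[1]

    # E-step: responsibility of component k for voxel i
    # (temporal likelihood x spatial prior, normalised over components).
    log_r = np.log(pi + 1e-12)
    for k in range(K):
        resid = Y - X @ W[:, [k]]                    # (T, V) residuals under component k
        log_r[:, k] += (-0.5 * (resid ** 2).sum(axis=0) / sigma2[k]
                        - 0.5 * T * np.log(2 * np.pi * sigma2[k]))
    log_r -= log_r.max(axis=1, keepdims=True)
    r = np.exp(log_r)
    r /= r.sum(axis=1, keepdims=True)                # (V, K) posterior labels

    # M-step: prototype time series per component, then a least-squares GLM fit to it
    # (equivalent to a responsibility-weighted least-squares fit over voxels).
    Xpinv = np.linalg.pinv(X)                        # (P, T)
    for k in range(K):
        ybar = Y @ (r[:, k] / r[:, k].sum())         # (T,) prototype time series
        W[:, k] = Xpinv @ ybar
        resid = Y - X @ W[:, [k]]
        sigma2[k] = ((resid ** 2) * r[:, k]).sum() / (T * r[:, k].sum())

    # The spatial prior parameters m_k and S_k would be re-estimated here,
    # e.g. by the line search mentioned on the slide (omitted in this sketch).
    return r, W, sigma2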

26 Auditory Data
[Figure: SPM and MGLM (K=1) activation maps]

27 Auditory Data
[Figure: SPM and MGLM (K=2) activation maps]

28 Auditory Data
[Figure: SPM and MGLM (K=3) activation maps]

29 Auditory Data
[Figure: SPM and MGLM (K=4) activation maps]

30 How many components?
Integrate out the dependence on the model parameters, θ. This can be approximated using the Bayesian Information Criterion (BIC). Then use Bayes' rule to pick the optimal model order.
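A standard form of this approximation, in our notation (ν_K free parameters, N data points; not copied from the slide):

    \log p(D \mid K) = \log \int p(D \mid \theta, K)\, p(\theta \mid K)\, d\theta \approx \log p(D \mid \hat{\theta}, K) - \frac{\nu_K}{2} \log N = \mathrm{BIC}_K
    p(K \mid D) \propto p(D \mid K)\, p(K)

so with a flat prior over K, the model order with the highest BIC is selected.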

31 How many components?
[Figure: log-likelihood, BIC and P(D|K) plotted against the number of components K]

32 Auditory Data
[Figure: MGLM (K=2) activation map; diffuse activation, t=15]

33 Auditory Data
[Figure: MGLM (K=3) activation maps; focal activations, t=20 and t=14]

34 Smoothing can remove signal
[Figure: smoothing will remove signal here; spatial priors adapt to shape]

35 Conclusions
SPM is a special case of our model
We don't need to smooth the data and risk losing signal
Principled method for pooling data
Effective connectivity
Temporal as well as spatial clustering
Spatial hypothesis testing (e.g. stroke)
Extension to multiple subjects

