
1
Pre-processing for “Voxel-Based Morphometry” John Ashburner, The Wellcome Trust Centre for Neuroimaging, 12 Queen Square, London, UK.

2
Contents Introduction Segmentation DARTEL Registration

3
Voxel-based Morphometry Pre-process the images of many subjects to generate spatially normalised grey matter maps of each subject. Smooth spatially. Perform voxel-wise statistics. Try to interpret the findings in terms of volumetric differences.

4
Segment into different tissue classes Spatially Normalize – with scaling by Jacobian determinant Smooth Spatially Mass-univariate statistical testing Inference via Random Field Theory

5
Smoothing [Figure: a map before convolution, convolved with a circle, and convolved with a Gaussian.] Each voxel after smoothing effectively becomes the result of applying a weighted region of interest (ROI).
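The weighted-ROI interpretation can be sketched numerically: convolving with a normalised Gaussian kernel makes each output voxel a weighted average of its neighbourhood. A minimal 1-D numpy example (illustrative only, not SPM's smoothing code):

```python
import numpy as np

def gaussian_kernel(fwhm, n):
    # Convert full-width at half-maximum to a standard deviation.
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    x = np.arange(n) - (n - 1) / 2.0
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()              # normalise so the ROI weights sum to one

def smooth(signal, fwhm):
    k = gaussian_kernel(fwhm, 2 * int(3 * fwhm) + 1)
    # Each output sample is a weighted average of its neighbourhood,
    # i.e. a weighted ROI centred on that voxel.
    return np.convolve(signal, k, mode="same")

gm = np.zeros(64)
gm[32] = 1.0                        # a single "grey matter" voxel
s = smooth(gm, fwhm=8.0)            # its volume is spread over the ROI
```

Because the kernel sums to one, the total "volume" of the impulse is preserved; it is merely redistributed over the neighbourhood.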

6
Possible Explanations for Findings Thickening Thinning Folding Mis-classify Mis-register

7
Contents Introduction Segmentation –Mixture of Gaussians –Bias correction –Warping to match tissue probability maps DARTEL Registration

8
Tissue Segmentation Circularity: –Registration is helped by tissue classification and bias correction. –Tissue classification is helped by registration and bias correction. –Bias correction is helped by registration and tissue classification. The solution is to put everything in the same generative model. –A MAP solution is found by repeatedly alternating among classification, bias correction and registration steps. This should produce “better” results than simple serial application of each component.

9
A Generative Model A model of how the data may have been generated, which comprises: –Mixture of Gaussians (MOG) –Bias correction –Non-linear inter-subject registration [Graphical model: observed voxel intensities y_1 … y_I with corresponding latent tissue class labels c_1 … c_I.]

10
Mixture of Gaussians (MOG) Tissue classification is based on a Mixture of Gaussians model (MOG), which represents the intensity probability density by a number of Gaussian distributions. [Figure: histogram of image intensities (frequency vs intensity) with fitted Gaussians.]

11
Belonging Probabilities Belonging probabilities are assigned by normalising so that they sum to one at each voxel.
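The normalisation step can be sketched directly. In this hypothetical two-class numpy example (values invented for illustration, not SPM's implementation), each class's weighted Gaussian density is divided by the sum over classes, yielding belonging probabilities that sum to one at every voxel:

```python
import numpy as np

def gauss(y, mu, sigma):
    # Gaussian probability density.
    return np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Hypothetical intensities and a two-class MOG (e.g. GM and WM).
y     = np.array([55.0, 60.0, 80.0, 95.0, 100.0])
mix   = np.array([0.5, 0.5])         # mixing proportions
mu    = np.array([60.0, 100.0])      # class means
sigma = np.array([10.0, 10.0])       # class standard deviations

# Weighted likelihood of each class at each voxel...
p = np.stack([mix[k] * gauss(y, mu[k], sigma[k]) for k in range(2)], axis=1)
# ...normalised to one: the "belonging probabilities".
b = p / p.sum(axis=1, keepdims=True)
```

Voxels with intensity near 60 end up belonging mostly to the first class, those near 100 mostly to the second, and intermediate intensities are shared between the two.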

12
Non-Gaussian Intensity Distributions Multiple Gaussians per tissue class allow non-Gaussian intensity distributions to be modelled. –E.g. accounting for partial volume effects

13
Modelling a Bias Field [Figure: corrupted image, estimated bias field, and corrected image.]

14
Tissue Probability Maps Tissue probability maps (TPMs) are used instead of the proportion of voxels in each Gaussian as the prior. ICBM Tissue Probabilistic Atlases. These tissue probability maps are kindly provided by the International Consortium for Brain Mapping, John C. Mazziotta and Arthur W. Toga.

15
Deforming the Tissue Probability Maps Tissue probability images are deformed so that they can be overlaid on the image to be segmented.

16
Optimisation The “best” parameters are those that maximise the log-probability; optimisation involves finding them. Begin with starting estimates, and repeatedly change them so that the objective function improves each time (the negative log-probability decreases).

17
Steepest Descent [Figure: descent path from the starting estimate to the optimum.] Alternate between optimising different groups of parameters.

18
Tissue probability maps of GM and WM, from spatially normalised BrainWeb phantoms (T1, T2 and PD). Cocosco, Kollokian, Kwan & Evans. “BrainWeb: Online Interface to a 3D MRI Simulated Brain Database”. NeuroImage 5(4):S425 (1997).

19
Contents Introduction Segmentation DARTEL Registration –Scaling and squaring –Optimisation –Warping GM and WM images to their average

20
Parameterization Diffeomorphic Anatomical Registration Through Exponentiated Lie Algebra. Deformations are parameterized by a single flow field, which is considered to be constant in time. Not really a proper Lie group; often referred to as a one-parameter subgroup.

21
Euler Integration Parameterising the deformation: φ^(0)(x) = x, φ^(1)(x) = x + ∫_{t=0}^{1} u(φ^(t)(x)) dt, where u is a flow field to be estimated. Scaling and squaring is used to generate deformations. –c.f. matrix exponentiation

22
Euler integration The differential equation is dφ^(t)(x)/dt = u(φ^(t)(x)). By Euler integration: φ^(t+h) = φ^(t) + h u(φ^(t)). Equivalent to: φ^(t+h) = (x + hu) ∘ φ^(t).

23
For (e.g.) 8 time steps:
Simple integration (7 compositions):
φ^(1/8) = x + u/8
φ^(2/8) = φ^(1/8) ∘ φ^(1/8)
φ^(3/8) = φ^(1/8) ∘ φ^(2/8)
φ^(4/8) = φ^(1/8) ∘ φ^(3/8)
φ^(5/8) = φ^(1/8) ∘ φ^(4/8)
φ^(6/8) = φ^(1/8) ∘ φ^(5/8)
φ^(7/8) = φ^(1/8) ∘ φ^(6/8)
φ^(8/8) = φ^(1/8) ∘ φ^(7/8)
Scaling and squaring (3 compositions):
φ^(1/8) = x + u/8
φ^(2/8) = φ^(1/8) ∘ φ^(1/8)
φ^(4/8) = φ^(2/8) ∘ φ^(2/8)
φ^(8/8) = φ^(4/8) ∘ φ^(4/8)
A similar procedure is used for the inverse, starting with φ^(-1/8) = x − u/8.
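The saving (k squarings in place of 2^k − 1 serial compositions) mirrors scaling and squaring for the matrix exponential. A bare-bones numpy sketch of the matrix version (first-order base step for illustration; production routines such as Padé-based expm are more careful):

```python
import numpy as np

def expm_scaling_squaring(A, k=20):
    # Scale: one small "Euler step" of the matrix ODE dX/dt = A X.
    B = np.eye(A.shape[0]) + A / 2.0**k
    # Square: k compositions stand in for 2^k small steps.
    for _ in range(k):
        B = B @ B
    return B

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])          # generator of a rotation
R = expm_scaling_squaring(A)         # ≈ rotation by 1 radian
```

Since A^2 = −I here, the exact answer is cos(1)·I + sin(1)·A, and the scaled-and-squared result matches it closely even with only a first-order base step.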

24
Scaling and squaring example

25
Deformations at different times

26
Jacobians Jacobian fields can also be obtained by scaling and squaring. If warps are composed by φ_C = φ_B ∘ φ_A, then Jacobian matrices are obtained by J_φC = (J_φB ∘ φ_A) J_φA.

27
Jacobian determinants remain positive (almost)

28
See also… C. Moler and C. van Loan. “Nineteen Dubious Ways to Compute the Exponential of a Matrix, Twenty-Five Years Later”. SIAM Review 45(1):3-49 (2003). V. Arsigny, O. Commowick, X. Pennec and N. Ayache. “A Log-Euclidean Polyaffine Framework for Locally Rigid or Affine Registration”. Proc. of the 3rd International Workshop on Biomedical Image Registration (WBIR'06), 2006, pp. 120-127. LNCS vol 4057. Springer-Verlag, Utrecht, NL. V. Arsigny, O. Commowick, X. Pennec and N. Ayache. “A Log-Euclidean Framework for Statistics on Diffeomorphisms”. Proc. of the 9th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI'06), 2006, pp. 924-931. LNCS 4190. Springer-Verlag, Berlin, Germany. M. Hernandez, M. N. Bossa and S. Olmos. “Registration of anatomical images using geodesic paths of diffeomorphisms parameterized with stationary vector fields”. IEEE Workshop on Math. Meth. in Biom. Image Anal. (MMBIA'07), 2007.

29
Contents Introduction Segmentation DARTEL Registration –Scaling and squaring –Optimisation –Warping GM and WM images to their average

30
Multinomial Likelihood Term The model is multinomial for matching tissue class images: −log p(t|μ, φ) = −Σ_j Σ_k t_jk log(μ_k(φ_j)), where t are the individual GM, WM and background images, μ the template GM, WM and background, and φ the deformation. A general purpose template should not have regions where log(μ) is −∞.
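The likelihood term is cheap to evaluate. A small numpy sketch with invented values (3 voxels, 3 classes) also shows why the template must stay strictly positive: any voxel with t_jk > 0 where μ_k(φ_j) = 0 would send the negative log-likelihood to infinity.

```python
import numpy as np

# Hypothetical data: 3 voxels, 3 tissue classes (GM, WM, background).
t  = np.array([[1.0, 0.0, 0.0],     # voxel responsibilities t_jk
               [0.2, 0.8, 0.0],
               [0.0, 0.1, 0.9]])
mu = np.array([[0.7, 0.2, 0.1],     # warped template values mu_k(phi_j)
               [0.3, 0.6, 0.1],
               [0.1, 0.2, 0.7]])

# -log p(t | mu, phi) = -sum_j sum_k t_jk log(mu_k(phi_j))
nll = -np.sum(t * np.log(mu))
```

Each row of mu sums to one (the template is itself a set of belonging probabilities), and the better the warped template predicts the individual's tissue maps, the smaller nll becomes.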

31
Prior Term ½ uᵀHu. DARTEL has three different models for H: –Membrane energy –Linear elasticity –Bending energy. H is very sparse. [Figure: an example H for 2D registration of 6×6 images (linear elasticity).]

32
Regularization models [Figure: images registered using a small-deformation approximation, under “membrane energy” and “bending energy” regularisation.]

33
Optimization Uses Gauss-Newton. –Requires a matrix solution to a very large set of equations at each iteration: u^(k+1) = u^(k) − (H+A)^(-1) b –b are the first derivatives of the objective function –A is a sparse matrix of second derivatives –Computed efficiently, making use of scaling and squaring

34
Relaxation To solve Mx = c, split M into E and F, where E is easy to invert and F is more difficult. If M is diagonally dominant (membrane energy): x^(k+1) = E^(-1)(c − F x^(k)). Otherwise regularize (bending or linear elastic energy): x^(k+1) = x^(k) + (E+sI)^(-1)(c − M x^(k)). –Diagonal dominance is when |m_ii| > Σ_{j≠i} |m_ij|.
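A minimal sketch of the first update rule, using the Jacobi splitting (E = diagonal of M, F = off-diagonal part) on a tiny diagonally dominant system with invented values; DARTEL applies the same idea to vastly larger sparse systems:

```python
import numpy as np

# A small diagonally dominant system M x = c (hypothetical values).
M = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
c = np.array([1.0, 2.0, 3.0])

# Split M = E + F with E easy to invert (here: the diagonal).
E = np.diag(np.diag(M))
F = M - E

x = np.zeros(3)
for _ in range(50):
    # x_(k+1) = E^(-1) (c - F x_(k))
    x = np.linalg.solve(E, c - F @ x)
```

Diagonal dominance guarantees the iteration converges; each sweep only requires inverting the easy part E, which is why relaxation suits the inner loop of a multi-grid solver.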

35
M = H + A = E + F. H: second derivatives of the prior term. A: second derivatives of the likelihood term. E: easy to invert. F: more difficult to invert.

36
Full Multi-Grid [Figure: cycle schedule moving between the highest and lowest resolutions.]

37
A Prolongation of low resolution solution to current resolution. Add this to existing solution. Perform a few iterations of relaxation. Restrict residuals down to lower resolution.

38
B Prolongation of low resolution solution to current resolution. Add this to existing solution at current resolution. Perform a few iterations of relaxation. Prolongation of solution to higher resolution.

39
C Restrict high resolution residuals to current resolution. Perform a few iterations of relaxation. Restrict residuals down to lower resolution.

40
E Restrict higher resolution residuals to current resolution. Obtain exact solution by matrix inversion. Prolongation of solution to higher resolution.

41
See also… W. H. Press, S. A. Teukolsky, W. T. Vetterling and B. P. Flannery. Numerical Recipes in C (Second Edition). Cambridge University Press, Cambridge, UK, 1992. –Chapter 15, Section 5 explains Gauss-Newton optimization (Levenberg-Marquardt without the regularisation). –Chapter 19, Section 6 explains the basics of multi-grid methods.

42
Contents Introduction Segmentation DARTEL Registration –Scaling and squaring –Optimisation –Warping GM and WM images to their average

43
Template Generation [Figure: initial average, after a few iterations, and final template.] Iteratively generated from 471 subjects. Began with rigidly aligned tissue probability maps. Regularization was made lighter for later iterations.

44
Generative Model p(φ_1, t_1, φ_2, t_2, φ_3, t_3, …, μ) = p(t_1, φ_1|μ) p(t_2, φ_2|μ) p(t_3, φ_3|μ) … p(μ) = p(t_1|φ_1, μ) p(φ_1) p(t_2|φ_2, μ) p(φ_2) … p(μ). A MAP solution is obtained for the template, which requires p(μ). [Graphical model: template μ with, for each subject i, tissue data t_i and deformation φ_i.]

45
Laplacian Smoothness Priors on the template [Figure: 2D and 3D cases.]

46
Template modelled as softmax of a Gaussian process: μ_k(x) = exp(a_k(x)) / Σ_j exp(a_j(x)). A MAP solution is determined for a, by Gauss-Newton optimisation, using multi-grid.
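The softmax link can be sketched directly. Subtracting the maximum before exponentiating (a standard numerical trick, not specific to DARTEL) keeps the computation stable, and the result is strictly positive everywhere, so log μ never reaches −∞:

```python
import numpy as np

def softmax(a):
    # Subtract the max for numerical stability; the result is unchanged
    # because the shift cancels between numerator and denominator.
    e = np.exp(a - a.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical log-template values a_k(x) at one voxel (GM, WM, background).
a  = np.array([2.0, 1.0, -1.0])
mu = softmax(a)
```

The outputs sum to one, so the template itself is a set of tissue probabilities, and the smoothness prior can be placed on the unconstrained field a rather than on μ directly.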

47
ML and MAP templates from 6 subjects [Figure: rigidly and nonlinearly aligned cases; ML, MAP and log-MAP templates.]

48
[Slides 48–56: the 471-subject average template shown interleaved with individual subjects 1, 2 and 3.]

57
Preprocessing with DARTEL

59
[Figure: u and Hu.]

60
“Initial momentum” Variable velocity framework (as in LDDMM)


62
Determining the amount of regularisation The matrices are too big for Bayesian variance component estimation, so cross-validation was used. Smooth an image by different amounts and see how well it predicts other images: log p(t|μ) = Σ_j Σ_k t_jk log(μ_jk). [Figure: prediction accuracy for rigidly aligned vs nonlinearly registered images.]
