
1 DARTEL John Ashburner 2008

2 Overview
Motivation
 – Dimensionality
 – Inverse-consistency
Principles
Geeky stuff
Example
Validation
Future directions

3 Motivation
More precise inter-subject alignment
 – Improved fMRI data analysis: better group analysis, more accurate localization
 – Improved computational anatomy: more easily interpreted VBM, better parameterization of brain shapes
 – Other applications: tissue segmentation, structure labeling

4 Image Registration
Figure out how to warp one image to match another. Normally, all subjects' scans are matched with a common template.

5 Current SPM approach
Only about 1000 parameters
 – Unable to model detailed deformations

6 A one-to-one mapping
Many models simply add a smooth displacement to an identity transform
 – One-to-one mapping not enforced
Inverses approximately obtained by subtracting the displacement
 – Not a real inverse
Small deformation approximation

7 Overview
Motivation
Principles
Optimisation
Group-wise Registration
Validation
Future directions

8 Principles Diffeomorphic Anatomical Registration Through Exponentiated Lie Algebra Deformations parameterized by a single flow field, which is considered to be constant in time.

9 DARTEL: Parameterising the deformation
φ^(0)(x) = x
φ^(1)(x) = x + ∫ u(φ^(t)(x)) dt, integrated from t = 0 to 1
u is a flow field to be estimated

10 Euler integration
The differential equation is dφ^(t)(x)/dt = u(φ^(t)(x))
By Euler integration: φ^(t+h) = φ^(t) + h u(φ^(t))
Equivalent to: φ^(t+h) = (x + hu) ∘ φ^(t)
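
As a concrete illustration of the recursion above (this is not SPM/DARTEL code), here is a minimal Python sketch that Euler-integrates a made-up one-dimensional flow field from t = 0 to t = 1:

```python
import numpy as np

def euler_integrate(u, x0, n_steps=8):
    """Integrate d(phi)/dt = u(phi) with phi(0) = x0 using Euler steps."""
    phi = np.asarray(x0, dtype=float)
    h = 1.0 / n_steps
    for _ in range(n_steps):
        phi = phi + h * u(phi)          # phi(t+h) = phi(t) + h*u(phi(t))
    return phi

# Toy, constant-in-time flow field chosen purely for illustration
u = lambda x: 0.5 * np.sin(x)
print(euler_integrate(u, x0=np.linspace(0.0, np.pi, 5)))
```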

11 Flow Field

12 For (e.g.) 8 time steps

Simple integration
φ^(1/8) = x + u/8
φ^(2/8) = φ^(1/8) ∘ φ^(1/8)
φ^(3/8) = φ^(1/8) ∘ φ^(2/8)
φ^(4/8) = φ^(1/8) ∘ φ^(3/8)
φ^(5/8) = φ^(1/8) ∘ φ^(4/8)
φ^(6/8) = φ^(1/8) ∘ φ^(5/8)
φ^(7/8) = φ^(1/8) ∘ φ^(6/8)
φ^(8/8) = φ^(1/8) ∘ φ^(7/8)
(7 compositions)

Scaling and squaring
φ^(1/8) = x + u/8
φ^(2/8) = φ^(1/8) ∘ φ^(1/8)
φ^(4/8) = φ^(2/8) ∘ φ^(2/8)
φ^(8/8) = φ^(4/8) ∘ φ^(4/8)
(3 compositions)

A similar procedure is used for the inverse, starting with φ^(-1/8) = x − u/8.
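
The difference in the number of compositions can be sketched with NumPy/SciPy. This is an illustrative re-implementation rather than the SPM code: deformations are assumed to be stored as (2, ny, nx) grids of voxel coordinates, and composition is done by linear interpolation.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def compose(phi_a, phi_b):
    """(phi_a o phi_b)(x): resample phi_a at the points given by phi_b."""
    return np.stack([map_coordinates(phi_a[d], phi_b, order=1, mode='nearest')
                     for d in range(2)])

def exponentiate(u, n_squarings=3):
    """Scaling and squaring: phi(1) from a stationary flow u of shape (2, ny, nx)."""
    ny, nx = u.shape[1:]
    ident = np.array(np.meshgrid(np.arange(ny), np.arange(nx),
                                 indexing='ij'), dtype=float)
    phi = ident + u / 2 ** n_squarings   # phi(1/8) = x + u/8 when n_squarings = 3
    for _ in range(n_squarings):         # 3 compositions instead of 7
        phi = compose(phi, phi)
    return phi
```

Replacing the squaring loop with seven successive compositions of φ^(1/8) reproduces the "simple integration" column above.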

13 Scaling and squaring example

14 DARTEL

15 Jacobian determinants remain positive

16 Overview
Motivation
Principles
Optimisation
 – Multi-grid
Group-wise Registration
Validation
Future directions

17 Registration objective function
Simultaneously minimize the sum of:
 – Likelihood component, from the sum of squared differences:
   ½ Σ_i ( g(x_i) − f(φ^(1)(x_i)) )²
   where φ^(1) is parameterized by u
 – Prior component, a measure of deformation roughness:
   ½ uᵀHu
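
A minimal sketch of this objective for 2-D images, assuming g and f are NumPy arrays, phi is a deformation grid as in the earlier sketch, u is the flattened flow field and H is a sparse roughness penalty; all of the names are illustrative rather than SPM's.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def objective(g, f, phi, u, H):
    """0.5 * sum_i (g(x_i) - f(phi(x_i)))^2  +  0.5 * u' H u."""
    f_warped = map_coordinates(f, phi, order=1, mode='nearest')
    likelihood = 0.5 * np.sum((g - f_warped) ** 2)
    prior = 0.5 * u @ (H @ u)
    return likelihood + prior
```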

18 Regularization model
DARTEL has three different models for H:
 – Membrane energy
 – Linear elasticity
 – Bending energy
H is very sparse.
(Figure: an example H for 2D registration of 6×6 images, using linear elasticity.)
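
To see how sparse such an H can be, the sketch below assembles a membrane-energy penalty (squared first differences of each displacement component) for a 6×6 grid. The discretisation is a plausible one chosen for illustration, not necessarily the one used in SPM.

```python
import numpy as np
import scipy.sparse as sp

n = 6                                        # 6x6 grid, as on the slide
D = sp.diags([-np.ones(n - 1), np.ones(n - 1)], [0, 1], shape=(n - 1, n))
I = sp.identity(n)
Dx = sp.kron(I, D)                           # first differences along x
Dy = sp.kron(D, I)                           # first differences along y
H1 = Dx.T @ Dx + Dy.T @ Dy                   # membrane energy, one component
H = sp.block_diag([H1, H1]).tocsr()          # both displacement components

print(H.shape, '-', H.nnz, 'nonzeros out of', H.shape[0] ** 2)
```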

19 Regularization models

20 Optimisation
Uses Levenberg-Marquardt
 – Requires a matrix solution to a very large set of equations at each iteration:
   u^(k+1) = u^(k) − (H + A)⁻¹ b
 – b is the vector of first derivatives of the objective function
 – A is a sparse matrix of second derivatives
 – Computed efficiently, making use of scaling and squaring
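
The update itself is a sparse linear solve. A hedged sketch, with A and b assumed to be the precomputed second- and first-derivative terms; setting damping to zero reproduces the formula above, while a positive value gives the Levenberg-Marquardt flavour.

```python
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def lm_update(u, H, A, b, damping=0.0):
    """u(k+1) = u(k) - (H + A + damping*I)^-1 b."""
    M = (H + A + damping * sp.identity(H.shape[0])).tocsc()
    return u - spsolve(M, b)
```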

21 Relaxation
To solve M x = c, split M into E and F (M = E + F), where
 – E is easy to invert
 – F is more difficult
Sometimes: x^(k+1) = E⁻¹(c − F x^(k))
Otherwise: x^(k+1) = x^(k) + (E + sI)⁻¹(c − M x^(k))
Gauss-Seidel when done in place; Jacobi's method if not.
Fits high frequencies quickly, but low frequencies slowly.
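
A minimal sketch of the splitting idea, taking E to be the diagonal of M (the Jacobi choice) purely for illustration; sweeping through the entries in place instead would give Gauss-Seidel.

```python
import numpy as np

def relax(M, c, x, n_iters=10):
    """Jacobi relaxation: x <- x + E^-1 (c - M x), with E = diag(M)."""
    E_inv = 1.0 / M.diagonal()          # works for dense or SciPy sparse M
    for _ in range(n_iters):
        x = x + E_inv * (c - M @ x)
    return x
```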

22 H+A = E+F

23 Full Multi-Grid
(Figure: multi-grid schedule running between the highest and lowest resolutions.)
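
Full multi-grid runs recursive V-cycles, starting from the coarsest grid and prolonging the solution upward. The sketch below shows a single V-cycle for a 1-D Poisson-like system, used here only as a stand-in for the real H + A equations; grid sizes of 2^k + 1 are assumed.

```python
import numpy as np

def gauss_seidel(x, b, h, sweeps=3):
    """Gauss-Seidel sweeps for -u'' = b with zero boundary conditions."""
    for _ in range(sweeps):
        for i in range(1, len(x) - 1):
            x[i] = 0.5 * (x[i - 1] + x[i + 1] + h * h * b[i])
    return x

def residual(x, b, h):
    r = np.zeros_like(x)
    r[1:-1] = b[1:-1] - (2 * x[1:-1] - x[:-2] - x[2:]) / (h * h)
    return r

def v_cycle(x, b, h):
    x = gauss_seidel(x, b, h)              # smooth high frequencies
    if len(x) <= 3:
        return x
    r = residual(x, b, h)
    coarse_r = r[::2]                      # restrict residual to the coarse grid
    e = v_cycle(np.zeros_like(coarse_r), coarse_r, 2 * h)
    fine_e = np.zeros_like(x)              # prolong correction by interpolation
    fine_e[::2] = e
    fine_e[1:-1:2] = 0.5 * (e[:-1] + e[1:])
    return gauss_seidel(x + fine_e, b, h)  # smooth again after the correction

n = 65                                     # 2**6 + 1 grid points
b = np.ones(n); b[0] = b[-1] = 0.0
x = v_cycle(np.zeros(n), b, 1.0 / (n - 1))
```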

24 Overview
Motivation
Principles
Optimisation
Group-wise Registration
 – Simultaneous registration of GM & WM
 – Tissue probability map creation
Validation
Future directions

25 Generative Models for Images
Treat the template as a deformable probability density.
 – Consider the intensity distribution at each voxel of lots of aligned images. Each point in the template represents a probability distribution of intensities.
 – Spatially deform this intensity distribution to the individual brain images. The likelihood of the deformations is given by the template (assuming spatial independence of voxels).

26 Generative models of anatomy
Work with tissue class images. Brains come in differing shapes and sizes, so strategies are needed to encode such variability.
(Figure: automatically segmented grey matter images.)

27 Simultaneous registration of GM to GM and WM to WM
(Figure: grey matter and white matter images for the template and for Subjects 1–4.)

28 Template Creation
The template is an average shaped brain.
 – Less bias in subsequent analysis.
The mean is iteratively created using the DARTEL algorithm.
 – Generative model of the data.
 – Multinomial noise model.
(Figures: grey matter and white matter averages of 471 subjects; graphical model linking the template μ to each subject's tissue data t_1…t_5 through deformations ϕ_1…ϕ_5.)

29 Average Shaped Template
For computational anatomy, work in the tangent space of the manifold, using linear approximations.
 – Average-shaped templates give less bias, as the tangent space at this point is a closer approximation.
For spatial normalisation of fMRI, warping to a more average shaped template is less likely to cause signal to disappear.
 – If a structure is very small in the template, then it will be very small in the spatially normalised individuals.
Smaller deformations are needed to match with an average-shaped template.
 – Smaller errors.

30 Average shaped templates
(Figure: linear average, which is not on the Riemannian manifold, versus the average on the Riemannian manifold.)

31 Template
Iteratively generated from 471 subjects. Began with rigidly aligned tissue probability maps. Used an inverse-consistent formulation.
(Figure: initial average, after a few iterations, and the final template.)

32 Grey matter average of 452 subjects – affine

33 Grey matter average of 471 subjects

34 Multinomial Model
The current DARTEL model is multinomial for matching tissue class images:
  log p(t | μ, ϕ) = Σ_j Σ_k t_jk log(μ_k(ϕ_j))
where
  t – individual GM, WM and background
  μ – template GM, WM and background
  ϕ – deformation
A general-purpose template should not have regions where log(μ) is −Inf.
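
A minimal sketch of this log-likelihood, assuming t and mu_warped are arrays of shape (n_voxels, n_classes), with mu_warped holding the template probabilities already resampled through ϕ; the names and the small epsilon guard are illustrative only.

```python
import numpy as np

def multinomial_loglik(t, mu_warped, eps=1e-12):
    """sum_j sum_k t[j, k] * log(mu_warped[j, k])."""
    # eps guards against log(0); a well-built template avoids zeros anyway
    return np.sum(t * np.log(mu_warped + eps))
```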

35 Laplacian Smoothness Priors on the template
 – 2D: nicely scale invariant
 – 3D: not quite scale invariant, but probably close enough

36 Smoothing by solving matrix equations using multi-grid
The template is modelled as the softmax of a Gaussian process:
  μ_k(x) = exp(a_k(x)) / Σ_j exp(a_j(x))
Rather than computing mean images and convolving with a Gaussian, the smoothing is done by maximising a log-likelihood for a MAP solution.
Note that Jacobian transformations are required (cf. modulated VBM) to properly account for expansion/contraction during warping.
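
The softmax keeps every template probability strictly positive, which is what prevents the log(μ) = −Inf problem mentioned on the previous slide. A minimal sketch, with a holding one map per tissue class:

```python
import numpy as np

def softmax_template(a):
    """mu_k(x) = exp(a_k(x)) / sum_j exp(a_j(x)), for a of shape (n_classes, ...)."""
    a = a - a.max(axis=0, keepdims=True)   # subtract the max for numerical stability
    e = np.exp(a)
    return e / e.sum(axis=0, keepdims=True)
```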

37 Determining the amount of regularisation
The matrices are too big for REML estimates, so cross-validation was used.
Smooth an image by different amounts and see how well it predicts other images:
  log p(t | μ) = Σ_j Σ_k t_jk log(μ_jk)
(Figure: prediction accuracy against the amount of smoothing, for rigidly aligned and nonlinearly registered data.)

38 ML and MAP templates from 6 subjects
(Figure: ML and MAP templates, and their logs, for rigidly registered and nonlinearly registered data.)

39–48 (figure slides with no transcript text)

49 Overview
Motivation
Principles
Optimisation
Group-wise Registration
Validation
 – Sex classification
 – Age regression
Future directions

50 Validation
There is no ground truth, so predictive accuracy was examined instead.
 – Can information encoded by the method make predictions?
 – The registration method is blind to the predicted information.
 – An overlap of fMRI results could have been used; instead we chose to see whether the ages and sexes of subjects could be predicted from the deformations.
 – Comparison with the small deformation model.

51 Training and Classifying
(Figure: control training data, patient training data, and unlabelled test subjects.)

52 Classifying
(Figure: controls, patients and unlabelled test subjects, classified by y = f(aᵀx + b).)

53 Support Vector Classifier

54 Support Vector Classifier (SVC)
(Figure: separating hyperplane with the support vectors highlighted.)
a is a weighted linear combination of the support vectors.

55 Nonlinear SVC

56 Support-vector classification
Guess the sexes of 471 subjects from their brain shapes (207 females / 264 males).
Use a random sample of 400 for training, test on the remaining 71, and repeat 50 times.
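
A hedged sketch of this evaluation protocol using scikit-learn rather than the original analysis tools; X is assumed to hold one feature vector per subject derived from the deformations, and y the corresponding sex labels.

```python
import numpy as np
from sklearn.model_selection import ShuffleSplit
from sklearn.svm import SVC

def repeated_split_accuracy(X, y, n_repeats=50, n_train=400):
    """Train a linear SVC on random subsets of n_train subjects, test on the rest."""
    splitter = ShuffleSplit(n_splits=n_repeats, train_size=n_train, random_state=0)
    scores = []
    for train_idx, test_idx in splitter.split(X):
        clf = SVC(kernel='linear').fit(X[train_idx], y[train_idx])
        scores.append(clf.score(X[test_idx], y[test_idx]))
    return float(np.mean(scores))
```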

57 Sex classification results
Small deformation:
 – Linear classifier: 87.0% correct, Kappa =
 – RBF classifier: 87.1% correct, Kappa =
DARTEL:
 – Linear classifier: 87.7% correct, Kappa =
 – RBF classifier: 87.6% correct, Kappa =
An unconvincing improvement.

58 Regression

59 Relevance-vector regression
A Bayesian method, related to SVMs, developed by Mike Tipping.
Guess the ages of 471 subjects from their brain shapes. Use a random sample of 400 for training, test on the remaining 71, and repeat 50 times.

60 Age regression results
Small deformation:
 – Linear regression: RMS error = 7.55, Correlation =
 – RBF regression: RMS error = 6.68, Correlation =
DARTEL:
 – Linear regression: RMS error = 7.90, Correlation =
 – RBF regression: RMS error = 6.50, Correlation =
An unconvincing improvement (slightly worse for linear regression).

61

62 Overview
Motivation
Principles
Optimisation
Group-wise Registration
Validation
Future directions

63 Compare with variable-velocity methods
 – Beg's LDDMM algorithm.
 – Classification/regression from the initial momentum.
Combine with the tissue classification model.
Develop a proper EM framework for generating tissue probability maps.

64

65 (Figure: u and Hu.)

66 Initial momentum Variable velocity framework (as in LDDMM)

67 Initial momentum Variable velocity framework (as in LDDMM)

68 Thank you

