Presentation transcript: Random Field Theory – Mkael Symmonds, Bahador Bahrami

1 Random Field Theory Mkael Symmonds, Bahador Bahrami

2 Random Field Theory Mkael Symmonds, Bahador Bahrami

3 Overview Spatial smoothing – Statistical inference – The multiple comparison problem – …and what to do about it

4 Overview Spatial smoothing – Statistical inference – The multiple comparison problem – …and what to do about it

5 Statistical inference Aim – to decide whether the data represent convincing evidence of the effect we are interested in. How – perform a statistical test across the whole brain volume to tell us how likely our data are to have arisen by chance (under the null distribution).

6 Inference at a single voxel Null hypothesis H0: activation is zero. α = p(t > t-threshold | H0). Example: observed t-value = 2.42; the p-value is the probability of getting a value of t at least as extreme as 2.42 from the t-distribution (= 0.01). With alpha = 0.05, p < α, so we reject the null hypothesis.
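A minimal sketch of this single-voxel test in Python (the degrees of freedom here are a hypothetical value, not stated on the slide):

```python
from scipy import stats

t_value = 2.42
df = 40                      # hypothetical degrees of freedom for illustration
p = stats.t.sf(t_value, df)  # upper-tail probability P(t > 2.42 | H0)
print(p)                     # roughly 0.01, so p < alpha = 0.05 and we reject H0
```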

7 Sensitivity and Specificity Confusion table (TRUTH × ACTION):
H0 true: TN (don't reject) / FP (reject – type I error)
H0 false: FN (don't reject – type II error) / TP (reject)
Specificity = TN/(# H0 true) = TN/(TN+FP) = 1 − α
Sensitivity = TP/(# H0 false) = TP/(TP+FN) = power

8 Many statistical tests In functional imaging there are many voxels, therefore many statistical tests. If we do not know where in the brain our effect will occur, the hypothesis relates to the whole volume of statistics in the brain. We would reject H0 if the entire family of statistical values is unlikely to have arisen from a null distribution – a family-wise hypothesis. The risk of error we are prepared to accept is called the family-wise error (FWE) rate – what is the likelihood that the family of voxel values could have arisen by chance?

9 How to test a family-wise hypothesis? Height thresholding – this can localise significant test results.

10 How to set the threshold? Should we use the same alpha as when we perform inference at a single voxel?

11 Overview Spatial smoothing – Statistical inference – The multiple comparison problem – …and what to do about it

12 How to set the threshold? With an uncorrected single-voxel threshold: lots of significant activations outside of our signal blob!

13 How to set the threshold? So, if we see one t-value above our uncorrected threshold in the family of tests, this is not good evidence against the family-wise null hypothesis. If we are prepared to accept a false positive rate of 5%, we need a threshold such that, for the entire family of statistical tests, there is a 5% chance of there being one or more t-values above that threshold.

14 Bonferroni Correction For one voxel (all values from a null distribution): probability of a result greater than the threshold = α; probability of a result less than the threshold = 1 − α. For n voxels (all values from a null distribution): probability of all n results being less than the threshold = (1 − α)^n; probability of one (or more) tests being greater than the threshold = 1 − (1 − α)^n ≈ n·α (as α is small). This is the FAMILY-WISE ERROR RATE.
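A quick numerical check of this family-wise error formula, using a 10,000-voxel example with a Bonferroni-corrected per-voxel alpha (a sketch, not part of the original slides):

```python
n = 10_000                    # number of voxels (tests)
alpha = 0.05 / n              # Bonferroni-corrected per-voxel alpha
fwe_exact = 1 - (1 - alpha) ** n
fwe_approx = n * alpha
print(fwe_exact, fwe_approx)  # ~0.0488 vs 0.05: the small-alpha approximation holds
```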

15 Bonferroni Correction So, P_FWE ≤ n·α, which gives a per-voxel threshold α = P_FWE / n. Should we use the Bonferroni correction for imaging data?

16 Null hypothesis true: a 100 × 100 voxel image of normally distributed, independent random numbers – 10,000 tests at a 5% FWE rate. Applying the Bonferroni correction gives a threshold of 0.05/10,000 = 0.000005, which corresponds to a z-score of 4.42. We expect only 5 out of 100 such images to have one or more z-scores > 4.42. Now average the 100 × 100 image so that there are effectively only 10 × 10 independent numbers. The appropriate Bonferroni correction is then 0.05/100 = 0.0005, which corresponds to a z-score of 3.29. Only 5/100 such images will have one or more z-scores > 3.29 by chance.
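The independent-voxel case on this slide can be simulated directly (a sketch; the smoothed case would additionally require spatial averaging):

```python
import numpy as np

rng = np.random.default_rng(0)
threshold = 4.42                            # Bonferroni z-threshold for 10,000 voxels at 5% FWE
hits = 0
for _ in range(100):                        # 100 pure-noise images
    img = rng.standard_normal((100, 100))   # 10,000 independent N(0,1) voxels
    if img.max() > threshold:
        hits += 1
print(hits)                                 # expect roughly 5 of the 100 images to exceed the threshold
```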

17 Independent voxels vs. spatially correlated voxels Bonferroni is too conservative for brain images: it assumes independent voxels, but spatial correlation – from spatial pre-processing (smoothing) and physiological correlation – means there are fewer independent observations. How can we tell how many independent observations there are?

18 Overview Spatial smoothing – Statistical inference – The multiple comparison problem – …and what to do about it

19 Spatial smoothing Why do you want to do it? It increases the signal-to-noise ratio, enables averaging across subjects, and allows use of Gaussian random field theory for thresholding.

20 Spatial Smoothing What does it do? It reduces the effect of high-frequency variation in functional imaging data, “blurring sharp edges”.

21 Spatial Smoothing How is it done? Typically in functional imaging a Gaussian smoothing kernel is used: its shape is similar to the normal-distribution bell curve, and its width is usually described by the “full width at half maximum” (FWHM) measure, e.g. a kernel at 10 mm FWHM. (Figure: Gaussian kernel plotted over −5 to 5 mm.)

22 Spatial Smoothing How is it done? The Gaussian kernel defines the shape of the function used successively to calculate a weighted average of each data point with respect to its neighbouring data points. (Figure: raw data × Gaussian function = smoothed data.)

23 Spatial Smoothing How is it done? The Gaussian kernel defines the shape of the function used successively to calculate a weighted average of each data point with respect to its neighbouring data points. (Figure: raw data × Gaussian function = smoothed data.)
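A one-dimensional sketch of this weighted averaging, converting FWHM to the Gaussian sigma via FWHM = sigma · sqrt(8 ln 2) (the voxel size is an assumed value for illustration):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

fwhm_mm = 10.0                  # kernel width, as on the earlier slide
voxel_mm = 2.0                  # hypothetical voxel size, not stated in the slides
sigma_vox = fwhm_mm / (np.sqrt(8 * np.log(2)) * voxel_mm)

raw = np.random.default_rng(1).standard_normal(200)  # raw 1-D data
smoothed = gaussian_filter1d(raw, sigma_vox)          # weighted average of neighbouring points
```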

24 Independent voxels vs. spatially correlated voxels Bonferroni is too conservative for brain images: it assumes independent voxels, but spatial correlation – from spatial pre-processing (smoothing) and physiological correlation – means there are fewer independent observations. How can we tell how many independent observations there are?

25 Overview Spatial smoothing – Statistical inference – The multiple comparison problem – …and what to do about it

26 References Previous MfD slides – An Introduction to Random Field Theory (chapter from Human Brain Mapping), Matthew Brett, Will Penny, Stefan Kiebel – Statistical Parametric Mapping short course lecture on RFT, Tom Nichols

27 Random Field Theory (ii) Methods for Dummies 2008 Mkael Symmonds Bahador Bahrami

28 What is a random field? A random field is a list of random numbers whose values are mapped onto a space (of n dimensions). Values in a random field are usually spatially correlated in one way or another; in its most basic form this might mean that adjacent values do not differ as much as values that are further apart.

29 Why random field? To characterise the properties of our study’s statistical parametric map under the null hypothesis. Null hypothesis = all predictions were wrong; all activations were merely driven by chance; each voxel value is just a random number. What would the probability of getting a certain z-score for a voxel be in this situation?

30 Random Field

31 (Figure: thresholded random fields containing zero or one blob above threshold.)

32 Threshold the field at z = 3 and count blobs. Measurement 1: number of blobs = 4. Measurement 2: number of blobs = 0. Measurement 3: number of blobs = 1. … Measurement N: number of blobs = 2. Average number of blobs = (4 + 0 + 1 + … + 2)/(number of measurements) ≈ the probability of getting a z-score > 3 by chance.
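The blob counting described here can be sketched by thresholding many smoothed noise fields and labelling connected suprathreshold regions (field size, smoothness and the number of simulations are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label

rng = np.random.default_rng(2)
z_thresh = 3.0
counts = []
for _ in range(500):                                        # 500 null "measurements"
    field = gaussian_filter(rng.standard_normal((128, 128)), sigma=3)
    field /= field.std()                                    # rescale so values are roughly N(0,1)
    _, n_blobs = label(field > z_thresh)                    # count connected suprathreshold blobs
    counts.append(n_blobs)
print(np.mean(counts))                                      # average number of blobs expected by chance at z > 3
```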

33 Therefore, for every z-score threshold, the expected number of blobs = the probability of rejecting the null hypothesis erroneously (α).

34 The million-dollar question is: thresholding the random field at which z-score produces an average number of blobs < 0.05? Or: which z-score has a probability of 0.05 of rejecting the null hypothesis erroneously? Any z-score above that will be significant!

35 So, it all comes down to estimating the average number of blobs (that you expect by chance) in your SPM. Random field theory does that for you!

36 Expected number of blobs in a random field depends on… Chosen threshold z-score – Volume of the search region – Roughness (i.e. 1/smoothness) of the search region: the spatial extent of correlation among values in the field, described by FWHM. Volume and roughness are combined into RESELs. Where does SPM get R from? It is calculated from the residuals (RPV.img). Given R and z, RFT calculates the expected number of blobs for you: E(EC) = R (4 ln 2) (2π)^(−3/2) z exp(−z²/2)

37 Probability of family-wise error P_FWE = average number of blobs under the null hypothesis: α = P_FWE = R (4 ln 2) (2π)^(−3/2) z exp(−z²/2)
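Using the two-dimensional formula on the slide, one can search numerically for the z-threshold at which the expected number of blobs drops to 0.05 (the resel count R below is an assumed example value, not taken from the slides):

```python
import numpy as np

def expected_blobs(z, resels):
    """E(EC) for a 2-D Gaussian random field, as given on the slide."""
    return resels * (4 * np.log(2)) * (2 * np.pi) ** (-1.5) * z * np.exp(-z ** 2 / 2)

R = 100.0                                       # hypothetical number of resels
z_grid = np.linspace(2, 6, 4001)
z_thresh = z_grid[np.argmax(expected_blobs(z_grid, R) <= 0.05)]
print(z_thresh)                                 # z at which E(number of blobs) = alpha = 0.05
```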

38

39

40 Thank you References: Brett, Penny & Kiebel, An Introduction to Random Field Theory, chapter from Human Brain Mapping – Will Penny’s slides (http://www.fil.ion.ucl.ac.uk/spm/course/slides05/ppt/infer.ppt) – Jean-Etienne Poirrier’s slides (http://www.poirrier.be/~jean-etienne/presentations/rft/spm-rft-slides-poirrier06.pdf) – Tom Nichols’ lecture in the SPM Short Course (2006)

41 False Discovery Rate E.g. t-scores from regions that truly do and do not activate (o = H0 true, x = H0 false), thresholded at u1. H0 true (o): TN = 7, FP = 3. H0 false (x): FN = 0, TP = 10. At u1: FDR = FP/(# rejected) = 3/13 = 23%; α = FP/(# H0 true) = 3/10 = 30%.

42 False Discovery Rate The same t-scores thresholded at a higher threshold u2. H0 true (o): TN = 9, FP = 1. H0 false (x): FN = 3, TP = 7. At u2: FDR = FP/(# rejected) = 1/8 = 13%; α = FP/(# H0 true) = 1/10 = 10%.
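In practice, FDR control is usually implemented with the Benjamini–Hochberg step-up procedure rather than by counting true and false positives directly; a minimal sketch with made-up p-values:

```python
import numpy as np

def benjamini_hochberg(p_values, q=0.05):
    """Return a boolean mask of rejections controlling FDR at level q."""
    p = np.asarray(p_values)
    n = len(p)
    order = np.argsort(p)
    below = p[order] <= q * np.arange(1, n + 1) / n   # step-up criterion on sorted p-values
    reject = np.zeros(n, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])              # largest rank satisfying the criterion
        reject[order[:k + 1]] = True
    return reject

p_vals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.5, 0.9]
print(benjamini_hochberg(p_vals))  # rejects only the two smallest p-values here
```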

43 False Discovery Rate (Figure: simulated data – signal + noise vs. noise only.)

44

45 Cluster-level inference We can increase sensitivity by trading off anatomical specificity. Given a voxel-level threshold u, we can compute the likelihood (under the null hypothesis) of getting a cluster containing at least n voxels: CLUSTER-LEVEL INFERENCE. Similarly, we can compute the likelihood of getting c clusters each having at least n voxels: SET-LEVEL INFERENCE.

46 Levels of inference (example SPM containing clusters of n = 82, 32 and 12 voxels) Set-level: P(c ≥ 3 | n ≥ 12, u ≥ 3.09) – at least 3 clusters above threshold. Cluster-level: P(c ≥ 1 | n ≥ 82, t ≥ 3.09), corrected – at least one cluster with at least 82 voxels above threshold. Voxel-level: P(c ≥ 1 | n > 0, t ≥ 4.37), corrected – at least one cluster with an unspecified number of voxels above threshold.

