
1 Statistics for fMRI

2 Statistics and science
Quick poll: when was statistics (p-values for rejecting a hypothesis, t-tests, etc.) invented?
A: Early 1700s
B: Early 1800s
C: Early 1900s

3 Statistics and science
Answer: Early 1900s
t-test: 1908 (Gosset / "Student")
General principles: 1925, "Statistical Methods for Research Workers", Ronald A. Fisher

4 Statistics and science
A puzzle: lots of great science was done before statistics came along!
t-test: 1908 (Gosset / "Student")
General principles: 1925, "Statistical Methods for Research Workers", Ronald A. Fisher
Some discoveries which (today) seem unthinkable without statistics:
Development of vaccines (Jenner, 1796)
Genetics and inheritance (Mendel, 1866)
Almost all of the rest of science!

5 A metaphor: doing stats right is like putting on your seatbelt
What really matters in science is finding a true underlying regularity in nature
Gold standard test: independent replication
Neither necessary nor sufficient, but helpful: doing the stats right
What really matters in driving is getting home safely
Neither necessary nor sufficient, but helpful: wearing your seatbelt
For a long time, cars didn't even have seatbelts. But people still got home safely (usually)

6 Multiple comparisons test
There are lots of voxels in the brain (~30,000)
You can test each one separately to see whether it is active
If you run lots of tests, some of them will give positive results even if nothing is there
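A minimal simulation sketch of that point (illustrative numbers only, not from the lecture): test thousands of pure-noise voxels and count how many come out "significant" at p < 0.05.

```python
# Illustrative sketch: every voxel is pure noise, yet ~5% pass p < 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_voxels = 30_000          # roughly the voxel count quoted above
n_scans = 100              # hypothetical number of time points per voxel

noise = rng.standard_normal((n_voxels, n_scans))   # no real activation anywhere

# One-sample t-test per voxel against a mean of zero
t_vals, p_vals = stats.ttest_1samp(noise, popmean=0.0, axis=1)

false_positives = int(np.sum(p_vals < 0.05))
print(f"{false_positives} of {n_voxels} null voxels look 'active' at p < 0.05")
# Expect roughly 5% of 30,000, i.e. about 1,500 false positives
```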

7-10 [Image-only slides; presumably the xkcd "Significant" green jelly bean comic, whose title text is quoted on slide 11]

11 Mouse-float-over title text:
Mouse-float-over title text: So, uh, we did the green study again and got no link. It was probably a-- "RESEARCH CONFLICTED ON GREEN JELLY BEAN/ACNE LINK; MORE STUDY RECOMMENDED!"

12 Correlation does not imply causation, but it does waggle its eyebrows suggestively and gesture furtively while mouthing "look over there."

13 What does "Result X is statistically significant (p<0.05)" actually mean?
Take a vote!
A: Given the data that we found, there is only a 5% chance that Result X is false
B: If Result X were false, then there would only be a 5% chance of us finding the data that we did find
C: If some other people now go off and look for Result X, there is only a 5% chance that they will fail to find it
D: All of the above answers mean basically the same thing

14 Base rates and false positives
Sensitivity: probability of detecting a real effect
Specificity: probability of not getting a false positive
Jacob Cohen, "The Earth Is Round (p < .05)" (1994)

15 What does "Result X is statistically significant (p<0.05)" actually mean?
A: Given the data that we found, there is only a 5% chance that Result X is false: p( Hypothesis | Data )
B: If Result X were false, then there would only be a 5% chance of us finding the data that we did find: p( Data | Hypothesis )
C: If some other people now go off and look for Result X, there is only a 5% chance that they will fail to find it
D: All of the above answers mean basically the same thing

16 Example: testing for a rare disease
Suppose 0.1% of the population has bubonic plague
Suppose the test for plague has 99% specificity (i.e. a 1% false-positive rate)
Test 1000 people: 1 person has plague
About 10 healthy people test positive anyway
So roughly 10 of the ~11 positive tests are false positives!
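A small sketch of that base-rate arithmetic. It assumes, for simplicity, a perfectly sensitive test; the slide does not state the sensitivity.

```python
# Illustrative base-rate arithmetic; sensitivity = 1.0 is an added assumption.
prevalence = 0.001        # 0.1% of the population has plague
specificity = 0.99        # so the false-positive rate is 1%
sensitivity = 1.0         # assumed: every real case is detected
n = 1000

true_positives = n * prevalence * sensitivity                 # ~1 person
false_positives = n * (1 - prevalence) * (1 - specificity)    # ~10 people

posterior = true_positives / (true_positives + false_positives)
print(f"Positive tests: ~{true_positives + false_positives:.0f}")
print(f"P(plague | positive test) = {posterior:.2f}")          # ~0.09
```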

17 What does “probability” actually mean?
True or false? There is a single, widely agreed-upon answer to the question of what exactly it means to say "The probability that it will rain tomorrow is 0.1"
A1: True, there is an agreed-upon answer
A2: False, lots of sensible people disagree on this

18 Different ways of correcting for multiple comparisons
Bonferroni correction
Family-Wise Error (FWE)
False Discovery Rate (FDR)
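For concreteness, here is a rough sketch (not SPM code) of how two of these corrections can be applied to a vector of per-voxel p-values; the Benjamini-Hochberg procedure below is one standard way of controlling the False Discovery Rate.

```python
# Rough sketch of Bonferroni and Benjamini-Hochberg (FDR) thresholding.
import numpy as np

def bonferroni_reject(p_vals, alpha=0.05):
    """Reject only where p < alpha / number-of-tests (controls family-wise error)."""
    p = np.asarray(p_vals)
    return p < alpha / p.size

def fdr_bh_reject(p_vals, q=0.05):
    """Benjamini-Hochberg step-up procedure (controls the false discovery rate)."""
    p = np.asarray(p_vals)
    order = np.argsort(p)
    ranks = np.arange(1, p.size + 1)
    below = p[order] <= q * ranks / p.size
    reject = np.zeros(p.size, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])    # largest rank meeting the criterion
        reject[order[:k + 1]] = True
    return reject
```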

19 The infamous “dead salmon” study
Bennett et al. (2009)
If you scan a dead salmon and don't do any multiple comparison correction, then you get some active voxels
This is not exactly news, but it is at least memorable
Like saying "Big discovery in traffic safety: remember to wear your seatbelt!"

20 Different ways of correcting for multiple comparisons
Raw signal intensity
Uncorrected results
Family-Wise Error correction
False Discovery Rate correction

21 Voodoo correlations – Vul et al (2009)
ROI analysis
Correlation with behaviour within that ROI
Problem: it is tempting (and easy) to select the ROI, and to look for correlations within the ROI, using the same data: non-independent
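A toy simulation of why this inflates correlations (purely illustrative, not taken from the paper): even in pure noise, the voxels you picked because they correlate with behaviour will, by construction, show a large correlation with behaviour.

```python
# Illustrative: non-independent ROI selection manufactures a big correlation
# out of pure noise (all numbers are made up for the demo).
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_voxels = 20, 5000
brain = rng.standard_normal((n_subjects, n_voxels))   # no real signal anywhere
behaviour = rng.standard_normal(n_subjects)

# Pearson correlation of every voxel with the behavioural score
bz = (behaviour - behaviour.mean()) / behaviour.std()
vz = (brain - brain.mean(axis=0)) / brain.std(axis=0)
r = vz.T @ bz / n_subjects

roi = np.argsort(np.abs(r))[-10:]           # "ROI" = the 10 best-correlated voxels
print("Mean |r| in the selected ROI:", np.abs(r[roi]).mean())   # often 0.6 or more
```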

22 An example highlighted in the voodoo correlations paper
Eisenberger, Naomi I., Matthew D. Lieberman, and Kipling D. Williams. "Does rejection hurt? An fMRI study of social exclusion." Science 302.5643 (2003): 290-292.

23 Problem: correlations become inflated
Vul, E. et al. (2009). "Puzzlingly high correlations in fMRI studies of emotion, personality, and social cognition." Perspectives on Psychological Science, 4(3), 274-290.
Similar argument made simultaneously in: Kriegeskorte, N. et al. (2009). "Circular analysis in systems neuroscience: the dangers of double dipping." Nature Neuroscience, 12(5), 535-540.

24 The solution: Cross-validation
The solution: cross-validation. Splitting up your data into independent parts
Training set and testing set
Independent parts of the data
Just like in using classifiers
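Continuing the toy simulation idea from slide 21, a self-contained sketch of the fix: choose the ROI on one half of the subjects, estimate the correlation on the other half. All names and numbers here are illustrative, not from the paper.

```python
# Illustrative split-half (cross-validated) version of the toy ROI analysis.
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_voxels = 20, 5000
brain = rng.standard_normal((n_subjects, n_voxels))   # still pure noise
behaviour = rng.standard_normal(n_subjects)

train, test = np.arange(0, n_subjects, 2), np.arange(1, n_subjects, 2)

def voxelwise_r(data, scores):
    """Pearson correlation of every voxel (column) with the behavioural scores."""
    dz = (data - data.mean(axis=0)) / data.std(axis=0)
    sz = (scores - scores.mean()) / scores.std()
    return dz.T @ sz / scores.size

# Select the ROI using only the training half...
roi = np.argsort(np.abs(voxelwise_r(brain[train], behaviour[train])))[-10:]

# ...then estimate the ROI-behaviour correlation on the held-out half
roi_signal = brain[test][:, roi].mean(axis=1)
r_independent = np.corrcoef(roi_signal, behaviour[test])[0, 1]
print("Cross-validated r:", r_independent)   # hovers around zero, as it should for noise
```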

25 This Thursday Computer Lab: Meliora Rm 178.
Registered students only (sorry)
Explore hands-on the preprocessing steps involved in fMRI, using SPM:
Motion correction
Spatial normalisation to a template
Smoothing
You are welcome to bring your own laptop, but you'll need to pre-load the Haxby data, Matlab, and SPM8

