
(Epoch and) Event-related fMRI Karl Friston Rik Henson Oliver Josephs Wellcome Department of Cognitive Neurology & Institute of Cognitive Neuroscience.




1 (Epoch and) Event-related fMRI Karl Friston Rik Henson Oliver Josephs Wellcome Department of Cognitive Neurology & Institute of Cognitive Neuroscience University College London UK

2 Epoch vs Event-related fMRI
“PET blocked conception” (scans assigned to conditions): ABAB..., e.g. Condition A = scans 1-10, Condition B = scans 11-20, Condition A = scans 21-30, ...
“fMRI epoch conception” (scans treated as a timeseries): boxcar function, convolved with the HRF
“fMRI event-related conception”: delta functions, convolved with the HRF
[Figure: the corresponding design matrices]
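The two stimulus models on this slide can be sketched in a few lines. This is a minimal illustration, not SPM code: a boxcar for the epoch conception and a delta-function train for the event-related conception, on an arbitrary one-scan time grid (the block length and onsets are illustrative).

```python
def boxcar(n_scans, block_len):
    """Alternating ABAB... boxcar: 1 during condition A, 0 during B."""
    return [1 if (t // block_len) % 2 == 0 else 0 for t in range(n_scans)]

def deltas(n_scans, onsets):
    """Event-related stimulus train: 1 at each event onset, 0 elsewhere."""
    x = [0] * n_scans
    for o in onsets:
        x[o] = 1
    return x

epoch_regressor = boxcar(30, 10)        # scans 0-9 = A, 10-19 = B, 20-29 = A
event_regressor = deltas(30, [0, 7, 15])  # three discrete events
```

Either regressor would then be convolved with the HRF to form a design-matrix column, as later slides show.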

3 Overview
1. Advantages of efMRI
2. BOLD impulse response
3. General Linear Model
4. Temporal Basis Functions
5. Timing Issues
6. Design Optimisation
7. Nonlinear Models
8. Example Applications

4 Advantages of Event-related fMRI
1. Randomised trial order, cf. confounds of blocked designs (Johnson et al, 1997)

5 [Figure: data and model fits for a randomised order (O1 N1 O3 O2 N2 ...) vs a blocked order (O1 O2 O3 ... N1 N2 N3); O = old words, N = new words]

6 Advantages of Event-related fMRI
1. Randomised trial order, cf. confounds of blocked designs (Johnson et al, 1997)
2. Post hoc / subjective classification of trials, e.g. according to subsequent memory (Wagner et al, 1998)

7 [Figure: event-related design (~4 s SOA), data and model; R = words later remembered, F = words later forgotten]

8 Advantages of Event-related fMRI
1. Randomised trial order, cf. confounds of blocked designs (Johnson et al, 1997)
2. Post hoc / subjective classification of trials, e.g. according to subsequent memory (Wagner et al, 1998)
3. Some events can only be indicated by the subject (in time), e.g. spontaneous perceptual changes (Kleinschmidt et al, 1998)

9

10 Advantages of Event-related fMRI
1. Randomised trial order, cf. confounds of blocked designs (Johnson et al, 1997)
2. Post hoc / subjective classification of trials, e.g. according to subsequent memory (Wagner et al, 1998)
3. Some events can only be indicated by the subject (in time), e.g. spontaneous perceptual changes (Kleinschmidt et al, 1998)
4. Some trials cannot be blocked, e.g. “oddball” designs (Clark et al, 2000)

11 [Figure: “oddball” design, a rare deviant among standard stimuli over time]

12 Advantages of Event-related fMRI
1. Randomised trial order, cf. confounds of blocked designs (Johnson et al, 1997)
2. Post hoc / subjective classification of trials, e.g. according to subsequent memory (Wagner et al, 1998)
3. Some events can only be indicated by the subject (in time), e.g. spontaneous perceptual changes (Kleinschmidt et al, 1998)
4. Some trials cannot be blocked, e.g. “oddball” designs (Clark et al, 2000)
5. More accurate models even for blocked designs? e.g. “state-item” interactions (Chawla et al, 1999)

13 [Figure: blocked design (O1 O2 O3 ... N1 N2 N3), data fitted with an “Epoch” model vs an “Event” model]

14 (Disadvantages of Randomised Designs)
1. Less efficient for detecting effects than blocked designs (see later...)
2. Some psychological processes may be better blocked (e.g. task-switching, attentional instructions)

15 Overview
1. Advantages of efMRI
2. BOLD impulse response
3. General Linear Model
4. Temporal Basis Functions
5. Timing Issues
6. Design Optimisation
7. Nonlinear Models
8. Example Applications

16 BOLD Impulse Response
- Function of blood oxygenation, flow and volume (Buxton et al, 1998)
- Peak (maximum oxygenation) 4-6 s poststimulus; back to baseline after 20-30 s
- An initial undershoot can be observed (Malonek & Grinvald, 1996)
- Similar across V1, A1, S1...
- ...but differs across other regions (Schacter et al, 1997) and across individuals (Aguirre et al, 1998)
[Figure: BOLD response to a brief stimulus, showing the initial undershoot, peak and post-stimulus undershoot]

17 BOLD Impulse Response
- Early event-related fMRI studies used a long Stimulus Onset Asynchrony (SOA) to allow the BOLD response to return to baseline
- However, if the BOLD response is explicitly modelled, overlap between successive responses at short SOAs can be accommodated...
- ...particularly if responses are assumed to superpose linearly
- Short SOAs are more sensitive...
[Figure: BOLD response to a brief stimulus, showing the initial undershoot, peak and post-stimulus undershoot]
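The linear-superposition assumption above can be sketched numerically. This is an illustration only: the double-gamma shape (shape parameters 6 and 16, undershoot ratio 6) is a common textbook approximation to the canonical BOLD impulse response, not a fitted model, and overlapping responses are simply summed.

```python
import math

def hrf(t, a1=6.0, a2=16.0, ratio=6.0):
    """Double-gamma approximation to the BOLD impulse response
    (peak ~5 s, later undershoot; parameters are illustrative)."""
    if t <= 0:
        return 0.0
    g = lambda x, a: x ** (a - 1) * math.exp(-x) / math.gamma(a)
    return g(t, a1) - g(t, a2) / ratio

def response(onsets, duration=32.0, dt=0.5):
    """Assume linearity: the measured signal is the sum of the
    (possibly overlapping) HRFs of all events."""
    n = int(duration / dt)
    return [sum(hrf(i * dt - o) for o in onsets) for i in range(n)]

single = response([0.0])        # one event
overlap = response([0.0, 4.0])  # second event at a short 4 s SOA
```

Because the overlapping responses are modelled explicitly, the short-SOA timecourse remains a known linear function of the per-event response, which is what lets the GLM on the next slides unmix it.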

18 Overview
1. Advantages of efMRI
2. BOLD impulse response
3. General Linear Model
4. Temporal Basis Functions
5. Timing Issues
6. Design Optimisation
7. Nonlinear Models
8. Example Applications

19 General Linear (Convolution) Model
GLM for a single voxel:
Y(t) = x(t) ⊗ h(t) + ε
x(t) = stimulus train (delta functions): x(t) = Σ_n δ(t − nT)
h(t) = hemodynamic (BOLD) response: h(t) = Σ_i β_i f_i(t), where the f_i(t) are temporal basis functions
Hence Y(t) = Σ_n Σ_i β_i f_i(t − nT) + ε
[Figure: design matrix formed by convolving x(t) with each f_i and sampling each scan at T, 2T, 3T, ...]

20 General Linear Model (in SPM)
Auditory words presented every 20 s; data sampled every TR = 1.7 s. Design matrix X = [x(t) ⊗ f_1(τ) | x(t) ⊗ f_2(τ) | ...], where the f_i(τ) are (orthogonalised) gamma functions of peristimulus time τ.
[Figure: SPM{F} map and fitted response over 0-30 s]

21 Overview
1. Advantages of efMRI
2. BOLD impulse response
3. General Linear Model
4. Temporal Basis Functions
5. Timing Issues
6. Design Optimisation
7. Nonlinear Models
8. Example Applications

22 Temporal Basis Functions
Fourier set: windowed sines & cosines; captures any shape (up to a frequency limit); inference via F-test

23 Temporal Basis Functions
Finite impulse response (FIR): mini “timebins” (selective averaging); captures any shape (up to the bin width); inference via F-test
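The FIR / selective-averaging idea can be sketched as a design matrix with one regressor per peristimulus timebin. This is an illustration of the principle, not SPM's implementation; onsets, scan count and bin count are arbitrary:

```python
def fir_design(onsets, n_scans, n_bins):
    """Column k is 1 whenever a scan falls k bins after any event onset,
    so each column 'selects' one peristimulus timebin."""
    X = [[0] * n_bins for _ in range(n_scans)]
    for o in onsets:
        for k in range(n_bins):
            if o + k < n_scans:
                X[o + k][k] = 1
    return X

# Three well-separated events: every column then has exactly three ones,
# and the least-squares betas reduce to per-bin averages of the data.
X = fir_design(onsets=[0, 20, 40], n_scans=60, n_bins=10)
```

With overlapping events the columns are no longer orthogonal, and the least-squares fit deconvolves the overlap rather than simply averaging.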

24 Temporal Basis Functions
Fourier set: windowed sines & cosines; any shape (up to a frequency limit); inference via F-test
Gamma functions: bounded, asymmetrical (like BOLD); set of different lags; inference via F-test

25 Temporal Basis Functions
Fourier set: windowed sines & cosines; any shape (up to a frequency limit); inference via F-test
Gamma functions: bounded, asymmetrical (like BOLD); set of different lags; inference via F-test
“Informed” basis set: best guess of the canonical BOLD response; variability captured by a Taylor expansion; “magnitude” inferences via t-test...?

26 Temporal Basis Functions

27 “Informed” Basis Set (Friston et al, 1998)
Canonical HRF (2 gamma functions)
[Figure: canonical]

28 Temporal Basis Functions
“Informed” basis set (Friston et al, 1998): canonical HRF (2 gamma functions), plus a multivariate Taylor expansion in time (temporal derivative)
[Figure: canonical, temporal]

29 Temporal Basis Functions
“Informed” basis set (Friston et al, 1998): canonical HRF (2 gamma functions), plus a multivariate Taylor expansion in time (temporal derivative) and width (dispersion derivative)
[Figure: canonical, temporal, dispersion]

30 Temporal Basis Functions
“Informed” basis set (Friston et al, 1998): canonical HRF (2 gamma functions), plus a multivariate Taylor expansion in time (temporal derivative) and width (dispersion derivative)
“Magnitude” inferences via t-test on the canonical parameters (provided the canonical is a good fit... more later)
[Figure: canonical, temporal, dispersion]

31 Temporal Basis Functions
“Informed” basis set (Friston et al, 1998): canonical HRF (2 gamma functions), plus a multivariate Taylor expansion in time (temporal derivative) and width (dispersion derivative)
“Magnitude” inferences via t-test on the canonical parameters (provided the canonical is a good fit... more later)
“Latency” inferences via tests on the ratio of derivative : canonical parameters (more later...)
[Figure: canonical, temporal, dispersion]
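The Taylor-expansion idea behind the temporal derivative can be checked numerically: a small onset shift dt is captured, to first order, by f(t) + f′(t)·dt. A minimal sketch, assuming an illustrative double-gamma canonical (not SPM's exact spm_hrf) and a finite-difference stand-in for the derivative basis function:

```python
import math

def canonical(t):
    """Illustrative double-gamma canonical HRF (peak ~5 s, late undershoot)."""
    if t <= 0:
        return 0.0
    g = lambda x, a: x ** (a - 1) * math.exp(-x) / math.gamma(a)
    return g(t, 6.0) - g(t, 16.0) / 6.0

def temporal_derivative(t, h=0.1):
    """Finite-difference stand-in for the temporal-derivative basis function."""
    return (canonical(t) - canonical(t - h)) / h

t, dt = 3.0, 0.2  # a point on the rising edge, shifted by 0.2 s
approx = canonical(t) + temporal_derivative(t) * dt  # canonical + derivative
exact = canonical(t + dt)                            # truly shifted response
```

The approximation error is small relative to the size of the shift, which is why adding the temporal derivative absorbs modest onset differences (e.g. across slices or regions).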

32 (Other Approaches)
- Long Stimulus Onset Asynchrony (SOA): can ignore overlap between responses (Cohen et al, 1997) ... but long SOAs are less sensitive
- Fully counterbalanced designs: assume response overlap cancels (Saykin et al, 1999); include fixation trials to “selectively average” the response even at short SOA (Dale & Buckner, 1997) ... but unbalanced when events are defined by the subject
- Define the HRF from a pilot scan on each subject: may capture intersubject variability (Zarahn et al, 1997) ... but not interregional variability
- Numerical fitting of highly parametrised response functions: separate estimates of magnitude, latency and duration (Kruggel et al, 1999) ... but computationally expensive for every voxel

33 Temporal Basis Sets: Which One?
In this example (rapid motor response to faces; Henson et al, 2001), comparing Canonical, + Temporal, + Dispersion and + FIR: the canonical + temporal + dispersion derivatives appear sufficient. They may not be for more complex trials (e.g. stimulus-delay-response), but such trials are better modelled with separate neural components (i.e. activity no longer a delta function) + a constrained HRF (Zarahn, 1999).

34 Overview
1. Advantages of efMRI
2. BOLD impulse response
3. General Linear Model
4. Temporal Basis Functions
5. Timing Issues
6. Design Optimisation
7. Nonlinear Models
8. Example Applications

35 Timing Issues: Practical
Typical TR for 48-slice EPI at 3 mm spacing is ~4 s
[Figure: scans, TR = 4 s]

36 Timing Issues: Practical
- Typical TR for 48-slice EPI at 3 mm spacing is ~4 s
- Sampling at [0, 4, 8, 12, ...] s post-stimulus may miss the peak signal
[Figure: synchronous stimuli; TR = 4 s, SOA = 8 s, sampling rate = 4 s]

37 Timing Issues: Practical
- Typical TR for 48-slice EPI at 3 mm spacing is ~4 s
- Sampling at [0, 4, 8, 12, ...] s post-stimulus may miss the peak signal
- Higher effective sampling by: 1. asynchrony, e.g. SOA = 1.5 × TR
[Figure: asynchronous stimuli; TR = 4 s, SOA = 6 s, sampling rate = 2 s]

38 Timing Issues: Practical
- Typical TR for 48-slice EPI at 3 mm spacing is ~4 s
- Sampling at [0, 4, 8, 12, ...] s post-stimulus may miss the peak signal
- Higher effective sampling by: 1. asynchrony, e.g. SOA = 1.5 × TR; 2. random jitter, e.g. SOA = (2 ± 0.5) × TR
[Figure: randomly jittered stimuli; TR = 4 s, sampling rate = 2 s]

39 Timing Issues: Practical
- Typical TR for 48-slice EPI at 3 mm spacing is ~4 s
- Sampling at [0, 4, 8, 12, ...] s post-stimulus may miss the peak signal
- Higher effective sampling by: 1. asynchrony, e.g. SOA = 1.5 × TR; 2. random jitter, e.g. SOA = (2 ± 0.5) × TR
- Better response characterisation (Miezin et al, 2000)
[Figure: randomly jittered stimuli; TR = 4 s, sampling rate = 2 s]
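The effective-sampling point above can be checked with a one-line calculation: the distinct peristimulus delays sampled over a run are the scan times modulo the SOA. A small sketch using the slide's numbers (TR = 4 s; SOA = 2 × TR vs SOA = 1.5 × TR):

```python
def peristimulus_times(tr, soa, n_scans):
    """Distinct post-stimulus delays at which scans sample the response."""
    return sorted({round((i * tr) % soa, 3) for i in range(n_scans)})

synchronous = peristimulus_times(tr=4.0, soa=8.0, n_scans=40)   # only 0 s and 4 s
asynchronous = peristimulus_times(tr=4.0, soa=6.0, n_scans=40)  # 0, 2 and 4 s
```

With SOA = 8 s every event is sampled at the same two delays, whereas SOA = 6 s interleaves scans across events, halving the effective sampling interval to 2 s.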

40 Timing Issues: Practical
...but the “slice-timing problem” (Henson et al, 1999): slices are acquired at different times, yet the model is the same for all slices

41 Timing Issues: Practical
...but the “slice-timing problem” (Henson et al, 1999): slices are acquired at different times, yet the model is the same for all slices
=> different results (using the canonical HRF) for different reference slices
[Figure: SPM{t} for bottom slice vs top slice; TR = 3 s]

42 Timing Issues: Practical
...but the “slice-timing problem” (Henson et al, 1999): slices are acquired at different times, yet the model is the same for all slices
=> different results (using the canonical HRF) for different reference slices
Solutions: 1. Temporal interpolation of the data ... but less good for longer TRs
[Figure: SPM{t} for bottom slice vs top slice; interpolated SPM{t}; TR = 3 s]

43 Timing Issues: Practical
...but the “slice-timing problem” (Henson et al, 1999): slices are acquired at different times, yet the model is the same for all slices
=> different results (using the canonical HRF) for different reference slices
Solutions: 1. Temporal interpolation of the data ... but less good for longer TRs; 2. A more general basis set (e.g. with temporal derivatives) ... but inferences via F-test
[Figure: SPM{t} for bottom slice vs top slice; interpolated SPM{t}; derivative SPM{F}; TR = 3 s]

44 Timing Issues: Latency
Assume the real response, r(t), is a scaled (by β) version of the canonical, f(t), but delayed by a small amount dt:
r(t) = β f(t + dt) ≈ β f(t) + β f′(t) dt   (1st-order Taylor expansion)
If the fitted response, R(t), is modelled by the canonical + temporal derivative:
R(t) = β1 f(t) + β2 f′(t)   (GLM fit)
then the canonical and derivative parameter estimates, β1 and β2, are such that β = β1 and dt = β2 / β1,
i.e. latency can be approximated by the ratio of derivative-to-canonical parameter estimates (within the limits of the first-order approximation, ±1 s) (Henson et al, 2002; Liao et al, 2002)
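The derivation above can be verified in simulation: generate a delayed, scaled response, fit it with canonical + derivative regressors by ordinary least squares, and check that β2/β1 recovers dt. A sketch under stated assumptions: the double-gamma f is illustrative, the derivative is a finite difference, and the 2×2 normal equations are solved by hand.

```python
import math

def f(t):
    """Illustrative double-gamma canonical HRF."""
    if t <= 0:
        return 0.0
    g = lambda x, a: x ** (a - 1) * math.exp(-x) / math.gamma(a)
    return g(t, 6.0) - g(t, 16.0) / 6.0

grid = [i * 0.1 for i in range(321)]                        # 0..32 s
fc = [f(t) for t in grid]                                   # canonical
fd = [(f(t + 0.05) - f(t - 0.05)) / 0.1 for t in grid]      # derivative

B, true_dt = 2.0, 0.5
r = [B * f(t + true_dt) for t in grid]                      # delayed, scaled response

# Solve the 2x2 normal equations [fc.fc fc.fd; fd.fc fd.fd][b1 b2] = [fc.r; fd.r]
dot = lambda a, b: sum(x * y for x, y in zip(a, b))
a11, a12, a22 = dot(fc, fc), dot(fc, fd), dot(fd, fd)
y1, y2 = dot(fc, r), dot(fd, r)
det = a11 * a22 - a12 * a12
b1 = (a22 * y1 - a12 * y2) / det
b2 = (a11 * y2 - a12 * y1) / det

latency = b2 / b1  # approximates true_dt, within the first-order limits
```

For shifts much beyond ~1 s the first-order approximation breaks down, which is the limitation the slide notes.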

45 Timing Issues: Latency
[Figure: delayed responses (green/yellow) fitted with canonical + derivative basis functions; plot of actual latency, dt, vs β2/β1; parameter estimates β1 and β2 showing that face repetition reduces the latency as well as the magnitude of the fusiform response]

46 Timing Issues: Latency
Neural change → BOLD change:
A. Decreased → smaller peak
B. Advanced → earlier onset
C. Shortened (same integrated activity) → earlier peak
D. Shortened (same maximum activity) → smaller and earlier peak

47 Overview
1. Advantages of efMRI
2. BOLD impulse response
3. General Linear Model
4. Temporal Basis Functions
5. Timing Issues
6. Design Optimisation
7. Nonlinear Models
8. Example Applications

48 Design Efficiency
- Maximise detectable signal (assuming independent noise)
- Minimise variability of the parameter estimates (efficient estimation)
- Efficiency of estimation ∝ covariance of the (contrast of) covariates (Friston et al, 1999): e ∝ trace{ c^T (X^T X)^-1 c }^-1
- = maximise the bandpassed energy (Josephs & Henson, 1999)
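The efficiency measure above can be sketched for a toy two-column design with the differential contrast c = [1, −1] (so c^T (X^T X)^-1 c is a scalar and the trace is trivial). The designs below are arbitrary illustrations, not real fMRI regressors; the point is only that correlated regressors lower the efficiency of their difference.

```python
def efficiency(X, c):
    """e = trace{ c^T (X^T X)^-1 c }^-1 for a 2-column design X."""
    a = sum(r[0] * r[0] for r in X)   # entries of X^T X
    b = sum(r[0] * r[1] for r in X)
    d = sum(r[1] * r[1] for r in X)
    det = a * d - b * b
    inv = [[d / det, -b / det], [-b / det, a / det]]  # (X^T X)^-1
    q = sum(c[i] * inv[i][j] * c[j] for i in range(2) for j in range(2))
    return 1.0 / q

X_orthogonal = [[1, 0], [0, 1]] * 10     # uncorrelated regressors
X_correlated = [[1, 1], [1, 0.9]] * 10   # nearly collinear regressors
e_good = efficiency(X_orthogonal, [1, -1])
e_bad = efficiency(X_correlated, [1, -1])
```

The same calculation, applied to stimulus trains convolved with the HRF and sampled every TR, is what the following slides walk through.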

49 Design Efficiency
- Maximise detectable signal (assuming independent noise)
- Minimise variability of the parameter estimates (efficient estimation)
- Efficiency of estimation ∝ covariance of the (contrast of) covariates (Friston et al, 1999): e ∝ trace{ c^T (X^T X)^-1 c }^-1
- = maximise the bandpassed energy (Josephs & Henson, 1999)
[Figure: event stream for the contrast (A − B)]

50 Design Efficiency
- Maximise detectable signal (assuming independent noise)
- Minimise variability of the parameter estimates (efficient estimation)
- Efficiency of estimation ∝ covariance of the (contrast of) covariates (Friston et al, 1999): e ∝ trace{ c^T (X^T X)^-1 c }^-1
- = maximise the bandpassed energy (Josephs & Henson, 1999)
[Figure: the event stream convolved with the HRF]

51 Design Efficiency
- Maximise detectable signal (assuming independent noise)
- Minimise variability of the parameter estimates (efficient estimation)
- Efficiency of estimation ∝ covariance of the (contrast of) covariates (Friston et al, 1999): e ∝ trace{ c^T (X^T X)^-1 c }^-1
- = maximise the bandpassed energy (Josephs & Henson, 1999)
[Figure: the convolved regressor sampled every TR]

52 Design Efficiency Maximise detectable signal (assume noise independent)Maximise detectable signal (assume noise independent) Minimise variability of parameter estimates (efficient estimation)Minimise variability of parameter estimates (efficient estimation) Efficiency of estimation  covariance of (contrast of) covariates (Friston et al. 1999)Efficiency of estimation  covariance of (contrast of) covariates (Friston et al. 1999)  trace { c T (X T X) -1 c } -1  trace { c T (X T X) -1 c } -1 = maximise bandpassed energy (Josephs & Henson, 1999)= maximise bandpassed energy (Josephs & Henson, 1999) Maximise detectable signal (assume noise independent)Maximise detectable signal (assume noise independent) Minimise variability of parameter estimates (efficient estimation)Minimise variability of parameter estimates (efficient estimation) Efficiency of estimation  covariance of (contrast of) covariates (Friston et al. 1999)Efficiency of estimation  covariance of (contrast of) covariates (Friston et al. 1999)  trace { c T (X T X) -1 c } -1  trace { c T (X T X) -1 c } -1 = maximise bandpassed energy (Josephs & Henson, 1999)= maximise bandpassed energy (Josephs & Henson, 1999) Smoothed (lowpass filter)

53 Design Efficiency Maximise detectable signal (assume noise independent)Maximise detectable signal (assume noise independent) Minimise variability of parameter estimates (efficient estimation)Minimise variability of parameter estimates (efficient estimation) Efficiency of estimation  covariance of (contrast of) covariates (Friston et al. 1999)Efficiency of estimation  covariance of (contrast of) covariates (Friston et al. 1999)  trace { c T (X T X) -1 c } -1  trace { c T (X T X) -1 c } -1 = maximise bandpassed energy (Josephs & Henson, 1999)= maximise bandpassed energy (Josephs & Henson, 1999) Maximise detectable signal (assume noise independent)Maximise detectable signal (assume noise independent) Minimise variability of parameter estimates (efficient estimation)Minimise variability of parameter estimates (efficient estimation) Efficiency of estimation  covariance of (contrast of) covariates (Friston et al. 1999)Efficiency of estimation  covariance of (contrast of) covariates (Friston et al. 1999)  trace { c T (X T X) -1 c } -1  trace { c T (X T X) -1 c } -1 = maximise bandpassed energy (Josephs & Henson, 1999)= maximise bandpassed energy (Josephs & Henson, 1999) Highpass filtered

54 Design Efficiency Maximise detectable signal (assume noise independent)Maximise detectable signal (assume noise independent) Minimise variability of parameter estimates (efficient estimation)Minimise variability of parameter estimates (efficient estimation) Efficiency of estimation  covariance of (contrast of) covariates (Friston et al. 1999)Efficiency of estimation  covariance of (contrast of) covariates (Friston et al. 1999)  trace { c T (X T X) -1 c } -1  trace { c T (X T X) -1 c } -1 = maximise bandpassed energy (Josephs & Henson, 1999)= maximise bandpassed energy (Josephs & Henson, 1999) Maximise detectable signal (assume noise independent)Maximise detectable signal (assume noise independent) Minimise variability of parameter estimates (efficient estimation)Minimise variability of parameter estimates (efficient estimation) Efficiency of estimation  covariance of (contrast of) covariates (Friston et al. 1999)Efficiency of estimation  covariance of (contrast of) covariates (Friston et al. 1999)  trace { c T (X T X) -1 c } -1  trace { c T (X T X) -1 c } -1 = maximise bandpassed energy (Josephs & Henson, 1999)= maximise bandpassed energy (Josephs & Henson, 1999)
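The efficiency expression above can be checked numerically. A minimal sketch, assuming a toy two-column design (one regressor of interest plus a constant) and ignoring HRF convolution and noise autocorrelation; the block length and event probability are arbitrary illustrative choices, not values from the talk.

```python
import numpy as np

def efficiency(X, c):
    """Estimation efficiency = trace{ c^T (X^T X)^-1 c }^-1 (Friston et al. 1999)."""
    XtX_inv = np.linalg.pinv(X.T @ X)       # pseudo-inverse guards against rank deficiency
    return 1.0 / np.trace(c.T @ XtX_inv @ c)

n = 200
t = np.arange(n)
rng = np.random.default_rng(0)

# Regressor of interest: a boxcar (20-scan on/off blocks) vs. sparse random events
boxcar = ((t // 20) % 2).astype(float)
events = (rng.random(n) < 0.1).astype(float)

# Mean-centre the regressor of interest and add a constant column
X_block = np.column_stack([boxcar - boxcar.mean(), np.ones(n)])
X_event = np.column_stack([events - events.mean(), np.ones(n)])
c = np.array([[1.0], [0.0]])                # contrast picking out the first column

eff_block = efficiency(X_block, c)
eff_event = efficiency(X_event, c)          # the boxcar carries more variance
```

On raw regressor variance the boxcar wins, matching the slide's point that (before considering the HRF's filtering) high-energy designs are efficient.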

55 Design Efficiency

- HRF can be viewed as a filter (Josephs & Henson, 1999)
- Want to maximise the signal passed by this filter
- Dominant frequency of canonical HRF is ~0.04 Hz
- So the most efficient design is a sinusoidal modulation of neural activity with period ~24s (eg, boxcar with 12s on / 12s off)
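The claim about the HRF's dominant frequency can be checked numerically. A rough sketch, assuming a double-gamma HRF with conventional parameters (positive response peaking around 5-6s, undershoot around 15-16s, undershoot ratio 1/6); these are common defaults, not values taken from this talk.

```python
import numpy as np
from math import gamma as gamma_fn

dt = 0.1                                     # fine sampling interval (s)
t = np.arange(0, 32, dt)

def gamma_pdf(x, shape):
    """Gamma density with unit rate (peaks at shape - 1)."""
    return x ** (shape - 1) * np.exp(-x) / gamma_fn(shape)

# Double-gamma HRF: positive response minus a scaled, later undershoot
hrf = gamma_pdf(t, 6) - gamma_pdf(t, 16) / 6

# Magnitude spectrum; zero-padding gives fine frequency resolution
nfft = 8192
spectrum = np.abs(np.fft.rfft(hrf, nfft))
freqs = np.fft.rfftfreq(nfft, dt)
f_peak = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
```

With these parameters the spectral peak lands at a few hundredths of a Hz, i.e. a preferred modulation period of a few tens of seconds, in line with the slide's argument for ~24s boxcars.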

56-60 Efficiency - Single Event-type

- Design parametrised by:
  SOA_min: minimum SOA
  p(t): probability of an event at each SOA_min
- Deterministic: p(t) = 1 iff t = n·SOA_min
- Stationary stochastic: p(t) = constant < 1
- Dynamic stochastic: p(t) varies (eg blocked)

Blocked designs most efficient! (with small SOA_min)
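The three occurrence schemes can be sketched as simple event generators over a grid of SOA_min bins; the grid length, probabilities, and block length below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n_bins = 120                                 # time grid in units of SOA_min

# Deterministic: an event in every bin, p(t) = 1
det = np.ones(n_bins, dtype=int)

# Stationary stochastic: constant probability p(t) = 0.5 < 1
stat = (rng.random(n_bins) < 0.5).astype(int)

# Dynamic stochastic: p(t) varies slowly, here a blocked modulation
block_id = (np.arange(n_bins) // 20) % 2     # alternate every 20 bins
p_t = np.where(block_id == 0, 0.9, 0.1)
dyn = (rng.random(n_bins) < p_t).astype(int)

# The blocked modulation concentrates events in the "on" blocks
on_rate = dyn[block_id == 0].mean()
off_rate = dyn[block_id == 1].mean()
```

The dynamic-stochastic case with a blocked p(t) is what recovers the efficiency of a blocked design while each individual trial remains unpredictable.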

61 Efficiency - Multiple Event-types (4s smoothing; 1/60s highpass filtering)

- Design parametrised by:
  SOA_min: minimum SOA
  p_i(h): probability of event-type i given history h of the last m events
- With n event-types, p_i(h) is an n^m × n transition matrix
- Example: Randomised AB

        A    B
  A    0.5  0.5
  B    0.5  0.5

  => ABBBABAABABAAA...

[Figures: Differential Effect (A-B); Common Effect (A+B)]

62 Efficiency - Multiple Event-types (4s smoothing; 1/60s highpass filtering)

- Example: Alternating AB

        A    B
  A     0    1
  B     1    0

  => ABABABABABAB...

- Example: Permuted AB

        A    B
  AA    0    1
  AB   0.5  0.5
  BA   0.5  0.5
  BB    1    0

  => ABBAABABABBA...

[Figures: Alternating (A-B); Permuted (A-B)]
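A generic sampler for such transition-matrix designs (histories of the last m events indexing rows) can be sketched as follows; the random seed history is an illustrative choice.

```python
import numpy as np

def gen_sequence(T, m, length, rng):
    """Sample event types (0=A, 1=B) from an order-m transition table T,
    where T maps each length-m history tuple to next-event probabilities."""
    seq = list(rng.integers(0, 2, size=m))   # arbitrary seed history
    while len(seq) < length:
        row = T[tuple(seq[-m:])]
        seq.append(int(rng.choice(2, p=row)))
    return seq

rng = np.random.default_rng(2)

# Alternating AB: first-order, fully deterministic
T_alt = {(0,): [0.0, 1.0], (1,): [1.0, 0.0]}
alt = gen_sequence(T_alt, 1, 20, rng)

# Permuted AB: second-order (rows AA, AB, BA, BB, as on the slide)
T_perm = {(0, 0): [0.0, 1.0], (0, 1): [0.5, 0.5],
          (1, 0): [0.5, 0.5], (1, 1): [1.0, 0.0]}
perm = gen_sequence(T_perm, 2, 20, rng)
```

The deterministic rows of the permuted table forbid runs of three, which is exactly the (mild) context-sensitivity that pseudorandomisation introduces.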

63 Efficiency - Multiple Event-types (4s smoothing; 1/60s highpass filtering)

- Example: Null events

        A     B
  A    0.33  0.33
  B    0.33  0.33

  => AB-BAA--B---ABB...

- Efficient for differential and main effects at short SOA
- Equivalent to stochastic SOA (null event acts like a third, unmodelled event-type)
- Selective averaging of data (Dale & Buckner 1997)

[Figures: Null Events (A+B); Null Events (A-B)]

64 Efficiency - Conclusions

- Optimal design for one contrast may not be optimal for another
- Blocked designs generally most efficient with short SOAs (but earlier restrictions and problems of interpretation…)
- With randomised designs, optimal SOA for the differential effect (A-B) is the minimal SOA (assuming no saturation), whereas optimal SOA for the main effect (A+B) is 16-20s
- Inclusion of null events improves efficiency for the main effect at short SOAs (at the cost of efficiency for differential effects)
- If order is constrained, intermediate SOAs (5-20s) can be optimal; if SOA is constrained, pseudorandomised designs can be optimal (but may introduce context-sensitivity)

65 [Figure: efficiency in the time and frequency domains for SOA=4s vs SOA=16s; main effect (A+B), alternating A-B, randomised A-B]

66 Overview

1. Advantages of efMRI
2. BOLD impulse response
3. General Linear Model
4. Temporal Basis Functions
5. Timing Issues
6. Design Optimisation
7. Nonlinear Models
8. Example Applications

67 Nonlinear Model

Volterra series: a general nonlinear input-output model

  y(t) = k_1[u(t)] + k_2[u(t)] + ... + k_n[u(t)] + ...

  k_n[u(t)] = ∫...∫ h_n(t_1, ..., t_n) u(t - t_1) ... u(t - t_n) dt_1 ... dt_n

[Diagram: stimulus function / input u(t) → nonlinear system → response y(t); the kernels h_n are estimated]
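A discrete second-order sketch of this expansion; the kernels below are arbitrary illustrative choices, with a negative second-order kernel producing the underadditivity discussed on the following slides.

```python
import numpy as np

def volterra2(u, h1, h2):
    """y(t) = sum_i h1[i] u(t-i) + sum_{i,j} h2[i,j] u(t-i) u(t-j)."""
    n, k = len(u), len(h1)
    y = np.zeros(n)
    for t in range(n):
        for i in range(min(k, t + 1)):
            y[t] += h1[i] * u[t - i]
            for j in range(min(k, t + 1)):
                y[t] += h2[i, j] * u[t - i] * u[t - j]
    return y

k = 8
h1 = np.exp(-np.arange(k) / 3.0)             # decaying first-order kernel
h2 = -0.2 * np.outer(h1, h1)                 # negative second-order kernel

u1 = np.zeros(40); u1[5] = 1.0               # single event at t=5
u2 = np.zeros(40); u2[7] = 1.0               # single event at t=7 (short SOA)

y_pair = volterra2(u1 + u2, h1, h2)          # both events together
y_sum = volterra2(u1, h1, h2) + volterra2(u2, h1, h2)
```

The cross terms in h2 make the paired response fall below the sum of the two individual responses, i.e. underadditivity at short SOAs.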

68-69 Nonlinear Model (Friston et al 1997)

SPM{F} testing H_0: kernel coefficients, h = 0

Significant nonlinearities at SOAs 0-10s (e.g., underadditivity from 0-5s)

[Figures: kernel coefficients h; SPM{F}, p < 0.001]

70-72 Nonlinear Effects

Underadditivity at short SOAs: the Volterra prediction falls below the linear prediction when events are close together.

[Figure: Linear Prediction vs Volterra Prediction]

Implications for Efficiency

73 Overview

1. Advantages of efMRI
2. BOLD impulse response
3. General Linear Model
4. Temporal Basis Functions
5. Timing Issues
6. Design Optimisation
7. Nonlinear Models
8. Example Applications

74 Example 1: Intermixed Trials (Henson et al 2000)

- Short SOA, fully randomised, with 1/3 null events
- Faces presented for 0.5s against chequerboard baseline, SOA = (2 ± 0.5)s, TR = 1.4s
- Factorial event-types: 1. Famous/Nonfamous (F/N); 2. 1st/2nd Presentation (1/2)

75 [Figure: example trial sequence; Famous / Nonfamous (Target), Lag = 3]

76 Example 1: Intermixed Trials (Henson et al 2000)

- Short SOA, fully randomised, with 1/3 null events
- Faces presented for 0.5s against chequerboard baseline, SOA = (2 ± 0.5)s, TR = 1.4s
- Factorial event-types: 1. Famous/Nonfamous (F/N); 2. 1st/2nd Presentation (1/2)
- Interaction (F1-F2)-(N1-N2) masked by main effect (F+N)
- Right fusiform interaction of repetition priming and familiarity

77 Example 2: Post hoc classification (Henson et al 1999)

- Subjects indicate whether studied (Old) words: (i) evoke recollection of prior occurrence (R); (ii) evoke a feeling of familiarity without recollection (K); (iii) evoke no memory (N)
- Random-effects analysis on the canonical parameter estimate for each event-type
- Fixed SOA of 8s => sensitive to differential but not main effects (de/activations arbitrary)

[Figures: SPM{t} R-K; SPM{t} K-R]

78 Example 3: Subject-defined events (Portas et al 1999)

- Subjects respond at "pop-out" of the 3D percept from a 2D stereogram

79

80 Example 3: Subject-defined events (Portas et al 1999)

- Subjects respond at "pop-out" of the 3D percept from a 2D stereogram
- The pop-out response also produces a tone
- Control event is the response to a tone during the 3D percept

[Figure: temporo-occipital differential activation, Pop-out vs Control]

81 Example 4: Oddball Paradigm (Strange et al, 2000)

- 16 same-category words every 3 secs, plus …
- … 1 perceptual, 1 semantic, and 1 emotional oddball

82 [Example sequence (~3s between words): WHEAT, BARLEY, OATS, HOPS, RYE, …; CORN = perceptual oddball; PLUG = semantic oddball; RAPE = emotional oddball]

83 Example 4: Oddball Paradigm (Strange et al, 2000)

- 16 same-category words every 3 secs, plus …
- … 1 perceptual, 1 semantic, and 1 emotional oddball
- 3 nonoddballs randomly matched as controls
- Conjunction of oddball vs. control contrast images: generic deviance detector

[Figure: right prefrontal cortex parameter estimates, Controls vs Oddballs]

84 Example 5: Epoch/Event Interactions (Chawla et al 1999)

- Epochs of attention to: 1) motion, or 2) colour
- Events are target stimuli differing in motion or colour
- Randomised, long SOAs to decorrelate epoch and event-related covariates
- Interaction between epoch (attention) and event (stimulus) in V4 and V5

[Figure: attention to motion vs attention to colour; interaction between attention and stimulus motion change in V5]

85

86 BOLD Response Latency (Iterative)

- Iterative numerical fitting of an explicitly parameterised canonical HRF (Henson et al, 2001)
- Distinguishes between Onset and Peak latency…
- …unlike the temporal derivative…
- …and which may be important for interpreting neural changes (see previous slide)
- Distribution of parameters tested nonparametrically (Wilcoxon's T over subjects)

[Figure: Height, Peak Delay, Onset Delay]

87 BOLD Response Latency (Iterative)

- No difference in Onset Delay, wT(11) = 35
- 240ms Peak Delay, wT(11) = 14, p < .05
- 0.34% Height change, wT(11) = 5, p < .001

Most parsimonious account: repetition reduces the duration of neural activity…

[Figure: D. shortened (same maximum) neural activity gives a smaller and earlier BOLD peak]

88 BOLD Response Latency (Iterative)

Four-parameter HRF (Height, Peak Delay, Onset Delay, Dispersion); nonparametric random effects (SnPM99); FIR used to deconvolve data before nonlinear fitting over PST

Advantages of iterative vs linear:
1. Height "independent" of shape: canonical "height" is confounded by latency (e.g., different shapes across subjects); no slice-timing error
2. Distinction of onset/peak latency, allowing better neural inferences?

Disadvantages of iterative:
1. Unreasonable fits (onset/peak tension); priors on parameter distributions? (Bayesian estimation)
2. Local minima, failure of convergence?
3. CPU time (~3 days for the above)

[Figures: different fits across subjects; Height p < .05 (corrected), 1-2]
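The iterative-fitting idea can be sketched as a coarse grid search over onset and peak delay with a closed-form amplitude, applied to noise-free synthetic data. The bump shape and all parameter values below are illustrative, not the model of Henson et al.

```python
import numpy as np

t = np.arange(0, 30, 0.5)                    # post-stimulus time (s)

def bump(t, onset, peak, height):
    """Response starting at `onset`, maximal (value `height`) at `peak`."""
    x = np.clip(t - onset, 0.0, None)
    p = peak - onset
    return height * (x / p) ** 2 * np.exp(2.0 * (1.0 - x / p))

data = bump(t, 2.0, 7.0, 1.5)                # synthetic "BOLD" response

# Grid search over onset/peak; amplitude solved by least squares at each node
best = None
for onset in np.arange(0.0, 5.0, 0.5):
    for peak in np.arange(onset + 1.0, 12.0, 0.5):
        shape = bump(t, onset, peak, 1.0)
        height = (shape @ data) / (shape @ shape)
        sse = np.sum((data - height * shape) ** 2)
        if best is None or sse < best[0]:
            best = (sse, onset, peak, height)

sse, onset_hat, peak_hat, height_hat = best
```

Because onset and peak are separate parameters, the fit can report a shift in one without the other, which is the distinction the temporal derivative cannot make.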

89 Temporal Basis Sets: Inferences

How can inferences be made in hierarchical models (eg, "random effects" analyses over, for example, subjects)?

1. Univariate t-tests on the canonical parameter alone? May miss significant experimental variability; the canonical parameter estimate is not an appropriate index of "magnitude" if real responses are non-canonical (see later)
2. Univariate F-tests on parameters from multiple basis functions? Need appropriate corrections for nonsphericity (Glaser et al, 2001)
3. Multivariate tests (eg Wilks' lambda, Henson et al, 2000)? Not powerful unless ~10 times as many subjects as parameters

90 Efficiency - Detection vs Estimation

- "Detection power" vs "estimation efficiency" (Liu et al, 2001)
- Detect a response, or characterise the shape of the response?
- Maximal detection power in blocked designs; maximal estimation efficiency in randomised designs
- => simply corresponds to the choice of basis functions: detection = canonical HRF; estimation = FIR
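The correspondence between design goals and basis sets can be illustrated: a single canonical-style regressor for detection vs. an FIR basis whose parameter estimates recover the response shape bin by bin. The onsets and the exponential kernel below are illustrative stand-ins, not a real canonical HRF.

```python
import numpy as np

n_scans, fir_len = 100, 10
onsets = [5, 25, 45, 65, 85]                 # event onsets, in scans (assumed)

# FIR basis: column j indexes the j-th post-stimulus scan of every event
X_fir = np.zeros((n_scans, fir_len))
for o in onsets:
    for j in range(fir_len):
        if o + j < n_scans:
            X_fir[o + j, j] = 1.0

# "Canonical" detection regressor: stick function convolved with one fixed kernel
sticks = np.zeros(n_scans)
sticks[onsets] = 1.0
kernel = np.exp(-np.arange(fir_len) / 3.0)   # stand-in for a canonical HRF
x_canonical = np.convolve(sticks, kernel)[:n_scans]

# Regressing the "data" on the FIR basis recovers the response shape bin by bin
beta_fir, *_ = np.linalg.lstsq(X_fir, x_canonical, rcond=None)
```

The canonical regressor spends all its degrees of freedom on one assumed shape (maximal detection power); the FIR basis spends one parameter per post-stimulus bin (maximal shape estimation).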

91 [Diagram: stimulus train s(t) convolved with r(τ) gives neural input u(t); u(t) convolved with the HRF h(τ) gives the predicted response x(t); axes in time (s) and PST (s)]

92 [Figure 2: design-matrix regressors x1, x2, x3 sampled at time bin T0=9 vs T0=16 of each scan (T=16 bins per scan, TR=2s)]
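The micro-time sampling scheme behind Figure 2 can be sketched: build the convolved regressor on a fine grid of T bins per scan, then read off one reference bin per scan. T=16 and TR=2s follow the slide; the event onsets, kernel, and (0-based) bin indices are illustrative assumptions.

```python
import numpy as np

TR, T = 2.0, 16                              # 16 time bins per 2s scan
n_scans = 50
dt = TR / T

# Event train on the fine grid (onset times in seconds are assumed)
grid = np.zeros(n_scans * T)
onset_times = np.array([4.0, 14.0, 24.0, 34.0])
grid[(onset_times / dt).astype(int)] = 1.0

# Toy response kernel peaking ~6s after each event
tau = np.arange(0.0, 24.0, dt)
kernel = (tau / 6.0) ** 2 * np.exp(2.0 * (1.0 - tau / 6.0))
fine = np.convolve(grid, kernel)[:n_scans * T]

def sample(fine, t0):
    """Design-matrix column: the fine-grid regressor read at bin t0 of each scan."""
    return fine[t0::T][:n_scans]

x_mid = sample(fine, 8)                      # reference bin mid-scan
x_end = sample(fine, 15)                     # reference bin at end of scan
```

The two columns differ because the reference bin fixes which point of the response each scan is assumed to sample, which is why the choice of T0 should match the slice of interest.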

93 [Figure: BOLD impulse response, labelling the Initial Dip, Peak, Dispersion, and Undershoot (panels A-C)]

94 [Figure: effective sampling of the BOLD response with scans at TR=4s. Synchronous stimuli (SOA=8s): effective sampling rate = 0.25 Hz; asynchronous stimuli (SOA=6s): effective sampling rate = 0.5 Hz; random jitter (SOA=4-8s) likewise varies the sampled phase]

95 [Figure: predicted responses at SOA=2s vs SOA=16s]

