
1 Hidden Process Models with applications to fMRI data. Rebecca Hutchinson, Oregon State University. Joint work with Tom M. Mitchell, Carnegie Mellon University. August 2, 2009, Joint Statistical Meetings, Washington DC.

2 Introduction
Hidden Process Models (HPMs):
–A probabilistic model for time series data.
–Designed for data generated by a collection of latent processes.
Example domain:
–Modeling cognitive processes (e.g. making a decision) in functional Magnetic Resonance Imaging time series.
Characteristics of potential domains:
–Processes with spatio-temporal signatures.
–Uncertainty about the temporal location of processes.
–High-dimensional, sparse, noisy data.

3 fMRI Data
[Figure: hemodynamic response following neural activity; signal amplitude vs. time (seconds).]
Features: 5k-15k voxels, imaged every second.
Training examples: 10-40 trials (task repetitions).
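The hemodynamic response referred to above is typically a slow, smooth curve that peaks several seconds after the neural event. As a rough illustration only (the talk does not commit to a particular functional form, and these parameters are conventional defaults rather than values from the study), a minimal double-gamma response sampled at the 1-second imaging rate might look like this:

```python
import numpy as np
from scipy.stats import gamma

def canonical_hrf(duration_s=24, tr=1.0):
    """Illustrative double-gamma hemodynamic response sampled every
    `tr` seconds; parameter choices are assumptions, not from the talk."""
    t = np.arange(0, duration_s, tr)
    peak = gamma.pdf(t, a=6)          # main response, peaks near 5 s
    undershoot = gamma.pdf(t, a=16)   # late undershoot
    hrf = peak - 0.35 * undershoot
    return hrf / np.abs(hrf).max()    # normalize peak amplitude

print(np.round(canonical_hrf()[:8], 3))  # slow rise over the first seconds
```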

4 Study: Pictures and Sentences
Task: Decide whether the sentence describes the picture correctly; indicate with a button press.
13 normal subjects, 40 trials per subject.
Sentences and pictures describe 3 symbols: *, +, and $, using ‘above’, ‘below’, ‘not above’, ‘not below’.
Images are acquired every 0.5 seconds.
[Trial timeline: starting at t=0, Read Sentence / View Picture stimuli (4 sec. each), Press Button, then Rest and Fixation (8 sec.).]

5 Goals for fMRI
To track cognitive processes over time.
–Estimate hemodynamic response signatures.
–Estimate process timings.
Modeling processes that do not directly correspond to the stimulus timing is a key contribution of HPMs!
To compare hypotheses of cognitive behavior.

6 Processes of the HPM
Process 1: ReadSentence. Response signature W (voxels v1, v2); duration d: 11 sec.; offsets Ω: {0,1}; P(Ω): {θ0, θ1}.
Process 2: ViewPicture. Response signature W (voxels v1, v2); duration d: 11 sec.; offsets Ω: {0,1}; P(Ω): {θ0, θ1}.
Input stimulus Δ: sentence, then picture, with timing landmarks λ1, λ2.
Process instance π2: process h = 2; timing landmark λ2; offset O = 1 (start time: λ2 + O).
One configuration c of process instances π1, π2, …, πk.
Predicted mean: the sum of the response signatures of the instances in c, placed at their start times; the observed data in voxels v1 and v2 adds N(0, σ1) and N(0, σ2) noise.
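To make the predicted mean concrete: each instance in the configuration drops its process's response signature into the trial starting at landmark + offset, and the contributions add where they overlap in time. A minimal sketch, with illustrative names and array shapes (not the authors' code):

```python
import numpy as np

def predicted_mean(T, V, instances, W, d):
    """Predicted mean of one trial (T timepoints x V voxels) for a
    configuration given as (process h, timing landmark, offset) tuples.
    W[h] is process h's response signature, shape (d[h], V)."""
    mu = np.zeros((T, V))
    for h, landmark, offset in instances:
        start = landmark + offset          # start time of this instance
        if start >= T:                     # instance falls outside the trial
            continue
        stop = min(start + d[h], T)        # clip at the end of the trial
        mu[start:stop] += W[h][: stop - start]
    return mu                              # data = mu + N(0, sigma_v) noise per voxel
```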

7 HPM Formalism
HPM = <H, C, σ>
H = <h1, …, hN>, a set of processes (e.g. ReadSentence)
h = <W, d, Ω, Θ>, a process
–W = response signature
–d = process duration
–Ω = allowable offsets
–Θ = multinomial parameters over values in Ω
C = <c1, …, cM>, a set of possible configurations
c = <π1, …, πk>, a set of process instances
π = <h, λ, O>, a process instance (e.g. ReadSentence(S1))
–h = process ID
–λ = timing landmark (e.g. stimulus presentation of S1)
–O = offset (takes values in Ω_h)
C = a latent variable indicating the correct configuration
σ = <σ1, …, σV>, standard deviation for each voxel
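One way to transcribe these tuples into code, purely as a reading aid (the field names and shapes are illustrative assumptions, not the authors' data structures):

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class Process:                 # h = <W, d, Omega, Theta>
    W: np.ndarray              # response signature, shape (d, V)
    d: int                     # duration in images
    offsets: List[int]         # allowable offsets Omega
    theta: np.ndarray          # multinomial P(O) over the offsets

@dataclass
class ProcessInstance:         # pi = <h, lambda, O>
    h: int                     # process ID
    landmark: int              # timing landmark (e.g. stimulus onset)
    offset: int                # offset, a value in Omega_h

@dataclass
class HPM:                     # HPM = <H, C, sigma>
    H: List[Process]                   # the processes
    C: List[List[ProcessInstance]]     # candidate configurations
    sigma: np.ndarray                  # per-voxel standard deviations
```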

8 HPMs: the graphical model
[Graphical model: for each process instance π1, …, πk, an unobserved Process Type h and Offset o combine with an observed Timing Landmark λ to give a Start Time s; together with the Configuration c they generate the observed data Y t,v for t=[1,T], v=[1,V].]
The set C of configurations constrains the joint distribution on {h(k), o(k)} ∀k.
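Assuming, as the noise terms on slide 6 suggest, independent Gaussian noise at each timepoint and voxel, the data likelihood for a configuration c can be written in the notation of the formalism above (a reconstruction consistent with the slides, not copied from them):

$$
P(Y \mid c, \mathrm{HPM}) \;=\; \prod_{t=1}^{T}\,\prod_{v=1}^{V}
\mathcal{N}\!\bigl(Y_{t,v}\,;\ \mu_{t,v}(c),\ \sigma_v\bigr),
\qquad
\mu_{t,v}(c) \;=\; \sum_{\pi_k \in c} W^{h(k)}\bigl(t - \lambda(k) - o(k),\ v\bigr),
$$

where a signature contributes only while $0 \le t - \lambda(k) - o(k) < d_{h(k)}$.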

9 Encoding Experiment Design
Processes: ReadSentence = 1, ViewPicture = 2, Decide = 3.
Input stimulus Δ with timing landmarks λ1, λ2.
Candidate configurations 1-4 pair the two stimulus processes with the two landmarks in either order and place Decide at one of two offsets after the second landmark.
Constraints encoded: h(π1) ∈ {1,2}, h(π2) ∈ {1,2}, h(π1) ≠ h(π2), o(π1) = 0, o(π2) = 0, h(π3) = 3, o(π3) ∈ {1,2}.
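A small sketch of how these constraints generate exactly four candidate configurations (function and variable names, and the landmark values in the usage line, are illustrative, not from the talk):

```python
from itertools import product

READ_SENTENCE, VIEW_PICTURE, DECIDE = 1, 2, 3

def candidate_configurations(lam1, lam2):
    """Enumerate configurations as lists of (process, landmark, offset):
    the two stimulus processes take the two landmarks in either order
    with offset 0, and Decide follows the second landmark at offset 1 or 2."""
    configs = []
    for (h1, h2), o3 in product(
            [(READ_SENTENCE, VIEW_PICTURE), (VIEW_PICTURE, READ_SENTENCE)],
            [1, 2]):
        configs.append([(h1, lam1, 0), (h2, lam2, 0), (DECIDE, lam2, o3)])
    return configs

print(len(candidate_configurations(0, 8)))  # -> 4 (landmark values here are arbitrary)
```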

10 Inference
Over C, the latent indicator of the correct configuration.
Choose the most likely configuration: c* = argmax_c P(C = c | Y, Δ, HPM), where Y = observed data, Δ = input stimuli, HPM = model.
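A sketch of that selection rule, reusing the predicted_mean helper from the slide-6 example and the Gaussian noise model above; the per-configuration log prior would come from the offset multinomials Θ (all names are illustrative):

```python
import numpy as np
from scipy.stats import norm

def map_configuration(Y, configs, log_prior, W, d, sigma):
    """Return argmax_c P(C=c | Y, HPM): Gaussian log-likelihood of the
    data around each configuration's predicted mean, plus log P(c)."""
    T, V = Y.shape
    scores = []
    for c, lp in zip(configs, log_prior):
        mu = predicted_mean(T, V, c, W, d)              # from the slide-6 sketch
        scores.append(norm.logpdf(Y, loc=mu, scale=sigma).sum() + lp)
    return int(np.argmax(scores))
```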

11 Learning
Parameters to learn:
–Response signature W for each process
–Timing distribution Θ for each process
–Standard deviation σ for each voxel
Expectation-Maximization (EM) algorithm to estimate W and Θ.
–E step: estimate the probability distribution over C.
–M step: update estimates of W (using reweighted least squares), Θ, and σ (using standard MLEs) based on the E step.
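A compressed sketch of one such EM iteration, to show where the reweighted least squares enters. It assumes each trial comes with its own list of candidate configurations and a prior over them, stacks all process signatures into one matrix W (rows grouped by process, columns = voxels), and omits the Θ update; it is an illustration of the idea, not the authors' implementation:

```python
import numpy as np
from scipy.stats import norm

def design_matrix(config, T, d, n_params):
    """Indicator matrix X (T x n_params) such that X @ W reproduces the
    configuration's predicted mean for all voxels at once."""
    X = np.zeros((T, n_params))
    starts = dict(zip(d, np.cumsum([0] + list(d.values()))))  # column block per process
    for h, landmark, offset in config:
        for j in range(d[h]):
            t = landmark + offset + j
            if 0 <= t < T:
                X[t, starts[h] + j] += 1.0
    return X

def em_iteration(trials, trial_configs, trial_priors, W, d, sigma):
    """One EM pass.  E step: posterior over each trial's configurations.
    M step: weighted least squares for W and an MLE-style update of sigma
    (residuals use the current W for simplicity)."""
    n_params, V = W.shape
    XtX = np.zeros((n_params, n_params))
    XtY = np.zeros((n_params, V))
    resid_ss, n_obs = np.zeros(V), 0
    for Y, configs, prior in zip(trials, trial_configs, trial_priors):
        T = Y.shape[0]
        Xs = [design_matrix(c, T, d, n_params) for c in configs]
        logp = np.array([norm.logpdf(Y, X @ W, sigma).sum() + np.log(p)
                         for X, p in zip(Xs, prior)])
        post = np.exp(logp - logp.max())
        post /= post.sum()                              # E step: P(c | Y)
        for X, w in zip(Xs, post):                      # M step accumulators
            XtX += w * (X.T @ X)
            XtY += w * (X.T @ Y)
            resid_ss += w * ((Y - X @ W) ** 2).sum(axis=0)
        n_obs += T
    W_new = np.linalg.solve(XtX + 1e-6 * np.eye(n_params), XtY)
    sigma_new = np.sqrt(resid_ss / n_obs)
    return W_new, sigma_new
```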

12 Process Response Signatures
Standard: Each process has a matrix of parameters, one for each point in space and time for the duration of the response (e.g. 24 time points).
Regularized: Same as standard, but learned with penalties for deviations from temporal and/or spatial smoothness.
Basis functions: Each process has a small number (e.g. 3) of weights for each voxel that are combined with a basis to get the response.
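Two short illustrations of the second and third options; the shapes and the exact penalty form are chosen for the example rather than taken from the talk:

```python
import numpy as np

def signature_from_basis(basis, weights):
    """Basis-function variant: the (duration x V) signature is a weighted
    combination of a few temporal basis functions, so only a handful of
    weights per voxel are learned.  basis: (duration, n_basis),
    weights: (n_basis, V)."""
    return basis @ weights

def temporal_smoothness_penalty(W, lam=1.0):
    """Regularized variant: one simple smoothness penalty, the sum of
    squared differences between temporally adjacent points of W."""
    return lam * np.sum(np.diff(W, axis=0) ** 2)
```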

13 Models
HPM-GNB: ReadSentence and ViewPicture, duration = 8 sec. (no overlap)
–an approximation of a Gaussian Naïve Bayes classifier, with HPM assumptions and noise model
HPM-2: ReadSentence and ViewPicture, duration = 12 sec. (temporal overlap)
HPM-3: HPM-2 + Decide (offsets = [0,7] images following the second stimulus)
HPM-4: HPM-3 + PressButton (offsets = {-1,0} following the button press)

14 Evaluation
Select the 1000 most active voxels.
Compute the improvement in test-data log-likelihood compared with predicting the mean training trial for all test trials (a baseline).
5-fold cross-validation per subject; mean over 13 subjects.

              Standard   Regularized   Basis functions
HPM-GNB         -293        2590            2010
HPM-2          -1150        3910            3740
HPM-3          -2000        4960            4710
HPM-4          -4490        4810            4770
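The quantity in the table can be read as the difference between two test-set log-likelihoods. A minimal sketch under the same Gaussian noise model, where the baseline's predicted mean for every test trial is simply the mean training trial and both models share the per-voxel noise levels (a simplifying assumption; variable names are illustrative):

```python
import numpy as np
from scipy.stats import norm

def loglik_improvement(test_trials, model_means, sigma, mean_train_trial):
    """Test log-likelihood under the model minus that of a baseline that
    predicts the mean training trial for every test trial."""
    model_ll = sum(norm.logpdf(Y, mu, sigma).sum()
                   for Y, mu in zip(test_trials, model_means))
    base_ll = sum(norm.logpdf(Y, mean_train_trial, sigma).sum()
                  for Y in test_trials)
    return model_ll - base_ll
```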

15 Interpretation and Visualization
Timing for the third (Decide) process in HPM-3 (values have been rounded):

Offset:        0     1     2     3     4     5     6     7
Standard      0.3   0.08  0.1   0.05  0.05  0.2   0.08  0.15
Regularized   0.3   0.08  0.1   0.05  0.05  0.2   0.08  0.15
Basis         0.5   0.1   0.1   0.08  0.05  0.03  0.05  0.08

For each subject, average the response signatures for each voxel over time and plot the result at each spatial location. Compare time courses for the same voxel.

16 Standard

17 Regularized

18 Basis functions

19 Time courses
[Figure: time courses for the same voxel under the standard, regularized, and basis-function variants, and the basis set (Hossein-Zadeh03).]

20 Related Work
fMRI
–General Linear Model (Dale99): must assume the timing of process onset to estimate the hemodynamic response.
–Computer models of human cognition (Just99, Anderson04): predict fMRI data rather than learning parameters of processes from the data.
Machine Learning
–Classification of windows of fMRI data (overview in Haynes06): does not typically model overlapping hemodynamic responses.
–Dynamic Bayes Networks (Murphy02, Ghahramani97): HPM assumptions/constraints can be encoded by extending factorial HMMs with links between the Markov chains.

21 Conclusions
Take-away messages:
–HPMs are a probabilistic model for time series data generated by a collection of latent processes.
–In the fMRI domain, HPMs can simultaneously estimate the hemodynamic response and localize the timing of cognitive processes.
Future work:
–Automatically discover the number of latent processes.
–Learn process durations.
–Apply to open cognitive science problems.

22 References
John R. Anderson, Daniel Bothell, Michael D. Byrne, Scott Douglass, Christian Lebiere, and Yulin Qin. An integrated theory of the mind. Psychological Review, 111(4):1036–1060, 2004. http://act-r.psy.cmu.edu/about/.
Anders M. Dale. Optimal experimental design for event-related fMRI. Human Brain Mapping, 8:109–114, 1999.
Zoubin Ghahramani and Michael I. Jordan. Factorial hidden Markov models. Machine Learning, 29:245–275, 1997.
John-Dylan Haynes and Geraint Rees. Decoding mental states from brain activity in humans. Nature Reviews Neuroscience, 7:523–534, July 2006.
Gholam-Ali Hossein-Zadeh, Babak A. Ardekani, and Hamid Soltanian-Zadeh. A signal subspace approach for modeling the hemodynamic response function in fMRI. Magnetic Resonance Imaging, 21:835–843, 2003.
Marcel Adam Just, Patricia A. Carpenter, and Sashank Varma. Computational modeling of high-level cognition and brain function. Human Brain Mapping, 8:128–136, 1999. http://www.ccbi.cmu.edu/project 10modeling4CAPS.htm.
Kevin P. Murphy. Dynamic Bayesian networks. To appear in Probabilistic Graphical Models, M. Jordan (ed.), November 2002.

