
1 Adaptive multi-voxel representation of stimuli, rules and responses
Alexandra Woolgar, John Duncan

2 Background
Frontoparietal "multiple-demand" (MD) regions respond to a wide variety of task demands
Single cells code a variety of different task features
Stimuli, responses and stimulus-response mapping rules are represented in the voxel-wise pattern of BOLD response in MD regions (visual stimulus-response task): all relevant task features were represented, and task rules were most strongly represented across the system
MD regions are thought to form a flexible system, adapting to represent the relevant features of the current task

3 Design

4 Design
[Figure panels: "Low perceptual difficulty" and "High perceptual difficulty"]

5 Alternating blocks of high and low perceptual difficulty
Random order of stimuli within each block; self-paced, 1000 ms ISI

6 Analysis
Six classifications: Rule (low difficulty), Rule (high difficulty), Position (low), Position (high), Response (low), Response (high)

7 Test/Train Support Vector Machine
Input is the beta at each voxel. (Speaker note: first go through the basic idea of classification.)

8 Test/Train Support Vector Machine
Blocks are split into test and train sets. (Speaker note: for one chunk of time the dots are betas; multivariate patterns are shown in 2D distance space for simplicity.)

9 Test/Train Support Vector Machine
A Support Vector Machine is trained on the training blocks. (Speaker note: for one chunk of time the dots are betas; multivariate patterns are shown in 2D distance space for simplicity.)

13 Test/Train Support Vector Machine
The trained classifier is applied to the held-out blocks, giving an error score. (Speaker note: for one chunk of time the dots are betas; multivariate patterns are shown in 2D distance space for simplicity.)

14 Test/Train Support Vector Machine
Error scores are averaged across blocks, giving an average error score for this ROI, this contrast and this subject. (Speaker note: this is then repeated for each ROI, each subject and each contrast, yielding one average error score per subject for each region and task feature; statistics are then carried out at the group level.) A code sketch of the whole procedure follows below.
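To make the train/test procedure on slides 7–14 concrete, here is a minimal sketch of leave-one-block-out SVM classification of beta patterns, run on toy random data. The use of scikit-learn (LinearSVC, LeaveOneGroupOut) and the array names betas, labels and blocks are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch of leave-one-block-out SVM classification of beta patterns
# (toy data; scikit-learn and the variable names are assumptions, not the
# authors' actual code).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import LeaveOneGroupOut

# betas:  (n_betas, n_voxels) beta estimates for one ROI and one task feature,
#         e.g. rule at low difficulty: 10 blocks x 2 conditions = 20 betas.
# labels: condition label for each beta (e.g. rule 1 vs rule 2).
# blocks: block index for each beta, so one whole block can be held out.
rng = np.random.default_rng(0)
n_blocks, n_voxels = 10, 200                       # toy dimensions
betas = rng.standard_normal((n_blocks * 2, n_voxels))
labels = np.tile([0, 1], n_blocks)
blocks = np.repeat(np.arange(n_blocks), 2)

accuracies = []
for train_idx, test_idx in LeaveOneGroupOut().split(betas, labels, groups=blocks):
    clf = LinearSVC()                              # linear support vector machine
    clf.fit(betas[train_idx], labels[train_idx])   # train on the remaining blocks
    accuracies.append(clf.score(betas[test_idx], labels[test_idx]))

# Mean accuracy (equivalently 1 - average error score) for this ROI, contrast
# and subject; repeat over ROIs, contrasts and subjects, then carry out
# statistics at the group level.
print(np.mean(accuracies))
```

A linear kernel is the commonly used choice for MVPA of voxel patterns, since the decision boundary then has one interpretable weight per voxel.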

15 Whole-brain MVPA (spotlight)
MVPA accuracy is expressed as percent correct; a 5 mm radius spotlight covers ~16 voxels (see the worked figure below). (Speaker note: the results are not the same story everywhere…)
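For reference, the "~16 voxels" figure matches a simple volume approximation, assuming the 3 x 3 x 3.75 mm voxel size given in the acquisition details (slide 16):

\[
\frac{\tfrac{4}{3}\pi \,(5\,\mathrm{mm})^{3}}{3 \times 3 \times 3.75\,\mathrm{mm}^{3}}
\approx \frac{523.6\,\mathrm{mm}^{3}}{33.75\,\mathrm{mm}^{3}} \approx 15.5 \approx 16 \text{ voxels}
\]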

16 Details - design
High/low perceptual difficulty blocks last 2 minutes, with a 20 s inter-block interval
2 runs of 10 blocks each (~46 min of EPI; see the timing check below)
Standard CBU acquisition (TR = 2 s, 3 x 3 x 3.75 mm resolution)
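A quick timing check on the ~46 min figure, assuming 9 inter-block intervals within each run of 10 blocks:

\[
2 \text{ runs} \times \bigl(10 \times 120\,\mathrm{s} + 9 \times 20\,\mathrm{s}\bigr)
= 2 \times 1380\,\mathrm{s} = 2760\,\mathrm{s} \approx 46 \text{ min}
\]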

17 Details - analysis
Betas estimated with a standard multiple regression approach (SPM5); 128 s high-pass filter; block and run means; only correct trials estimated (average 92% correct)
20 blocks over 2 runs
Each trial contributes to the estimation of 3 betas: rule (1 or 2), stimulus position (inner or outer), response (inner or outer)
Average of 482 correct trials per subject = 482 / (2 x 20) ≈ 12.1 trials per beta
10 blocks per perceptual difficulty condition = 20 betas for each classification (worked figures below)
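Worked figures for the counts above, using the numbers on this slide:

\[
\text{trials per beta} = \frac{482}{2 \times 20} = 12.05 \approx 12.1,
\qquad
\text{betas per classification} = 10 \text{ blocks} \times 2 \text{ levels} = 20
\]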

18 More trials per beta, or more betas for classification?
More trials contributing to each beta estimate -> more stable beta estimates
More betas for classification -> more data for the classifier to be trained on
Previous design…
Estimating all 16 stimuli separately: 5.6 instances/beta, 160 betas for classification
Each trial contributing to the estimate of one of four stimulus positions, responses and colours: 22.5 instances/beta, 40 betas for classification
The second model gave higher classification accuracies but more variability, yielding similar probability values (worked figures below)
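Worked figures for the previous-design comparison, using the 898 correct trials per subject reported on slide 24:

\[
\frac{898}{160 \text{ betas}} \approx 5.6 \text{ instances per beta},
\qquad
\frac{898}{10 \text{ blocks} \times 4 \text{ levels}} \approx 22.5 \text{ instances per beta (40 betas)}
\]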

19 Results – low perceptual difficulty
[Figure: decoding in frontal regions, which code rule and response but show little position coding] ** p < 0.01

20 Adaptive coding? High vs. low perceptual difficulty
Position coding: [Figure summarising the increase in position coding in each MD region and the decrease in BA17/18] (Speaker note: introduce the bars bit by bit.)

21 What about the other task features?
(Speaker note: there is a lot on this slide. The main point to take away is the reshuffling of the representation of information across regions, i.e. adaptive, flexible coding. N.B. rule is still the most strongly represented feature across conditions.)

22 Conclusions
Data suggest adaptive coding in the MD system in response to changing task demands
Dynamic (block to block)
Increased MD representation of the feature requiring more attention (position)
Decreased representation in lower-level sensory areas (in line with the physical change in the stimulus)

23

24 Further details of previous experiment
532 scans; 12 x 10 columns; 128 s high-pass filter; block and run means
Only correct trials estimated: average of 898 correct trials per subject (97% correct)
10 blocks, 2 runs
Each trial contributes to the estimate of 3 betas: position (1, 2, 3 or 4), response (1, 2, 3 or 4) and colour (blue, yellow, green or pink)
= 898 / (10 x 4) ≈ 22.5 instances per beta
The previous model estimated all 16 stimuli separately, giving 5.6 instances per beta but 160 betas for classification (train on 144, test on 16)

25 High vs. Low difficulty – univariate results

