Task Design
Participants fixated a plus sign at the center of the screen while sentences missing their last word were presented auditorily. Following each sentence, a stimulus was flashed parafoveally. The stimulus was one of three types: (1) a word fitting the context of the sentence (in-context), (2) an out-of-context word, or (3) a nonword. Participants then performed a lexical decision task on the visual stimulus, responding by pressing two buttons bimanually.

Conclusions
Reading of a word is modulated by the context of the sentence in which that word appears. This effect appears behaviorally in both reaction time and accuracy data. The Interactive Activation Model (McClelland and Rumelhart, 1981) attributes the context effect to top-down activation spreading down through the early layers representing words, letters, and even letter features, where it interacts with data-driven, bottom-up processes in word recognition.
In a slow event-related fMRI study of seven subjects, there was a trend toward visual areas being modulated by the top-down influence of auditory sentence context: the response to a word shown in context was diminished relative to a word shown out of context. Although this effect was not significant across all subjects, several subjects did show significant effects of context within regions activated by words and nonwords in the upper or lower visual fields.

Future Plans
Future work will expand the visual mapping of the context effect, identifying the specific visual areas modulated by sentence context during word recognition. We also plan to increase power by running more subjects. Preliminary data suggest that this modulation may occur even at the earliest levels of processing in the visual stream.

Introduction
We present data from a study of top-down modulation of visual areas by sentence context.
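The Interactive Activation account invoked in the Conclusions can be illustrated with a toy sketch. Below is an assumed, minimal two-layer version in Python, not the McClelland and Rumelhart (1981) parameterization: a sentence context pre-activates one word unit, whose top-down feedback then boosts its letter units, which in turn feed activation back up.

```python
# Toy sketch of the Interactive Activation idea: letter units receive
# bottom-up input from the stimulus plus top-down feedback from word units,
# and the sentence context pre-activates one word. All weights and step
# counts are assumed for illustration only.

WORDS = ["sugar", "table"]  # one in-context and one out-of-context candidate

def run(stimulus, context_word=None, steps=5, bottom_up=0.5, feedback=0.2):
    """Return word-unit activations after a few interactive update steps."""
    word_act = {w: (0.3 if w == context_word else 0.0) for w in WORDS}
    letter_act = [dict() for _ in range(len(stimulus))]
    for _ in range(steps):
        # Bottom-up: each stimulus letter excites its letter unit.
        for pos, letter in enumerate(stimulus):
            letter_act[pos][letter] = letter_act[pos].get(letter, 0.0) + bottom_up
        # Top-down: each word unit feeds activation back to its letters.
        for word, act in word_act.items():
            for pos, letter in enumerate(word):
                letter_act[pos][letter] = letter_act[pos].get(letter, 0.0) + feedback * act
        # Letter-to-word: words accumulate support from their letters.
        for word in WORDS:
            word_act[word] += 0.1 * sum(letter_act[pos][word[pos]]
                                        for pos in range(len(word)))
    return word_act
```

With this sketch, `run("sugar", context_word="sugar")` yields a higher activation for "sugar" than `run("sugar")` without context, mirroring the faster in-context lexical decisions reported here.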
Based on the Interactive Activation Model (McClelland and Rumelhart, 1981), word recognition results not only from bottom-up, data-driven processes but also from top-down modulation by context and semantics, and even early visual areas may be affected by this contextual modulation. This model was tested using an event-related fMRI paradigm in which sentence context was presented auditorily and words and nonwords were shown visually.

References
McClelland, J., & Rumelhart, D. (1981). An interactive activation model of context effects in letter perception: Part 1. An account of basic findings. Psychological Review, 88(5).
McClelland, J., & O'Regan, J. (1981). Expectations increase the benefit derived from parafoveal visual information in reading words aloud. Journal of Experimental Psychology, 7(3).

Methods
Task: 5 sessions of 24 trials each (no stimuli repeated within subject). Trial type was counterbalanced back in time by condition. Time between probe onsets: 16 sec.
Participants: 7 students. Right-handed, English first language, not bilingual.
Imaging: GE Signa 1.5 Tesla research scanner. 1-shot gradient-echo spiral acquisitions, TR = 2 sec. Oblique axial slices, AC-PC aligned, 3.8 mm.
Processing: Image reconstruction, movement correction.

Analysis
Visual regions responding to any stimulus shown in the upper visual field, and to any stimulus shown in the lower visual field, were identified. Within these two activity maps for each subject, In Context and Out of Context trials were compared in a t-test contrast, testing the sensitivity of visual areas to modulation by sentence context.
Two regressors were convolved with the hemodynamic response function: Visual Probe in Upper Field (UVF) and Visual Probe in Lower Field (LVF). Regressor maps were formed using the four brain volumes following the probe, and these F-maps (UVF and LVF for each subject) were thresholded separately to obtain visual probe ROIs of at least 4 voxels.
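The two-regressor design described in the Analysis can be sketched in numpy. The HRF shape, scan length, and UVF/LVF onset assignments below are assumptions for illustration, not the study's actual values; only the TR (2 sec) and the 16-sec probe spacing come from the poster.

```python
import numpy as np

TR = 2.0        # sec, from the task timeline
N_VOLS = 120    # assumed number of brain volumes, for illustration only

def gamma_hrf(tr=TR, duration=24.0):
    """Gamma-shaped hemodynamic response sampled at the TR (assumed shape)."""
    t = np.arange(0.0, duration, tr)
    h = t ** 5 * np.exp(-t)
    return h / h.sum()

def probe_regressor(onsets_sec, n_vols=N_VOLS, tr=TR):
    """Event train for probe onsets convolved with the HRF."""
    events = np.zeros(n_vols)
    for onset in onsets_sec:
        events[int(round(onset / tr))] = 1.0
    return np.convolve(events, gamma_hrf())[:n_vols]

# Probe onsets 16 s apart; the alternating UVF/LVF assignment is hypothetical.
uvf = probe_regressor(range(0, 192, 32))
lvf = probe_regressor(range(16, 192, 32))
X = np.column_stack([uvf, lvf])  # two-column design matrix for the GLM
```

Fitting this design matrix to each voxel's time series and F-testing the two columns would yield per-subject UVF and LVF maps like those thresholded above.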
Voxel-wise p-value thresholds ranged from .05 to more stringent values. Time series from the visual probe ROIs for each subject, from the UVF and LVF maps, were entered into a post-hoc t-test contrasting the In Context and Out of Context conditions.

Behavioral Results
Participants were significantly faster to respond to words shown in context than to words shown out of context (ANOVA: Context, p = .02; means OC/IC: /971.1 ms; n = 6*). There was no significant effect of position on screen and no significant position-by-context interaction. Participants were faster to respond to words than to nonwords, though not significantly.
* RTs from correct trials only; behavioral data missing for one subject.

[Figure: stimulus conditions. The participant hears an auditory sentence context ("I like my coffee with milk and...") and then sees one of six stimuli: an in-context word (sugar), an out-of-context word (table), or a nonword (jayer), presented in either the upper or the lower visual field.]

Imaging Results
An example of visual activity in an upper visual field F-map is shown below. Across all subjects, words shown in context activated visual areas less than words shown out of context (post-hoc t-test trend, p = .23). This indicates that an auditory sentence context can modulate visual activity driven by word stimulation; the resulting time series is shown below. Although this analysis was not significant across subjects, further analysis suggests that some subjects do have visual areas significantly affected by word context. A figure shows visual mapping on one subject's inflated left hemisphere, with the calcarine fissure facing front right; the blue line outlines an ROI sensitive to context (GLM: In vs. Out). The visual mapping displayed underneath the outline suggests that the context modulation occurs within early visual areas (most likely V1).

[Figure: task timeline. TR = 2 sec; each trial comprises an attention tone, the auditory sentence context, and a visual probe (UVF or LVF), followed by the acquired brain volumes; context t-tests (In vs. Out) were computed per subject (Subject 1, Subject 2, etc.).]
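The post-hoc In- vs. Out-of-Context contrast amounts to a paired t-test across subjects on the ROI responses. Here is a minimal numpy sketch with hypothetical per-subject values (the real values live in the poster's time-series figure and are not reproduced here):

```python
import numpy as np

def paired_t(in_context, out_of_context):
    """Paired t statistic across subjects (In minus Out of Context)."""
    d = np.asarray(in_context, dtype=float) - np.asarray(out_of_context, dtype=float)
    return float(d.mean() / (d.std(ddof=1) / np.sqrt(d.size)))

# Hypothetical per-subject mean ROI responses (percent signal change):
in_c  = [0.21, 0.18, 0.25, 0.30, 0.22, 0.19, 0.27]
out_c = [0.26, 0.24, 0.27, 0.31, 0.29, 0.21, 0.33]
t_stat = paired_t(in_c, out_c)  # negative t: in-context responses are smaller
```

A consistently diminished in-context response, as reported here as a trend, would show up as a negative t statistic in this contrast.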
Extra Information: Behavioral Pilot Results
In a behavioral pilot of the task (n = 8), the context effect was highly significant in reaction time (p < .0001; means: IC = 786 ms, OC = 998 ms). Participants were much faster to respond when the word was shown in context, controlling for location. The location effect and the location-by-context interaction were non-significant. There was also a strong trend toward higher accuracy for words shown in context (p = .052), controlling for location.

Design Specifics
Behavior: Words used for the study were balanced for frequency; word length, imageability, concreteness, familiarity, and number of syllables were within restricted ranges for both sets. Sentence context fit was measured by cloze values. Words were shown 4 degrees of visual angle above or below fixation, subtending 2 degrees of visual angle in height.
Visual Mapping: Hemifield flashing checkerboard rotating in 18-sec cycles.

Acknowledgements: Ed DeYoe, Kristi Clark, Clara Gomes, Amna Dermish, Kate Fissell, Jennifer Cochran.
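The stimulus geometry in Design Specifics (4 degrees of eccentricity, 2 degrees of height) can be converted to screen pixels with a standard flat-screen calculation. The viewing distance and display resolution below are assumptions, since the poster reports only the angles:

```python
import math

def visual_angle_to_px(deg, distance_cm, px_per_cm):
    """Pixel size of a stimulus subtending `deg` degrees at `distance_cm` (flat screen)."""
    size_cm = 2.0 * distance_cm * math.tan(math.radians(deg) / 2.0)
    return size_cm * px_per_cm

# Assumed display geometry: 1024-px-wide screen, 40 cm wide, viewed at 60 cm.
PX_PER_CM = 1024 / 40.0
word_height_px = visual_angle_to_px(2.0, distance_cm=60.0, px_per_cm=PX_PER_CM)
offset_px = visual_angle_to_px(4.0, distance_cm=60.0, px_per_cm=PX_PER_CM)
```

Under these assumed values the 2-degree word height comes out to roughly 54 pixels, with its center offset about 107 pixels above or below fixation.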