1 The Interactive Activation Model

2 Ubiquity of the Constraint Satisfaction Problem
In sentence processing:
– I saw the Grand Canyon flying to New York
– I saw the sheep grazing in the field
In comprehension:
– Margie was sitting on the front steps when she heard the familiar jingle of the “Good Humor” truck. She remembered her birthday money and ran into the house.
In reaching, grasping, typing…

3

4 Graded and variable nature of neuronal responses

5 Lateral Inhibition in Eye of Limulus (Horseshoe Crab)

6 Findings Motivating the IA Model
The word superiority effect (Reicher, 1969):
– Subjects identify letters in words better than single letters or letters in scrambled strings.
The pseudoword advantage:
– The advantage over single letters and scrambled strings extends to pronounceable non-words (e.g. LEAT, LOAT, …).
The contextual enhancement effect:
– Increasing the duration of the context or of the target letter facilitates correct identification.
Reicher’s experiment:
– Used pairs of 4-letter words differing by one letter (READ, ROAD).
– The ‘critical letter’ is the letter that differs.
– Critical letters occur in all four positions.
– The same critical letters occur alone or in scrambled strings (_E__, _O__; EADR, EODR).
[Bar graph: percent correct for words (W), pseudowords (PW), scrambled strings (Scr), and single letters (L)]

7 [Example display: READ or _E__, followed by a forced choice between E and O]

8 The Contextual Enhancement Effect
[Graph: percent correct as a function of ratio]

9 Questions
– Can we explain the Word Superiority Effect and the Contextual Enhancement Effect as a consequence of a synergistic combination of ‘top-down’ and ‘bottom-up’ influences?
– Can the same processes also explain the pseudoword advantage?
– What specific assumptions are necessary to capture the data?
– What can we learn about these assumptions from the study of model variants and effects of parameter changes?
– Can we derive novel predictions?
– What do we learn about the limitations as well as the strengths of the model?

10 Approach
– Draw on ideas from the way neurons work.
– Keep it as simple as possible.

11 The Interactive Activation Model
– Feature, letter, and word units.
– Activation is the system’s only ‘currency’.
– Mutually consistent items on adjacent levels excite each other.
– Mutually exclusive alternatives inhibit each other.
– The response is selected from the letter units in the cued location according to the Luce choice rule:
  P(R_i) = e^(μ·ā_i) / Σ_j e^(μ·ā_j)
  where ā_i is the time-averaged activation of letter unit i in the cued location, μ is a sensitivity parameter, and j runs over the letter alternatives in that location.
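A minimal sketch of this response-selection step in Python, assuming the standard exponential form of the Luce choice rule; the sensitivity parameter mu and the example activations are illustrative, not taken from the slide.

    import numpy as np

    def luce_choice(avg_activations, mu=10.0):
        # Response strength is exponential in the time-averaged activation
        # of each letter unit in the cued position; the choice probability
        # is the normalized strength.
        s = np.exp(mu * np.asarray(avg_activations, dtype=float))
        return s / s.sum()

    # Example: the unit for 'E' has a higher running-average activation
    # than the unit for 'O', so 'E' wins the forced choice more often.
    print(luce_choice([0.45, 0.15]))  # -> roughly [0.95, 0.05]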

12 IAC Activation Function
Calculate the net input to each unit: net_i = Σ_j o_j·w_ij
Set the outputs: o_j = [a_j]+ (that is, o_j = a_j when a_j > 0, and 0 otherwise)
[Diagram: unit i receives output from unit j via weight w_ij; activation a is bounded between min and max, with a resting level below 0]
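A minimal sketch of these two steps, assuming activations are kept in a NumPy vector and weights in a matrix whose entry W[i, j] is the connection from unit j to unit i (names are illustrative).

    import numpy as np

    def outputs(a):
        # o_j = [a_j]+ : only units with positive activation send output.
        return np.maximum(a, 0.0)

    def net_inputs(a, W, external=0.0):
        # net_i = sum_j o_j * w_ij, plus any external (stimulus-driven) input.
        return W @ outputs(a) + external

    # Example with two units that inhibit each other:
    W = np.array([[0.0, -0.2],
                  [-0.2, 0.0]])
    a = np.array([0.3, -0.1])
    print(net_inputs(a, W))  # -> [ 0.   -0.06]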

13 The Interactive Activation Model

14 How the Model Works: Words vs. Single Letters

15 Rest levels:
– Features and letters: rest level = -.1
– Words: rest level is frequency dependent, between -.001 and -.05 (higher-frequency words rest closer to 0)
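A small sketch of how these rest levels might be set in Python; the linear mapping from normalized log frequency onto the stated range is an assumption for illustration, since the slide gives only the endpoints.

    import numpy as np

    FEATURE_REST = -0.1
    LETTER_REST = -0.1

    def word_rest_level(log_freq, log_freq_min, log_freq_max):
        # Interpolate between -0.05 (lowest-frequency word) and -0.001
        # (highest-frequency word). The linear-in-log-frequency mapping
        # is an assumption; only the endpoints come from the slide.
        t = (log_freq - log_freq_min) / (log_freq_max - log_freq_min)
        t = float(np.clip(t, 0.0, 1.0))
        return -0.05 + t * (-0.001 - (-0.05))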

16 Word and Letter Level Activations for Words and Pseudowords
Idea of a ‘conspiracy effect’, rather than consistency with rules, as a basis of performance on ‘regular’ items.

17 Role of Pronounceability vs. Neighbors
Three kinds of pairs:
– Pronounceable: SLET-SPET
– Unpronounceable/good: SLCT-SPCT
– Unpronounceable/bad: XLQJ-XPQJ

18 Simulation of Contextual Enhancement Effect

19 The Multinomial IA Model
– Very similar to Rumelhart’s 1977 formulation.
– Based on a simple generative model of displays in letter perception experiments:
 – The experimenter selects a word,
 – selects letters based on the word, but with possible random errors,
 – selects features based on the letters, again with possible random error, AND/OR
 – the visual system registers features with some possibility of error;
 – some features may be missing, as in the WOR? example above.
– Units without parents have biases equal to the log of their prior.
– Weights are defined ‘top down’: they correspond to log p(C|P), where C = child and P = parent.
– Units take on probabilistic activations based on the softmax function:
  p_i = e^(net_i) / Σ_i′ e^(net_i′)
 – Only one unit is allowed to be active within each set of mutually exclusive hypotheses.
– A state corresponds to one active word unit and one active letter unit in each position, together with the provided set of feature activations.
– If the priors and weights correspond to those underlying the generative model, then states are ‘sampled’ in proportion to their posterior probability:
 – State of the entire system = a sample from the joint posterior.
 – State of the word or letter units in a given position = a sample from the marginal posterior.
– Subscript i indexes one member of a set of mutually exclusive hypotheses; i′ runs over all members of that set of mutually exclusive alternatives.
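A minimal sketch of the softmax-and-sample step for one set of mutually exclusive hypotheses (for example, the word units, or the letter units in one position); the function name and the resampling schedule mentioned afterwards are assumptions about how the sampling is organized, not statements from the slide.

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_hypothesis(net):
        # net_i = bias (log prior) plus weighted input from connected units.
        # Softmax the net inputs, then pick exactly one unit to be active,
        # so each set of mutually exclusive hypotheses ends up one-hot.
        net = np.asarray(net, dtype=float)
        p = np.exp(net - net.max())   # subtract the max for numerical stability
        p /= p.sum()
        winner = rng.choice(len(p), p=p)
        return winner, p

Resampling each set in turn while the others stay fixed would then produce system states whose long-run frequencies match the posterior described above.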

20 Input and activation of units in PDP models
– General form of unit update: the activation a_i of unit i changes gradually as a function of its net input, net_i = Σ_j o_j·w_ij, plus any external input.
– Simple version used in the cube simulation.
– An activation function that links PDP models to Bayesian ideas: a_i = 1 / (1 + e^(-net_i)) (the logistic function).
– Or set the activation to 1 probabilistically, with p_i = 1 / (1 + e^(-net_i)).
[Diagram: unit i receives input from unit j via weight w_ij; max = 1, min = -.2, rest = 0; output shown as a_i or p_i]
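A minimal sketch of the activation schemes named on this slide, in Python. The logistic and probabilistic versions follow directly from the slide; the bounded iac_update function is an assumption based on the standard interactive-activation rule, since the slide's own "general form" equation is not in the transcript.

    import numpy as np

    rng = np.random.default_rng(0)

    def logistic_activation(net):
        # a_i = 1 / (1 + exp(-net_i)): the net input acts as log odds,
        # so the activation can be read as a probability.
        return 1.0 / (1.0 + np.exp(-np.asarray(net, dtype=float)))

    def probabilistic_activation(net):
        # Set a_i to 1 with probability p_i = logistic(net_i), else to 0.
        p = logistic_activation(net)
        return (rng.random(p.shape) < p).astype(float), p

    def iac_update(a, net, max_a=1.0, min_a=-0.2, rest=0.0, decay=0.1):
        # Gradual, bounded update: activation moves toward max for positive
        # net input and toward min for negative net input, with decay toward
        # rest. This specific rule is an assumption (the standard
        # interactive-activation form), not copied from the slide.
        a = np.asarray(a, dtype=float)
        net = np.asarray(net, dtype=float)
        delta = np.where(net > 0, net * (max_a - a), net * (a - min_a))
        return np.clip(a + delta - decay * (a - rest), min_a, max_a)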

