
1 Does the Brain Use Symbols or Distributed Representations? James L. McClelland Department of Psychology and Center for Mind, Brain, and Computation Stanford University

2 Parallel Distributed Processing Approach to Semantic Cognition: Representation is a pattern of activation distributed over neurons within and across brain areas. Bidirectional propagation of activation underlies the ability to bring these representations to mind from given inputs. The knowledge underlying propagation of activation is in the connections.

3 Development and Degeneration. Learned distributed representations in an appropriately structured distributed connectionist system underlie the development of conceptual knowledge. Gradual degradation of the representations constructed through this developmental process underlies the pattern of semantic disintegration seen in semantic dementia.

4 Differentiation, ‘Illusory Correlations’, and Overextension of Frequent Names in Development

5

6 The Rumelhart Model

7 The Training Data: All propositions true of items at the bottom level of the tree, e.g.: Robin can {grow, move, fly}

8 Target output for ‘robin can’ input

9 Forward Propagation of Activation: the sending units' activations $a_j$ are propagated through the weights to give each receiving unit's net input, $\mathrm{net}_i = \sum_j a_j w_{ij}$; the resulting activations $a_i$ feed forward in turn through the weights $w_{ki}$.

10 Back Propagation of Error ($\delta$): error-correcting learning. At the output layer: $\delta_k \propto (t_k - a_k)$ and $\Delta w_{ki} = \epsilon\,\delta_k a_i$. At the prior layer: $\delta_i \propto \sum_k \delta_k w_{ki}$ and $\Delta w_{ij} = \epsilon\,\delta_i a_j$.
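As a minimal sketch of the forward and backward passes on slides 9 and 10, the numpy code below runs one forward pass and one error-correcting weight update through a small two-layer network. The layer sizes, sigmoid activation, learning rate, and random patterns are illustrative assumptions, not the Rumelhart model's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden, n_out = 8, 4, 6
W_ij = rng.normal(scale=0.1, size=(n_hidden, n_in))   # input -> hidden weights
W_ki = rng.normal(scale=0.1, size=(n_out, n_hidden))  # hidden -> output weights
eps = 0.1                                             # learning rate (assumed)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

a_j = rng.integers(0, 2, size=n_in).astype(float)     # input pattern (made up)
t_k = rng.integers(0, 2, size=n_out).astype(float)    # target output pattern (made up)

# Forward propagation: net_i = sum_j a_j w_ij, then squash.
a_i = sigmoid(W_ij @ a_j)      # hidden-layer activations
a_k = sigmoid(W_ki @ a_i)      # output-layer activations

# Back propagation of error (deltas include the sigmoid derivative).
delta_k = (t_k - a_k) * a_k * (1.0 - a_k)             # output layer
delta_i = (W_ki.T @ delta_k) * a_i * (1.0 - a_i)      # prior layer

# Error-correcting weight changes: dW_ki = eps * delta_k * a_i, dW_ij = eps * delta_i * a_j.
W_ki += eps * np.outer(delta_k, a_i)
W_ij += eps * np.outer(delta_i, a_j)
```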

11

12

13 [Figure: the model's representations at three points of experience: Early, Later, and Later Still.]

14

15 Why Does the Model Show Progressive Differentiation? Learning in the model is sensitive to patterns of coherent covariation of properties. Coherent covariation: –The tendency for properties of objects to co-vary in clusters

16 Patterns of Coherent Covariation in the Training Set. Patterns of coherent covariation are reflected in the principal components of the property covariance matrix of the training patterns. The figure shows attribute loadings on the first three principal components: –1. Plants vs. animals –2. Birds vs. fish –3. Trees vs. flowers. Same color = features that covary within a component; different color = anti-covarying features.
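The computation described above can be illustrated in a few lines of numpy: build an item-by-property matrix, take the covariance of the properties, and inspect the leading eigenvectors. The tiny matrix below is invented purely for illustration; the actual training set is larger and richer.

```python
import numpy as np

# Toy item-by-property matrix (rows: items, columns: properties).
# Items and properties here are invented for illustration only.
# Columns: is_living, has_roots, has_leaves, can_fly, has_feathers, can_swim
P = np.array([
    [1, 1, 1, 0, 0, 0],   # oak
    [1, 1, 1, 0, 0, 0],   # rose
    [1, 0, 0, 1, 1, 0],   # robin
    [1, 0, 0, 0, 0, 1],   # salmon
], dtype=float)

# Covariance matrix of the properties across training items.
cov = np.cov(P, rowvar=False)

# Principal components: eigenvectors of the covariance matrix,
# ordered by decreasing eigenvalue (variance explained).
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
loadings = eigvecs[:, order]

# In the first component, the coherently covarying plant properties
# (has_roots, has_leaves) load with one sign and the animal properties
# with the opposite sign.
print(np.round(loadings[:, 0], 2))
```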

17 Illusory Correlations Rochel Gelman found that children think that all animals have feet. –Even animals that look like small furry balls and don’t seem to have any feet at all. A tendency to over-generalize properties typical of a superordinate category at an intermediate point in development is characteristic of the PDP network.

18 [Figure: a typical property that a particular object lacks (e.g., 'pine has leaves') vs. an infrequent, atypical property.]

19 A One-Class and a Two-Class Naïve Bayes Classifier Model

Property        One-class model   Two-class model, 1st class   Two-class model, 2nd class
Can Grow        1.0               1.0                          1.0
Is Living       1.0               1.0                          1.0
Has Roots       0.5               1.0                          0
Has Leaves      0.4375            0.875                        0
Has Branches    0.25              0.5                          0
Has Bark        0.25              0.5                          0
Has Petals      0.25              0.5                          0
Has Gills       0.25              0                            0.5
Has Scales      0.25              0                            0.5
Can Swim        0.25              0                            0.5
Can Fly         0.25              0                            0.5
Has Feathers    0.25              0                            0.5
Has Legs        0.25              0                            0.5
Has Skin        0.5               0                            1.0
Can See         0.5               0                            1.0
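Each entry in the table is simply a class-conditional property frequency. A minimal sketch, assuming a small invented item-by-property matrix and class labels, of how the one-class and two-class columns would be computed:

```python
import numpy as np

def class_property_probs(P, classes):
    """P(property | class): mean of each binary property column
    over the items assigned to that class."""
    return {c: P[classes == c].mean(axis=0) for c in np.unique(classes)}

# Invented toy data: 4 items x 4 properties
# (can_grow, has_roots, has_gills, has_skin), for illustration only.
P = np.array([
    [1, 1, 0, 0],   # pine
    [1, 1, 0, 0],   # rose
    [1, 0, 1, 1],   # salmon
    [1, 0, 0, 1],   # robin
], dtype=float)

one_class = np.array(["living"] * 4)
two_class = np.array(["plant", "plant", "animal", "animal"])

print(class_property_probs(P, one_class))   # e.g. has_roots -> 0.5
print(class_property_probs(P, two_class))   # e.g. has_roots -> 1.0 (plant), 0.0 (animal)
```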

20 Accounting for the network's representations with classes at different levels of granularity. [Figure: regression beta weight as a function of epochs of training, for predictors at different granularities: Living Thing, Plant, Tree, Pine, plus a bias term.]

21 Overgeneralization of Frequent Names to Similar Objects. [Figure: naming responses for "dog", "goat", and "tree".]

22 Why Does Overgeneralization of Frequent Names Increase and Then Decrease? In the simulation shown, dogs are experienced 10 times as often as any other animal, and there are 4 other mammals, 8 other animals, and 10 plants. In a one-class model, goat is a living thing: –P(name is 'Dog' | living thing) = 10/32 ≈ .3 In a two-class model, goat is an animal: –P(name is 'Dog' | animal) = 10/22 ≈ .5 In a five-class model, goat is a mammal: –P(name is 'Dog' | mammal) = 10/15 ≈ .67 In a 23-class model, goat is in a category by itself: –P(name is 'Dog' | goat) = 0

23

24 Sensitivity to Coherence Requires Convergence

25 Inference and Generalization in the PDP Model. A semantic representation for a new item can be derived by error propagation from given information, using knowledge already stored in the weights. Crucially: –The similarity structure, and hence the pattern of generalization, depends on the knowledge already stored in the weights.

26 Start with a neutral representation on the representation units. Use backprop to adjust the representation to minimize the error.

27 The result is a representation similar to that of the average bird…

28 Use the representation to infer what this new thing can do.
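A minimal sketch of the backprop-to-representation procedure described on slides 25–28: the weights are held fixed and only the representation vector is adjusted to reduce the error on the output units whose values we were told. The single weight matrix, its random stand-in values, the unit indices, and the learning rate are illustrative assumptions; in the real model the weights come from a trained multi-layer network.

```python
import numpy as np

rng = np.random.default_rng(1)
n_rep, n_out = 8, 12
W = rng.normal(scale=0.5, size=(n_out, n_rep))   # representation -> output (stand-in weights)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Suppose we are told only that the new item "can fly" and "has feathers":
known_units = np.array([3, 7])          # hypothetical indices of those output properties
known_targets = np.array([1.0, 1.0])

rep = np.zeros(n_rep)                   # start from a neutral representation
eps = 0.5
for _ in range(200):
    out = sigmoid(W @ rep)
    err = np.zeros(n_out)
    err[known_units] = known_targets - out[known_units]   # error only where targets are given
    delta = err * out * (1.0 - out)
    rep += eps * (W.T @ delta)          # adjust the representation, not the weights

# Read out the remaining outputs: what else the new thing is likely to do
# or have, given the knowledge stored in the (fixed) weights.
print(np.round(sigmoid(W @ rep), 2))
```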

29 Differential Importance (Macario, 1991). 3- to 4-year-old children see a puppet and are told he likes to eat, or play with, a certain object (e.g., the top object at right). –The children must then choose another object that will "be the same kind of thing to eat" or "the same kind of thing to play with". –In the first case they tend to choose the object with the same color. –In the second case they tend to choose the object with the same shape.

30 Adjustments to Training Environment Among the plants: –All trees are large –All flowers are small –Either can be bright or dull Among the animals: –All birds are bright –All fish are dull –Either can be small or large In other words: –Size covaries with properties that differentiate different types of plants –Brightness covaries with properties that differentiate different types of animals

31

32

33 Similarities of Obtained Representations: size is relevant for plants; brightness is relevant for animals.

34 Development and Degeneration Sensitivity to coherent covariation in an appropriately structured Parallel Distributed Processing system underlies the development of conceptual knowledge. Gradual degradation of the representations constructed through this developmental process underlies the pattern of semantic disintegration seen in semantic dementia.

35 Disintegration of Conceptual Knowledge in Semantic Dementia –Progressive loss of specific knowledge of concepts, including their names, with preservation of general information –Overgeneralization of frequent names –Illusory correlations

36 Picture Naming and Drawing in Semantic Dementia

37

38 Proposed Architecture for the Organization of Semantic Memory. [Diagram: areas for color, form, motion, action, valence, and name, together with the temporal pole and the medial temporal lobe.]

39 Rogers et al. (2005) model of semantic dementia. [Diagram: name, assoc, and function layers, the temporal pole, and vision.]

40 Errors in Naming as a Function of Severity. [Figure: patient data plotted against severity of dementia and simulation results plotted against fraction of neurons destroyed; error types: omissions, within-category errors, and superordinate errors.]

41 Simulation of Delayed Copying. Visual input is presented, then removed. After several time steps, the pattern is compared to the pattern that was presented initially. [Diagram: name, assoc, and function layers, the temporal pole, and vision.]

42 [Figure: omissions by feature type and intrusions by feature type; patient drawings (IF's 'camel', DC's 'swan') alongside simulation results.]

43 Conclusion. Distributed representations gradually differentiate in ways that allow them to capture many phenomena in conceptual development. Their behavior is approximated by a blend of Naïve Bayes classifiers across several levels of granularity, with the blending weights shifting toward finer-grained categories as learning progresses. Effects of damage are approximated by a reversal of this tendency: degraded representations retain the coarse-grained knowledge but lose the finer-grained information. We are currently extending the models to address the sharing of knowledge across structurally related domains; I'll be glad to discuss this idea in response to questions.
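A rough sketch of the "blend of classifiers" idea in the conclusion: predictions at several granularities are mixed, with the mixture weights shifting toward finer-grained classes over learning. The per-level probabilities below are taken from the one-class/two-class table pattern (e.g., "has leaves" for pine); the three-level setup and the blending weights themselves are made up for illustration.

```python
import numpy as np

# P(property | class containing the item) at three granularities,
# e.g. "has leaves" for the item "pine": living thing, plant, pine.
p_by_level = np.array([0.44, 0.88, 0.0])

early = np.array([0.8, 0.15, 0.05])   # early in learning: coarse classes dominate
late  = np.array([0.05, 0.15, 0.8])   # late in learning: fine classes dominate

for name, w in [("early", early), ("late", late)]:
    print(name, np.dot(w, p_by_level))  # blended prediction for the property
```

Early in learning the blended prediction is high (the illusory correlation that pine has leaves); late in learning it falls toward the item-specific value of zero.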

