Awakening from the Cartesian Dream: The PDP Approach to Understanding the Mind and Brain Jay McClelland Stanford University February 7, 2013.



Descartes’ Legacy
A mechanistic approach to sensation and action; divine inspiration creates mind. This leads to four dissociations:
–Mind / Brain
–Higher Cognitive Functions / Sensory-motor systems
–Human / Animal
–Descriptive / Mechanistic

Early Computational Models of Human Cognition ( )
The computer contributes to the overthrow of behaviorism.
Computer simulation models emphasize strictly sequential operations, using flow charts.
Simon announces that computers can ‘think’.
Symbol-processing languages are introduced, allowing some success at theorem proving, problem solving, etc.
Minsky and Papert kill off perceptrons.
Cognitive psychologists distinguish between algorithm and hardware.
Neisser deems physiology to be of only ‘peripheral interest’.
Psychologists investigate mental processes as sequences of discrete stages.

Ubiquity of the Constraint Satisfaction Problem
In sentence processing:
–I saw the grand canyon flying to New York
–I saw the sheep grazing in the field
In comprehension:
–Margie was sitting on the front steps when she heard the familiar jingle of the “Good Humor” truck. She remembered her birthday money and ran into the house.
In reaching, grasping, typing…

Graded and variable nature of neuronal responses

Lateral Inhibition in Eye of Limulus (Horseshoe Crab)

The Interactive Activation Model

Input and activation of units in PDP models
General form of unit update: net_i = Σ_j w_ij a_j (w_ij is the weight on the input to unit i from unit j).
Simple version used in the cube simulation:
–if net_i > 0: Δa_i = net_i (max – a_i) – decay (a_i – rest)
–otherwise: Δa_i = net_i (a_i – min) – decay (a_i – rest)
with max = 1, min = –.2, rest = 0.
An activation function that links PDP models to Bayesian ideas: a_i = 1 / (1 + e^(–net_i)).
Or set the activation to 1 probabilistically, with probability p_i given by the same logistic function.
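The update rules above can be sketched in a few lines of code. This is an illustrative toy, not the actual simulation code; the function names and the decay rate are assumptions (the max, min, and rest values are the ones given on the slide).

```python
import math

# Parameters from the slide: max = 1, min = -.2, rest = 0.
# The decay rate is an assumed illustrative value.
MAX_A, MIN_A, REST, DECAY = 1.0, -0.2, 0.0, 0.1

def net_input(weights, activations, ext=0.0):
    """General form: net_i = sum_j w_ij a_j, plus any external (stimulus) input."""
    return ext + sum(w * a for w, a in zip(weights, activations))

def update_activation(a, net, dt=0.1):
    """Simple interactive-activation update: positive net drives the unit
    toward max, negative net toward min, with decay back to rest."""
    if net > 0:
        da = net * (MAX_A - a) - DECAY * (a - REST)
    else:
        da = net * (a - MIN_A) - DECAY * (a - REST)
    return a + dt * da

def logistic(net):
    """Activation function linking PDP units to Bayesian ideas; it can also be
    read as the probability p_i of setting the unit's activation to 1."""
    return 1.0 / (1.0 + math.exp(-net))
```

Iterating `update_activation` with a fixed positive net input settles the unit near (but below) max; a fixed negative net input settles it near min.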

The Cube Network
Positive weights have value +1.
Negative weights have value –1.5.
The stimulus provides input of .5 to all units.
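A settling run with the slide's parameters can be sketched as below. The connectivity here is a simplifying assumption: full +1 excitation within each 8-unit interpretation and a –1.5 link between the two rival units for each vertex. The actual network's positive connections follow the cube's edges, so this toy illustrates the update dynamics and parameter values, not the exact settling behavior.

```python
# Toy settling run with the weights and input from the slide (+1 positive,
# -1.5 negative, stimulus input .5). Connectivity is a simplifying
# assumption, as noted above.
N = 8                                    # units per interpretation
MAX_A, MIN_A, REST, DECAY = 1.0, -0.2, 0.0, 0.1
EXT = 0.5                                # stimulus input to every unit

def step(a, dt=0.05):
    new = []
    for i in range(2 * N):
        pool = range(N) if i < N else range(N, 2 * N)
        # +1 from the other units of the same interpretation
        # (only positive activations are transmitted)
        net = EXT + sum(max(a[j], 0.0) for j in pool if j != i)
        # -1.5 from the rival unit representing the same vertex
        net -= 1.5 * max(a[(i + N) % (2 * N)], 0.0)
        if net > 0:
            da = net * (MAX_A - a[i]) - DECAY * (a[i] - REST)
        else:
            da = net * (a[i] - MIN_A) - DECAY * (a[i] - REST)
        new.append(a[i] + dt * da)
    return new

activations = [REST] * (2 * N)           # start all units at rest
for _ in range(200):
    activations = step(activations)
```

With this simplified all-to-all connectivity the stimulus drives both pools up together; the rivalry between the two cube interpretations depends on the network's actual connection pattern.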

Cognitive Neuropsychology (1970’s)
Deep and surface dyslexia:
–Deep dyslexics can’t read non-words (e.g. VINT) and make semantic errors in reading words (PEACH -> ‘apricot’).
–Surface dyslexics can read non-words and regular words (e.g. MINT) but often regularize exceptions (PINT).
This work leads to ‘box-and-arrow’ models, reminiscent of flow charts.

Graceful Degradation in Neuropsychology
Patient deficits are graded in severity.
Error patterns have systematic characteristics:
–Deep dyslexics produce both visual and semantic errors: symphony -> sympathy, symphony -> orchestra.
–Errors in surface dyslexia (and normal reading) depend on a word’s frequency and on a word’s neighbors.
Effects of lesions to units and connections in distributed PDP models nicely capture both of these features of patient deficits.

Core Principles of Parallel Distributed Processing
Processing occurs via interactions among neuron-like processing units via weighted connections.
A representation is a pattern of activation.
The knowledge is in the connections.
Learning occurs through gradual connection adjustment, driven by experience.
Learning affects both representation and processing.
[Diagram: letter units H I N T mapped to phoneme units /h/ /i/ /n/ /t/]

Learning in a Feedforward PDP Network
Propagate activation ‘forward’, producing a_i (and a_j for hidden units) using the logistic activation function.
Calculate error at the output layer: δ_i = f′(net_i)(t_i – a_i).
Propagate the error backward to calculate error information at the ‘hidden’ layer: δ_j = f′(net_j) Σ_i w_ij δ_i.
Change the weights: Δw_ij = ε δ_i a_j.
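The forward and backward passes described on this slide can be sketched for a one-hidden-layer network. This is an illustrative toy, not the lecture's model: the layer sizes, learning rate, and the `train_step` name are assumptions, and for the logistic function f′(net) = a(1 – a).

```python
import math
import random

random.seed(1)

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_step(x, t, W1, W2, lr=0.5):
    """One forward/backward pass with logistic units; modifies W1, W2 in place."""
    # Forward pass: a_j at the hidden layer, then a_i at the output layer.
    h = [logistic(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    o = [logistic(sum(w * hj for w, hj in zip(row, h))) for row in W2]
    # Output error: delta_i = f'(net_i)(t_i - a_i), with f'(net) = a(1 - a).
    d_out = [oi * (1 - oi) * (ti - oi) for oi, ti in zip(o, t)]
    # Hidden error: delta_j = f'(net_j) * sum_i w_ij delta_i.
    d_hid = [h[j] * (1 - h[j]) * sum(W2[i][j] * d_out[i] for i in range(len(o)))
             for j in range(len(h))]
    # Weight change: delta w_ij = lr * delta_i * a_j.
    for i in range(len(o)):
        for j in range(len(h)):
            W2[i][j] += lr * d_out[i] * h[j]
    for j in range(len(h)):
        for k in range(len(x)):
            W1[j][k] += lr * d_hid[j] * x[k]
    return o
```

Repeatedly applying `train_step` to a single input-target pair drives the network's output toward the target.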

Additional Features of the PDP Framework
Processing is in general thought to be continuous, bidirectional, and distributed within and across components of the cognitive system:
–Each part contributes to the processing that takes place in other parts.
–The outcome of processing anywhere can depend on processing everywhere.
Processing can be very robust for highly typical and frequent items in well-practiced tasks, such that considerable degradation can be tolerated before there is an apparent deficit.

Implications of this approach
Knowledge that is otherwise represented in explicit form is inherently implicit in PDP:
–Rules
–Propositions
–Lexical entries…
None of these things are represented as such in a PDP system.
Knowledge that others have claimed must be innate and pre-specified domain-by-domain often turns out to be learnable within the PDP approach.
Thus the approach provides a new way of looking at many aspects of knowledge-dependent cognition and development.
While the approach allows for structure (e.g. in the organization and interconnection of processing modules), processing is generally far more distributed, and causal attribution becomes more complex.

In short…
Models that link human cognition to the underlying neural mechanisms of the brain simultaneously provide alternatives to other ways of understanding processing, learning, and representation at the cognitive level.

The PDP Enterprise…
Attempts to explain human cognition as an emergent consequence of neural processes (global outcomes, local processes).
Forms a natural bridge between cognitive science on the one hand and neuroscience on the other.
Is an ongoing process of exploration.
Depends critically on computational modeling and mathematical analysis.