PDP: Motivation, basic approach. Cognitive psychology or “How the Mind Works”


Information processing: perception/sensation and action, linked by transformations over mental representations.

Key questions: What are the mental representations? What are the transformations and how do they work? Where do these come from?

Answers circa 1979:
- Mental representations are data structures (symbols bound together in relational structures).
- "Transformation" processes are rules, like the instructions in a computer program.
- The rules and starting representations are largely innate.
- Learning is a process of building up new structured representations from primitives and deriving (and storing) new rules.

Why were (are?) these ideas so appealing?

E.g., modular, serial processing in word recognition… Their application is highly intuitive, logical, maybe even obvious!

They explain generalization/generativity, central to human cognition! E.g.: Today I will frump the car; yesterday I ______________ the car. E.g.: Colorless green ideas sleep furiously!

Can build mechanistic, implemented models of behavior…

Explains modular cognitive impairments!
- Pure alexia: word representations gone!
- Prosopagnosia: face reps gone!
- Category-specific impairment: animals gone!
- Broca's aphasia: phonological word forms gone!
And so on…

So why isn’t this a class on symbolic cognitive modeling?

W O R K

E.g.: Today I will meep the car; yesterday I ______________ the car.

Learning and development…

Cognitive impairments are not so modular…
- Pure alexia: can't see high spatial frequencies.
- Prosopagnosia: abnormal visual word recognition.
- Category-specific impairment: can still name half the animals.
- Broca's aphasia: can still produce single concrete nouns, but can't do grammar.
And so on… But the symbolic approach also had a lot of other problems…

Some issues with modular, serial, symbolic approaches:
- Constraint satisfaction, context sensitivity
- Quasi-regularity
- Learning and developmental change
- Graceful degradation

Rumelhart: Brains are different from serial digital computers. In the brain, neurons are like little information processors. They are slow and intrinsically noisy… but there are many of them, and they operate in parallel, not in serial. It turns out that noisy, parallel computing systems are naturally suited to some of the kinds of behavioral tasks that challenged symbolic theories of the time.

In other words… Rather than thinking of the mind as some kind of unconstrained “universal computer,” maybe we should pay attention to the kinds of computations a noisy, parallel, brain-like system can naturally conduct. Paying attention to the implementation might offer some clues about / constraints on theories about how the mind works. Added bonus: Such theories might offer a bridge between cognition and neuroscience!

(Figure: Golgi-stained neuron, labeling the cell body, dendrites (receive signals), axon (transmits signals), and axon terminal.)

(Figure: p(firing a spike) as a function of membrane potential at the axon hillock, from hyperpolarized to depolarized.)

Input: depends on the activation of sending neurons and the efficacy of synapses.
Output: a train of action potentials at a particular rate.
Weight: the effect on the downstream neuron.

Six elements of connectionist models:
1. A set of units
2. A weight matrix
3. An input function
4. A transfer function
5. A model environment
6. A learning rule

Six elements of connectionist models:
1. A set of units
Each unit is like a population of cells with a similar receptive field. Think of all units in a model as a single vector, with each unit corresponding to one element of the vector. At any point in time, each unit has an activation state, analogous to the mean firing activity of the population of neurons. These activation states are stored in an activation vector, with each element corresponding to one unit.

(Figure: two network diagrams with input, hidden, and output units, the second adding a bias unit, each shown alongside its activation vector.)
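As a minimal sketch of this idea (the layer sizes and values here are illustrative, not from the slides), all of a model's activation states can live in one NumPy vector, with input units "clamped" to an external pattern:

```python
import numpy as np

# Activation states for a tiny network: 2 input, 2 hidden, 1 output unit,
# all stored in a single activation vector (one element per unit).
activations = np.zeros(5)

# Clamp the input units to an external pattern; the remaining
# elements (hidden and output units) will be computed by the model.
activations[0:2] = [1.0, 0.0]
```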

Six elements of connectionist models:
2. A weight matrix
Each unit sends and receives weighted connections to/from some subset of the other units. These weights are analogous to synapses: they are the means by which one unit transmits information about its activation state to another unit. Weights are stored in a weight matrix.

(Figure: the network, with bias unit, alongside its weight matrix, with one dimension indexing sending units and the other indexing receiving units.)

Six elements of connectionist models:
3. An input function
For any given receiving unit, there needs to be some way of combining the weights and sending activations to determine the unit's net input. This is almost always the dot product (i.e., the weighted sum) of the sending activations and the weights.

(Figure: the network with activation vector [1 0 ?? ?? ?? 1]: input and bias units clamped, hidden and output activations still to be computed.)
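Concretely (a sketch with made-up weight values, not the weights from the slides), the net input to one receiving unit is just the dot product of the sending activations and the corresponding weights:

```python
import numpy as np

sending = np.array([1.0, 0.0, 1.0])   # activations of sending units (last one is a bias fixed at 1)
weights = np.array([0.5, -0.3, 0.2])  # weights on this receiving unit's incoming connections

# Net input: the weighted sum (dot product) of sending activations and weights,
# i.e. 1.0*0.5 + 0.0*(-0.3) + 1.0*0.2 = 0.7
net_input = np.dot(sending, weights)
```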

Six elements of connectionist models:
4. A transfer function
The transfer function converts a unit's net input into its activation state, for example a logistic (sigmoid) function that squashes the net input into a bounded range.

(Figure: the same network with activation vector [1 0 ?? ?? ?? 1], input and bias units clamped.)
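A minimal sketch of one common transfer function, the logistic (sigmoid), which maps any net input into an activation between 0 and 1:

```python
import numpy as np

def logistic(net_input):
    """Logistic transfer function: squashes net input into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-net_input))

# A net input of 0 gives an activation of exactly 0.5;
# large positive / negative net inputs saturate toward 1 / 0.
activation = logistic(0.0)
```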

Six elements of connectionist models:
5. A model environment
All these models do is compute activation states over units, given the preceding elements and some partial input. The model environment specifies how events in the world are encoded in unit activation states, typically across a subset of units. It consists of vectors that describe the input activations corresponding to different events, and sometimes the "target" activations that the network should generate for each input.

The X-OR function:

In1  In2 | Out
 0    0  |  0
 0    1  |  1
 1    0  |  1
 1    1  |  0

(Figure: the network, with input, hidden, output, and bias units, and its weight matrix over Input1, Input2, Hidden1, Hidden2, Output, and bias; the numeric entries are not recoverable.)
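As a sketch (the encoding is illustrative, not taken from the slides), the X-OR environment can be written as a list of (input pattern, target) pairs: each pair gives the activations to clamp on the input units and the activation the output unit should produce:

```python
import numpy as np

# The X-OR environment: four events, each an (input pattern, target) pair.
environment = [
    (np.array([0.0, 0.0]), 0.0),
    (np.array([0.0, 1.0]), 1.0),
    (np.array([1.0, 0.0]), 1.0),
    (np.array([1.0, 1.0]), 0.0),
]
```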

Note that the model environment is always theoretically important! It amounts to a theoretical statement about the nature of the information available to the system from perception and action or prior cognitive processing. Many models sink or swim on the basis of their assumptions about the nature of inputs / outputs.

Six elements of connectionist models:
6. A learning rule
Only specified for models that learn (obviously). Specifies how the values stored in the weight matrix should change as the network processes patterns. Many different varieties that we will see:
- Hebbian
- Error-correcting (e.g., backpropagation)
- Competitive / self-organizing
- Reinforcement-based

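A minimal sketch of the simplest such rule, plain Hebbian learning (the learning rate here is an arbitrary illustrative choice): each weight grows in proportion to the product of its pre- and postsynaptic activations.

```python
import numpy as np

def hebbian_update(weights, pre, post, lr=0.1):
    """Hebbian rule: delta_w[i, j] = lr * post[i] * pre[j]."""
    return weights + lr * np.outer(post, pre)

# Two sending units, one receiving unit; weights start at zero.
W = np.zeros((1, 2))
W = hebbian_update(W, pre=np.array([1.0, 0.0]), post=np.array([1.0]))
# Only the weight from the active sending unit changes.
```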

Central challenge: Given just these elements, can we build little systems—models—that help us to understand human cognitive abilities, and their neural underpinnings?

One early example…