
1 COGNITIVE NEUROSCIENCE

2 Note
Please read the book to review major brain structures and their functions.
Please read the book to review brain imaging techniques.
See also the additional slides available on the class website.

3 Cognitive Neuroscience
The study of the relation between cognitive processes and brain activities.
Potential to measure some "hidden" processes that are part of cognitive theories (e.g., memory activation, attention, "insight").
Measures when and where activity is happening.
Different techniques have different strengths: there is a tradeoff between spatial and temporal resolution.

4 Techniques for Studying Brain Functioning
Single-unit recordings (e.g., Hubel and Wiesel, 1962, 1979)
Event-related potentials (ERPs)
Positron emission tomography (PET)
Magnetic resonance imaging (MRI and fMRI)
Magnetoencephalography (MEG)
Transcranial magnetic stimulation (TMS)

5 The spatial and temporal ranges of some techniques used to study brain functioning.

6 Single Cell Recording (usually in animal studies)
Measures neural activity with probes, e.g., the research by Hubel and Wiesel:

7 Hubel and Wiesel (1962)
Studied the LGN and primary visual cortex in the cat.
Found cells with different receptive fields, i.e., different ways of responding to light in certain areas:
LGN on cell (shown on left)
LGN off cell
Directional cell
Figure: action potential frequency of a cell associated with a specific receptive field in a monkey's field of vision; the frequency increases as a light stimulus is brought closer to the receptive field.

8 COMPUTATIONAL COGNITIVE SCIENCE

9 Computer Models
Artificial intelligence: constructing computer systems that produce intelligent outcomes.
Computational modeling: programming computers to model or mimic some aspects of human cognitive functioning. Modeling natural intelligence through simulations of behavior.

10 Why do we need computational models?
Provides the precision needed to specify complex theories: makes vague verbal terms specific.
Provides explanations.
Yields quantitative predictions: just as meteorologists use computer models to predict tomorrow's weather, the goal of modeling human behavior is to predict performance in novel settings.

11 Neural Networks
An alternative to traditional information-processing models.
Also known as PDP (parallel distributed processing) or connectionist models.
Neural networks are networks of simple processors that operate simultaneously.
Some biological plausibility.

12 Idealized neurons (units)
Figure: inputs feed into a summation processor (Σ), which produces an output.
An abstract, simplified description of a neuron.

13 Different ways to represent information with neural networks: localist representation
Figure: six units (units 1-6) with the activation pattern (0 = off, 1 = on) for each of three concepts; each concept turns on exactly one unit.
Each unit represents just one item: "grandmother" cells.

14 Coarse Coding / Distributed Representations
Figure: the same six units, but each concept's activation pattern (0 = off, 1 = on) now spans several units.
Each unit is involved in the representation of multiple items; a sketch contrasting the two schemes follows below.
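To make the contrast concrete, here is a minimal Python sketch of both schemes. The specific six-unit bit patterns are illustrative assumptions (the slide's actual activation table did not survive extraction); only the localist/distributed distinction comes from the slides.

# Two ways to encode three concepts over six binary units.
# The exact patterns below are illustrative, not from the slide.

# Localist: one dedicated "grandmother cell" per concept.
localist = {
    "concept1": [1, 0, 0, 0, 0, 0],
    "concept2": [0, 1, 0, 0, 0, 0],
    "concept3": [0, 0, 1, 0, 0, 0],
}

# Distributed (coarse-coded): each unit helps represent several concepts.
distributed = {
    "concept1": [1, 1, 0, 0, 1, 0],
    "concept2": [0, 1, 1, 0, 0, 1],
    "concept3": [1, 0, 1, 1, 0, 0],
}

# Localist capacity grows linearly (6 units -> 6 concepts), while
# distributed capacity grows exponentially (6 units -> 2**6 = 64 patterns).
print(2 ** 6)  # 64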

15 Advantage of Distributed Representations
Efficiency: solves the combinatorial explosion problem. With n binary units, 2^n different representations are possible (e.g., consider how many English words can be formed from combinations of just 26 letters).
Damage resistance: even if some units do not work, information is still preserved. Because information is distributed across the network, performance degrades gradually as a function of damage (aka robustness, fault tolerance, graceful degradation).

16 Suppose we lost unit 6
Figure: the distributed activation patterns from the previous slide (0 = off, 1 = on), with unit 6 removed.
Can the three concepts still be discriminated? See the check below.
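A quick Python check of the question, reusing the illustrative distributed patterns from the sketch above (assumed values, not the slide's actual ones):

# Same illustrative distributed patterns as in the earlier sketch.
distributed = {
    "concept1": [1, 1, 0, 0, 1, 0],
    "concept2": [0, 1, 1, 0, 0, 1],
    "concept3": [1, 0, 1, 1, 0, 0],
}

# Lesion unit 6 (index 5), keeping only the surviving five activations.
damaged = {name: tuple(p[:5]) for name, p in distributed.items()}

# The concepts remain discriminable if no two damaged patterns collide.
print(len(set(damaged.values())) == len(damaged))  # True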

17 An example calculation for a single neuron
Diagram showing how the inputs from a number of units are combined to determine the overall input to unit i. Unit i has a threshold of 1: if its net input exceeds 1, it responds with +1; if the net input is less than 1, it responds with -1. A worked Python version follows below.
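A minimal Python version of that calculation. The threshold rule (+1 above 1, -1 below) is the slide's; the particular incoming activations and weights are assumed for illustration.

# A single threshold unit: net input is the weighted sum of the
# incoming activations; output is +1 if the net input exceeds the
# threshold of 1, otherwise -1.

def unit_output(activations, weights, threshold=1.0):
    net_input = sum(a * w for a, w in zip(activations, weights))
    return 1 if net_input > threshold else -1

activations = [1, -1, 1]        # outputs of the sending units (assumed)
weights     = [0.8, -0.5, 0.9]  # connection weights into unit i (assumed)

print(unit_output(activations, weights))  # net = 0.8 + 0.5 + 0.9 = 2.2 -> +1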

18 Neural-Network Models
The simplest models include three layers of units: (1) The input layer is a set of units that receives stimulation from the external environment. (2) The units in the input layer are connected to units in a hidden layer, so named because these units have no direct contact with the environment. (3) The units in the hidden layer in turn are connected to those in the output layer. Each connection from an input unit either excites or inhibits a hidden unit. Furthermore, each connection has a weight, a measure of the strength of its influence on the receiving unit. Some networks include feedback loops, for example, with connections from hidden units to input units.
Here is a crucial point: the pattern of weights in the entire network serves to represent associations between input and output. Neural networks not only use parallel processing, they rely on distributed parallel processing, in which a representation is a pattern of weights, not a single weight, node, or connection. (p. 42)
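A sketch of that three-layer forward pass in Python. The layer sizes, random weights, and sigmoid activation are assumptions; the passage fixes only the input -> hidden -> output structure and the role of signed, weighted connections.

import numpy as np

rng = np.random.default_rng(0)
n_input, n_hidden, n_output = 4, 3, 2

# Each weight's sign determines whether the connection excites (+) or
# inhibits (-) the receiving unit; its magnitude is the strength.
W_ih = rng.normal(size=(n_input, n_hidden))   # input -> hidden
W_ho = rng.normal(size=(n_hidden, n_output))  # hidden -> output

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([1.0, 0.0, 1.0, 1.0])  # stimulation from the environment
hidden = sigmoid(x @ W_ih)          # hidden units: no direct world contact
output = sigmoid(hidden @ W_ho)     # the network's response
print(output)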

19 Multi-layered Networks
Activation flows from a layer of input units through a set of hidden units to output units.
Weights determine how input patterns are mapped to output patterns.
The network can learn to associate output patterns with input patterns by adjusting its weights.
Hidden units tend to develop internal representations of the input-output associations.
Backpropagation is a common weight-adjustment algorithm; a sketch follows below.
Figure: input units -> hidden units -> output units.
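A minimal backpropagation sketch, assuming a tiny 2-4-1 sigmoid network learning XOR; the task, layer sizes, and learning rate are illustrative choices, not from the slide.

import numpy as np

rng = np.random.default_rng(1)

# XOR: a mapping a single-layer network cannot learn, but a network
# with hidden units can.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)  # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)  # hidden -> output
lr = 0.5

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for _ in range(20000):
    # Forward pass: activation flows input -> hidden -> output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the output error back through the
    # layers and adjust each weight downhill on the squared error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # should approach [[0], [1], [1], [0]]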

20 Example of Learning Networks

21 Another example: NETtalk
A connectionist network that learns to pronounce English words, i.e., learns spelling-to-sound relationships.
Architecture: 7 groups of 29 input units (seven consecutive letters of text), 80 hidden units, and 26 output units; a teacher signal supplies the target phoneme (e.g., /k/) for the output layer.
The net was presented with seven consecutive letters (e.g., "a_cat_") simultaneously as input and learned to pronounce the phoneme associated with the central letter ("c" in this example); a sketch of this input encoding follows below.
NETtalk achieved a 90% success rate during training. When tested on a set of novel inputs it had not seen during training, its performance remained steady at 80-87%. (after Hinton, 1989)
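A sketch of that input encoding in Python. The 7-letter window and the 7 x 29 = 203 input units come from the slide; the exact 29-symbol alphabet and the padded window "_a_cat_" are assumptions.

# One-hot encode a 7-character window over a 29-symbol alphabet,
# giving 7 groups of 29 input units (203 total), as in NETtalk.
ALPHABET = "abcdefghijklmnopqrstuvwxyz_.,"   # 29 symbols (assumed set)
INDEX = {ch: i for i, ch in enumerate(ALPHABET)}

def encode_window(window):
    assert len(window) == 7
    units = [0] * (7 * len(ALPHABET))
    for pos, ch in enumerate(window):
        units[pos * len(ALPHABET) + INDEX[ch]] = 1
    return units

x = encode_window("_a_cat_")  # central letter "c" is the target (assumed padding)
print(sum(x), len(x))         # 7 active units out of 203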

22 Other demos
Hopfield network
Backpropagation algorithm
Competitive learning
Various networks
Optical character recognition
Brain-wave simulator

23 Neural Network Models
Inspired by real neurons and brain organization, but highly idealized.
Can spontaneously generalize beyond information explicitly given to the network.
Can retrieve information even when the network is damaged (graceful degradation).
Networks can be taught: learning is possible by changing the weighted connections between nodes.

