Introduction to Connectionism Jaap Murre Universiteit van Amsterdam and Universiteit Utrecht

Neural networks
Based on an abstract view of the neuron
Artificial neurons are connected to form large networks
The connections determine the function of the network
Connections can often be formed by learning and do not need to be ‘programmed’

Overview
Biological and connectionist neurons
– McCulloch and Pitts
Hebbian learning
– The Hebb rule
– Competitive learning
– Willshaw networks
Illustration of key characteristics
– Pattern completion
– Graceful degradation

Many neurons have elaborate arborizations

The axon is covered with myelin sheaths for faster conductivity

With single-cell recordings, action potentials (spikes) can be recorded

McCulloch-Pitts (1943) Neuron
1. The activity of the neuron is an “all-or-none” process
2. A certain fixed number of synapses must be excited within the period of latent addition in order to excite a neuron at any time, and this number is independent of previous activity and position of the neuron

McCulloch-Pitts (1943) Neuron
3. The only significant delay within the nervous system is synaptic delay
4. The activity of any inhibitory synapse absolutely prevents excitation of the neuron at that time
5. The structure of the net does not change with time
From: A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5, 115–133.

Neural networks abstract from the details of real neurons
Conductivity delays are neglected
An output signal is either discrete (e.g., 0 or 1) or it is a real-valued number (e.g., between 0 and 1)
Net input is calculated as the weighted sum of the input signals
Net input is transformed into an output signal via a simple function (e.g., a threshold function), as sketched below
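A minimal sketch of such an abstract unit in Python (the weights, threshold, and the AND example are illustrative assumptions, not values from the slides):

```python
# Sketch of an abstract threshold unit: net input is the weighted sum
# of the inputs; the output is 1 if the net input reaches the
# threshold, otherwise 0.

def threshold_neuron(inputs, weights, threshold):
    net = sum(x * w for x, w in zip(inputs, weights))
    return 1 if net >= threshold else 0

# Illustrative example: with these weights and this threshold the unit
# computes a logical AND of its two binary inputs.
print(threshold_neuron([1, 1], [1, 1], threshold=2))  # -> 1
print(threshold_neuron([1, 0], [1, 1], threshold=2))  # -> 0
```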

Artificial ‘neuron’

How to ‘program’ neural networks? The learning problem
Selfridge (1958): evolutionary or ‘shake-and-check’ (hill climbing)
Other approaches
– Unsupervised learning or regularity detection
– Supervised learning
– Reinforcement learning has ‘some’ supervision

Neural networks and David Marr’s model (1969)
Marr’s ideas are based on the learning rule by Donald Hebb (1949)
Hebb-Marr networks can be auto-associative or hetero-associative
The work by Marr and Hebb has been extremely influential in neural network theory

Hebb (1949) “When an axon of cell A is near enough to excite a cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased” From: The organization of behavior.

Hebb (1949) also introduces the word connectionism: “The theory is evidently a form of connectionism, one of the switchboard variety, though it does not deal in direct connections between afferent and efferent pathways: not an ‘S-R’ psychology, if R means a muscular response. The connections serve rather to establish autonomous central activities, which then are the basis of further learning” (p. xix)

William James (1890) Let us assume as the basis of all our subsequent reasoning this law: When two elementary brain-processes have been active together or in immediate succession, one of them, on re-occurring, tends to propagate its excitement into the other. From: Psychology (Briefer Course).

The Hebb rule is found with long-term potentiation (LTP) in the hippocampus

With Hebbian learning, two learning methods are possible
With unsupervised learning there is no teacher: the network tries to discern regularities in the input patterns
With supervised learning an input is associated with an output
– If the input and output are the same, we speak of auto-associative learning
– If they are different, it is called hetero-associative learning
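As a concrete illustration, here is a minimal sketch of the Hebb rule in its simplest outer-product form; the learning rate and the example patterns are illustrative assumptions, not values from the slides:

```python
# Sketch of the Hebb rule: the weight change is proportional to the
# product of pre- and postsynaptic activity,
#   delta w[i][j] = eta * out[i] * inp[j]

def hebb_update(weights, inp, out, eta=1.0):
    for i in range(len(out)):
        for j in range(len(inp)):
            weights[i][j] += eta * out[i] * inp[j]
    return weights

# Hetero-associative: input and output patterns differ (illustrative).
w = [[0.0] * 3 for _ in range(2)]
hebb_update(w, inp=[1, 0, 1], out=[0, 1])
# Auto-associative learning would simply use out equal to inp.
```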

Another type of network is based on error-correcting learning
The most popular algorithm is called backpropagation
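As a sketch of the error-correcting idea, the single-unit delta rule below changes each weight in proportion to the difference between target and actual output; backpropagation extends this to multilayer networks. The learning rate and patterns are illustrative assumptions:

```python
# Sketch of error-correcting learning on a single linear unit (the
# delta rule): each weight changes in proportion to the error,
#   delta w[j] = eta * (target - output) * inp[j]

def delta_update(weights, inp, target, eta=0.1):
    output = sum(x * w for x, w in zip(inp, weights))
    error = target - output
    return [w + eta * error * x for w, x in zip(weights, inp)]

# Repeated updates drive the output toward the target (illustrative).
w = [0.0, 0.0]
for _ in range(50):
    w = delta_update(w, inp=[1, 1], target=1.0)
```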

We will look at an example of each type of Hebbian learning
Competitive learning is a form of unsupervised learning
Hetero-associative learning in Willshaw networks is an example of supervised learning

Example of competitive learning: Stimulus ‘at’ is presented
[Figure: input nodes a, t, o connected to category nodes 1 and 2]

Example of competitive learning: Competition starts at category level

Example of competitive learning: Competition resolves

Example of competitive learning: Hebbian learning takes place
Category node 2 now represents ‘at’

Presenting ‘to’ leads to activation of category node 1

Category 1 is established through Hebbian learning as well
Category node 1 now represents ‘to’
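The behaviour in these slides can be sketched as winner-take-all competition with a normalized Hebbian update for the winner (roughly the competitive learning rule of Rumelhart and Zipser); the initial weights and the letter coding of ‘at’ and ‘to’ are illustrative assumptions, and node indices here are 0-based:

```python
# Sketch of competitive (winner-take-all) learning: the category node
# whose weights best match the input wins, and only the winner's
# weights move toward the normalized input pattern.

def compete_and_learn(weights, inp, eta=0.5):
    nets = [sum(x * w for x, w in zip(inp, row)) for row in weights]
    k = nets.index(max(nets))               # winner takes all
    share = [x / sum(inp) for x in inp]     # normalized input
    weights[k] = [w + eta * (s - w) for w, s in zip(weights[k], share)]
    return k

# Two category nodes; inputs coded over the letters (a, t, o).
w = [[0.2, 0.3, 0.5], [0.5, 0.3, 0.2]]
print(compete_and_learn(w, [1, 1, 0]))  # 'at' is claimed by node 1
print(compete_and_learn(w, [0, 1, 1]))  # 'to' then wins node 0
```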

Willshaw networks
A weight either has the value 0 or 1
A weight is set to 1 if input and output are both 1
At retrieval the net input is divided by the total number of active nodes in the input pattern
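A minimal sketch of such a Willshaw-type heteroassociative memory, assuming binary patterns and a unit firing when net input divided by the number of active input nodes reaches 1; the patterns are illustrative. Retrieval with a subpattern previews the pattern completion shown in the slides that follow:

```python
# Sketch of a Willshaw-type memory: binary weights are set to 1
# wherever an input bit and an output bit are both 1; at retrieval
# the net input is divided by the number of active input bits.

def store(weights, inp, out):
    for i, o in enumerate(out):
        for j, x in enumerate(inp):
            if x == 1 and o == 1:
                weights[i][j] = 1

def retrieve(weights, inp):
    active = sum(inp)
    return [1 if sum(x * w for x, w in zip(inp, row)) / active >= 1 else 0
            for row in weights]

w = [[0] * 4 for _ in range(3)]
store(w, [1, 0, 1, 1], [0, 1, 1])
print(retrieve(w, [1, 0, 1, 1]))  # full cue   -> [0, 1, 1]
print(retrieve(w, [1, 0, 0, 1]))  # subpattern -> [0, 1, 1] (completion)
```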

Example of a simple heteroassociative memory of the Willshaw type

Example of pattern retrieval (Sum = 3, divided by 3 active inputs = 1)

Example of successful pattern completion using a subpattern (Sum = 2, divided by 2 active inputs = 1)

Example of graceful degradation: small lesions have small effects (Sum = 3, divided by 3 = 1)

Summing up
Neural networks can learn input-output associations
They are able to retrieve an output on the basis of incomplete input cues
They show graceful degradation
Eventually the network will become overloaded with too many patterns