Connectionist Modeling Some material taken from cspeech.ucd.ie/~connectionism and Rich & Knight, 1991

What is Connectionist Architecture? Very simple neuron-like processing elements. Weighted connections between these elements. Highly parallel & distributed. Emphasis on learning internal representations automatically.

What is Good About Connectionist Models? Inspired by the brain. –Neuron-like elements & synapse-like connections. –Local, parallel computation. –Distributed representation. Plausible experience-based learning. Good generalization via similarity. Graceful degradation.

Inspired by the Brain

The brain is made up of areas, with complex patterns of projections within and between areas: –Feedforward (sensory -> central) –Feedback (recurrence)

Neurons Input arrives from many other neurons. Inputs sum until a threshold is reached; at threshold, a spike is generated, and the neuron then rests. A typical firing rate is 100 Hz (a computer runs at roughly 1,000,000,000 Hz).

Synapses Axons almost touch the dendrites of other neurons. Neurotransmitters affect transmission from cell to cell across the synapse. This is where long-term learning takes place.

Synapse Learning One way the brain learns is by modification of synapses as a result of experience. Hebb’s postulate (1949): –When an axon of cell A … excites cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells so that A’s efficiency as one of the cells firing B is increased. Bliss and Lomo (1973) discovered this type of learning in the hippocampus.
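A rough sketch of a Hebb-style weight update in code (the learning rate and activation values are illustrative assumptions, not taken from the slides):

```python
# Hebbian update: strengthen a connection when the pre- and post-synaptic
# units are active together, per Hebb's postulate.
def hebb_update(w, pre, post, eta=0.1):
    """Return the new weight after one Hebbian update (eta is a learning rate)."""
    return w + eta * pre * post

w = 0.2
for _ in range(3):                 # repeated co-activation of the two cells
    w = hebb_update(w, pre=1.0, post=1.0)
print(w)                           # the weight grows with each co-firing (~0.5)
```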

Local, Parallel Computation The net input is the weighted sum of all incoming activations. The activation of this unit is some function of net, f.

Local, Parallel Computation net = x 1 w 1 + x 2 w 2 + … + x n w n ; the unit's output is f(net).
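A minimal sketch of that computation; the input and weight values below are made up for illustration (the slide's own worked numbers are not reproduced here):

```python
# Net input is the weighted sum of all incoming activations;
# the unit's activation is some function f of that net input.
def unit_activation(inputs, weights, f):
    net = sum(x * w for x, w in zip(inputs, weights))
    return f(net)

inputs  = [1.0, 0.0, 1.0]               # illustrative activations
weights = [0.4, 0.9, 0.3]               # illustrative weights
identity = lambda net: net              # a linear activation, f(net) = net
print(unit_activation(inputs, weights, identity))   # ~0.7
```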

Simple Feedforward Network (figure: units connected by weights)

Mapping from input to output (figures): an input pattern is presented to the input layer; activation feeds forward through the hidden layer; the output pattern appears on the output layer (feed-forward processing).
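A small sketch of this input-to-output mapping with sigmoid units; the layer sizes and weight values are arbitrary assumptions for illustration:

```python
import math

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

def layer(inputs, weights, biases):
    """One feed-forward layer: a weighted sum for each unit, then the sigmoid."""
    return [sigmoid(sum(x * w for x, w in zip(inputs, ws)) + b)
            for ws, b in zip(weights, biases)]

# Illustrative 2-input, 3-hidden, 1-output network (weights chosen arbitrarily).
W_hidden = [[0.5, -0.2], [0.3, 0.8], [-0.6, 0.1]]
b_hidden = [0.0, 0.1, -0.1]
W_output = [[0.7, -0.4, 0.2]]
b_output = [0.05]

input_pattern = [1.0, 0.0]                    # input pattern on the input layer
hidden = layer(input_pattern, W_hidden, b_hidden)
output = layer(hidden, W_output, b_output)    # output pattern on the output layer
print(output)
```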

Early Network Models McClelland and Rumelhart's model of the word superiority effect; the weights were hand-crafted.

Perceptrons (Rosenblatt). A one-layer network with a threshold activation function at the output: –+1 if the weighted input is above threshold. –-1 if below threshold.

Perceptrons (figure: inputs x 1, x 2, …, x n with weights w 1, w 2, …, w n feeding a summation unit)

Perceptrons (figure: the same unit with a bias input x 0 = 1 and weight w 0 added alongside inputs x 1 … x n and weights w 1 … w n)

Perceptrons With the bias input x 0 = 1: g(x) = w 0 + x 1 w 1 + x 2 w 2 ; output is 1 if g(x) > 0, 0 if g(x) < 0.
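In code, that threshold unit might look like the sketch below (treating g(x) = 0 as "does not fire" is an assumption; the slide only defines the two strict cases):

```python
def perceptron(x1, x2, w0, w1, w2):
    """g(x) = w0 + x1*w1 + x2*w2; output 1 if g(x) > 0, else 0."""
    g = w0 + x1 * w1 + x2 * w2
    return 1 if g > 0 else 0

# Example weights that make the unit compute logical AND of two binary inputs.
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, perceptron(x1, x2, w0=-1.5, w1=1.0, w2=1.0))
```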

Perceptrons Perceptrons can learn to compute functions. In particular, perceptrons can solve linearly separable problems (figure: AND is linearly separable; XOR is not).

Perceptrons Perceptrons are trained on input/output pairs. If the perceptron fires when it shouldn't, make each w i smaller by an amount proportional to x i ; if it doesn't fire when it should, make each w i larger.

Perceptrons (worked training example, figures): the unit classifies several inputs correctly (RIGHT), then misclassifies one (WRONG). It fails to fire on that input, so a proportion η = .01 of each x i is added to the corresponding weight. Demo: nnd4pr.
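A compact sketch of that training rule: lower each w i in proportion to x i when the unit fires but shouldn't, and raise it when it should fire but doesn't (η = .01 as on the slides; the training data and epoch count are assumptions):

```python
def train_perceptron(patterns, eta=0.01, epochs=100):
    """patterns: list of ((x1, ..., xn), target) pairs with targets 0 or 1.
    The weight vector includes w0 for the constant bias input x0 = 1."""
    n = len(patterns[0][0])
    w = [0.0] * (n + 1)
    for _ in range(epochs):
        for xs, target in patterns:
            xs = (1.0,) + tuple(xs)                          # prepend the bias input
            fired = 1 if sum(wi * xi for wi, xi in zip(w, xs)) > 0 else 0
            if fired and not target:                         # fired when it shouldn't
                w = [wi - eta * xi for wi, xi in zip(w, xs)]
            elif target and not fired:                       # didn't fire when it should
                w = [wi + eta * xi for wi, xi in zip(w, xs)]
    return w

# Illustrative linearly separable problem: logical AND.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
print(train_perceptron(data))    # learns a negative bias and positive input weights
```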

Gradient Descent

1. Choose some (random) initial values for the model parameters.
2. Calculate the gradient G of the error function with respect to each model parameter.
3. Change the model parameters so that we move a short distance in the direction of the greatest rate of decrease of the error, i.e., in the direction of -G.
4. Repeat steps 2 and 3 until G gets close to zero.
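A minimal sketch of those four steps for a single parameter, using E(w) = (w - 3)^2 as a stand-in error function (the function, learning rate, and starting range are assumptions for illustration):

```python
import random

def error(w):                 # E(w) = (w - 3)^2, minimised at w = 3
    return (w - 3.0) ** 2

def gradient(w):              # dE/dw
    return 2.0 * (w - 3.0)

w = random.uniform(-10, 10)         # 1. choose a random initial parameter value
learning_rate = 0.1
while True:
    G = gradient(w)                 # 2. gradient of the error w.r.t. the parameter
    w -= learning_rate * G          # 3. move a short distance in the direction of -G
    if abs(G) < 1e-6:               # 4. repeat until G gets close to zero
        break
print(w, error(w))                  # w ends up very close to 3
```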

Gradient Descent

Learning Rate
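This slide is essentially a figure; as a rough illustration of the usual point it makes (a small rate converges slowly, an overly large rate overshoots), here is the same toy error function stepped with a few arbitrary rates:

```python
def descend(w, rate, steps=20):
    """Run gradient-descent steps on E(w) = (w - 3)^2 and return the final w."""
    for _ in range(steps):
        w -= rate * 2.0 * (w - 3.0)
    return w

for rate in (0.01, 0.1, 0.9, 1.1):
    # 0.01 crawls, 0.1 converges, 0.9 converges while oscillating, 1.1 diverges
    print(rate, descend(0.0, rate))
```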

Adding Hidden Units (figure: the input space and the hidden unit space)

Minsky & Papert Minsky & Papert (1969) claimed that multi-layered networks with non-linear hidden units could not be trained. Backpropagation solved this problem.

Backpropagation For each pattern in the training set:
- Compute the error at the output nodes.
- Compute Δw for each weight in the 2nd layer.
- Compute δ (the generalized error expression) for the hidden units.
- Compute Δw for each weight in the 1st layer.
After amassing Δw for all weights and all patterns, change each weight a little bit, as determined by the learning rate. Demos: nnd12sd1, nnd12mo.
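A hedged sketch of those steps for one small 2-input, 2-hidden, 1-output sigmoid network, accumulating Δw over the whole training set before updating; the network shape, data, and learning rate are assumptions, not from the slides:

```python
import math
import random

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

def forward(x, W1, W2):
    """Feed-forward pass: 2 inputs -> 2 sigmoid hidden units -> 1 sigmoid output."""
    xb = list(x) + [1.0]                                    # inputs plus bias
    h = [sigmoid(sum(w * xi for w, xi in zip(ws, xb))) for ws in W1]
    o = sigmoid(sum(w * hi for w, hi in zip(W2, h + [1.0])))
    return xb, h, o

def backprop_epoch(patterns, W1, W2, eta=0.5):
    """One pass over the training set, amassing delta-w before changing any weight."""
    dW1 = [[0.0] * 3 for _ in range(2)]
    dW2 = [0.0] * 3
    for x, target in patterns:
        xb, h, o = forward(x, W1, W2)
        hb = h + [1.0]
        delta_o = (target - o) * o * (1 - o)                # error at the output node
        for j in range(3):                                  # delta-w for 2nd-layer weights
            dW2[j] += eta * delta_o * hb[j]
        for i in range(2):
            delta_h = h[i] * (1 - h[i]) * delta_o * W2[i]   # generalized error, hidden unit i
            for j in range(3):                              # delta-w for 1st-layer weights
                dW1[i][j] += eta * delta_h * xb[j]
    for i in range(2):                                      # change each weight a little,
        for j in range(3):                                  # as determined by the learning rate
            W1[i][j] += dW1[i][j]
    for j in range(3):
        W2[j] += dW2[j]

# Illustrative use on XOR, which needs the hidden layer.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
W2 = [random.uniform(-1, 1) for _ in range(3)]
for _ in range(10000):
    backprop_epoch(data, W1, W2)
print([round(forward(x, W1, W2)[2], 2) for x, _ in data])
# Outputs usually move toward 0, 1, 1, 0, though plain gradient descent can get stuck.
```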

Benefits of Connectionism Link to biological systems (neural basis). Parallel. Distributed. Good generalization. Graceful degradation. Learning: very powerful and general.

Problems with Connectionism Interpretability: –Weights. –Distributed nature. Faithfulness: –Often not well understood why they do what they do. –Often complex. Falsifiability: –Gradient descent as search. –Gradient descent as a model of learning.