Real Neurons: Cell Structures (Cell Body, Dendrites, Axon)

Slides from: Doug Gray, David Poole

Real Neurons. Cell structures: the cell body, dendrites, the axon, and synaptic terminals.

Real Neural Learning. Synapses change size and strength with experience. Hebbian learning: when two connected neurons are firing at the same time, the strength of the synapse between them increases. “Neurons that fire together, wire together.”

Artificial Neuron Model. Model the network as a graph with cells as nodes and synaptic connections as weighted edges; wji is the weight on the edge from node i to node j. Model the net input to cell j as netj = Σi wji oi. The cell output is a threshold function: oj = 1 if netj > Tj, and oj = 0 otherwise (Tj is the threshold for unit j). [Figure: a unit receiving inputs from units 2 through 6 via weights w12 through w16, and a step plot of oj versus netj with threshold Tj.]
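A minimal sketch of this linear threshold unit, assuming nothing beyond the formulas above; the weights, threshold, and inputs are made-up illustrative values:

```python
def threshold_unit(inputs, weights, T):
    """Return 1 if the weighted sum of the inputs exceeds the threshold T, else 0."""
    net = sum(w * o for w, o in zip(weights, inputs))
    return 1 if net > T else 0

# Example: a unit acting as a rough AND of two binary inputs.
print(threshold_unit([1, 1], [0.6, 0.6], T=1.0))  # 1, since net = 1.2 > 1.0
print(threshold_unit([1, 0], [0.6, 0.6], T=1.0))  # 0, since net = 0.6 <= 1.0
```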

[Seven image-only slides; each credits www.psi.toronto.edu/~vincent/research/presentations as its source.]

What’s the idea? An assembly of interconnected processing elements whose links are modified through training. [Diagram: a problem enters as input, flows through neurons connected by weighted links, and emerges as output giving the solution; the network in between is labelled “magic!”.]

von Neumann vs. Neural Net. [Diagram: a von Neumann machine, with a CPU exchanging data and instructions with memory, set against a neural network in which the data flows through the network itself.]

A few terms:
- architecture: the variables and their topological relationships
- activity rule: specifies how a neuron will respond to its inputs (short time scale)
- learning rule: specifies how a neuron will modify itself over a longer time scale
- supervised neural network: a network given ‘training sets’, i.e. a set of inputs and correct answers to learn from
- unsupervised neural network: a network given only a set of examples to memorize

Single neuron architecture: inputs xi, weights wi plus a bias weight w0, and a single output y. [Diagram: inputs x1, x2, …, xi feed through weights w1, w2, …, wi, together with w0, into the output y.]

Single neuron activity rule, in 2 steps. Step 1, neuron activation: a = Σi wi xi + w0. Step 2, activation function (pick one!): y = f(a), where f may be chosen from deterministic options (for example a threshold or a sigmoid) or stochastic options (the output is a random variable whose distribution depends on the activation).
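A small sketch of this two-step activity rule; the logistic sigmoid for the deterministic option, the coin-flip binary unit for the stochastic option, and the numeric values are illustrative choices, not prescribed by the slide:

```python
import math
import random

def activation(x, w, w0):
    """Step 1: weighted sum of the inputs plus the bias weight w0."""
    return sum(wi * xi for wi, xi in zip(w, x)) + w0

def deterministic_output(a):
    """Step 2 (deterministic option): logistic sigmoid of the activation."""
    return 1.0 / (1.0 + math.exp(-a))

def stochastic_output(a):
    """Step 2 (stochastic option): emit 1 with probability sigmoid(a), else 0."""
    return 1 if random.random() < deterministic_output(a) else 0

a = activation(x=[0.5, -1.0], w=[2.0, 0.5], w0=0.1)
print(deterministic_output(a))   # a real value in (0, 1)
print(stochastic_output(a))      # 0 or 1
```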

Single neuron learning rule. Compute the error signal, the difference between target and output: e = t – y. Adjust the weights accordingly: Δwi = η e xi, either after every example, or with batch learning: stuff all the input/target sets through the neuron and then modify the weights.
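A sketch of this learning rule for a single linear neuron, contrasting the per-example (online) and batch variants; the learning rate eta = 0.1 and the tiny dataset are made up for illustration:

```python
eta = 0.1                      # learning rate (illustrative value)
data = [([1.0, 0.0], 1.0),     # (inputs, target) pairs, made up for the example
        ([0.0, 1.0], 0.0)]

def output(w, w0, x):
    return sum(wi * xi for wi, xi in zip(w, x)) + w0

def online_step(w, w0, x, t):
    """Compute the error for one example and adjust the weights immediately."""
    e = t - output(w, w0, x)
    return [wi + eta * e * xi for wi, xi in zip(w, x)], w0 + eta * e

def batch_step(w, w0, examples):
    """Accumulate the updates over all examples, then modify the weights once."""
    dw, dw0 = [0.0] * len(w), 0.0
    for x, t in examples:
        e = t - output(w, w0, x)
        dw = [dwi + eta * e * xi for dwi, xi in zip(dw, x)]
        dw0 += eta * e
    return [wi + dwi for wi, dwi in zip(w, dw)], w0 + dw0

w, w0 = [0.0, 0.0], 0.0
w, w0 = online_step(w, w0, *data[0])
w, w0 = batch_step(w, w0, data)
print(w, w0)
```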

Binary Hopfield Network. Architecture: I neurons with symmetric, bidirectional connections and weights wij. Activity rule: updates may be synchronous, asynchronous, or even continuous. Learning rule: each memory should be a stable state of the network’s activity rule, e.g. the Hebb rule wij = η Σn xi(n) xj(n).
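A minimal sketch of such a network, with Hebb-rule weights and an asynchronous activity rule; the ±1 coding, the 1/N scaling in place of η, and the tiny made-up patterns are illustrative assumptions:

```python
import random

def hebb_weights(patterns):
    """Hebb rule: w[i][j] = (1/N) * sum over patterns of x_i * x_j, zero diagonal."""
    I, N = len(patterns[0]), len(patterns)
    w = [[0.0] * I for _ in range(I)]
    for p in patterns:
        for i in range(I):
            for j in range(I):
                if i != j:
                    w[i][j] += p[i] * p[j] / N
    return w

def recall(w, state, steps=100):
    """Asynchronous activity rule: repeatedly update one randomly chosen unit."""
    s = list(state)
    for _ in range(steps):
        i = random.randrange(len(s))
        a = sum(w[i][j] * s[j] for j in range(len(s)))
        s[i] = 1 if a >= 0 else -1
    return s

memories = [[1, 1, 1, 1, -1, -1], [1, -1, 1, -1, 1, -1]]   # made-up +/-1 patterns
w = hebb_weights(memories)
print(recall(w, [1, 1, 1, 1, -1, 1]))   # recovers the first memory from a corrupted probe
```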

Hopfield Network Memory & Capacity. Storing more memories makes the stable states shallower: in the running example, 5 memories give stable but shallow states, and 6 results in disaster. The maximum number of patterns under the ‘total’ stability criterion (every bit of every memory stable) is about I / (4 ln I); an avalanche of errors occurs at about N ≈ 0.138 I.
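For a sense of scale, and assuming the standard capacity results quoted above, a quick computation for an arbitrary illustrative network size of I = 1000 units:

```python
import math

I = 1000                      # number of units (illustrative)
print(0.138 * I)              # ~138 patterns before the avalanche
print(I / (4 * math.log(I)))  # ~36 patterns if every bit of every memory must be stable
```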

Perceptron Training. Assume supervised training examples giving the desired output for a unit, given a set of known input activations. Learn synaptic weights so that the unit produces the correct output for each example. The perceptron uses an iterative update algorithm to learn a correct set of weights.

Perceptron Learning Rule. Update weights by wji = wji + η(tj – oj)oi, where η is the “learning rate” and tj is the teacher-specified output for unit j. Equivalent to the rules: if the output is correct, do nothing; if the output is high, lower the weights on active inputs; if the output is low, increase the weights on active inputs. Also adjust the threshold to compensate: Tj = Tj – η(tj – oj).

Perceptron Learning Algorithm. Iteratively update weights until convergence; each execution of the outer loop is typically called an epoch.
  Initialize weights to random values
  Until outputs of all training examples are correct:
    For each training pair, E, do:
      Compute the current output oj for E given its inputs
      Compare the current output to the target value, tj, for E
      Update the synaptic weights and threshold using the learning rule
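A compact sketch of this loop for a single threshold unit, using the update rule from the previous slide; the AND dataset, the learning rate 0.1, and the epoch cap are illustrative choices:

```python
import random

def perceptron_train(examples, n_inputs, eta=0.1, max_epochs=100):
    """Train one threshold unit until every training example is classified correctly."""
    w = [random.uniform(-0.5, 0.5) for _ in range(n_inputs)]
    T = random.uniform(-0.5, 0.5)
    for epoch in range(max_epochs):
        errors = 0
        for x, t in examples:
            o = 1 if sum(wi * xi for wi, xi in zip(w, x)) > T else 0
            if o != t:
                errors += 1
                w = [wi + eta * (t - o) * xi for wi, xi in zip(w, x)]
                T = T - eta * (t - o)           # adjust the threshold to compensate
        if errors == 0:                          # all outputs correct: converged
            return w, T, epoch + 1
    return w, T, max_epochs

# Logical AND is linearly separable, so the loop converges.
and_data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, T, epochs = perceptron_train(and_data, n_inputs=2)
print(w, T, epochs)
```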

Perceptron as a Linear Separator. Since the perceptron uses a linear threshold function, it is searching for a linear separator that discriminates the classes: w12 o2 + w13 o3 > T1, i.e. o3 > –(w12/w13) o2 + T1/w13, a line in the (o2, o3) plane (or a hyperplane in n-dimensional space).

Hill-Climbing in Multi-Layer Nets. Since “greed is good,” perhaps hill-climbing can be used to learn multi-layer networks in practice, although its theoretical limits are clear. However, to do gradient descent, we need the output of a unit to be a differentiable function of its inputs and weights. The standard linear threshold function is not differentiable at the threshold. [Plot: the step output versus net input netj, with threshold Tj.]

Differentiable Output Function. We need a non-linear output function to move beyond linear functions: a multi-layer linear network is still linear. The standard solution is to use the non-linear, differentiable sigmoidal “logistic” function: oj = 1 / (1 + e^–(netj – Tj)). [Plot: the logistic curve rising from 0 to 1 around Tj.] One can also use a tanh or Gaussian output function.
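The logistic function’s derivative has the convenient closed form o(1 – o), which is exactly the factor appearing in the δ formulas on the following slides; a quick numerical check (the point net = 0.3 is arbitrary, and the threshold is folded in as a shift):

```python
import math

def sigmoid(net, T=0.0):
    """Logistic output o_j = 1 / (1 + exp(-(net_j - T_j)))."""
    return 1.0 / (1.0 + math.exp(-(net - T)))

# Check that d(sigmoid)/d(net) equals o * (1 - o) via a central difference.
net, eps = 0.3, 1e-6
o = sigmoid(net)
numeric = (sigmoid(net + eps) - sigmoid(net - eps)) / (2 * eps)
print(numeric, o * (1 - o))   # the two values agree to several decimal places
```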

Backpropagation Learning Rule. Each weight is changed by Δwji = η δj oi, where η is a constant called the learning rate, tj is the correct teacher output for unit j, and δj is the error measure for unit j: δj = oj(1 – oj)(tj – oj) if j is an output unit, and δj = oj(1 – oj) Σk δk wkj if j is a hidden unit.

Error Backpropagation. First calculate the error of the output units and use this to change the top layer of weights. For example, with current output oj = 0.2 and correct output tj = 1.0, the error is δj = oj(1–oj)(tj–oj) = 0.2(1–0.2)(1–0.2) = 0.128; then update the weights into j. [Figure: three-layer network with output, hidden, and input layers.]

Error Backpropagation. Next calculate the error for the hidden units, based on the errors of the output units each one feeds into: δj = oj(1 – oj) Σk δk wkj. [Figure: the same network, with errors flowing from the output layer back to the hidden layer.]

Error Backpropagation. Finally, update the bottom layer of weights based on the errors calculated for the hidden units: Δwji = η δj oi for the weights into each hidden unit j. [Figure: the same network, with the input-to-hidden weights being updated.]

Backpropagation Training Algorithm.
  Create the 3-layer network with H hidden units and full connectivity between layers. Set weights to small random real values.
  Until all training examples produce the correct value (within ε), or the mean squared error ceases to decrease, or other termination criteria are met:
    Begin epoch
    For each training example, d, do:
      Calculate the network output for d’s input values
      Compute the error between the current output and the correct output for d
      Update the weights by backpropagating the error and using the learning rule
    End epoch
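A minimal sketch of this algorithm for a 3-layer (input, hidden, output) network of logistic units, using the δ and Δw formulas from the preceding slides; the XOR data, H = 2 hidden units, the learning rate, and the termination settings are illustrative choices, not part of the slide:

```python
import math
import random

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def forward(x, w_hid, w_out):
    """Compute hidden and output activations; each unit's last weight is its bias."""
    hidden = [sigmoid(sum(w[i] * xi for i, xi in enumerate(x)) + w[-1]) for w in w_hid]
    output = [sigmoid(sum(w[j] * h for j, h in enumerate(hidden)) + w[-1]) for w in w_out]
    return hidden, output

def train(data, n_in, H, n_out, eta=0.5, max_epochs=20000, eps=0.05):
    # Small random initial weights (the extra weight per unit is the bias).
    w_hid = [[random.uniform(-0.5, 0.5) for _ in range(n_in + 1)] for _ in range(H)]
    w_out = [[random.uniform(-0.5, 0.5) for _ in range(H + 1)] for _ in range(n_out)]
    for epoch in range(max_epochs):            # one pass over the data = one epoch
        worst = 0.0
        for x, t in data:
            hidden, output = forward(x, w_hid, w_out)
            # Output-unit errors: delta_j = o_j (1 - o_j) (t_j - o_j)
            d_out = [o * (1 - o) * (tj - o) for o, tj in zip(output, t)]
            # Hidden-unit errors: delta_j = o_j (1 - o_j) * sum_k delta_k w_kj
            d_hid = [h * (1 - h) * sum(dk * w_out[k][j] for k, dk in enumerate(d_out))
                     for j, h in enumerate(hidden)]
            # Weight updates: delta_w_ji = eta * delta_j * o_i (bias input is 1)
            for k, dk in enumerate(d_out):
                for j, h in enumerate(hidden):
                    w_out[k][j] += eta * dk * h
                w_out[k][-1] += eta * dk
            for j, dj in enumerate(d_hid):
                for i, xi in enumerate(x):
                    w_hid[j][i] += eta * dj * xi
                w_hid[j][-1] += eta * dj
            worst = max(worst, max(abs(tj - o) for o, tj in zip(output, t)))
        if worst < eps:                         # every output within eps of its target
            return w_hid, w_out, epoch + 1
    return w_hid, w_out, max_epochs

# XOR is not linearly separable, so it needs the hidden layer. With H = 2 this usually
# converges, but (as the next slide notes) it can occasionally stall in a local minimum;
# rerunning with new random weights typically fixes that.
xor_data = [([0, 0], [0]), ([0, 1], [1]), ([1, 0], [1]), ([1, 1], [0])]
w_hid, w_out, epochs = train(xor_data, n_in=2, H=2, n_out=1)
print(epochs, [round(forward(x, w_hid, w_out)[1][0], 2) for x, _ in xor_data])
```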

Comments on the Training Algorithm. It is not guaranteed to converge to zero training error; it may converge to a local optimum or oscillate indefinitely. However, in practice it does converge to low error for many large networks on real data. Many epochs (thousands) may be required, meaning hours or days of training for large networks. To avoid local-minima problems, run several trials starting with different random weights (random restarts), then either take the result of the trial with the lowest training-set error or build a committee of results from multiple trials (possibly weighting votes by training-set accuracy).