Neural Networks Chapter 6 Joost N. Kok Universiteit Leiden

Feedforward Networks

NETtalk

Feedforward Networks
– A network to pronounce English text
– 7 × 29 input units
– 1 hidden layer with 80 hidden units
– 26 output units encoding phonemes
– Trained on 1024 words with context
– Produces intelligible speech after 10 training epochs
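The slide's numbers pin down the whole architecture, so the sizes can be sanity-checked in a few lines of NumPy. Only the layer sizes come from the slide; the one-hot window encoding and the sigmoid units are assumptions made for the sketch:

```python
import numpy as np

n_in = 7 * 29      # 7-character text window, 29 symbols each, one-hot: 203 inputs
n_hid = 80         # one hidden layer
n_out = 26         # phoneme encoding

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(n_hid, n_in))
W2 = rng.normal(scale=0.1, size=(n_out, n_hid))

# A stand-in input: one active symbol per window position
x = np.zeros(n_in)
x[rng.integers(0, 29, size=7) + 29 * np.arange(7)] = 1.0

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
y = sigmoid(W2 @ sigmoid(W1 @ x))
print(y.shape)     # (26,): one output unit per element of the phoneme code
```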

Feedforward Networks
– Functionally equivalent to DECtalk
– Rule-based DECtalk is the result of a decade of effort by many linguists
– NETtalk learns from examples and requires no linguistic knowledge

Back-Propagation

1. Initialize the weights to small random values.
2. Choose a pattern and apply it to the input layer.
3. Propagate the signal forwards through the network.
4. Compute the deltas for the output layer.

Back-Propagation
5. Compute the deltas for the preceding layers by propagating the errors backwards.
6. Update all the connections.
7. Go back to step 2 for the next pattern.
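A minimal NumPy sketch of these seven steps, for one hidden layer of sigmoid units and a squared-error loss. The layer sizes, learning rate, and names are illustrative, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Step 1: initialize the weights to small random values
n_in, n_hid, n_out = 4, 8, 1                    # illustrative sizes
W1 = rng.normal(scale=0.1, size=(n_hid, n_in))
W2 = rng.normal(scale=0.1, size=(n_out, n_hid))
eta = 0.5                                       # learning rate

def train_step(x, t):
    """One pass of steps 2-6 for a single pattern x with target t."""
    global W1, W2
    # Steps 2-3: apply the pattern and propagate the signal forwards
    h = sigmoid(W1 @ x)                         # hidden activations
    y = sigmoid(W2 @ h)                         # output activations
    # Step 4: deltas for the output layer (derivative of squared error)
    d_out = (t - y) * y * (1 - y)
    # Step 5: deltas for the hidden layer, errors propagated backwards
    d_hid = (W2.T @ d_out) * h * (1 - h)
    # Step 6: update all the connections
    W2 += eta * np.outer(d_out, h)
    W1 += eta * np.outer(d_hid, x)

# Step 7: loop over the training patterns, e.g.
# for x, t in patterns: train_step(x, t)
```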

Feedforward Networks

Navigation of a Car (Carnegie Mellon)
– 30 × 32 pixel image
– 8 × 32 range finder
– 29 hidden units, 45 output units
– 1200 simulated road images, 40 training cycles
– Drives at 5 km/h

Feedforward Networks

Backgammon
– Moves scored from –100 to +100
– Training examples with 459 inputs each
– Two hidden layers of 24 nodes
– Neurogammon vs. Gammontool: wins 59 percent of games
– Without precomputed features: 41 percent
– Without noise: 45 percent

Feedforward Networks

Parity Problem
– Output is on if an odd number of inputs is on
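The parity target is trivial to state in code, yet it is not linearly separable for two or more inputs (XOR is the two-input case), which is why it needs hidden units and serves as a classic benchmark. A minimal spelling in Python, with N = 3 chosen for illustration:

```python
from itertools import product

def parity(bits):
    """On (1) iff an odd number of inputs is on."""
    return sum(bits) % 2

for bits in product([0, 1], repeat=3):  # all 2**3 input patterns
    print(bits, "->", parity(bits))
```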

Back-Propagation

– The update rule is local
– Incremental weight updating vs. batch mode
– Momentum: accelerate the long-term trend by a constant factor
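The slide's formula did not survive transcription; the standard momentum rule it refers to is (notation assumed here: α is the momentum parameter, η the learning rate)

```latex
\Delta w_{ij}(t+1) = -\,\eta\,\frac{\partial E}{\partial w_{ij}}
                     + \alpha\,\Delta w_{ij}(t),
\qquad 0 \le \alpha < 1 .
```

On a long stretch of consistent gradient the updates sum like a geometric series, so the effective learning rate approaches η/(1 − α).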

Back-Propagation
– Adaptive parameters: e.g. adjusting the learning rate during training

Feedforward Networks
– Process Modeling and Control
– Machine Diagnostics
– Portfolio Management
– Target Recognition
– Medical Diagnosis
– Credit Rating

Feedforward Networks
– Targeted Marketing
– Voice Recognition
– Financial Forecasting
– Quality Control
– Intelligent Searching
– Fraud Detection

Optimal Network Architectures: Optimization
– Use as few units as possible:
  – improves computational cost and training time
  – improves generalization
– Search through the space of possible architectures, for example using Back-Propagation combined with Evolutionary Algorithms (a toy sketch follows)
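Since the last bullet couples back-propagation with an evolutionary search, here is a toy sketch of that idea: a (1+1)-style loop that mutates a single architecture parameter (the hidden-layer size) and keeps the smallest network that still scores at least as well. The task (3-bit parity), mutation scheme, and training budget are all illustrative choices, not from the slides:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative task: 3-bit parity (all 8 patterns)
X = np.array(list(product([0.0, 1.0], repeat=3)))
T = X.sum(axis=1) % 2

def train_error(n_hid, epochs=2000, eta=0.5):
    """Train a 3-n_hid-1 net with back-propagation; return its error rate."""
    W1 = rng.normal(scale=0.5, size=(n_hid, 3))
    W2 = rng.normal(scale=0.5, size=(1, n_hid))
    for _ in range(epochs):
        for x, t in zip(X, T):
            h = sigmoid(W1 @ x)
            y = sigmoid(W2 @ h)
            d_out = (t - y) * y * (1 - y)
            d_hid = (W2.T @ d_out) * h * (1 - h)
            W2 += eta * np.outer(d_out, h)
            W1 += eta * np.outer(d_hid, x)
    preds = sigmoid(W2 @ sigmoid(W1 @ X.T)).round().ravel()
    return float(np.mean(preds != T))

# (1+1)-style evolutionary loop over the architecture:
# mutate the hidden-layer size, keep smaller networks that do as well.
best_n = 8
best_err = train_error(best_n)
for _ in range(10):
    cand = max(1, best_n + int(rng.integers(-2, 3)))   # mutate
    err = train_error(cand)
    if err <= best_err and cand <= best_n:             # select
        best_n, best_err = cand, err
print("selected hidden units:", best_n, "error rate:", best_err)
```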

Optimal Network Architectures
– Construct or modify the architecture:
  – start with too many nodes and take some away (pruning)
  – start with too few and add some more (construction)

Optimal Network Architectures
– Pruning and weight decay

Optimal Network Architectures
– Small weights decay more rapidly than large ones:
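The equation on this slide was lost in transcription. A standard penalty with exactly this property (the form used in Hertz, Krogh & Palmer's discussion of pruning; assumed here to be what the slide showed) is

```latex
E' = E + \gamma \sum_{ij} \frac{w_{ij}^{2}}{1 + w_{ij}^{2}},
\qquad\text{so}\qquad
\frac{\partial E'}{\partial w_{ij}} = \frac{\partial E}{\partial w_{ij}}
  + \frac{2\gamma\, w_{ij}}{\bigl(1 + w_{ij}^{2}\bigr)^{2}} .
```

For small weights the decay force is roughly proportional to the weight itself, while for large weights it falls off like 1/w³, so heavily used connections are left almost untouched.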

Optimal Network Architectures
– To remove whole units rather than single connections: use the same decay term for all connections feeding unit i:
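The formula here is also missing. By analogy with the per-connection penalty above, a per-unit variant (an assumption about what the slide showed) couples all connections feeding unit i by summing over its incoming weights:

```latex
E' = E + \gamma \sum_{i} \frac{\sum_{j} w_{ij}^{2}}{1 + \sum_{j} w_{ij}^{2}}
```

All weights into unit i then grow or decay together, so the unit as a whole either survives or can be pruned.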

Optimal Network Architectures
– Construction: start with a small network and gradually grow one of the appropriate size
– Setting: a Boolean function from N binary inputs to a single binary output

Optimal Network Architectures

Choose hidden units such that each unit gives
– the same output for all remaining patterns with one target
– the opposite output for at least one of the remaining patterns with the opposite target
and then remove those patterns. Repeating this leaves a linearly separable problem.

Optimal Network Architectures

– Do the best we can with a single node
– Correct its errors with two extra nodes:
  – one for wrongly-on patterns
  – one for wrongly-off patterns
– Each additional unit reduces the number of incorrectly classified patterns by at least one, so the construction terminates

Optimal Network Architectures

– Faithful representation: two patterns with different targets should have different internal representations
– Master unit: does as well as possible on the task
– Ancillary units: added to obtain a faithful representation