Back-propagation Chih-yun Lin 5/16/2015

Agenda
- Perceptron vs. back-propagation network
- Network structure
- Learning rule
- Why a hidden layer?
- An example: Jets or Sharks
- Conclusions

Network Structure – Perceptron
[Diagram: a single output unit O connected directly to the input units I_j by weights W_j]

Network Structure – Back-propagation Network
[Diagram: output units O_i connected to hidden units a_j by weights W_j,i; hidden units connected to input units I_k by weights W_k,j]
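To make the structure concrete, here is a minimal forward-pass sketch in Python/NumPy using the slide's symbols; the layer sizes, random weights, and sigmoid activation are illustrative assumptions, not from the slides.

```python
import numpy as np

def g(x):
    """Sigmoid activation function."""
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative sizes: 3 input units I_k, 2 hidden units a_j, 1 output unit O_i.
rng = np.random.default_rng(0)
W_kj = rng.normal(size=(3, 2))   # input -> hidden weights W_k,j
W_ji = rng.normal(size=(2, 1))   # hidden -> output weights W_j,i

I = np.array([1.0, 0.0, 1.0])    # input activations I_k
a_j = g(I @ W_kj)                # hidden activations a_j = g(in_j)
O = g(a_j @ W_ji)                # output activations O_i = g(in_i)
print(O)
```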

Learning Rule
- Measure the error
- Reduce that error by appropriately adjusting each of the weights in the network

Learning Rule – Perceptron
Err = T – O
- O is the predicted output
- T is the correct (target) output
W_j ← W_j + α * I_j * Err
- I_j is the activation of unit j in the input layer
- α is a constant called the learning rate
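A minimal sketch of this rule in Python/NumPy, assuming a hard-threshold output unit; the input size, learning rate, and training pair are illustrative, not from the slides.

```python
import numpy as np

alpha = 0.1                          # learning rate
W = np.zeros(3)                      # one weight W_j per input unit

def predict(I):
    """Hard-threshold unit: output 1 if the weighted sum is positive."""
    return 1 if I @ W > 0 else 0

def update(I, T):
    """One application of W_j <- W_j + alpha * I_j * Err."""
    global W
    Err = T - predict(I)             # Err = T - O
    W += alpha * I * Err

update(np.array([1.0, 0.0, 1.0]), 1) # one step on a single training pair
print(W)
```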

Learning Rule – Back-propagation Network
Err_i = T_i – O_i
W_j,i ← W_j,i + α * a_j * Δ_i, where Δ_i = Err_i * g′(in_i)
- g′ is the derivative of the activation function g
- a_j is the activation of hidden unit j
W_k,j ← W_k,j + α * I_k * Δ_j, where Δ_j = g′(in_j) * Σ_i W_j,i * Δ_i
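These updates translate directly into code. Below is a sketch of one backpropagation step implementing them for a small network with sigmoid g, so g′(in) = g(in)(1 − g(in)); the 3-2-1 layer sizes, seed, learning rate, and training pair are assumptions for illustration.

```python
import numpy as np

def g(x):
    """Sigmoid activation; g'(x) = g(x) * (1 - g(x))."""
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
W_kj = rng.normal(size=(3, 2))       # input -> hidden weights W_k,j
W_ji = rng.normal(size=(2, 1))       # hidden -> output weights W_j,i
alpha = 0.5                          # learning rate

def backprop_step(I, T):
    """One weight update on a single (input, target) pair."""
    global W_kj, W_ji
    # Forward pass.
    in_j = I @ W_kj
    a_j = g(in_j)
    in_i = a_j @ W_ji
    O = g(in_i)
    # Output layer: Delta_i = Err_i * g'(in_i).
    Delta_i = (T - O) * g(in_i) * (1 - g(in_i))
    # Hidden layer: Delta_j = g'(in_j) * sum_i W_j,i * Delta_i.
    Delta_j = g(in_j) * (1 - g(in_j)) * (W_ji @ Delta_i)
    # Updates: W_j,i += alpha * a_j * Delta_i ; W_k,j += alpha * I_k * Delta_j.
    W_ji += alpha * np.outer(a_j, Delta_i)
    W_kj += alpha * np.outer(I, Delta_j)

backprop_step(np.array([1.0, 0.0, 1.0]), np.array([1.0]))
print(W_kj, W_ji)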

Learning Rule – Back-propagation Network (cont.)
The updates perform gradient descent on the squared error
E = 1/2 * Σ_i (T_i – O_i)²,
whose gradient with respect to an input-to-hidden weight is
∂E/∂W_k,j = – I_k * Δ_j
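For reference, a short sketch of the standard chain-rule derivation behind this identity, in the slide's notation:

```latex
% Differentiating E = (1/2) \sum_i (T_i - O_i)^2 with respect to W_{k,j},
% using O_i = g(in_i), in_i = \sum_j W_{j,i} a_j, a_j = g(in_j),
% and in_j = \sum_k W_{k,j} I_k:
\frac{\partial E}{\partial W_{k,j}}
  = -\sum_i (T_i - O_i)\, g'(in_i)\, W_{j,i}\, g'(in_j)\, I_k
  = -I_k\, g'(in_j) \sum_i W_{j,i}\, \Delta_i
  = -I_k\, \Delta_j
% Gradient descent, W_{k,j} <- W_{k,j} - alpha * dE/dW_{k,j},
% therefore reproduces the hidden-layer update rule above.
```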

Why a hidden layer?
A single perceptron with threshold t cannot represent XOR. The four training cases require:
(1 * w1) + (1 * w2) < t  ==> w1 + w2 < t
(1 * w1) + (0 * w2) > t  ==> w1 > t
(0 * w1) + (1 * w2) > t  ==> w2 > t
(0 * w1) + (0 * w2) < t  ==> 0 < t
These constraints are contradictory: w1 > t and w2 > t with t > 0 force w1 + w2 > t, so no single set of weights fits XOR.

Why a hidden layer? (cont.)
The same argument extends to three inputs:
(1 * w1) + (1 * w2) + (1 * w3) < t  ==> w1 + w2 + w3 < t
(1 * w1) + (0 * w2) + (0 * w3) > t  ==> w1 > t
(0 * w1) + (1 * w2) + (0 * w3) > t  ==> w2 > t
(0 * w1) + (0 * w2) + (0 * w3) < t  ==> 0 < t
With each single input required to turn the unit on while all inputs together leave it off, the constraints again cannot all be satisfied, so a hidden layer is needed. The sketch after this slide shows a hidden layer solving XOR.
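As a concrete check, here is a sketch training a two-layer network (as in the backpropagation sketch above) on XOR, which a single threshold unit cannot fit; the hidden-layer size, seed, learning rate, and iteration count are all assumptions.

```python
import numpy as np

def g(x):
    """Sigmoid activation."""
    return 1.0 / (1.0 + np.exp(-x))

# XOR training set: inputs X and targets T.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])

rng = np.random.default_rng(1)
W_kj = rng.normal(size=(2, 4))          # input -> hidden weights
W_ji = rng.normal(size=(4, 1))          # hidden -> output weights
alpha = 0.5

for _ in range(10000):                  # plain batch gradient descent
    a_j = g(X @ W_kj)                   # hidden activations
    O = g(a_j @ W_ji)                   # outputs
    Delta_i = (T - O) * O * (1 - O)     # output deltas; g'(in_i) = O(1 - O)
    Delta_j = a_j * (1 - a_j) * (Delta_i @ W_ji.T)  # hidden deltas
    W_ji += alpha * a_j.T @ Delta_i
    W_kj += alpha * X.T @ Delta_j

print(np.round(g(g(X @ W_kj) @ W_ji), 2))  # typically close to [0, 1, 1, 0]
```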

An example: Jets or Sharks

Conclusions
- Expressiveness: well-suited for continuous inputs, unlike most decision-tree systems
- Computational efficiency: time to error convergence is highly variable
- Generalization: reasonable success in a number of real-world problems

Conclusions (cont.)
- Sensitivity to noise: very tolerant of noise in the input data
- Transparency: neural networks are essentially black boxes
- Prior knowledge: hard to use one's knowledge to "prime" a network to learn better