Neuron Model and Network Architecture


CHAPTER 2 Neuron Model and Network Architecture
Ming-Feng Yeh

Objectives
Introduce the simplified mathematical model of the neuron.
Explain how these artificial neurons can be interconnected to form a variety of network architectures.
Illustrate the basic operation of these neural networks.

Notation
Scalars: small italic letters, e.g., a, b, c
Vectors: small bold nonitalic letters
Matrices: capital BOLD nonitalic letters, e.g., A, B, C
Other notations are given in Appendix B.

Single-Input Neuron
Scalar input: p
Scalar weight: w (synapse)
Bias: b (with a constant input of 1)
Net input: n = wp + b
Transfer function: f (cell body)
Output: a = f(n) = f(wp + b)
Example: if w = 3, p = 2 and b = -1.5, then n = 3·2 - 1.5 = 4.5 and a = f(4.5).
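The slide's example can be sketched in Python (the function and parameter names are illustrative; the slides themselves use the MATLAB toolbox). Here f is taken to be the hard limit function introduced two slides later:

```python
def hardlim(n):
    """Hard limit transfer function: 0 if n < 0, else 1."""
    return 1.0 if n >= 0 else 0.0

def single_input_neuron(p, w, b, f):
    """Single-input neuron: output a = f(wp + b)."""
    n = w * p + b          # net input
    return f(n)

# Slide's numbers: w = 3, p = 2, b = -1.5  ->  n = 3*2 - 1.5 = 4.5
a = single_input_neuron(p=2, w=3, b=-1.5, f=hardlim)
```

Since n = 4.5 ≥ 0, the hard limit output is a = 1.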

Bias and Weight
The bias b is much like a weight w, except that it has a constant input of 1. It can be omitted if it is not needed.
Bias b and weight w are both adjustable scalar parameters of the neuron. They can be adjusted by some learning rule so that the neuron input/output relationship meets some specific goal.

Transfer Functions
The transfer function f may be a linear or nonlinear function of the net input n. Three of the most commonly used functions:
Hard limit transfer function
Linear transfer function
Log-sigmoid transfer function

Hard Limit Transfer Function
a = hardlim(n) = hardlim(wp + b)
a = 0, if n < 0
a = 1, if n ≥ 0
MATLAB function: hardlim

Linear Transfer Function
a = purelin(n) = purelin(wp + b)
a = n
MATLAB function: purelin

Log-Sigmoid Transfer Function
a = logsig(n) = logsig(wp + b)
a = 1 / [1 + exp(-n)]
MATLAB function: logsig
For other transfer functions, see Pages 2-6 & 2-17.
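The three transfer functions above are one-liners; a plain-Python sketch (named after the MATLAB toolbox functions the slides cite):

```python
import math

def hardlim(n):
    """Hard limit: a = 0 if n < 0, a = 1 if n >= 0."""
    return 1.0 if n >= 0 else 0.0

def purelin(n):
    """Linear: a = n."""
    return n

def logsig(n):
    """Log-sigmoid: a = 1 / (1 + exp(-n)); squashes n into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-n))
```

Note that logsig(0) = 0.5 and logsig saturates toward 0 and 1 for large negative and positive n, which is why it is a common choice for hidden layers trained by backpropagation.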

Multiple-Input Neuron
A neuron (node) with R inputs: p1, p2, …, pR
The weight matrix W: w11, w12, …, w1R
The neuron has a bias b.
Net input: n = w11p1 + w12p2 + … + w1RpR + b = Wp + b
Neuron output: a = f(Wp + b)
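In matrix form this is a single row-vector product. A NumPy sketch with hypothetical numbers (R = 3 inputs; the weights and inputs below are arbitrary illustrations, not from the slides):

```python
import numpy as np

def neuron(p, W, b, f):
    """Multiple-input neuron: a = f(Wp + b), with W 1xR and p Rx1."""
    n = W @ p + b          # scalar net input (as a 1x1 array)
    return f(n)

W = np.array([[1.0, 2.0, -1.0]])     # 1 x R weight matrix
p = np.array([[2.0], [1.0], [3.0]])  # R x 1 input vector
b = 0.5
a = neuron(p, W, b, np.tanh)         # n = 1*2 + 2*1 - 1*3 + 0.5 = 1.5
```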

Single-Layer Network
R: number of inputs
S: number of neurons (nodes) in the layer (in general, R ≠ S)
Input vector p is a vector of length R.
Bias vector b and output vector a are vectors of length S.
Weight matrix W is an S×R matrix.
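These dimensions can be checked with a short NumPy sketch (the sizes and random weights are arbitrary illustrations):

```python
import numpy as np

R, S = 4, 3                      # R inputs, S neurons (R != S in general)
rng = np.random.default_rng(0)

W = rng.standard_normal((S, R))  # S x R weight matrix
b = rng.standard_normal((S, 1))  # S x 1 bias vector
p = rng.standard_normal((R, 1))  # R x 1 input vector

a = np.tanh(W @ p + b)           # S x 1 output vector
```

The one matrix multiply W @ p computes the net inputs of all S neurons at once, which is the whole point of the layer notation.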


Multiple-Layer Network
Input Layer → Hidden Layers → Output Layer
a1 = f1(W1p + b1),  a2 = f2(W2a1 + b2),  a3 = f3(W3a2 + b3)
Layer superscript: indicates the number of the layer.
R inputs; Sn neurons (nodes) in the nth layer.
Different layers can have different numbers of neurons.
The outputs of layer k are the inputs of layer k+1.
The weight matrix Wj between layer i and layer j (j = i+1) is an Sj×Si matrix.
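The layer-by-layer composition can be sketched as a loop, since the output of each layer feeds the next. Sizes here are hypothetical (R = 2, S1 = 4, S2 = 3, S3 = 1), and the weights are random illustrations:

```python
import numpy as np

def logsig(n):
    """Log-sigmoid transfer function."""
    return 1.0 / (1.0 + np.exp(-n))

def forward(p, layers):
    """Forward pass: the output of layer k is the input of layer k+1."""
    a = p
    for W, b, f in layers:
        a = f(W @ a + b)
    return a

rng = np.random.default_rng(1)
layers = [
    (rng.standard_normal((4, 2)), rng.standard_normal((4, 1)), logsig),       # W1: S1 x R
    (rng.standard_normal((3, 4)), rng.standard_normal((3, 1)), logsig),       # W2: S2 x S1
    (rng.standard_normal((1, 3)), rng.standard_normal((1, 1)), lambda n: n),  # W3: S3 x S2
]
a3 = forward(rng.standard_normal((2, 1)), layers)
```

Note how each weight matrix's column count matches the previous layer's neuron count, which is exactly the Sj×Si dimension rule.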

Network Architectures
Models of neural networks are specified by three basic entities:
models of the processing elements (neurons),
models of the interconnections and structures (network topology), and
the learning rules (the ways information is stored in the network).
The weights may be positive (excitatory) or negative (inhibitory). Information is stored in the connection weights.

Network Structures
The layer that receives inputs is called the input layer. The outputs of the network are generated from the output layer. Any layer between the input and output layers is called a hidden layer. There may be from zero to several hidden layers in a neural network.

Network Structures
When no node output is an input to a node in the same layer or a preceding layer, the network is a feedforward network. When outputs are directed back as inputs to nodes in the same or a preceding layer, the network is a feedback network. Feedback networks that have closed loops are called recurrent networks.

Delay Block & Integrator

Recurrent Network
A recurrent network has an initial condition a(0) = p and a recurrent layer whose delayed output is fed back as its input: a(t + 1) = f(Wa(t) + b).
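A minimal sketch of such a recurrent layer, assuming the symmetric saturating linear transfer function (satlins, a common toolbox choice for this example; the slide itself does not name f), with illustrative weights:

```python
import numpy as np

def satlins(n):
    """Symmetric saturating linear transfer function: clips n to [-1, 1]."""
    return np.clip(n, -1.0, 1.0)

def recurrent(p, W, b, steps):
    """Recurrent layer: a(0) = p; a(t+1) = satlins(W a(t) + b)."""
    a = p
    for _ in range(steps):
        a = satlins(W @ a + b)
    return a

# Illustrative 2-neuron recurrent layer that decays toward zero:
W = np.array([[0.5, 0.0], [0.0, 0.5]])
b = np.array([[0.0], [0.0]])
p = np.array([[1.0], [-2.0]])
a = recurrent(p, W, b, steps=3)   # [[0.125], [-0.25]]
```

Unlike a feedforward pass, the same weight matrix is applied repeatedly, so the network's output depends on time (the number of iterations), not just on the input.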

Learning Scheme
Two kinds of learning in neural networks:
parameter learning, which concerns updating the connection weights in a neural network, and
structure learning, which focuses on changing the network structure, including the number of nodes and their connection types.
Each kind of learning can be further classified into three categories: supervised learning, reinforcement learning, and unsupervised learning.


How to Pick an Architecture
Problem specifications help define the network in the following ways:
Number of network inputs = number of problem inputs
Number of neurons in the output layer = number of problem outputs
The choice of output-layer transfer function is at least partly determined by the problem's specification of the outputs.