MAE 552 Heuristic Optimization Instructor: John Eddy Lecture #30 4/15/02 Neural Networks.

Non-Linearity (in response to a question asked): 1st – We'll define a non-linear system as one in which the outputs are not proportional to the inputs. NNs incorporate non-linearity at each neuron by means of the neuron's activation function, which is typically a sigmoid (sigmoid meaning S-shaped, like the Greek letter sigma). More on this later.

Neural Networks: Modeling of a Neuron.

Elements of a Neuron Model 1 – Synapses, or connecting links: these are the sites at which input enters the neuron. Each synapse has an associated weight, which multiplies the input before it reaches the summing junction (to be discussed next).

Elements of a Neuron Model 2 – An adder (linear combiner) for summing the input signals, each of which has been weighted by its respective synapse before entering. This is our summing junction. So we have as output from our adder:
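The equation itself was on the slide image and is missing from this transcript. In the standard neuron-model notation (an assumption; the symbols u_k, w_kj, x_j and the indexing follow common textbook convention), the adder's output is:

```latex
u_k = \sum_{j=1}^{p} w_{kj}\, x_j
```

where x_1, …, x_p are the inputs, w_kj is the weight of synapse j on neuron k, and u_k is the linear-combiner output.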

Elements of a Neuron Model 3 – An activation function (or squashing function) for limiting and otherwise controlling the output (y_k) of a neuron. The result of this function is usually normalized to the range [0, 1] or sometimes [-1, 1].

Neural Networks So what is θ_k? θ_k is referred to as the threshold value. It is subtracted from the combiner output before the activation function is applied.

Neural Networks On the other hand, a given neuron may instead implement a bias-term formulation. A bias term would typically be thought of as increasing the net activation of the neuron. Mathematically, a neuron may be modeled equivalently using either a threshold or a bias, keeping in mind that either can take on positive and negative values. Consider the following neuron models.

[Figure: the threshold-based and bias-based neuron models, shown side by side as equivalent.]

Elements of a Neuron Model So, for example, our previous output equation written with a threshold was:
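The equation is not preserved in the transcript. In the notation introduced above (an assumption, following standard convention), the threshold form and its bias-term counterpart are:

```latex
y_k = \varphi(u_k - \theta_k)
\qquad \text{or, with a bias term,} \qquad
y_k = \varphi(u_k + b_k)
```

where u_k is the adder output, θ_k the threshold, and b_k the bias of neuron k.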

Elements of a Neuron Model So we can show that the two neuron models just presented are not only equivalent to each other, but also equivalent to the first neuron model, in which the threshold was input directly to the activation function.

Elements of a Neuron Model So with the zero indexed term, we account for the threshold or bias and the activation function representation becomes:
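The slide formula is missing here. A standard way to write the zero-indexed form (an assumption, following common textbook notation) is:

```latex
v_k = \sum_{j=0}^{p} w_{kj}\, x_j, \qquad y_k = \varphi(v_k)
```

with the fixed input x_0 = -1 and weight w_k0 = θ_k for the threshold formulation (or x_0 = +1 and w_k0 = b_k for the bias formulation), so the threshold or bias is absorbed into the sum as just another weighted input.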

Neural Networks So what does the activation function φ(·) look like? There are three common implementations of the activation function. The first is referred to as a threshold function:

Neural Networks And thus the output from one of our neurons will be:
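Both equations were on the slide images and are missing from the transcript. The standard threshold (Heaviside) form, in the notation used above (an assumption), along with the resulting neuron output, is:

```latex
\varphi(v) =
\begin{cases}
1, & v \ge 0 \\
0, & v < 0
\end{cases}
\qquad \Longrightarrow \qquad
y_k =
\begin{cases}
1, & v_k \ge 0 \\
0, & v_k < 0
\end{cases}
```

A neuron with this activation function fires (outputs 1) exactly when its net input v_k reaches zero, i.e., when the weighted input sum reaches the threshold.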

Neural Networks The second is the piecewise-linear function:
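The formula is not preserved in the transcript. A common piecewise-linear form (an assumption, following standard texts) is linear in a band around zero and saturates outside it:

```latex
\varphi(v) =
\begin{cases}
1, & v \ge \tfrac{1}{2} \\[2pt]
v + \tfrac{1}{2}, & -\tfrac{1}{2} < v < \tfrac{1}{2} \\[2pt]
0, & v \le -\tfrac{1}{2}
\end{cases}
```

Note that the threshold function above is the limiting case of this form as the linear region shrinks to zero width.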

Neural Networks The third is the sigmoid function (by far the most common). A common example is the logistic function.
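The lecture does not include code; as a minimal sketch, the logistic function φ(v) = 1 / (1 + e^(-a·v)) can be computed as below. The slope parameter a and the function name are illustrative assumptions:

```python
import math

def logistic(v, a=1.0):
    """Logistic sigmoid with slope parameter a: phi(v) = 1 / (1 + exp(-a*v))."""
    return 1.0 / (1.0 + math.exp(-a * v))

# The output is squashed into the open interval (0, 1):
print(logistic(0.0))                                  # 0.5 at the origin
print(logistic(10.0) > 0.99, logistic(-10.0) < 0.01)  # True True

# As the slope parameter a grows, the sigmoid approaches the threshold function:
print(round(logistic(0.5, a=100.0), 6))               # 1.0
```

Unlike the threshold function, the logistic function is smooth and differentiable everywhere, which matters later for gradient-based training.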

Learning OK, so we know something about what a neuron looks like. How does a collection of such constructs "learn"? Def: Learning – in the context of a neural network…

Learning There are 3 steps in the general-case learning process (the slide text is not preserved in this transcript; the following is the standard formulation): 1 – the network is stimulated by its environment; 2 – the network undergoes changes in its free parameters as a result of this stimulation; 3 – the network responds in a new way to the environment because of the changes in its internal structure.

Learning Mathematically:
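The equation is missing from the transcript. In the weight notation used above (an assumption, following the standard formulation), learning is an iterative weight adjustment:

```latex
w_{kj}(n+1) = w_{kj}(n) + \Delta w_{kj}(n)
```

where n is the time step and Δw_kj(n) is the adjustment to the weight connecting input j to neuron k, computed by whichever learning rule is in use.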

Learning There are 3 learning paradigms that we will consider:
1 – Supervised Learning
2 – Reinforcement Learning
3 – Self-Organized Learning
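The lecture gives no code; as a minimal sketch of the first paradigm, supervised learning, the snippet below trains a single threshold neuron with the classic perceptron rule to learn the logical AND function. The function names, learning rate, and the AND task are illustrative assumptions, not from the lecture:

```python
# Supervised learning sketch: a "teacher" supplies the desired output for each
# input, and the perceptron rule nudges the weights toward that output.

def predict(weights, theta, inputs):
    """Threshold activation: fire (1) if the weighted sum reaches theta."""
    v = sum(w * x for w, x in zip(weights, inputs)) - theta
    return 1 if v >= 0 else 0

def train(samples, eta=0.5, epochs=20):
    """Perceptron learning rule: w <- w + eta * error * x for each sample."""
    weights, theta = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, desired in samples:
            error = desired - predict(weights, theta, inputs)
            weights = [w + eta * error * x for w, x in zip(weights, inputs)]
            theta -= eta * error  # the threshold learns like a weight on a fixed input of -1
    return weights, theta

# Training set for logical AND: (inputs, desired output) pairs from the teacher.
and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, theta = train(and_samples)
print([predict(weights, theta, x) for x, _ in and_samples])  # [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron rule converges to weights and a threshold that classify all four input patterns correctly.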