Introduction to Neural Networks
Neural Network Architecture and Learning
Error Gradient Descent Learning Rule
Local Minima
Physical Annealing Process
Simulated Annealing
By analogy with this physical process, each step of the simulated annealing (SA) algorithm replaces the current solution with a random "nearby" solution, chosen with a probability that depends both on the difference between the corresponding function values and on a global parameter T (called the temperature), which is gradually decreased during the process. The dependency is such that the current solution changes almost randomly when T is large, but increasingly "downhill" as T approaches zero. The allowance for "uphill" moves saves the method from becoming stuck at local minima, which are the bane of greedier methods. Initially, T is set to a high value (or infinity) and is decreased at each step according to an annealing schedule, which may be specified by the user but must reach T = 0 by the end of the allotted time budget. In this way, the system is expected to wander initially toward a broad region of the search space containing good solutions, ignoring small features of the energy function; then drift toward low-energy regions that become narrower and narrower; and finally move downhill according to the steepest-descent heuristic.
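The acceptance rule described above (always accept improvements, accept worsening moves with probability exp(-ΔE/T)) is the Metropolis criterion. Below is a minimal sketch of the SA loop in Python, assuming a generic scalar cost function and a user-supplied neighbor move; the names (simulated_annealing, cost, neighbor) and the geometric cooling schedule T ← αT are illustrative choices, not taken from the slides.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, t_min=1e-4,
                        alpha=0.95, steps_per_t=100):
    """Minimize `cost` by simulated annealing (illustrative sketch).

    cost:      maps a solution to a scalar energy value.
    neighbor:  returns a random "nearby" solution.
    x0:        initial solution.
    t0, t_min, alpha: geometric annealing schedule T <- alpha * T
               (an assumed schedule; the slides leave it user-specified).
    """
    x, fx = x0, cost(x0)
    best, f_best = x, fx
    t = t0
    while t > t_min:
        for _ in range(steps_per_t):
            y = neighbor(x)
            fy = cost(y)
            # Metropolis criterion: always accept downhill moves; accept
            # uphill moves with probability exp(-(fy - fx) / T), which
            # shrinks toward zero as T -> 0.
            if fy <= fx or random.random() < math.exp(-(fy - fx) / t):
                x, fx = y, fy
                if fx < f_best:
                    best, f_best = x, fx
        t *= alpha  # cool the temperature per the annealing schedule
    return best, f_best

# Example: a 1-D cost with many local minima that traps plain descent.
if __name__ == "__main__":
    f = lambda x: x * x + 10 * math.sin(3 * x)
    step = lambda x: x + random.uniform(-0.5, 0.5)
    x_best, f_best = simulated_annealing(f, step, x0=5.0)
    print(f"x = {x_best:.3f}, f(x) = {f_best:.3f}")
```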
Simulated Annealing Algorithm (Part I)
Simulated Annealing Algorithm (Part II)
Evolutionary Learning Rule
Training in Neural Networks
Data Under and Over Fitting
Feedforward Neural Networks Using Supervised Learning
Perceptron – The Simplest Neural Network
Error-Correction-Based Perceptron Learning Algorithm
Multilayer Perceptron (MLP)