
Slide 1: CS 621 Artificial Intelligence, Lecture 21 (04/10/05)
Prof. Pushpak Bhattacharyya, IIT Bombay
Artificial Neural Networks: Models, Algorithms, Applications

Slide 2: Softcomputing
A neural network is a computing device composed of very simple interconnected processing elements called neurons. The processing power of such devices comes from the many connections. There are different neural network models, such as the Perceptron, Backpropagation, the Hopfield net, Adaptive Resonance Theory and the Kohonen net. The first three are supervised learning algorithms; the rest are unsupervised.

Slide 3: Fundamental Unit: Perceptron
These are single neurons, motivated by brain cells. The weighted sum ∑_{i=1..n} w_i x_i is compared with the threshold θ. If the net input exceeds the threshold, the neuron fires, i.e. assumes the state 1; otherwise the state is 0.
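A minimal sketch of this firing rule in Python (the function name and example values are illustrative, not from the lecture):

```python
# A minimal perceptron (linear threshold unit): fire if the weighted sum exceeds theta.
def perceptron(inputs, weights, theta):
    net = sum(w * x for w, x in zip(weights, inputs))
    return 1 if net > theta else 0

# e.g. perceptron([1, 1], [1.0, 1.0], 1.5) -> 1
```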

Slide 4: Computing with Perceptron (AND)

x1 x2 | y
 0  0 | 0
 0  1 | 0
 1  0 | 0
 1  1 | 1

The weights must satisfy:
0.w1 + 0.w2 < θ
0.w1 + 1.w2 < θ
1.w1 + 0.w2 < θ
1.w1 + 1.w2 > θ
From these constraints, one possible solution is θ = 1.5, w1 = 1.0, w2 = 1.0.
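The stated solution can be checked directly against the truth table; this small script is only a verification sketch:

```python
# Verifying the slide's AND solution: theta = 1.5, w1 = w2 = 1.0.
theta, w1, w2 = 1.5, 1.0, 1.0
for x1, x2, y in [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]:
    fired = 1 if w1 * x1 + w2 * x2 > theta else 0
    assert fired == y
print("theta = 1.5, w1 = w2 = 1.0 computes AND")
```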

Slide 5: Geometrical View

Slide 6: Perceptron Training Algorithm (PTA): Preprocessing

Slide 7: Pre-processing for PTA
The threshold is absorbed as a weight (w_0 = θ) on a new input line that carries the constant input -1. We also negate the components of the vectors in the 0-class, so that for every vector only the single test ∑_{i=0..n} w_i x_i > 0, i.e. W.X > 0, has to be performed. In PTA we need to find the weight values that satisfy this test.
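A possible Python rendering of this preprocessing step (function and variable names are mine):

```python
# Preprocessing for PTA: absorb the threshold as weight w0 on a constant -1 input,
# then negate every 0-class vector so that only the single test W.X > 0 remains.
def preprocess(one_class, zero_class):
    augmented = [(-1,) + tuple(x) for x in one_class]
    augmented += [tuple(-c for c in (-1,) + tuple(x)) for x in zero_class]
    return augmented

# e.g. preprocess([(1, 1)], [(0, 0), (0, 1), (1, 0)])
# -> [(-1, 1, 1), (1, 0, 0), (1, 0, -1), (1, -1, 0)]
```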

Slide 8: Perceptron Training Algorithm
Input: preprocessed vectors X_i, i = 1 to m
1. Choose random values for w_i, i = 0 to n
2. i := 1
3. IF W.X_i > 0 goto 6
4. W := W + X_i
5. goto 2
6. i := i + 1
7. IF i > m goto 9
8. goto 3
9. End
Thus whenever an example is misclassified we add it to the weight vector, and this goes on until the test W.X_i > 0 is satisfied for all input vectors.
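The pseudo-code above translates fairly directly into Python; this is an illustrative sketch, assuming the preprocessed vectors are given as tuples, and it terminates only when the data is linearly separable:

```python
import random

def pta(vectors):
    """Perceptron Training Algorithm on preprocessed vectors (tuples of length n+1).
    Terminates only if the underlying function is linearly separable."""
    w = [random.uniform(-1, 1) for _ in range(len(vectors[0]))]   # step 1
    while True:
        for x in vectors:                                         # steps 2, 6-8
            if sum(wi * xi for wi, xi in zip(w, x)) <= 0:         # step 3 fails
                w = [wi + xi for wi, xi in zip(w, x)]             # step 4: W := W + X_i
                break                                             # step 5: restart the pass
        else:
            return w                                              # step 9: all tests passed
```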

Slide 9: Example of PTA
AND function. 1-class: {<1,1>}; 0-class: {<0,0>, <0,1>, <1,0>}.
Augmented classes (constant -1 input added): 1-class: {<-1,1,1>}; 0-class: {<-1,0,0>, <-1,0,1>, <-1,1,0>}.
Negating the 0-class vectors, the total set of vectors is:
X1 = <-1,1,1>, X2 = <1,0,0>, X3 = <1,0,-1>, X4 = <1,-1,0>

Slide 10: Example of PTA (contd.)
Start with an initial weight vector, say W = <0,0,0>.
W.X1 = 0, so the test fails; W_new = W_old + X1 = <-1,1,1>, which then fails for X2.
Keep adding the failed vectors. Eventually the correct result must come. WHY?
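The first two steps of this trace can be reproduced with a few lines of Python (using the vectors as reconstructed above):

```python
# Reproducing the first two steps of the trace on the slide.
W = (0, 0, 0)
X1, X2 = (-1, 1, 1), (1, 0, 0)
dot = lambda a, b: sum(p * q for p, q in zip(a, b))
print(dot(W, X1))                            # 0, so the test W.X1 > 0 fails
W = tuple(w + x for w, x in zip(W, X1))      # W_new = W_old + X1 = (-1, 1, 1)
print(dot(W, X2))                            # -1, so the test fails for X2 as well
```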

Slide 11: Perceptron Training Convergence Theorem
Whatever the initial choice of weights, the Perceptron Training Algorithm will eventually converge by finding correct weight values, provided the function being trained is linearly separable.

Slide 12: Non-Linearity
How to deal with non-linearity:
1. Tolerate some error, e.g. in the case of XOR, classify the X points but include one O point as well.
2. Separate the classes by a higher-order surface, but in that case the neuron no longer remains a linear threshold element.

Slide 13: Non-Linearity (contd.)
Introduce more hyperplanes so that each segment encloses points of only one kind and no point of the other kind.

Slide 14: Multilayer Perceptrons
Multilayer perceptrons are more powerful than single-layer perceptrons. The neurons use the sigmoid function as their input-output relationship. Merits of the sigmoid: 1. Biological plausibility: it mimics the behaviour of actual brain cells. 2. The derivative is easy to compute: dy/dx = y(1 - y).
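A small sketch of the sigmoid and the derivative identity quoted above:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_derivative(x):
    y = sigmoid(x)
    return y * (1.0 - y)     # dy/dx = y(1 - y): cheap to compute once y is known
```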

Slide 15: Backpropagation Algorithm

Slide 16: Backpropagation Algorithm (contd.)
Each neuron in one layer is connected to all neurons in the next layer, if any. Let I = <i_1, ..., i_n> be the input vector, O = <o_1, ..., o_n> the output vector and T = <t_1, ..., t_n> the target vector. Then the error is E = ½ ∑_{i=1..n} (t_i - o_i)².

Slide 17: BP: Gradient Descent
Δw_ji ∝ -∂E/∂w_ji, where Δw_ji is the change in the weight between the i-th (feeding) and the j-th (fed) neuron. With the learning rate η as the constant of proportionality, Δw_ji = -η ∂E/∂w_ji. From this, the weight-change values for the whole network can easily be found. The weight change always reduces the error; hence BP is by nature a greedy strategy.
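As an illustration, the update for a single weight feeding a sigmoid output neuron with the squared error above works out as follows (the function name and the learning-rate value are mine, not from the slide):

```python
import math

def output_weight_delta(x_i, net_j, t_j, eta=0.5):
    """Delta w_ji = -eta * dE/dw_ji for an output neuron j with sigmoid activation,
    fed by input x_i, where E = 1/2 * (t_j - o_j)**2."""
    o_j = 1.0 / (1.0 + math.exp(-net_j))              # sigmoid output of neuron j
    dE_dw = -(t_j - o_j) * o_j * (1.0 - o_j) * x_i    # chain rule: dE/do * do/dnet * dnet/dw
    return -eta * dE_dw
```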

Slide 18: Analysis of BP
Influence of the learning rate: a large value gives fast progress but causes oscillation about the minimum; too small a value makes progress very slow. Symmetry breaking: if the mapping demands different weights but we start with the same weights everywhere, BP will never converge. Momentum factor: the momentum term hastens the operating point towards the minimum and, once near the minimum, dampens the oscillations.
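The slide does not spell out the momentum update; a common way to write it, shown here as a sketch with illustrative default values, is Δw(t) = -η ∂E/∂w + α Δw(t-1):

```python
def momentum_update(grad, prev_delta, eta=0.6, alpha=0.3):
    """delta_w(t) = -eta * dE/dw + alpha * delta_w(t-1)."""
    return -eta * grad + alpha * prev_delta
```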

Slide 19: Local Minima
Due to the greedy nature of BP, it can get stuck in a local minimum m and never reach the global minimum g, since the error can only decrease with each weight change.

Slide 20: Some Useful Tips for BP
If the network weights do not change, one of the following might have happened:
a) The learning rate is too small.
b) Network paralysis: neurons are operating near the saturation region (output 1 or 0) because the inputs are large positive or negative values. In this case, scale the inputs.
c) The network is stuck in a local minimum. In this case, restart with a fresh random set of weights.

Slide 21: Tips (contd.)
If positive and negative values are equally distributed in the input, use the tanh(x) activation curve instead.

Slide 22: Application
Loan defaulter recognition: a classification task.

Name     | Age | Sex | Address                      | Income (Rs/yr) | Defaulter (Y/N)
A.K. Das | 43  | M   | 5th Street, Juhu Reclamation | 5 lakhs        | N
S. Singh | 52  | M   | A.S. Marg, Powai             | 0.7            | Y

Slide 23: Network Design
Input layer with 4 neurons (in reality, hundreds of attributes). Hidden layer with 2 neurons. Output layer with a single neuron. Learning rate of about 0.6 and momentum factor of about 0.3. Training can be done with very efficient packages/hardware, e.g. SNNS: http://www-ra.informatik.uni-tuebingen.de/SNNS/
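A minimal NumPy sketch of such a 4-2-1 sigmoid network trained with learning rate 0.6 and momentum 0.3; the input values and the omission of bias terms are my own simplifications, and in practice one would use a package such as the SNNS mentioned above:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.uniform(-0.5, 0.5, (2, 4))     # input -> hidden weights (bias terms omitted)
W2 = rng.uniform(-0.5, 0.5, (1, 2))     # hidden -> output weights
dW1_prev, dW2_prev = np.zeros_like(W1), np.zeros_like(W2)
eta, alpha = 0.6, 0.3                   # learning rate and momentum factor from the slide

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(x, t):
    """One backpropagation step on a single example x (4 attributes) with target t."""
    global W1, W2, dW1_prev, dW2_prev
    h = sigmoid(W1 @ x)                          # hidden activations
    o = sigmoid(W2 @ h)                          # output activation
    delta_o = (t - o) * o * (1 - o)              # output error term
    delta_h = (W2.T @ delta_o) * h * (1 - h)     # hidden error terms
    dW2 = eta * np.outer(delta_o, h) + alpha * dW2_prev
    dW1 = eta * np.outer(delta_h, x) + alpha * dW1_prev
    W2, W1 = W2 + dW2, W1 + dW1
    dW2_prev, dW1_prev = dW2, dW1
    return 0.5 * float(t - o[0]) ** 2            # squared error for monitoring

# e.g. train_step(np.array([0.43, 1.0, 0.2, 0.5]), t=0.0)  # made-up, pre-scaled attributes
```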

Slide 24: Hopfield Net
Inspired by associative memory, in which retrieval is not by address but by part of the data itself. Consists of N fully interconnected neurons with symmetric weights w_ij = w_ji and no self-connections, so the weight matrix is symmetric with a zero diagonal. Each computing element (neuron) is a linear threshold element with threshold 0.
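A sketch of this structure with ±1 states and an asynchronous threshold-0 update (names and the tie-breaking choice at net input 0 are mine):

```python
import numpy as np

def async_step(W, s, i):
    """Update neuron i of a Hopfield net: compare its net input with the threshold 0.
    W is symmetric with a zero diagonal; s is a +/-1 state vector (modified in place)."""
    net = W[i] @ s
    s[i] = 1 if net >= 0 else -1
    return s
```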

Slide 25: Computation

Slide 26: Stability
In the asynchronous mode of operation, at any instant a single randomly selected neuron compares its net input with the threshold. In the synchronous mode, all neurons update themselves simultaneously at each instant. Since the Hopfield net has feedback connections, the question of stability arises. At every time instant the network evolves and finally settles into a stable state. How does the Hopfield net function as associative memory? One needs to store, i.e. stabilize, a vector which is the memory element.

Slide 27: Example
w_12 = w_21 = 5, w_13 = w_31 = 3, w_23 = w_32 = 2.
At time t = 0: s_1(t) = 1, s_2(t) = -1, s_3(t) = 1.
This is an unstable state: the net input to neuron 1 is w_12 s_2 + w_13 s_3 = -5 + 3 = -2 < 0, so neuron 1 will flip. A stable pattern is called an attractor for the net.
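The flip can be verified numerically with the update rule sketched earlier (a small check, not from the slide):

```python
import numpy as np

W = np.array([[0, 5, 3],
              [5, 0, 2],
              [3, 2, 0]])        # w12 = 5, w13 = 3, w23 = 2; symmetric, zero diagonal
s = np.array([1, -1, 1])         # state at t = 0
print(W[0] @ s)                  # net input to neuron 1: 5*(-1) + 3*1 = -2 < 0 -> flips to -1
```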

Slide 28: Energy Consideration
Stable patterns correspond to minimum-energy states. The energy of a state is E = -½ ∑_i ∑_{j≠i} w_ji x_i x_j. In the asynchronous mode of operation the change in energy always comes out to be negative, so the energy always decreases and stability is ensured.
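A sketch of this energy function; the figures in the comment are my own computation for the example on the previous slide:

```python
import numpy as np

def energy(W, x):
    """E = -1/2 * sum_i sum_{j != i} w_ji * x_i * x_j (the zero diagonal drops i == j terms)."""
    x = np.asarray(x, dtype=float)
    return -0.5 * x @ W @ x

# For the previous example: energy(W, [1, -1, 1]) = 4.0, and after neuron 1 flips,
# energy(W, [-1, -1, 1]) = 0.0 -- the energy has decreased, as claimed.
```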

Slide 29: Association Finding
Association detection is a very important problem in data mining. A Hopfield net can be trained to store associations. Classic example: people who buy bread also buy butter, so stores keep bread and butter together. Similarly, sweet shops serving Bengali sweets also keep popular Bengali magazines.

Slide 30: Hopfield Net as a Storehouse of Associations
Get the features of situations in terms of pairs. Train the Hopfield net with an appropriate learning algorithm (the Hopfield rule). Use it to detect associations in future examples.

Slide 31: Conclusion
Neural nets and fuzzy logic are alternative ways of computation that provide means of discovering knowledge from high volumes of data. The first crucial step is feature selection. The features can be fed into the neural net, and they can be described qualitatively, supported with profiles, for data mining tasks.

Slide 32: References
Books:
R.J. Schalkoff, Artificial Neural Networks, McGraw Hill, 1997.
G.J. Klir and T.A. Folger, Fuzzy Sets, Uncertainty and Information, Prentice Hall India, 1995.
E. Cox, Fuzzy Logic, Charles River Media, 1995.
J. Han and M. Kamber, Data Mining: Concepts and Techniques, Morgan Kaufmann, 2000.
Journals:
Neural Computation, IEEE Transactions on Neural Networks, Data and Knowledge Engineering, SIGKDD, IEEE Expert, etc.

