ECE 471/571 - Lecture 16: Hopfield Network (11/03/15)


2. Types of NN
- Recurrent (feedback during operation)
  - Hopfield
  - Kohonen
  - Associative memory
- Feedforward (no feedback during operation; feedback occurs only while the weights are being determined)
  - Perceptron
  - Backpropagation

3. Memory in Humans
- The human brain can lay down and recall memories in both long-term and short-term fashion.
- Memory is associative, or content-addressable:
  - It is not isolated: all memories are, in some sense, strings of memories.
  - We access a memory by its content, not by its location or the neural pathways leading to it. Compare this to traditional computer memory, which is addressed by location.
  - Given incomplete, low-resolution, or partial information, we are capable of reconstructing the full memory.

4. A Simple Hopfield Network
- A recurrent NN with no distinct layers.
- Every node is connected to every other node.
- The connections are bidirectional.
- Example: 16x16 nodes.
[Figure: node diagram with inputs x_1, ..., x_d, weights w_1, ..., w_d, bias -b, and output y]

5. Properties
- Stores certain patterns in a fashion similar to the human brain.
- Given partial information, the full pattern can be recovered.
- Robustness: during an average lifetime many neurons die, yet we do not suffer a catastrophic loss of individual memories (by the time we die we may have lost 20 percent of our original neurons).
- Guarantee of convergence: the network is guaranteed to settle, after enough time, into some fixed pattern. In the language of memory recall, if we start the network off with a pattern of firing that approximates one of the "stable firing patterns" (memories), it will, under its own steam, end up in the nearby well in the energy surface, thereby recalling the original perfect memory.

6. Examples
(The example images were taken from a source that is no longer available.)

7. How Does It Work?
- A set of exemplar patterns is chosen and used to initialize the weights of the network.
- Once this is done, any pattern can be presented to the network, which responds by displaying the exemplar pattern that is, in some sense, most similar to the input pattern.
- The output pattern is read off from the network by reading the states of the units in the order determined by the mapping of the components of the input vector to the units.

8. Four Components
- How to train the network?
- How to update a node?
- What sequence should be used when updating nodes?
- How to stop?

9. Network Initialization
Assumptions:
- The network has N units (nodes).
- The weight from node i to node j is w_ij, and the weights are symmetric: w_ij = w_ji.
- Each node i has a threshold/bias value b_i associated with it.
- We have M known patterns p^i = (p^i_1, ..., p^i_N), i = 1..M, each of which has N elements.
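The slide states the assumptions but leaves the training rule itself to the lecture. The usual choice, consistent with the symmetric-weight assumption above, is the Hebbian outer-product rule. Below is a minimal NumPy sketch assuming bipolar (+1/-1) patterns; the function name, the 1/N scaling, and the zero-diagonal convention are illustrative choices, not taken from the slides.

```python
import numpy as np

def init_weights(patterns):
    """Hebbian (outer-product) weight initialization.

    patterns: (M, N) array of M bipolar (+1/-1) exemplar patterns.
    Returns a symmetric (N, N) weight matrix with zero diagonal,
    so that w_ij = w_ji and no node feeds back onto itself.
    """
    M, N = patterns.shape
    W = np.zeros((N, N))
    for p in patterns:
        W += np.outer(p, p)      # accumulate p_j * p_k for each exemplar
    np.fill_diagonal(W, 0.0)     # conventional choice: no self-connections
    return W / N                 # 1/N scaling is a common normalization
```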

10. Classification
- Suppose we have an input pattern (p_1, ..., p_N) to be classified.
- Let the state of the i-th node at time t be mu_i(t). Then mu_i(0) = p_i.
- Testing (S is the sigmoid function): each node is updated as
  mu_i(t+1) = S( sum_j w_ij * mu_j(t) - b_i )
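Continuing the sketch above, here is one way the recall loop can be written. This assumes discrete bipolar states, so the hard-limiting sign function stands in for the squashing function S on the slide; the asynchronous random update order and the full-sweep stopping test are one common answer to the sequencing and stopping questions from slide 8. The name `recall` and the `max_sweeps` cap are illustrative choices.

```python
def recall(W, pattern, b=None, max_sweeps=100, rng=None):
    """Asynchronous recall: mu_i <- sign(sum_j w_ij * mu_j - b_i).

    Starts from the (possibly corrupted) input pattern and updates one
    randomly chosen node at a time; stops when a full sweep over all
    nodes produces no state change.
    """
    rng = np.random.default_rng() if rng is None else rng
    s = np.asarray(pattern, dtype=float).copy()
    b = np.zeros(s.size) if b is None else b
    for _ in range(max_sweeps):
        changed = False
        for i in rng.permutation(s.size):      # random asynchronous order
            new = 1.0 if W[i] @ s - b[i] >= 0 else -1.0
            if new != s[i]:
                s[i] = new
                changed = True
        if not changed:                        # a full sweep with no flips:
            break                              # the network has settled
    return s
```

For instance, storing two orthogonal 8-element exemplars and probing with a one-bit-corrupted copy recovers the original:

```python
patterns = np.array([[ 1,  1,  1,  1, -1, -1, -1, -1],
                     [ 1, -1,  1, -1,  1, -1,  1, -1]], dtype=float)
W = init_weights(patterns)
probe = patterns[0].copy()
probe[0] = -probe[0]                 # corrupt one component
print(recall(W, probe))              # recovers patterns[0]
```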

12. Why Converge? - Energy Descent
- Billiard-table model: the surface of the billiard table plays the role of the energy surface.
- Energy of the network: E = -1/2 * sum_i sum_j w_ij * mu_i * mu_j + sum_i b_i * mu_i
- The choice of the network weights ensures that minima of the energy function occur at (or near) points representing the exemplar patterns.
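The energy expression can be evaluated directly; a minimal helper, using the same W, b, and state conventions as the sketches above, is:

```python
def energy(W, s, b=None):
    """E = -1/2 * sum_ij w_ij * s_i * s_j + sum_i b_i * s_i.

    With symmetric weights and a zero diagonal, each asynchronous node
    flip can only decrease (or preserve) E, which is why the state
    settles into a minimum instead of cycling.
    """
    b = np.zeros(s.size) if b is None else b
    return -0.5 * s @ W @ s + b @ s
```

Tracking energy(W, s) across the sweeps of recall yields a non-increasing sequence, which is the formal content of the convergence guarantee from slide 5.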

13. **Energy Descent
[Figure-only slide: energy-descent illustration]

14. Reference
- John Hopfield, "Neural networks and physical systems with emergent collective computational abilities," Proceedings of the National Academy of Sciences of the USA, 79(8):2554-2558, April 1982.
- Tutorial on Hopfield Networks