Over-Trained Network Node Removal and Neurotransmitter-Inspired Artificial Neural Networks By: Kyle Wray

Artificial Neural Network Basics
Artificial neural networks are a computational representation of the biological neural networks in our brains. They consist of a layer of input nodes, one or more layers of hidden nodes, and one layer of output nodes. In a basic network, a signal given by the input data is passed to the input nodes, which then send a signal to the hidden layer(s). The signal is modified at each node by that node's weight values and passed through all the hidden layers to the output nodes. Like biological neural networks, artificial neural networks are trained on sample data that (hopefully) covers all the variation the network will encounter in real data. There are a few commonly used training algorithms; the most basic is back-propagation. Learning can be supervised or unsupervised. A minimal sketch of the forward pass follows.
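
Below is a minimal sketch (not the author's code) of the forward pass just described: a signal enters at the input nodes, is weighted and squashed by a sigmoid at each layer, and emerges at the output nodes. The layer sizes and weight layout are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    """Sigmoid transfer function applied at each node."""
    return 1.0 / (1.0 + np.exp(-x))

def forward(inputs, weights):
    """Propagate a signal layer by layer.

    `weights` is a list of weight matrices, one per layer transition;
    column j of a matrix holds the weights feeding node j of the next layer.
    """
    signal = inputs
    for w in weights:
        signal = sigmoid(signal @ w)  # weight, sum, and squash at each layer
    return signal

# Illustrative network: 3 inputs -> 4 hidden nodes -> 1 output.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(3, 4)), rng.normal(size=(4, 1))]
print(forward(np.array([0.2, 0.5, 0.9]), weights))
```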

Artificial Neural Network Basics
Uses for neural networks include:
- Pattern matching (the main use)
- Data processing (converting data into information)
- Regression analysis (predictions and hypothesis testing)

Artificial Neural Network Basics
Some types of neural networks:
- Feed-forward neural network: operates as described previously.
- Recurrent neural network: comes in multiple forms, but the general idea is that signals can propagate in both directions, so it is not limited to the feed-forward flow (input → output).
- Stochastic neural network: introduces random variation through stochastic weights or stochastic transfer functions.
- Spiking neural network: takes the timing of inputs into account, so it can detect signals that vary over time.

Over-Trained Network Node Removal
Inspired by Professor Jin's lecture on a spiking neural network model of a songbird: when he deleted nodes and let the network adjust itself, it produced a modified version of a similar song. The main idea is to train a network, remove a small percentage of its nodes, and then quickly retrain the network on training data containing a new kind of situation. The method can be applied to any kind of artificial neural network. It is useful because an over-trained network has trouble learning new kinds of data; such a network has lost some of its robustness, and this method would let it learn additional things without being completely retrained.

Over-Trained Network Node Removal
Selection methods:
- Pick random nodes (roughly 5-10%).
- Select a group of nodes at random.
- Select a group of nodes that appear to be either mostly turned off or mostly turned on (e.g. weight values close to 0.0 or 1.0).
Removal/modification methods:
- Delete the selected nodes, either by actually removing them or by resetting their values to 0.0, 1.0, or 0.5.
- Shift the weight values closer to 0.0, 1.0, or 0.5.
A sketch of one such pairing follows.
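
As a concrete illustration, here is a minimal sketch of one selection/modification pairing from the lists above: pick roughly 5-10% of a layer's nodes at random and shift the weights feeding each selected node toward 0.5. The function name, weight layout, and `strength` parameter are assumptions made for illustration.

```python
import numpy as np

def shift_random_nodes(weights, fraction=0.08, target=0.5, strength=0.5, rng=None):
    """Shift the incoming weights of a random subset of nodes toward `target`.

    `weights` is an (n_inputs, n_nodes) matrix; column j holds the weights
    feeding node j. `strength` in (0, 1] controls how far the weights move.
    Returns the indices of the modified nodes.
    """
    rng = rng or np.random.default_rng()
    n_nodes = weights.shape[1]
    n_pick = max(1, int(round(fraction * n_nodes)))      # roughly 5-10% of nodes
    picked = rng.choice(n_nodes, size=n_pick, replace=False)
    weights[:, picked] += strength * (target - weights[:, picked])
    return picked
```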

Over-Trained Network Node Removal Experimentation
Network layout: feed-forward, back-propagation, sigmoid transfer function. A minimal training-step sketch follows.
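
For reference, a minimal back-propagation training step for this kind of layout (one hidden layer, sigmoid transfer function, squared-error loss) might look as follows; the layer shapes and learning rate are assumptions, not the author's exact configuration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_step(x, target, w1, w2, lr=0.5):
    """One back-propagation update on a single example (updates w1, w2 in place)."""
    # Forward pass: input -> hidden -> output.
    h = sigmoid(x @ w1)
    y = sigmoid(h @ w2)
    # Backward pass: error deltas at the output, then at the hidden layer.
    delta_out = (y - target) * y * (1.0 - y)        # sigmoid derivative is y*(1-y)
    delta_hid = (delta_out @ w2.T) * h * (1.0 - h)
    # Gradient-descent weight updates.
    w2 -= lr * np.outer(h, delta_out)
    w1 -= lr * np.outer(x, delta_hid)
    return y
```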

Over-Trained Network Node Removal Experimentation
Selection method: random nodes
Removal/modification method: shift weights
Data: 3 increasing values (e.g. 45.2, 66.1, 92.5) or 3 decreasing values (e.g. 91.1, 24.2, 4.9)
Expected output: 0.0 for decreasing, 1.0 for increasing
Training time: 240 loops
Data size: 60 randomly generated sets
A sketch of the data generation follows.
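
A sketch of generating this data set, under the assumption that the values are drawn uniformly from 0-100 (the slide does not state the range):

```python
import random

def make_dataset(n_sets=60, seed=0):
    """Random triples: ascending order -> target 1.0, descending -> target 0.0."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_sets):
        triple = sorted(rng.uniform(0.0, 100.0) for _ in range(3))
        if rng.random() < 0.5:
            data.append((triple, 1.0))        # increasing -> expect 1.0
        else:
            data.append((triple[::-1], 0.0))  # decreasing -> expect 0.0
    return data
```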

Over-Trained Network Node Removal Experimentation
Results:
- Successful recovery of the original data most of the time.
- Some ability to recognize unfamiliar data, but not consistently enough (perhaps 50%).
Conclusions:
- The method did not fully work in this experiment because the network was not large enough; I realized this only after designing the data set and the network.
- In rare cases during testing, the network showed that it could actually adapt in a short amount of training time.

Neurotransmitter-Inspired ANNs
Inspired by Dr. Andrews' lectures on the brain and its neurotransmitters. The implementation adds a variable that controls the level of a given neurotransmitter. This variable is multiplied with each node's weight values, with particular weight values, with just the output nodes, or even with the value each node passes to the next layer. This mimics high or low levels of a neurotransmitter in the brain. It can also let a network correct its own neurotransmitter values, somewhat simulating confidence, depression, etc.: if a network is predicting correctly, it can modify its neurotransmitter levels to yield results closer to the extremes. The idea can be added to most (if not all) artificial neural network implementations. A sketch follows.
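
One simple way to realize this, sketched below, is a single `level` variable that scales each node's weighted sum (equivalent to scaling all of its weight values, one of the options named above): a level above 1.0 steepens the sigmoid and pushes outputs toward the extremes, while a level below 1.0 pulls them toward 0.5. The names and the choice of scaling point are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward_with_neurotransmitter(x, weights, level=1.0):
    """Forward pass where `level` mimics a neurotransmitter concentration.

    level > 1.0: outputs move toward 0.0/1.0 (more 'confident').
    level < 1.0: outputs move toward 0.5 (less 'confident').
    """
    signal = x
    for w in weights:
        signal = sigmoid(level * (signal @ w))  # scale each node's weighted sum
    return signal
```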

Neurotransmitter-Inspired ANNs Experimentation
Network layout: feed-forward, back-propagation, sigmoid transfer function

Neurotransmitter-Inspired ANNs Experimentation
Data: 3 increasing values (e.g. 45.2, 66.1, 92.5) or 3 decreasing values (e.g. 91.1, 24.2, 4.9)
Expected output: 0.0 for decreasing, 1.0 for increasing
Training time: 240 loops
Data size: 60 randomly generated sets

Neurotransmitter-Inspired ANNs Experimentation
Results:
- The ability to affect the output worked well, showing how you can make a network more or less confident.
Conclusions:
- Needs more thought, but shows promise. With more work, the network could adjust its own neurotransmitter values as it runs, letting it give better outputs over time depending on its performance. A sketch of such a self-adjustment rule follows.
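
As a sketch of that self-adjustment (the update rule and constants are assumptions, not the author's implementation), the level could be nudged up after correct predictions and down after mistakes:

```python
def update_level(level, correct, step=0.05, lo=0.5, hi=2.0):
    """Raise the neurotransmitter level on success, lower it on failure."""
    level += step if correct else -step
    return min(hi, max(lo, level))  # keep the level in a sane range
```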