1 Over-Trained Network Node Removal and Neurotransmitter-Inspired Artificial Neural Networks
By: Kyle Wray

2 Artificial Neural Network Basics
Artificial neural networks are a computational representation of the biological neural networks in our brains. They consist of a layer of input nodes, one or more layers of hidden nodes, and one layer of output nodes. In a basic network, a signal given by the input data is passed to the input nodes, which then send signals to the hidden layer(s). The signal is modified at each node by that node's weight values and passed through all the hidden layers to the output nodes. Like biological neural networks, artificial neural networks are trained on sample data that (hopefully) covers all the varying possibilities the network will encounter in real data. There are a few commonly used types of training algorithms; the most basic is called back-propagation. Learning can be supervised or unsupervised.
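To make the signal flow concrete, here is a minimal sketch of a single feed-forward pass in Python; the single hidden layer, the sigmoid transfer function, and all names are illustrative assumptions rather than details from the slides.

    import math
    import random

    def sigmoid(x):
        # Standard sigmoid transfer function, squashing any sum into (0, 1).
        return 1.0 / (1.0 + math.exp(-x))

    def feed_forward(inputs, hidden_weights, output_weights):
        # Each hidden node takes a weighted sum of the input signals and
        # passes it through the transfer function.
        hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)))
                  for ws in hidden_weights]
        # The output layer repeats the same step on the hidden activations.
        return [sigmoid(sum(w * h for w, h in zip(ws, hidden)))
                for ws in output_weights]

    # Example: 3 inputs, 4 hidden nodes, 1 output, random initial weights.
    hidden_weights = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
    output_weights = [[random.uniform(-1, 1) for _ in range(4)]]
    print(feed_forward([0.2, 0.5, 0.9], hidden_weights, output_weights))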

3 Artificial Neural Network Basics
Uses for neural networks include:
- Pattern matching (the main use)
- Data processing (converting data into information)
- Regression analysis (basically predictions and hypothesis testing)

4 Artificial Neural Network Basics
Some types of neural networks:
- Feed-forward neural network: operates as stated previously.
- Recurrent neural network: multiple types, but the general idea is to let signals propagate in both directions, so they are not limited to the one-way (input → output) flow of a feed-forward network.
- Stochastic neural network: introduces random variation through stochastic weights or stochastic transfer functions.
- Spiking neural network: takes the timing of inputs into account, so it can detect signals that vary over time.

5 Over-Trained Network Node Removal
Inspired by Professor Jin's lecture on a spiking neural network model of a songbird: when he deleted nodes and let the network adjust itself, the result was a similar but modified song. The main idea is to train a network, remove a small percentage of its nodes, and then quickly retrain the network on training data that includes a new kind of situation. This can be applied to any kind of artificial neural network. The motivation is that an over-trained network has trouble learning new kinds of data: it has lost some of its robustness, and this method would let it learn additional things without being completely retrained.

6 Over-Trained Network Node Removal
Selection Methods:
- Pick random nodes (roughly 5-10%).
- Select a group of nodes at random.
- Select a group of nodes that appear to be mostly turned off or mostly turned on (e.g. weight values close to 0.0 or 1.0).
Removal/Modification Methods (one combination is sketched below):
- Delete the selected nodes, either by actually removing them or by resetting their weight values to 0.0, 1.0, or 0.5.
- Shift the weight values closer to 0.0, 1.0, or 0.5.
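A minimal sketch of the random-selection, weight-shifting combination; representing the network as a list of per-node weight lists, and the 0.5 target, are assumptions made for illustration.

    import random

    def shift_random_nodes(node_weights, fraction=0.075, target=0.5, strength=0.5):
        # Select roughly 5-10% of the nodes at random (fraction=0.075 splits
        # the difference) and pull each selected node's weights toward the
        # target value, partially "resetting" those nodes.
        count = max(1, int(len(node_weights) * fraction))
        for i in random.sample(range(len(node_weights)), count):
            node_weights[i] = [w + strength * (target - w)
                               for w in node_weights[i]]
        return node_weights

After the shift, the network would be retrained briefly on data that includes the new situation.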

7 Over-Trained Network Node Removal
Experimentation
Network layout: feed-forward, back-propagation, sigmoid transfer function
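Since the experiments all use this layout, here is a hedged sketch of one back-propagation training step for a single-hidden-layer, single-output network with sigmoid transfer functions; the squared-error delta rule and the learning rate are standard textbook assumptions, not details from the slides.

    import math
    import random

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def train_step(inputs, target, hidden_w, output_w, rate=0.1):
        # Forward pass through the hidden layer and the single output node.
        hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)))
                  for ws in hidden_w]
        output = sigmoid(sum(w * h for w, h in zip(output_w, hidden)))
        # Output-layer error term (the sigmoid derivative is out * (1 - out)).
        delta_out = (target - output) * output * (1.0 - output)
        # Hidden-layer error terms, propagated back through the output weights
        # (computed before those weights are updated).
        delta_hidden = [delta_out * output_w[j] * hidden[j] * (1.0 - hidden[j])
                        for j in range(len(hidden))]
        # Weight updates.
        for j in range(len(output_w)):
            output_w[j] += rate * delta_out * hidden[j]
        for j, ws in enumerate(hidden_w):
            for i in range(len(ws)):
                ws[i] += rate * delta_hidden[j] * inputs[i]
        return output

    # Demo: one training step on a single pattern.
    random.seed(0)
    hidden_w = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
    output_w = [random.uniform(-1, 1) for _ in range(4)]
    print(train_step([0.2, 0.5, 0.9], 1.0, hidden_w, output_w))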

8 Over-Trained Network Node Removal
Experimentation
Selection method: random nodes
Removal/modification method: shift weights
Data: 3 increasing values (e.g. 45.2, 66.1, 92.5) or 3 decreasing values (e.g. 91.1, 24.2, 4.9)
Expected output: 0.0 for decreasing, 1.0 for increasing
Training time: 240 loops
Data size: 60 randomly generated sets (see the generation sketch below)
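One possible way to generate the 60 training sets described above; the uniform 0-100 range is an assumption based on the example values.

    import random

    def make_dataset(n_sets=60):
        # Each set is a triple of random values, sorted to be increasing
        # (labeled 1.0) or reversed to be decreasing (labeled 0.0).
        data = []
        for _ in range(n_sets):
            triple = sorted(random.uniform(0.0, 100.0) for _ in range(3))
            if random.random() < 0.5:
                data.append((triple, 1.0))        # increasing -> 1.0
            else:
                data.append((triple[::-1], 0.0))  # decreasing -> 0.0
        return data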

9 Over-Trained Network Node Removal
Experimentation
Results: Successful recovery of the original data most of the time. Some ability to recognize unfamiliar data, but not consistently enough (maybe 50%).
Conclusions: The method did not fully work in this experiment because the network was not large enough; I only realized this after designing the data set and the network. In rare test runs it showed that it could actually adapt in just a short amount of training time.

10 Neurotransmitter-Inspired ANNs
Inspired by Dr. Andrews's lectures on the brain and its neurotransmitters/chemicals. The implementation adds a variable that controls the level of a given neurotransmitter. This variable is then multiplied into each node's weight values, into particular weight values, into just the output nodes, or even into the value passed from each node to the next layer. This mimics high or low levels of a neurotransmitter in the brain. It can also let a network correct its own neurotransmitter values, somewhat simulating confidence, depression, etc.: if a network is predicting correctly, it can modify its neurotransmitter levels to yield results closer to the extremes. This can be added to most (if not all) artificial neural network implementations.
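A minimal sketch of the simplest variant, where one neurotransmitter level scales every weight of a node before the usual weighted sum; the name nt_level and this particular scaling scheme are illustrative assumptions.

    import math

    def modulated_node(inputs, weights, nt_level=1.0):
        # nt_level plays the role of a global neurotransmitter concentration:
        # 1.0 leaves the node unchanged, while higher or lower levels scale
        # every weight, pushing the sigmoid output toward or away from the
        # extremes.
        total = sum((nt_level * w) * x for w, x in zip(weights, inputs))
        return 1.0 / (1.0 + math.exp(-total))

Scaling all weights is only one option; as listed above, the same multiplier could instead be applied to selected weights, to the output nodes only, or to the values passed between layers.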

11 Neurotransmitter-Inspired ANNs
Experimentation
Network layout: feed-forward, back-propagation, sigmoid transfer function

12 Neurotransmitter-Inspired ANNs
Experimentation
Data: 3 increasing values (e.g. 45.2, 66.1, 92.5) or 3 decreasing values (e.g. 91.1, 24.2, 4.9)
Expected output: 0.0 for decreasing, 1.0 for increasing
Training time: 240 loops
Data size: 60 randomly generated sets

13 Neurotransmitter-Inspired ANNs
Experimentation
Results: The ability to affect the output worked well, and shows how you can make a network more or less confident.
Conclusions: Needs some more thought, but shows promise. With some more work, the network could adjust its own neurotransmitter values as it runs, letting it give better output values by adapting to its own performance (a sketch follows below).
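One hedged sketch of that self-adjustment idea: nudge the neurotransmitter level up when a prediction is correct and back toward neutral when it is not; the step size and bounds are assumptions for illustration.

    def adjust_nt_level(nt_level, was_correct, step=0.05, lo=0.5, hi=1.5):
        # Correct predictions push outputs toward the extremes ("confidence");
        # mistakes back the level off toward neutral.
        nt_level += step if was_correct else -step
        return max(lo, min(hi, nt_level))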

