Neural Networks William Lai Chris Rowlett

What are Neural Networks? A type of program that works very differently from conventional, explicitly programmed software. A neural network consists of units that carry out simple computations, linked together to perform a function. It is modeled after the decision-making process of the biological network of neurons in the brain.

The Biology of Neural Networks Neural networks are models of neuron clusters in the brain. Each neuron has dendrites, an axon, terminal buds, and synapses. An action potential is passed down the axon, causing the release of neurotransmitters at the synapse.

Types of Neural Networks: General Supervised: during training, the error is determined by subtracting the network's output from the actual (target) value. Unsupervised: nothing is known of the results; used to classify complicated data. Nonlearning: used for optimization.
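As a tiny illustration of the supervised case, the per-output training error might be computed like this (an illustrative Python sketch, not code from the slides; the function name and example values are made up):

```python
# Supervised learning: error = actual (target) value minus network output,
# computed element-wise for one training example.
def output_errors(targets, outputs):
    return [t - o for t, o in zip(targets, outputs)]

# Target says (1, 0); the network produced (0.8, 0.3).
errors = output_errors([1.0, 0.0], [0.8, 0.3])
print(errors)  # roughly [0.2, -0.3]
```

These error values are what a supervised training procedure then uses to adjust the weights.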

Types of Neural Networks: Specific Perceptrons: a subset of feed-forward networks containing only one input layer and one output layer; each input unit links directly to output units. Feed-forward networks: a.k.a. directed acyclic graphs; each unit links only to units in subsequent layers, which allows for hidden layers. Recurrent networks: not very well understood; units can link to units in the same layer or even previous layers. Example: the brain.
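The feed-forward (acyclic, layer-to-layer) structure can be sketched in a few lines of Python. This is an illustrative sketch, not code from the slides; the threshold units and hand-picked weights below are assumptions chosen so the hidden layer computes XOR, a function that needs more than one layer:

```python
# Feed-forward pass: each unit takes input only from the previous layer,
# so computation flows through the directed acyclic graph with no cycles.
def step(x):
    return 1 if x > 0 else 0  # simple threshold activation

def feed_forward(inputs, layers):
    """layers: a list of layers; each layer is a list of (weights, bias) units."""
    activations = inputs
    for layer in layers:
        activations = [step(sum(w * a for w, a in zip(weights, activations)) + bias)
                       for weights, bias in layer]
    return activations

# 2 inputs -> hidden layer of 2 units -> 1 output unit.
hidden = [([1.0, 1.0], -0.5),    # fires if either input is 1 (OR-like)
          ([-1.0, -1.0], 1.5)]   # fires unless both inputs are 1 (NAND-like)
output = [([1.0, 1.0], -1.5)]    # fires only if both hidden units fire (AND)
net = [hidden, output]           # together: XOR
```

Calling `feed_forward([0, 1], net)` follows the arrows layer by layer and returns `[1]`; the hidden layer is what lets the network compute XOR.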

Neural Net Capabilities Neural nets can do anything a normal digital computer can do, such as perform basic or complex computations. They are used for functional approximation/mapping and for classification, and they are good at ignoring "noise."

Neural Net Limitations Problems like approximating y = 1/x on the open interval (0, 1). (Pseudo-)random number predictors. Factoring integers or determining prime numbers. Decryption.

History of Neural Networks McCulloch and Pitts (1943) co-wrote the first paper on a possible model for a neuron. Widrow and Hoff (1959) developed ADALINE and MADALINE; MADALINE was the first neural network to try to solve a real-world problem, eliminating echo in phone lines. The von Neumann architecture then took over for about 20 years (1960s to 1980s).

Early Applications Checkers (Samuel, 1952): at first it played very poorly, as a novice, but with practice games it eventually beat its author. ADALINE (Widrow and Hoff, 1959): recognizes binary patterns in streaming data. MADALINE (same authors): Multiple ADAptive LINear Elements; uses an adaptive filter that eliminates echoes on phone lines.

Modern Practical Applications Pattern recognition, including handwriting deciphering and voice understanding. "Predictability of High-Dissipation Auroral Activity." Image analysis, such as finding tanks hiding in trees (a case of the network cheating). Material classification: "A real-time system for the characterization of sheep feeding phases from acoustic signals of jaw sounds."

How Do Neural Networks Relate to Artificial Intelligence? Neural networks are usually geared toward some application, so they represent the practical action aspect of AI. Since neural networks are modeled after human brains, they are an imitation of human action; however, they can be taught to act rationally instead. Neural networks can modify their own weights and learn.

The Future of Neural Networks Pulsed neural networks. The AI behind a good Go-playing agent. Increased speed through dedicated neural-network chips. Robots that can see, feel, and predict the world around them. Improved stock prediction. Common usage of self-driving cars. Applications involving the Human Genome Project. Self-diagnosis of medical problems using neural networks.

Past Difficulties The single-layer approach limited applications. Converting the Widrow-Hoff technique for use with multiple layers proved difficult. Learning functions were often poorly chosen and poorly derived. High expectations and early failures led to loss of funding.

Recurring Difficulties Cheating: exactly what a neural net is doing to get its solutions is unknown, and therefore it can cheat to find the solution as opposed to finding a reliable algorithm. Memorization: overfitting the training data without generalization.

Describing Neural Net Units All units have input values a_j. All input values are weighted: each a_j is multiplied by the link's weight W_j,i. All weighted inputs are summed, generating in_i. The unit's activation function is applied to in_i, generating the activation value a_i. The activation value is output to every destination of the current unit's links.
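The steps above (weighted sum, then activation) can be sketched for a single unit as follows. This is an illustrative sketch, not the slides' code; the logistic sigmoid is one common choice of activation function, and the input and weight values are arbitrary:

```python
import math

def unit_output(inputs, weights, bias):
    """One unit: weighted sum in_i = sum over j of W_j,i * a_j (plus a bias),
    then activation value a_i = g(in_i), with g the logistic sigmoid here."""
    in_i = sum(w_ji * a_j for w_ji, a_j in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-in_i))  # a_i, passed on along the unit's links

# Two inputs a_1 = 1.0 and a_2 = 0.0 with weights 0.5 and -0.5: in_i = 0.5.
a_i = unit_output([1.0, 0.0], [0.5, -0.5], 0.0)
```

Here `a_i` comes out to sigmoid(0.5), roughly 0.62, which the unit would then pass to every unit it links to.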

Perceptrons Single-layer neural networks. They require linearly separable functions; when the function is linearly separable, training is guaranteed to converge on a solution. OR is linearly separable and can be learned; XOR is not.
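The perceptron learning rule on the separable OR function can be sketched as follows (an illustrative Python sketch with arbitrary learning rate and epoch count, not code from the slides):

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Perceptron learning rule: nudge each weight by lr * error * input."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0  # threshold unit
            err = target - out
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# OR is linearly separable, so training converges on a correct solution.
OR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(OR)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

Running the same loop on XOR instead would cycle forever without settling, since no single line separates XOR's positive and negative examples.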

Back-Propagation Back-propagation uses a special function to divide the error at the outputs among all the weights of the network. The result is a slow-learning method for solving many real-world problems.
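A minimal version of this error-dividing scheme can be sketched for a tiny 2-2-1 sigmoid network trained on XOR. This is an illustrative sketch under assumed choices (logistic activations, squared error, learning rate 0.5, a fixed random seed), not the slides' implementation, and backpropagation can get stuck in local minima, so the code only tracks that the error shrinks:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def train(data, epochs=5000, lr=0.5, seed=1):
    """2-input, 2-hidden, 1-output network; output error is divided among
    the weights via the chain rule, layer by layer, back from the output."""
    rng = random.Random(seed)
    w_h = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(2)]
    b_h = [rng.uniform(-1, 1) for _ in range(2)]
    w_o = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
    b_o = rng.uniform(-1, 1)

    def predict(x):
        h = [sigmoid(w_h[j][0] * x[0] + w_h[j][1] * x[1] + b_h[j]) for j in range(2)]
        return sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + b_o)

    def mse():
        return sum((t - predict(x)) ** 2 for x, t in data) / len(data)

    before = mse()
    for _ in range(epochs):
        for x, t in data:
            h = [sigmoid(w_h[j][0] * x[0] + w_h[j][1] * x[1] + b_h[j]) for j in range(2)]
            o = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + b_o)
            d_o = (t - o) * o * (1 - o)                       # output error * slope
            d_h = [d_o * w_o[j] * h[j] * (1 - h[j]) for j in range(2)]  # shared back
            for j in range(2):
                w_o[j] += lr * d_o * h[j]
                w_h[j][0] += lr * d_h[j] * x[0]
                w_h[j][1] += lr * d_h[j] * x[1]
                b_h[j] += lr * d_h[j]
            b_o += lr * d_o
    return before, mse()

before, after = train(XOR)
```

Many small weight nudges over thousands of epochs are what make this a slow-learning method; the mean squared error `after` training is lower than `before`.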

Organic vs. Artificial Computer cycle times are on the order of nanoseconds, while neurons take milliseconds. Computers compute the results of each neuron sequentially, while all neurons in the brain fire simultaneously every cycle. Result: massive parallelism makes brains a billion times faster than computers, even though computer bits can cycle a million times faster than neurons.

Questions?