INTRODUCTION TO ARTIFICIAL INTELLIGENCE Massimo Poesio LECTURE 15: Artificial Neural Networks

ARTIFICIAL NEURAL NETWORKS Analogy to biological neural systems, the most robust learning systems we know. Attempt to understand natural biological systems through computational modeling. Massive parallelism allows for computational efficiency. Help understand the "distributed" nature of neural representations (rather than "localist" representations), which allows robustness and graceful degradation. Intelligent behavior as an "emergent" property of a large number of simple units rather than from explicitly encoded symbolic rules and algorithms.

Real Neurons Cell structures: the cell body, dendrites, the axon, and synaptic terminals.

Neural Communication The electrical potential across the cell membrane exhibits spikes called action potentials. A spike originates in the cell body, travels down the axon, and causes synaptic terminals to release neurotransmitters. The chemical diffuses across the synapse to the dendrites of other neurons. Neurotransmitters can be excitatory or inhibitory. If the net input of neurotransmitters to a neuron from other neurons is excitatory and exceeds some threshold, it fires an action potential.

NEURAL SPEED CONSTRAINTS Neurons have a "switching time" on the order of a few milliseconds, compared to nanoseconds for current computing hardware. However, neural systems can perform complex cognitive tasks (vision, speech understanding) in tenths of a second. There is only time for performing about 100 serial steps in this time frame, compared to orders of magnitude more for current computers. Must be exploiting "massive parallelism": the human brain has about 10^11 neurons with an average of 10^4 connections each.

Real Neural Learning Synapses change size and strength with experience. Hebbian learning: When two connected neurons are firing at the same time, the strength of the synapse between them increases. “Neurons that fire together, wire together.”

ARTIFICIAL NEURAL NETWORKS Inputs x_i arrive through pre-synaptic connections. Synaptic efficacy is modeled using real weights w_i. The response of the neuron is a nonlinear function f of its weighted inputs.
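As a concrete illustration, here is a minimal sketch of this neuron model in Python, assuming a sigmoid for the nonlinearity f (the weights and inputs below are arbitrary illustrative values):

```python
import math

def sigmoid(net):
    """A common choice for the nonlinearity f."""
    return 1.0 / (1.0 + math.exp(-net))

def neuron_response(inputs, weights):
    """Response of one unit: f applied to the weighted sum of its inputs."""
    net = sum(w_i * x_i for w_i, x_i in zip(weights, inputs))
    return sigmoid(net)

print(neuron_response([1.0, 0.5], [0.8, -0.3]))  # ~0.657
```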

Inputs To Neurons Arise from other neurons or from outside the network Nodes whose inputs arise outside the network are called input nodes and simply copy values An input may excite or inhibit the response of the neuron to which it is applied, depending upon the weight of the connection

Weights Represent synaptic efficacy and may be excitatory or inhibitory. Normally, positive weights are considered excitatory while negative weights are thought of as inhibitory. Learning is the process of modifying the weights in order to produce a network that performs some function.

ARTIFICIAL NEURAL NETWORK LEARNING Learning approach based on modeling adaptation in biological neural systems. Perceptron: initial algorithm for learning simple neural networks (single layer), developed in the 1950s. Backpropagation: more complex algorithm for learning multi-layer neural networks, developed in the 1980s.

SINGLE LAYER NETWORKS Model the network as a graph with cells as nodes and synaptic connections as weighted edges from node i to node j, with weight w_ji. Model the net input to cell j as net_j = Σ_i w_ji o_i. Cell output is a step function of the net input: o_j = 1 if net_j > T_j, and o_j = 0 otherwise (T_j is the threshold for unit j). [Figure: unit 1 receiving weighted connections w_12 through w_16 from units 2-6, and the step-shaped plot of o_j against net_j with the jump at T_j.]

Neural Computation McCulloch and Pitts (1943) showed how single-layer neurons could compute logical functions and be used to construct finite-state machines. They can be used to simulate logic gates: AND: let all w_ji be T_j/n, where n is the number of inputs. OR: let all w_ji be T_j. NOT: let the threshold be 0, with a single input with a negative weight. One can build arbitrary logic circuits, sequential machines, and computers with such gates. Given negated inputs, a two-layer network can compute any boolean function using a two-level AND-OR network.
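A quick sketch of these gate recipes with a linear threshold unit. One assumption: the unit fires when its net input reaches the threshold (net ≥ T), which is what the AND and OR recipes require:

```python
# Threshold unit: output 1 when the weighted sum reaches the threshold T.
def threshold_unit(inputs, weights, T):
    net = sum(w * x for w, x in zip(weights, inputs))
    return 1 if net >= T else 0

T = 1.0
AND = lambda x, y: threshold_unit([x, y], [T / 2, T / 2], T)  # all weights T/n
OR  = lambda x, y: threshold_unit([x, y], [T, T], T)          # all weights T
NOT = lambda x:    threshold_unit([x], [-1.0], 0.0)           # threshold 0, negative weight

for a in (0, 1):
    for b in (0, 1):
        print(a, b, AND(a, b), OR(a, b))
print(NOT(0), NOT(1))  # 1 0
```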

Perceptron Training Assume supervised training examples giving the desired output for a unit given a set of known input activations. Learn synaptic weights so that the unit produces the correct output for each example. The perceptron uses an iterative update algorithm to learn a correct set of weights.

Perceptron Learning Rule Update weights by w_ji = w_ji + η (t_j − o_j) o_i, where η is the "learning rate" and t_j is the teacher-specified output for unit j. Equivalent to the rules: If the output is correct, do nothing. If the output is high, lower the weights on active inputs. If the output is low, increase the weights on active inputs. Also adjust the threshold to compensate: T_j = T_j − η (t_j − o_j).

Perceptron Learning Algorithm Iteratively update weights until convergence; each execution of the outer loop is typically called an epoch.
Initialize weights to random values.
Until the outputs of all training examples are correct:
  For each training pair E, do:
    Compute the current output o_j for E given its inputs.
    Compare the current output to the target value t_j for E.
    Update the synaptic weights and threshold using the learning rule.
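Here is a runnable sketch of this algorithm, learning the AND function (which is linearly separable, so the loop converges); the learning rate, initial weight range, and epoch cap are illustrative choices:

```python
import random

def train_perceptron(examples, eta=0.1, max_epochs=100):
    n = len(examples[0][0])
    w = [random.uniform(-0.5, 0.5) for _ in range(n)]  # random initial weights
    T = random.uniform(-0.5, 0.5)                      # random initial threshold
    for _ in range(max_epochs):                        # one pass = one epoch
        all_correct = True
        for x, t in examples:
            o = 1 if sum(wi * xi for wi, xi in zip(w, x)) > T else 0
            if o != t:
                all_correct = False
                # Learning rule: raise/lower weights on active inputs...
                w = [wi + eta * (t - o) * xi for wi, xi in zip(w, x)]
                # ...and adjust the threshold to compensate.
                T = T - eta * (t - o)
        if all_correct:
            break
    return w, T

AND_examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
print(train_perceptron(AND_examples))
```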

Perceptron as a Linear Separator Since the perceptron uses a linear threshold function, it is searching for a linear separator that discriminates the classes: the decision boundary Σ_i w_ji o_i = T_j is a line in two dimensions (e.g., in the (o_2, o_3) plane), or a hyperplane in n-dimensional space. [Figure: a line separating positive from negative examples in the (o_2, o_3) plane.]

Concept a Perceptron Cannot Learn It cannot learn exclusive-or, or the parity function in general. [Figure: the four XOR examples in the (o_2, o_3) plane, with positive examples at (0,1) and (1,0) and negative examples at (0,0) and (1,1); no single line separates the two classes.]

Perceptron Limits The system obviously cannot learn concepts it cannot represent. Minsky and Papert (1969) wrote a book analyzing the perceptron and demonstrating many functions it could not learn. These results discouraged further research on neural nets, and symbolic AI became the dominant paradigm.

Multi-Layer Networks Multi-layer networks can represent arbitrary functions, but an effective learning algorithm for such networks was thought to be difficult. A typical multi-layer network consists of an input, a hidden, and an output layer, each fully connected to the next, with activation feeding forward. The weights determine the function computed. Given an arbitrary number of hidden units, any boolean function can be computed with a single hidden layer. [Figure: layered network with activation feeding forward from the input to the hidden to the output layer.]

Output The response function is normally nonlinear. Examples include the sigmoid and piecewise-linear functions.

Differentiable Output Function Need a non-linear output function to move beyond linear functions: a multi-layer linear network is still linear. The standard solution is to use the non-linear, differentiable sigmoidal "logistic" function: o_j = 1 / (1 + e^−(net_j − T_j)). [Figure: sigmoid curve rising from 0 to 1 as net_j passes T_j.] Can also use tanh or a Gaussian output function.
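A small sketch of this function and the property backprop relies on: the derivative of the logistic with respect to its net input is o(1 − o), which is exactly the factor o_j(1 − o_j) that appears in the error terms on the following slides.

```python
import math

def logistic(net, T=0.0):
    """Sigmoidal output function: 1 / (1 + e^-(net - T))."""
    return 1.0 / (1.0 + math.exp(-(net - T)))

o = logistic(0.7, T=0.2)
print(o)            # output of the unit
print(o * (1 - o))  # derivative of the logistic w.r.t. net, expressed via o
```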

LEARNING WITH MULTI-LAYER NETWORKS The BACKPROPAGATION algorithm makes it possible to modify the weights in a network consisting of several layers by 'propagating back' corrections.

Error Backpropagation First calculate the error of the output units and use this to change the top layer of weights (the weights into each output unit j). For example, with current output o_j = 0.2 and correct output t_j = 1.0, the error is δ_j = o_j(1 − o_j)(t_j − o_j) = 0.2(1 − 0.2)(1 − 0.2) = 0.128.

Error Backpropagation Next calculate the error for the hidden units based on the errors of the output units they feed into.

Error Backpropagation Finally update the bottom layer of weights (the weights into each hidden unit j) based on the errors calculated for the hidden units.

Backpropagation Learning Rule Each weight is changed by Δw_ji = η δ_j o_i, where η is a constant called the learning rate and δ_j is the error measure for unit j: for an output unit, δ_j = o_j(1 − o_j)(t_j − o_j), where t_j is the correct teacher output for unit j; for a hidden unit, δ_j = o_j(1 − o_j) Σ_k δ_k w_kj, summing over the units k that j feeds into.

Backpropagation Training Algorithm Create a 3-layer network with H hidden units and full connectivity between layers. Set weights to small random real values.
Until all training examples produce the correct value (within ε), or the mean squared error ceases to decrease, or other termination criteria are met:
  Begin epoch
  For each training example d, do:
    Calculate the network output for d's input values.
    Compute the error between the current output and the correct output for d.
    Update the weights by backpropagating the error, using the learning rule.
  End epoch

Backpropagation Preparation Training set: a collection of input-output patterns used to train the network. Testing set: a collection of input-output patterns used to assess network performance. Learning rate η: a scalar parameter, analogous to step size in numerical integration, used to set the rate of weight adjustments.

Network Error Total Sum-Squared Error: TSSE = ½ Σ_p Σ_j (T_pj − O_pj)², summing over patterns p and output neurons j. Root Mean-Squared Error: RMSE = √(2 · TSSE / (N_p · N_o)), where N_p is the number of patterns and N_o is the number of output neurons.
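A sketch of these two measures for a batch of patterns (the ½ and the factor of 2 follow the definitions above; other texts omit them):

```python
import math

def tsse(targets, outputs):
    """Total sum-squared error over all patterns p and output neurons j."""
    return 0.5 * sum((t - o) ** 2
                     for tp, op in zip(targets, outputs)
                     for t, o in zip(tp, op))

def rmse(targets, outputs):
    """Root mean-squared error derived from TSSE."""
    n_patterns, n_outputs = len(targets), len(targets[0])
    return math.sqrt(2.0 * tsse(targets, outputs) / (n_patterns * n_outputs))

print(rmse([[1.0], [0.0]], [[0.9], [0.2]]))  # ~0.158
```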

A Pseudo-Code Algorithm
Randomly choose the initial weights.
While the error is too large:
  For each training pattern:
    Apply the inputs to the network.
    Calculate the output for every neuron, from the input layer through the hidden layer(s) to the output layer.
    Calculate the error at the outputs.
    Use the output error to compute error signals for pre-output layers.
    Use the error signals to compute weight adjustments.
    Apply the weight adjustments.
  Periodically evaluate the network performance.

Apply Inputs From A Pattern Apply the value of each input parameter to each input node. Input nodes compute only the identity function.

Calculate Outputs For Each Neuron Based On The Pattern The output from neuron j for pattern p is O_pj = f(net_pj) with net_pj = Σ_k W_jk O_pk, where k ranges over the input indices of neuron j, W_jk is the weight on the connection from input k to neuron j, and f is the sigmoidal output function.

Calculate The Error Signal For Each Output Neuron The output neuron error signal d_pj is given by d_pj = (T_pj − O_pj) O_pj (1 − O_pj), where T_pj is the target value of output neuron j for pattern p and O_pj is the actual output value of output neuron j for pattern p.

Calculate The Error Signal For Each Hidden Neuron The hidden neuron error signal d_pj is given by d_pj = O_pj (1 − O_pj) Σ_k d_pk W_kj, where d_pk is the error signal of a post-synaptic neuron k and W_kj is the weight of the connection from hidden neuron j to the post-synaptic neuron k.

Calculate And Apply Weight Adjustments Compute the weight adjustments ΔW_ji = η d_pj O_pi and apply them: W_ji = W_ji + ΔW_ji.
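Putting the preceding steps together, here is a self-contained sketch of the whole procedure: a 2-2-1 network trained on XOR by stochastic backpropagation. The layer sizes, learning rate, epoch count, and random seed are illustrative choices; an unlucky initialization can stall in a poor local minimum, in which case a different seed or more epochs helps.

```python
import math
import random

random.seed(0)

def logistic(net):
    return 1.0 / (1.0 + math.exp(-net))

# hidden[j] holds the weights from the inputs to hidden unit j, plus a
# trailing bias weight (the threshold folded in as a weight from a constant
# input of 1). out holds the hidden-to-output weights plus a bias weight.
n_in, n_hid = 2, 2
hidden = [[random.uniform(-0.5, 0.5) for _ in range(n_in + 1)] for _ in range(n_hid)]
out = [random.uniform(-0.5, 0.5) for _ in range(n_hid + 1)]

patterns = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
eta = 0.5

def forward(x):
    """Feed activation forward: inputs -> hidden layer -> output."""
    h = [logistic(sum(w * xi for w, xi in zip(ws, x)) + ws[-1]) for ws in hidden]
    o = logistic(sum(w * hi for w, hi in zip(out, h)) + out[-1])
    return h, o

for epoch in range(20000):
    for x, t in patterns:
        h, o = forward(x)
        # Error signals: d = (T - O) O (1 - O) at the output;
        # d = O (1 - O) * sum_k d_k W_kj at the hidden layer.
        d_out = (t - o) * o * (1 - o)
        d_hid = [hj * (1 - hj) * d_out * out[j] for j, hj in enumerate(h)]
        # Weight adjustments: DW_ji = eta * d_pj * O_pi (bias input is 1).
        for j in range(n_hid):
            out[j] += eta * d_out * h[j]
        out[-1] += eta * d_out
        for j in range(n_hid):
            for k in range(n_in):
                hidden[j][k] += eta * d_hid[j] * x[k]
            hidden[j][-1] += eta * d_hid[j]

for x, t in patterns:
    print(x, t, round(forward(x)[1], 3))  # outputs approach the XOR targets
```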

Representational Power Boolean functions: Any boolean function can be represented by a two-layer network with sufficient hidden units. Continuous functions: Any bounded continuous function can be approximated with arbitrarily small error by a two-layer network. Sigmoid functions can act as a set of basis functions for composing more complex functions, like sine waves in Fourier analysis. Arbitrary function: Any function can be approximated to arbitrary accuracy by a three-layer network.

Sample Learned XOR Network [Figure: a trained network with two hidden units A and B over inputs X and Y; the learned weights include values such as 3.11, 6.96, 7.38, 5.24, 2.03, 3.58, 3.6, 5.57, and 5.74.] Hidden unit A represents ¬(X ∧ Y). Hidden unit B represents ¬(X ∨ Y). Output O represents A ∧ ¬B = ¬(X ∧ Y) ∧ (X ∨ Y) = X ⊕ Y.
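A quick check of this logical reading, assuming (as above) that A computes NAND, B computes NOR, and the output computes A ∧ ¬B:

```python
for X in (0, 1):
    for Y in (0, 1):
        A = int(not (X and Y))   # hidden unit A: ¬(X ∧ Y)
        B = int(not (X or Y))    # hidden unit B: ¬(X ∨ Y)
        O = int(A and not B)     # output: A ∧ ¬B
        print(X, Y, O, X ^ Y)    # O matches XOR on every row
```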

Hidden Unit Representations Trained hidden units can be seen as newly constructed features that make the target concept linearly separable in the transformed space. In many real domains, hidden units can be interpreted as representing meaningful features such as vowel detectors or edge detectors. However, the hidden layer can also become a distributed representation of the input in which each individual unit is not easily interpretable as a meaningful feature.

Successful Applications Text-to-speech (NETtalk). Fraud detection. Financial applications (HNC, eventually bought by Fair Isaac). Chemical plant control (Pavilion Technologies). Automated vehicles. Game playing (Neurogammon). Handwriting recognition.

Analysis of the ANN Algorithm Advantages: produces good results in complex domains; suitable for both discrete and continuous data (especially strong on continuous domains); testing is very fast. Disadvantages: training is relatively slow; learned results are more difficult for users to interpret than learned rules (compared with decision trees); empirical risk minimization (ERM) makes an ANN minimize training error, which may lead to overfitting.

READINGS Mitchell, Machine Learning, ch. 4

ACKNOWLEDGMENTS Several slides come from Ray Mooney's course on AI; other slides are from Doug Gray and David Poole.