Introduction
CS/CMPE 537 – Neural Networks (Sp 2004/2005)
Asim Karim @ LUMS
Biological Inspiration
- The brain is a highly complex, nonlinear, and parallel computer
- Simple processing units called neurons
- Cycle times in milliseconds
- Massive number of neurons (estimated at 10^11 in humans)
- Massive interconnection (estimated at 60 × 10^12 connections)
- The brain can perform certain computations (e.g., pattern recognition, perception) many times faster than the fastest digital computers available today
Comparison of the Brain and Digital Computer: Functionality

Brain            | Digital computer
-----------------|----------------------
Fault tolerant   | Intolerant to errors
Adaptive         | Preprogrammed
Learnable        | Preprogrammed
Trained          | Designed
Nonlinear        | Linear, primarily
Comparison of the Brain and Digital Computer: Structure

Brain                   | Digital computer
------------------------|------------------------------
Simple processing unit  | Complex processing unit
Large number of units   | Smaller number of units
Massive interconnection | Little or no interconnection
Dynamic                 | Static
Slow switching          | Faster switching
What is an Artificial Neural Network? (1)
Conceptual definitions:
- An artificial neural network (neural network) is a machine designed to model the way in which the brain performs a particular task or function of interest; the network is either implemented in hardware or simulated in software on a digital computer.
- A neural network mimics the brain or nervous system. In what sense?
  - In structure (simple processing units, massive interconnection, etc.)
  - In functionality (learning, adaptability, fault tolerance, etc.)
What is an Artificial Neural Network? (2)
A pragmatic definition:
- A neural network is a massively parallel distributed computing system that has a natural propensity for storing experiential knowledge and making it available for use. It resembles the brain in two respects:
  - Knowledge is acquired by the network through a learning process (called training)
  - Interneuron connection strengths, known as synaptic weights, are used to store the knowledge
Other names for neural networks:
- Neurocomputers
- Connectionist networks
- Parallel distributed processors
Significance of Artificial Neural Networks
Why study neural networks?
- To develop artificial systems that possess human-like characteristics (thought and action): neural computation
- To understand the working of the brain: computational neuroscience
These two fields are not mutually exclusive; knowledge gained in each helps developments in the other. Our primary concern will be neural computation.
Characteristics of Neural Networks
- Nonlinearity
- Input-output mapping
- Adaptivity
- Evidential response
- Contextual information
- Fault tolerance
- VLSI implementability
- Uniformity of analysis and design
- Neurobiological analogy
Structural Levels of Organization in the Brain
Conceptual organization of the nervous system:
Receptors → Neural network → Effectors
Structural Levels in the Brain [figure]
Biological Neuron [figure]
Biological Neuron
- Cell body: simple processing unit
- Axon: output link
- Dendrite: input link
- Synapse: connection between neurons
Neuron Model (1)
[Figure: block diagram of the neuron model, showing the bias $b_k$ and the induced local field $v_k$]
Neuron Model (2)
Properties:
- A set of input signals $x_1$ to $x_p$
- A set of connection weights $w_{kj}$ associated with each of the $p$ connecting links (synapses)
- A processing unit (or neuron) $k$ that performs the weighted summation of the inputs (also known as an adder)
- An activation function $\varphi(\cdot)$ that limits the amplitude of the output, usually to the range $[0, 1]$ or $[-1, 1]$ (also known as a squashing or transfer function)
- A bias $b_k$ that has the effect of lowering or increasing the net input to the activation function
Neuron Model (3)
Mathematically (bias applied at the activation function):
$$u_k = \sum_{j=1}^{p} w_{kj} x_j \quad \text{(linear combiner)}, \qquad y_k = \varphi(u_k + b_k)$$
Mathematically (bias absorbed as a weight on a fixed input):
$$v_k = \sum_{j=0}^{p} w_{kj} x_j \quad \text{(linear combiner)}, \qquad y_k = \varphi(v_k)$$
where $v_k = u_k + b_k$, $x_0 = +1$, and $w_{k0} = b_k$.
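As a minimal sketch of this model (not from the slides), the Python function below computes the linear combiner and applies an activation function; the function name, the logistic default, and the example weights, bias, and inputs are all illustrative assumptions.

```python
import math

def neuron_output(weights, bias, inputs, activation=None):
    """Compute y_k = phi(sum_j w_kj * x_j + b_k) for a single neuron k."""
    if activation is None:
        activation = lambda v: 1.0 / (1.0 + math.exp(-v))  # logistic sigmoid
    # Linear combiner: u_k = sum_j w_kj * x_j
    u = sum(w * x for w, x in zip(weights, inputs))
    # Induced local field: v_k = u_k + b_k
    v = u + bias
    return activation(v)

# Example: 3 inputs with arbitrary weights and bias (illustrative values only)
y = neuron_output(weights=[0.5, -0.2, 0.1], bias=0.3, inputs=[1.0, 2.0, 3.0])
print(y)  # a value in (0, 1) because of the sigmoid
```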
Activation Functions (1)
Threshold function [figure]
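The threshold (Heaviside) function, consistent with the $[0, 1]$ output range above:
$$\varphi(v) = \begin{cases} 1, & v \ge 0 \\ 0, & v < 0 \end{cases}$$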
Activation Functions (2)
Piecewise linear function [figure]
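One common saturating-ramp form, linear in the middle region and saturating at 0 and 1:
$$\varphi(v) = \begin{cases} 1, & v \ge \tfrac{1}{2} \\ v + \tfrac{1}{2}, & -\tfrac{1}{2} < v < \tfrac{1}{2} \\ 0, & v \le -\tfrac{1}{2} \end{cases}$$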
Activation Functions (3)
Sigmoid function [figure]
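The logistic sigmoid, with slope parameter $a$:
$$\varphi(v) = \frac{1}{1 + e^{-av}}$$
As $a \to \infty$, the sigmoid approaches the threshold function.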
Signal-Flow Graph Representation
- A cleaner (simpler) representation that captures the various elements of a neural network and the flow of information (signals)
- Directed graph: a network of nodes and directed links (arrows)
- Rules for signal transmission
Signal-Flow Graph Representation of Neuron [figure]
Neural Net Definition (Signal-Flow Representation)
A neural network is a directed graph consisting of nodes with interconnecting synaptic and activation links, characterized by four properties:
- Each neuron is represented by a set of linear synaptic links, an externally applied threshold (bias), and a nonlinear activation link
- The synaptic links of a neuron weight their respective input signals
- The weighted sum of the input signals defines the total internal activity level of the neuron in question
- The activation link squashes the internal activity level of the neuron to produce an output
Architectural Graph Representation
- A signal-flow graph that omits the details of signal flow inside the neurons
- A partially complete directed graph
Classification of Neural Networks (1)
Number of layers:
- Single-layer networks
- Multilayer networks
Direction of information (signal) flow:
- Feedforward (see the sketch below)
- Recurrent
Connectivity:
- Fully connected
- Partially connected
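As an illustrative sketch (the function names and values are hypothetical, not from the slides), a fully connected single-layer feedforward network is just a bank of neurons that all see the same inputs:

```python
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def single_layer_forward(W, b, x):
    """One feedforward pass: y_k = phi(sum_j W[k][j] * x[j] + b[k])."""
    return [sigmoid(sum(w * xj for w, xj in zip(row, x)) + bk)
            for row, bk in zip(W, b)]

# Fully connected: every output neuron sees every input (illustrative values only)
W = [[0.2, -0.5, 0.1],   # weights into neuron 1
     [0.7,  0.0, -0.3]]  # weights into neuron 2
b = [0.1, -0.2]
print(single_layer_forward(W, b, [1.0, 0.5, -1.0]))  # two outputs in (0, 1)
```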
Classification of Neural Networks (2)
Activation function:
- Threshold networks
- Linear networks
- Nonlinear networks
- Radial-basis function networks
Learning methodology:
- Supervised
- Unsupervised
- Reinforcement
Training algorithm:
- Static
- Dynamic
Example [figure slides]
Design Decisions
- Architecture: number of layers, units, and links; connectivity
- Activation functions
- Learning: supervised, unsupervised, competitive
- Training algorithm: iterative, static, dynamic, epoch-based, sample-based
Knowledge Representation (1)
- Neural networks encode knowledge and make it available for information processing (knowledge representation)
- What is knowledge? Information or models used to interpret, predict, and appropriately respond to the outside world (environment)
- How can knowledge be represented?
  - Explicitly: what is made explicit
  - Physically encoded: how it is physically encoded
- The quality of the knowledge representation generally translates into the quality of the solution: a better representation yields a better solution
Knowledge Representation (2)
- Knowledge is encoded in the free parameters of the network. Assuming the architecture is fixed, the free parameters are the weights and bias values
- Knowledge representation in neural nets is complex
  - Few theories exist that relate a given weight, for example, to a particular piece of information
- Hmm, so neural nets are worthless? Nope!
Learning
- Learning: to acquire and maintain knowledge of interest
- Knowledge of interest: knowledge of the environment that will enable the machine (neural net) to achieve its goals
  - Prior information
  - Current information
- Knowledge can be built into neural networks from input-output examples by an automatic process of learning, commonly known as training (as sketched below)
- Training algorithms
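As a preview of the training algorithms covered in the next lectures, here is a minimal sketch of supervised learning from input-output examples using the classic perceptron rule; the function name train_perceptron, the learning rate eta, and the toy AND dataset are illustrative assumptions, not the course's prescribed algorithm.

```python
def train_perceptron(samples, n_inputs, eta=0.1, epochs=10):
    """Learn weights and bias from (inputs, target) examples with the perceptron rule."""
    w = [0.0] * n_inputs
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            # Threshold activation on the induced local field
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0
            err = target - y                       # 0 when the prediction is right
            w = [wi + eta * err * xi for wi, xi in zip(w, x)]
            b += eta * err                         # bias treated as weight on x0 = +1
    return w, b

# Toy example: learn logical AND (illustrative data only)
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data, n_inputs=2)
print(w, b)
```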
Some General Knowledge Representation Rules
- Rule 1: Similar inputs from similar classes should usually produce similar representations inside the network, and should therefore be classified as belonging to the same category
- Rule 2: Inputs to be characterized as separate classes should be given widely different representations in the network
- Rule 3: If a particular feature is important, then a large number of neurons should be involved in its representation in the network
- Rule 4: Prior knowledge and invariances should be built into the design of the network, thus simplifying learning
Building in Prior Knowledge
- Prior knowledge leads to a specialized structure
- Benefits of a specialized structure: biological plausibility, less complexity, fewer free parameters, faster training, fewer examples needed, better generalization, etc.
- How to build in prior knowledge? There are no hard and fast rules; in general, use domain knowledge to reduce the complexity of the neural network based on what we know about its performance characteristics
Building in Invariance
- Invariance? Fault tolerance; immunity to transformations
- Invariance by structure
- Invariance by training
- Invariant feature space
Building in Current Knowledge
- Learning and training algorithms
- Subject of the next two lectures (chapter 2)
An Example
A neural net for signature verification:
- Prior knowledge: architecture
- Current knowledge: input-output examples
- Learning: modification of the free parameters (weights, biases, etc.)
- Generalization: using the trained network for prediction
Application Areas of Neural Networks
- Model estimation: interpolation, extrapolation
- Pattern classification: lots of examples
- Signal processing: noise reduction, enhancement, echo removal
- Optimization: lots of examples