
2806 Neural Computation Introduction Lecture 1 2005 Ari Visa.


1 2806 Neural Computation Introduction Lecture 1 2005 Ari Visa

2 Agenda n Some historical notes n Biological background n What are neural networks? n Properties of neural networks n Composition of neural networks n Relation to artificial intelligence

3 Overview The human brain computes in an entirely different way from the conventional digital computer. The brain routinely accomplishes perceptual recognition in approximately 100-200 ms. How does a human brain do it?

4 Some Expected Benefits n Nonlinearity n Input-Output Mapping n Adaptivity n Evidential Response n Contextual Information n Fault Tolerance n VLSI Implementability n Uniformity of Analysis and Design n Neurobiological Analogy

5 Definition n A neural network is a massively parallel distributed processor made up of simple processing units, which has a natural propensity for storing experiential knowledge and making it available for use. It resembles the brain in two respects:

6 Definition n 1) Knowledge is acquired by the network from its environment through a learning process. n 2) Interneuron connection strengths, known as synaptic weights, are used to store the acquired knowledge.

7 Some historical notes The 1930s and 1940s saw a great deal of activity concerning automata, communication, computation, and the understanding of the nervous system. McCulloch and Pitts 1943 von Neumann EDVAC (Electronic Discrete Variable Automatic Computer) Hebb: The Organization of Behavior, 1949

8 Some historical notes

9 n Minsky: Theory of Neural-Analog Reinforcement Systems and Its Application to the Brain-Model Problem, 1954 n Gabor: Nonlinear adaptive filter, 1954 n Uttley: leaky integrate and fire neuron, 1956 n Rosenblatt: the perceptron, 1958

10 Biological Background n The human nervous system may be viewed as a three-stage system (Arbib 1987): the brain continually receives information, perceives it, and makes appropriate decisions.

11 Biological Background n Axons = the transmission lines n Dendrites = the receptive zones n Action potentials (spikes) originate at the cell body of neurons and then propagate along the individual neurons at constant velocity and amplitude.

12 Biological Background n Synapses are elementary structural and functional units that mediate the interactions between neurons. n Excitation or inhibition

13 Biological Background n Note that the structural levels of organization are a unique characteristic of the brain.

14 Biological Background

15 Properties of Neural Network n A model of a neuron: n synapses (=connecting links) n adder (=a linear combiner) n an activation function
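The three parts of the neuron model above can be sketched in a few lines of code. This is a minimal illustration, not from the lecture; the function name and the particular sigmoid activation are chosen for the example.

```python
import math

def neuron(inputs, weights, bias):
    """A minimal artificial neuron: synaptic weights (connecting links),
    an adder (linear combiner), and an activation function."""
    # Adder: linear combination of the weighted inputs plus the bias
    v = sum(w * x for w, x in zip(weights, inputs)) + bias
    # Activation function squashes the induced local field v
    return 1.0 / (1.0 + math.exp(-v))

# Example: three inputs, three synaptic weights, one bias
y = neuron([0.5, -1.0, 2.0], [0.4, 0.3, 0.9], bias=0.1)
```

Here the induced local field is v = 0.4·0.5 + 0.3·(−1.0) + 0.9·2.0 + 0.1 = 1.8, and the output is the sigmoid of that value, a number between 0 and 1.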

16 Properties of Neural Network n Another formulation of a neuron model

17 Properties of Neural Network n Types of Activation Function: n Threshold Function n Piecewise-Linear Function n Sigmoid Function (e.g. the logistic function or the hyperbolic tangent function)
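The three activation-function types listed above can be written out directly. A small sketch, with the usual textbook forms assumed (a unit-slope linear region for the piecewise-linear case, a slope parameter a for the sigmoid):

```python
import math

def threshold(v):
    # Threshold (Heaviside) function: fires fully at or above zero
    return 1.0 if v >= 0 else 0.0

def piecewise_linear(v):
    # Linear in the middle region, saturating at 0 and 1
    return min(1.0, max(0.0, v + 0.5))

def sigmoid(v, a=1.0):
    # Logistic sigmoid; a controls the slope of the transition
    return 1.0 / (1.0 + math.exp(-a * v))
```

All three map the induced local field to a bounded output; the sigmoid is the differentiable one, which is what later gradient-based learning algorithms rely on.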

18 Properties of Neural Network n Stochastic Model of a Neuron The activation function of the McCulloch-Pitts model is given a probabilistic interpretation; a neuron is permitted to reside in only one of two states: +1 or –1. The decision for a neuron to fire is probabilistic. A standard choice for P(v) is the sigmoid-shaped function P(v) = 1/(1 + exp(-v/T)), where T is a pseudotemperature.
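The stochastic firing rule above translates into a short sketch (the function name is illustrative):

```python
import math
import random

def stochastic_neuron(v, T=1.0, rng=random):
    """Two-state McCulloch-Pitts neuron with probabilistic firing:
    P(fire) = 1 / (1 + exp(-v / T)), where T is the pseudotemperature."""
    p = 1.0 / (1.0 + math.exp(-v / T))
    # The neuron resides in one of two states: +1 (fired) or -1 (not fired)
    return +1 if rng.random() < p else -1
```

As T approaches 0 the firing probability approaches a hard step, and the stochastic neuron reduces to the deterministic threshold model.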

19 Properties of Neural Network n The model of an artificial neuron may also be represented as a signal-flow graph. n A signal-flow graph is a network of directed links that are interconnected at certain points called nodes. A typical node j has an associated node signal x_j. A typical directed link originates at node j and terminates on node k. It has an associated transfer function (transmittance) that specifies the manner in which the signal y_k at node k depends on the signal x_j at node j.

20 Properties of Neural Network n Rule 1: A signal flows along a link in the direction defined by the arrow n Synaptic links (a linear input-output relation, 1.19a) n Activation links (a nonlinear input-output relation, 1.19b)

21 Properties of Neural Network n Rule 2: A node signal equals the algebraic sum of all signals entering the pertinent node via the incoming links (1.19c)

22 Properties of Neural Network n Rule 3: The signal at a node is transmitted to each outgoing link originating from that node, with the transmission being entirely independent of the transfer functions of the outgoing links; synaptic divergence or fan-out (1.19d)

23 Properties of Neural Network n A neural network is a directed graph consisting of nodes with interconnecting synaptic and activation links, and is characterized by four properties: n 1. Each neuron is represented by a set of linear synaptic links, an externally applied bias, and a possibly nonlinear activation link. The bias is represented by a synaptic link connected to an input fixed at +1.

24 Properties of Neural Network n 2. The synaptic links of a neuron weight their respective input signals. n 3. The weighted sum of the input signals defines the induced local field of the neuron in question. n 4. The activation link squashes the induced local field of the neuron to produce an output.

25 Properties of Neural Network n Complete graph n Partially complete graph = architectural graph

26 Properties of Neural Network n Feedback is said to exist in a dynamic system whenever the output of an element in the system influences in part the input applied to the particular element, thereby giving rise to one or more closed paths for the transmission of signals around the system (1.12)

27 Properties of Neural Network n y_k(n) = A[x'_j(n)] n x'_j(n) = x_j(n) + B[y_k(n)] n y_k(n) = (A/(1 - AB))[x_j(n)] n the closed-loop operator A/(1 - AB) n the open-loop operator AB. In general AB ≠ BA.

28 Properties of Neural Network n A/(1 - AB) n w/(1 - wz^-1) n y_k(n) is convergent (= stable) if |w| < 1 (1.14a) n y_k(n) is divergent (= unstable) if |w| ≥ 1

29 Properties of Neural Network n A/(1 - AB) n w/(1 - wz^-1) n y_k(n) is convergent (= stable) if |w| < 1 (1.14a) n y_k(n) is divergent (= unstable) if |w| ≥ 1: if |w| = 1 the divergence is linear (1.14b); if |w| > 1 the divergence is exponential (1.14c)
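The single-loop feedback system above, with forward operator A = w and feedback operator B = z^-1 (a unit delay), can be simulated directly to see the three regimes. A sketch, assuming a constant input signal x:

```python
def feedback_response(w, x=1.0, steps=60):
    """Simulate y(n) = w * (x(n) + y(n-1)): one feedback loop with
    synaptic weight w in the forward path and a unit delay z^-1
    in the feedback path."""
    y = 0.0
    history = []
    for _ in range(steps):
        y = w * (x + y)  # output feeds back into the input
        history.append(y)
    return history

stable = feedback_response(0.5)    # |w| < 1: converges to w*x/(1-w) = 1.0
linear = feedback_response(1.0)    # |w| = 1: diverges linearly (y(n) = n)
unstable = feedback_response(1.5)  # |w| > 1: diverges exponentially
```

With |w| < 1 the closed-loop response settles at w·x/(1 − w); with |w| = 1 it grows by a constant amount per step; with |w| > 1 each step multiplies the error, so the growth is exponential.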

30 Compositions of Neural Network n The manner in which the neurons of a neural network are structured is intimately linked with the learning algorithm used to train the network. n Single-Layer Feedforward Networks

31 Compositions of Neural Network n Multilayer Feedforward Networks (1.16) n Hidden layers, hidden neurons or hidden units -> enabled to extract higher-order statistics
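A multilayer feedforward pass can be sketched as two stacked layers of the single-neuron model. The weights below are illustrative, not from the lecture:

```python
import math

def layer(inputs, weights, biases):
    # One fully connected layer: each neuron takes a weighted sum of
    # all inputs (plus bias) followed by a sigmoid activation.
    return [1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
            for ws, b in zip(weights, biases)]

def feedforward(x, hidden_w, hidden_b, out_w, out_b):
    # The hidden layer extracts intermediate features of the input;
    # the output layer combines them into the network response.
    h = layer(x, hidden_w, hidden_b)
    return layer(h, out_w, out_b)

# Two inputs -> two hidden neurons -> one output neuron
y = feedforward([1.0, 0.0],
                hidden_w=[[2.0, -1.0], [-1.5, 2.5]], hidden_b=[0.0, 0.5],
                out_w=[[1.0, 1.0]], out_b=[-1.0])
```

It is the nonlinearity of the hidden layer that lets the network represent functions of higher-order input statistics that a single-layer network cannot.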

32 Compositions of Neural Network n Recurrent Neural Network (1.17) n It has at least one feedback loop.

33 Knowledge Representation n Knowledge refers to stored information or models used by a person or machine to interpret, predict, and appropriately respond to the outside world (Fischler and Firschein, 1987) n A major task for a neural network is to learn a model of the world

34 Knowledge Representation n Knowledge of the world consists of two kinds of information n 1) The known world state, i.e. prior information n 2) Observations of the world, obtained by means of sensors. n The observations provide a pool of information from which the examples used to train the neural network are drawn.

35 Knowledge Representation n The examples can be labelled or unlabelled. n In labelled examples, each example representing an input signal is paired with a corresponding desired response. Note that both positive and negative examples are possible. n A set of input-output pairs, with each pair consisting of an input signal and the corresponding desired response, is referred to as a set of training data or a training sample.

36 Knowledge Representation n Selection of an appropriate architecture n A subset of examples is used to train the network by means of a suitable algorithm (=learning). n The performance of the trained network is tested with data not seen before (=testing). n Generalization
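The train/test protocol above can be sketched as a simple split of the labelled examples. The helper name and split fraction are illustrative assumptions:

```python
import random

def split_examples(examples, train_fraction=0.8, seed=0):
    """Split labelled examples into a training sample (used by the
    learning algorithm) and a held-out test set (used to measure
    generalization on data not seen before)."""
    data = examples[:]
    random.Random(seed).shuffle(data)  # shuffle a copy, reproducibly
    cut = int(len(data) * train_fraction)
    return data[:cut], data[cut:]

# Labelled examples: (input signal, desired response) pairs
pairs = [([0.1 * i], i % 2) for i in range(10)]
train, test = split_examples(pairs)
```

Keeping the test set strictly separate from the training sample is what makes the reported performance an estimate of generalization rather than of memorization.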

37 Knowledge Representation n Rule 1: Similar inputs from similar classes should usually produce similar representations inside the network, and should therefore be classified as belonging to the same category. n Rule 2: Items to be categorized as separate classes should be given widely different representations in the network.

38 Knowledge Representation n Rule 3: If a particular feature is important, then there should be a large number of neurons involved in the representation of that item in the network n Rule 4: Prior information and invariances should be built into the design of a neural network, thereby simplifying the network design by not having to learn them.

39 Knowledge Representation n How to Build Prior Information Into Neural Network Design? n 1) Restricting the network architecture through the use of local connections known as receptive fields. n 2) Constraining the choice of synaptic weights through the use of weight-sharing.
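Both prior-information techniques above, local receptive fields and weight-sharing, are captured by a one-dimensional convolution: each output neuron sees only a small window of the input, and all output neurons reuse the same kernel weights. A minimal sketch (names are illustrative):

```python
def shared_weight_layer(signal, kernel, bias=0.0):
    """Local receptive fields with weight-sharing: every neuron looks
    at a window of len(kernel) consecutive inputs, and all neurons
    share the same kernel weights (a 1-D convolution)."""
    k = len(kernel)
    return [sum(w * signal[i + j] for j, w in enumerate(kernel)) + bias
            for i in range(len(signal) - k + 1)]

# A difference kernel applied across a 5-sample input signal
out = shared_weight_layer([1, 2, 3, 4, 5], kernel=[0.5, -0.5])
```

Sharing the kernel means the layer has only len(kernel) + 1 free parameters regardless of the input length, which is exactly how prior structure reduces what the network must learn.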

40 Knowledge Representation n How to Build Invariances into Neural Network Design? n 1) Invariance by Structure n 2) Invariance by Training n 3) Invariant Feature Space

41 Relation to Artificial Intelligence n The goal of artificial intelligence (AI) is the development of paradigms or algorithms that require machines to perform cognitive tasks (Sage 1990). n An AI system must be capable of doing three things: n 1) Store knowledge n 2) Apply the knowledge stored to solve problems. n 3) Acquire new knowledge through experience.

42 Relation to Artificial Intelligence n Representation: the use of a language of symbol structures to represent both general knowledge about a problem domain of interest and specific knowledge about the solution to the problem n Declarative knowledge n Procedural knowledge

43 Relation to Artificial Intelligence n Reasoning: the ability to solve problems n The system must be able to express and solve a broad range of problems and problem types. n The system must be able to make explicit any implicit information known to it. n The system must have a control mechanism that determines which operations to apply to a particular problem.

44 Relation to Artificial Intelligence n Learning: The environment supplies some information to a learning element. The learning element then uses this information to make improvements in a knowledge base, and finally the performance element uses the knowledge base to perform its task.

45 Summary n A major task for a neural network is to learn a model of the world n It is not a totally new approach, but it differs from AI, mathematical modeling, pattern recognition, and so on.

