
1
LECTURE II, ARTIFICIAL NEURAL NETWORKS (JST): BASIC CONCEPTS
Amer Sharif, S.Si, M.Kom

2
INTRODUCTION REVIEW
Neural network definition:
A massively parallel distributed processor made up of simple processing units (neurons)
It stores experiential knowledge and makes it available for use
Knowledge is acquired from the environment through a learning process
Knowledge is stored as interneuron connection strengths (synaptic weights)

3
INTRODUCTION REVIEW
Benefits:
Nonlinearity
Input-output mapping
Adaptivity
Evidential response
Contextual information
Fault tolerance/graceful degradation
VLSI implementability
Uniformity of analysis and design

4
NEURON MODELLING
Basic elements of a neuron:
A set of synapses, or connecting links, each characterized by its own weight: a signal x_j at the input of synapse j connected to neuron k is multiplied by the synaptic weight w_kj; the bias is b_k
An adder for summing the input signals
An activation function for limiting the amplitude of the neuron's output

5
NEURON MODELLING
Block diagram of a nonlinear neuron

6
NEURON MODELLING
Notation:
x_1, x_2, …, x_m are the input signals
w_k1, w_k2, …, w_km are the synaptic weights of neuron k
u_k is the linear combiner output
b_k is the bias
φ(·) is the activation function
y_k is the output signal of the neuron

7
NEURON MODELLING
From the block diagram: u_k = Σ_{j=1}^{m} w_kj x_j, v_k = u_k + b_k, and y_k = φ(v_k)
If the bias is substituted for a synapse with fixed input x_0 = +1 and weight w_k0 = b_k, then v_k = Σ_{j=0}^{m} w_kj x_j and y_k = φ(v_k)
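The two equivalent formulations of the neuron's computation can be sketched in Python; the logistic default for φ here is an assumption for illustration only:

```python
import math

def logistic(v):
    """Assumed activation function for illustration."""
    return 1.0 / (1.0 + math.exp(-v))

def neuron_output(x, w, b, phi=logistic):
    """y_k = phi(v_k), with v_k = sum_j w_kj * x_j + b_k."""
    v = sum(wj * xj for wj, xj in zip(w, x)) + b
    return phi(v)

def neuron_output_bias_as_weight(x, w, b, phi=logistic):
    """Equivalent form: treat the bias as weight w_k0 on a fixed input x_0 = +1."""
    xs = [1.0] + list(x)   # x_0 = +1
    ws = [b] + list(w)     # w_k0 = b_k
    return phi(sum(wj * xj for wj, xj in zip(ws, xs)))
```

Both functions produce identical outputs for the same weights and bias, which is the point of absorbing the bias into the weight vector.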

8
NEURON MODELLING
Modified block diagram of a nonlinear neuron

9
ACTIVATION FUNCTIONS
Activation function types:
Threshold function: φ(v) = 1 if v ≥ 0, and φ(v) = 0 if v < 0
A neuron with this activation function is also known as the McCulloch-Pitts model
[Figure: plot of the threshold function, φ(v) against v]

10
ACTIVATION FUNCTIONS
Piecewise-linear function: linear inside a central region around the origin, saturating at 0 and 1 outside it

11
ACTIVATION FUNCTIONS
Sigmoid function:
S-shaped and differentiable everywhere
Sample logistic function: φ(v) = 1 / (1 + exp(−a·v))
a is the slope parameter: the larger a, the steeper the function
[Figure: family of logistic curves for increasing a]
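A minimal sketch of the three activation-function types in Python; the ±1/2 saturation points of the piecewise-linear function are an assumption (a common textbook choice), since the slide's formula is not shown:

```python
import math

def threshold(v):
    """McCulloch-Pitts threshold: 1 if v >= 0, else 0."""
    return 1.0 if v >= 0 else 0.0

def piecewise_linear(v):
    """Linear around the origin, saturating at 0 and 1.
    The +-1/2 saturation points are an assumption."""
    if v >= 0.5:
        return 1.0
    if v <= -0.5:
        return 0.0
    return v + 0.5

def logistic(v, a=1.0):
    """Logistic sigmoid with slope parameter a: larger a gives a steeper curve."""
    return 1.0 / (1.0 + math.exp(-a * v))
```

All three map any v to the range [0, 1], which is the amplitude-limiting role of the activation function described earlier.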

12
NEURAL NETWORKS AS DIRECTED GRAPHS
Neural networks may be represented as directed graphs built from:
Synaptic links (linear input-output relation): y_k = w_kj x_j
Activation links (nonlinear input-output relation): y_k = φ(x_j)
Synaptic convergence (fan-in): incoming signals add, y_k = y_i + y_j
Synaptic divergence (fan-out): a node's signal x_j is transmitted along several outgoing links

13
NEURAL NETWORKS AS DIRECTED GRAPHS
Architectural graph: a partially complete directed graph
[Figure: source nodes x_1, x_2, …, x_m and the fixed input x_0 = +1 feeding a neuron with output y_k]

14
FEEDBACK
The output of a system influences some of the input applied to the system
One or more closed paths of signal transmission exist around the system
Feedback plays an important role in recurrent networks

15
FEEDBACK
Sample single-loop feedback system: the input x_j(n) passes through a fixed weight w, and the loop is closed through the unit-delay operator z^{-1}
The output signal y_k(n) is an infinite weighted summation of present and past samples of the input signal: y_k(n) = Σ_{l=0}^{∞} w^{l+1} x_j(n − l)
z^{-l} x_j(n) = x_j(n − l) is the sample of the input signal delayed by l time units

16
FEEDBACK
The dynamic behavior of the system is determined by the weight w
|w| < 1: the system is exponentially convergent (stable)
The system possesses infinite memory: the output depends on input samples extending into the infinite past
The memory is fading: the influence of past samples is reduced exponentially with time n
[Figure: y_k(n) decaying from wx_j(0) toward zero for |w| < 1]

17
FEEDBACK
w = 1: the system is linearly divergent
w > 1: the system is exponentially divergent
[Figure: y_k(n) growing linearly from wx_j(0) for w = 1, and exponentially for w > 1]
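The three regimes can be checked numerically. A sketch assuming a step input x_j(n) = x0 for n ≥ 0 (this input choice is an assumption made so the divergent cases are visible):

```python
def feedback_response(w, x0=1.0, steps=6):
    """Output of the single-loop feedback system,
    y_k(n) = sum_{l=0}^{n} w**(l+1) * x_j(n-l),
    driven by a step input x_j(n) = x0 for n >= 0."""
    return [sum(w ** (l + 1) * x0 for l in range(n + 1)) for n in range(steps)]

stable    = feedback_response(0.5)  # converges: approaches w*x0/(1-w) = 1.0
linear    = feedback_response(1.0)  # linearly divergent: 1, 2, 3, ...
divergent = feedback_response(2.0)  # exponentially divergent: 2, 6, 14, ...
```

Varying w around 1 in this sketch reproduces the stable, linearly divergent, and exponentially divergent behaviors described above.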

18
NETWORK ARCHITECTURES
Single-layer feedforward networks:
Neurons are organized in layers: an input layer of source nodes and an output layer of neurons
"Single-layer" refers to the output layer of neurons only
Source nodes project onto the output neurons, but not vice versa
The network is feedforward, or acyclic

19
NETWORK ARCHITECTURES
Multilayer feedforward networks:
An input layer of source nodes, one or more layers of hidden neurons, and a layer of output neurons
Hidden neurons enable the extraction of higher-order statistics
The network acquires a global perspective due to the extra set of synaptic connections and neural interactions
Example: a 7-4-2 fully connected network has 7 source nodes, 4 hidden neurons, and 2 output neurons
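A forward pass through the 7-4-2 example can be sketched as follows; the random weights and the logistic activation are assumptions for illustration:

```python
import math
import random

def logistic(v):
    return 1.0 / (1.0 + math.exp(-v))

def layer(x, weights, biases, phi):
    """One fully connected layer: y_k = phi(sum_j w_kj * x_j + b_k)."""
    return [phi(sum(w * xj for w, xj in zip(row, x)) + b)
            for row, b in zip(weights, biases)]

random.seed(0)
def rand_matrix(rows, cols):
    return [[random.uniform(-1, 1) for _ in range(cols)] for _ in range(rows)]

# 7-4-2 fully connected network: 7 source nodes, 4 hidden neurons, 2 output neurons
W1, b1 = rand_matrix(4, 7), [0.0] * 4   # input layer  -> hidden layer
W2, b2 = rand_matrix(2, 4), [0.0] * 2   # hidden layer -> output layer

x = [0.5] * 7                             # a sample input vector
hidden = layer(x, W1, b1, logistic)       # hidden neurons extract features
output = layer(hidden, W2, b2, logistic)  # two output signals
```

The signal flows strictly from source nodes through the hidden layer to the outputs, which is what makes the network feedforward (acyclic).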

20
NETWORK ARCHITECTURES
Recurrent networks:
Contain at least one feedback loop, closed through unit-delay operators z^{-1}
Feedback loops affect the learning capability and performance of the network
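One time step of a recurrent unit can be sketched as follows; this is an illustrative single-neuron sketch (the weights, tanh activation, and input sequence are invented for the example, not taken from the lecture):

```python
import math

def recurrent_step(x, y_prev, w_in, w_fb, phi=math.tanh):
    """One time step of a single recurrent neuron: the previous output y(n-1),
    held by the unit-delay operator z^{-1}, is fed back alongside the
    current input x(n)."""
    return phi(w_in * x + w_fb * y_prev)

y = 0.0
for x in [1.0, 0.5, 0.0, 0.0]:  # after the input stops, the state decays via feedback
    y = recurrent_step(x, y, w_in=0.8, w_fb=0.5)
```

Because each output depends on the previous one, the unit retains a (fading) trace of past inputs even after the input signal has stopped.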

21
KNOWLEDGE REPRESENTATION
Definition of knowledge: knowledge refers to stored information or models used by a person or a machine to interpret, predict, and appropriately respond to the outside world
Two key issues:
What information is actually made explicit
How the information is physically encoded for subsequent use
Knowledge representation is goal directed
A good solution depends on a good representation of knowledge

22
KNOWLEDGE REPRESENTATION
Challenges faced by neural networks:
Learn a model of the world/environment
Keep the model consistent with the real world so as to achieve the desired goals
A neural network may learn from a set of observations in the form of input-output pairs (training data/training sample): the input is an input signal and the output is the corresponding desired response

23
KNOWLEDGE REPRESENTATION
Handwritten digit recognition problem:
Input signal: an image of one of the 10 digits
Goal: identify the image presented to the network as input
Design steps:
Select an appropriate architecture
Train the network with a subset of examples (learning phase)
Test the network with digit images not seen before, then compare the network's response with the actual identity of the image presented (generalization phase)
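The learning and generalization phases above can be sketched end to end. As a hedged illustration only, a simple nearest-centroid classifier on synthetic two-dimensional patterns stands in for the neural network, and all data and names here are invented:

```python
import random
random.seed(1)

# Synthetic two-class "pattern" data standing in for digit images.
def make_samples(label, center, n=20):
    return [([c + random.gauss(0, 0.3) for c in center], label) for _ in range(n)]

data = make_samples(0, [0.0, 0.0]) + make_samples(1, [2.0, 2.0])
random.shuffle(data)
train_set, test_set = data[:30], data[30:]  # learning phase uses a subset of examples

def centroid(points):
    return [sum(p[i] for p in points) / len(points) for i in range(len(points[0]))]

centroids = {lbl: centroid([x for x, l in train_set if l == lbl]) for lbl in (0, 1)}

def classify(x):
    return min(centroids,
               key=lambda lbl: sum((a - b) ** 2 for a, b in zip(x, centroids[lbl])))

# Generalization phase: compare responses on unseen data with the true labels.
accuracy = sum(classify(x) == l for x, l in test_set) / len(test_set)
```

The design pattern (train on a subset, then measure agreement between predictions and true identities on held-out data) is the same regardless of which classifier fills the middle step.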

24
KNOWLEDGE REPRESENTATION
Difference from a classical pattern classifier:
Classical pattern-classifier design steps:
Formulate a mathematical model of the problem
Validate the model with real data
Build the system based on the model
Neural network design, in contrast:
Is based on real-life data, letting the data speak for itself
The neural network not only provides a model of the environment but also processes the information

25
ARTIFICIAL INTELLIGENCE AND NEURAL NETWORKS
AI systems must be able to:
Store knowledge
Use the stored knowledge to solve problems
Acquire new knowledge through experience
AI components:
Representation: knowledge is represented in a language of symbolic structures; symbolic representation makes it relatively easy for human users to understand

26
ARTIFICIAL INTELLIGENCE AND NEURAL NETWORKS
Reasoning: a reasoning system must be able to:
Express and solve a broad range of problems
Make explicit and implicit information known to it
Have a control mechanism that determines which operation to apply to a particular problem, when a solution has been obtained, and when further work on the problem should be terminated
Rules, data, and control: rules operate on data, and control operates on rules
Example, the Travelling Salesman Problem:
Data: possible tours and their costs
Rules: ways to go from city to city
Control: which rules to apply, and when

27
ARTIFICIAL INTELLIGENCE AND NEURAL NETWORKS
Learning:
Inductive learning: determine rules from raw data and experience
Deductive learning: use rules to determine specific facts
[Figure: the environment feeds a learning element, which updates a knowledge base used by a performance element]

28
ARTIFICIAL INTELLIGENCE AND NEURAL NETWORKS

Parameter                  | Artificial Intelligence                            | Neural Networks
Level of explanation       | Symbolic representation with sequential processing | Parallel distributed processing (PDP)
Processing style           | Sequential                                         | Parallel
Representational structure | Quasi-linguistic structure                         | Poor
Summary                    | Formal manipulation of algorithms and data representations in a top-down fashion | Parallel distributed processing with a natural ability to learn in a bottom-up fashion
