LECTURE II ARTIFICIAL NEURAL NETWORKS (JST): BASIC CONCEPTS


1 LECTURE II ARTIFICIAL NEURAL NETWORKS (JST): BASIC CONCEPTS
Amer Sharif, S.Si, M.Kom

2 INTRODUCTION REVIEW Neural Network definition:
A massively parallel distributed processor made up of simple processing units (neurons)
Stores experiential knowledge and makes it available for use
Knowledge is acquired from the environment through a learning process
Knowledge is stored as interneuron connection strengths (synaptic weights)

3 INTRODUCTION REVIEW Benefits:
Nonlinearity
Input-Output Mapping
Adaptivity
Evidential Response
Contextual Information
Fault Tolerance / Graceful Degradation
VLSI Implementability
Uniformity of Analysis & Design

4 NEURON MODELLING Basic elements of a neuron:
A set of synapses, or connecting links; each synapse is characterized by its weight
The signal $x_j$ at synapse j connected to neuron k is multiplied by the synaptic weight $w_{kj}$
The bias is $b_k$
An adder for summing the input signals
An activation function for limiting the output amplitude of the neuron
(a minimal code sketch of these elements follows)
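These elements translate directly into code; a minimal sketch in Python with NumPy (the logistic activation and the sample values are arbitrary choices for illustration):

    import numpy as np

    def neuron(x, w, b):
        # Adder: sum the input signals weighted by the synapses, plus the bias
        v = np.dot(w, x) + b
        # Activation function: limit the output amplitude to (0, 1)
        return 1.0 / (1.0 + np.exp(-v))

    x = np.array([0.5, -1.0, 2.0])   # input signals x_1..x_3
    w = np.array([0.8, 0.1, -0.4])   # synaptic weights w_k1..w_k3
    b = 0.2                          # bias b_k
    print(neuron(x, w, b))           # output signal y_k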

5 NEURON MODELLING Block diagram of a nonlinear neuron

6 NEURON MODELLING Note:
$x_1, x_2, \ldots, x_m$ are the input signals
$w_{k1}, w_{k2}, \ldots, w_{km}$ are the synaptic weights of neuron k
$u_k$ is the linear combiner output
$b_k$ is the bias
$\varphi(\cdot)$ is the activation function
$y_k$ is the output signal of the neuron

7 NEURON MODELLING If $u_k = \sum_{j=1}^{m} w_{kj}\, x_j$ and $y_k = \varphi(u_k + b_k)$, and the bias is substituted for a synapse where
$x_0 = +1$ with weight $w_{k0} = b_k$, then $v_k = \sum_{j=0}^{m} w_{kj}\, x_j$ and $y_k = \varphi(v_k)$
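A quick numerical check of this substitution (a sketch; the weight and input values are arbitrary):

    import numpy as np

    w = np.array([0.4, -0.2, 0.7])   # synaptic weights w_k1..w_k3
    x = np.array([1.0, 2.0, 3.0])    # input signals x_1..x_3
    b = 0.5                          # bias b_k

    # Original form: u_k + b_k
    v1 = w @ x + b

    # Bias folded in as synapse 0: x_0 = +1 with weight w_k0 = b_k
    v2 = np.concatenate(([b], w)) @ np.concatenate(([1.0], x))

    print(v1, v2)  # the two forms agree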

8 NEURON MODELLING Modified block diagram of a nonlinear neuron

9 ACTIVATION FUNCTIONS Activation function types: Threshold Function
$\varphi(v) = 1$ if $v \ge 0$, and $\varphi(v) = 0$ if $v < 0$
Also known as the McCulloch-Pitts model
[Plot: threshold function $\varphi(v)$ versus $v$]

10 ACTIVATION FUNCTIONS Piecewise-Linear Function
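The formula itself is not in the transcript; one standard form of the piecewise-linear function (an assumption here: the unit-slope version saturating at 0 and 1, as commonly given in textbooks):

$\varphi(v) = \begin{cases} 1, & v \ge +\frac{1}{2} \\ v + \frac{1}{2}, & -\frac{1}{2} < v < +\frac{1}{2} \\ 0, & v \le -\frac{1}{2} \end{cases}$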

11 ACTIVATION FUNCTIONS Sigmoid Function
S-shaped; example: the logistic function
a is the slope parameter: the larger a, the steeper the function
Differentiable everywhere
[Plot: logistic curves growing steeper with increasing a]
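The logistic function referred to above has the standard form

$\varphi(v) = \dfrac{1}{1 + e^{-av}}$

where a is the slope parameter. A minimal sketch of the three activation functions in Python (the language choice and sample values are assumptions for illustration):

    import numpy as np

    def threshold(v):
        # McCulloch-Pitts threshold: 1 if v >= 0, else 0
        return np.where(v >= 0.0, 1.0, 0.0)

    def piecewise_linear(v):
        # Unit-slope linear region, saturating at 0 and 1
        return np.clip(v + 0.5, 0.0, 1.0)

    def logistic(v, a=1.0):
        # Logistic sigmoid; larger a gives a steeper curve
        return 1.0 / (1.0 + np.exp(-a * v))

    v = np.linspace(-2.0, 2.0, 9)
    print(threshold(v))
    print(piecewise_linear(v))
    print(logistic(v, a=2.0))

As a increases without bound, the logistic function approaches the threshold function.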

12 NEURAL NETWORKS AS DIRECTED GRAPHS
Neural networks may be represented as directed graphs:
Synaptic links (linear I/O): $y_k = w_{kj}\, x_j$
Activation links (nonlinear I/O): $y_k = \varphi(x_j)$
Synaptic convergence (fan-in): incoming signals add, $y_k = y_i + y_j$
Synaptic divergence (fan-out): the node signal $x_j$ is transmitted along each outgoing link

13 NEURAL NETWORKS AS DIRECTED GRAPHS
Architectural graph: a partially complete directed graph
[Diagram: source nodes $x_0 = +1, x_1, x_2, \ldots, x_m$ feeding a neuron with output $y_k$]

14 FEEDBACK Feedback exists when the output of a system influences some of the input applied to that system
There are one or more closed paths of signal transmission around the system
Feedback plays an important role in recurrent networks

15 FEEDBACK Sample single-loop feedback system:
$x'_j(n) = x_j(n) + z^{-1}\, y_k(n)$ and $y_k(n) = w\, x'_j(n)$
w is a fixed weight; $z^{-1}$ is the unit-delay operator, so $z^{-1} y_k(n) = y_k(n-1)$
The output signal $y_k(n)$ is an infinite weighted summation of present and past samples of the input signal $x_j(n)$:
$y_k(n) = \sum_{l=0}^{\infty} w^{l+1}\, x_j(n-l)$
$x_j(n-l)$ is the sample of the input signal delayed by l time units

16 FEEDBACK Dynamic system behavior is determined by the weight w
w < 1: the system is exponentially convergent (stable)
The system possesses infinite memory: the output depends on input samples extending into the infinite past
Memory is fading: the influence of past samples is reduced exponentially with time n
[Plot: $y_k(n)$ versus n for w < 1, starting from $w\, x_j(0)$ and converging]

17 FEEDBACK w = 1: the system is linearly divergent
w > 1: the system is exponentially divergent
[Plots: $y_k(n)$ versus n, starting from $w\, x_j(0)$, growing linearly for w = 1 and exponentially for w > 1]
(all three regimes are simulated in the sketch below)
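A minimal simulation of the single-loop system above (a sketch in Python; the step input $x_j(n) = 1$ is an assumption matching the plots):

    import numpy as np

    def feedback_response(w, x):
        # Single loop with unit delay: y_k(n) = w * (x_j(n) + y_k(n-1))
        y = np.zeros(len(x))
        prev = 0.0
        for n in range(len(x)):
            y[n] = w * (x[n] + prev)
            prev = y[n]
        return y

    x = np.ones(10)  # step input: x_j(n) = 1 for n >= 0
    for w in (0.8, 1.0, 1.2):
        # w < 1 converges, w = 1 grows linearly, w > 1 grows exponentially
        print(w, feedback_response(w, x))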

18 NETWORK ARCHITECTURES
Single-Layer Feedforward Networks
Neurons are organized in layers
"Single-layer" refers to the output layer of neurons
Source nodes supply signals to the output neurons, but not vice versa
The network is feedforward, or acyclic
[Diagram: an input layer of source nodes projecting onto an output layer of neurons]

19 NETWORK ARCHITECTURES
Multilayer Feedforward Networks
One or more hidden layers
Hidden neurons enable the extraction of higher-order statistics
The network acquires a global perspective due to the extra set of synaptic connections and neural interactions
Example: a 7-4-2 fully connected network: 7 source nodes, 4 hidden neurons, 2 output neurons (a forward-pass sketch follows)
[Diagram: input layer of source nodes, layer of hidden neurons, layer of output neurons]
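A minimal forward pass through such a 7-4-2 network (a sketch; the random weights and the logistic activation are assumptions for illustration):

    import numpy as np

    rng = np.random.default_rng(0)

    # Fully connected 7-4-2 network with arbitrary random weights
    W1, b1 = rng.standard_normal((4, 7)), rng.standard_normal(4)  # source -> hidden
    W2, b2 = rng.standard_normal((2, 4)), rng.standard_normal(2)  # hidden -> output

    def logistic(v):
        return 1.0 / (1.0 + np.exp(-v))

    def forward(x):
        # Each layer: linear combiner followed by a sigmoid activation
        h = logistic(W1 @ x + b1)     # 4 hidden neurons
        return logistic(W2 @ h + b2)  # 2 output neurons

    x = rng.standard_normal(7)  # 7 input signals
    print(forward(x))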

20 NETWORK ARCHITECTURES
Recurrent Networks
At least one feedback loop
Feedback loops affect the learning capability and performance of the network (a one-step sketch follows)
[Diagram: outputs fed back to the inputs through unit-delay ($z^{-1}$) operators]
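A one-step sketch of such a feedback loop (an illustration; the network size, tanh activation, and random weights are assumptions):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 3  # number of neurons, chosen arbitrarily

    W_in = rng.standard_normal((n, n))  # weights on the external inputs
    W_fb = rng.standard_normal((n, n))  # weights on the fed-back outputs

    def step(x, y_prev):
        # The new output depends on the current input and on the
        # previous output delivered through a unit delay (z^-1)
        return np.tanh(W_in @ x + W_fb @ y_prev)

    y = np.zeros(n)
    for t in range(5):
        y = step(rng.standard_normal(n), y)  # y(t-1) feeds back into y(t)
    print(y)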

21 KNOWLEDGE REPRESENTATION
Definition of knowledge:
Knowledge refers to stored information or models used by a person or a machine to interpret, predict, and appropriately respond to the outside world
Issues:
What information is actually made explicit
How information is physically encoded for subsequent use
Knowledge representation is goal-directed
A good solution depends on a good representation of knowledge

22 KNOWLEDGE REPRESENTATION
Challenges faced by neural networks:
Learn the model of the world/environment
Maintain the model to be consistent with the real world, so that the desired goals are achieved
A neural network may learn from a set of observations: data in the form of input-output pairs (training data / training sample)
The input is an input signal and the output is the corresponding desired response

23 KNOWLEDGE REPRESENTATION
Handwritten digit recognition problem
Input signal: one of 10 images of digits
Goal: identify the image presented to the network as input
Design steps (a minimal end-to-end sketch follows):
Select the appropriate architecture
Train the network with a subset of examples (learning phase)
Test the network by presenting data/digit images not seen before, then compare the response of the network with the actual identity of the digit image presented (generalization phase)
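A minimal end-to-end run of those three steps (a sketch using scikit-learn's bundled 8x8 digit images and a small multilayer perceptron; the library, architecture, and hyperparameters are assumptions, not the lecture's choices):

    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Step 1: select an architecture (one hidden layer of 32 neurons, chosen arbitrarily)
    net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)

    X, y = load_digits(return_X_y=True)  # images of the 10 digits, flattened
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Step 2: learning phase -- train on a subset of examples
    net.fit(X_train, y_train)

    # Step 3: generalization phase -- test on digit images not seen before
    print("accuracy on unseen digits:", net.score(X_test, y_test))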

24 KNOWLEDGE REPRESENTATION
Differences from a classical pattern classifier:
Classical pattern-classifier design steps:
Formulate a mathematical model of the problem
Validate the model with real data
Build the system based on the model
Neural network design is:
Based on real-life data
The data may "speak for itself"
The neural network not only provides a model of the environment but also processes the information

25 ARTIFICIAL INTELLIGENCE AND NEURAL NETWORKS
AI systems must be able to:
Store knowledge
Use stored knowledge to solve problems
Acquire new knowledge through experience
AI components:
Representation
Knowledge is represented in a language of symbolic structures
Symbolic representation makes the knowledge relatively easy for human users to understand

26 ARTIFICIAL INTELLIGENCE AND NEURAL NETWORKS
Reasoning
Able to express and solve a broad range of problems
Able to make explicit any implicit information known to it
Has a control mechanism to determine which operation to apply to a particular problem, when a solution has been obtained, and when further work on the problem should be terminated
Rules, Data, and Control:
Rules operate on Data
Control operates on Rules
The Travelling Salesman Problem:
Data: the possible tours and their costs
Rules: ways to go from city to city
Control: decides which Rules to apply and when

27 ARTIFICIAL INTELLIGENCE AND NEURAL NETWORKS
Learning
Inductive learning: determine rules from raw data and experience
Deductive learning: use rules to determine specific facts
[Diagram: Environment → Learning element → Knowledge Base → Performance element]

28 ARTIFICIAL INTELLIGENCE AND NEURAL NETWORKS
Parameter                  | Artificial Intelligence                                                          | Neural Networks
Level of Explanation       | Symbolic representation with sequential processing                               | Parallel distributed processing (PDP)
Processing Style           | Sequential                                                                       | Parallel
Representational Structure | Quasi-linguistic structure                                                       | Poor
Summary                    | Formal manipulation of algorithms and data representations in a top-down fashion | Parallel distributed processing with a natural ability to learn, in a bottom-up fashion

