# Back-propagation (Chih-yun Lin, 5/16/2015)



## Agenda

- Perceptron vs. back-propagation network
- Network structure
- Learning rule
- Why a hidden layer?
- An example: Jets or Sharks
- Conclusions

## Network Structure – Perceptron

[Diagram: input units I_j feed a single output unit O through weights W_j.]

## Network Structure – Back-propagation Network

[Diagram: input units I_k feed hidden units a_j through weights W_k,j; the hidden units feed output units O_i through weights W_j,i.]

## Learning Rule

- Measure the error
- Reduce that error by appropriately adjusting each of the weights in the network

## Learning Rule – Perceptron

Err = T − O

- O is the predicted output
- T is the correct output

W_j ← W_j + α · I_j · Err

- I_j is the activation of unit j in the input layer
- α is a constant called the learning rate
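The perceptron rule above can be sketched in Python. Only the update `W_j ← W_j + α · I_j · Err` comes from the slide; the step activation, the AND-gate training data, and α = 1 are illustrative assumptions.

```python
def step(x):
    # Threshold activation: fire when the weighted sum is non-negative.
    return 1 if x >= 0 else 0

def perceptron_update(weights, inputs, target, alpha=1):
    """One application of the slide's rule: W_j <- W_j + alpha * I_j * Err."""
    output = step(sum(w * i for w, i in zip(weights, inputs)))
    err = target - output                    # Err = T - O
    return [w + alpha * i * err for w, i in zip(weights, inputs)]

# Train on the AND function (the first input is a constant bias of 1).
data = [([1, 0, 0], 0), ([1, 0, 1], 0), ([1, 1, 0], 0), ([1, 1, 1], 1)]
weights = [0, 0, 0]
for _ in range(10):
    for inputs, target in data:
        weights = perceptron_update(weights, inputs, target)
```

Because AND is linearly separable, the loop settles on weights that classify all four patterns correctly; for XOR, as the later slides show, no such weights exist.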

## Learning Rule – Back-propagation Network

Output layer:

Err_i = T_i − O_i
Δ_i = Err_i · g′(in_i)
W_j,i ← W_j,i + α · a_j · Δ_i

- g′ is the derivative of the activation function g
- a_j is the activation of hidden unit j

Hidden layer:

Δ_j = g′(in_j) · Σ_i W_j,i · Δ_i
W_k,j ← W_k,j + α · I_k · Δ_j
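One training step applying these update rules can be sketched as follows. The sigmoid as activation g, the 2-2-1 architecture, and the learning rate are illustrative assumptions; only the Δ and weight-update formulas come from the slide.

```python
import math

def g(x):
    # Sigmoid activation (an assumed choice of g).
    return 1.0 / (1.0 + math.exp(-x))

def g_prime(x):
    # Derivative of the sigmoid: g'(x) = g(x) * (1 - g(x)).
    s = g(x)
    return s * (1.0 - s)

def backprop_step(W_kj, W_ji, I, T, alpha=0.5):
    # Forward pass: hidden activations a_j, then outputs O_i.
    in_j = [sum(W_kj[j][k] * I[k] for k in range(len(I))) for j in range(len(W_kj))]
    a = [g(x) for x in in_j]
    in_i = [sum(W_ji[i][j] * a[j] for j in range(len(a))) for i in range(len(W_ji))]
    O = [g(x) for x in in_i]

    # Output layer: Delta_i = (T_i - O_i) * g'(in_i)
    delta_i = [(T[i] - O[i]) * g_prime(in_i[i]) for i in range(len(O))]
    # Hidden layer: Delta_j = g'(in_j) * sum_i W_ji * Delta_i
    delta_j = [g_prime(in_j[j]) * sum(W_ji[i][j] * delta_i[i] for i in range(len(O)))
               for j in range(len(a))]

    # Weight updates: W_ji += alpha * a_j * Delta_i ; W_kj += alpha * I_k * Delta_j
    new_W_ji = [[W_ji[i][j] + alpha * a[j] * delta_i[i] for j in range(len(a))]
                for i in range(len(W_ji))]
    new_W_kj = [[W_kj[j][k] + alpha * I[k] * delta_j[j] for k in range(len(I))]
                for j in range(len(W_kj))]
    return new_W_kj, new_W_ji, O

# Illustrative 2-2-1 network and a single training pattern.
W_kj = [[0.5, -0.5], [0.3, 0.8]]
W_ji = [[0.7, -0.2]]
I, T = [1.0, 0.0], [1.0]
errors = []
for _ in range(100):
    W_kj, W_ji, O = backprop_step(W_kj, W_ji, I, T)
    errors.append((T[0] - O[0]) ** 2)
```

Repeating the step drives the squared error on the pattern down, which is exactly the gradient-descent behavior the next slide states.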

## Learning Rule – Back-propagation Network (cont.)

These updates perform gradient descent on the squared error

E = 1/2 · Σ_i (T_i − O_i)²

whose gradient with respect to an input-to-hidden weight is ∂E/∂W_k,j = −I_k · Δ_j, so each update moves the weight down the error gradient.

## Why a hidden layer?

A single perceptron with threshold t cannot compute XOR. Its four input patterns would require:

(1·w1) + (1·w2) < t  ⇒ w1 + w2 < t
(1·w1) + (0·w2) > t  ⇒ w1 > t
(0·w1) + (1·w2) > t  ⇒ w2 > t
(0·w1) + (0·w2) < t  ⇒ 0 < t

The last three constraints give w1 + w2 > 2t > t, contradicting the first, so no weights satisfy all four.

## Why a hidden layer? (cont.)

Adding a hidden unit gives the output unit a third input with weight w3, and the constraints become satisfiable:

(1·w1) + (1·w2) + (1·w3) < t  ⇒ w1 + w2 + w3 < t
(1·w1) + (0·w2) + (0·w3) > t  ⇒ w1 > t
(0·w1) + (1·w2) + (0·w3) > t  ⇒ w2 > t
(0·w1) + (0·w2) + (0·w3) < t  ⇒ 0 < t

If the hidden unit fires only on the (1, 1) pattern, a sufficiently negative w3 cancels w1 + w2 there, so all four constraints can hold at once.
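The fix above can be demonstrated with hand-set weights: a hidden unit that fires only on (1, 1) supplies the third input w3. The specific weights and thresholds below are illustrative assumptions, not learned values.

```python
def step(x, theta):
    # Threshold unit: fire when the weighted sum exceeds theta.
    return 1 if x > theta else 0

def xor_net(x1, x2):
    # Hidden unit: fires only when both inputs are on (the (1, 1) pattern).
    h = step(x1 + x2, 1.5)
    # Output unit: w1 = w2 = 1, w3 = -2, threshold t = 0.5.
    # On (1, 1) the hidden unit's -2 cancels w1 + w2, keeping the output off.
    return step(1 * x1 + 1 * x2 + (-2) * h, 0.5)
```

Checking the four patterns, `xor_net` returns 0, 1, 1, 0 for (0,0), (0,1), (1,0), (1,1): exactly XOR, which no single-layer perceptron can produce.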

## An example: Jets or Sharks

## Conclusions

- Expressiveness: well-suited for continuous inputs, unlike most decision-tree systems
- Computational efficiency: time to error convergence is highly variable
- Generalization: reasonable success in a number of real-world problems

## Conclusions (cont.)

- Sensitivity to noise: very tolerant of noise in the input data
- Transparency: neural networks are essentially black boxes
- Prior knowledge: hard to use one's knowledge to "prime" a network to learn better

