CSSE463: Image Recognition Day 18

1 CSSE463: Image Recognition Day 18
Exam covers material through neural nets

2 Perceptron training
Each misclassified sample is used to change the weights "a little bit" so that the classification is better the next time.
Consider inputs of the form x = [x1, x2, ..., xn]. The target label is y in {+1, -1}.
Algorithm (Hebbian learning):
Randomize the weights.
Loop until convergence. For each sample:
If wx + b > 0 and y is -1: wi -= e*xi for all i, and b -= e
Else if wx + b < 0 and y is +1: wi += e*xi for all i, and b += e
Else (it's classified correctly): do nothing
e is the learning rate (a parameter that can be tuned).
Demo: perceptronLearning.m is broken.
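A minimal MATLAB sketch of the update loop above (the function name, the initialization range, and the iteration cap are my own; the course's perceptronLearning.m presumably differs):

    % Perceptron training. X is an n-by-d matrix of samples, y is an
    % n-by-1 vector of labels in {+1,-1}, e is the learning rate.
    function [w, b] = trainPerceptron(X, y, e, maxIters)
        [n, d] = size(X);
        w = rand(1, d) - 0.5;                % randomize weights
        b = rand() - 0.5;
        for iter = 1:maxIters                % loop until converged (or give up)
            anyWrong = false;
            for i = 1:n
                score = w * X(i,:)' + b;
                if score > 0 && y(i) == -1       % predicted +1, truth -1
                    w = w - e * X(i,:);  b = b - e;  anyWrong = true;
                elseif score < 0 && y(i) == +1   % predicted -1, truth +1
                    w = w + e * X(i,:);  b = b + e;  anyWrong = true;
                end                               % else correct: do nothing
            end
            if ~anyWrong, break; end              % converged
        end
    end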

3 Multilayer feedforward neural nets
Many perceptrons, organized into layers:
Input (sensory) layer
Hidden layer(s): two are proven sufficient to model any arbitrary function
Output (classification) layer
Powerful! Calculates functions of the input and maps them to the output layer.
[Figure: a network mapping sensory inputs x1, x2, x3 (HSV) through a hidden layer (functions) to classification outputs y1, y2, y3 (apple/orange/banana).]
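A forward pass through such a net is just matrix multiplies and nonlinearities. A sketch matching the 3-input, 3-output diagram above (the hidden size of 5, the sigmoid activation, and the random weights are my own assumptions):

    % One forward pass through a 3-5-3 feedforward net.
    W1 = randn(5, 3);  b1 = randn(5, 1);    % input -> hidden weights/biases
    W2 = randn(3, 5);  b2 = randn(3, 1);    % hidden -> output weights/biases
    sigmoid = @(z) 1 ./ (1 + exp(-z));
    x = [0.1; 0.8; 0.5];                    % an HSV feature vector
    hidden = sigmoid(W1 * x + b1);          % hidden layer: functions of input
    output = sigmoid(W2 * hidden + b2);     % one output per class
    [~, class] = max(output);               % 1=apple, 2=orange, 3=banana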

4 XOR example
2 inputs, 1 hidden layer of 5 neurons, 1 output
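XOR is the classic function a single perceptron cannot compute but a net with one hidden layer can. A hand-built sketch (it uses 2 hidden threshold units for clarity rather than the 5 learned units in the demo):

    % XOR from hand-picked weights and threshold units.
    step = @(z) double(z > 0);
    for x = [0 0; 0 1; 1 0; 1 1]'          % each column is one input pair
        h1 = step(x(1) + x(2) - 0.5);      % hidden unit 1: OR
        h2 = step(x(1) + x(2) - 1.5);      % hidden unit 2: AND
        y  = step(h1 - h2 - 0.5);          % output: OR but not AND = XOR
        fprintf('%d XOR %d = %d\n', x(1), x(2), y);
    end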

5 Backpropagation algorithm
Initialize all weights randomly.
For each labeled example:
a. Calculate the output using the current network (feedforward).
b. Update the weights across the network, from output to input, using Hebbian learning (feedback).
Repeat, iterating until convergence. Epsilon decreases at every iteration.
Matlab does this for you: matlabNeuralNetDemo.m
Architecture of the demo: 1 input, 1 hidden layer of 5 neurons, 1 output. Each hidden and output neuron has a bias (so 6 biases), and each link has a weight (so 10 weights).
[Figure: network diagram annotated with (a) calculate output (feedforward) and (b) update weights (feedback), repeated.]
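A sketch of letting Matlab do the training, assuming the Neural Network Toolbox (the actual matlabNeuralNetDemo.m may use different calls and data):

    % Fit a 1-input, 5-hidden-neuron, 1-output net to a toy function.
    x = linspace(-1, 1, 100);      % 1 input
    t = sin(3 * x);                % illustrative 1-output target
    net = feedforwardnet(5);       % 1 hidden layer of 5 neurons
    net = train(net, x, t);        % backpropagation training
    y = net(x);                    % feedforward pass on new inputs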

6 Parameters
Most networks are reasonably robust with respect to the learning rate and how the weights are initialized.
However, figuring out how to normalize your input and determine the architecture of your net is a black art. You might need to experiment.
One hint: re-run the network with different initial weights and different architectures, test performance each time on a validation set, and pick the best, as in the sketch below.
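A sketch of that selection loop (trainAndScore is a hypothetical wrapper, assumed to train on the training set and return accuracy on a held-out validation set):

    % Pick the best architecture and initialization by validation accuracy.
    bestAcc = -Inf;  bestSize = NaN;
    for hiddenSize = [3 5 10 20]           % candidate architectures
        for trial = 1:5                    % different random initial weights
            acc = trainAndScore(hiddenSize, xTrain, tTrain, xVal, tVal);
            if acc > bestAcc
                bestAcc = acc;  bestSize = hiddenSize;
            end
        end
    end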

7 References This is just the tip of the iceberg! See:
Sonka, pp.
Laurene Fausett. Fundamentals of Neural Networks. Prentice Hall, 1994. Approachable for beginners.
C.M. Bishop. Neural Networks for Pattern Recognition. Oxford University Press, 1995. A technical reference focused on the art of constructing networks (learning rate, number of hidden layers, etc.).
The Matlab neural net help is very good.

8 Multiclass scene classification
Neural nets: make one node per class in the output layer and take the max.
SVMs: train 1 SVM per class. For example:
1 sunset vs. all others
1 beach vs. all others
1 field vs. all others
Assign the class of the max positive output, as sketched below.
Note that this isn't multi-label scene classification (google it).
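A one-vs-rest prediction sketch (models and svmScore are hypothetical: one trained model per class, and a scoring function like the one on the next slide):

    % Pick the class whose SVM gives the largest output for sample x.
    classes = {'sunset', 'beach', 'field'};
    scores = zeros(1, numel(classes));
    for k = 1:numel(classes)
        scores(k) = svmScore(models{k}, x);   % one SVM per class
    end
    [~, best] = max(scores);                  % assign class of max output
    label = classes{best};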

9 How does an SVM compute its score?
The score y1 is just the weighted sum of the contributions of the individual support vectors, plus a bias. With a kernel of width s (d = data dimension, e.g., 294):
y1 = sum over i = 1..numSupVecs of alpha_i * exp(-||x - sv_i||^2 / s^2) + bias
numSupVecs, svcoeff (alpha), and bias are learned during training.
Note: looking at which of your training examples are support vectors can be revealing! (Keep this in mind for the sunset detector and the term project.)
This is a much easier computation than training; it was easy to implement on a device without MATLAB (e.g., a smartphone).
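A sketch of that sum (variable names follow the slide; the model struct and the exact RBF normalization are my assumptions):

    % Score one d-by-1 test vector x against a trained SVM.
    % model.supVecs: numSupVecs-by-d, model.alpha: numSupVecs-by-1,
    % model.bias: scalar, model.s: kernel width.
    function y1 = svmScore(model, x)
        y1 = model.bias;
        for i = 1:size(model.supVecs, 1)
            diff = model.supVecs(i, :)' - x;
            y1 = y1 + model.alpha(i) * exp(-(diff' * diff) / model.s^2);
        end
    end
    % sign(y1) gives the predicted class; the magnitude is the confidence.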

10 More SVMs vs. Neural Nets
SVMs: empirically shown to have good generalizability even with relatively small training sets and no domain knowledge.
The classification runtime and space are O(sd), where s is the number of support vectors and d is the dimensionality of the feature vectors.
In the worst case, s = the size of the whole training set (like nearest neighbor), so the runtime is no worse than a neural net with s perceptrons in the hidden layer. But if s is too high, generalization is poor.
Neural networks: you can tune the architecture. Lots of parameters!
Training can take a long time with large data sets; consider that you'll want to experiment with parameters.
But performance can be excellent with larger networks and lots of training data.
Stay tuned for deep nets next time!

11 CSSE463 Roadmap

