Presentation on theme: "Computer Vision Lecture 7: Classifiers" (Oleh Tretiak, © 2005) — Presentation transcript:

1 Computer Vision Lecture 7 Classifiers

2 This Lecture
Bayesian decision theory (22.1, 22.2)
–General theory
–Gaussian distributions
–Nearest neighbor classifier
–Histogram method
Feature selection (22.3)
–Principal component analysis
–Canonical variables
Neural networks (22.4)
–Structure
–Error criterion
–Gradient descent
Support vector machines (SVM) (22.5)

3 Motivation
Task: Find faces in an image
Method:
–While (image not explored)
  Obtain the pixels from a rectangular ROI in the image (call this set of pixels x)
  Classify according to the set of values
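
A minimal NumPy sketch of this sliding-window loop; the window size, stride, and the classify_window function are illustrative assumptions, not details taken from the slides.

```python
import numpy as np

def sliding_window_search(image, classify_window, win=(24, 24), stride=8):
    """Scan the image with a rectangular ROI and collect the windows the
    classifier labels as faces. classify_window is a hypothetical function
    returning True for a face-like block of pixels."""
    H, W = image.shape[:2]
    h, w = win
    detections = []
    for r in range(0, H - h + 1, stride):
        for c in range(0, W - w + 1, stride):
            x = image[r:r + h, c:c + w]        # the set of pixels x
            if classify_window(x.ravel()):     # classify according to these values
                detections.append((r, c, h, w))
    return detections
```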

4 Example of Result
Figure 22-5 from the text. The classifier has found the faces in the image and indicated their locations with polygonal figures.

5 Conceptual Framework
Assume that there are two classes, 1 and 2
We know p(1|x) and p(2|x)
Intuitive classification: choose the class with the larger posterior probability (refined by decision theory on the next slide)

6 Decision Theory
What are the consequences of an incorrect decision?
Bayes method
–Assume a loss (cost) of L(1→2) if we assign an object of class 1 to category 2, and L(2→1) if we assign an object of class 2 to category 1.
To minimize the average loss we decide as follows:
–Choose 1 if L(1→2)P(1|x) > L(2→1)P(2|x)
–Choose 2 otherwise
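
A minimal sketch of this two-class Bayes rule; the posterior values and loss figures in the example are made-up numbers for illustration.

```python
def bayes_decide(p1_given_x, p2_given_x, loss_1_as_2, loss_2_as_1):
    """Choose class 1 if the expected loss of choosing class 2,
    L(1->2)*P(1|x), exceeds the expected loss of choosing class 1,
    L(2->1)*P(2|x); otherwise choose class 2."""
    return 1 if loss_1_as_2 * p1_given_x > loss_2_as_1 * p2_given_x else 2

# Posteriors 0.4 / 0.6, but misclassifying class 1 is three times as costly,
# so class 1 is still the minimum-loss choice.
print(bayes_decide(0.4, 0.6, loss_1_as_2=3.0, loss_2_as_1=1.0))  # -> 1
```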

7 Experimental Approaches
We are given a training (learning) sample (x_i, y_i) of data vectors x_i and their classes y_i, and we need to construct the classifier.
Estimate p(1|x), p(2|x) and build the classifier
–Parametric method
–Nearest neighbor method
–Histogram method
Or: use a classifier of specified structure and adjust its parameters from the training sample.

8 Example: Gaussian Classifier
Assume we have data vectors x_{k,i} for classes i = 1, 2. The probabilities Pr{i} are known. The Bayes loss is equal for both categories.
Estimate the means and covariances from the training data.
Classifier: given an unknown x, compute g(i|x) for each class.
–Choose the class that has the lower value of g(i|x)
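
The slide's formula for g(i|x) did not survive extraction; the sketch below assumes the standard Gaussian (quadratic) discriminant g(i|x) = (x − μ_i)ᵀ Σ_i⁻¹ (x − μ_i) + ln|Σ_i| − 2 ln Pr{i}, which is consistent with "choose the lower value".

```python
import numpy as np

def fit_gaussian(X):
    """Estimate the mean and covariance of one class from its training vectors (rows of X)."""
    mu = X.mean(axis=0)
    sigma = np.cov(X, rowvar=False)
    return mu, sigma

def g(x, mu, sigma, prior):
    """Quadratic discriminant; smaller values indicate a better match."""
    d = x - mu
    return d @ np.linalg.inv(sigma) @ d + np.log(np.linalg.det(sigma)) - 2 * np.log(prior)

def classify_gaussian(x, params):
    """params: list of (mu, sigma, prior), one per class; return the class index with the lower g."""
    scores = [g(x, mu, sigma, prior) for mu, sigma, prior in params]
    return int(np.argmin(scores))
```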

9 Example: Nearest Neighbor
Assume we have data vectors x_{k,i} for classes i = 1, 2.
Classifier: given an unknown x
–Find d_i = min_k ||x − x_{k,i}||, the distance from x to the nearest training vector of class i
–Choose the i with the smaller value of d_i
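
A minimal NumPy sketch of this two-class nearest-neighbor rule; Euclidean distance is assumed.

```python
import numpy as np

def nearest_neighbor_classify(x, class_samples):
    """class_samples: list of arrays, one per class, each with training
    vectors as rows. Return the index of the class whose nearest
    training vector is closest to x (Euclidean distance)."""
    d = [np.min(np.linalg.norm(X - x, axis=1)) for X in class_samples]
    return int(np.argmin(d))
```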

10 Example

11 Histogram Method
Assume we have data vectors x_{k,i} for classes i = 1, 2. The Bayes loss is equal for both categories.
Divide the input space into J ‘bins’, with J < N_1, N_2 (the training-set sizes).
Find h_{i,j}, the number of training vectors of category i in bin j.
Given an unknown x: find its bin j, then decide according to which h_{i,j} is higher.
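
A minimal one-dimensional sketch of the histogram rule; the bin edges and the use of np.histogram/np.digitize are illustrative assumptions.

```python
import numpy as np

def fit_histogram(samples_1, samples_2, edges):
    """Count the training values of each class in every bin (1-D features for simplicity)."""
    h1, _ = np.histogram(samples_1, bins=edges)
    h2, _ = np.histogram(samples_2, bins=edges)
    return h1, h2

def classify_histogram(x, edges, h1, h2):
    """Find the bin of x and decide for the class with the higher count in that bin."""
    j = int(np.clip(np.digitize(x, edges) - 1, 0, len(h1) - 1))
    return 1 if h1[j] >= h2[j] else 2
```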

12 Curse of Dimension
If the number of measurements in x is very large, we are in trouble:
–The covariance matrix Σ will be singular when the dimension exceeds the number of training samples (it is then not possible to find Σ⁻¹, and ln|Σ| = −∞). The Gaussian method does not work.
–It is hard to divide the space of x into bins. The histogram method does not work.
Remedy: feature extraction?

13 Principal Component Analysis: Picture

14 Principal Component Analysis: Formulas
Find λ_i and v_i, the eigenvalues and eigenvectors of Σ.
Choose xᵀv_i for the large eigenvalues as features (project x onto the leading eigenvectors).
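
A minimal NumPy sketch of this projection; the number of retained components k is an assumption.

```python
import numpy as np

def pca_features(X, k=2):
    """Project the rows of X onto the k eigenvectors of the covariance
    matrix with the largest eigenvalues."""
    mu = X.mean(axis=0)
    sigma = np.cov(X - mu, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(sigma)        # eigenvalues in ascending order
    V = eigvecs[:, np.argsort(eigvals)[::-1][:k]]   # leading k eigenvectors as columns
    return (X - mu) @ V                             # the x.v_i features
```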

15 Issues
Data may not be linearly separable
Principal components may not be appropriate

16 Canonical Transformation
For two categories: the Fisher linear discriminant function
Use vᵀx as the only feature, where v separates the class means relative to the within-class scatter.
See the textbook for the general formula.
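
A minimal sketch assuming the standard two-class Fisher direction v = S_W⁻¹(μ₁ − μ₂), where S_W is the pooled within-class scatter; this is my reconstruction of the formula the slide refers to.

```python
import numpy as np

def fisher_direction(X1, X2):
    """Compute the Fisher linear discriminant direction for two classes
    whose training vectors are the rows of X1 and X2."""
    mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)
    S_w = np.cov(X1, rowvar=False) + np.cov(X2, rowvar=False)  # pooled within-class scatter
    v = np.linalg.solve(S_w, mu1 - mu2)
    return v / np.linalg.norm(v)

# The scalar feature for a new vector x is then v @ x.
```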

17 Neural Networks
Nonlinear classifiers
Parameters are found by an iterative (slow) training method
Can be very effective

18 Two-Layer Neural Network
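
A minimal sketch of a two-layer network trained by gradient descent on a squared-error criterion; the layer sizes, sigmoid activation, and learning rate are assumptions, not the specific architecture shown on the slide.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_two_layer(X, y, hidden=8, lr=0.1, epochs=1000, seed=0):
    """X: (N, d) inputs, y: (N,) labels in {0, 1}. Returns the weight matrices (W1, W2)."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.1, size=(X.shape[1], hidden))
    W2 = rng.normal(scale=0.1, size=(hidden, 1))
    Y = y.reshape(-1, 1)
    for _ in range(epochs):
        H = sigmoid(X @ W1)                 # hidden-layer activations
        out = sigmoid(H @ W2)               # network output
        err = out - Y                       # derivative of the squared-error criterion
        d_out = err * out * (1 - out)       # backpropagate through the output sigmoid
        d_hid = (d_out @ W2.T) * H * (1 - H)
        W2 -= lr * H.T @ d_out              # gradient-descent updates
        W1 -= lr * X.T @ d_hid
    return W1, W2
```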

19 Curse of Dimension
A high-order neural network has limited ability to generalize (and can be very slow to train)
Exploiting problem structure can improve performance

20 Useful Preprocessing
Brightness/contrast equalization
Position, size adjustment
Angle adjustment
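
A minimal sketch of one common form of brightness/contrast equalization for a candidate window, normalizing it to zero mean and unit variance; this particular normalization is my assumption, not necessarily the one used in the text.

```python
import numpy as np

def equalize_window(x, eps=1e-8):
    """Remove brightness (mean) and contrast (standard deviation)
    variation from a window of pixels before classification."""
    x = x.astype(np.float64)
    return (x - x.mean()) / (x.std() + eps)
```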

21 Architecture for Face Search

22 Architecture for Character Recognition

23 Support Vector Machines (SVM)
General nonlinear (and linear) classifier
Structure similar to the perceptron
Training is based on sophisticated optimization theory
Currently the ‘Mercedes-Benz’ of classifiers.
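
A minimal usage sketch with scikit-learn's SVC; the library choice, RBF kernel, and toy data are my assumptions for illustration, not something the slides specify.

```python
import numpy as np
from sklearn.svm import SVC

# Toy training data: rows are feature vectors, labels are 1 (face) / 0 (non-face).
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y_train = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="rbf", C=1.0)     # nonlinear classifier via the RBF kernel
clf.fit(X_train, y_train)
print(clf.predict([[2.5, 2.5]]))   # classify a new feature vector
```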

24 Summary
Goal: Find object in image
Method: Look at windows, classify (template search with classifiers)
Issues:
–Classifier design
–Training
–Curse of dimension
–Generalization

25 Template Connections

