ECE 8443 – Pattern Recognition ECE 8527 – Introduction to Machine Learning and Pattern Recognition LECTURE 03: GAUSSIAN CLASSIFIERS Objectives: Whitening Transformations, Linear Discriminants, ROC Curves. Resources: D.H.S.: Chapter 2 (Part 3); K.F.: Intro to PR; X.Z.: PR Course; A.N.: Gaussian Discriminants; E.M.: Linear Discriminants; M.R.: Chernoff Bounds; S.S.: Bhattacharyya; T.T.: ROC Curves; NIST: DET Curves

ECE 8527: Lecture 03, Slide 1 Let α1 correspond to ω1, α2 to ω2, and λij = λ(αi|ωj). The conditional risks are: R(α1|x) = λ11P(ω1|x) + λ12P(ω2|x) and R(α2|x) = λ21P(ω1|x) + λ22P(ω2|x). Our decision rule is: choose ω1 if R(α1|x) < R(α2|x); otherwise decide ω2. This results in the equivalent rule: choose ω1 if (λ21 − λ11) p(x|ω1) P(ω1) > (λ12 − λ22) p(x|ω2) P(ω2); otherwise decide ω2. If the loss incurred for making an error is greater than that incurred for being correct, the factors (λ21 − λ11) and (λ12 − λ22) are both positive, and the rule becomes a threshold test on the likelihood ratio: decide ω1 if p(x|ω1)/p(x|ω2) > [(λ12 − λ22) P(ω2)] / [(λ21 − λ11) P(ω1)]. Two-Category Classification
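To make the bookkeeping concrete, here is a minimal sketch of the two-category minimum-risk rule in Python (NumPy); the loss matrix and posterior values are illustrative assumptions, not values from the lecture:

```python
import numpy as np

# Assumed loss matrix lam[i, j] = lambda(alpha_i | omega_j) and example posteriors.
lam = np.array([[0.0, 2.0],    # lambda_11, lambda_12
                [1.0, 0.0]])   # lambda_21, lambda_22
posteriors = np.array([0.3, 0.7])  # P(omega_1|x), P(omega_2|x)

# Conditional risk of each action: R(alpha_i|x) = sum_j lambda_ij * P(omega_j|x)
risks = lam @ posteriors
decision = np.argmin(risks) + 1     # choose the action with minimum conditional risk
print(f"R(alpha_1|x) = {risks[0]:.3f}, R(alpha_2|x) = {risks[1]:.3f} -> decide omega_{decision}")
```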

ECE 8527: Lecture 03, Slide 2 Minimum error rate classification: choose ωi if P(ωi|x) > P(ωj|x) for all j ≠ i. Likelihood Ratio

ECE 8527: Lecture 03, Slide 3 Define a set of discriminant functions gi(x), i = 1, …, c, and a decision rule: choose ωi if gi(x) > gj(x) ∀ j ≠ i. For a Bayes classifier, let gi(x) = −R(αi|x), because the maximum discriminant function will then correspond to the minimum conditional risk. For the minimum error rate case, let gi(x) = P(ωi|x), so that the maximum discriminant function corresponds to the maximum posterior probability. The choice of discriminant function is not unique: we can scale all of them by the same positive constant, shift them by the same additive constant, or replace gi(x) with a monotonically increasing function f(gi(x)). Multicategory Decision Surfaces
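As a sketch (with assumed example discriminants, not values from the slides), the multicategory rule is simply an argmax over the discriminant functions:

```python
import numpy as np

def classify(x, discriminants):
    """Return the index i of the class whose discriminant g_i(x) is largest."""
    scores = np.array([g(x) for g in discriminants])
    return int(np.argmax(scores))

# Illustrative discriminants for c = 3 classes (negative squared distance to a mean);
# any monotonically increasing transformation of these yields the same decisions.
gs = [lambda x, mu=mu: -np.sum((x - mu) ** 2)
      for mu in (np.array([0., 0.]), np.array([2., 2.]), np.array([4., 0.]))]

print(classify(np.array([1.8, 2.1]), gs))   # -> 1 (closest to the second mean)
```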

ECE 8527: Lecture 03, Slide 4 A classifier can be visualized as a connected graph with arcs and weights. What are the advantages of this type of visualization? Network Representation of a Classifier

ECE 8527: Lecture 03, Slide 5 Some monotonically increasing functions can simplify calculations considerably; in particular, taking the natural logarithm gives gi(x) = ln p(x|ωi) + ln P(ωi). Why is this log form particularly useful? Computational complexity (e.g., the exponential in a Gaussian disappears); numerical accuracy (e.g., probabilities tend to zero in high dimensions); decomposition (e.g., the likelihood and the prior are separated and can be weighted differently); normalization (e.g., likelihoods are channel dependent). Log Probabilities
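A minimal sketch (with assumed synthetic values) of why log probabilities matter numerically: the product of many small likelihoods underflows, while the sum of their logs stays representable:

```python
import numpy as np

rng = np.random.default_rng(0)
# 2000 small likelihood values (e.g., per-sample Gaussian densities).
likelihoods = rng.uniform(1e-3, 1e-1, size=2000)

prod = np.prod(likelihoods)             # underflows to 0.0 in double precision
log_sum = np.sum(np.log(likelihoods))   # large negative, but perfectly representable

print(prod)      # 0.0
print(log_sum)   # a finite value that can still be compared across classes
```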

ECE 8527: Lecture 03, Slide 6 We can visualize our decision rule several ways: choose ωi if gi(x) > gj(x) ∀ j ≠ i. Decision Surfaces

ECE 8527: Lecture 03, Slide 7 A classifier that places a pattern in one of two classes is often referred to as a dichotomizer. We can reshape the decision rule around a single discriminant, g(x) = g1(x) − g2(x), and decide ω1 if g(x) > 0. If we use the log of the posterior probabilities: g(x) = ln [p(x|ω1)/p(x|ω2)] + ln [P(ω1)/P(ω2)]. A dichotomizer can thus be viewed as a machine that computes a single discriminant function and classifies x according to its sign (e.g., support vector machines). Two-Category Case

ECE 8527: Lecture 03, Slide 8 Why is it convenient to convert an arbitrary distribution into a spherical one? (Hint: Euclidean distance.) Consider the whitening transformation Aw = Φ Λ^(−1/2), where Φ is the matrix whose columns are the orthonormal eigenvectors of Σ and Λ is the diagonal matrix of its eigenvalues (Σ = Φ Λ Φ^t). Note that Φ is orthonormal (Φ^t Φ = I). What is the covariance of y = Aw^t x (for zero-mean x)? E[yy^t] = E[(Aw^t x)(Aw^t x)^t] = Λ^(−1/2) Φ^t E[xx^t] Φ Λ^(−1/2) = Λ^(−1/2) Φ^t Σ Φ Λ^(−1/2) = Λ^(−1/2) Φ^t (Φ Λ Φ^t) Φ Λ^(−1/2) = Λ^(−1/2) Λ Λ^(−1/2) = I Coordinate Transformations
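A minimal NumPy sketch of the whitening transform (random data with an assumed covariance, not lecture data): estimate Σ, eigendecompose it, apply y = Aw^t x, and check that the transformed covariance is approximately the identity:

```python
import numpy as np

rng = np.random.default_rng(1)
# Correlated 2-D data with an assumed covariance structure.
Sigma_true = np.array([[3.0, 1.2],
                       [1.2, 1.0]])
X = rng.multivariate_normal(mean=[0.0, 0.0], cov=Sigma_true, size=10000)

Sigma = np.cov(X, rowvar=False)              # estimated covariance
evals, Phi = np.linalg.eigh(Sigma)           # Sigma = Phi diag(evals) Phi^t
A_w = Phi @ np.diag(evals ** -0.5)           # whitening transform A_w = Phi Lambda^(-1/2)

Y = X @ A_w                                  # rows are y^t = x^t A_w, i.e., y = A_w^t x
print(np.cov(Y, rowvar=False).round(3))      # ~ identity matrix
```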

ECE 8527: Lecture 03, Slide 9 The weighted Euclidean distance r² = (x − μ)^t Σ^(−1) (x − μ) is known as the Mahalanobis distance, and represents a statistically normalized distance calculation that results from our whitening transformation: it is the ordinary Euclidean distance computed in the whitened space. Consider an example using our Java Applet. Mahalanobis Distance
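A short sketch (assumed mean, covariance, and test point) showing that the Mahalanobis distance equals the Euclidean distance after whitening:

```python
import numpy as np

mu = np.array([1.0, 2.0])
Sigma = np.array([[3.0, 1.2],
                  [1.2, 1.0]])
x = np.array([2.5, 2.0])

# Mahalanobis distance directly from the definition r^2 = (x-mu)^t Sigma^-1 (x-mu)
d = x - mu
r2_direct = d @ np.linalg.inv(Sigma) @ d

# Same quantity as a squared Euclidean distance in the whitened coordinates y = A_w^t x
evals, Phi = np.linalg.eigh(Sigma)
A_w = Phi @ np.diag(evals ** -0.5)
y, y_mu = A_w.T @ x, A_w.T @ mu
r2_whitened = np.sum((y - y_mu) ** 2)

print(r2_direct, r2_whitened)   # the two values agree
```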

ECE 8527: Lecture 03, Slide 10 Recall our discriminant function for minimum error rate classification: gi(x) = ln p(x|ωi) + ln P(ωi). For a multivariate normal distribution, p(x|ωi) ~ N(μi, Σi), this becomes gi(x) = −½ (x − μi)^t Σi^(−1) (x − μi) − (d/2) ln 2π − ½ ln |Σi| + ln P(ωi). Consider the case Σi = σ²I (statistically independent features with equal, class-independent variance). Discriminant Functions
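A minimal sketch (assumed class parameters) of the general Gaussian discriminant gi(x) evaluated in the log domain:

```python
import numpy as np

def gaussian_discriminant(x, mu, Sigma, prior):
    """g_i(x) = -1/2 (x-mu)^t Sigma^-1 (x-mu) - d/2 ln(2 pi) - 1/2 ln|Sigma| + ln P(omega_i)."""
    d = len(mu)
    diff = x - mu
    return (-0.5 * diff @ np.linalg.inv(Sigma) @ diff
            - 0.5 * d * np.log(2 * np.pi)
            - 0.5 * np.log(np.linalg.det(Sigma))
            + np.log(prior))

# Assumed two-class example: pick the class with the larger discriminant.
x = np.array([0.8, 0.3])
g1 = gaussian_discriminant(x, np.array([0.0, 0.0]), np.eye(2), prior=0.6)
g2 = gaussian_discriminant(x, np.array([2.0, 1.0]), np.eye(2), prior=0.4)
print("decide omega_1" if g1 > g2 else "decide omega_2")
```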

ECE 8527: Lecture 03, Slide 11 For Σi = σ²I, the discriminant function can be reduced to gi(x) = −‖x − μi‖²/(2σ²) + ln P(ωi), since the remaining terms (−(d/2) ln 2π and −½ ln |Σi|) are constant w.r.t. the maximization. We can expand this: gi(x) = −(1/(2σ²)) [x^t x − 2 μi^t x + μi^t μi] + ln P(ωi). The term x^t x is constant w.r.t. i and can be dropped, and μi^t μi is a constant that can be precomputed. Gaussian Classifiers

ECE 8527: Lecture 03, Slide 12 We can therefore use an equivalent linear discriminant function, gi(x) = wi^t x + wi0, with wi = μi/σ² and wi0 = −μi^t μi/(2σ²) + ln P(ωi); wi0 is called the threshold or bias for the i-th category. A classifier that uses linear discriminant functions is called a linear machine. The decision surfaces are defined by the equation gi(x) = gj(x) for the two categories with the highest discriminant values. Linear Machines
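A sketch (assumed means, variance, and priors) of the precomputed linear machine for the Σi = σ²I case:

```python
import numpy as np

sigma2 = 1.5
means = [np.array([0.0, 0.0]), np.array([2.0, 1.0]), np.array([1.0, 3.0])]
priors = [0.5, 0.3, 0.2]

# Precompute the linear weights w_i = mu_i / sigma^2 and biases
# w_i0 = -mu_i^t mu_i / (2 sigma^2) + ln P(omega_i).
W = np.stack([mu / sigma2 for mu in means])
w0 = np.array([-(mu @ mu) / (2 * sigma2) + np.log(p) for mu, p in zip(means, priors)])

def classify(x):
    return int(np.argmax(W @ x + w0))   # linear machine: argmax_i  w_i^t x + w_i0

print(classify(np.array([1.8, 0.9])))   # -> 1 (the second class) for this example
```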

ECE 8527: Lecture 03, Slide 13 This has a simple geometric interpretation: when the priors are equal and the support regions are spherical, the decision boundary lies halfway between the means, and the classifier reduces to a minimum Euclidean distance classifier. Threshold Decoding

ECE 8527: Lecture 03, Slide 14 General Case for Gaussian Classifiers

ECE 8527: Lecture 03, Slide 15 Case: Σi = σ²I. This can be rewritten as: Identity Covariance

ECE 8527: Lecture 03, Slide 16 Case: Σi = Σ. Equal Covariances

ECE 8527: Lecture 03, Slide 17 Arbitrary Covariances

ECE 8527: Lecture 03, Slide 18 Typical Examples of 2D Classifiers

ECE 8527: Lecture 03, Slide 19 How do we compare two decision rules if they require different thresholds for optimum performance? Consider four probabilities: the probability of a hit (detection), a false alarm, a miss, and a correct rejection. Receiver Operating Characteristic (ROC)
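A minimal sketch (synthetic scores from assumed Gaussian distributions) of sweeping a decision threshold and tracing hit rate versus false-alarm rate, i.e., the operating points of an ROC curve:

```python
import numpy as np

rng = np.random.default_rng(2)
# Assumed one-dimensional scores: class omega_2 ("signal") shifted above omega_1 ("noise").
noise = rng.normal(0.0, 1.0, 5000)
signal = rng.normal(1.5, 1.0, 5000)

thresholds = np.linspace(-4, 6, 200)
hit_rate = [(signal > t).mean() for t in thresholds]     # P(decide signal | signal)
false_alarm = [(noise > t).mean() for t in thresholds]   # P(decide signal | noise)

# Each (false_alarm, hit_rate) pair is one operating point on the ROC curve.
for fa, hr in list(zip(false_alarm, hit_rate))[::50]:
    print(f"P(false alarm) = {fa:.3f}, P(hit) = {hr:.3f}")
```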

ECE 8527: Lecture 03, Slide 20 An ROC curve is typically monotonic but not necessarily symmetric. One system can be considered superior to another only if its ROC curve lies above that of the competing system over the operating region of interest. General ROC Curves

ECE 8527: Lecture 03, Slide 21 Summary Decision Surfaces: the geometric interpretation of a Bayesian classifier. Gaussian Distributions: how is the shape of the distribution influenced by the mean and covariance? Bayesian classifiers for Gaussian distributions: how do the decision surfaces and decision regions change as a function of the mean and covariance? Bounds on performance (i.e., Chernoff, Bhattacharyya) are useful abstractions for obtaining closed-form solutions to problems. A Receiver Operating Characteristic (ROC) curve is a very useful way to analyze performance and select operating points for systems. Discrete features can be handled in a way completely analogous to continuous features.

ECE 8527: Lecture 03, Slide 22 The Bayes decision rule guarantees the lowest average error rate, and a closed-form solution exists for two-class Gaussian distributions, but the full calculation is difficult in high-dimensional spaces. Bounds provide a way to gain insight into a problem and engineer better solutions. We need the following inequality: min[a, b] ≤ a^β b^(1−β) for a, b ≥ 0 and 0 ≤ β ≤ 1. Assume a > b without loss of generality, so min[a, b] = b. Also, a^β b^(1−β) = (a/b)^β b and (a/b)^β ≥ 1. Therefore, b ≤ (a/b)^β b, which implies min[a, b] ≤ a^β b^(1−β). Apply this to our standard expression for P(error). Error Bounds
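A quick numerical sanity check of the inequality (arbitrary assumed values of a, b, and β):

```python
import numpy as np

a, b = 0.7, 0.2                        # assumed values with a > b, so min[a, b] = b
for beta in np.linspace(0.0, 1.0, 6):
    bound = a ** beta * b ** (1 - beta)
    print(f"beta = {beta:.1f}: min[a,b] = {min(a, b):.3f} <= a^beta b^(1-beta) = {bound:.3f}")
```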

ECE 8527: Lecture 03, Slide 23 Recall: P(error) = ∫ min[P(ω1) p(x|ω1), P(ω2) p(x|ω2)] dx ≤ P(ω1)^β P(ω2)^(1−β) ∫ p(x|ω1)^β p(x|ω2)^(1−β) dx for 0 ≤ β ≤ 1. Note that this integral is over the entire feature space, not the decision regions (which makes it simpler to evaluate). If the conditional probabilities are normal, this expression can be simplified further. Chernoff Bound

ECE 8527: Lecture 03, Slide 24 If the conditional probabilities are normal, our bound can be evaluated analytically: P(error) ≤ P(ω1)^β P(ω2)^(1−β) e^(−k(β)), where k(β) = [β(1−β)/2] (μ2 − μ1)^t [βΣ1 + (1−β)Σ2]^(−1) (μ2 − μ1) + ½ ln( |βΣ1 + (1−β)Σ2| / (|Σ1|^β |Σ2|^(1−β)) ). Procedure: find the value of β that minimizes exp(−k(β)), and then compute P(error) using the bound. Benefit: this is a one-dimensional optimization over β. Chernoff Bound for Normal Densities
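A sketch (assumed class parameters, grid search rather than an analytic minimizer) that evaluates k(β), minimizes the Chernoff bound over β, and compares it with the Bhattacharyya bound at β = 0.5:

```python
import numpy as np

mu1, mu2 = np.array([0.0, 0.0]), np.array([2.0, 1.0])
S1, S2 = np.eye(2), np.array([[2.0, 0.5], [0.5, 1.0]])
P1, P2 = 0.5, 0.5

def k(beta):
    S = beta * S1 + (1 - beta) * S2
    dmu = mu2 - mu1
    quad = 0.5 * beta * (1 - beta) * dmu @ np.linalg.inv(S) @ dmu
    logdet = 0.5 * np.log(np.linalg.det(S) /
                          (np.linalg.det(S1) ** beta * np.linalg.det(S2) ** (1 - beta)))
    return quad + logdet

betas = np.linspace(0.01, 0.99, 99)
bounds = [P1 ** b * P2 ** (1 - b) * np.exp(-k(b)) for b in betas]
b_star = betas[int(np.argmin(bounds))]

print(f"Chernoff bound: {min(bounds):.4f} at beta = {b_star:.2f}")
print(f"Bhattacharyya bound (beta = 0.5): {np.sqrt(P1 * P2) * np.exp(-k(0.5)):.4f}")
```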

ECE 8527: Lecture 03, Slide 25 The Chernoff bound is loose for extreme values of β. The Bhattacharyya bound is obtained by setting β = 0.5: P(error) ≤ √(P(ω1)P(ω2)) e^(−k(1/2)), where k(1/2) is the Bhattacharyya distance. These bounds can still be used if the distributions are not Gaussian (why? hint: Occam's Razor); however, they might not be adequately tight. Bhattacharyya Bound