Chapter 4 (Part 1): Non-Parametric Classification


Chapter 4 (Part 1): Non-Parametric Classification
Topics: Introduction, Density Estimation, Parzen Windows
All materials used in this course were taken from the textbook "Pattern Classification" by Duda et al., John Wiley & Sons, 2001, with the permission of the authors and the publisher.

Introduction
The classical parametric densities are unimodal (they have a single local maximum), whereas many practical problems involve multimodal densities. Nonparametric procedures can be used with arbitrary distributions and without assuming that the forms of the underlying densities are known. There are two types of nonparametric methods:
- Estimating the class-conditional densities p(x | ωj)
- Bypassing density estimation and going directly to a-posteriori probability estimation

Density Estimation
Basic idea: the probability that a vector x falls in a region R is

$P = \int_{R} p(x')\,dx' \qquad (1)$

P is a smoothed (or averaged) version of the density function p(x). If we have a sample of size n drawn independently from p(x), the probability that exactly k of the n points fall in R follows the binomial law:

$P_k = \binom{n}{k}\,P^{k}(1-P)^{n-k} \qquad (2)$

and the expected value of k is

$E[k] = nP \qquad (3)$

The maximum-likelihood estimate of P = θ (the value maximizing Pk) is

$\hat{\theta} = \frac{k}{n}$

Therefore, the ratio k/n is a good estimate for the probability P, and hence for the density function p. If p(x) is continuous and the region R is so small that p does not vary significantly within it, we can write:

$P = \int_{R} p(x')\,dx' \approx p(x)\,V \qquad (4)$

where x is a point within R and V is the volume enclosed by R.

Combining equations (1), (3), and (4) yields the basic density estimate:

$p(x) \approx \frac{k/n}{V}$

Density Estimation (cont'd)
Justification of equation (4): we assume that p(x) is continuous and that the region R is so small that p does not vary significantly within R. Since p(x') ≈ p(x) = constant over R, it can be taken outside the integral:

$P = \int_{R} p(x')\,dx' \approx p(x) \int_{R} dx' = p(x)\,V$

where $\int_{R} dx' = V$ is the measure of R: an area in $\mathbb{R}^2$, a volume in $\mathbb{R}^3$, and a hypervolume in $\mathbb{R}^n$ in general.

Condition for convergence
The fraction k/(nV) is a space-averaged value of p(x); the true p(x) is obtained only in the limit V → 0. With a fixed, finite sample, however, two degenerate cases arise as V shrinks:
- If R shrinks until it contains no samples, the estimate is p(x) ≈ 0: an uninteresting case.
- If one or more samples happen to coincide with x, the estimate diverges to infinity: equally uninteresting.

The volume V needs to approach 0 anyway if we want p(x) rather than an averaged version of it. Practically, V cannot be allowed to become arbitrarily small, since the number of samples is always limited, so one must accept a certain amount of variance in the ratio k/n. Theoretically, if an unlimited number of samples were available, we could circumvent this difficulty. To estimate the density at x, we form a sequence of regions R1, R2, … containing x: the first region is used with one sample, the second with two samples, and so on. Let Vn be the volume of Rn, kn the number of samples falling in Rn, and pn(x) the nth estimate of p(x):

$p_n(x) = \frac{k_n/n}{V_n} \qquad (7)$

Three necessary conditions must hold if we want pn(x) to converge to p(x):

$\lim_{n\to\infty} V_n = 0, \qquad \lim_{n\to\infty} k_n = \infty, \qquad \lim_{n\to\infty} \frac{k_n}{n} = 0$

There are two different ways of obtaining sequences of regions that satisfy these conditions (contrasted in the sketch below):
(a) Shrink an initial region by specifying the volume as a function of n, e.g. $V_n = V_1/\sqrt{n}$, and show that pn(x) converges to p(x). This is called the Parzen-window estimation method.
(b) Specify kn as some function of n, such as $k_n = \sqrt{n}$, and grow the volume Vn until it encloses kn neighbors of x. This is called the kn-nearest-neighbor estimation method.
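The two schedules can be contrasted with a minimal one-dimensional sketch (illustrative only; the sample size, query point, and V1 are arbitrary choices, not values from the slides):

```python
import numpy as np

rng = np.random.default_rng(1)
samples = rng.standard_normal(1000)   # samples from p(x) = N(0, 1)
n = len(samples)
x = 0.0                               # query point

# (a) Parzen window: fix the volume V_n = V_1/sqrt(n), then count samples
V1 = 1.0
Vn = V1 / np.sqrt(n)
kn = np.sum(np.abs(samples - x) < Vn / 2)   # samples in a window of length Vn
p_parzen = (kn / n) / Vn

# (b) k-nearest neighbors: fix k_n = sqrt(n), then grow V_n to enclose them
kn = int(np.sqrt(n))
dists = np.sort(np.abs(samples - x))
Vn = 2 * dists[kn - 1]                      # interval just reaching the kn-th neighbor
p_knn = (kn / n) / Vn

print(f"Parzen: {p_parzen:.3f}   kNN: {p_knn:.3f}   true p(0): {1/np.sqrt(2*np.pi):.3f}")
```

Both follow the same master formula pn(x) = (kn/n)/Vn; they differ only in whether Vn or kn is fixed in advance.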


Parzen Windows
The Parzen-window approach to density estimation temporarily assumes that the region Rn is a d-dimensional hypercube with edge length hn, so that its volume is $V_n = h_n^d$. Define the window function

$\varphi(u) = \begin{cases} 1 & |u_j| \le 1/2, \quad j = 1, \dots, d \\ 0 & \text{otherwise} \end{cases}$

Then $\varphi\!\left((x - x_i)/h_n\right)$ is equal to unity if xi falls within the hypercube of volume Vn centered at x, and equal to zero otherwise.

The number of samples in this hypercube is:

$k_n = \sum_{i=1}^{n} \varphi\!\left(\frac{x - x_i}{h_n}\right)$

By substituting kn into equation (7), we obtain the Parzen-window estimate:

$p_n(x) = \frac{1}{n} \sum_{i=1}^{n} \frac{1}{V_n}\,\varphi\!\left(\frac{x - x_i}{h_n}\right)$

pn(x) estimates p(x) as an average of window functions of x and the samples xi (i = 1, …, n). These window functions φ can be quite general: any non-negative φ that integrates to one keeps pn(x) a legitimate density.
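A minimal sketch of this estimator with the hypercube window, in d dimensions (the function name, test point, and sample sizes are illustrative, not from the slides):

```python
import numpy as np

def parzen_hypercube(x, samples, h):
    """Parzen-window estimate p_n(x) with a hypercube window of edge h.

    x       : (d,) query point
    samples : (n, d) array of training samples
    h       : edge length h_n of the hypercube
    """
    n, d = samples.shape
    V = h ** d
    # phi((x - x_i)/h) = 1 iff every coordinate of x_i lies within h/2 of x
    inside = np.all(np.abs((x - samples) / h) <= 0.5, axis=1)
    k = np.sum(inside)
    return (k / n) / V

rng = np.random.default_rng(2)
samples = rng.standard_normal((5000, 2))             # p(x) = N(0, I) in 2-D
print(parzen_hypercube(np.zeros(2), samples, h=0.5))
# true value at the origin: 1/(2*pi) ~ 0.159
```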

Illustration
Behavior of the Parzen-window method in the case where p(x) ~ N(0, 1). Let the window be a Gaussian,

$\varphi(u) = \frac{1}{\sqrt{2\pi}}\, e^{-u^2/2}$

and let $h_n = h_1/\sqrt{n}$ (n > 1; h1 is a known parameter). Thus

$p_n(x) = \frac{1}{n} \sum_{i=1}^{n} \frac{1}{h_n}\,\varphi\!\left(\frac{x - x_i}{h_n}\right)$

is an average of normal densities centered at the samples xi.

Numerical results:
For n = 1 and h1 = 1, the estimate is a single Gaussian centered on the first sample: p1(x) = φ(x − x1). For n = 10 and h1 = 0.1, the contributions of the individual samples are clearly observable as separate narrow bumps.
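A sketch reproducing this experiment (a minimal version under the setup above; the grid of evaluation points and the random seed are arbitrary choices):

```python
import numpy as np

def parzen_gauss(x, samples, h1):
    """1-D Parzen estimate with a Gaussian window and h_n = h1/sqrt(n)."""
    n = len(samples)
    hn = h1 / np.sqrt(n)
    u = (x[:, None] - samples[None, :]) / hn        # (points, samples)
    phi = np.exp(-u**2 / 2) / np.sqrt(2 * np.pi)    # Gaussian window values
    return phi.sum(axis=1) / (n * hn)               # average of scaled windows

rng = np.random.default_rng(3)
x = np.linspace(-2, 2, 5)                 # evaluation points
for n in (1, 10, 100, 10_000):
    samples = rng.standard_normal(n)      # samples from p(x) = N(0, 1)
    print(n, np.round(parzen_gauss(x, samples, h1=1.0), 3))
# as n grows, the printed rows approach the true N(0, 1) density on the grid
```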

[Figures: Parzen-window estimates of the N(0, 1) density for increasing n and several window widths h1; the estimates converge to the true density as n grows]

Analogous results are obtained in two dimensions, as illustrated:

[Figure: two-dimensional Parzen-window estimates for increasing n]

Case where p(x) = 1. U(a,b) + 2 Case where p(x) = 1.U(a,b) + 2.T(c,d) (unknown density) (mixture of a uniform and a triangle density) Dr. Djamel Bouchaffra CSE 616 Applied Pattern Recognition, Ch4 (Part 1)


Classification example
In classifiers based on Parzen-window estimation, we estimate the class-conditional densities for each category and classify a test point by the label corresponding to the maximum posterior. The decision region of a Parzen-window classifier depends upon the choice of window function (and its width), as illustrated in the following figure.
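A minimal sketch of such a classifier (illustrative only; it assumes equal priors and a Gaussian window, and all names and the two-class toy data are invented for the example):

```python
import numpy as np

def parzen_density(x, samples, h):
    """Gaussian-window Parzen estimate of a d-dimensional density at x."""
    n, d = samples.shape
    u = (x - samples) / h
    phi = np.exp(-0.5 * np.sum(u**2, axis=1)) / (2 * np.pi) ** (d / 2)
    return phi.sum() / (n * h**d)

def classify(x, class_samples, h=0.5):
    """Assign x to the class with the largest estimated posterior.

    With equal priors, maximizing P(w_j | x) reduces to maximizing
    the estimated class-conditional density p(x | w_j).
    """
    scores = [parzen_density(x, s, h) for s in class_samples]
    return int(np.argmax(scores))

rng = np.random.default_rng(4)
class_samples = [
    rng.normal(loc=[0.0, 0.0], scale=1.0, size=(200, 2)),   # class 0
    rng.normal(loc=[3.0, 3.0], scale=1.0, size=(200, 2)),   # class 1
]
print(classify(np.array([2.5, 2.7]), class_samples))        # expect class 1
```

Shrinking h makes the estimated densities, and hence the decision boundary, follow the training samples more closely; enlarging h smooths both.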

[Figure: decision regions of a Parzen-window classifier for different window-function choices]