
1 Digital Image Processing & Pattern Analysis (CSCE 563) Introduction to Pattern Analysis Prof. Amr Goneid Department of Computer Science & Engineering The American University in Cairo

2 Introduction to Pattern Analysis
Patterns & Pattern Analysis
Pattern Classes
Minimum Distance (Nearest Neighbor) Classifiers
Structural Matching Classifiers
Information Measure Classifiers
Learning-Based Classifiers

3 1. Patterns & Pattern Analysis
Patterns are object descriptors.
The objective of pattern analysis is to classify objects into distinct and separable classes.
A pattern consists of a finite set of "features" or "attributes"; this set is usually called a "Feature Vector".
The "Dimensionality" of a pattern is the number (n) of features in the feature vector.

4 Patterns & Pattern Analysis
Features can be:
Quantitative or Qualitative
Numerical or Binary
Ordinal or Non-Ordinal
The process of mapping object descriptors to a Feature Vector is called "Feature Extraction".
The process of reducing the number of attributes while maximizing the separability between classes is called "Dimensionality Reduction".

5 Feature Vector
(Diagram: an object is mapped by Feature Extraction to a k-dimensional Feature Vector, which Dimensionality Reduction maps to an n-dimensional Feature Vector X = (x_1 x_2 ... x_n)^T, with n < k.)

6 Feature Vector (Example)
(Diagram: boundary pixels are mapped to k-dimensional boundary Fourier Descriptors, which are sorted and truncated to an n-dimensional Feature Vector X = (x_1 x_2 ... x_n)^T, with n < k.)
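As a concrete illustration of this pipeline, the following minimal Python/NumPy sketch (not part of the slides) maps a closed boundary, given as a list of (x, y) pixel coordinates, to an n-dimensional feature vector of sorted and truncated Fourier-descriptor magnitudes; the function name and the normalization choices are illustrative assumptions.

import numpy as np

def fourier_descriptor_features(boundary_xy, n_keep):
    # Represent each boundary pixel as a complex number x + jy
    z = np.array([complex(x, y) for x, y in boundary_xy])
    # k Fourier descriptors, one per boundary pixel
    descriptors = np.fft.fft(z)
    # Drop the DC term (absolute position) and use magnitudes (rotation invariance)
    mags = np.abs(descriptors[1:])
    # Sort by significance and truncate to n_keep < k features
    features = np.sort(mags)[::-1][:n_keep]
    # Normalize by the largest magnitude (scale invariance)
    return features / features[0]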

7 Dimensionality Reduction
Removal of features with insignificant weights (ordering or sorting, then truncation)
Removal of non-orthogonal features (discovering association)
Selecting features that maximize the ratio of inter-class to intra-class variance
Methods: Discriminant Analysis, PCA (see the sketch below)
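Since PCA is one of the listed methods, here is a minimal NumPy sketch of PCA-based dimensionality reduction (an illustrative assumption, not code from the course): the rows of X are k-dimensional feature vectors, and the result keeps the n_components directions of largest variance.

import numpy as np

def pca_reduce(X, n_components):
    # Center the data
    Xc = X - X.mean(axis=0)
    # k x k covariance matrix of the features
    cov = np.cov(Xc, rowvar=False)
    # Eigen-decomposition; eigh returns eigenvalues in ascending order
    eigvals, eigvecs = np.linalg.eigh(cov)
    # Keep the n_components eigenvectors with the largest eigenvalues
    order = np.argsort(eigvals)[::-1][:n_components]
    W = eigvecs[:, order]
    # Project onto the principal directions: k-dimensional -> n-dimensional
    return Xc @ W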

8 2. Pattern Classes
Classifiers
(Diagram: a classifier receives a feature vector x and assigns it to one of the pattern classes w_1, w_2, ..., w_M.)

9 Feature Space & Separability
(Diagrams: clusters (classes) in feature space; linearly separable classes w_1 and w_2; non-separable classes w_1 and w_2.)

10 3. Minimum Distance (Nearest Neighbor) Classifiers
Distance Measures: search for the minimum distance to a class "center" ("Nearest Neighbor").
(Diagram: an unknown pattern x and the centers of classes w_1 and w_2.)

11 Class Means
n features: x_1, x_2, ..., x_n
Feature vector of unknown pattern: X
M pattern classes: w_1, w_2, ..., w_M
Sample patterns for class (j), L = 1, 2, ..., N_j, have sample feature vectors S_jL
Class mean for class (j): m_j = (1/N_j) Σ_L S_jL = (m_1 m_2 ... m_n)_j^T

12 Euclidean Distance
For the unknown pattern X, the squared Euclidean distance from the "center" of class w_j is:
D_j^2(X) = || X - m_j ||^2 = (X - m_j)^T (X - m_j) = X^T X - 2 { X^T m_j - (1/2) m_j^T m_j }
The "Affinity Function" a_j(X) = X^T m_j - (1/2) m_j^T m_j changes with the class (j); since X^T X is the same for every class, X is assigned to the class whose affinity is maximum (equivalently, whose distance is minimum).
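The following short Python/NumPy sketch (illustrative; the names class_means, affinity and classify are assumptions, not course code) implements this minimum-distance rule: compute each class mean from its samples, then assign an unknown pattern to the class with the largest affinity.

import numpy as np

def class_means(samples_by_class):
    # m_j = (1/N_j) * sum of the sample feature vectors S_jL of class j
    return {j: np.mean(np.asarray(S), axis=0) for j, S in samples_by_class.items()}

def affinity(x, m):
    # a_j(X) = X^T m_j - (1/2) m_j^T m_j
    return x @ m - 0.5 * (m @ m)

def classify(x, means):
    # Maximum affinity <=> minimum Euclidean distance to the class mean
    x = np.asarray(x, dtype=float)
    return max(means, key=lambda j: affinity(x, means[j]))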

13 Decision Boundary
The decision boundary between class w_i and class w_j is: a_ij(X) = a_i(X) - a_j(X) = 0
If a_ij(X) > 0, pattern X belongs to class w_i
If a_ij(X) < 0, pattern X belongs to class w_j

14 Example: w_1: ballet dancers, w_2: weight lifters
Features: x_1 = weight in kg, x_2 = height in cm
Samples have means: m_1 = (60 170)^T, m_2 = (120 160)^T
Unknown pattern X = (x_1 x_2)^T
The affinity functions are:
a_1(X) = 60 x_1 + 170 x_2 - 16250
a_2(X) = 120 x_1 + 160 x_2 - 20000

15 Example (continued)
The decision boundary is: a_12(X) = -60 x_1 + 10 x_2 + 3750 = 0
For the unknown pattern X:
if a_12(X) > 0, it belongs to the ballet dancers
if a_12(X) < 0, it belongs to the weight lifters
e.g. if x_1 = 100 kg and x_2 = 165 cm, then a_12(X) = -600 < 0, so the pattern is a weight lifter.
(Diagram: the line a_12(X) = 0 separating classes w_1 and w_2 in the weight-height plane.)
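Reusing the affinity and classify helpers sketched after slide 12 (the class labels below are illustrative), the slide's numbers can be checked directly:

means = {"ballet": np.array([60.0, 170.0]),
         "weight_lifter": np.array([120.0, 160.0])}
x = np.array([100.0, 165.0])        # unknown pattern: 100 kg, 165 cm

a12 = affinity(x, means["ballet"]) - affinity(x, means["weight_lifter"])
print(a12)                           # -600.0 < 0, i.e. on the weight-lifter side
print(classify(x, means))            # weight_lifter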

16 4. Structural Matching Classifiers
String Matching: two boundaries are coded as strings A and B.
A match occurs at position k if A(k) = B(k); M = number of matches.
Number of mismatches: Q = max(length(A), length(B)) - M
Q = 0 if A and B are identical.
Degree of matching: R = M / Q (R is infinite when A and B match perfectly, since Q = 0).
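A minimal Python sketch of this matching measure (the function name is an assumption; the convention of returning infinity when Q = 0 follows the remark above):

def match_degree(A, B):
    # M = number of positions k where A(k) = B(k)
    M = sum(1 for a, b in zip(A, B) if a == b)
    # Q = max(length(A), length(B)) - M = number of mismatches
    Q = max(len(A), len(B)) - M
    # R = M / Q; identical strings give Q = 0, hence infinite R
    return float('inf') if Q == 0 else M / Q

# Example: match_degree("abcadb", "abdadc") -> 4 matches, 2 mismatches, R = 2.0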

17 Example: Matching Polygons
Boundaries can be approximated by polygons.
Traversing the boundary anti-clockwise, we compute the interior angles between successive segments.
The angles are quantized into discrete regions, and each quantized angle is coded by a symbol.
The successive angles of a polygon are thus represented by a string of symbols that can be matched against the string of another polygon (see the sketch below).
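One possible encoding, shown as a hedged Python/NumPy sketch (the bin count, symbol alphabet and convex-polygon assumption are illustrative choices, not part of the slides):

import numpy as np

def polygon_angle_string(vertices, n_bins=8, symbols="abcdefgh"):
    # vertices: counter-clockwise list of (x, y) polygon corners; a convex polygon is
    # assumed, so each interior angle lies in [0, 180] degrees
    V = np.asarray(vertices, dtype=float)
    k = len(V)
    code = []
    for i in range(k):
        u = V[(i - 1) % k] - V[i]          # segment back to the previous vertex
        w = V[(i + 1) % k] - V[i]          # segment forward to the next vertex
        cos_a = np.dot(u, w) / (np.linalg.norm(u) * np.linalg.norm(w))
        angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
        # Quantize the interior angle into n_bins regions and code it by a symbol
        bin_idx = min(int(angle / (180.0 / n_bins)), n_bins - 1)
        code.append(symbols[bin_idx])
    return "".join(code)

# The resulting strings can be compared with match_degree() from the previous sketch.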

18 5. Information Measure Classifiers (Information Content)
For binary attributes. Information content of a set of n objects with only one attribute:
a = number of objects having the attribute; (n - a) = number of objects missing the attribute.
I = n log n - { a log a + (n - a) log (n - a) } for n > 1
I = 0 for n = 1

19 Diversity & Similarity
A set of n objects, i = 1, 2, ..., n, and m attributes, j = 1, 2, ..., m.
B_ij = 1 if object (i) has attribute (j), and zero otherwise.
a_j = Σ_i B_ij = number of objects having attribute (j); (n - a_j) = number of objects missing attribute (j).
Information content:
I(m, n, a) = m n log n - Σ_j { a_j log a_j + (n - a_j) log (n - a_j) } for n > 1
I(m, n, a) = 0 for n = 1

20 Diversity & Similarity
Let m = n.
For complete similarity, a_j = n for all j:
I = n^2 log n - Σ_j { n log n + 0 } = 0 (least information content)
For complete diversity, a_j = 1 for all j:
I = n^2 log n - n (n - 1) log (n - 1) > 0 (maximum information content)
e.g. for n = m = 2 and base-2 logarithms, I = 4
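A small NumPy sketch (illustrative names; base-2 logarithms assumed, matching the numerical example above) that computes I(m, n, a) from a binary object-attribute matrix and reproduces the two limits:

import numpy as np

def xlogx(v):
    # Elementwise v * log2(v) with the convention 0 * log 0 = 0
    v = np.asarray(v, dtype=float)
    out = np.zeros_like(v)
    out[v > 0] = v[v > 0] * np.log2(v[v > 0])
    return out

def info_content(B):
    # B is an n x m binary matrix with B[i, j] = 1 if object i has attribute j
    n, m = B.shape
    if n <= 1:
        return 0.0                             # I = 0 for n = 1 (and for an empty set)
    a = B.sum(axis=0)                          # a_j = number of objects having attribute j
    return m * n * np.log2(n) - float(np.sum(xlogx(a) + xlogx(n - a)))

print(info_content(np.ones((2, 2), dtype=int)))  # complete similarity: 0.0
print(info_content(np.eye(2, dtype=int)))        # complete diversity, n = m = 2: 4.0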

21 Fusion & Division
Two object sets A, B. The objects in A are similar; the same holds for B, so I_A = I_B = 0.
Fusing A and B into C = A ∪ B, we gain information by ΔI = I_C - [ I_A + I_B ]
Dividing C into A and B with A ∩ B = ∅, we lose information by ΔI = I_C - [ I_A + I_B ]

22 Selecting Division Attributes
Dividing set C on attribute (j) produces two sets: A with n_j+ = a_j objects and B with n_j- = (n - a_j) objects.
Let I_j+ = I(m, n_j+, a) and I_j- = I(m, n_j-, a); then the gain from fusing A and B into C is
ΔI_j = I_C - [ I_A + I_B ] = I(m, n, a) - { I_j+ + I_j- }
The division attribute (j) is the one with maximum ΔI_j (see the sketch below).
The process is repeated for the next attribute after removing the first one.
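Continuing the sketch above (info_content() and NumPy as already defined and imported; best_division_attribute is an illustrative name), the division attribute can be selected as follows:

def best_division_attribute(B):
    # Gain of attribute j: dI_j = I(C) - [ I(A_j) + I(B_j) ], where A_j holds the
    # n_j+ = a_j objects having attribute j and B_j the n_j- = n - a_j objects missing it
    n, m = B.shape
    I_C = info_content(B)
    gains = []
    for j in range(m):
        A = B[B[:, j] == 1]
        B_minus = B[B[:, j] == 0]
        gains.append(I_C - (info_content(A) + info_content(B_minus)))
    # The division attribute is the one with maximum gain
    return int(np.argmax(gains)), gains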

23 Classification Tree
Recursive computation of the information gain produces a classification tree.
(Diagram: a binary classification tree; each internal node splits on a division attribute, with branches labeled 1 and 0 leading to child nodes.)

24 6. Learning-Based Classifiers (Neural Networks)
Neurons
Learning
Feed-Forward NN
Backpropagation NN
Other NN

