Digital Image Processing & Pattern Analysis (CSCE 563)
Introduction to Pattern Analysis
Prof. Amr Goneid
Department of Computer Science & Engineering, The American University in Cairo

Introduction to Pattern Analysis
- Patterns & Pattern Analysis
- Pattern Classes
- Minimum Distance (Nearest Neighbor) Classifiers
- Structural Matching Classifiers
- Information Measure Classifiers
- Learning-Based Classifiers

1. Patterns & Pattern Analysis
Patterns are object descriptors. The objective of pattern analysis is to classify objects into distinct and separable classes.
A pattern consists of a finite set of "features" or "attributes"; such a set is usually called a "Feature Vector".
The "Dimensionality" of a pattern is the number (n) of features in the feature vector.

Patterns & Pattern Analysis
Features can be:
- Quantitative or Qualitative
- Numerical or Binary
- Ordinal or Non-Ordinal
The process of mapping object descriptors to a Feature Vector is called "Feature Extraction".
The process of reducing the number of attributes while maximizing the separability between classes is called "Dimensionality Reduction".

Feature Vector
(Diagram: object -> mapper (Feature Extraction) -> k-dimensional Feature Vector -> Dimensionality Reduction -> n-dimensional Feature Vector)
X = (x_1, x_2, ..., x_n)^T, with n < k

Feature Vector (Example)
(Diagram: boundary pixels -> boundary Fourier descriptors (k-dimensional Feature Vector) -> sort & truncate -> n-dimensional Feature Vector)
X = (x_1, x_2, ..., x_n)^T, with n < k
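A minimal sketch of this pipeline in Python/NumPy, under stated assumptions: the boundary is an ordered list of (x, y) pixel coordinates, the descriptors are taken as FFT coefficient magnitudes, and keeping the n_keep largest magnitudes stands in for the "sort & truncate" step (n_keep is an illustrative choice, not a value from the slides):

```python
import numpy as np

def fourier_descriptor_features(boundary_xy, n_keep=16):
    """Map an ordered object boundary to a truncated Fourier-descriptor feature vector.

    boundary_xy : (k, 2) array of boundary pixel coordinates, traversed in order.
    n_keep      : number of descriptor magnitudes retained (n < k).
    """
    boundary_xy = np.asarray(boundary_xy, dtype=float)
    # Represent each boundary point as a complex number x + j*y.
    z = boundary_xy[:, 0] + 1j * boundary_xy[:, 1]
    # Fourier descriptors are the DFT coefficients of the boundary sequence.
    descriptors = np.fft.fft(z)
    # Use magnitudes, drop the DC term (translation), and sort so the most
    # significant components come first.
    mags = np.sort(np.abs(descriptors[1:]))[::-1]
    # Truncate to an n-dimensional feature vector, normalized by the largest
    # component to reduce scale dependence.
    return mags[:n_keep] / mags[0]

# Hypothetical usage: a coarse circular boundary of 64 points.
theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)
boundary = np.stack([50 + 20 * np.cos(theta), 50 + 20 * np.sin(theta)], axis=1)
print(fourier_descriptor_features(boundary, n_keep=8))
```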

Dimensionality Reduction
- Removal of features with insignificant weights (ordering or sorting, then truncation)
- Removal of non-orthogonal features (discovering associations), e.g. PCA
- Selection of features that maximize the ratio of inter-class to intra-class variance, e.g. Discriminant Analysis
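As a concrete illustration of the second approach, a minimal PCA sketch in NumPy (not taken from the slides); it projects k-dimensional feature vectors onto the n leading principal components:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Reduce feature vectors from k to n dimensions with PCA.

    X : (num_samples, k) matrix of feature vectors.
    Returns the (num_samples, n_components) projected data.
    """
    # Center the data so the principal components describe variance, not the mean.
    X_centered = X - X.mean(axis=0)
    # Eigen-decomposition of the covariance matrix gives orthogonal directions
    # ordered by how much variance they explain.
    cov = np.cov(X_centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]              # largest variance first
    components = eigvecs[:, order[:n_components]]
    # Project onto the leading components: the new, lower-dimensional features.
    return X_centered @ components

# Hypothetical usage: 100 samples with 10 features reduced to 3.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
print(pca_reduce(X, n_components=3).shape)         # (100, 3)
```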

2. Pattern Classes
(Diagram: a classifier maps an input pattern x to one of the M pattern classes w_1, w_2, ..., w_M)

Feature Space & Separability
(Diagram: classes w1 and w2 form clusters in feature space; the clusters may be linearly separable or non-separable)

3. Minimum Distance (Nearest Neighbor) Classifiers
Distance measures: search for the minimum distance from the unknown pattern x to a class "center" ("Nearest Neighbor").
(Diagram: unknown pattern x compared against the centers of classes w1 and w2)

Class Means
n features: x_1, x_2, ..., x_n; feature vector of the unknown pattern: X
M pattern classes: w_1, w_2, ..., w_M
Sample patterns for class (j), L = 1, 2, ..., N_j, have sample feature vectors S_jL
Class mean for class (j): m_j = (1/N_j) Σ_L S_jL = (m_1, m_2, ..., m_n)_j^T

Euclidean Distance
For the unknown pattern X, the squared Euclidean distance from the "center" of class w_j is:
D_j^2(X) = ||X - m_j||^2 = (X - m_j)^T (X - m_j) = X^T X - 2 { X^T m_j - (1/2) m_j^T m_j }
The "Affinity Function" a_j(X) = X^T m_j - (1/2) m_j^T m_j changes with class (j) and should be maximum: since X^T X is the same for all classes, minimizing D_j^2(X) is equivalent to maximizing a_j(X).
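A minimal sketch of the resulting minimum-distance classifier in Python/NumPy; the two class means in the usage example are made-up numbers for illustration (loosely in the spirit of the dancers/weight-lifters example that follows), not values from the slides:

```python
import numpy as np

def class_means(samples_by_class):
    """Compute m_j = (1/N_j) * sum_L S_jL for each class j.

    samples_by_class : list of (N_j, n) arrays, one per class.
    Returns an (M, n) array of class mean vectors.
    """
    return np.array([s.mean(axis=0) for s in samples_by_class])

def classify_min_distance(x, means):
    """Assign x to the class with the largest affinity a_j(x) = x^T m_j - 0.5 m_j^T m_j,
    which is equivalent to choosing the smallest Euclidean distance ||x - m_j||."""
    affinities = means @ x - 0.5 * np.sum(means * means, axis=1)
    return int(np.argmax(affinities))

# Hypothetical usage with two made-up classes in a 2-D feature space.
rng = np.random.default_rng(1)
class_0 = rng.normal(loc=[60.0, 170.0], scale=5.0, size=(20, 2))   # assumed means
class_1 = rng.normal(loc=[120.0, 160.0], scale=5.0, size=(20, 2))  # assumed means
means = class_means([class_0, class_1])
print(classify_min_distance(np.array([100.0, 165.0]), means))      # expected: 1
```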

Decision Boundary
The decision boundary between class w_i and class w_j is:
a_ij(X) = a_i(X) - a_j(X) = 0
If a_ij(X) > 0, pattern X belongs to class w_i; if a_ij(X) < 0, pattern X belongs to class w_j.

Example: w_1: ballet dancers, w_2: weight lifters
Features: x_1 = weight in kg, x_2 = height in cm
Samples have means m_1 = ( ... )^T and m_2 = ( ... )^T
Unknown pattern X = (x_1, x_2)^T
The affinity functions are:
a_1(X) = 60 x_1 + ... x_2 - ...
a_2(X) = 120 x_1 + ... x_2 - ...

Example (continued)
The decision boundary is: a_12(X) = -60 x_1 + ... = 0
For the unknown pattern X: if a_12(X) > 0, it belongs to the ballet dancers; if a_12(X) < 0, it belongs to the weight lifters.
e.g. if x_1 = 100 kg and x_2 = 165 cm, then the pattern is a weight lifter.
(Diagram: weight-height feature space showing classes w1 and w2 separated by the boundary a_12(X) = 0)

4. Structural Matching Classifiers
String Matching: two boundaries are coded as strings A and B. A match occurs at position k if A(k) = B(k).
M = # of matches
Number of mismatches: Q = max(length(A), length(B)) - M, so Q = 0 if A and B are identical.
Degree of matching: R = M / Q
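A minimal sketch of this matching measure in Python; the symbol strings are assumed to already code the two boundaries, and returning infinity when Q = 0 (identical strings) is a handling choice made here, not stated on the slide:

```python
def string_match_degree(a, b):
    """Degree of matching R = M / Q between two boundary strings.

    M = number of positions where the strings agree,
    Q = max(len(a), len(b)) - M  (the number of mismatches).
    """
    # Count positions k where A(k) = B(k) over the overlapping length.
    m = sum(1 for ca, cb in zip(a, b) if ca == cb)
    q = max(len(a), len(b)) - m
    return float("inf") if q == 0 else m / q

# Hypothetical usage with two short boundary codes.
print(string_match_degree("abcabd", "abcabc"))  # M = 5, Q = 1, R = 5.0
print(string_match_degree("abcabc", "abcabc"))  # identical strings: R = inf
```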

Example: Matching Polygons
Boundaries can be approximated by polygons. Traversing anti-clockwise, we compute the interior angles between successive segments.
The angles are quantized into regions, and each quantized angle is coded by a symbol.
The successive angles of a polygon are thus represented by a string of symbols that can be matched against the string of another polygon.
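A sketch of the coding step, assuming the polygon vertices are given in anti-clockwise order; the number of quantization regions (8 here) is an illustrative choice, not a value from the slides:

```python
import numpy as np

def polygon_angle_string(vertices, n_regions=8):
    """Code a polygon as a string of quantized interior-angle symbols.

    vertices  : (k, 2) array of polygon vertices in anti-clockwise order.
    n_regions : number of equal angle bins; each bin maps to one symbol.
    """
    symbols = []
    k = len(vertices)
    for i in range(k):
        # Vectors along the two edges meeting at vertex i.
        prev_v = vertices[i - 1] - vertices[i]
        next_v = vertices[(i + 1) % k] - vertices[i]
        # Interior angle between the two edges, in [0, pi].
        cos_a = np.dot(prev_v, next_v) / (np.linalg.norm(prev_v) * np.linalg.norm(next_v))
        angle = np.arccos(np.clip(cos_a, -1.0, 1.0))
        # Quantize the angle into one of n_regions bins and code it by a letter.
        bin_index = min(int(angle / np.pi * n_regions), n_regions - 1)
        symbols.append(chr(ord("a") + bin_index))
    return "".join(symbols)

# Hypothetical usage: a unit square gives four identical symbols (right angles).
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
print(polygon_angle_string(square))  # "eeee"
```

The resulting strings can then be compared with the matching degree R = M / Q defined on the previous slide.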

5. Information Measure Classifiers (Information Content)
For binary attributes: consider a set of n objects with only one attribute, where
a = # of objects having the attribute and (n - a) = # of objects missing the attribute.
The Information Content is:
I = n log n - { a log a + (n - a) log (n - a) }   for n > 1
I = 0   for n = 1
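A small sketch of this measure in Python; logs are taken to base 2 and 0 * log 0 is treated as 0, which are conventional assumptions rather than something stated on the slide:

```python
import math

def info_content_single(n, a):
    """Information content of n objects, a of which have the single binary attribute.

    I = n log n - (a log a + (n - a) log(n - a)) for n > 1, and 0 for n = 1.
    """
    if n <= 1:
        return 0.0

    def xlog2(x):
        # Convention: 0 * log 0 = 0, so a fully similar set carries no information.
        return x * math.log2(x) if x > 0 else 0.0

    return n * math.log2(n) - (xlog2(a) + xlog2(n - a))

print(info_content_single(4, 4))  # all objects alike: 0.0
print(info_content_single(4, 2))  # an even split carries the most information: 4.0
```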

Diversity & Similarity
A set of n objects, i = 1, 2, ..., n, and m attributes, j = 1, 2, ..., m.
B_ij = 1 if object (i) has attribute (j), and zero otherwise.
a_j = Σ_i B_ij = # of objects having attribute (j); (n - a_j) = # of objects missing attribute (j).
Information Content:
I(m, n, a) = m n log n - Σ_j { a_j log a_j + (n - a_j) log (n - a_j) }   for n > 1
I(m, n, a) = 0   for n = 1
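Extending the previous sketch to the m-attribute case, with the object-attribute matrix B given as a 0/1 NumPy array (again assuming base-2 logs and the 0 * log 0 = 0 convention); the two prints reproduce the limiting cases discussed on the next slide:

```python
import numpy as np

def xlog2(x):
    """Elementwise x * log2(x) with the convention 0 * log 0 = 0."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    mask = x > 0
    out[mask] = x[mask] * np.log2(x[mask])
    return out

def info_content(B):
    """I(m, n, a) for an (n, m) binary object-attribute matrix B."""
    n, m = B.shape
    if n <= 1:
        return 0.0
    a = B.sum(axis=0)   # a_j = number of objects having attribute j
    return m * n * np.log2(n) - (xlog2(a) + xlog2(n - a)).sum()

# Complete similarity (every object has every attribute): least information, I = 0.
print(info_content(np.ones((2, 2), dtype=int)))   # 0.0
# Complete diversity for n = m = 2 (each attribute held by exactly one object): I = 4.
print(info_content(np.eye(2, dtype=int)))         # 4.0
```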

Diversity & Similarity
Let m = n.
For complete similarity, a_j = n for all j:
I = n^2 log n - Σ_j { n log n + 0 } = 0   (least information content)
For complete diversity, a_j = 1 for all j:
I = n^2 log n - n (n - 1) log (n - 1) > 0   (maximum information content)
e.g. for n = m = 2 (with logs to base 2), I = 4

Fusion & Division
Two object sets A and B. Objects within A are all similar, and the same holds for B, so I_A = I_B = 0.
Fusing A and B into C = A ∪ B, we gain information: ΔI = I_C - [I_A + I_B]
Dividing C into A and B with A ∩ B = ∅, we lose information by the same amount: ΔI = I_C - [I_A + I_B]

Selecting Division Attributes
Dividing set C on attribute (j) produces two sets: A with n_j+ = a_j objects and B with n_j- = (n - a_j) objects.
Let I_j+ = I(m, n_j+, a) and I_j- = I(m, n_j-, a); then the gain from fusing A and B into C is:
ΔI_j = I_C - [I_A + I_B] = I(m, n, a) - { I_j+ + I_j- }
The division attribute (j) is the one that maximizes ΔI_j.
The process is then repeated for the next attribute after removing the first one.
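A self-contained sketch of this selection rule (it re-defines the same information-content helper so it can run on its own; base-2 logs are assumed, and the 4-object example matrix B is hypothetical):

```python
import numpy as np

def xlog2(x):
    """Elementwise x * log2(x) with the convention 0 * log 0 = 0."""
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    mask = x > 0
    out[mask] = x[mask] * np.log2(x[mask])
    return out

def info_content(B):
    """I(m, n, a) of an (n, m) binary object-attribute matrix B (0 when n <= 1)."""
    n, m = B.shape
    if n <= 1:
        return 0.0
    a = B.sum(axis=0)
    return m * n * np.log2(n) - (xlog2(a) + xlog2(n - a)).sum()

def best_division_attribute(B):
    """Return (j, gain) for the attribute j maximizing
    delta_I_j = I(C) - [I(A_j) + I(B_j)], where A_j / B_j hold the objects
    that do / do not possess attribute j."""
    i_c = info_content(B)
    gains = []
    for j in range(B.shape[1]):
        has_j = B[:, j] == 1
        gains.append(i_c - (info_content(B[has_j]) + info_content(B[~has_j])))
    j_best = int(np.argmax(gains))
    return j_best, gains[j_best]

# Hypothetical usage: 4 objects described by 3 binary attributes.
B = np.array([[1, 1, 0],
              [1, 0, 0],
              [0, 1, 1],
              [0, 0, 1]])
print(best_division_attribute(B))   # attribute 0 gives the largest gain here
```

Applying this selection recursively to the resulting subsets is what produces the classification tree mentioned on the next slide.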

Classification Tree
Recursive computation of the information gain produces a classification tree.

6. Learning-Based Classifiers (Neural Networks)
- Neurons
- Learning
- Feed-Forward NN
- Backpropagation NN
- Other NN