Line detection Given a binary image, we use F(α, X) = 0 as the parametric equation of a curve, with a vector of parameters α = [α_1, …, α_m] and X = [x_1, x_2] the coordinates of a pixel in the image.
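As a concrete illustration (a minimal sketch, not from the original slides), take F(α, X) = α_1 x_1 + α_2 x_2 - 1, the line case used throughout this example; a pixel lies on the curve when F vanishes within a tolerance:

```python
import numpy as np

def F(alpha, X):
    """Parametric line equation: F(alpha, X) = alpha . X - 1, zero on the line."""
    return np.dot(alpha, X) - 1.0

def on_curve(alpha, X, tol=1e-6):
    """A pixel X lies on the curve described by alpha when |F(alpha, X)| < tol."""
    return abs(F(alpha, X)) < tol

# The pixel (0.5, 0.5) lies on the line x_1 + x_2 = 1, i.e. alpha = (1, 1):
print(on_curve(np.array([1.0, 1.0]), np.array([0.5, 0.5])))  # True
```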

Current clustering problem The equations of the four lines are L_1: x_1 + x_2 = 1, L_2: x_1 - x_2 = 1, L_3: -x_1 - x_2 = 1, and L_4: -x_1 + x_2 = 1. Each line segment in the actual digital binary image consists of 100 pixels. The problem is to detect the parametric vectors (1, 1), (1, -1), (-1, -1), and (-1, 1), so that each line can be labeled according to the mapping rule defined above.
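The slide fixes the four line equations and 100 pixels per segment, but not the segment endpoints; the sketch below assumes each segment runs between the intersection points (1, 0), (0, 1), (-1, 0), (0, -1) of neighbouring lines, which the four equations imply:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_segment(p, q, n=100):
    """Draw n points uniformly on the segment from p to q."""
    t = rng.uniform(0.0, 1.0, size=(n, 1))
    return (1 - t) * np.asarray(p, float) + t * np.asarray(q, float)

# L1: x1+x2=1, L2: x1-x2=1, L3: -x1-x2=1, L4: -x1+x2=1; neighbouring lines
# intersect at (1,0), (0,1), (-1,0), (0,-1), which bound the four segments.
segments = {
    "L1": sample_segment((1, 0), (0, 1)),    # alpha = ( 1,  1)
    "L2": sample_segment((1, 0), (0, -1)),   # alpha = ( 1, -1)
    "L3": sample_segment((-1, 0), (0, -1)),  # alpha = (-1, -1)
    "L4": sample_segment((-1, 0), (0, 1)),   # alpha = (-1,  1)
}
pixels = np.vstack(list(segments.values()))  # 400 pixels in total
```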

Feature space transformation All the pixels mapped into the same parametric vector constitute a curve; each parametric vector stands for one curve in the image. The feature space in this section refers to the parametric vector space. Thus, we transform each input sample pixel X = [x_1, x_2] into Z = [z_1, z_2] (which can be treated as a "virtual" data point in the feature space). Hence, the learning process is performed in the feature space with Z as the input stimulus.
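The slides do not spell out how a single pixel X is mapped to Z (one pixel only constrains α to a line in parameter space, since α . X = 1 is one equation in two unknowns). One concrete, hypothetical realisation, continuing the data sketch above, solves for α from random pixel pairs; pairs drawn from the same image line land exactly on that line's parameter vector, while mixed pairs scatter:

```python
import numpy as np

rng = np.random.default_rng(1)

def to_parameter_space(pixels, n_pairs=2000):
    """Map pixels into parameter space by solving [X_i; X_j] @ alpha = [1, 1]
    for random pixel pairs; alpha is the line through the two pixels."""
    n = len(pixels)
    Z = []
    for _ in range(n_pairs):
        i, j = rng.choice(n, size=2, replace=False)
        A = np.vstack([pixels[i], pixels[j]])
        if abs(np.linalg.det(A)) < 1e-8:  # pixels collinear with the origin
            continue
        Z.append(np.linalg.solve(A, np.ones(2)))
    return np.array(Z)

Z = to_parameter_space(pixels)  # dense clusters at (±1, ±1) from same-line pairs
```

With four equal segments, roughly a quarter of random pairs are same-line pairs, so the four clusters at (±1, ±1) dominate the scatter.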

Questions About the prior knowledge:
- What if we do not know that the data points are distributed along lines?
- What if we do not transform the original feature space into the parametric space?
- What results will be achieved if we apply the clustering algorithm only on the original feature space? (Probed in the sketch below.)
- Does the distance between data points in the original space reveal the ‘real’ distance?
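To probe the last two questions, here is a deliberately naive sketch (an assumption, not the slides' experiment): plain k-means applied directly to the raw pixels from the sketch above:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=2):
    """Plain Lloyd's algorithm, just enough to probe the question above."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None] - centers[None]) ** 2).sum(-1)
        labels = np.argmin(d, axis=1)
        centers = np.array([X[labels == c].mean(axis=0) if np.any(labels == c)
                            else centers[c] for c in range(k)])
    return centers, labels

# On the raw pixels, Euclidean distance knows nothing about collinearity: the
# centers settle near segment midpoints, and pixels near the corners of the
# diamond are split between neighbouring lines almost arbitrarily.
centers, labels = kmeans(pixels, k=4)
```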

Discussions
- Any good solutions for the automatic transformation of the feature space? PCA, ICA, LDA (a minimal PCA sketch follows below).
- How to deal with a high-dimensional feature space? Kernel-based methods, hyperplanes.
- How to measure the similarity of data points? Should each feature dimension contribute equally to the similarity?
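PCA is the first transformation the slide names; a minimal numpy version (a generic sketch, not tied to the slides' data) looks like this:

```python
import numpy as np

def pca(X, k):
    """Project X onto its k leading principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # rows of Vt: directions
    return Xc @ Vt[:k].T

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 10))  # toy high-dimensional data
Y = pca(X, k=2)                 # reduced to two dimensions
```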

Clusters’ scales There are five clusters, S_1, …, S_5, in the data set. Among them, S_3, S_4, and S_5 overlap to some degree. The numbers of sample points for these five clusters are 150, 200, 300, 250, and 250, respectively, and their corresponding Gaussian variances are 0.10, 0.12, 0.15, 0.18, and 0.20.
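A sketch reproducing this data set under stated assumptions: the slide gives the sizes and variances but not the cluster centres, so the centres below are hypothetical, chosen only so that S_3, S_4, and S_5 overlap as described:

```python
import numpy as np

rng = np.random.default_rng(4)

sizes     = [150, 200, 300, 250, 250]
variances = [0.10, 0.12, 0.15, 0.18, 0.20]
# Centres are not given on the slide; these hypothetical ones keep S1 and S2
# isolated while S3, S4 and S5 sit close enough to overlap, as described.
centres = np.array([(0.0, 3.0), (3.0, 3.0), (0.0, 0.0), (0.6, 0.6), (1.2, 0.0)])

data = np.vstack([
    rng.normal(loc=c, scale=np.sqrt(v), size=(n, 2))
    for c, v, n in zip(centres, variances, sizes)
])  # 1150 sample points in five Gaussian clusters
```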

Current results The figure demonstrates the splitting processes and learning trajectories obtained by SSCL. As we can see, splitting occurred four times, so five clusters were finally discovered; each cluster was associated with a prototype located at its center. According to the nearest-neighbor condition, each sample point was labeled by its nearest prototype (see the labeling sketch below).
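The nearest-prototype labeling rule itself is simple; the sketch below implements it, reusing the hypothetical centres from the previous sketch as stand-in prototypes (SSCL would learn them instead):

```python
import numpy as np

def label_by_prototype(X, prototypes):
    """Nearest-neighbour condition: each sample point gets the index of its
    closest prototype."""
    d = ((X[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    return np.argmin(d, axis=1)

# SSCL would learn the prototypes; here the true centres from the sketch
# above stand in for them, purely for illustration.
labels = label_by_prototype(data, prototypes=centres)
```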

Questions
- How many clusters are there: 3 or 5?
- How to decide whether a set of data points should be split or not?
- Must all the clusters have the same scale?
- If the clusters have different scales, how to adaptively deal with it?

Discussions
- Any good evaluation methods for the number of clusters? Divergence and convergence. (One standard option, the silhouette score, is sketched below.)
- Transform the data points into a more suitable feature space?
- If the clusters have different scales, how to adaptively deal with it?
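One standard way (not from the slides) to evaluate the number of clusters is the silhouette score; a small numpy version, swept over candidate k with the k-means and data sketches above:

```python
import numpy as np

def silhouette(X, labels):
    """Mean silhouette: (b - a) / max(a, b) per point, where a is the mean
    intra-cluster distance and b the mean distance to the closest other
    cluster; values near 1 indicate well-separated clusters."""
    D = np.sqrt(((X[:, None] - X[None]) ** 2).sum(-1))
    ks = np.unique(labels)
    n = len(X)
    s = np.empty(n)
    for i in range(n):
        same = (labels == labels[i]) & (np.arange(n) != i)
        a = D[i, same].mean()
        b = min(D[i, labels == k].mean() for k in ks if k != labels[i])
        s[i] = (b - a) / max(a, b)
    return s.mean()

# Sweep candidate k with the kmeans sketch above; the k with the highest
# score is a reasonable estimate (and speaks to the "3 or 5" question).
for k in (3, 4, 5, 6):
    centers, labels = kmeans(data, k, seed=5)
    print(k, round(silhouette(data, labels), 3))
```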