Presentation transcript:

Made by: Maor Levy, Temple University, 2012

Many data points, no labels.

Example: Google Street View

Many data points, no labels.

K-means clustering:
- Choose a fixed number of clusters k.
- Choose cluster centers and point-to-cluster allocations to minimize error; this cannot be done by exhaustive search, because there are too many possible allocations.
- Algorithm (alternate until convergence; a sketch follows below):
  - fix the cluster centers, and allocate each point to the closest center;
  - fix the allocation, and compute the best cluster centers (the mean of each cluster's points).
- x can be any set of features for which we can compute a distance (be careful about scaling).
* From Marc Pollefeys, COMP 256
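A minimal sketch (not from the course materials) of the alternating procedure above; the function and variable names are my own, and random-point initialization is an arbitrary choice.

```python
import numpy as np

def kmeans(X: np.ndarray, k: int, iters: int = 100, seed: int = 0):
    rng = np.random.default_rng(seed)
    # Initialize centers at k distinct data points
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Fix centers: allocate each point to the closest cluster center
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # Fix allocation: recompute each center as the mean of its points
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels
```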

Results of K-means clustering: K-means clustering using intensity alone and color alone. (Figure panels: original image, clusters on intensity, clusters on color; a usage sketch follows below.)
* From Marc Pollefeys, COMP 256
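A small usage sketch (my own; the image array, number of clusters, and the use of scikit-learn's KMeans are assumptions, not the course code) of clustering an image's pixels by intensity alone versus by color alone.

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_by_kmeans(image: np.ndarray, k: int = 4):
    """image: H x W x 3 float array in [0, 1]."""
    h, w, _ = image.shape
    color_features = image.reshape(-1, 3)                             # one RGB vector per pixel
    intensity_features = color_features.mean(axis=1, keepdims=True)   # grayscale value per pixel

    labels_color = KMeans(n_clusters=k, n_init=10).fit_predict(color_features)
    labels_intensity = KMeans(n_clusters=k, n_init=10).fit_predict(intensity_features)
    return labels_intensity.reshape(h, w), labels_color.reshape(h, w)
```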

K-means is an approximation to EM:
- Model (hypothesis space): mixture of N Gaussians.
- Latent variables: correspondence of data points and Gaussians.
- We notice:
  - given the mixture model, it is easy to calculate the correspondence;
  - given the correspondence, it is easy to estimate the mixture model.

- Data generated from a mixture of Gaussians.
- Latent variables: correspondence between data items and Gaussians.

EM alternates between an E-step and an M-step (figure); a sketch of EM for a mixture of Gaussians follows below.
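A minimal sketch of EM for a 1-D mixture of Gaussians, illustrating the E-step (soft correspondences, i.e. responsibilities) and the M-step (re-estimating weights, means, and variances). This is my own illustration under simplifying assumptions, not the slides' code.

```python
import numpy as np

def em_gmm(x: np.ndarray, k: int, iters: int = 100):
    n = len(x)
    rng = np.random.default_rng(0)
    means = rng.choice(x, size=k, replace=False)
    variances = np.full(k, np.var(x))
    weights = np.full(k, 1.0 / k)

    for _ in range(iters):
        # E-step: responsibility of each Gaussian for each point
        resp = np.empty((n, k))
        for j in range(k):
            resp[:, j] = weights[j] * np.exp(-0.5 * (x - means[j]) ** 2 / variances[j]) \
                         / np.sqrt(2 * np.pi * variances[j])
        resp /= resp.sum(axis=1, keepdims=True)

        # M-step: re-estimate weights, means, variances from the responsibilities
        nk = resp.sum(axis=0)
        weights = nk / n
        means = (resp * x[:, None]).sum(axis=0) / nk
        variances = (resp * (x[:, None] - means) ** 2).sum(axis=0) / nk
    return weights, means, variances
```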

EM converges!
- Proof [Neal/Hinton, McLachlan/Krishnan]: the E- and M-steps never decrease the data likelihood, and EM converges to a local optimum or saddle point.
- But it is subject to local minima, so initialization matters.

In practice the number of clusters is unknown, and the method suffers (badly) from local minima. One heuristic algorithm:
- start a new cluster center if many points are "unexplained";
- kill a cluster center that doesn't contribute;
- use an AIC/BIC criterion for all of this, if you want to be formal (a BIC sketch follows below).
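A minimal sketch (my own; it uses scikit-learn's GaussianMixture and an arbitrary candidate range) of choosing the number of clusters with BIC, as suggested above. Lower BIC is better.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def choose_k_by_bic(X: np.ndarray, k_max: int = 10) -> int:
    bics = []
    for k in range(1, k_max + 1):
        gmm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(X)
        bics.append(gmm.bic(X))
    # Return the number of components with the lowest BIC
    return int(np.argmin(bics)) + 1
```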

Data → Similarities → Block detection
* Slides from Dan Klein, Sep Kamvar, Chris Manning, Natural Language Group, Stanford University

Block matrices have block eigenvectors; near-block matrices have near-block eigenvectors [Ng et al., NIPS '02]. (Figure: running an eigensolver on a two-block similarity matrix gives eigenvalues λ1 = 2, λ2 = 2, λ3 = 0, λ4 = 0 with block-indicator eigenvectors; a slightly perturbed matrix gives nearly the same eigenvalues and eigenvectors. A small numerical check follows below.)
* Slides from Dan Klein, Sep Kamvar, Chris Manning, Natural Language Group, Stanford University
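A small numerical check (my own illustration) of the claim: a two-block similarity matrix has eigenvalues 2, 2, 0, 0, and its top eigenvectors span the space of block indicators.

```python
import numpy as np

# Two blocks of two perfectly similar items each
A = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 1, 1]], dtype=float)

eigvals, eigvecs = np.linalg.eigh(A)       # ascending eigenvalues
order = np.argsort(eigvals)[::-1]          # sort descending
print(eigvals[order])                      # [2. 2. 0. 0.]
print(eigvecs[:, order[:2]])               # top-2 eigenvectors span the block indicators
```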

We can put items into blocks by their eigenvector coordinates (plot each item against e1 and e2); the resulting clusters are independent of row ordering.
* Slides from Dan Klein, Sep Kamvar, Chris Manning, Natural Language Group, Stanford University

The key advantage of spectral clustering is the spectral-space representation: items are clustered by their coordinates in the top eigenvectors rather than by raw similarities (a sketch follows below).
* Slides from Dan Klein, Sep Kamvar, Chris Manning, Natural Language Group, Stanford University
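A minimal sketch (my own; not the exact normalized-cuts formulation from the slides) of spectral clustering in the style of [Ng et al., NIPS '02]: Gaussian affinities, a symmetrically normalized matrix, the top-k eigenvectors, then k-means in that spectral space.

```python
import numpy as np
from sklearn.cluster import KMeans

def spectral_cluster(X: np.ndarray, k: int, sigma: float = 1.0) -> np.ndarray:
    # Pairwise Gaussian (RBF) affinities
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))

    # Symmetrically normalized affinity matrix
    d = W.sum(axis=1)
    L = (W / np.sqrt(d)[:, None]) / np.sqrt(d)[None, :]

    # Top-k eigenvectors form the spectral embedding; cluster its rows with k-means
    eigvals, eigvecs = np.linalg.eigh(L)
    embedding = eigvecs[:, -k:]
    embedding /= np.linalg.norm(embedding, axis=1, keepdims=True) + 1e-12
    return KMeans(n_clusters=k, n_init=10).fit_predict(embedding)
```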

(Figure: intensity, texture, and distance cues.)
* From Marc Pollefeys, COMP 256

(Figure: M. Brand, MERL)

Principal Component Analysis (PCA):
- Fit a multivariate Gaussian.
- Compute the eigenvectors of its covariance matrix.
- Project onto the eigenvectors with the largest eigenvalues (a sketch follows below).
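A minimal sketch (my own illustration) of PCA exactly as listed: estimate the covariance, take its top eigenvectors, and project the centered data onto them.

```python
import numpy as np

def pca_project(X: np.ndarray, n_components: int):
    mean = X.mean(axis=0)
    Xc = X - mean
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)            # ascending eigenvalues
    components = eigvecs[:, ::-1][:, :n_components]   # eigenvectors with largest eigenvalues
    return Xc @ components, components, mean
```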

Mean face (after alignment). Slide credit: Santiago Serrano

Slide credit: Santiago Serrano

Nonlinear dimensionality reduction:
- Isomap
- Locally Linear Embedding (LLE)
(Isomap: M. Balasubramanian and E. Schwartz, Science. A usage sketch follows below.)
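A minimal usage sketch (my own; the swiss-roll dataset and the parameter values are arbitrary placeholders) of Isomap and LLE via scikit-learn.

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap, LocallyLinearEmbedding

# A classic nonlinear manifold: a 2-D sheet rolled up in 3-D
X, _ = make_swiss_roll(n_samples=1000, random_state=0)

X_isomap = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
X_lle = LocallyLinearEmbedding(n_neighbors=10, n_components=2).fit_transform(X)
```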

References:
- Sebastian Thrun and Peter Norvig, Artificial Intelligence, Stanford University (fall11.pdf).