CSE 185 Introduction to Computer Vision Pattern Recognition.

Computer vision: related topics

Pattern recognition One of the leading vision conferences: IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Pattern recognition and machine learning. Goal: making predictions or decisions from data.

Machine learning applications

Image categorization Training pipeline: Training Images → Training Image Features → Classifier Training (with Training Labels) → Trained Classifier.

Image categorization Training: Training Images → Training Image Features → Classifier Training (with Training Labels) → Trained Classifier. Testing: Test Image → Image Features → Trained Classifier → Prediction (e.g., "Outdoor").

Machine learning framework Apply a prediction function to a feature representation of the image to get the desired output: f(image of an apple) = "apple", f(image of a tomato) = "tomato", f(image of a cow) = "cow".

Machine learning framework y = f(x), where y is the output, f is the prediction function, and x is the image feature. Training: given a training set of labeled examples {(x_1, y_1), …, (x_N, y_N)}, estimate the prediction function f by minimizing the prediction error on the training set. Testing: apply f to a never-before-seen test example x and output the predicted value y = f(x).
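
A minimal sketch of this train/test loop in Python, assuming the images have already been converted to fixed-length feature vectors; the random data, the label meaning, and the choice of scikit-learn's LinearSVC as f are all placeholder assumptions:

```python
import numpy as np
from sklearn.svm import LinearSVC

# Placeholder data: N training images, each described by a d-dimensional feature vector.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 64))      # x_1 ... x_N (image features)
y_train = rng.integers(0, 2, size=100)    # y_1 ... y_N (labels, e.g. indoor/outdoor)

# Training: estimate the prediction function f by minimizing error on the training set.
f = LinearSVC()
f.fit(X_train, y_train)

# Testing: apply f to never-before-seen examples.
X_test = rng.normal(size=(20, 64))
y_pred = f.predict(X_test)                # y = f(x)
```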

Example: Scene categorization Is this a kitchen?

Image features: the Training Image Features stage of the pipeline (Training Images → Training Image Features → Classifier Training with Training Labels → Trained Classifier).

Image representation Coverage –Ensure that all relevant info is captured Concision –Minimize number of features without sacrificing coverage Directness –Ideal features are independently useful for prediction

Image representations Templates –Intensity, gradients, etc. Histograms –Color, texture, SIFT descriptors, etc. Features –PCA, local features, corners, etc.
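
As one concrete example of a histogram representation, a sketch that turns an RGB image into a joint color histogram feature vector; the random image and the 8 bins per channel are placeholder choices, and a real pipeline might use texture or SIFT histograms instead:

```python
import numpy as np

def color_histogram(image, bins=8):
    """Joint RGB histogram, flattened into a feature vector and normalized."""
    # image: H x W x 3 array with values in [0, 255]
    pixels = image.reshape(-1, 3)
    hist, _ = np.histogramdd(pixels, bins=(bins, bins, bins),
                             range=((0, 256), (0, 256), (0, 256)))
    hist = hist.flatten()
    return hist / hist.sum()              # 8*8*8 = 512-dimensional feature

# Example with a random "image"
image = np.random.randint(0, 256, size=(50, 50, 3))
feature = color_histogram(image)
```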

Classifiers: the Classifier Training stage of the pipeline (Training Images → Training Image Features → Classifier Training with Training Labels → Trained Classifier).

Learning a classifier Given some set of features with corresponding labels, learn a function to predict the labels from the features. (Figure: points labeled x and o in a 2-D feature space (x1, x2), separated by a decision boundary.)

Many classifiers to choose from SVM Neural networks Naïve Bayes Bayesian network Logistic regression Randomized Forests Boosted Decision Trees K-nearest neighbor RBMs Etc. Which is the best one?
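
There is no single best answer; since scikit-learn classifiers share the same fit/predict interface, one way to compare a few of them is cross-validation, as in this sketch (the random features, labels, and the particular classifiers and settings shown are arbitrary placeholder choices):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X = np.random.randn(300, 32)                   # placeholder image features
y = np.random.randint(0, 2, size=300)          # placeholder labels

classifiers = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "k-nearest neighbor": KNeighborsClassifier(n_neighbors=5),
    "random forest": RandomForestClassifier(n_estimators=100),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validation accuracy
    print(f"{name}: {scores.mean():.3f}")
```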

One way to think about it… Training labels dictate that two examples are the same or different, in some sense Features and distance measures define visual similarity Classifiers try to learn weights or parameters for features and distance measures so that visual similarity predicts label similarity

Machine learning Topics

Dimensionality reduction Principal component analysis (PCA) is the most important technique to know. It takes advantage of correlations in data dimensions to produce the best possible lower-dimensional representation, according to reconstruction error. PCA should be used for dimensionality reduction, not for discovering patterns or making predictions. Don't try to assign semantic meaning to the bases. Independent component analysis (ICA), locally linear embedding (LLE), isometric mapping (Isomap), …
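
A minimal PCA sketch via the singular value decomposition, assuming the data matrix X holds one example per row; the random data and the choice of k = 5 components are placeholders, and sklearn.decomposition.PCA would give an equivalent result:

```python
import numpy as np

def pca(X, k):
    """Project X (n_samples x n_features) onto its top-k principal components."""
    mean = X.mean(axis=0)
    Xc = X - mean                              # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                        # top-k directions of maximal variance
    Z = Xc @ components.T                      # low-dimensional representation
    X_recon = Z @ components + mean            # best rank-k reconstruction
    return Z, X_recon

X = np.random.randn(200, 50)
Z, X_recon = pca(X, k=5)
```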

Clustering example: image segmentation Goal: Break up the image into meaningful or perceptually similar regions

Segmentation for feature support 50x50 Patch

Segmentation for efficiency [Felzenszwalb and Huttenlocher 2004] [Hoiem et al. 2005, Mori 2005] [Shi and Malik 2001]

Segmentation as a result

Types of segmentations: oversegmentation, undersegmentation, multiple segmentations.

Segmentation approaches Bottom-up: group tokens with similar features Top-down: group tokens that likely belong to the same object [Levin and Weiss 2006]

Clustering Clustering: group together similar points and represent them with a single token Key Challenges: –What makes two points/images/patches similar? –How do we compute an overall grouping from pairwise similarities? Slide: Derek Hoiem

Why do we cluster? Summarizing data –Look at large amounts of data –Patch-based compression or denoising –Represent a large continuous vector with the cluster number Counting –Histograms of texture, color, SIFT vectors Segmentation –Separate the image into different regions Prediction –Images in the same cluster may have the same labels

How do we cluster? K-means –Iteratively re-assign points to the nearest cluster center Agglomerative clustering –Start with each point as its own cluster and iteratively merge the closest clusters Mean-shift clustering –Estimate modes of pdf Spectral clustering –Split the nodes in a graph based on assigned links with similarity weights

Clustering for Summarization Goal: cluster to minimize variance in data given clusters –Preserve information: c*, δ* = argmin_{c,δ} (1/N) Σ_j Σ_i δ_ij (c_i − x_j)², where x_j is a data point, c_i is a cluster center, and δ_ij indicates whether x_j is assigned to cluster i.

K-means algorithm Illustration: 1. Randomly select K centers 2. Assign each point to nearest center 3. Compute new center (mean) for each cluster

K-means algorithm Illustration: 1. Randomly select K centers 2. Assign each point to nearest center 3. Compute new center (mean) for each cluster Back to 2

K-means 1. Initialize cluster centers: c^0; t=0 2. Assign each point to the closest center 3. Update cluster centers as the mean of the points 4. Repeat 2-3 until no points are re-assigned (t=t+1) Slide: Derek Hoiem
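
A sketch of these four steps in NumPy; the synthetic 2-D data, the random initialization, and the iteration cap are placeholder choices:

```python
import numpy as np

def kmeans(X, K, max_iters=100, seed=0):
    rng = np.random.default_rng(seed)
    # 1. Initialize cluster centers by picking K random points.
    centers = X[rng.choice(len(X), K, replace=False)]
    labels = np.full(len(X), -1)
    for _ in range(max_iters):
        # 2. Assign each point to the closest center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        if np.array_equal(new_labels, labels):   # 4. stop when no point is re-assigned
            break
        labels = new_labels
        # 3. Update each cluster center as the mean of its points.
        for k in range(K):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    return centers, labels

X = np.vstack([np.random.randn(100, 2) + c for c in ([0, 0], [5, 5], [0, 5])])
centers, labels = kmeans(X, K=3)
```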

K-means converges to a local minimum

K-means: design choices Initialization –Randomly select K points as initial cluster centers –Or greedily choose K points to minimize residual Distance measures –Traditionally Euclidean, could be others Optimization –Will converge to a local minimum –May want to perform multiple restarts

K-means clustering using intensity or color (original image, clusters on intensity, clusters on color).
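
A sketch of color-based K-means segmentation, assuming the image is an H x W x 3 NumPy array; the synthetic image, K = 3, and the use of scikit-learn's KMeans are placeholder choices:

```python
import numpy as np
from sklearn.cluster import KMeans

image = np.random.rand(60, 80, 3)              # placeholder H x W x 3 image in [0, 1]
pixels = image.reshape(-1, 3)                  # one 3-D color vector per pixel

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pixels)
labels = kmeans.labels_.reshape(image.shape[:2])           # cluster index per pixel
segmented = kmeans.cluster_centers_[kmeans.labels_].reshape(image.shape)
```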

How to choose the number of clusters? Minimum Description Length (MDL) principle for model comparison Minimize the Schwarz Criterion –also called the Bayesian Information Criterion (BIC)

How to evaluate clusters? Generative –How well are points reconstructed from the clusters? Discriminative –How well do the clusters correspond to labels? Purity –Note: unsupervised clustering does not aim to be discriminative Slide: Derek Hoiem

How to choose the number of clusters? Validation set –Try different numbers of clusters and look at performance When building dictionaries (discussed later), more clusters typically work better Slide: Derek Hoiem
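
A sketch of the validation-set idea: fit K-means for several values of K and score each on held-out data by within-cluster sum of squared distances; the random features, the split, and the range of K are placeholder choices, and in practice one might instead score by downstream task performance:

```python
import numpy as np
from sklearn.cluster import KMeans

X = np.random.randn(500, 16)                    # placeholder feature vectors
train, val = X[:400], X[400:]

scores = {}
for K in range(2, 11):
    km = KMeans(n_clusters=K, n_init=10, random_state=0).fit(train)
    # Sum of squared distances from each held-out point to its nearest center.
    d = np.linalg.norm(val[:, None, :] - km.cluster_centers_[None, :, :], axis=2)
    scores[K] = float((d.min(axis=1) ** 2).sum())

# Inspect scores for the K where improvement levels off (an "elbow"),
# or pick the K that works best for the downstream task.
print(scores)
```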

K-Means pros and cons Pros Finds cluster centers that minimize conditional variance (good representation of data) Simple and fast* Easy to implement Cons Need to choose K Sensitive to outliers Prone to local minima All clusters have the same parameters (e.g., distance measure is non-adaptive) *Can be slow: each iteration is O(KNd) for N d-dimensional points Usage Rarely used for pixel segmentation

Building Visual Dictionaries 1. Sample patches from a database –E.g., 128 dimensional SIFT vectors 2. Cluster the patches –Cluster centers are the dictionary 3. Assign a codeword (number) to each new patch, according to the nearest cluster
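
A sketch of these three steps with random vectors standing in for 128-dimensional SIFT descriptors; the descriptor data, the dictionary size of 200, and the final bag-of-words histogram are placeholder choices:

```python
import numpy as np
from sklearn.cluster import KMeans

# 1. Sample patch descriptors from a database (placeholder for 128-D SIFT vectors).
descriptors = np.random.randn(5000, 128)

# 2. Cluster the patches; the cluster centers are the dictionary.
dictionary = KMeans(n_clusters=200, n_init=4, random_state=0).fit(descriptors)

# 3. Assign a codeword (cluster index) to each patch of a new image,
#    then summarize the image as a histogram of codewords (bag of words).
new_image_descriptors = np.random.randn(300, 128)
codewords = dictionary.predict(new_image_descriptors)
bow = np.bincount(codewords, minlength=200).astype(float)
bow /= bow.sum()
```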

Examples of learned codewords Most likely codewords for 4 learned “topics” EM with multinomial (problem 3) to get topics