
Manifold Learning
Xin Yang
Data Mining Course

Slide 1: Outline
- Manifold and manifold learning
- Classical dimensionality reduction
- Semi-supervised nonlinear dimensionality reduction
- Experiment results
- Conclusions

Slide 2: What is a manifold? A manifold is a space that locally resembles Euclidean space: every point has a neighborhood that looks like R^d.

Slide 3: Examples: the sphere and the torus, both two-dimensional manifolds embedded in R^3.

Slide 4: Why do we need manifolds?

Slide 6: Manifold learning. The raw format of natural data is often high-dimensional, but in many cases the data are the outcome of a process involving only a few degrees of freedom.
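
A minimal sketch of this idea (NumPy only; the latent parameter names t and h are illustrative): a "Swiss roll" in R^3 generated from just two degrees of freedom.

```python
# Observations live in R^3, but they are produced by only two latent
# parameters (t, h), so the intrinsic dimension of the data is 2.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
t = 1.5 * np.pi * (1 + 2 * rng.random(n))  # angle along the roll
h = 21.0 * rng.random(n)                   # height along the roll

# Embed the 2-D parameters (t, h) as points in R^3.
X = np.column_stack([t * np.cos(t), h, t * np.sin(t)])
print(X.shape)  # (1000, 3): ambient dimension 3, intrinsic dimension 2
```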

Slide 7: Manifold learning
- Intrinsic dimensionality estimation
- Dimensionality reduction

Slide 8: Dimensionality reduction. Classical methods:
- Linear: MDS and PCA (Hastie 2001)
- Nonlinear: LLE (Roweis & Saul 2000), Isomap (Tenenbaum 2000), LTSA (Zhang & Zha 2004)
In general, the resulting low-dimensional coordinates lack physical meaning.
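
A short comparison sketch using scikit-learn (assumed available; n_neighbors=12 is an illustrative choice, not a tuned one):

```python
# Compare a linear method (PCA) with two nonlinear methods (LLE, Isomap)
# on the Swiss roll: PCA flattens the roll and mixes distant parts of
# the manifold, while LLE and Isomap "unroll" it.
from sklearn.datasets import make_swiss_roll
from sklearn.decomposition import PCA
from sklearn.manifold import LocallyLinearEmbedding, Isomap

X, color = make_swiss_roll(n_samples=1000, random_state=0)

Y_pca = PCA(n_components=2).fit_transform(X)
Y_lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2).fit_transform(X)
Y_iso = Isomap(n_neighbors=12, n_components=2).fit_transform(X)

# Note: the nonlinear embeddings recover the roll's 2-D structure, but
# their coordinates carry no physical units, as the slide points out.
```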

Slide 9: Semi-supervised NDR. Prior information can be obtained from experts or by performing experiments, e.g., moving-object tracking.

Slide 10: Semi-supervised NDR. Assumption: if the prior information has a physical meaning, then the global low-dimensional coordinates bear the same physical meaning.

Slide 11: Basic LLE
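
For reference (the slide's equations did not survive transcription), a standard statement of the two LLE optimization steps from Roweis & Saul (2000), in the notation used on the later slides:

```latex
% Step 1: reconstruction weights from the k nearest neighbors N(i)
\min_{W}\ \sum_{i}\Big\|x_i-\sum_{j\in N(i)}W_{ij}\,x_j\Big\|^2
\qquad\text{s.t.}\quad \sum_{j}W_{ij}=1
% Step 2: low-dimensional coordinates preserving the same weights
\min_{Y}\ \sum_{i}\Big\|y_i-\sum_{j\in N(i)}W_{ij}\,y_j\Big\|^2
=\min_{Y}\ \operatorname{tr}\!\big(YMY^{\top}\big),
\qquad M=(I-W)^{\top}(I-W)
```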

Slide 12: Basic LTSA. Characterizes the local geometry by computing an approximate tangent space at each data point.
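
A sketch of that computation in standard LTSA notation (x_{i_1}, ..., x_{i_k} denote the k nearest neighbors of x_i):

```latex
% Center the neighborhood of x_i:
X_i=\big[\,x_{i_1}-\bar{x}_i,\ \dots,\ x_{i_k}-\bar{x}_i\,\big],
\qquad \bar{x}_i=\tfrac{1}{k}\sum_{j=1}^{k}x_{i_j}
% The d leading left singular vectors Q_i of X_i span the approximate
% tangent space at x_i; local coordinates are \Theta_i = Q_i^{\top}X_i.
```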

Slide 13: SS-LLE & SS-LTSA. Suppose the exact low-dimensional coordinates of m data points are given. Partition Y as Y = [Y_1, Y_2], where Y_1 (d x m) holds the known coordinates and Y_2 the unknown ones. Our problem: minimize tr(Y M Y^T) over Y_2.

Slide 14: SS-LLE & SS-LTSA. To solve this minimization problem, partition M conformally as the 2 x 2 block matrix [[M_11, M_12], [M_21, M_22]], with M_11 of size m x m. Then the minimization problem can be written as min over Y_2 of tr(Y_1 M_11 Y_1^T) + 2 tr(Y_2 M_21 Y_1^T) + tr(Y_2 M_22 Y_2^T).

Slide 15: SS-LLE & SS-LTSA. Equivalently, minimize only the terms involving Y_2. Setting the gradient with respect to Y_2 to zero, we get the linear system M_22 Y_2^T = -M_21 Y_1^T.
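
A minimal NumPy sketch of this solve (the names M and Y1 and the blocking convention follow the slides; the function name and everything else are assumptions):

```python
# Solve M22 Y2^T = -M21 Y1^T for the unknown coordinates Y2.
# M is the (n x n) alignment matrix from LLE/LTSA; Y1 is (d x m) and
# holds the m known low-dimensional coordinates.
import numpy as np

def solve_semisupervised(M: np.ndarray, Y1: np.ndarray) -> np.ndarray:
    m = Y1.shape[1]
    M21 = M[m:, :m]          # coupling between unknown and known points
    M22 = M[m:, m:]          # block acting on the unknown points
    Y2T = np.linalg.solve(M22, -M21 @ Y1.T)
    return np.hstack([Y1, Y2T.T])   # full embedding Y = [Y1, Y2], d x n
```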

Slide 16: Sensitivity analysis. As the number of prior points increases, the condition number of the coefficient matrix M_22 gets smaller and smaller, and the computed solution becomes less sensitive to noise in M_22 and in the right-hand side -M_21 Y_1^T.

Slide 17: Sensitivity analysis. The sensitivity of the solution depends on the condition number of the coefficient matrix M_22.
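
This behavior can be checked numerically; a small sketch (reusing M from the solver above, with illustrative values of m):

```python
# Condition number of M22 as the number of prior points m grows;
# a smaller condition number means a less noise-sensitive solve.
import numpy as np

def m22_condition_numbers(M: np.ndarray, ms=(5, 10, 20, 50)):
    return {m: np.linalg.cond(M[m:, m:]) for m in ms}
```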

Slide 18: Inexact prior information. Add a regularization term, weighted with a parameter (written λ below), that penalizes deviation of the embedding from the given prior coordinates.

Slide 19: Inexact prior information. Its minimizer can be computed by solving the following linear system: (M + λE) Y^T = λ [Ȳ_1, 0]^T, where Ȳ_1 (d x m) holds the inexact prior coordinates and E is zero except for an m x m identity block in its upper-left corner.
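
A sketch of this regularized solve under the formulation above (lam, Ybar1, E, and the function name are assumptions matching that reconstruction, not the original slide's code):

```python
# Solve (M + lam*E) Y^T = lam * [Ybar1, 0]^T for the full embedding
# when the m prior coordinates Ybar1 (d x m) are only approximately known.
import numpy as np

def solve_inexact_prior(M: np.ndarray, Ybar1: np.ndarray, lam: float) -> np.ndarray:
    n = M.shape[0]
    d, m = Ybar1.shape
    E = np.zeros((n, n))
    E[:m, :m] = np.eye(m)        # penalty acts only on the m prior points
    rhs = np.zeros((n, d))
    rhs[:m, :] = lam * Ybar1.T
    YT = np.linalg.solve(M + lam * E, rhs)
    return YT.T                  # Y is d x n; first m columns track Ybar1
```

Larger lam pulls the first m columns toward Ybar1; smaller lam trusts the manifold structure in M more than the noisy priors.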

Slide 20: Experiment results
- "Incomplete tire": compare with basic LLE and LTSA; test with different numbers of prior points
- Upper-body tracking: use SS-LTSA; test the inexact-prior-information algorithm

Slide 21: Incomplete tire

Slide 23: Relative error for different numbers of prior points

Slide 24: Upper-body tracking

Slide 25: Results of SS-LTSA

Slide 26: Results of the inexact-prior-information algorithm

Slide 27: Conclusions
- Manifold and manifold learning
- Semi-supervised manifold learning
- Future work
