Dimensionality Reduction


Dimensionality Reduction CS 485: Special Topics in Data Mining Jinze Liu

Overview. What is dimensionality reduction? Simplifying complex data. Using dimensionality reduction as a data mining "tool": useful for both "data modeling" and "data analysis", and as a tool for "clustering" and "regression". Linear dimensionality reduction methods: Principal Component Analysis (PCA) and Multi-Dimensional Scaling (MDS). Non-linear dimensionality reduction.

What is Dimensionality Reduction? Given N objects, each with M measurements, find the best D-dimensional parameterization. Goal: find a "compact parameterization" or "latent variable" representation: given N examples x_i in R^M, find y_i in R^D (with D < M) that best represent them. Underlying assumptions of DimRedux: the measurements over-specify the data (M > D), the number of measurements exceeds the number of "true" degrees of freedom in the system, and the measurements capture all of the significant variability.

Uses for DimRedux. Build a "compact" model of the data: compression for storage, transmission, and retrieval; parameters for indexing, exploring, and organizing; generating "plausible" new data. Answer fundamental questions about the data: What is its underlying dimensionality? How many degrees of freedom are exhibited? How many "latent variables"? How independent are my measurements? Is there a projection of my data set where important relationships stand out?

DimRedux in Data Modeling. Data clustering: continuous to discrete. The curse of dimensionality: the sampling density is proportional to N^(1/p), so if 100 points densely sample a one-dimensional input, about 100^10 points are needed for the same density in ten dimensions. We therefore need a mapping to a lower-dimensional embedding space that preserves the "important" relations. Regression modeling: continuous to continuous; a functional model that generates the input data, useful for interpolation.

Today's Focus: linear DimRedux methods. PCA – Pearson (1901); Hotelling (1935). MDS – Torgerson (1952); Shepard (1962). The "linear" assumption: the data is a linear function of the parameters (latent variables), i.e. the data lies on a linear (affine) subspace, x = M y + x_0, where the matrix M is m x d.

PCA: What problem does it solve? It minimizes the "least-squares" (Euclidean) error: the D-dimensional model provided by PCA has the smallest Euclidean error, sum_i ||x_i - x̂_i||^2, of any D-parameter linear model, where x̂_i is the prediction of the D-dimensional PCA model. Equivalently, it projects the data so that the variance of the projection is maximized, finding an optimal "orthogonal" basis set for describing the given data.

Principal Component Analysis. Also known to engineers as the Karhunen-Loève Transform (KLT). Rotate the data points to align successive axes with the directions of greatest variance: subtract the mean from the data, find the (unit-length) directions of variance, and reorder them according to the variance magnitude from high to low. Each such direction is a principal component; they are the eigenvectors of the system's covariance matrix, permuted so that the eigenvalues are in descending order.
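The steps above can be written compactly. Here is a minimal MATLAB sketch (a hypothetical helper, not taken from the slides), assuming the data matrix has one measurement per row and one data point per column:

% simple_pca.m — minimal PCA sketch (hypothetical helper, not from the slides)
function [E, lambda, scores] = simple_pca(X)
    % X is M x N: M measurements (rows) for N data points (columns)
    n = size(X, 2);
    Xc = X - mean(X, 2) * ones(1, n);          % subtract the mean from every column
    [E, L] = eig(cov(Xc'));                    % eigenvectors of the covariance matrix
    [lambda, idx] = sort(diag(L), 'descend');  % reorder by variance, high to low
    E = E(:, idx);                             % principal components as columns
    scores = E' * Xc;                          % data expressed in the new basis
end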

Simple PCA Example: a simple 3D example (2-D latent data mapped into 3-D, plus noise).
>> x = rand(2, 500);
>> z = [1,0; 0,1; -1,-1] * x + [0;0;1] * ones(1, 500);
>> m = (100 * rand(3,3)) * z + rand(3, 500);
>> scatter3(m(1,:), m(2,:), m(3,:), 'filled');

Simple PCA Example (cont.)
>> mm = m - mean(m')' * ones(1, 500);    % subtract the mean from every column
>> [E, L] = eig(cov(mm'));               % eigenvectors/eigenvalues of the covariance
>> E
E =
    0.8029   -0.5958    0.0212
    0.1629    0.2535    0.9535
    0.5735    0.7621   -0.3006
>> L
L =
  172.2525         0         0
         0  116.2234         0
         0         0    0.0837
>> newm = E' * (m - mean(m')' * ones(1, 500));   % rotate the data into the principal axes
>> scatter3(newm(1,:), newm(2,:), newm(3,:), 'filled'); axis([-50,50, -50,50, -50,50]);
Note that eig does not guarantee any particular ordering of the eigenvalues, so in general the columns should be sorted by decreasing eigenvalue, as in the sketch above.

Simple PCA Example (cont)

PCA Applied to Reillumination. Illumination can be modeled as an additive linear system: R(x, y) = sum_i w_i R_i(x, y), where R_i is the image of the scene under basis light i and w_i is the intensity of that light.

Simulating New Lighting. We can simulate the appearance of a model under new illumination by combining images taken under a set of basis lights. We can then capture real-world lighting and use it to modulate our basis lighting functions.
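As a concrete illustration, relighting is just a weighted sum of the basis images. A minimal sketch, assuming a stack of basis images basisImgs (h x w x n, one image per basis light) and a vector weights of per-light intensities (both hypothetical names):

[h, w, n] = size(basisImgs);
relit = zeros(h, w);
for i = 1:n
    relit = relit + weights(i) * double(basisImgs(:, :, i));   % scale and accumulate
end
imshow(relit, []);   % display, rescaled to the data range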

Problems. There are too many basis lighting functions, and they all have to be stored in order to be used. The resulting lighting model can be huge, particularly when representing high-frequency lighting, while the lighting differences themselves can be very subtle. The cost of modulation is also excessive: every basis image must be scaled and added together, and each image requires high dynamic range. Is there a more compact representation? Yes: use PCA.

PCA Applied to Illumination. More than 90% of the variance is captured by the first five principal components, so new illumination can be generated by combining only 5 basis images rather than one image per light.

Results Video

MDS: What problem does it solve? It takes as input a dissimilarity matrix M, containing the pairwise dissimilarities between N data points, and finds the best D-dimensional linear parameterization compatible with M; in other words, it outputs a projection of the data into a D-dimensional space where the pairwise distances match the original dissimilarities as faithfully as possible.

Multidimensional Scaling (MDS). The dissimilarities can be metric or non-metric. MDS is useful when absolute measurements are unavailable and only relative measurements are; the computation is invariant to the dimensionality of the data.

An example: map of the US. Given only the pairwise distances between a set of cities…

An example: map of the US. MDS finds suitable coordinates for the points in the specified number of dimensions (here, two).
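A minimal sketch of this example, assuming the Statistics and Machine Learning Toolbox (for cmdscale), a symmetric matrix D of inter-city distances, and a matching cell array cityNames (the last two are hypothetical inputs):

[Y, eigvals] = cmdscale(D);             % classical MDS on the distance matrix
scatter(Y(:, 1), Y(:, 2), 'filled');    % 2-D "map"; the axes themselves are arbitrary
text(Y(:, 1) + 1, Y(:, 2), cityNames);  % label each point with its city name

Because only distances are given, the recovered map may come out rotated or mirrored relative to the true geography.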

MDS Properties. The parameterization is not unique and the axes are meaningless; this is not surprising, since Euclidean transformations and reflections preserve the distances between points. MDS is useful for visualizing relationships in high-dimensional data: define a dissimilarity measure, then map to a lower-dimensional space using MDS. It is a common preprocessing step before cluster analysis, aids in understanding patterns and relationships in data, and is widely used in marketing and psychometrics.

Dissimilarities. Dissimilarities are distance-like quantities d(i, j) that satisfy: d(i, j) >= 0, d(i, i) = 0, and d(i, j) = d(j, i). A dissimilarity is metric if, in addition, it satisfies the triangle inequality: d(i, j) <= d(i, k) + d(k, j) for all k.

Relating MDS to PCA. Special case: when the distances are Euclidean. PCA is the eigendecomposition of the covariance matrix M^T M of the (mean-centered) data; MDS starts from the pairwise distance matrix, which can be converted into this same inner-product matrix.
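A quick numerical check of this equivalence (a sketch, assuming the Statistics and Machine Learning Toolbox for pdist, cmdscale, and pca): classical MDS applied to Euclidean distances should reproduce the PCA scores up to sign flips of the axes.

X = randn(100, 3) * randn(3, 3);     % 100 points (rows) in 3-D
D = squareform(pdist(X));            % pairwise Euclidean distances
Ymds = cmdscale(D);                  % classical MDS embedding
[~, Ypca] = pca(X);                  % PCA scores, one point per row
max(max(abs(abs(Ymds(:, 1:3)) - abs(Ypca(:, 1:3)))))   % should be close to zero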

How to get M^T M from Euclidean pairwise distances. By the law of cosines, for any reference point k, d^2(i, j) = d^2(i, k) + d^2(j, k) - 2 (x_i - x_k)·(x_j - x_k). Rearranging (the definition of a dot product) gives b_ij = (x_i - x_k)·(x_j - x_k) = 1/2 [ d^2(i, k) + d^2(j, k) - d^2(i, j) ], so the matrix B of inner products can be computed from squared distances alone. An eigendecomposition of B gives B = V S V^T, and V S^(1/2) is the matrix of new coordinates.

Algebraically… Writing d_ij^2 for the squared distance between points p_i and p_j, the entries of B are obtained by "centering" the squared-distance matrix: b_ij = -1/2 ( d_ij^2 - r_i - c_j + m ), where r_i is the row average (the average squared distance from point p_i to all points), c_j is the column average (the average squared distance from all points to p_j), and m is the matrix average over all pairs. Subtracting the row and column averages and adding back the matrix average is what "centers" the matrix.

MDS Mechanics. Given a dissimilarity matrix D, the MDS model is computed as B = -1/2 H D^(2) H, where D^(2) is the matrix of squared dissimilarities and H is the so-called "centering" matrix, H = I - (1/N) 1 1^T (the identity minus the constant matrix with every entry 1/N). Taking the eigendecomposition B = V Λ V^T, the MDS coordinates are given by X = V_d Λ_d^(1/2), i.e. the eigenvectors scaled by the square roots of their eigenvalues, taken in order of decreasing eigenvalue.
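A minimal sketch of these mechanics in plain MATLAB (no toolboxes), for a dissimilarity matrix D and a chosen target dimension d:

n = size(D, 1);                            % D: n x n dissimilarities; d: e.g. d = 2
H = eye(n) - ones(n) / n;                  % centering matrix: I - (1/N) * ones
B = -0.5 * H * (D .^ 2) * H;               % double-centred matrix of "inner products"
[V, L] = eig((B + B') / 2);                % symmetrize to guard against round-off
[lambda, idx] = sort(diag(L), 'descend');  % decreasing eigenvalues
V = V(:, idx);
X = V(:, 1:d) * diag(sqrt(lambda(1:d)));   % MDS coordinates, one point per row
                                           % (assumes the first d eigenvalues are positive)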

MDS Stress. The residual variance of B (i.e. the sum of the remaining eigenvalues) indicates the goodness of fit of the selected d-dimensional model; this term is often called the MDS "stress". Examining how the residual variance falls off with d gives an indication of the data's inherent dimensionality.
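For instance, reusing the sorted eigenvalues lambda from the sketch above, the residual variance for each candidate dimension can be plotted; the "elbow" of the curve suggests the inherent dimensionality:

lambdaPos = max(lambda, 0);                          % drop small negative eigenvalues
residual = 1 - cumsum(lambdaPos) / sum(lambdaPos);   % variance left out of a d-dim model
plot(1:numel(residual), residual, '-o');
xlabel('embedding dimension d'); ylabel('residual variance');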

Reflectance Modeling Example. Objective: find a perceptually meaningful parameterization for reflectance modeling. In the figure, the top row of white, grey, and black balls has the same "physical" reflectance parameters; the bottom row, however, is "perceptually" more consistent. From Pellacini et al., "Toward a Psychophysically-Based Light Reflection Model for Image Synthesis," SIGGRAPH 2000.

Reflectance Modeling Example. User task: subjects were presented with 378 pairs of rendered spheres and asked to rate their difference in "glossiness" on a scale of 0 (no difference) to 100. A 27 x 27 dissimilarity matrix was constructed from the ratings and MDS was applied.

Reflectance Modeling Example. The parameters of a 2-D embedding space were determined, and two perceptual axes of "gloss" were established.

Limitations of Linear Methods. What if the data does not lie within a linear subspace? Do all convex combinations of the measurements generate plausible data? Often the data lie on a low-dimensional non-linear manifold embedded in a higher-dimensional space. Next time: nonlinear dimensionality reduction.

Nonlinear Dimensionality Reduction. Many data sets contain essential nonlinear structures that are invisible to PCA and MDS, so we resort to nonlinear dimensionality reduction approaches. Kernel methods are one option, but their results depend on the choice of kernel, and most kernels are not data dependent.

Nonlinear Approaches: Isomap (Joshua Tenenbaum, Vin de Silva, John Langford, 2000). 1. Construct the neighbourhood graph G. 2. For each pair of points in G, compute the shortest-path distance, i.e. the geodesic distance. 3. Apply classical MDS to the geodesic distances. The key idea is to replace the straight-line Euclidean distance, which can cut across the manifold, with the geodesic distance measured along it.

Sampling points from the Swiss roll. Altogether there are 20,000 points in the "Swiss roll" data set; we sample 1,000 of the 20,000.
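A sketch of how such a data set can be generated and subsampled (the parameter choices here are illustrative, not necessarily the ones used for the figures):

N = 20000;  n = 1000;
t = (3 * pi / 2) * (1 + 2 * rand(1, N));   % angle along the roll
height = 21 * rand(1, N);                  % position across the roll
X = [t .* cos(t); height; t .* sin(t)];    % 3 x N points on the 2-D manifold
idx = randperm(N, n);                      % sample 1,000 of the 20,000 points
Xs = X(:, idx);
scatter3(Xs(1, :), Xs(2, :), Xs(3, :), 10, t(idx), 'filled');   % colour by angle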

Construct the neighborhood graph G using K-nearest neighbors (K = 7). D_G is then a 1000 x 1000 matrix holding the (Euclidean) distances between neighboring points (figure A).

Compute all-pairs shortest paths in G. Now D_G is a 1000 x 1000 matrix of geodesic distances between arbitrary pairs of points, measured along the manifold (figure B).

Use MDS to embed the graph in R^d. Find a d-dimensional Euclidean space Y (figure C) that preserves the pairwise distances.
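Putting the three steps together, here is a minimal Isomap sketch in plain MATLAB (assuming the sampled Swiss-roll points Xs from the earlier sketch, a connected neighbourhood graph, and MATLAB R2016b+ for implicit expansion):

n = size(Xs, 2);  K = 7;  d = 2;
sq = sum(Xs .^ 2, 1);
E = sqrt(max(sq' + sq - 2 * (Xs' * Xs), 0));   % n x n Euclidean distances
% Step 1: K-nearest-neighbour graph; keep only edges to the K closest points
G = inf(n);
[~, ord] = sort(E, 2);
for i = 1:n
    nbrs = ord(i, 2:K+1);                      % skip the point itself
    G(i, nbrs) = E(i, nbrs);
    G(nbrs, i) = E(i, nbrs)';                  % keep the graph symmetric
end
% Step 2: geodesic distances via Floyd-Warshall all-pairs shortest paths
DG = G;  DG(1:n+1:end) = 0;
for k = 1:n
    DG = min(DG, DG(:, k) + DG(k, :));         % relax paths through point k
end
% Step 3: classical MDS on the geodesic distances (exactly as on the MDS slides)
H = eye(n) - ones(n) / n;
B = -0.5 * H * (DG .^ 2) * H;
[V, L] = eig((B + B') / 2);
[lambda, idx] = sort(diag(L), 'descend');
Y = V(:, idx(1:d)) * diag(sqrt(lambda(1:d)));
scatter(Y(:, 1), Y(:, 2), 10, 'filled');       % the "unrolled" 2-D embedding

Colouring the embedded points by the roll parameter t(idx) should show the manifold unrolled into a flat sheet, which PCA or MDS on the raw Euclidean distances cannot achieve.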

The Isomap algorithm

PCA, MDS vs. Isomap

Isomap: Advantages. It is nonlinear yet globally optimal: it produces a globally optimal low-dimensional Euclidean representation even when the input space is highly folded, twisted, or curved, and it is guaranteed asymptotically to recover the true dimensionality.

Isomap: Disadvantages. It may not be stable, since it depends on the topology of the data, and its guarantee to recover the geometric structure of nonlinear manifolds is only asymptotic: as N increases, the pairwise graph distances provide better approximations to the geodesics but cost more computation, and if N is small the geodesic distances will be very inaccurate.

Applications: Isomap and nonparametric models of image deformation; LLE and Isomap analysis of spectra and colour images; image spaces and video trajectories (using Isomap to explore video sequences); mining the structural knowledge of high-dimensional medical data using Isomap. Isomap webpage: http://isomap.stanford.edu/

Summary. Linear dimensionality reduction tools are widely used for data analysis, data preprocessing, and data compression. PCA transforms the measurement data so that successive directions of greatest variance are mapped to orthogonal axis directions (bases). A d-dimensional embedding space (parameterization) can be established by modeling the data using only the first d of these basis vectors; the residual modeling error is the sum of the remaining eigenvalues.

Summary (cont.). MDS finds a d-dimensional parameterization that best preserves a given dissimilarity matrix; the resulting model can be Euclidean-transformed to align the data with a more intuitive parameterization. A d-dimensional embedding space (parameterization) is established by modeling the data using only the first d coordinates of the scaled eigenvectors; the residual modeling error (the MDS stress) is the sum of the remaining eigenvalues. If a Euclidean-metric dissimilarity matrix is used for MDS, the resulting d-dimensional model will match the PCA weights for the same-dimensional model.