Tensors and Component Analysis Musawir Ali

Tensor: an n-dimensional array, the generalization of vectors and matrices. Vector: order-1 tensor. Matrix: order-2 tensor. Order-3 tensor: a three-dimensional array of values.
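As a concrete illustration, a minimal NumPy sketch (the shapes below are arbitrary examples, not from the slides):

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])              # order-1 tensor: a vector
M = np.array([[1.0, 2.0], [3.0, 4.0]])     # order-2 tensor: a matrix
T = np.zeros((2, 2, 2))                    # order-3 tensor: a 2 x 2 x 2 array
print(v.ndim, M.ndim, T.ndim)              # 1 2 3 -- the order of each tensor
```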

Reshaping Tensors: matrix to vector. "Vectorizing" a matrix rearranges its entries into a single long vector.

Reshaping Tensors: order-3 tensor to matrix. "Flattening" an order-3 tensor A (with dimensions x, y, z) produces a matrix; A_(1), A_(2), and A_(3) denote the flattenings along the first, second, and third dimension, respectively. [Figure: the entries of a 2 x 2 x 2 tensor A, indexed (x, y, z), rearranged into the three flattenings A_(1), A_(2), A_(3).]
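A small NumPy sketch of both reshaping operations (the 2 x 3 x 4 shape is an arbitrary example; modes are 0-based in the code, whereas the slides count modes from 1, and the column ordering of a flattening depends on the chosen convention and may differ from the figure):

```python
import numpy as np

A = np.arange(24).reshape(2, 3, 4)           # an order-3 tensor with dimensions x, y, z

vec = A.reshape(-1)                          # "vectorizing": all entries as one long vector

# "Flattening" A_(n): bring dimension n to the front, then collapse the other dimensions,
# so the rows are indexed by dimension n and each column is a mode-n fiber.
A1 = np.moveaxis(A, 0, 0).reshape(2, -1)     # A_(1), shape (2, 12)
A2 = np.moveaxis(A, 1, 0).reshape(3, -1)     # A_(2), shape (3, 8)
A3 = np.moveaxis(A, 2, 0).reshape(4, -1)     # A_(3), shape (4, 6)
```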

n-Mode Multiplication: multiplying matrices and order-3 tensors. A ×_n M = M A_(n). To multiply a tensor A by a matrix M along mode n, A is flattened over dimension n (permute the dimensions so that dimension n comes first and then flatten, so that dimension n indexes the rows of A_(n)), and then regular matrix multiplication of the matrix M and the flattened tensor A_(n) is performed; the result is folded back into a tensor.
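A sketch of the mode-n product following the recipe above (flatten, multiply, fold back). The helper names unfold, fold, and mode_n_product are my own, and modes are 0-based here:

```python
import numpy as np

def unfold(T, n):
    """Mode-n flattening T_(n): dimension n indexes the rows."""
    return np.moveaxis(T, n, 0).reshape(T.shape[n], -1)

def fold(M, n, shape):
    """Inverse of unfold: reshape back and move dimension n to its original position."""
    rest = [s for i, s in enumerate(shape) if i != n]
    return np.moveaxis(M.reshape([shape[n]] + rest), 0, n)

def mode_n_product(T, M, n):
    """T x_n M = fold(M @ T_(n)): multiply matrix M into tensor T along dimension n."""
    out_shape = list(T.shape)
    out_shape[n] = M.shape[0]
    return fold(M @ unfold(T, n), n, out_shape)

A = np.random.randn(2, 3, 4)
M = np.random.randn(5, 3)                    # maps dimension 1 (size 3) to size 5
B = mode_n_product(A, M, 1)                  # B has shape (2, 5, 4)
```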

Singular Value Decomposition (SVD). SVD of a matrix: M = U S V^T, where U and V are orthogonal matrices and S is a diagonal matrix consisting of the singular values.
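In NumPy, a minimal sketch with a random matrix (full_matrices=False gives the compact form of the factorization):

```python
import numpy as np

M = np.random.randn(5, 3)
U, s, Vt = np.linalg.svd(M, full_matrices=False)   # s holds the singular values
S = np.diag(s)

assert np.allclose(M, U @ S @ Vt)                  # M = U S V^T
```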

Singular Value Decomposition (SVD): SVD of a matrix, observations. M = U S V^T.
Multiplying on the left by M^T: M^T M = (U S V^T)^T (U S V^T) = (V S U^T)(U S V^T) = V S^2 V^T, since U^T U = I.
Multiplying on the right by M^T: M M^T = (U S V^T)(U S V^T)^T = (U S V^T)(V S U^T) = U S^2 U^T, since V^T V = I.

Singular Value Decomposition (SVD): SVD of a matrix, observations. M^T M = V S^2 V^T and M M^T = U S^2 U^T are diagonalizations. Diagonalization of a matrix (finding eigenvalues): A = W Λ W^T, where A is a square, symmetric matrix, the columns of W are the eigenvectors of A, and Λ is a diagonal matrix containing the eigenvalues. Therefore, if we know U (or V) and S, we basically have found the eigenvectors and eigenvalues of M M^T (or M^T M)!
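A quick numerical check of these observations on a random matrix (np.linalg.eigh returns eigenvalues in ascending order, so they are reversed before comparing):

```python
import numpy as np

M = np.random.randn(6, 4)
U, s, Vt = np.linalg.svd(M, full_matrices=False)

# Eigenvalues of M^T M are the squared singular values; eigenvectors are the columns of V.
evals, evecs = np.linalg.eigh(M.T @ M)             # eigh: for symmetric matrices
assert np.allclose(evals[::-1], s**2)

# Same story for M M^T and the columns of U (only the 4 nonzero eigenvalues match s^2).
evals2, _ = np.linalg.eigh(M @ M.T)
assert np.allclose(np.sort(evals2)[-4:][::-1], s**2)
```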

Principal Component Analysis (PCA). What is PCA? Analysis of n-dimensional data: it looks at the correspondence between the different dimensions and determines the principal directions along which the variance of the data is high. Why PCA? It determines a (lower-dimensional) basis in which to represent the data; this is a useful compression mechanism and a way to decrease the dimensionality of high-dimensional data.

Principal Component Analysis (PCA). Steps in PCA, #1: calculate the adjusted data set. The data set D has n dimensions (rows) and one column per data sample. M_i is calculated by taking the mean of the values in dimension i. The adjusted data set is A = D - M, i.e. the mean of each dimension is subtracted, so every dimension of A has mean 0.
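A sketch of step #1 on random data (rows are dimensions, columns are samples, matching the layout described above):

```python
import numpy as np

D = np.random.randn(3, 100)                  # data set: 3 dimensions, 100 samples

M = D.mean(axis=1, keepdims=True)            # M_i = mean of the values in dimension i
A = D - M                                    # adjusted data set: each dimension now has zero mean

assert np.allclose(A.mean(axis=1), 0.0)
```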

Principal Component Analysis (PCA). Steps in PCA, #2: calculate the covariance matrix C from the adjusted data set A. C is an n x n matrix with C_ij = cov(i, j), the covariance between dimensions i and j. Note: since the means of the dimensions in the adjusted data set A are 0, the covariance matrix can simply be written as C = (A A^T) / (N - 1), where N is the number of data samples.
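Step #2 as a sketch on zero-mean data (np.cov uses the same formula, which makes a handy cross-check):

```python
import numpy as np

A = np.random.randn(3, 100)
A = A - A.mean(axis=1, keepdims=True)        # adjusted (zero-mean) data set from step #1

N = A.shape[1]                               # number of samples
C = (A @ A.T) / (N - 1)                      # C_ij = cov(i, j)

assert np.allclose(C, np.cov(A))
```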

Principal Component Analysis (PCA). Steps in PCA, #3: calculate the eigenvectors and eigenvalues of C. If some eigenvalues are 0 or very small, we can essentially discard those eigenvalues and the corresponding eigenvectors, hence reducing the dimensionality of the new basis. The retained eigenvectors form the columns of the matrix E.
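Step #3 as a sketch (the threshold used below for discarding small eigenvalues is an arbitrary illustrative choice):

```python
import numpy as np

C = np.cov(np.random.randn(3, 100))          # covariance matrix from step #2

evals, evecs = np.linalg.eigh(C)             # symmetric matrix: use eigh
order = np.argsort(evals)[::-1]              # sort eigenpairs, largest eigenvalue first
evals, evecs = evals[order], evecs[:, order]

keep = evals > 1e-3 * evals[0]               # discard (near-)zero eigenvalues
E = evecs[:, keep]                           # matrix E: retained eigenvectors as columns
```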

Principal Component Analysis (PCA). Steps in PCA, #4: transform the data set to the new basis. F = E^T A, where F is the transformed data set, E^T is the transpose of the matrix E containing the eigenvectors, and A is the adjusted data set. Note that the dimensionality of the new data set F is lower than that of the data set A when eigenvectors have been discarded. To recover A from F: (E^T)^-1 F = (E^T)^-1 E^T A; E is orthogonal, therefore E^-1 = E^T, so (E^T)^-1 = (E^T)^T = E and hence E F = A. (If some eigenvectors were discarded, E is no longer square and E F only approximates A.)
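Step #4 as a sketch. Here all eigenvectors are kept, so E is square and the recovery E F = A is exact; with a truncated E the product would only approximate A:

```python
import numpy as np

A = np.random.randn(3, 100)
A = A - A.mean(axis=1, keepdims=True)        # adjusted data set
_, E = np.linalg.eigh(np.cov(A))             # columns of E: eigenvectors of C (all kept)

F = E.T @ A                                  # transform the data to the new basis
A_back = E @ F                               # E orthogonal => E F recovers A
assert np.allclose(A, A_back)
```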

PCA using SVD. Recall: in PCA we essentially find the eigenvalues and eigenvectors of the covariance matrix C. We showed that C = (A A^T) / (N - 1), and thus finding the eigenvalues and eigenvectors of C is the same as finding the eigenvalues and eigenvectors of A A^T. Recall: in SVD, we decomposed a matrix A as A = U S V^T, and we showed that A A^T = U S^2 U^T, where the columns of U contain the eigenvectors of A A^T and the eigenvalues of A A^T are the squares of the singular values in S. Thus SVD gives us the eigenvectors and eigenvalues that we need for PCA.
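A sketch of PCA via SVD, checking that the left singular vectors and squared singular values of the adjusted data reproduce the eigen-decomposition of the covariance matrix:

```python
import numpy as np

A = np.random.randn(3, 100)
A = A - A.mean(axis=1, keepdims=True)              # adjusted data set

U, s, Vt = np.linalg.svd(A, full_matrices=False)   # columns of U: principal directions
N = A.shape[1]

evals, _ = np.linalg.eigh(np.cov(A))               # eigenvalues of C = A A^T / (N - 1)
assert np.allclose(evals[::-1], s**2 / (N - 1))    # eigenvalues of C are s^2 / (N - 1)
```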

Generalization of SVD: N-Mode SVD. Order-2 SVD written in terms of the mode-n product: D = U S V^T becomes D = S ×_1 U ×_2 V. By the definition of mode-n multiplication:
D = S ×_1 U ×_2 V
D = U (V S_(2))_(1)
D = U (V S)^T
D = U S^T V^T
D = U S V^T
Note: S^T = S since S is a diagonal matrix. Note: D = S ×_1 U ×_2 V = S ×_2 V ×_1 U, i.e. the order of the mode products does not matter.
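A numerical check that the two ways of writing the order-2 SVD agree, reusing a compact version of the mode_n_product helper from the n-mode multiplication sketch above (modes are 0-based in the code):

```python
import numpy as np

def mode_n_product(T, M, n):
    """T x_n M: multiply M into the flattening T_(n), then fold back."""
    Tn = np.moveaxis(T, n, 0).reshape(T.shape[n], -1)
    rest = [s for i, s in enumerate(T.shape) if i != n]
    return np.moveaxis((M @ Tn).reshape([M.shape[0]] + rest), 0, n)

D = np.random.randn(4, 3)
U, s, Vt = np.linalg.svd(D, full_matrices=False)
S = np.diag(s)

# D = U S V^T written as mode products: D = S x_1 U x_2 V (x_1, x_2 are modes 0, 1 here).
D_rebuilt = mode_n_product(mode_n_product(S, U, 0), Vt.T, 1)
assert np.allclose(D, D_rebuilt)
```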

Generalization of SVD: N-Mode SVD. Order-3 SVD: D = Z ×_1 U_1 ×_2 U_2 ×_3 U_3. Z, the core tensor, is the counterpart of S in the order-2 SVD. U_1, U_2, and U_3 are known as mode matrices. Mode matrix U_i is obtained as follows: perform an order-2 SVD on D_(i), the matrix obtained by flattening the tensor D along the i-th dimension; U_i is the leftmost (column-space) matrix obtained from that SVD.
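A minimal HOSVD sketch along these lines (mode matrices from the SVD of each flattening, core tensor obtained by applying their transposes; helper functions as in the earlier sketches, 0-based modes):

```python
import numpy as np

def unfold(T, n):
    return np.moveaxis(T, n, 0).reshape(T.shape[n], -1)

def mode_n_product(T, M, n):
    rest = [s for i, s in enumerate(T.shape) if i != n]
    return np.moveaxis((M @ unfold(T, n)).reshape([M.shape[0]] + rest), 0, n)

def hosvd(D):
    """Order-3 (or order-N) SVD: D = Z x_1 U1 x_2 U2 x_3 U3."""
    # Mode matrix U_i: left (column-space) factor of the SVD of the flattening D_(i).
    U = [np.linalg.svd(unfold(D, n), full_matrices=False)[0] for n in range(D.ndim)]
    # Core tensor Z: multiply the transposed mode matrices back onto D.
    Z = D
    for n, Un in enumerate(U):
        Z = mode_n_product(Z, Un.T, n)
    return Z, U

D = np.random.randn(4, 5, 6)
Z, (U1, U2, U3) = hosvd(D)
D_rebuilt = mode_n_product(mode_n_product(mode_n_product(Z, U1, 0), U2, 1), U3, 2)
assert np.allclose(D, D_rebuilt)               # D = Z x_1 U1 x_2 U2 x_3 U3
```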

PCA on Higher-Order Tensors. The Bidirectional Texture Function (BTF) as an order-3 tensor: illumination, view, and texel (position) vary along the respective dimensions of the tensor. D = Z ×_1 U_t ×_2 U_i ×_3 U_v.

PCA on Higher-Order Tensors. Performing PCA on an order-3 tensor: flatten D along the texel mode, so one axis of the resulting matrix is indexed by texels and the other by the combined view and illumination conditions. The matrix U_t resulting from the PCA consists of the eigenvectors (eigentextures) along its columns. Transform the data to the new basis: F = U_t^T A, where A is the data in the original basis. Retrieving/rendering an image: T = Z ×_1 U_t, Image = T ×_1 F.

TensorTextures: a cheaper equivalent of PCA.
PCA: T = Z ×_1 U_t.
TensorTextures: T = D ×_2 U_i^T ×_3 U_v^T.
Recall: D = Z ×_1 U_t ×_2 U_i ×_3 U_v. Because U_i and U_v are orthogonal (U_i^T U_i = I and U_v^T U_v = I), multiplying D along modes 2 and 3 by U_i^T and U_v^T cancels those factors: D ×_2 U_i^T ×_3 U_v^T = Z ×_1 U_t. The TensorTextures basis therefore equals the PCA basis, but it is computed directly from the data tensor D without having to form Z or U_t.

TensorTextures: advantages of TensorTextures over PCA.
PCA: T = Z ×_1 U_t. TensorTextures: T = D ×_2 U_i^T ×_3 U_v^T.
TensorTextures gives independent compression control over illumination and view, whereas in PCA the illumination and view are coupled together, so changing/compressing the eigenvectors affects both parameters.
Images are represented using fewer coefficients in TensorTextures. PCA: (v * i) basis vectors, each of size t. TensorTextures: (v + i) basis vectors, of size v and i, where v = number of view directions, i = number of illumination conditions, and t = number of texels in each image used in the creation of the tensor.

TensorTextures compression vs. PCA compression. Notice that PCA has one set of basis vectors, whereas TensorTextures has two: one for illumination and one for view. This enables selective compression of each to a desired tolerance of perceptual error.