Eigenvalues and eigenvectors

Eigenvalues and eigenvectors

[Figure: births, deaths and population increase over time t, with the equilibrium point marked]

Population increase = births − deaths. With N the population size, b the birth rate and d the death rate, one time step of length Δt gives N(t+Δt) = N(t)(1 + bΔt − dΔt); the net reproduction rate is therefore R = 1 + bΔt − dΔt. If the population is age structured and contains k age classes, the numbers of individuals surviving from class i to class j are given by class-specific survival rates (see the Leslie matrix below).

Leslie matrix. Assume you have a population of organisms that is age structured. Let f_x denote the fecundity (rate of reproduction) of age class x, let s_x denote the fraction of individuals that survives to the next age class x+1 (the survival rate), and let n_x denote the number of individuals in age class x. We can write these assumptions as a matrix model called the Leslie model. We have w−1 age classes, where w is the maximum age of an individual; L is a square matrix. The numbers per age class at time t+1 are the matrix product of the Leslie matrix with the abundance vector n at time t: n(t+1) = L n(t).
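A minimal numerical sketch of one Leslie projection step, using numpy. The fecundities, survival rates and abundances below are illustrative values, not taken from the original slides.

```python
import numpy as np

# Illustrative Leslie matrix for three age classes:
# the first row holds the fecundities f_x, the sub-diagonal holds the survival rates s_x.
f = [0.0, 1.5, 1.0]      # fecundity of age classes 0, 1, 2 (made-up values)
s = [0.6, 0.4]           # survival from class 0 -> 1 and from class 1 -> 2
L = np.array([
    [f[0], f[1], f[2]],
    [s[0], 0.0,  0.0 ],
    [0.0,  s[1], 0.0 ],
])

n = np.array([100.0, 50.0, 20.0])   # abundances per age class at time t
n_next = L @ n                       # abundances per age class at time t+1
print(n_next)
```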

The sum of all fecundities (weighted by the class abundances) gives the number of newborns; n_0·s_0 gives the number of individuals entering the first age class; s_{w−2}·n_{w−2} gives the number of individuals in the last class. The Leslie model is a linear approach: it assumes stable fecundity and mortality rates. The effect of the initial age composition disappears over time; the age composition approaches an equilibrium, although the whole population might go extinct. Population growth or decline is often exponential.

An example. Important properties: eventually all age classes grow or shrink at the same rate; initial growth depends on the age structure; early reproduction contributes more to population growth than late reproduction. In the long run this example population dies out, because the reproduction rates are too low to counterbalance the high mortality rates.

Leslie matrix. Does the Leslie approach predict a stationary point where the population abundances don't change any more? We are looking for a vector that does not change direction when multiplied by the Leslie matrix. This vector is called an eigenvector u of the matrix: L u = λ u, or equivalently (L − λI) u = 0, where I is the identity matrix. Eigenvectors are only defined for square matrices.
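Numerically, the stationary age structure can be read off from the dominant eigenvector of L. A sketch with numpy, reusing the illustrative Leslie matrix from above (again, the numbers are made up):

```python
import numpy as np

L = np.array([[0.0, 1.5, 1.0],
              [0.6, 0.0, 0.0],
              [0.0, 0.4, 0.0]])

eigenvalues, eigenvectors = np.linalg.eig(L)
k = np.argmax(np.abs(eigenvalues))        # index of the dominant eigenvalue
lam = eigenvalues[k].real                 # asymptotic growth rate of the population
u = eigenvectors[:, k].real
u = u / u.sum()                           # stable age distribution, as proportions
print(lam, u)
```

If lam > 1 the population grows, if lam < 1 it eventually dies out; the vector u gives the age composition the population converges to.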

The insulin-glycogen system. At high blood glucose levels insulin stimulates glycogen synthesis and inhibits glycogen breakdown. The change in glycogen concentration can be modelled as the sum of a constant production term and a concentration-dependent breakdown term. At equilibrium the change is zero. The vector {−f, g} is the eigenvector of the dispersion matrix and gives the stationary point; the value −1 is called the eigenvalue of this system.

How do we transform vector A into vector B? Multiplication of a vector by a square matrix defines a new vector that points in a different direction: the matrix defines a transformation in space. The vectors that do not change direction during the transformation are the eigenvectors. In general we define X u = λ u, where u is the eigenvector and λ the eigenvalue of the square matrix X. Image transformation: X contains all the information necessary to transform the image.
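A short numerical illustration of this point, with an arbitrary 2×2 matrix (the matrix and vectors are my own example, not from the slides): an eigenvector keeps its direction under the transformation, while a generic vector does not.

```python
import numpy as np

X = np.array([[2.0, 1.0],
              [1.0, 2.0]])        # an illustrative square transformation matrix

eigenvalues, eigenvectors = np.linalg.eig(X)
u = eigenvectors[:, 0]            # first eigenvector
lam = eigenvalues[0]

# X @ u points in the same direction as u: it equals lambda * u.
print(X @ u)
print(lam * u)

# A generic vector, by contrast, is turned towards a new direction.
a = np.array([1.0, 0.0])
print(X @ a)
```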

The basic equation: A u = λ u. Collecting all eigenvectors as the columns of U and all eigenvalues on the diagonal of Λ gives A U = U Λ, hence Λ = U⁻¹ A U. The matrices A and Λ have the same properties. We have diagonalized the matrix A and reduced the information contained in A to its characteristic values λ, the eigenvalues.
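A quick check of the diagonalization with numpy; the matrix is the same illustrative one as above, and trace and determinant are used as examples of properties shared by A and Λ.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, U = np.linalg.eig(A)
Lambda = np.diag(eigenvalues)              # diagonal matrix of eigenvalues

# A = U Lambda U^{-1}: the diagonalization reproduces A.
print(U @ Lambda @ np.linalg.inv(U))

# A and Lambda share, for example, trace and determinant.
print(np.trace(A), np.trace(Lambda))
print(np.linalg.det(A), np.linalg.det(Lambda))
```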

An n×n matrix has n eigenvalues and n eigenvectors. A matrix and its transpose have identical eigenvalues; for symmetric matrices the eigenvectors are identical as well. Eigenvectors of symmetric matrices are orthogonal.
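A small verification of the orthogonality claim, with an arbitrary symmetric 3×3 matrix of my own choosing:

```python
import numpy as np

S = np.array([[4.0, 1.0, 2.0],
              [1.0, 3.0, 0.0],
              [2.0, 0.0, 5.0]])           # an illustrative symmetric matrix

eigenvalues, U = np.linalg.eig(S)
# For a symmetric matrix the eigenvectors are orthonormal: U^T U = I.
print(np.round(U.T @ U, 10))

# S and its transpose have identical eigenvalues.
print(np.sort(eigenvalues), np.sort(np.linalg.eigvals(S.T)))
```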

How to calculate eigenvectors and eigenvalues? The equation (A − λI) u = 0 has either only the trivial solution u = 0, or non-trivial solutions if det(A − λI) = 0.

The general solutions for 2×2 matrices, with a distance matrix and a dispersion (covariance) matrix as worked examples.
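For a general 2×2 matrix the characteristic polynomial is quadratic, so the eigenvalues have a closed form. A short derivation with generic entries a, b, c, d (the specific distance and dispersion matrices of the slide are not reproduced here):

```latex
A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, \qquad
\det(A - \lambda I) = \lambda^2 - (a+d)\lambda + (ad - bc) = 0,
```

```latex
\lambda_{1,2} = \frac{(a+d) \pm \sqrt{(a+d)^2 - 4(ad-bc)}}{2}
             = \frac{\operatorname{tr}A \pm \sqrt{(\operatorname{tr}A)^2 - 4\det A}}{2}.
```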

For each eigenvalue the system (A − λI) u = 0 is indeterminate; the corresponding eigenvector is therefore found by matrix reduction.

Higher order matrices. The characteristic polynomial det(A − λI) = 0 is a polynomial of degree m in λ. Eigenvalues and eigenvectors can only be computed analytically up to m = 4; higher order matrices need numerical solutions.

The power method to find the largest eigenvalue. The power method is an iterative process that starts from an initial guess u_1 of the eigenvector and approximates the eigenvalue. Let the first component of u_1 be 1. Multiply by A and rescale the result so that its first component becomes 1; the scaling factor gives the next guess for λ. Repeat this procedure until the difference λ_{n+1} − λ_n is less than a predefined tolerance ε. Having the eigenvalues, the eigenvectors follow immediately from solving the linear system (A − λI) u = 0 by matrix reduction.
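A minimal power-iteration sketch in Python, following the rescaling scheme described above; the test matrix, the starting vector of ones and the tolerance are arbitrary choices.

```python
import numpy as np

def power_method(A, tol=1e-10, max_iter=1000):
    """Approximate the largest eigenvalue of A and the associated eigenvector."""
    u = np.ones(A.shape[0])          # initial guess for the eigenvector
    lam_old = 0.0
    for _ in range(max_iter):
        v = A @ u
        lam = v[0]                   # current eigenvalue estimate (first component)
        u = v / v[0]                 # rescale so that the first component becomes 1
        if abs(lam - lam_old) < tol:
            break
        lam_old = lam
    return lam, u

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
print(power_method(A))               # compare with np.linalg.eig(A)
```

The sketch assumes the first component of the dominant eigenvector is nonzero; a production implementation would normalize by the largest component instead.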

Some properties of eigenvectors. If Λ is the diagonal matrix of eigenvalues and U the matrix of eigenvectors, then A U = U Λ. The eigenvectors of symmetric matrices are orthogonal. The product of all eigenvalues equals the determinant of the matrix; the determinant is zero if at least one eigenvalue is zero, in which case the matrix is singular. Eigenvectors do not change when a matrix is multiplied by a scalar k, while the eigenvalues are multiplied by k. If A is triangular or diagonal, the eigenvalues of A are the diagonal entries of A.

PageRank. In large webs the term (1−d)/N is very small (d: damping factor, N: number of pages). Finding the ranking is a standard eigenvector problem: the requested ranking is simply contained in the largest eigenvector of the matrix P.

[Figure: example web graph linking four pages A, B, C and D]
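A toy PageRank computation for a four-page web. The link structure below is my own illustration and is not necessarily the one shown in the original figure; d = 0.85 is the usual damping factor.

```python
import numpy as np

# links[i, j] = 1 means page j links to page i (pages A, B, C, D in that order).
links = np.array([[0, 0, 1, 1],
                  [1, 0, 0, 0],
                  [1, 1, 0, 0],
                  [0, 1, 1, 0]], dtype=float)

A = links / links.sum(axis=0)        # column-normalize: each column sums to 1
d, N = 0.85, links.shape[1]
P = d * A + (1 - d) / N              # add the small (1-d)/N term to every entry

eigenvalues, eigenvectors = np.linalg.eig(P)
k = np.argmax(eigenvalues.real)      # dominant eigenvalue (1 for a column-stochastic P)
r = np.abs(eigenvectors[:, k].real)
print(r / r.sum())                   # PageRank scores, normalized to sum to 1
```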

Principal axes. The principal axes u1 and u2 define the longer and the shorter radius of an ellipse around the scatter of data points. The quotient of the longer to the shorter principal axis measures how closely the data points are associated (similar to the coefficient of correlation). The principal axes span a new Cartesian system and are orthogonal. In the new system the data points lie close to the new x-axis, and the variance around that axis is much smaller.

Major axis regression. The largest principal axis defines a regression line through the data points {x_i, y_i}. The major axis is identical with the largest eigenvector of the associated covariance matrix; the lengths of the axes are given by the eigenvalues, so the eigenvalues measure the association between x and y. The first principal axis is given by the largest eigenvalue. Major axis regression minimizes the Euclidean distances of the data points to the regression line.
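A sketch of major axis regression via the dominant eigenvector of the covariance matrix; the data are simulated with an arbitrary true slope of 2 purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.0 * x + rng.normal(scale=0.5, size=200)      # illustrative noisy data

X = np.column_stack([x, y])
C = np.cov(X, rowvar=False)                        # 2x2 covariance matrix

eigenvalues, eigenvectors = np.linalg.eig(C)
u1 = eigenvectors[:, np.argmax(eigenvalues)]       # major axis = dominant eigenvector

slope_mar = u1[1] / u1[0]                          # MAR slope from the major axis
intercept_mar = y.mean() - slope_mar * x.mean()    # the line passes through the centroid
print(slope_mar, intercept_mar)
```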

The relationship between ordinary least squares (OLS) and major axis (MAR) regression
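One common way to state this relationship, in terms of the sample variances s_x², s_y² and the covariance s_xy (this notation is mine, not taken from the slide): the MAR slope is the slope of the dominant eigenvector of the 2×2 covariance matrix, while the OLS slope is the covariance scaled by the variance of x.

```latex
b_{\mathrm{OLS}} = \frac{s_{xy}}{s_x^2}, \qquad
b_{\mathrm{MAR}} = \frac{s_y^2 - s_x^2 + \sqrt{(s_y^2 - s_x^2)^2 + 4\,s_{xy}^2}}{2\,s_{xy}}.
```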

A worked example in Excel

Ordinary least squares regression (OLS) versus major axis regression (MAR). Errors in the x and y variables cause OLS regression to predict lower slopes; major axis regression is closer to the correct slope. The MAR slope is always steeper than the OLS slope. If both variables have error terms, MAR should be preferred.

MAR is not stable after rescaling only one of the variables (for example, converting days to years by dividing by 360). MAR should not be used for comparing slopes if the variables have different dimensions and were measured in different units, because the slope then depends on the way of measurement. If both variables are rescaled in the same manner (by the same factor) this problem does not appear. OLS regression retains the correct scaling factor; MAR does not.

Scaling a matrix, and a simple way to take the power of a square matrix: from A = U Λ U⁻¹ it follows that Aⁿ = U Λⁿ U⁻¹, and Λⁿ is simply the diagonal matrix of the eigenvalues raised to the n-th power.
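A quick sketch of this trick with numpy, using the same small illustrative matrix as in the power-method example and checking the result against a direct matrix power.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

eigenvalues, U = np.linalg.eig(A)

def matrix_power_via_eig(eigvals, U, n):
    """Compute A^n as U diag(lambda^n) U^{-1}."""
    return U @ np.diag(eigvals ** n) @ np.linalg.inv(U)

print(matrix_power_via_eig(eigenvalues, U, 5))
print(np.linalg.matrix_power(A, 5))    # direct computation for comparison
```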

The variance and covariance of the data according to the principal axes. The vector of data points in the new system comes from the transformation according to the principal axes u1 and u2 (the projection of the centred data onto the eigenvectors). Eigenvectors are normalized to length one, and the eigenvectors are orthogonal. The variance of variable k in the new system is equal to the eigenvalue of the k-th principal axis, and the covariance of variables j and k in the new system is zero: the new variables are independent.
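A closing sketch that checks these statements numerically: simulated correlated data (arbitrary parameters of my choosing) are rotated onto the principal axes, and the covariance matrix in the new system is diagonal with the eigenvalues on the diagonal.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=300)
y = 0.8 * x + rng.normal(scale=0.4, size=300)       # correlated, illustrative data
X = np.column_stack([x, y])
Xc = X - X.mean(axis=0)                              # center the data

C = np.cov(Xc, rowvar=False)
eigenvalues, U = np.linalg.eig(C)                    # principal axes in the columns of U

Z = Xc @ U                                           # coordinates in the rotated system
C_new = np.cov(Z, rowvar=False)
print(np.round(C_new, 6))       # diagonal entries match the eigenvalues; covariances ~ 0
print(np.round(eigenvalues, 6))
```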