EMPIRICAL ORTHOGONAL FUNCTIONS
2 different modes
Sabrina Krista Gisselle Lauren

Principal Component Analysis or Empirical Orthogonal Functions
A linear combination of spatial predictors, or modes, that are normal (orthogonal) to each other.
EOF analysis is equivalent to "factor analysis," a data-reduction method used in the social sciences.
It gives a compact representation of the temporal and spatial variability of several (or many) time series in terms of orthogonal functions (statistical modes).
(Example figure: Subtidal Flow at Chesapeake Bay Entrance, cm/s)
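
As a road map for the MATLAB steps worked out later in these slides, here is a minimal sketch of the whole EOF procedure applied to a made-up space-time matrix. The variable names and the random data are illustrative only, not the Chesapeake Bay record:

>> U = randn(1000, 10);                          % hypothetical data: 1000 times x 10 locations
>> Ua = U - repmat(mean(U,1), size(U,1), 1);     % remove the time mean at each location
>> C = cov(Ua);                                  % covariance matrix [10 x 10]
>> [f, L] = eig(C);                              % f: EOFs (eigenvectors); L: eigenvalues on the diagonal
>> [lambda, idx] = sort(diag(L), 'descend');     % order modes by explained variance
>> f = f(:, idx);                                % reorder the EOFs to match
>> pct = 100*lambda/sum(lambda);                 % percent variance explained by each mode
>> a = Ua*f;                                     % time-dependent amplitudes of each mode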

Drum Head (circular membrane) vibrating modes

Write the data series $U_m(t) = U(z_m, t)$ as:

$U_m(t) = \sum_{i=1}^{M} a_i(t)\, f_{im}$

where
$f_{im}$ are orthogonal spatial functions, also known as eigenvectors or EOFs;
$\lambda_i$ are the eigenvalues of the problem (they represent the variance explained by each mode $i$);
$a_i(t)$ are the amplitudes or weights of the spatial functions as they change in time;
$m$ indexes each of the time series (a function of depth or horizontal distance).

Subtidal Flow at Chesapeake Bay Entrance (cm/s)

Eigenvectors (spatial functions) or EOFs (figure): $f_{1m}$ explains 85% of the variability, $f_{2m}$ 13%, and $f_{3m}$ 1%; $a_1$ and $a_2$ are the corresponding amplitude time series.

(Figure: measured series compared with reconstructions from Modes 1+2 and Modes 1+2+3.)

Goal: write the data series $U$ at any location $m$ as the sum of $M$ orthogonal spatial functions $f_{im}$:

$U_m(t) = \sum_{i=1}^{M} a_i(t)\, f_{im}$

where $a_i(t)$ is the amplitude of the $i$-th orthogonal mode at time $t$.

For the $f_{im}$ to be orthogonal, we require that:

$\sum_m f_{im}\, f_{jm} = \delta_{ij}$

Two functions are orthogonal when the sum (or integral) of their product over space or time is zero.

The orthogonality condition means that the time-averaged covariance of the amplitudes satisfies:

$\overline{a_i(t)\, a_j(t)} = \delta_{ij}\, \lambda_i$

where the overbar denotes a time average and $\lambda_i$ is the variance of each orthogonal mode.
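
Both orthogonality statements can be checked numerically; a short sketch that reuses the illustrative Ua, f, and a from the road-map sketch above:

>> max(max(abs(f'*f - eye(size(f,2)))))    % ~0: the EOFs are orthonormal
>> A = cov(a);                             % time-averaged covariance of the amplitudes
>> max(max(abs(A - diag(diag(A)))))        % ~0: amplitudes of different modes are uncorrelated
>> diag(A)                                 % variance of each mode: the eigenvalues, in descending order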

If we form the covariance matrix of the data,

$C_{mk} = \overline{U_m(t)\, U_k(t)}$

(this is the mean-product matrix; it equals the covariance matrix if the means of $U_m(t)$ are removed), then multiplying both sides by $f_{ik}$, summing over all $k$, and using the orthogonality condition gives the canonical form of the eigenvalue problem:

$\sum_k C_{mk}\, f_{ik} = \lambda_i\, f_{im}$

with eigenvectors $f_{im}$ (the EOFs) and eigenvalues $\lambda_i$. The covariance matrix is used to get the eigenvectors and eigenvalues; the eigenvectors are then used to get the amplitudes $a_i(t)$.

This eigenvalue problem corresponds to a linear system of equations:

$\left( C - \lambda I \right) \vec{\phi} = 0$

where $C_{mk}$ is the covariance matrix, $I$ is the unit (identity) matrix, $\lambda$ are the eigenvalues, and $\vec{\phi}$ are the EOFs.

For a non-trivial solution ($\vec{\phi} \neq 0$) the determinant must vanish:

$\det\left( C - \lambda I \right) = 0$

The sum of the variances in the data equals the sum of the eigenvalues:

$\sum_m C_{mm} = \sum_i \lambda_i$

and the time-dependent amplitudes of the $i$-th mode are obtained by projecting the data onto the EOFs:

$a_i(t) = \sum_m U_m(t)\, f_{im}$
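
Two more quick checks of these statements, again with the illustrative C, L, Ua, and f defined in the sketches above:

>> sum(diag(C)) - sum(diag(L))    % ~0: total variance in the data equals the sum of the eigenvalues
>> a = Ua*f;                      % time-dependent amplitudes: a_i(t) = sum over m of U_m(t)*f_im
>> var(a(:,1))                    % ~ the largest eigenvalue: Mode 1 carries the most variance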

Data matrix ul = [6637, 18]: 6637 rows (time steps) by 18 columns (time series, one per depth bin), so rows > columns.

Matrix ul = [6637, 18]

>> uc = cov(ul);                                       % covariance matrix of the data
>> u1 = ul(:,1);
>> sum((u1-mean(u1)).^2)/(length(u1)-1)                % variance of column 1; equals uc(1,1)
ans =
>> u2 = ul(:,2);
>> sum((u1-mean(u1)).*(u2-mean(u2)))/(length(u1)-1)    % covariance of columns 1 and 2; equals uc(1,2)
ans =

(Figure: covariance matrix; maximum covariance at the surface.)

>> uc = cov(ul);
>> [v,d] = eig(uc);                   % v: eigenvectors (EOFs); d: diagonal matrix of eigenvalues (lambda)
>> lambda = diag(d)/sum(diag(d));     % eigenvalues as fractions of the total variance
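
MATLAB's eig returns the modes in ascending order of eigenvalue, so the dominant modes sit at the end of lambda. A short sketch to list the leading percentages, assuming ul is the [6637 x 18] velocity matrix from the earlier slide (for this data set it should recover roughly the 85%, 13%, and 1% quoted earlier):

>> lambda = diag(d)/sum(diag(d));
>> 100*lambda(end:-1:end-2)           % percent variance of the three leading modes, largest first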

>> uc = cov(ul);
>> [v,d] = eig(uc);

>> uc = cov(ul);
>> [v,d] = eig(uc);
>> v = fliplr(v);    % flips the matrix left to right, so the dominant EOF is in column 1
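
If the EOFs are flipped so the dominant mode is in column 1, the eigenvalues need the same reversal to stay matched to their modes; a small sketch:

>> lambda = flipud(diag(d));      % reverse the eigenvalues to match the flipped EOFs
>> lambda(1)/sum(lambda)          % fraction of variance explained by Mode 1 (column 1 of v)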

(Figure: spatial patterns of the two leading EOF modes, each labeled with its percent variance explained.)

(Figure: amplitude time series of the leading modes, labeled with percent variance explained.)

>> ts = ul*v;        % ts = [6637, 18]: time-dependent amplitude (principal component) of each mode
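
A quick way to confirm that each column of ts is the amplitude time series of the corresponding mode, using the flipped v from the previous slide (so column 1 is Mode 1):

>> var(ts(:,1))                   % ~ the largest eigenvalue of cov(ul): variance carried by Mode 1
>> plot(ts(:,1)); xlabel('time step'); ylabel('Mode 1 amplitude')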

>> for k = 1:nz
       vt(k,:,:) = ts(:,k)*v(:,k)';    % data reconstructed by mode k alone
   end
% vt = [18, 6637, 18]: mode number x evolution in time x time series number
>> v1 = squeeze(vt(1,:,:))';            % Mode 1 reconstruction as a function of depth and time
>> v2 = squeeze(vt(2,:,:))';            % Mode 2 reconstruction
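
Because the EOFs form a complete orthonormal basis, summing the per-mode reconstructions over all modes should return the original data. A sketch of that check, plus the partial sum used in the "measured vs Modes 1+2+3" comparison, assuming vt, ul, and nz = 18 from the loop above:

>> recon = squeeze(sum(vt, 1));              % sum over all modes -> [6637 x 18]
>> max(max(abs(recon - ul)))                 % ~0 (round-off): all modes together reproduce the data
>> modes123 = squeeze(sum(vt(1:3,:,:), 1));  % reconstruction from Modes 1+2+3 only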

Suggestions for the Final Project:
1) Calculate complex EOFs of separate records (raw and filtered).
2) Calculate complex EOFs of all records at the same time (raw and filtered).
3) Describe and understand the spatial variability of the EOF modes.
4) Describe and understand the temporal variability of the EOF coefficients (amplitudes).
5) Perform wavelet analysis (with coherence and cross-wavelet) of the EOF coefficients (which vary in time) and of possible parameters (e.g., wind) linked to their temporal variability.
6) You could also calculate the coherence squared between the EOF coefficients and the possible parameters causing the variability (a starting sketch follows this list).
7) Write up your story.
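
For item 6, one possible starting point is MATLAB's mscohere (Signal Processing Toolbox). The series names mode1 and wind below are hypothetical stand-ins for an EOF coefficient and a candidate forcing record of equal length and sampling; adjust the window and sampling frequency to your data:

>> fs = 1/3600;                                                     % sampling frequency in Hz for hourly series
>> [c2, fr] = mscohere(mode1, wind, hanning(256), 128, 256, fs);    % magnitude-squared coherence vs frequency
>> plot(fr, c2); xlabel('frequency (Hz)'); ylabel('coherence^2')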