Lecture 11 Vector Spaces and Singular Value Decomposition

Syllabus
Lecture 01 Describing Inverse Problems
Lecture 02 Probability and Measurement Error, Part 1
Lecture 03 Probability and Measurement Error, Part 2
Lecture 04 The L2 Norm and Simple Least Squares
Lecture 05 A Priori Information and Weighted Least Squares
Lecture 06 Resolution and Generalized Inverses
Lecture 07 Backus-Gilbert Inverse and the Trade-Off of Resolution and Variance
Lecture 08 The Principle of Maximum Likelihood
Lecture 09 Inexact Theories
Lecture 10 Nonuniqueness and Localized Averages
Lecture 11 Vector Spaces and Singular Value Decomposition
Lecture 12 Equality and Inequality Constraints
Lecture 13 L1, L∞ Norm Problems and Linear Programming
Lecture 14 Nonlinear Problems: Grid and Monte Carlo Searches
Lecture 15 Nonlinear Problems: Newton's Method
Lecture 16 Nonlinear Problems: Simulated Annealing and Bootstrap Confidence Intervals
Lecture 17 Factor Analysis
Lecture 18 Varimax Factors, Empirical Orthogonal Functions
Lecture 19 Backus-Gilbert Theory for Continuous Problems; Radon's Problem
Lecture 20 Linear Operators and Their Adjoints
Lecture 21 Fréchet Derivatives
Lecture 22 Exemplary Inverse Problems, incl. Filter Design
Lecture 23 Exemplary Inverse Problems, incl. Earthquake Location
Lecture 24 Exemplary Inverse Problems, incl. Vibrational Problems

Purpose of the Lecture
View m and d as points in the spaces of model parameters and data
Develop the idea of transformations of coordinate axes
Show how transformations can be used to convert a weighted problem into an unweighted one
Introduce the Natural Solution and the Singular Value Decomposition

Part 1: the spaces of model parameters and data

what is a vector?
algebraic viewpoint: a vector is a quantity that is manipulated (especially, multiplied) via a specific set of rules
geometric viewpoint: a vector is a direction and length in space
in our case, a column-vector in a space of very high dimension

[Figure: the space of model parameters S(m), with axes m1, m2, m3 and a vector m, and the space of data S(d), with axes d1, d2, d3 and a vector d.]

forward problem: d = Gm
maps an m onto a d
maps a point in S(m) to a point in S(d)

[Figure: Forward Problem: Gm maps a point m in S(m) to a point d in S(d).]

inverse problem: m = G^(-g) d
maps a d onto an m
maps a point in S(d) to a point in S(m)

[Figure: Inverse Problem: G^(-g) d maps a point d in S(d) to a point m in S(m).]

Part 2: Transformations of coordinate axes

coordinate axes are arbitrary
given M linearly-independent basis vectors m^(i), we can write any vector m* ...

[Figure: a set of three basis vectors that spans the space S(m), versus a set of vectors that does not span the space.]

... as a linear combination of these basis vectors:
m* = α_1 m^(1) + α_2 m^(2) + … + α_M m^(M)

components of m* in the new coordinate system: m*′_i = α_i
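As a concrete sketch in MATLAB (in the spirit of the course; the basis vectors and m* below are invented for illustration), the components α_i can be found by collecting the basis vectors as the columns of a matrix and solving a linear system:

  B = [1 1 0; 0 1 1; 0 0 1]';  % columns are three linearly-independent basis vectors (hypothetical)
  mstar = [2; 3; 5];           % the vector m* to re-express (hypothetical)
  alpha = B \ mstar;           % solve B*alpha = mstar for the components alpha_i
  max(abs(B*alpha - mstar))    % ~0: m* is recovered as the linear combination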

might it be fair to say that the components of a vector are a column-vector?

matrix formed from the basis vectors: M_ij = v_j^(i), i.e., row i of M is the basis vector v^(i)

transformation matrix T

same vector, different components

Q: does T preserve "length"? (in the sense that m^T m = m′^T m′)
A: only when T^T = T^(-1), i.e., when T is orthogonal
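A quick numerical check, as a sketch (the rotation angle and test vector are arbitrary): a 2-D rotation matrix satisfies T^T = T^(-1) and so preserves length:

  theta = 0.7;                                          % arbitrary rotation angle
  T = [cos(theta) -sin(theta); sin(theta) cos(theta)];  % orthogonal: T'*T = I
  m  = [3; 4];
  mp = T*m;                 % components in the rotated coordinate system
  [m'*m, mp'*mp]            % both equal 25: m^T m is preserved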

transformation of the model space axes:
d = Gm = GIm = [G T_m^(-1)] [T_m m] = G′m′
d = Gm and d = G′m′: the same equation in a different coordinate system for m

transformation of the data space axes:
d′ = T_d d = [T_d G] m = G′′m
d = Gm and d′ = G′′m: the same equation in a different coordinate system for d

transformation of both data space and model space axes:
d′ = T_d d = [T_d G T_m^(-1)] [T_m m] = G′′′m′
d = Gm and d′ = G′′′m′: the same equation in different coordinate systems for d and m

Part 3: how transformations can be used to convert a weighted problem into an unweighted one

when are transformations useful?
remember this? minimize: E + L = e^T W_e e + m^T W_m m
massage this into a pair of transformations

m^T W_m m
W_m = D^T D, or W_m = W_m^(1/2) W_m^(1/2) = [W_m^(1/2)]^T W_m^(1/2) (OK since W_m is symmetric)
so m^T W_m m = m^T D^T D m = [Dm]^T [Dm], which identifies T_m = D

e^T W_e e
W_e = W_e^(1/2) W_e^(1/2) = [W_e^(1/2)]^T W_e^(1/2) (OK since W_e is symmetric)
so e^T W_e e = [W_e^(1/2) e]^T [W_e^(1/2) e], which identifies T_e = W_e^(1/2)

we have converted weighted least-squares into unweighted least-squares: minimize E′ + L′ = e′^T e′ + m′^T m′

steps
1: compute the transformations T_m = D = W_m^(1/2) and T_e = W_e^(1/2)
2: transform the data kernel and the data to the new coordinate system: G′′′ = [T_e G T_m^(-1)] and d′ = T_e d
3: solve G′′′m′ = d′ for m′ using an unweighted method
4: transform m′ back to the original coordinate system: m = T_m^(-1) m′
steps 1, 2, and 4 are extra work, but they allow the simpler, unweighted solution method of step 3 (see the sketch below)
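A minimal MATLAB sketch of the four steps, assuming W_e and W_m are symmetric positive definite; G, d, and the weights below are invented for illustration, and step 3 uses the damped least-squares formula that minimizes e′^T e′ + m′^T m′:

  G  = [1 0 1; 0 1 1; 1 1 0; 1 1 1];  % hypothetical 4x3 data kernel
  d  = [2; 3; 3; 4];                  % hypothetical observed data
  We = diag([1 1 4 4]);               % hypothetical error weights
  Wm = diag([2 2 2]);                 % hypothetical model weights
  Te = sqrtm(We);                     % step 1: Te = We^(1/2)
  Tm = sqrtm(Wm);                     %         Tm = D = Wm^(1/2)
  Gp = Te*G/Tm;                       % step 2: G''' = Te*G*inv(Tm)
  dp = Te*d;                          %         d'   = Te*d
  mp = (Gp'*Gp + eye(3)) \ (Gp'*dp);  % step 3: unweighted damped least squares
  m  = Tm \ mp;                       % step 4: m = inv(Tm)*m'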

Part 4: The Natural Solution and the Singular Value Decomposition (SVD)

Gm = d: suppose that we could divide up the problem like this, splitting the model into m = m_p + m_0 and the data into d = d_p + d_0 ...

Gm = d: only m_p can affect d, since Gm_0 = 0

Gm = d: Gm_p can only affect d_p, since no m can lead to a d_0

m_p: determined by the data
m_0: determined by a priori information
d_p: determined by m_p
d_0: not possible to reduce

natural solution: determine m_p by solving d_p − Gm_p = 0, and set m_0 = 0

what we need is a way to achieve this division of Gm = d

Singular Value Decomposition (SVD)

singular value decomposition: G = U Λ V^T, with U^T U = I and V^T V = I
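In MATLAB the decomposition is a single call; as a sketch with an arbitrary example matrix, the orthonormality of U and V is easy to verify:

  G = [1 2; 3 4; 5 6];       % arbitrary 3x2 example matrix
  [U, LAMBDA, V] = svd(G);   % G = U*LAMBDA*V'
  norm(U'*U - eye(3))        % ~0: U'*U = I
  norm(V'*V - eye(2))        % ~0: V'*V = I
  norm(U*LAMBDA*V' - G)      % ~0: G is recovered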

suppose only p of the λ's are non-zero

then only the first p columns of U and the first p columns of V matter: G = U_p Λ_p V_p^T

U_p^T U_p = I and V_p^T V_p = I, since the vectors are mutually perpendicular and of unit length
U_p U_p^T ≠ I and V_p V_p^T ≠ I, since the vectors do not span the entire space
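Continuing the sketch with a rank-deficient example (a hypothetical matrix with p = 2), the truncated matrices are semi-orthogonal:

  G = [1 2 3; 2 4 6; 1 0 1];   % rank-2 example: row 2 = 2*(row 1)
  [U, LAMBDA, V] = svd(G);
  p  = 2;                      % number of non-zero singular values
  Up = U(:,1:p);  Vp = V(:,1:p);
  norm(Up'*Up - eye(p))        % ~0: Up'*Up = I
  norm(Up*Up' - eye(3))        % not ~0: the p columns do not span the space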

The part of m that lies in V_0 cannot affect d, since V_p^T V_0 = 0; so V_0 is the model null space

The part of d that lies in U_0 cannot be affected by m, since Gm = U_p [Λ_p V_p^T m] and U_0^T U_p = 0; so U_0 is the data null space

The Natural Solution: m_est = V_p Λ_p^(-1) U_p^T d

The part of m_est in V_0 has zero length

The error e = d − Gm_est has no component in U_p

computing the SVD: in MATLAB, [U, LAMBDA, V] = svd(G)

determining p: use a plot of λ_i vs. i; however, a case with a clear division between λ_i > 0 and λ_i = 0 is rare
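A common practical sketch (the relative tolerance is an arbitrary choice, not a rule from the lecture; LAMBDA is reused from the preceding sketch): treat as non-zero only the singular values above a small fraction of the largest one:

  lambda = diag(LAMBDA);          % singular values, largest first
  tol = max(lambda) * 1e-6;       % hypothetical relative tolerance
  p = sum(lambda > tol);          % effective number of non-zero singular values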

[Figure: panels (A)-(D) plotting singular values λ_i against index number i; in some cases the cutoff p is clear, in others only a tentative p? can be picked.]

Natural Solution: m_est = V_p Λ_p^(-1) U_p^T d_obs
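Putting the pieces together, a sketch of the natural solution (reusing the hypothetical rank-2 G from above, with invented data d_obs):

  G    = [1 2 3; 2 4 6; 1 0 1];           % hypothetical rank-2 data kernel
  dobs = [1; 2; 2];                        % hypothetical observed data
  [U, LAMBDA, V] = svd(G);
  lambda = diag(LAMBDA);
  p  = sum(lambda > max(lambda)*1e-6);     % p = 2 here
  Up = U(:,1:p);  Vp = V(:,1:p);  Lp = diag(lambda(1:p));
  mest = Vp * (Lp \ (Up'*dobs));           % mest = Vp * inv(Lp) * Up' * dobs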

[Figure: panels (A) and (B) comparing the observed data d_obs with Gm, and the true model m_true with the natural solution (m_est)_N and the damped minimum-length solution (m_est)_DML.]

resolution and covariance
model resolution: R = V_p V_p^T
data resolution: N = U_p U_p^T
covariance (assuming uncorrelated data with variance σ_d^2): [cov m_est] = σ_d^2 V_p Λ_p^(-2) V_p^T

large covariance if any of the λ_p are small, since Λ_p^(-2) contains 1/λ_i^2
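As a sketch of these quantities, continuing with Up, Vp, lambda, and p from the natural-solution example (σ_d^2 = 1 is an assumed uniform data variance):

  R = Vp*Vp';                                            % model resolution matrix
  N = Up*Up';                                            % data resolution matrix
  sigma2d = 1;                                           % assumed data variance
  covm = sigma2d * Vp * diag(lambda(1:p).^(-2)) * Vp';   % covariance of mest
  % a small lambda_i contributes a huge 1/lambda_i^2, hence large covariance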

Is the Natural Solution the best solution? Why restrict a priori information to the null space when the data are known to be in error? A solution that has slightly worse error but fits the a priori information better might be preferred...