Lecture 24 Exemplary Inverse Problems including Vibrational Problems

Syllabus

Lecture 01 Describing Inverse Problems
Lecture 02 Probability and Measurement Error, Part 1
Lecture 03 Probability and Measurement Error, Part 2
Lecture 04 The L2 Norm and Simple Least Squares
Lecture 05 A Priori Information and Weighted Least Squares
Lecture 06 Resolution and Generalized Inverses
Lecture 07 Backus-Gilbert Inverse and the Trade Off of Resolution and Variance
Lecture 08 The Principle of Maximum Likelihood
Lecture 09 Inexact Theories
Lecture 10 Nonuniqueness and Localized Averages
Lecture 11 Vector Spaces and Singular Value Decomposition
Lecture 12 Equality and Inequality Constraints
Lecture 13 L1, L∞ Norm Problems and Linear Programming
Lecture 14 Nonlinear Problems: Grid and Monte Carlo Searches
Lecture 15 Nonlinear Problems: Newton's Method
Lecture 16 Nonlinear Problems: Simulated Annealing and Bootstrap Confidence Intervals
Lecture 17 Factor Analysis
Lecture 18 Varimax Factors, Empirical Orthogonal Functions
Lecture 19 Backus-Gilbert Theory for Continuous Problems; Radon's Problem
Lecture 20 Linear Operators and Their Adjoints
Lecture 21 Fréchet Derivatives
Lecture 22 Exemplary Inverse Problems, incl. Filter Design
Lecture 23 Exemplary Inverse Problems, incl. Earthquake Location
Lecture 24 Exemplary Inverse Problems, incl. Vibrational Problems

Purpose of the Lecture

solve a few exemplary inverse problems: tomography, vibrational problems, and determining mean directions

Part 1: tomography

tomography: each datum is a line integral of the model function along a ray whose path is assumed known

dᵢ = ∫(ray i) m(x(s), y(s)) ds

discretization: the model function is divided up into M pixels, mⱼ

data kernel: Gᵢⱼ = length of ray i in pixel j

here’s an easy, approximate way to calculate it

start with G set to zero, then consider each ray in sequence

divide each ray into segments of arc length ∆s and step from segment to segment

determine the pixel index, say j, within which the center of each line segment falls; add ∆s to Gᵢⱼ; repeat for every segment of every ray

You can make this approximation indefinitely accurate simply by decreasing the size of ∆s (albeit at the expense of increased computation time).
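
Below is a minimal MATLAB sketch of this ray-stepping construction, assuming straight rays on a unit-square image; the arrays src and rec (ray endpoints) and all parameter values are illustrative assumptions, not taken from the lecture.

% build the tomographic data kernel G by stepping along straight rays
% assumes src and rec are N-by-2 arrays of (x,y) ray endpoints on [0,1]^2
L = 32; M = L^2;                        % L-by-L grid of pixels
N = size(src,1);                        % number of rays
Dx = 1/L;                               % pixel width
ds = Dx/4;                              % target arc-length step, << pixel size
G = spalloc(N, M, 2*N*L);               % sparse: about L nonzeros per row
for i = 1:N
    d = rec(i,:) - src(i,:);
    len = norm(d); t = d/len;           % ray length and unit tangent
    ns = ceil(len/ds); h = len/ns;      % divide ray into ns equal segments
    for k = 1:ns
        p = src(i,:) + (k-0.5)*h*t;     % center of segment k
        ix = min(max(ceil(p(1)/Dx),1),L);   % pixel column of the center
        iy = min(max(ceil(p(2)/Dx),1),L);   % pixel row of the center
        j = (ix-1)*L + iy;              % linear pixel index j
        G(i,j) = G(i,j) + h;            % add segment length to G(i,j)
    end
end

Halving ds roughly halves the discretization error in each element of G, at the cost of doubling the inner loop count.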

Suppose that there are M = L² pixels. A ray passes through about L pixels, so G has NL² elements, only about NL of which are non-zero; the fraction of non-zero elements is about 1/L, hence G is very sparse.

In a typical tomographic experiment some pixels will be missed entirely and some groups of pixels will be sampled by only one ray

the values of the missed pixels are completely undetermined, and only the average value of each singly-sampled group is determined; hence the problem is mixed-determined (and usually M > N as well)

so you must introduce some sort of a priori information to achieve a solution, say a priori information that the solution is small, or that the solution is smooth

Solution Possibilities

1. Damped Least Squares (implements smallness): matrix G is sparse and very large; use bicg() with the damped least squares function

2. Weighted Least Squares (implements smoothness): matrix F consists of G plus second-derivative smoothing; use bicg() with the weighted least squares function

The test case has very good ray coverage, so smoothing is probably unnecessary; a damped least squares sketch follows.
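
A minimal sketch of option 1, assuming the sparse matrix G and observed data vector dobs from above; the damping parameter eps2 is a placeholder value, not the lecture's.

% damped least squares via bicg(): solve (G'G + eps2*I) m = G'*dobs
% without ever forming G'G explicitly
eps2 = 1.0e-4;                          % damping parameter (assumed; tune it)
Mp = size(G,2);                         % number of model parameters
afun = @(m,flag) G'*(G*m) + eps2*m;     % applies (G'G + eps2*I); symmetric,
                                        % so 'transp' and 'notransp' agree
mest = bicg(afun, G'*dobs, 1e-6, 3*Mp); % biconjugate gradient solve

Because afun only needs matrix-vector products with the sparse G, the cost per iteration is proportional to the number of non-zero elements, about NL.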

(figure: the true model in the (x, y) plane, with sources and receivers marked)

(figure: ray coverage in the (x, y) plane; just a "few" rays are shown, else the image would be black)

(figure: the data plotted in Radon-style coordinates, the angle θ of each ray versus the distance r of the ray to the center of the image; minor data gaps are visible)

Lesson from Radon's Problem: full data coverage is needed to achieve an exact solution

(figure: estimated model and true model in the (x, y) plane)

(figure: estimated model) the streaks are due to minor data gaps; they disappear if the ray density is doubled

but what if the observational geometry is poor, so that broad swaths of rays are missing?

(figure, panels A-D: three images in the (x, y) plane and the data in (θ, r) coordinates; complete angular coverage)

(figure, panels A-D: three images in the (x, y) plane and the data in (θ, r) coordinates; incomplete angular coverage)

Part 2: vibrational problems

statement of the problem: can you determine the structure of an object just from knowing the characteristic frequencies at which it vibrates? (figure: a vibrational spectrum, amplitude versus frequency)

the Fréchet derivative of frequency with respect to velocity is usually computed using perturbation theory, hence a quick discussion of what that is ...

perturbation theory: a technique for computing an approximate solution to a complicated problem, when

1. the complicated problem is related to a simple problem by a small perturbation, and
2. the solution of the simple problem is known

simple example

we know the solution to the unperturbed equation, x² = c²: it is x₀ = ±c
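
To make the recipe concrete, here is a worked version of this kind of example, assuming the perturbed equation is x² = c² + εd (the constant d is introduced here purely for illustration):

write x = x₀ + εx₁ + ... and substitute:
(x₀ + εx₁)² = x₀² + 2εx₀x₁ + O(ε²) = c² + εd
order ε⁰: x₀² = c², so x₀ = ±c (the known simple solution)
order ε¹: 2x₀x₁ = d, so x₁ = d/(2x₀) = ±d/(2c)
hence x ≈ ±(c + εd/(2c))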

Here's the actual vibrational problem: the acoustic equation with spatially variable sound velocity v(x); for a mode p(x, t) = pₙ(x) exp(iωₙt) it reads

v²(x) d²pₙ/dx² = −ωₙ² pₙ(x)

in this equation, the ωₙ are the frequencies of vibration, or eigenfrequencies, and the pₙ are the patterns of vibration, or eigenfunctions or modes

assume the velocity can be written as a perturbation around some simple structure v⁽⁰⁾(x):

v(x) = v⁽⁰⁾(x) + εv⁽¹⁾(x) + ...

the eigenfunctions are known to obey an orthonormality relationship, ∫ pₙ⁽⁰⁾(x) pₘ⁽⁰⁾(x) dx = δₙₘ

now represent the eigenfrequencies and eigenfunctions as power series in ε,

ωₙ = ωₙ⁽⁰⁾ + εωₙ⁽¹⁾ + ...  and  pₙ(x) = pₙ⁽⁰⁾(x) + εpₙ⁽¹⁾(x) + ...

and represent the first-order perturbed shapes as sums of the unperturbed shapes:

pₙ⁽¹⁾(x) = Σₘ bₙₘ pₘ⁽⁰⁾(x)

plug the series into the original differential equation, group terms of equal power in ε, and solve for the first-order perturbation in the eigenfrequencies, ωₙ⁽¹⁾, and the eigenfunction coefficients, bₙₘ (using orthonormality in the process)

result: explicit formulas for ωₙ⁽¹⁾ and bₙₘ in terms of the unperturbed quantities

the result for the eigenfrequencies can be written as a standard inverse problem

standard continuous inverse problem:

ωᵢ⁽¹⁾ = ∫ Gᵢ(x) v⁽¹⁾(x) dx

the perturbations in the eigenfrequencies are the data; the perturbation in the velocity structure is the model function

the data kernel, or Fréchet derivative, Gᵢ(x) depends upon the unperturbed velocity structure, the unperturbed eigenfrequency, and the unperturbed mode
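
In discrete form (anticipating the equally spaced sampling chosen below; the ∆x quadrature factor is made explicit here as an assumption), the continuous problem becomes a standard matrix equation:

dᵢ = ωᵢ⁽¹⁾ ≈ Σⱼ Gᵢⱼ mⱼ, with mⱼ = v⁽¹⁾(xⱼ) and Gᵢⱼ = Gᵢ(xⱼ) ∆x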

example: a 1D organ pipe occupying 0 ≤ x ≤ h, with an open end (p = 0) at x = 0 and a closed end (dp/dx = 0) at x = h; the unperturbed problem has constant velocity, the perturbed problem has variable velocity

(figure: the first few modes p₁, p₂, p₃ of the pipe and their frequencies)

solution to the unperturbed problem (constant velocity v⁽⁰⁾ with these boundary conditions):

pₙ⁽⁰⁾(x) = √(2/h) sin(ωₙ⁽⁰⁾x/v⁽⁰⁾), with ωₙ⁽⁰⁾ = (n − ½)πv⁽⁰⁾/h

(figure: the velocity structure, perturbed and unperturbed, as a function of position x)

how should the model function be discretized? our choice is very simple: m is the velocity function evaluated at a sequence of points equally spaced in x

the data: a list of frequencies of vibration (figure: the true unperturbed, true perturbed, and observed frequencies, where observed = true perturbed + noise)

(figure: the data kernel, plotted as eigenfrequency index ωᵢ versus model parameter index mⱼ)

Solution Possibilities

1. Damped Least Squares (implements smallness): matrix G is not sparse; use bicg() with the damped least squares function

2. Weighted Least Squares (implements smoothness): matrix F consists of G plus second-derivative smoothing; use bicg() with the weighted least squares function — our choice (a sketch follows)
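
A minimal sketch of option 2, assuming G (N-by-M, dense) and the data vector dobs exist; the smoothing weight epsi and the exact form of the second-derivative matrix are illustrative assumptions.

% weighted least squares with second-derivative smoothing via bicg()
[N, M] = size(G);
epsi = 1.0;                              % smoothness weight (assumed; tune it)
D = spdiags(repmat([1 -2 1], M-2, 1), 0:2, M-2, M);  % second-difference rows
F = [G; epsi*D];                         % G plus smoothing constraints
f = [dobs; zeros(M-2,1)];                % data plus zero "smoothness data"
afun = @(m,flag) F'*(F*m);               % applies F'F (symmetric)
mest = bicg(afun, F'*f, 1e-6, 3*M);      % solve F'F m = F'f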

(figure: the solution, true and estimated velocity versus position x)

(figure: the model resolution matrix, row index mᵢ versus column index mⱼ)

(figure: the model resolution matrix again, with an off-diagonal feature highlighted: what is this?)

This problem has a type of nonuniqueness that arises from its symmetry: a positive velocity anomaly at one end of the organ pipe trades off with a negative anomaly at the other end.

this behavior is very common, and is why eigenfrequency data are usually supplemented with other data, e.g. travel times along rays, that are not subject to this nonuniqueness

Part 3: determining mean directions

statement of the problem: you measure a bunch of directions (unit vectors); what's their mean? (figure: the data and their mean as vectors in (x, y, z) space)

what's a reasonable probability density function for directional data? a Gaussian doesn't quite work, because it's defined on the wrong interval, (−∞, +∞)

(figure: the coordinate system, with angles θ and ϕ defined relative to the central vector and a datum) the distribution should be symmetric in ϕ

(figure: the Fisher distribution p(θ) on −π ≤ θ ≤ π, drawn for κ = 1 and κ = 5)

Fisher distribution: similar in shape to a Gaussian, but defined on a sphere; its "precision parameter" κ quantifies the width of the p.d.f.
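
For reference, the standard form of the Fisher p.d.f., expressed per unit solid angle (this explicit formula is supplied here; it does not survive in the transcript):

p(θ, ϕ) = κ exp(κ cos θ) / (4π sinh κ)

It integrates to 1 over the sphere, peaks at θ = 0, and is independent of ϕ, as required.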

solve by direct application of the principle of maximum likelihood

maximize the joint p.d.f. of the data with respect to κ and cos(θ)

x: Cartesian components of the observed unit vectors
m: Cartesian components of the central unit vector; must constrain |m| = 1

likelihood function L(m, κ), with the constraint C = mᵀm − 1 = 0; the unknowns are m and κ

Lagrange multiplier equations: ∂L/∂mᵢ = λ ∂C/∂mᵢ and ∂L/∂κ = 0

Results (valid when κ > 5)

m_est = Σᵢ x⁽ⁱ⁾ / |Σᵢ x⁽ⁱ⁾| and κ_est ≈ (N − 1)/(N − R), where R = |Σᵢ x⁽ⁱ⁾|

the central vector is parallel to the vector that you get by putting all the observed unit vectors end-to-end

Solution Possibilities

Determine m by evaluating the simple formula; then either:
1. determine κ using the simple but approximate formula (only valid when κ > 5), or
2. determine κ using the bootstrap method — our choice
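
A minimal MATLAB sketch of this recipe, assuming X is an N-by-3 array of observed unit vectors (the name X and the bootstrap size are illustrative assumptions):

% mean direction by the simple formula, kappa by bootstrap
N = size(X,1);
S = sum(X,1); R = norm(S);             % vector sum of the data and its length
mest = S/R;                            % central vector: normalized vector sum
kest = (N-1)/(N-R);                    % simple approximate formula (kappa > 5)
Nboot = 1000;                          % number of bootstrap resamples (assumed)
kboot = zeros(Nboot,1);
for b = 1:Nboot
    Xb = X(randi(N,N,1),:);            % resample the data with replacement
    Rb = norm(sum(Xb,1));
    kboot(b) = (N-1)/(N-Rb);           % re-estimate kappa from the resample
end
ks = sort(kboot);
kci = ks(round([0.025 0.975]*Nboot));  % 95% bootstrap confidence interval

The same loop can retain each resample's normalized vector sum to build the cloud of bootstrap central directions plotted on the final slide.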

Application to Subduction Zone Stresses: determine the mean direction of the P-axes of deep (… km) earthquakes in the Kurile-Kamchatka subduction zone

(figure: plot with N and E axes showing the data, the central direction, and the bootstrap results)