Lecture 19 Continuous Problems: Backus-Gilbert Theory and Radon’s Problem.


Syllabus
Lecture 01  Describing Inverse Problems
Lecture 02  Probability and Measurement Error, Part 1
Lecture 03  Probability and Measurement Error, Part 2
Lecture 04  The L2 Norm and Simple Least Squares
Lecture 05  A Priori Information and Weighted Least Squares
Lecture 06  Resolution and Generalized Inverses
Lecture 07  Backus-Gilbert Inverse and the Trade Off of Resolution and Variance
Lecture 08  The Principle of Maximum Likelihood
Lecture 09  Inexact Theories
Lecture 10  Nonuniqueness and Localized Averages
Lecture 11  Vector Spaces and Singular Value Decomposition
Lecture 12  Equality and Inequality Constraints
Lecture 13  L1, L∞ Norm Problems and Linear Programming
Lecture 14  Nonlinear Problems: Grid and Monte Carlo Searches
Lecture 15  Nonlinear Problems: Newton's Method
Lecture 16  Nonlinear Problems: Simulated Annealing and Bootstrap Confidence Intervals
Lecture 17  Factor Analysis
Lecture 18  Varimax Factors, Empirical Orthogonal Functions
Lecture 19  Backus-Gilbert Theory for Continuous Problems; Radon's Problem
Lecture 20  Linear Operators and Their Adjoints
Lecture 21  Fréchet Derivatives
Lecture 22  Exemplary Inverse Problems, incl. Filter Design
Lecture 23  Exemplary Inverse Problems, incl. Earthquake Location
Lecture 24  Exemplary Inverse Problems, incl. Vibrational Problems

Purpose of the Lecture
Extend Backus-Gilbert theory to continuous problems
Discuss the conversion of continuous inverse problems to discrete problems
Solve Radon's Problem, the simplest tomography problem

Part 1 Backus-Gilbert Theory

Continuous Inverse Theory: the data are discrete, but the model parameter is a continuous function.

The N data d_i are a finite set of numbers, while the model m(x) is a function of position, in one or several dimensions, related to the data through a data kernel G_i(x):

d_i = ∫ G_i(x) m(x) dx    (integral over a line, area, or volume)

It is hopeless to try to determine an estimate of the model function at a particular depth, m(z0) = ?. A localized average is the only way to go: the problem is that an integral, such as the data kernel integral, does not depend upon the value of m(z) at a "single point" z0. The averaging kernel R(z0, z) in

<m(z0)> = ∫ R(z0, z) m(z) dz

is the continuous version of the resolution matrix.

Let's retain the idea that the "solution" depends linearly on the data:

m_est(z0) = Σ_i G_i^-g(z0) d_i

where G_i^-g is the continuous version of the generalized inverse. Substituting the data kernel integral for d_i implies a formula for R:

m_est(z0) = Σ_i G_i^-g(z0) ∫ G_i(z) m(z) dz = ∫ [ Σ_i G_i^-g(z0) G_i(z) ] m(z) dz

so R(z0, z) = Σ_i G_i^-g(z0) G_i(z), in direct comparison to the discrete case:

m_est = G^-g d,   d = G m,   m_est = R m,   R = G^-g G
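As a hypothetical numerical illustration of this discrete comparison (the matrix and model below are invented for the example, not taken from the lecture), the generalized inverse and model resolution matrix can be computed with NumPy's pseudoinverse:

```python
import numpy as np

# An underdetermined discrete problem: 3 data, 5 model parameters.
G = np.array([[1.0, 1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 1.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0, 1.0]])

Gmg = np.linalg.pinv(G)   # one choice of generalized inverse G^-g (the pseudoinverse)
R = Gmg @ G               # model resolution matrix R = G^-g G

m_true = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
d = G @ m_true
m_est = Gmg @ d           # m_est = G^-g d

# The estimate is the resolution-matrix-weighted version of the true model:
assert np.allclose(m_est, R @ m_true)
```

The pseudoinverse is just one of many generalized inverses; Backus-Gilbert constructs a different one, chosen to make R as localized as possible.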

Now define the spread of resolution as

spread(z0) = ∫ (z − z0)² R²(z0, z) dz

Find the generalized inverse that minimizes the spread J with the constraint that the resolving kernel has unit area, ∫ R(z0, z) dz = 1.

J has exactly the same form as in the discrete case; only the definition of S is different:

[S]_ij = ∫ (z − z0)² G_i(z) G_j(z) dz

Hence the solution is the same as in the discrete case:

G_k^-g(z0) = Σ_j [S^-1]_kj u_j / ( Σ_i Σ_j u_i [S^-1]_ij u_j ),   where u_i = ∫ G_i(z) dz

Furthermore, just as we did in the discrete case, we can add the size of the covariance; this just changes the definition of S:

[S']_ij = α ∫ (z − z0)² G_i(z) G_j(z) dz + (1 − α) [cov d]_ij,   0 ≤ α ≤ 1

and leads to a trade-off of resolution and variance, as before: α = 1 minimizes the spread of resolution, α = 0 minimizes the size of variance, and intermediate values of α trace out a trade-off curve between the two.
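The recipe above can be sketched numerically by discretizing the integrals on a fine grid. This is a minimal sketch, not code from the lecture; the exponential data kernels G_i(z) = exp(−c_i z) are an assumed example:

```python
import numpy as np

# Fine grid approximating the continuous depth axis z in [0, 1].
z = np.linspace(0.0, 1.0, 201)
dz = z[1] - z[0]

# Assumed data kernels G_i(z) = exp(-c_i z) for a few decay constants c_i.
c = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
G = np.exp(-np.outer(c, z))            # shape (N, len(z))

z0 = 0.3                               # target depth for the localized average
S = (G * (z - z0)**2) @ G.T * dz       # S_ij = ∫ (z-z0)^2 G_i(z) G_j(z) dz
u = G.sum(axis=1) * dz                 # u_i = ∫ G_i(z) dz

Sinv_u = np.linalg.solve(S, u)
g = Sinv_u / (u @ Sinv_u)              # Backus-Gilbert coefficients G_i^-g(z0)

R = g @ G                              # resolving kernel R(z0, z) = Σ_i g_i G_i(z)
print("unit area:", R.sum() * dz)      # the constraint ∫ R dz = 1 holds by construction
```

Adding the covariance term amounts to replacing S by α S + (1 − α) cov d before solving, which is a one-line change in this sketch.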

Part 2 Approximating a Continuous Problem as a Discrete Problem

Approximation using a finite number of known functions: the continuous function m(x) is expanded in known functions f_j(x) with unknown coefficients m_j, the discrete model parameters:

m(x) ≈ Σ_{j=1}^{M} m_j f_j(x)

Possible f_j(x)'s:
voxels (and their lower-dimension equivalents)
polynomials
splines
Fourier (and similar) series
and many others

Does the choice of f_j(x) matter? Yes! The choice implements prior information about the properties of the solution, and the solution will be different depending upon the choice.

Conversion to a discrete problem Gm = d: substituting the expansion into the data kernel integral gives

d_i ≈ Σ_j [ ∫ G_i(x) f_j(x) dx ] m_j,   so   G_ij = ∫ G_i(x) f_j(x) dx

Special case of voxels:

f_j(x) = 1 if x is inside voxel V_j, 0 otherwise

so G_ij = ∫_{V_j} G_i(x) dx, an integral over voxel j, with voxel size controlled by the scale of variation of m(x).

A further approximation when G_i(x) is slowly varying:

G_ij ≈ G_i(x_j) ΔV_j

where x_j is the center of voxel j and ΔV_j its volume. Here the voxel size is controlled by the scale of variation of G_i(x), a more stringent condition than the scale of variation of m(x).
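A short numerical sketch of the voxel approximation (the kernels cos(i·x) and the model x² below are invented for illustration): the discrete G built from voxel-center samples reproduces the continuous data-kernel integral when the voxels are small:

```python
import numpy as np

def kernel(i, x):
    # assumed smooth data kernels G_i(x) = cos(i x) on [0, 1]
    return np.cos(i * x)

M = 100                                   # number of voxels
edges = np.linspace(0.0, 1.0, M + 1)
centers = 0.5 * (edges[:-1] + edges[1:])  # voxel centers x_j
dx = edges[1] - edges[0]                  # voxel size (1-D "volume")

# G_ij ≈ G_i(x_j) * dx  (center-of-voxel approximation)
ivals = np.arange(1, 4)
G = np.array([kernel(i, centers) * dx for i in ivals])

m = centers**2                            # voxel values of an assumed m(x) = x^2
d_discrete = G @ m

# Reference: the continuous integral ∫ G_i(x) m(x) dx on a much finer grid.
fine = np.linspace(0.0, 1.0, 200001)
xc = 0.5 * (fine[:-1] + fine[1:])
dxf = fine[1] - fine[0]
d_exact = np.array([np.sum(kernel(i, xc) * xc**2) * dxf for i in ivals])

err = np.max(np.abs(d_discrete - d_exact))
print(err)   # shrinks as the voxel size dx shrinks
```

Because both G_i(x) and m(x) vary slowly here, 100 voxels already reproduce the integrals closely; a rapidly varying kernel would force a finer voxel grid, illustrating the "more stringent condition" above.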

Part 3 Tomography

Greek Root tomos a cut, cutting, slice, section

"Tomography" as it is used in geophysics: the data are line integrals of the model function,

d_i = ∫_{C_i} m(x(s)) ds

where s is arc length along curve i (the ray path C_i).

You can force this into the standard form d_i = ∫ G_i(x) m(x) dx if you want, by choosing G_i(x) to be a Dirac delta function concentrated on the curve, but the Dirac delta function is not square-integrable, which leads to problems.

Radon's Problem: straight-line rays, with the data d(u, θ) treated as a continuous variable.

(Figure: coordinate system for the Radon transform. A straight line in the (x, y) plane is specified by its perpendicular offset u from the origin and its angle θ; s measures position along the line of integration.)

Radon Transform: m(x, y) → d(u, θ)

d(u, θ) = ∫ m( u cos θ − s sin θ, u sin θ + s cos θ ) ds

(Figure: (A) the model m(x, y); (B) its Radon transform d(u, θ).)

Inverse Problem find m(x,y) given d(u,θ)

Solution via Fourier Transforms. Work with the Fourier transform pair

m̂(k_x) = ∫ m(x) exp(−i k_x x) dx    (x → k_x)
m(x) = (1/2π) ∫ m̂(k_x) exp(+i k_x x) dk_x    (k_x → x)

Now Fourier transform d(u, θ) over u (u → k_u) and change variables (s, u) → (x, y); the Jacobian of this rotation is J = 1, by the way:

d̂(k_u, θ) = ∫∫ m(x, y) exp( −i k_u (x cos θ + y sin θ) ) dx dy = m̂(k_u cos θ, k_u sin θ)

so the Fourier transform of d(u, θ) is the Fourier transform of m(x, y) evaluated on a line at angle θ through the origin of the (k_x, k_y) plane.

(Figure: the projection d(u, θ0) at a single angle θ0, and the corresponding line at angle θ0 in the (k_x, k_y) plane along which the Fourier transform m̂(k_x, k_y) is obtained by Fourier transforming that projection.)

We learned two things:
1. A proof that the solution exists and is unique, based on "well-known" properties of the Fourier transform.
2. A recipe for inverting a Radon transform using Fourier transforms.
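This Fourier-slice recipe can be checked numerically at the simplest angle, θ = 0, where the projection is just an integral over y: the 1-D FFT of that projection must equal the k_y = 0 slice of the 2-D FFT of the image. A minimal check with NumPy (the test image is arbitrary random numbers, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
m = rng.standard_normal((32, 32))   # test image m[x, y], arbitrary values

# Projection at theta = 0: integrate m over y (here, a sum over the y axis).
d = m.sum(axis=1)

# Fourier slice theorem at theta = 0: the FFT of the projection equals
# the k_y = 0 slice of the 2-D FFT of the image.
lhs = np.fft.fft(d)
rhs = np.fft.fft2(m)[:, 0]
print(np.max(np.abs(lhs - rhs)))    # agrees to machine precision
```

For other angles the same identity holds after rotating the image (or interpolating the 2-D spectrum along the rotated line), which is exactly what practical filtered-back-projection codes do.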
