Lecture 1 Describing Inverse Problems

Presentation transcript:

Lecture 1 Describing Inverse Problems

Syllabus
Lecture 01 Describing Inverse Problems
Lecture 02 Probability and Measurement Error, Part 1
Lecture 03 Probability and Measurement Error, Part 2
Lecture 04 The L2 Norm and Simple Least Squares
Lecture 05 A Priori Information and Weighted Least Squares
Lecture 06 Resolution and Generalized Inverses
Lecture 07 Backus-Gilbert Inverse and the Trade Off of Resolution and Variance
Lecture 08 The Principle of Maximum Likelihood
Lecture 09 Inexact Theories
Lecture 10 Nonuniqueness and Localized Averages
Lecture 11 Vector Spaces and Singular Value Decomposition
Lecture 12 Equality and Inequality Constraints
Lecture 13 L1, L∞ Norm Problems and Linear Programming
Lecture 14 Nonlinear Problems: Grid and Monte Carlo Searches
Lecture 15 Nonlinear Problems: Newton's Method
Lecture 16 Nonlinear Problems: Simulated Annealing and Bootstrap Confidence Intervals
Lecture 17 Factor Analysis
Lecture 18 Varimax Factors, Empirical Orthogonal Functions
Lecture 19 Backus-Gilbert Theory for Continuous Problems; Radon's Problem
Lecture 20 Linear Operators and Their Adjoints
Lecture 21 Fréchet Derivatives
Lecture 22 Exemplary Inverse Problems, incl. Filter Design
Lecture 23 Exemplary Inverse Problems, incl. Earthquake Location
Lecture 24 Exemplary Inverse Problems, incl. Vibrational Problems

Purpose of the Lecture:
- distinguish forward and inverse problems
- categorize inverse problems
- examine a few examples
- enumerate different kinds of solutions to inverse problems

Part 1: Lingo for discussing the relationship between observations and the things that we want to learn from them

three important definitions

Things that are measured in an experiment or observed in nature: the data, d = [d_1, d_2, …, d_N]^T. Things you want to know about the world: the model parameters, m = [m_1, m_2, …, m_M]^T. The relationship between data and model parameters: the quantitative model (or theory).

For example: gravitational accelerations (data, d) are related to density (model parameters, m) by Newton's law of gravity (the quantitative model); travel times of seismic waves (data) are related to seismic velocity (model parameters) by the seismic wave equation (the quantitative model).

[Figure: Forward theory takes estimates of the model parameters, m^est, through the quantitative model to predictions of the data, d^pre. Inverse theory takes observations, d^obs, through the quantitative model to estimates of the model parameters, m^est.]

[Figure: the true model parameters, m^true, predict data d^pre that differ from the observed data d^obs due to observational error.]

[Figure: because d^pre ≠ d^obs due to observational error, the estimate m^est ≠ m^true due to error propagation.]

Understanding the effects of observational error is central to Inverse Theory

Part 2: types of quantitative models (or theories)

A. Implicit Theory: the L relationships between the data and the model are known, f(d, m) = 0

Example: mass = density × length × width × height, i.e. M = ρ × L × W × H. [Figure: a block of density ρ with dimensions L, W, H.]

mass = density × volume. We measure the mass, d_1, and the size, d_2, d_3, d_4, and want to know the density, m_1. Then d_1 = m_1 d_2 d_3 d_4, or d_1 − m_1 d_2 d_3 d_4 = 0, with d = [d_1, d_2, d_3, d_4]^T and N = 4, m = [m_1]^T and M = 1, so f_1(d, m) = 0 and L = 1.
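
This tiny implicit problem can be solved by simple rearrangement. A minimal MatLab sketch, with hypothetical measurement values:

    d = [270; 1.0; 0.5; 0.2];      % hypothetical [mass (kg); length; width; height (m)]
    % the implicit relation f1(d,m) = d(1) - m(1)*d(2)*d(3)*d(4) = 0
    % rearranges to give the single model parameter, the density:
    m1 = d(1) / (d(2)*d(3)*d(4));  % 2700 kg/m^3 here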

Note: there is no guarantee that f(d, m) = 0 contains enough information for a unique estimate of m; determining whether or not it does is part of the inverse problem.

B. Explicit Theory: the equations can be arranged so that d is a function of m, d = g(m) or d − g(m) = 0, with L = N (one equation per datum).

Example: for a rectangle, circumference = 2 × length + 2 × height, and area = length × height. [Figure: a rectangle with sides L and H.]

C = 2L + 2H and A = LH. We measure the circumference C = d_1 and the area A = d_2, and want to know L = m_1 and H = m_2, with d = [d_1, d_2]^T and N = 2, m = [m_1, m_2]^T and M = 2. Then d_1 = 2m_1 + 2m_2 and d_2 = m_1 m_2, that is, d = g(m).
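
Here g(m) is nonlinear (because of the product m_1 m_2), but this small case can still be inverted in closed form: since m_1 + m_2 = d_1/2 and m_1 m_2 = d_2, the model parameters are the roots of x^2 − (d_1/2)x + d_2 = 0. A sketch with hypothetical measurements:

    d = [14; 12];                  % hypothetical [circumference; area]
    r = roots([1, -d(1)/2, d(2)]); % L and H are the roots of x^2 - (C/2)x + A = 0
    m = sort(r, 'descend');        % m = [L; H] = [4; 3] here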

C. Linear Explicit Theory: the function g(m) is a matrix G times m, d = Gm, where G has N rows and M columns. G is called the "data kernel".

Example: total mass = density of gold × volume of gold + density of quartz × volume of quartz, M = ρ_g V_g + ρ_q V_q; total volume = volume of gold + volume of quartz, V = V_g + V_q.

M = ρ_g V_g + ρ_q V_q and V = V_g + V_q. We measure the total volume V = d_1 and the total mass M = d_2, and want to know V_g = m_1 and V_q = m_2, assuming the densities ρ_g and ρ_q are known, with d = [d_1, d_2]^T and N = 2, m = [m_1, m_2]^T and M = 2. Then d = Gm, with

    G = [ 1    1
          ρ_g  ρ_q ]
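
Since the theory is linear, estimating m is a 2 × 2 linear solve. A sketch with hypothetical data (the densities are approximate handbook values):

    rho_g = 19300; rho_q = 2650;   % assumed known densities of gold and quartz (kg/m^3)
    G = [1, 1; rho_g, rho_q];      % data kernel
    d = [1.0e-3; 5.0];             % hypothetical [total volume (m^3); total mass (kg)]
    m = G \ d;                     % m = [V_g; V_q]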

D. Linear Implicit Theory: the L relationships between the data and the model parameters are linear, F [d^T, m^T]^T = 0, where the matrix F has L rows and N + M columns.

in all these examples m is discrete (discrete inverse theory); one could have a continuous m(x) instead (continuous inverse theory)

in this course we will usually approximate a continuous m(x) as a discrete vector m = [m(Δx), m(2Δx), m(3Δx), …, m(MΔx)]^T, but we will spend some time later in the course dealing with the continuous problem directly
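
A sketch of this discretization, using a hypothetical continuous model m(x) = sin(x):

    M = 100; Dx = 0.1;
    x = Dx * (1:M)';               % sample points Dx, 2*Dx, ..., M*Dx
    m = sin(x);                    % discrete approximation of the continuous m(x)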

Part 3: Some Examples

A. Fitting a straight line to data: T = a + bt. [Figure: temperature anomaly T_i (deg C) versus time t (calendar years).]

each data point is predicted by a straight line

matrix formulation d = Gm with M = 2: d = [T_1, T_2, …, T_N]^T, m = [a, b]^T, and row i of G is [1, t_i]
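
A sketch of the whole straight-line fit in MatLab (a preview of the least-squares machinery of Lecture 4; the data here are synthetic):

    N = 50;
    t = (1:N)';                        % observation times
    d = 1.5 + 0.3*t + 0.5*randn(N,1);  % synthetic data with true a = 1.5, b = 0.3
    G = [ones(N,1), t];                % data kernel for T = a + b*t
    m_est = G \ d;                     % least-squares estimate of [a; b]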

B. Fitting a parabola: T = a + bt + ct^2

each data point is predicted by a quadratic curve

matrix formulation d = Gm with M = 3: m = [a, b, c]^T and row i of G is [1, t_i, t_i^2]

note the similarity between the straight-line and parabola cases: the parabola's data kernel is the straight line's G with one extra column, t_i^2

in MatLab: G = [ones(N,1), t, t.^2]; (with t the N × 1 column vector of observation times)
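
As with the straight line, the least-squares coefficients then follow from one more (preview) line, m_est = G\d;, giving m_est = [a; b; c].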

C. Acoustic Tomography: travel time = length × slowness. [Figure: a 4 × 4 grid of blocks, each h × h, with sources S along one side and receivers R along the other.]

collect data along rows and columns

matrix formulation d = Gm with N = 8 (travel times along the 4 rows and 4 columns) and M = 16 (one slowness per block); row i of G is nonzero only in the entries corresponding to the blocks that ray i crosses

In MatLab:

    G = zeros(N,M);
    for i = [1:4]
        for j = [1:4]
            % measurements over rows
            k = (i-1)*4 + j;
            G(i,k) = 1;
            % measurements over columns
            k = (j-1)*4 + i;
            G(i+4,k) = 1;
        end
    end
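
Because N = 8 < M = 16, this system is underdetermined and a plain backslash solve is not appropriate. One hedged sketch of a remedy, using the G built above and the pseudoinverse (generalized inverses are the subject of Lecture 6); the data here are synthetic:

    m_true = rand(M, 1);           % hypothetical true block slownesses
    d = G * m_true;                % synthetic travel-time data
    m_est = pinv(G) * d;           % minimum-length solution of the underdetermined system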

D. X-ray Imaging. [Figure: (A) an x-ray source S and receivers R1–R5 arranged around the patient; (B) an x-ray image showing an enlarged lymph node.]

theory: the intensity of the x-rays, I (the data), decays with distance s along the beam according to the absorption coefficient c (the model parameters): dI/ds = −cI

Taylor series approximation: for weak absorption I ≈ I_0 (1 − ∫ c ds), so the data depend approximately linearly on c. Discrete pixel approximation: the integral along beam i becomes a sum over pixels, ∫ c ds ≈ Σ_j G_ij m_j, where G_ij is the length of beam i in pixel j. Hence d = Gm.

matrix formulation: d = Gm with M ≈ 10^6 and N ≈ 10^6

note that G is huge (10^6 × 10^6) but sparse (mostly zero), since any one beam passes through only a tiny fraction of the total number of pixels

in MatLab: G = spalloc(N, M, MAXNONZEROELEMENTS);
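
A hedged sketch of how such a sparse kernel might be built and used, scaled down so it actually runs; the beam-pixel geometry here is randomized stand-in data, not a real ray tracer:

    N = 1000; M = 1000;            % scaled down from ~10^6 for illustration
    G = spalloc(N, M, 20*N);       % room for ~20 nonzero pixels per beam
    for i = 1:N
        j = randi(M, 1, 20);       % hypothetical pixels crossed by beam i
        G(i, j) = 0.1;             % hypothetical intersection lengths
    end
    m_true = rand(M, 1);
    d = G * m_true;                % synthetic data
    m_est = lsqr(G, d, 1e-6, 200); % iterative solver that exploits sparsity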

E. Spectral Curve Fitting

[Figure: a single spectral peak in p(z), characterized by its area A, position f, and width c.]

the spectrum consists of q such peaks, each a "Lorentzian"; the data are a nonlinear function of the peak parameters, d = g(m), with m = [A_1, f_1, c_1, …, A_q, f_q, c_q]^T
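
A sketch of the forward function g(m), assuming the common Lorentzian form A c^2 / ((z − f)^2 + c^2) (the exact normalization used on the slide may differ):

    function p = lorentzian_peaks(z, A, f, c)
    % evaluate a superposition of q Lorentzian peaks at the points z
    % A, f, c: q x 1 vectors of peak amplitudes, positions, and widths
    p = zeros(size(z));
    for j = 1:length(A)
        p = p + A(j) * c(j)^2 ./ ((z - f(j)).^2 + c(j)^2);
    end
    end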

F. Factor Analysis. [Figure: ocean sediment samples s_1 and s_2, each characterized by the abundances of elements e_1 through e_5.]

d = g(m)

Part 4: What kind of solution are we looking for?

A: Estimates of model parameters, meaning numerical values: m_1 = 10.5, m_2 = 7.2

But we really need confidence limits, too: m_1 = 10.5 ± 0.2, m_2 = 7.2 ± 0.1 and m_1 = 10.5 ± 22.3, m_2 = 7.2 ± 9.1 have completely different implications!

B: probability density functions. If p(m_1) is simple, this is not so different from confidence intervals.

[Figure: three example probability density functions p(m).] Case 1: m is about 5, plus or minus 1.5. Case 2: m is either about 3 (plus or minus 1) or about 8 (plus or minus 1), but the latter is less likely. Case 3: we don't really know anything useful about m.

C: localized averages, e.g. A = 0.2m_9 + 0.6m_10 + 0.2m_11; A might be better determined than m_9, m_10, or m_11 individually

Is this useful? Do we care about A = 0.2m_9 + 0.6m_10 + 0.2m_11? Maybe …

Suppose m is a discrete approximation of m(x). [Figure: m(x) plotted against x, with the samples m_9, m_10, m_11 marked near x_10.]

Then A = 0.2m_9 + 0.6m_10 + 0.2m_11 is a weighted average of m(x) in the vicinity of x_10.

[Figure: the weights of the weighted average plotted against x, concentrated near x_10.] The average is "localized" in the vicinity of x_10.

Localized averages mean that we can't determine m(x) at x_10 itself, but we can determine the average value of m(x) near x_10. Such a localized average might very well be useful.
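
A sketch: the localized average is just an inner product of a fixed weight vector with the model vector (the model values here are hypothetical):

    M = 20;
    m_est = rand(M, 1);            % hypothetical estimated model vector
    a = zeros(1, M);
    a(9:11) = [0.2, 0.6, 0.2];     % averaging weights centered on m_10
    A = a * m_est;                 % localized average of m(x) near x_10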