
1 Lecture 1 Describing Inverse Problems

2 Syllabus
Lecture 01  Describing Inverse Problems
Lecture 02  Probability and Measurement Error, Part 1
Lecture 03  Probability and Measurement Error, Part 2
Lecture 04  The L2 Norm and Simple Least Squares
Lecture 05  A Priori Information and Weighted Least Squares
Lecture 06  Resolution and Generalized Inverses
Lecture 07  Backus-Gilbert Inverse and the Trade-off of Resolution and Variance
Lecture 08  The Principle of Maximum Likelihood
Lecture 09  Inexact Theories
Lecture 10  Nonuniqueness and Localized Averages
Lecture 11  Vector Spaces and Singular Value Decomposition
Lecture 12  Equality and Inequality Constraints
Lecture 13  L1, L∞ Norm Problems and Linear Programming
Lecture 14  Nonlinear Problems: Grid and Monte Carlo Searches
Lecture 15  Nonlinear Problems: Newton’s Method
Lecture 16  Nonlinear Problems: Simulated Annealing and Bootstrap Confidence Intervals
Lecture 17  Factor Analysis
Lecture 18  Varimax Factors, Empirical Orthogonal Functions
Lecture 19  Backus-Gilbert Theory for Continuous Problems; Radon’s Problem
Lecture 20  Linear Operators and Their Adjoints
Lecture 21  Fréchet Derivatives
Lecture 22  Exemplary Inverse Problems, incl. Filter Design
Lecture 23  Exemplary Inverse Problems, incl. Earthquake Location
Lecture 24  Exemplary Inverse Problems, incl. Vibrational Problems

3 Purpose of the Lecture
distinguish forward and inverse problems
categorize inverse problems
examine a few examples
enumerate different kinds of solutions to inverse problems

4 Part 1 Lingo for discussing the relationship between observations and the things that we want to learn from them

5 three important definitions

6 things that are measured in an experiment or observed in nature: data, d = [d1, d2, …, dN]^T
things you want to know about the world: model parameters, m = [m1, m2, …, mM]^T
relationship between data and model parameters: quantitative model (or theory)

7 data, d = [d1, d2, …, dN]^T: e.g. gravitational accelerations, travel times of seismic waves
model parameters, m = [m1, m2, …, mM]^T: e.g. density, seismic velocity
quantitative model (or theory): e.g. Newton’s law of gravity, the seismic wave equation

8 Forward Theory: the quantitative model maps estimates of the model parameters, m est, into predictions of the data, d pre
Inverse Theory: the quantitative model maps observations of the data, d obs, into estimates of the model parameters, m est

9 Even for the true model parameters m true, the predicted data d pre differ from the observed data d obs, due to observational error

10 d pre ≠ d obs due to observational error, and consequently m est ≠ m true due to error propagation

11 Understanding the effects of observational error is central to Inverse Theory

12 Part 2 types of quantitative models (or theories)

13 A. Implicit Theory: L relationships between the data and the model parameters are known, f(d, m) = 0

14 Example: a block of density ρ, length L, width W, and height H
mass = density ⨉ length ⨉ width ⨉ height
M = ρ ⨉ L ⨉ W ⨉ H

15 mass = density ⨉ volume
measure: mass, d1, and size, d2, d3, d4
want to know: density, m1
d1 = m1 d2 d3 d4, or d1 − m1 d2 d3 d4 = 0
d = [d1, d2, d3, d4]^T and N = 4
m = [m1]^T and M = 1
f1(d, m) = 0 and L = 1

16 note: there is no guarantee that f(d, m) = 0 contains enough information for a unique estimate of m; determining whether or not there is enough is part of the inverse problem

17 B. Explicit Theory: the equations can be arranged so that d is a function of m
d = g(m), or d − g(m) = 0
L = N, one equation per datum

18 Example: a rectangle of length L and height H
circumference = 2 ⨉ length + 2 ⨉ height
area = length ⨉ height

19 circumference = 2 ⨉ length + 2 ⨉ height: C = 2L + 2H
area = length ⨉ height: A = LH
measure: C = d1, A = d2
want to know: L = m1, H = m2
d = [d1, d2]^T and N = 2
m = [m1, m2]^T and M = 2
d1 = 2m1 + 2m2 and d2 = m1 m2, so d = g(m)
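This small explicit theory can also be inverted in closed form: since m1 + m2 = d1/2 and m1 m2 = d2, the length and height are the two roots of x^2 − (d1/2)x + d2 = 0. A minimal MatLab sketch of this, with hypothetical measured values:

d1 = 10.0;    % measured circumference C (hypothetical value)
d2 = 6.0;     % measured area A (hypothetical value)
% L and H satisfy L + H = d1/2 and L*H = d2, so they are the roots of
% x^2 - (d1/2)*x + d2 = 0
mest = roots([1, -d1/2, d2]);   % the two roots are the length and height (3 and 2 here)
disp(mest');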

20 C. Linear Explicit Theory: the function g(m) is a matrix G times m
d = Gm
G has N rows and M columns

21 C. Linear Explicit Theory: the function g(m) is a matrix G times m
d = Gm
G has N rows and M columns and is called the “data kernel”

22 Example: a mixture of gold and quartz
total mass = density of gold ⨉ volume of gold + density of quartz ⨉ volume of quartz: M = ρg ⨉ Vg + ρq ⨉ Vq
total volume = volume of gold + volume of quartz: V = Vg + Vq

23 M = ρg ⨉ Vg + ρq ⨉ Vq and V = Vg + Vq
measure: V = d1, M = d2
want to know: Vg = m1, Vq = m2
assume ρg and ρq are known
d = [d1, d2]^T and N = 2
m = [m1, m2]^T and M = 2
d = Gm with G = [1, 1; ρg, ρq]
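Because N = M = 2 here and G is square (and nonsingular as long as ρg ≠ ρq), the model parameters can be recovered exactly. A minimal MatLab sketch with hypothetical measured values:

rho_g = 19.3;    % density of gold (g/cm^3)
rho_q = 2.65;    % density of quartz (g/cm^3)
d = [2.0; 15.0];              % d1 = total volume (cm^3), d2 = total mass (g) -- hypothetical values
G = [1, 1; rho_g, rho_q];     % data kernel
mest = G \ d;                 % mest(1) = volume of gold, mest(2) = volume of quartz
disp(mest');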

24 D. Linear Implicit Theory: the L relationships between the data and the model parameters are linear
the L equations can be written F x = 0, where x = [d; m] concatenates the data and model vectors and the matrix F has L rows and N + M columns

25 in all these examples m is discrete (discrete inverse theory); one could have a continuous m(x) instead (continuous inverse theory)

26 in this course we will usually approximate a continuous m(x) as a discrete vector m = [m(Δx), m(2Δx), m(3Δx), …, m(MΔx)]^T, but we will spend some time later in the course dealing with the continuous problem directly
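A minimal MatLab sketch of this discretization; the continuous function mfun and the values of M and Δx are hypothetical placeholders:

M  = 100;              % number of samples (placeholder)
Dx = 0.1;              % sample spacing, Delta x (placeholder)
x  = Dx * (1:M)';      % sample positions Dx, 2*Dx, ..., M*Dx
mfun = @(x) sin(x);    % hypothetical continuous model m(x)
m = mfun(x);           % discrete vector m = [m(Dx), m(2*Dx), ..., m(M*Dx)]'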

27 Part 3 Some Examples

28 A. Fitting a straight line to data: T = a + bt
(plot of temperature anomaly, Ti, in deg C, versus time, t, in calendar years)

29 each data point is predicted by a straight line

30 matrix formulation: each datum satisfies Ti = a + b ti, so d = Gm with
G = [1, t1; 1, t2; …; 1, tN], m = [a, b]^T, and M = 2
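A minimal MatLab sketch of building this data kernel, assuming t and d are already loaded as N-by-1 column vectors of the times and temperature anomalies; the least-squares solve in the last two lines anticipates Lecture 04:

% t : N-by-1 column vector of times; d : N-by-1 column vector of observed anomalies (assumed loaded)
N = length(t);
G = [ones(N,1), t];        % data kernel for the straight line T = a + b*t
mest = (G'*G) \ (G'*d);    % least-squares estimate of m = [a; b] (method covered in Lecture 04)
dpre = G * mest;           % predicted data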

31 B. Fitting a parabola: T = a + bt + ct^2

32 each data point is predicted by a quadratic curve

33 matrix formulation: each datum satisfies Ti = a + b ti + c ti^2, so d = Gm with
G = [1, t1, t1^2; 1, t2, t2^2; …; 1, tN, tN^2], m = [a, b, c]^T, and M = 3

34 note the similarity between the straight-line and parabola data kernels

35 in MatLab G=[ones(N,1), t, t.^2];

36 C. Acoustic Tomography: a 4 ⨉ 4 grid of 16 pixels (numbered 1–16), each h ⨉ h in size, with sources S and receivers R along its edges
travel time = length ⨉ slowness

37 collect data along rows and columns

38 matrix formulation: d = Gm with N = 8 travel-time measurements and M = 16 pixel slownesses; row i of G contains the length h in the columns of the pixels crossed by ray i and zeros elsewhere

39 In MatLab
N = 8;  M = 16;          % 8 travel-time measurements, 16 pixels
G = zeros(N,M);
for i = [1:4]
    for j = [1:4]
        % measurements along rows
        k = (i-1)*4 + j;
        G(i,k) = 1;
        % measurements along columns
        k = (j-1)*4 + i;
        G(i+4,k) = 1;
    end
end
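A short usage note on the loop above: it sets the nonzero entries of G to 1 rather than to the pixel size h, effectively taking h = 1 (or absorbing h into the slowness units). A quick sketch of checking the structure of G and predicting data from a trial model, using the G, N, and M built above:

% each ray crosses exactly 4 pixels, so every row of G should sum to 4
disp(sum(G,2)');

% predicted travel times for a hypothetical uniform-slowness model
mtrial = ones(M,1);
dpre = G * mtrial;       % expect every predicted travel time to equal 4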

40 D. X-ray Imaging
(A) x-ray source S and receivers R1–R5 arranged around the body; (B) image revealing an enlarged lymph node

41 theory: the x-ray intensity decays exponentially along the ray path, I(s) = I0 exp(−∫ c(s′) ds′), where
I = intensity of x-rays (data)
s = distance along the beam
c = absorption coefficient (model parameters)

42 Taylor series approximation: for weak absorption, exp(−∫ c ds) ≈ 1 − ∫ c ds, so the fractional intensity loss (I0 − I)/I0 ≈ ∫ c(s) ds

43 Taylor series approximation plus discrete pixel approximation: the path integral becomes a sum over pixels, ∫ c(s) ds ≈ Σj Δsij cj

44 with di the fractional intensity loss along beam i and Gij = Δsij, the length of beam i in pixel j, the theory is linear: d = Gm

45 matrix formulation: d = Gm with M ≈ 10^6 and N ≈ 10^6

46 note that G is huge, 10^6 ⨉ 10^6, but it is sparse (mostly zero), since a beam passes through only a tiny fraction of the total number of pixels

47 in MatLab G = spalloc( N, M, MAXNONZEROELEMENTS);
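A short usage sketch of filling such a sparse matrix: spalloc only preallocates storage for the expected number of nonzero elements; entries are then assigned as usual, and only the nonzeros are stored. The sizes, indices, and beam lengths below are hypothetical placeholders (the real problem has N and M near 10^6):

N = 8;  M = 16;                   % small placeholder sizes
maxnonzero = 4 * N;               % assume each beam crosses at most ~4 pixels here
G = spalloc(N, M, maxnonzero);    % preallocated sparse N-by-M matrix of zeros
% hypothetical fill for beam i: pixidx lists the pixels the beam crosses,
% pixlen the length of the beam within each of those pixels
i = 1;
pixidx = [1, 2, 3];
pixlen = [0.5, 1.0, 0.5];
G(i, pixidx) = pixlen;            % only the nonzero elements are stored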

48 E. Spectral Curve Fitting

49 a single spectral peak p(z), characterized by its area A, position f, and width c

50 the observed spectrum is the sum of q “Lorentzian” peaks, each with its own area, position, and width; the theory d = g(m) is explicit but nonlinear
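A minimal MatLab sketch of evaluating such a model, assuming the common Lorentzian parameterization p(z) = Σ Aj cj^2 / ((z − fj)^2 + cj^2); the exact normalization used in the lecture may differ, and the peak parameters below are hypothetical:

m = [1.0, 2.0, 0.3, 0.5, 4.0, 0.5];   % q = 2 peaks: [A1 f1 c1 A2 f2 c2]
z = linspace(0, 6, 101)';             % positions at which the spectrum is sampled
dpre = zeros(size(z));
for j = 1:3:length(m)                 % step through (area, position, width) triplets
    A = m(j);  f = m(j+1);  c = m(j+2);
    dpre = dpre + A * c^2 ./ ((z - f).^2 + c^2);   % assumed Lorentzian peak shape
end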

51 F. Factor Analysis: ocean sediment containing the chemical elements e1–e5, derived from two sources, s1 and s2

52 the resulting theory again has the form d = g(m)

53 Part 4 What kind of solution are we looking for?

54 A: Estimates of model parameters, meaning numerical values: m1 = 10.5, m2 = 7.2

55 But we really need confidence limits, too:
m1 = 10.5 ± 0.2, m2 = 7.2 ± 0.1
or
m1 = 10.5 ± 22.3, m2 = 7.2 ± 9.1
completely different implications!

56 B: probability density functions; if p(m1) is simple, this is not so different from confidence intervals

57 three possible cases:
m is about 5, plus or minus 1.5
m is either about 3 plus or minus 1 or about 8 plus or minus 1, but the latter is less likely
we don’t really know anything useful about m

58 C: localized averages, e.g. A = 0.2m9 + 0.6m10 + 0.2m11, might be better determined than m9, m10, or m11 individually

59 Is this useful? Do we care about A = 0.2m9 + 0.6m10 + 0.2m11? Maybe …

60 Suppose m is a discrete approximation of m(x) (plot of m(x) versus x, with the samples m9, m10, m11 marked)

61 then A = 0.2m9 + 0.6m10 + 0.2m11 is a weighted average of m(x) in the vicinity of x10

62 the average is “localized” in the vicinity of x10; the coefficients 0.2, 0.6, 0.2 are the weights of the weighted average

63 Localized averages mean that we can’t determine m(x) at x10 itself, but we can determine the average value of m(x) near x10. Such a localized average might very well be useful
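A minimal MatLab sketch of forming such a localized average as an inner product a'*m, where the averaging vector a carries the weights; the model vector here is a hypothetical stand-in:

M = 20;                              % number of model parameters (placeholder)
m = randn(M,1);                      % hypothetical estimated model vector
a = zeros(M,1);                      % averaging vector
a([9, 10, 11]) = [0.2, 0.6, 0.2];    % weights centered on m10; they sum to 1
A = a' * m;                          % localized average A = 0.2*m9 + 0.6*m10 + 0.2*m11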

