
1 Lecture 3: Inferences using Least-Squares

2 Abstraction. A vector of N random variables, x, with joint probability density p(x), expectation x̄, and covariance C_x. [Figure: contours of p(x) in the (x_1, x_2) plane.] Shown as 2D here, but actually N-dimensional.

3 The multivariate normal distribution p(x) = (2π)^{-N/2} |C_x|^{-1/2} exp{ -½ (x - x̄)^T C_x^{-1} (x - x̄) } has expectation x̄ and covariance C_x, and is normalized to unit area.
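To make the formula concrete, here is a minimal Python sketch that evaluates the density exactly as written on the slide and checks it against SciPy; the mean and covariance values are illustrative choices, not taken from the slides.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Illustrative (hypothetical) expectation and covariance
xbar = np.array([2.0, 1.0])
Cx = np.array([[1.0, 0.5],
               [0.5, 1.0]])

def p(x, xbar, Cx):
    # p(x) = (2π)^{-N/2} |C_x|^{-1/2} exp{-½ (x - x̄)^T C_x^{-1} (x - x̄)}
    N = len(xbar)
    r = x - xbar
    return ((2 * np.pi) ** (-N / 2) / np.sqrt(np.linalg.det(Cx))
            * np.exp(-0.5 * r @ np.linalg.solve(Cx, r)))

x = np.array([2.5, 0.5])
print(p(x, xbar, Cx))                        # the slide's formula
print(multivariate_normal(xbar, Cx).pdf(x))  # SciPy gives the same value
```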

4 examples

5 Example: x̄ = (2, 1)^T, C_x = [1 0; 0 1]. [Figure: p(x, y)]

6 Example: x̄ = (2, 1)^T, C_x = [2 0; 0 1]. [Figure: p(x, y)]

7 Example: x̄ = (2, 1)^T, C_x = [1 0; 0 2]. [Figure: p(x, y)]

8 Example: x̄ = (2, 1)^T, C_x = [1 0.5; 0.5 1]. [Figure: p(x, y)]

9 Example: x̄ = (2, 1)^T, C_x = [1 -0.5; -0.5 1]. [Figure: p(x, y)]

10 Remember this from the last lecture? The marginal distributions: p(x_1) = ∫ p(x_1, x_2) dx_2 is the distribution of x_1 (irrespective of x_2), and p(x_2) = ∫ p(x_1, x_2) dx_1 is the distribution of x_2 (irrespective of x_1). [Figure: joint p(x_1, x_2) with its two marginals.]

11 [Figure: joint p(x, y) and the marginal p(y).] p(y) = ∫ p(x, y) dx

12 [Figure: joint p(x, y) and the marginal p(x).] p(x) = ∫ p(x, y) dy
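A short numerical sketch of the marginalization formulas above, using a hypothetical 2D normal and a simple grid sum in place of the integrals:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical 2D normal (values for illustration only)
xbar = np.array([2.0, 1.0])
Cx = np.array([[1.0, 0.5],
               [0.5, 1.0]])

x = np.linspace(-4, 8, 601)
y = np.linspace(-5, 7, 601)
dx, dy = x[1] - x[0], y[1] - y[0]
X, Y = np.meshgrid(x, y, indexing="ij")
P = multivariate_normal(xbar, Cx).pdf(np.dstack([X, Y]))

p_x = P.sum(axis=1) * dy     # p(x) = ∫ p(x,y) dy
p_y = P.sum(axis=0) * dx     # p(y) = ∫ p(x,y) dx

# Each marginal should itself integrate to ~1
print(p_x.sum() * dx, p_y.sum() * dy)
```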

13 Remember p(x,y) = p(x|y) p(y) = p(y|x) p(x) from the last lecture? We can compute p(x|y) and p(y|x) as follows: p(x|y) = p(x,y) / p(y) and p(y|x) = p(x,y) / p(x).

14 [Figure: the joint p(x,y) and the conditionals p(x|y) and p(y|x).]

15 Any linear function of a normally distributed variable is normally distributed. If p(x) = (2π)^{-N/2} |C_x|^{-1/2} exp{ -½ (x - x̄)^T C_x^{-1} (x - x̄) } and y = Mx, then p(y) = (2π)^{-N/2} |C_y|^{-1/2} exp{ -½ (y - ȳ)^T C_y^{-1} (y - ȳ) } with ȳ = M x̄ and C_y = M C_x M^T. Memorize!
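A quick Monte Carlo sketch of the rule ȳ = M x̄ and C_y = M C_x M^T; the matrix M and the distribution of x are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: x ~ N(xbar, Cx), y = M x
xbar = np.array([2.0, 1.0])
Cx = np.array([[1.0, 0.5],
               [0.5, 1.0]])
M = np.array([[1.0, 2.0],
              [0.0, 3.0]])

samples = rng.multivariate_normal(xbar, Cx, size=200_000)  # rows are draws of x
y = samples @ M.T                                          # y = M x for each draw

print("sample mean of y:", y.mean(axis=0))                 # close to M @ xbar
print("M @ xbar        :", M @ xbar)
print("sample cov of y :\n", np.cov(y, rowvar=False))      # close to M Cx M^T
print("M Cx M^T        :\n", M @ Cx @ M.T)
```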

16 Do you remember this from a previous lecture? If d = Gm, then the standard least-squares solution is m_est = [G^T G]^{-1} G^T d, and the rule for error propagation gives C_m = σ_d^2 [G^T G]^{-1}.

17 Example: all the data are assumed to have the same true value, m_1, and each is measured with the same variance, σ_d^2. The model is d = G m with d = [d_1, d_2, d_3, …, d_N]^T and G = [1, 1, 1, …, 1]^T (a column of ones). Then G^T G = N, so [G^T G]^{-1} = N^{-1}, and G^T d = Σ_i d_i, giving m_est = [G^T G]^{-1} G^T d = (Σ_i d_i) / N and C_m = σ_d^2 / N.

18 m_1^est = (Σ_i d_i) / N … the traditional formula for the mean! The estimated mean has variance C_m = σ_d^2 / N = σ_m^2; note then that σ_m = σ_d / √N. The estimated mean is a normally distributed random variable, and the width of its distribution, σ_m, decreases with the square root of the number of measurements.
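The same calculation in Python, setting G to a column of ones as on the slide; the true value, σ_d, and N are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: N noisy measurements of a single true value m1
m1_true, sigma_d, N = 10.0, 2.0, 1000
d = m1_true + sigma_d * rng.standard_normal(N)

G = np.ones((N, 1))                            # column of ones
m_est = np.linalg.solve(G.T @ G, G.T @ d)      # [G^T G]^{-1} G^T d
C_m = sigma_d**2 * np.linalg.inv(G.T @ G)      # error propagation

print(m_est[0], d.mean())                      # identical: the sample mean
print(np.sqrt(C_m[0, 0]), sigma_d / np.sqrt(N))  # sigma_m = sigma_d / sqrt(N)
```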

19 Accuracy grows only slowly with N. [Figure: distributions of the estimated mean for N = 1, 10, 100, 1000.]

20 Estimating the variance of the data: what σ_d^2 do you use in this formula?

21 Prior estimates of σ_d, based on knowledge of the limits of your measuring technique … my ruler has only mm ticks, so I'm going to assume that σ_d = 0.5 mm; the manufacturer claims that the instrument is accurate to 0.1%, so since my typical measurement is 25, I'll assume σ_d = 0.025.

22 Posterior estimate of the error, based on the error measured with respect to the best fit: σ_d^2 = (1/N) Σ_i (d_i^obs - d_i^pre)^2 = (1/N) Σ_i e_i^2
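As a sketch, this posterior estimate can be computed directly from the residuals of a previous fit; the argument names below are placeholders, not from the slides.

```python
import numpy as np

# Minimal sketch: posterior estimate of sigma_d^2 from the best-fit residuals.
# d_obs, G, and m_est are assumed to come from some earlier least-squares fit.
def posterior_variance(d_obs, G, m_est):
    e = d_obs - G @ m_est          # residuals e_i = d_i^obs - d_i^pre
    return np.mean(e**2)           # sigma_d^2 = (1/N) * sum_i e_i^2
```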

23 Straight-line fit: G m = d with G = [1 x_1; 1 x_2; … ; 1 x_N], m = [a, b]^T, and d = [y_1, y_2, …, y_N]^T. Then m_est = [G^T G]^{-1} G^T d is normally distributed with variance C_m = σ_d^2 [G^T G]^{-1}.
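A minimal straight-line least-squares sketch built on this G; the synthetic data and noise level are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical straight-line data: y = a + b x + noise
a_true, b_true, sigma_d, N = 1.0, 2.0, 0.5, 100
x = np.linspace(0, 10, N)
y = a_true + b_true * x + sigma_d * rng.standard_normal(N)

G = np.column_stack([np.ones(N), x])          # rows [1, x_i]
m_est = np.linalg.solve(G.T @ G, G.T @ y)     # [G^T G]^{-1} G^T d
C_m = sigma_d**2 * np.linalg.inv(G.T @ G)     # covariance of the estimate

print("intercept, slope:", m_est)
print("their std devs  :", np.sqrt(np.diag(C_m)))
```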

24 p(m) = p(a, b) = p(intercept, slope). [Figure: joint distribution over slope and intercept.]

25 How probable is a dataset?

26 N data d are all drawn from the same distribution p(d). The probable-ness of a single measurement d_i is p(d_i), so the probable-ness of the whole dataset is p(d_1) × p(d_2) × … × p(d_N) = ∏_i p(d_i). Then L = ln ∏_i p(d_i) = Σ_i ln p(d_i) is called the "likelihood" of the data.

27 Now imagine that the distribution p(d) is known up to a vector m of unknown parameters. Write p(d; m), with a semicolon as a reminder that it's not a joint probability. Then L is a function of m: L(m) = Σ_i ln p(d_i; m).

28 The Principle of Maximum Likelihood: choose m so that it maximizes L(m); the dataset that was in fact observed is then the most probable one that could have been observed. The best choice of parameters m is the one that makes the dataset likely.
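A small sketch of the principle for the simplest case, a normal distribution with a single unknown parameter m (its mean); the data are synthetic, and scipy.optimize.minimize_scalar stands in for the analytic maximization:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(3)

# Hypothetical example: d_i ~ N(m, sigma_d^2) with m the single unknown parameter
sigma_d = 2.0
d = rng.normal(loc=5.0, scale=sigma_d, size=200)

def neg_log_likelihood(m):
    # -L(m) = -sum_i ln p(d_i; m)
    return -np.sum(norm.logpdf(d, loc=m, scale=sigma_d))

result = minimize_scalar(neg_log_likelihood)
print(result.x, d.mean())   # the ML estimate coincides with the sample mean
```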

29 The multivariate normal distribution for the data, d: p(d) = (2π)^{-N/2} |C_d|^{-1/2} exp{ -½ (d - d̄)^T C_d^{-1} (d - d̄) }. Let's assume that the expectation d̄ is given by a general linear model, d̄ = Gm, and that the covariance C_d is known (prior covariance).

30 Then we have a distribution p(d; m) with unknown parameters, m: p(d; m) = (2π)^{-N/2} |C_d|^{-1/2} exp{ -½ (d - Gm)^T C_d^{-1} (d - Gm) }. We can now apply the principle of maximum likelihood to estimate the unknown parameters m.

31 Find the m that maximizes L(m) = ln p(d; m), with p(d; m) = (2π)^{-N/2} |C_d|^{-1/2} exp{ -½ (d - Gm)^T C_d^{-1} (d - Gm) }.

32 L(m) = ln p(d; m) = -½ N ln(2π) - ½ ln|C_d| - ½ (d - Gm)^T C_d^{-1} (d - Gm). The first two terms do not contain m, so the principle of maximum likelihood becomes: maximize -½ (d - Gm)^T C_d^{-1} (d - Gm), or equivalently minimize (d - Gm)^T C_d^{-1} (d - Gm).

33 Special case of uncorrelated data with equal variance, C_d = σ_d^2 I: minimize σ_d^{-2} (d - Gm)^T (d - Gm) with respect to m, which is the same as minimizing (d - Gm)^T (d - Gm) with respect to m. This is the principle of least squares.

34 But back to the general case … what formula for m does the rule "minimize (d - Gm)^T C_d^{-1} (d - Gm)" imply?

35 Answer (after a lot of algebra): m = [G^T C_d^{-1} G]^{-1} G^T C_d^{-1} d, and then by the usual rules of error propagation, C_m = [G^T C_d^{-1} G]^{-1}.

36 This special case is often called weighted least squares. Note that the total error is E = e^T C_d^{-1} e = Σ_i σ_i^{-2} e_i^2. Each individual error is weighted by the reciprocal of its variance, so errors involving data with SMALL variance get MORE weight.
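A sketch of weighted least squares for the straight-line example that follows, with the first half of the data much noisier than the second half; the synthetic numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical straight-line data: first 50 points noisy, last 50 precise
N = 100
x = np.linspace(0, 10, N)
sigma = np.where(np.arange(N) < N // 2, 100.0, 5.0)        # per-datum sigma_i
d = 1.0 + 2.0 * x + sigma * rng.standard_normal(N)

G = np.column_stack([np.ones(N), x])
Cd_inv = np.diag(1.0 / sigma**2)                           # C_d^{-1}, diagonal

# m = [G^T C_d^{-1} G]^{-1} G^T C_d^{-1} d,  C_m = [G^T C_d^{-1} G]^{-1}
A = G.T @ Cd_inv @ G
m_est = np.linalg.solve(A, G.T @ Cd_inv @ d)
C_m = np.linalg.inv(A)

print("intercept, slope:", m_est)
print("their std devs  :", np.sqrt(np.diag(C_m)))
```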

37 Example: fitting a straight line. 100 data; the first 50 have a different σ_d than the last 50.

38 Equal variance. Left 50: σ_d = 5; right 50: σ_d = 5. [Figure]

39 Left has smaller variance. First 50: σ_d = 5; last 50: σ_d = 100. [Figure]

40 Right has smaller variance. First 50: σ_d = 100; last 50: σ_d = 5. [Figure]

41 What can go wrong in least squares: in m = [G^T G]^{-1} G^T d, the matrix G^T G may be singular, so its inverse does not exist.

42 Example: a straight-line fit, G m = d with G = [1 x_1; 1 x_2; 1 x_3; … ; 1 x_N] and d = [d_1, d_2, d_3, …, d_N]^T. Then G^T G = [N, Σ_i x_i; Σ_i x_i, Σ_i x_i^2] and det(G^T G) = N Σ_i x_i^2 - [Σ_i x_i]^2. [G^T G]^{-1} is singular when the determinant is zero.

43 When is det(G^T G) = N Σ_i x_i^2 - [Σ_i x_i]^2 = 0? Case N = 1, only one measurement (x, d): N Σ_i x_i^2 - [Σ_i x_i]^2 = x^2 - x^2 = 0; you can't fit a straight line to only one point. Case N > 1, all data measured at the same x: N Σ_i x_i^2 - [Σ_i x_i]^2 = N^2 x^2 - N^2 x^2 = 0; measuring the same point over and over doesn't help.
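A quick check of this in Python: det(G^T G) is nonzero when the x values are spread out and vanishes when every measurement is made at the same x.

```python
import numpy as np

# Sketch: det(G^T G) vanishes when every datum is measured at the same x,
# so the straight-line normal equations cannot be solved.
def straight_line_gtg(x):
    G = np.column_stack([np.ones_like(x), x])
    return G.T @ G

x_spread = np.array([1.0, 2.0, 3.0, 4.0])
x_repeat = np.array([2.0, 2.0, 2.0, 2.0])    # same x over and over

print(np.linalg.det(straight_line_gtg(x_spread)))   # nonzero: fit is possible
print(np.linalg.det(straight_line_gtg(x_repeat)))   # zero (up to roundoff): singular
```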

44 This sort of ‘missing measurement’ might be difficult to recognize in a complicated problem but it happens all the time …

45 Example - Tomography

46 In this method, you try to plaster the subject with X-ray beams made at every possible position and direction, but you can easily wind up missing some small region. [Figure annotation: "no data coverage here".]

47 What to do? Introduce prior information: assumptions about the behavior of the unknowns that 'fill in' the data gaps.

48 Examples of prior information. The unknowns:
- are close to some already-known value (the density of the mantle is close to 3000 kg/m^3);
- vary smoothly with time or with geographical position (ocean currents have length scales of tens of km);
- obey some physical law embodied in a PDE (water is incompressible and thus its velocity satisfies div(v) = 0).

49 Are you only fooling yourself? It depends … are your assumptions good ones?

50 Application of the maximum likelihood method to this problem: so, let's have a foray into the world of probability.

51 Overall strategy:
1. Represent the observed data as a probability distribution.
2. Represent prior information as a probability distribution.
3. Represent the relationship between data and model parameters as a probability distribution.
4. Combine the three distributions in a way that embodies combining the information that they contain.
5. Apply maximum likelihood to the combined distribution.

52 How to combine distributions in a way that embodies combining the information that they contain … short answer: multiply them. [Figure: p_1(x) says x is between x_1 and x_3, p_2(x) says x is between x_2 and x_4, and the product p_T(x) says x is between x_2 and x_3.]
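A numerical sketch of "multiply, then renormalize" for two hypothetical Gaussian sources of information about the same quantity x:

```python
import numpy as np
from scipy.stats import norm

# Two hypothetical Gaussian beliefs about the same quantity x
x = np.linspace(-6, 10, 4001)
dx = x[1] - x[0]
p1 = norm.pdf(x, loc=1.0, scale=2.0)     # first source of information
p2 = norm.pdf(x, loc=3.0, scale=1.0)     # second source of information

pT = p1 * p2
pT /= pT.sum() * dx                      # renormalize the product to unit area

# The combined distribution is narrower (more informative) than either input
for p in (p1, p2, pT):
    mean = (x * p).sum() * dx
    std = np.sqrt(((x - mean)**2 * p).sum() * dx)
    print(round(mean, 3), round(std, 3))
```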

53 Overall strategy, step 1: represent the observed data as a normal probability distribution, p_A(d) ∝ exp{ -½ (d - d^obs)^T C_d^{-1} (d - d^obs) } (I don't feel like typing the normalization). In the absence of any other information, the best estimate of the mean of the data is the observed data itself; C_d is the prior covariance of the data.

54 Overall strategy, step 2: represent prior information as a normal probability distribution, p_A(m) ∝ exp{ -½ (m - m_A)^T C_m^{-1} (m - m_A) }. Here m_A is the prior estimate of the model, your best guess as to what it would be in the absence of any observations, and C_m, the prior covariance of the model, quantifies how good you think your prior estimate is …

55 Example: one observation, d^obs = 0.8 ± 0.4, and one model parameter with m_A = 1.0 ± 1.25.

56 [Figure: p_A(d) and p_A(m) on the (m, d) plane, centered at m_A = 1 and d^obs = 0.8.]

57 Overall strategy, step 3: represent the relationship between data and model parameters as a probability distribution, p_T(d, m) ∝ exp{ -½ (d - Gm)^T C_G^{-1} (d - Gm) }. Here Gm = d is the linear theory relating the data, d, to the model parameters, m, and C_G, the prior covariance of the theory, quantifies how good you think your linear theory is.

58 Example theory: d = m, but only accurate to ±0.2.

59 [Figure: p_T(d, m) on the same (m, d) plane, concentrated along the line d = m.]

60 Overall strategy, step 4: combine the three distributions in a way that embodies combining the information that they contain: p(m, d) = p_A(d) p_A(m) p_T(m, d) ∝ exp{ -½ [ (d - d^obs)^T C_d^{-1} (d - d^obs) + (m - m_A)^T C_m^{-1} (m - m_A) + (d - Gm)^T C_G^{-1} (d - Gm) ] }. A bit of a mess, but it can be simplified …

61 [Figure: the combined distribution p(d, m) = p_A(d) p_A(m) p_T(d, m) on the same (m, d) plane.]

62 Overall strategy, step 5: apply maximum likelihood to the combined distribution, p(d, m) = p_A(d) p_A(m) p_T(m, d).

63 [Figure: the maximum likelihood point of p(d, m), marking the estimate m_est and the predicted data d_pre.]

64 Special case of an exact theory: the covariance C_G is very small, in the limit C_G → 0. After projecting p(d, m) to p(m) by integrating over all d, p(m) ∝ exp{ -½ [ (Gm - d^obs)^T C_d^{-1} (Gm - d^obs) + (m - m_A)^T C_m^{-1} (m - m_A) ] }.

65 Maximizing p(m) is equivalent to minimizing (Gm - d^obs)^T C_d^{-1} (Gm - d^obs) + (m - m_A)^T C_m^{-1} (m - m_A), that is, the weighted "prediction error" plus the weighted "distance of the model from its prior value".

66 The solution, calculated via the usual messy minimization process: m_est = m_A + M [ d^obs - G m_A ], where M = [G^T C_d^{-1} G + C_m^{-1}]^{-1} G^T C_d^{-1}. Don't memorize, but be prepared to use.

67 Interesting interpretation: m_est - m_A = M [ d^obs - G m_A ], i.e., the estimated model minus its prior value equals M times the observed data minus the prediction of the prior model. The linear connection between the two is a generalized form of least squares.

68 Special uncorrelated case: C_m = σ_m^2 I and C_d = σ_d^2 I. Then M = [G^T C_d^{-1} G + C_m^{-1}]^{-1} G^T C_d^{-1} = [G^T G + (σ_d/σ_m)^2 I]^{-1} G^T. This formula is sometimes called "damped least squares", with "damping factor" ε = σ_d/σ_m.
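A sketch of damped least squares (with prior model m_A = 0 for simplicity) applied to the degenerate straight-line case from slide 43, where every datum is measured at the same x; the numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical degenerate data: every measurement made at the same x,
# so plain least squares fails, but adding (sigma_d/sigma_m)^2 I
# regularizes G^T G.
N, sigma_d, sigma_m = 10, 1.0, 10.0
x = np.full(N, 2.0)                                  # all data at the same x
d = 1.0 + 2.0 * x + sigma_d * rng.standard_normal(N)

G = np.column_stack([np.ones(N), x])
eps2 = (sigma_d / sigma_m)**2                        # damping factor squared

# m = [G^T G + eps^2 I]^{-1} G^T d   (prior model m_A = 0)
m_damped = np.linalg.solve(G.T @ G + eps2 * np.eye(2), G.T @ d)
print(m_damped)   # finite answer; the data constrain only a + 2b, damping supplies the rest
```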

69 Damped least squares makes the process of avoiding the singular matrices associated with insufficient data trivially easy: you just add ε^2 I to G^T G before computing the inverse.

70 G^T G → G^T G + ε^2 I. This process regularizes the matrix, so its inverse always exists. Its interpretation is: in the absence of relevant data, assume the model parameter has its prior value.

71 Are you only fooling yourself? It depends … is the assumption that you know the prior value a good one?

