
1 Data Modeling and Least Squares Fitting COS 323

2 Data Modeling
Given: data points and a functional form; find the constants in the function.
Example: given points (x_i, y_i), find the line through them; i.e., find a and b in y = ax + b.
[Figure: seven data points (x_1, y_1) … (x_7, y_7) with a fitted line y = ax + b]

3 Data Modeling
You might do this because you actually care about those numbers…
– Example: measure the position of a falling object over time, fit a parabola p = −(1/2) g t², and estimate g from the fit.
[Figure: position vs. time data with a fitted parabola]

4 Data Modeling
… or because some aspect of the behavior is unknown and you want to ignore it.
– Example: measuring the relative resonant frequency of two ions while wanting to ignore magnetic field drift.

5 Least Squares
Nearly universal formulation of fitting: minimize the squares of the differences between the data and the function.
– Example: for fitting a line, minimize the sum of squared residuals with respect to a and b (see the reconstruction below).
– Most general solution technique: take derivatives with respect to the unknown variables and set them equal to zero.
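The slide's equations were images and did not survive transcription; a standard reconstruction of the line-fit objective and its derivative conditions:

    \chi^2(a, b) = \sum_i \bigl( y_i - (a x_i + b) \bigr)^2

    \frac{\partial \chi^2}{\partial a} = -2 \sum_i x_i \bigl( y_i - (a x_i + b) \bigr) = 0,
    \qquad
    \frac{\partial \chi^2}{\partial b} = -2 \sum_i \bigl( y_i - (a x_i + b) \bigr) = 0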

6 Least Squares
Computational approaches:
– General numerical algorithms for function minimization
– Take partial derivatives, then use general numerical algorithms for root finding
– Specialized numerical algorithms that take advantage of the form of the function
– Important special case: linear least squares

7 Linear Least Squares
General pattern: the model is a linear combination of fixed basis functions (sketched below).
Note that the dependence on the unknowns is linear, not necessarily the function itself!
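The slide's "general pattern" equation was an image; reconstructing from context, the model has the form

    y(x) \approx a\, f(x) + b\, g(x) + c\, h(x) + \cdots

where f, g, h, … are known functions of x and a, b, c, … are the unknown constants to fit.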

8 Solving Linear Least Squares Problem
Take partial derivatives with respect to the unknowns and set them to zero (worked out after the next slide).

9 Solving Linear Least Squares Problem
For convenience, rewrite in matrix form, then factor.
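The algebra on slides 8–9 was in images; a reconstruction for the two-basis model y ≈ a f(x) + b g(x), assuming the standard derivation: setting ∂χ²/∂a = ∂χ²/∂b = 0 gives

    \begin{pmatrix} \sum_i f(x_i)^2 & \sum_i f(x_i) g(x_i) \\ \sum_i f(x_i) g(x_i) & \sum_i g(x_i)^2 \end{pmatrix}
    \begin{pmatrix} a \\ b \end{pmatrix}
    =
    \begin{pmatrix} \sum_i f(x_i) y_i \\ \sum_i g(x_i) y_i \end{pmatrix}

which factors as (A^T A) x = A^T b, where row i of A is ( f(x_i)  g(x_i) ), x = (a, b)^T, and b is the vector of the y_i.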

10 Linear Least Squares
There's a different derivation of this: an overconstrained linear system Ax = b.
A has n rows and m < n columns: more equations than unknowns.

11 Linear Least Squares
Interpretation: find the x that comes "closest" to satisfying Ax = b
– i.e., minimize the residual b − Ax
– i.e., minimize |b − Ax|
– Equivalently, minimize |b − Ax|^2 or (b − Ax) · (b − Ax)
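A minimal sketch of this interpretation in Python/NumPy (the data values here are made up for illustration):

    import numpy as np

    # Overconstrained system: n = 7 equations, m = 2 unknowns (line fit y = a*x + b)
    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
    y = np.array([1.1, 2.9, 5.2, 6.8, 9.1, 11.0, 12.8])

    A = np.column_stack([x, np.ones_like(x)])  # row i is (x_i, 1)
    b = y

    # np.linalg.lstsq minimizes |b - Ax|^2 directly
    coeffs, res, rank, sv = np.linalg.lstsq(A, b, rcond=None)
    a_fit, b_fit = coeffs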

12 Linear Least Squares
If fitting data to a linear function:
– Rows of A are functions of x_i
– Entries in b are y_i
– Minimizing |b − Ax|^2 is minimizing the sum of squared differences!

13 Linear Least Squares
Compare the two expressions we've derived: both reduce to (A^T A) x = A^T b – equal!

14 Ways of Solving Linear Least Squares
Option 1: for each (x_i, y_i), compute f(x_i), g(x_i), etc. and store them in row i of A; store y_i in b; then compute (A^T A)^{-1} A^T b.
(A^T A)^{-1} A^T is known as the "pseudoinverse" of A.
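A sketch of Option 1 in NumPy; note that np.linalg.pinv computes the pseudoinverse via the SVD rather than by forming (A^T A)^{-1} A^T explicitly (the helper name is illustrative):

    import numpy as np

    def fit_via_pseudoinverse(A, b):
        # x = pinv(A) @ b minimizes |b - Ax|^2
        return np.linalg.pinv(A) @ b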

15 Ways of Solving Linear Least Squares
Option 2: for each (x_i, y_i), compute f(x_i), g(x_i), etc. and store them in row i of A; store y_i in b; compute A^T A and A^T b; then solve A^T A x = A^T b.
These are known as the "normal equations" of the least squares problem.
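A sketch of Option 2 (again, the helper name is illustrative):

    import numpy as np

    def fit_via_normal_equations(A, b):
        # Solve (A^T A) x = A^T b directly
        return np.linalg.solve(A.T @ A, A.T @ b)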

16 Ways of Solving Linear Least Squares
These can be inefficient, since A is typically much larger than A^T A and A^T b.
Option 3: for each (x_i, y_i), compute f(x_i), g(x_i), etc.; accumulate the outer product in U; accumulate the product with y_i in v; then solve Ux = v (sketched below).
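A sketch of Option 3, accumulating U = A^T A and v = A^T b one sample at a time without ever storing A (the function and variable names are illustrative):

    import numpy as np

    def fit_by_accumulation(xs, ys, basis):
        # basis(x) returns the row (f(x), g(x), ...) as a 1-D array
        m = len(basis(xs[0]))
        U = np.zeros((m, m))   # accumulates A^T A
        v = np.zeros(m)        # accumulates A^T b
        for x, y in zip(xs, ys):
            row = basis(x)
            U += np.outer(row, row)
            v += row * y
        return np.linalg.solve(U, v)

    # Example: line fit with basis (x, 1)
    # coeffs = fit_by_accumulation(xs, ys, lambda t: np.array([t, 1.0]))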

17 Special Case: Constant
Let's try to model a function of the form y = a.
In this case, f(x_i) = 1 and we are solving for the single unknown a.
Punchline: the mean is the least-squares estimator for the best constant fit.
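Reconstructing the missing equation from the normal-equations form: A is a column of n ones, so A^T A = n and A^T b = \sum_i y_i, giving

    n\, a = \sum_i y_i \quad\Rightarrow\quad a = \frac{1}{n} \sum_i y_i = \bar{y}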

18 Special Case: Line
Fit to y = a + bx.
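The slide's equations were images; the standard reconstruction: with rows (1, x_i) in A, the normal equations are

    \begin{pmatrix} n & \sum_i x_i \\ \sum_i x_i & \sum_i x_i^2 \end{pmatrix}
    \begin{pmatrix} a \\ b \end{pmatrix}
    =
    \begin{pmatrix} \sum_i y_i \\ \sum_i x_i y_i \end{pmatrix}

with the closed-form solution

    b = \frac{n \sum_i x_i y_i - \sum_i x_i \sum_i y_i}{n \sum_i x_i^2 - \bigl(\sum_i x_i\bigr)^2},
    \qquad
    a = \bar{y} - b\,\bar{x}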

19 Weighted Least Squares
Common case: the (x_i, y_i) have different uncertainties associated with them.
Want to give more weight to measurements of which you are more certain.
Weighted least squares: minimize a weighted sum of squared residuals (see below).
If the uncertainty in measurement i is σ_i, it is best to take the weight w_i = 1/σ_i².
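A reconstruction of the weighted objective (the slide's equation was an image); with w_i = 1/σ_i²:

    \chi^2 = \sum_i w_i \bigl( y_i - \hat{y}(x_i) \bigr)^2
           = \sum_i \frac{\bigl( y_i - \hat{y}(x_i) \bigr)^2}{\sigma_i^2}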

20 Weighted Least Squares
Define the weight matrix W as the diagonal matrix of the weights w_i.
Then solve the weighted least squares problem via the weighted normal equations (below).
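Reconstructing the equations and sketching them in NumPy (the helper name is illustrative):

    W = \mathrm{diag}(w_1, \ldots, w_n) = \mathrm{diag}(1/\sigma_1^2, \ldots, 1/\sigma_n^2),
    \qquad
    (A^T W A)\, x = A^T W b

    import numpy as np

    def weighted_least_squares(A, b, sigma):
        # w_i = 1 / sigma_i^2; solve (A^T W A) x = A^T W b
        w = 1.0 / sigma**2
        AtWA = A.T @ (w[:, None] * A)  # scales row i of A by w_i
        AtWb = A.T @ (w * b)
        return np.linalg.solve(AtWA, AtWb)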

21 Error Estimates from Linear Least Squares
For many applications, finding the values is useless without an estimate of their accuracy.
The residual is b − Ax.
Can compute χ² = (b − Ax) · (b − Ax).
How do we tell whether the answer is good?
– Lots of measurements
– χ² is small
– χ² increases quickly with perturbations to x

22 Error Estimates from Linear Least Squares
Let's look at the increase in χ² as we perturb x away from the solution (reconstructed below).
So, the bigger A^T A is, the faster the error increases as we move away from the current x.
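Reconstructing the perturbation algebra: if x* is the least-squares solution and x = x* + Δx, then, since A^T (b − A x*) = 0 at the minimum, the cross term vanishes and

    \chi^2(x^* + \Delta x) = \chi^2(x^*) + \Delta x^T (A^T A)\, \Delta x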

23 Error Estimates from Linear Least Squares
C = (A^T A)^{-1} is called the covariance of the data.
The "standard variance" in our estimate of x is given below. This is a matrix:
– Diagonal entries give the variance of the estimates of the components of x
– Off-diagonal entries explain mutual dependence
n − m is (# of samples) minus (# of degrees of freedom in the fit): consult a statistician…
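Reconstructing the missing formula and sketching it in NumPy (the helper name is illustrative):

    \sigma_x^2 = \frac{\chi^2}{n - m}\, C = \frac{\chi^2}{n - m}\, (A^T A)^{-1}

    import numpy as np

    def parameter_covariance(A, b, x):
        # sigma_x^2 = chi^2 / (n - m) * (A^T A)^{-1}
        n, m = A.shape
        r = b - A @ x
        chi2 = r @ r
        return chi2 / (n - m) * np.linalg.inv(A.T @ A)

    # Standard errors of the fitted parameters:
    # std_err = np.sqrt(np.diag(parameter_covariance(A, b, x)))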

24 Special Case: Constant
Compare the "standard deviation of the mean" with the "standard deviation of the samples" (reconstructed below).
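The slide's formulas were images; the standard reconstruction for the constant fit y = a (n samples, m = 1, so a = ȳ):

    \sigma_{\mathrm{samples}}^2 = \frac{1}{n - 1} \sum_i (y_i - \bar{y})^2,
    \qquad
    \sigma_{\mathrm{mean}} = \frac{\sigma_{\mathrm{samples}}}{\sqrt{n}}

This agrees with the covariance formula on slide 23: A^T A = n, C = 1/n, and χ²/(n − 1) = σ_samples².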

25 Things to Keep in Mind
In general, uncertainty in estimated parameters goes down slowly: like 1/sqrt(# of samples).
Formulas for special cases (like fitting a line) are messy: it's simpler to think in the A^T A x = A^T b form.
All of these minimize "vertical" squared distance:
– The square is not always appropriate
– Vertical distance is not always appropriate

