Fitting a line to N data points

Presentation transcript:

Fitting a line to N data points – 1 If we use $y_i = a + b x_i$ then the estimates of $a$, $b$ are not independent. To make $a$, $b$ independent, compute the optimally weighted mean of the $x_i$: $\bar{x} = \sum_i w_i x_i / \sum_i w_i$ with weights $w_i = 1/\sigma_i^2$. Then use: $y_i = a + b(x_i - \bar{x})$. Intercept = optimally weighted mean value: $\hat{a} = \sum_i w_i y_i / \sum_i w_i$. Variance of intercept: $\mathrm{Var}(\hat{a}) = 1/\sum_i w_i = \left(\sum_i 1/\sigma_i^2\right)^{-1}$.

Fitting a line to N data points – 2 Slope = optimally weighted mean value of the single-point slope estimates $b_i = y_i/(x_i - \bar{x})$: $\hat{b} = \sum_i W_i b_i / \sum_i W_i$. Optimal weights: $W_i = (x_i - \bar{x})^2/\sigma_i^2$. Hence get optimal slope and its variance: $\hat{b} = \dfrac{\sum_i (x_i - \bar{x})\, y_i/\sigma_i^2}{\sum_i (x_i - \bar{x})^2/\sigma_i^2}$, $\mathrm{Var}(\hat{b}) = \left(\sum_i (x_i - \bar{x})^2/\sigma_i^2\right)^{-1}$.
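
As a worked sketch (not part of the slides), the centered fit above translates directly into NumPy; the function name and argument order are illustrative:

```python
import numpy as np

def fit_line_centered(x, y, sigma):
    """Weighted fit of y = a + b*(x - xbar), following the two slides above."""
    w = 1.0 / sigma**2                          # optimal weights w_i = 1/sigma_i^2
    xbar = np.sum(w * x) / np.sum(w)            # weighted mean of x: decorrelates a and b
    a = np.sum(w * y) / np.sum(w)               # intercept = optimally weighted mean of y
    var_a = 1.0 / np.sum(w)                     # variance of the intercept
    dx = x - xbar
    b = np.sum(w * dx * y) / np.sum(w * dx**2)  # optimally weighted slope
    var_b = 1.0 / np.sum(w * dx**2)             # variance of the slope
    return a, b, xbar, var_a, var_b
```

Note that the returned intercept is the value of the line at $x = \bar{x}$, not at $x = 0$.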

Linear regression If fitting a straight line, minimize: $\chi^2 = \sum_i \dfrac{(y_i - a - b x_i)^2}{\sigma_i^2}$. To minimize, set derivatives to zero: $\dfrac{\partial \chi^2}{\partial a} = -2\sum_i \dfrac{y_i - a - b x_i}{\sigma_i^2} = 0$ and $\dfrac{\partial \chi^2}{\partial b} = -2\sum_i \dfrac{x_i (y_i - a - b x_i)}{\sigma_i^2} = 0$. Note that these are a pair of simultaneous linear equations in $a$ and $b$ – the “normal equations”.

The Normal Equations Solve as simultaneous linear equations in matrix form – the “normal equations”: $\begin{pmatrix} \sum_i 1/\sigma_i^2 & \sum_i x_i/\sigma_i^2 \\ \sum_i x_i/\sigma_i^2 & \sum_i x_i^2/\sigma_i^2 \end{pmatrix}\begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} \sum_i y_i/\sigma_i^2 \\ \sum_i x_i y_i/\sigma_i^2 \end{pmatrix}$. In vector-matrix notation: $\mathbf{M}\mathbf{a} = \mathbf{v}$. Solve using standard matrix-inversion methods (see Press et al. for implementation). Note that the matrix $\mathbf{M}$ is diagonal if $\sum_i x_i/\sigma_i^2 = 0$, i.e. if the $x_i$ have been shifted to their weighted mean as on the first slide. In this case we have chosen an orthogonal basis.
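
A minimal sketch of the matrix solution, assuming NumPy and illustrative array names; pre-centering $x$ (as on the first slide) makes the off-diagonal sums vanish, so $\mathbf{M}$ becomes diagonal:

```python
import numpy as np

def solve_normal_equations(x, y, sigma):
    """Build and solve the 2x2 normal equations for y = a + b*x."""
    w = 1.0 / sigma**2
    M = np.array([[np.sum(w),     np.sum(w * x)],
                  [np.sum(w * x), np.sum(w * x**2)]])
    v = np.array([np.sum(w * y), np.sum(w * x * y)])
    a, b = np.linalg.solve(M, v)   # solve M (a, b)^T = v without forming M^-1 explicitly
    return a, b, M
```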

General linear regression Suppose you wish to fit your data points $y_i$ with the sum of several scaled functions of the $x_i$: $y(x) = \sum_k a_k P_k(x)$. Example: fitting a polynomial: $y(x) = a_1 + a_2 x + a_3 x^2 + \dots$. Goodness of fit to data $x_i$, $y_i$, $\sigma_i$: $\chi^2 = \sum_i \dfrac{(y_i - \hat{y}_i)^2}{\sigma_i^2}$, where $\hat{y}_i = \sum_k a_k P_k(x_i)$. To minimise $\chi^2$, then for each $k$ we have an equation: $\dfrac{\partial \chi^2}{\partial a_k} = -2\sum_i \dfrac{P_k(x_i)\,(y_i - \hat{y}_i)}{\sigma_i^2} = 0$.
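
The general linear model is conveniently expressed through a design matrix with one column per basis function. A hedged sketch for the polynomial example (function names are illustrative):

```python
import numpy as np

def design_matrix_poly(x, degree):
    """Design matrix A with A[i, k] = P_k(x_i) = x_i**k for k = 0..degree."""
    return np.vander(x, degree + 1, increasing=True)

def chi_squared(a, A, y, sigma):
    """chi^2 = sum_i [y_i - sum_k a_k P_k(x_i)]^2 / sigma_i^2."""
    resid = (y - A @ a) / sigma
    return np.sum(resid**2)
```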

Normal equations Normal equations are constructed as before: for each $k$, $\sum_j \left[\sum_i \dfrac{P_j(x_i)\,P_k(x_i)}{\sigma_i^2}\right] a_j = \sum_i \dfrac{P_k(x_i)\,y_i}{\sigma_i^2}$. Or in matrix form: $\mathbf{M}\mathbf{a} = \mathbf{v}$, with $M_{jk} = \sum_i P_j(x_i) P_k(x_i)/\sigma_i^2$ and $v_k = \sum_i P_k(x_i)\, y_i/\sigma_i^2$.
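
Continuing the sketch, $\mathbf{M}$ and $\mathbf{v}$ can be built from the design matrix and the normal equations solved in a few lines (again an illustration, not the lecture's own code):

```python
import numpy as np

def fit_general_linear(A, y, sigma):
    """Solve the normal equations M a = v for a general linear model.

    A is the design matrix, A[i, k] = P_k(x_i).
    """
    w = 1.0 / sigma**2
    M = A.T @ (w[:, None] * A)   # M_jk = sum_i P_j(x_i) P_k(x_i) / sigma_i^2
    v = A.T @ (w * y)            # v_k  = sum_i P_k(x_i) y_i / sigma_i^2
    a = np.linalg.solve(M, v)
    return a, M
```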

Uncertainties of the answers We want to know the uncertainties of the best-fit values of the parameters $a_j$. For a one-parameter fit we’ve seen that: $\sigma_a^2 = \left(\sum_i 1/\sigma_i^2\right)^{-1} = \left(\tfrac{1}{2}\,\partial^2\chi^2/\partial a^2\right)^{-1}$. By analogy, for a multi-parameter fit the covariance of any pair of parameters is: $\mathrm{Cov}(a_j, a_k) = \left[\mathbf{H}^{-1}\right]_{jk}$. Hence get a local quadratic approximation to the $\chi^2$ surface using the Hessian matrix $\mathbf{H}$: $\chi^2(\mathbf{a}) \approx \chi^2(\hat{\mathbf{a}}) + \delta\mathbf{a}^{\mathsf{T}}\mathbf{H}\,\delta\mathbf{a}$, where $\delta\mathbf{a} = \mathbf{a} - \hat{\mathbf{a}}$.
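
In code, the covariances come from inverting the same matrix $\mathbf{M}$ built above; a short sketch under the convention used here (covariance matrix $= \mathbf{H}^{-1} = \mathbf{M}^{-1}$):

```python
import numpy as np

def parameter_covariance(M):
    """Covariance matrix of the best-fit parameters and their 1-sigma errors."""
    C = np.linalg.inv(M)            # C_jk = Cov(a_j, a_k)
    errors = np.sqrt(np.diag(C))    # sigma_{a_k} = sqrt(C_kk)
    return C, errors
```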

The Hessian matrix Defined as $H_{jk} = \tfrac{1}{2}\,\dfrac{\partial^2 \chi^2}{\partial a_j\,\partial a_k} = \sum_i \dfrac{P_j(x_i)\,P_k(x_i)}{\sigma_i^2}$. It’s the same matrix $\mathbf{M}$ we derived from the normal equations! Example: $y = ax + b$: $\mathbf{H} = \begin{pmatrix} \sum_i x_i^2/\sigma_i^2 & \sum_i x_i/\sigma_i^2 \\ \sum_i x_i/\sigma_i^2 & \sum_i 1/\sigma_i^2 \end{pmatrix}$.
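
A small numerical check (with made-up data values) that the element-by-element Hessian for $y = ax + b$ matches the design-matrix construction of $\mathbf{M}$:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])                # illustrative data
sigma = np.array([0.1, 0.2, 0.1, 0.3])
w = 1.0 / sigma**2

A = np.column_stack([x, np.ones_like(x)])         # basis functions P_1 = x, P_2 = 1
M = A.T @ (w[:, None] * A)                        # matrix from the normal equations
H = np.array([[np.sum(w * x**2), np.sum(w * x)],  # Hessian written out element by element
              [np.sum(w * x),    np.sum(w)]])
assert np.allclose(M, H)                          # same matrix, as the slide states
```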

Principal axes of the $\chi^2$ ellipsoid The eigenvectors of $\mathbf{H}$ define the principal axes of the $\chi^2$ ellipsoid. For the straight-line fit, $\mathbf{H}$ is diagonalised by replacing the coordinates $x_i$ with: $x_i - \bar{x}$, where $\bar{x}$ is the optimally weighted mean of the $x_i$. This gives $\mathbf{H} = \begin{pmatrix} \sum_i (x_i - \bar{x})^2/\sigma_i^2 & 0 \\ 0 & \sum_i 1/\sigma_i^2 \end{pmatrix}$ and so orthogonalises the parameters. [Figure: $\chi^2$ contours in the $(a, b)$ plane – a tilted ellipse before centering, an axis-aligned ellipse after.]

Principal axes for general linear models In the general linear case where we fit $K$ functions $P_k$ with scale factors $a_k$: $y(x) = \sum_{k=1}^{K} a_k P_k(x)$. The Hessian matrix has elements: $H_{jk} = \sum_i P_j(x_i)\,P_k(x_i)/\sigma_i^2$. Normal equations are $\mathbf{H}\mathbf{a} = \mathbf{v}$, with $v_k = \sum_i P_k(x_i)\,y_i/\sigma_i^2$. This gives $K$-dimensional ellipsoidal surfaces of constant $\chi^2$ whose principal axes are eigenvectors of the Hessian matrix $\mathbf{H}$. Use standard matrix methods to find the linear combinations of the parameters (equivalently, of the basis functions $P_k$) that diagonalise $\mathbf{H}$.
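
As a final sketch (assuming the convention above, in which the covariance matrix is $\mathbf{H}^{-1}$), the principal axes and the uncorrelated errors along them come from a symmetric eigen-decomposition:

```python
import numpy as np

def principal_axes(H):
    """Principal axes of the constant-chi^2 ellipsoids defined by the Hessian H."""
    eigvals, eigvecs = np.linalg.eigh(H)   # H is symmetric, so eigh is the right tool
    axis_variances = 1.0 / eigvals         # variance along each principal axis
    return eigvals, eigvecs, axis_variances
```

The columns of `eigvecs` are the parameter combinations that are uncorrelated; small eigenvalues (large axis variances) flag poorly constrained directions in parameter space.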