Scientific Computing: General Least Squares

Polynomial Least Squares

We assume that the class of functions is the class of all polynomials of degree less than or equal to n. Given data points (x_1, y_1), ..., (x_m, y_m), we want to find the polynomial p(x) = c_0 + c_1 x + ... + c_n x^n that minimizes the total squared error

E = (p(x_1) - y_1)^2 + (p(x_2) - y_2)^2 + ... + (p(x_m) - y_m)^2

Matrix Formulation of Polynomial Least Squares

To minimize E, let

A = [ 1  x_1  x_1^2 ... x_1^n
      1  x_2  x_2^2 ... x_2^n
      ...
      1  x_m  x_m^2 ... x_m^n ],   c = [c_0, c_1, ..., c_n]^T,   y = [y_1, y_2, ..., y_m]^T

Then we want to find the vector c that minimizes the squared length of the error vector Ac - y (equivalently, y - Ac). That is, minimize ||Ac - y||^2.

Matrix Formulation of Polynomial Least Squares

By a calculation similar to the one we did for linear least squares, we get the normal equation A^T A c = A^T y. The solution c = [c_0, c_1, ..., c_n]^T gives the coefficients of the best polynomial fit f* to the data:

f*(x) = c_0 + c_1 x + ... + c_n x^n
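The normal-equation recipe can be sketched in code. Below is a minimal illustration in Python rather than the course's Matlab; the names vandermonde, solve, and poly_fit are chosen here for the sketch and are not from the slides. Pure-Python Gaussian elimination stands in for Matlab's backslash operator.

```python
def vandermonde(xs, n):
    """m x (n+1) Vandermonde matrix: row i is [1, x_i, x_i^2, ..., x_i^n]."""
    return [[x ** j for j in range(n + 1)] for x in xs]

def solve(M, b):
    """Solve the square system M c = b by Gaussian elimination with partial pivoting."""
    m = len(b)
    aug = [row[:] + [b[i]] for i, row in enumerate(M)]   # augmented matrix [M | b]
    for k in range(m):
        p = max(range(k, m), key=lambda r: abs(aug[r][k]))  # pivot row
        aug[k], aug[p] = aug[p], aug[k]
        for r in range(k + 1, m):                        # eliminate below the pivot
            f = aug[r][k] / aug[k][k]
            for j in range(k, m + 1):
                aug[r][j] -= f * aug[k][j]
    c = [0.0] * m
    for k in range(m - 1, -1, -1):                       # back-substitution
        c[k] = (aug[k][m] - sum(aug[k][j] * c[j] for j in range(k + 1, m))) / aug[k][k]
    return c

def poly_fit(xs, ys, n):
    """Coefficients [c_0, ..., c_n] of the best degree-n fit, via A^T A c = A^T y."""
    A = vandermonde(xs, n)
    cols = list(zip(*A))                                 # columns of A = rows of A^T
    AtA = [[sum(u * v for u, v in zip(ci, cj)) for cj in cols] for ci in cols]
    Aty = [sum(u * y for u, y in zip(ci, ys)) for ci in cols]
    return solve(AtA, Aty)
```

For example, fitting a quadratic to data sampled from y = 2x^2 + 1 recovers coefficients close to [1, 0, 2].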

Matrix Formulation of Polynomial Least Squares

Definition: The matrix

A = [ 1  x_1  x_1^2 ... x_1^n
      1  x_2  x_2^2 ... x_2^n
      ...
      1  x_m  x_m^2 ... x_m^n ]

is called a Vandermonde matrix: a matrix whose columns (or rows) are successive powers of an independent variable.

Polynomial Least Squares Example

Problem: Find the best quadratic polynomial fit to the data below.

(data table: x, y — numeric values not preserved in the transcript)

Normal equation: A^T A c = A^T y, where A is the Vandermonde matrix with rows [1, x_i, x_i^2] built from the x-values.

Polynomial Least Squares Example

(data table: x, y — numeric values not preserved in the transcript)

Polynomial Least Squares Example

A^T y = (entries not preserved in the transcript)

Polynomial Least Squares Example

So, A^T A c = A^T y becomes a 3x3 linear system (numeric entries not preserved in the transcript). We could use Gaussian elimination, or use Matlab's backslash operator:

D = [ ; ; ];   % the 3x3 matrix A^T A
y = [ ]';      % the right-hand side A^T y
c = D \ y

Polynomial Least Squares Example

So, the best degree-2 fit to the data is the quadratic polynomial c_0 + c_1 x + c_2 x^2 (the numeric coefficients are not preserved in the transcript).

Class Project

Write a Matlab function that takes a vector of x-values and a vector of y-values and returns the vector of coefficients for the best quadratic fit to the data.

Class Project 2

Exercise 9.15 in Pav: Write a Matlab function that will find the coefficients (a, b) for the function a e^x + b e^(-x) that best approximates a set of data {(x_i, y_i)}. Your function should take the x and y vectors as input and should output the vector c = (a, b). Use the normal equation to solve this problem. Test your method on the data x = ( ), y = (1.194, 0.43, 0.103, 0.322, 1.034). Graph the data and your best-fit function.

QR and Least Squares

Given the normal equation A^T A c = A^T y, solving this directly can often be numerically unstable. For example, consider the matrix

A = [ 1    1
      eps  0
      0    eps ]

Then

A^T A = [ 1 + eps^2   1
          1           1 + eps^2 ]

becomes singular (in floating-point arithmetic) if eps is less than the square root of the machine epsilon, since 1 + eps^2 then rounds to exactly 1.
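The breakdown can be checked numerically. Here is a small Python sketch (the course uses Matlab; the example matrix A = [1 1; eps 0; 0 eps] assumed here is the standard illustration of this effect): forming A^T A squares the small entry eps, and once eps^2 falls below the machine epsilon, 1 + eps^2 rounds to exactly 1 and the computed A^T A is exactly singular even though A has full rank.

```python
import sys

# Pick eps just below the square root of the machine epsilon.
eps = sys.float_info.epsilon ** 0.5 / 2

# In exact arithmetic A^T A = [[1 + eps^2, 1], [1, 1 + eps^2]],
# but in floating point 1 + eps**2 rounds to exactly 1.0.
d = 1.0 + eps * eps
AtA = [[d, 1.0], [1.0, d]]

# The computed 2x2 determinant is exactly zero: the normal equations break down.
det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
print(det)  # 0.0
```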

QR and Least Squares

To resolve this problem, assume we have carried out a QR factorization A = QR (this can be done for any matrix, square or rectangular). Then A^T A c = A^T y becomes (QR)^T (QR) c = (QR)^T y.

Note: R = [ R_n ; 0 ], where R_n is an nxn upper-triangular matrix.

Claim: To solve the least squares problem, it is enough to find the solution c to R_n c = Q^T y (using the first n entries of Q^T y).

QR and Least Squares

Proof: The least squares solution is to find the minimum value of ||Ac - y||^2. Since Q is orthogonal, multiplying by Q^T preserves lengths, so for A = QR we have

||Ac - y||^2 = ||Q^T(Ac - y)||^2 = ||Rc - Q^T y||^2

Writing Q^T y = [ z_1 ; z_2 ] with z_1 of length n, this equals ||R_n c - z_1||^2 + ||z_2||^2, which is minimized exactly when R_n c = z_1. QED

Note: Once R_n and Q are determined, it is relatively straightforward to solve R_n c = Q^T y for c, since R_n is upper triangular. (Back-substitution)
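The back-substitution step mentioned in the note can be sketched as follows (a Python illustration; the name back_substitute is chosen here, the course uses Matlab):

```python
def back_substitute(R, b):
    """Solve R c = b for an upper-triangular R (n x n, nonzero diagonal)."""
    n = len(b)
    c = [0.0] * n
    # Work upward from the last row: each row has only one new unknown.
    for i in range(n - 1, -1, -1):
        c[i] = (b[i] - sum(R[i][j] * c[j] for j in range(i + 1, n))) / R[i][i]
    return c
```

For example, solving [[2, 1], [0, 3]] c = [5, 6] gives c = [1.5, 2.0]: the last row yields c_1 = 2, then the first row yields c_0 = (5 - 2)/2.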