Nonlinear least squares


Nonlinear least squares Given m data points (t_i, y_i), i = 1, 2, …, m, we wish to find a vector x of n parameters that gives a best fit in the least squares sense to a model m. For example, consider the exponentially decaying model m(x,t) = x_1 e^(-x_2 t), where x_1 and x_2 are unknowns; here n is 2. If x_2 were known, the model would be linear. Define the residual r of m components by r_i(x) = y_i - m(x, t_i); we wish to minimize 0.5 r^T r = 0.5 Σ r_i(x)^2.

Why consider this special case? It is a common problem, and the derivatives have special structure. Let J be the Jacobian of r (each column of J holds the partial derivatives of r with respect to one element of x). The gradient is g = J^T r. The matrix of second partials is H = J^T J + S, where S = Σ r_i(x) ∇^2 r_i(x) is zero for an exact fit. For the model x_1 e^(-x_2 t), row i of J has the form [-e^(-x_2 t_i), x_1 t_i e^(-x_2 t_i)].
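For concreteness, here is a minimal NumPy sketch of the residual and its Jacobian for this model (the helper names residual and jacobian are mine, not the slide's):

```python
import numpy as np

def residual(x, t, y):
    # r_i(x) = y_i - x_1 * e^(-x_2 * t_i)
    return y - x[0] * np.exp(-x[1] * t)

def jacobian(x, t):
    # Row i of J: [dr_i/dx_1, dr_i/dx_2]
    #           = [-e^(-x_2 t_i), x_1 t_i e^(-x_2 t_i)]
    e = np.exp(-x[1] * t)
    return np.column_stack((-e, x[0] * t * e))
```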

Gauss-Newton Let H = J^T J (i.e., ignore the second term in the Hessian) and perform the Newton iteration: Until convergence: let s be the solution of (J(x)^T J(x)) s = -J(x)^T r(x), then set x = x + s. But (J(x)^T J(x)) s = -J(x)^T r(x) is just the normal-equation form of the linear least squares problem min_s ||J(x) s + r(x)|| for finding s.
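A sketch of the Gauss-Newton loop under these definitions, reusing the residual and jacobian helpers sketched above; rather than forming J^T J explicitly, it solves the equivalent linear least squares subproblem with np.linalg.lstsq, which is better conditioned. The tolerance and iteration cap are my choices:

```python
import numpy as np

def gauss_newton(x, t, y, tol=1e-8, max_iter=50):
    for _ in range(max_iter):
        r = residual(x, t, y)
        J = jacobian(x, t)
        # The step solves min_s ||J s + r||, whose normal equations are
        # (J^T J) s = -J^T r; lstsq avoids forming J^T J explicitly.
        s, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + s
        if np.linalg.norm(s) < tol:
            break
    return x
```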

Gauss-Newton on the exponential example with model x_1 e^(-x_2 t): starting from x = [1, 0]^T, we have e^(-x_2 t_i) = 1 for every i, so row i of J is initially [-1, t_i]. [The slide's table of data (t, y) and its table of the course of iterations (x, ||r||) did not survive transcription.]

Levenberg-Marquardt Let H = J^T J + kI and perform the Newton iteration: Until convergence: let s be the solution of (J(x)^T J(x) + kI) s = -J(x)^T r(x), then set x = x + s. Rationale: if k is big, the step is essentially a gradient step, which is good far from the solution; and since the data could be noisy, the damped system is smoother. Project alert: how do you choose k?
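A sketch of the damped iteration, again reusing the residual and jacobian helpers from above; the halve-on-success/double-on-failure rule for k is one common heuristic, not something the slide prescribes:

```python
import numpy as np

def levenberg_marquardt(x, t, y, k=1.0, tol=1e-8, max_iter=100):
    for _ in range(max_iter):
        r = residual(x, t, y)
        J = jacobian(x, t)
        # Damped step: (J^T J + k I) s = -J^T r.
        s = np.linalg.solve(J.T @ J + k * np.eye(len(x)), -J.T @ r)
        if np.linalg.norm(residual(x + s, t, y)) < np.linalg.norm(r):
            x, k = x + s, k / 2   # step helped: accept it, lean toward Gauss-Newton
        else:
            k = k * 2             # step hurt: keep x, lean toward gradient descent
        if np.linalg.norm(s) < tol:
            break
    return x
```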

How to get derivatives of a difficult function:
- Automatic differentiation: differentiate the program itself. A hot topic, and a good project (see the sketch below).
- Numerical differentiation.
- Bite the bullet and hope you can analytically differentiate the function accurately.
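As a taste of the first approach, forward-mode automatic differentiation can be demonstrated with dual numbers; this toy class is my construction, not from the slides:

```python
import math

class Dual:
    """Toy dual number a + b*eps with eps^2 = 0; the b part carries a derivative."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (a + a'eps)(b + b'eps) = ab + (ab' + a'b)eps
        return Dual(self.val * other.val,
                    self.val * other.der + self.der * other.val)

    def exp(self):
        # Chain rule: d/dx exp(u) = exp(u) * u'
        e = math.exp(self.val)
        return Dual(e, e * self.der)

# Differentiate m(x, t) = x_1 * e^(-x_2 t) with respect to x_1 at
# x_1 = 2, x_2 = 0.5, t = 1 by seeding x_1's derivative part to 1.
x1, x2, t = Dual(2.0, 1.0), Dual(0.5), 1.0
m = x1 * (x2 * (-t)).exp()
print(m.val, m.der)  # m.der == e^(-0.5), the exact dm/dx_1
```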

Numerical differentiation of sin(1.0), using derivative ≈ (f(x+h) - f(x))/h: as h gets smaller, the truncation error decreases but the roundoff error increases. Choosing h becomes an art.
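A sketch of the experiment the slide alludes to, comparing the forward difference against the true derivative cos(1.0) as h shrinks:

```python
import math

exact = math.cos(1.0)  # true derivative of sin at 1.0
for k in range(1, 16):
    h = 10.0 ** (-k)
    approx = (math.sin(1.0 + h) - math.sin(1.0)) / h
    print(f"h = 1e-{k:02d}   error = {abs(approx - exact):.3e}")
# The error is smallest near h ~ 1e-8 (roughly sqrt(machine epsilon));
# for smaller h, roundoff in the subtraction dominates and the error grows.
```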

Linear Programming Example: a company, which makes steel bands and steel coils, needs to allocate next week's time on a rolling mill.

                     Bands         Coils
Rate of production   200 tons/hr   140 tons/hr
Profit per ton       $25           $30
Orders               6000 tons     4000 tons

Make x tons of bands and y tons of coils to maximize 25x + 30y such that x/200 + y/140 <= 40, 0 <= x <= 6000, and 0 <= y <= 4000.
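A sketch of this problem with scipy.optimize.linprog (an assumption; the slide names no solver). linprog minimizes, so the objective is negated:

```python
from scipy.optimize import linprog

# Maximize 25x + 30y by minimizing its negation (linprog minimizes).
c = [-25, -30]
# One mill-time constraint: x/200 + y/140 <= 40 hours.
A_ub = [[1 / 200, 1 / 140]]
b_ub = [40]
bounds = [(0, 6000), (0, 4000)]  # order limits on bands and coils

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, -res.fun)  # optimal tons of bands and coils, and the profit
```

Working the arithmetic by hand gives the same answer: bands earn $5000 per mill-hour versus $4200 for coils, so fill the 6000-ton band order first (30 hours) and spend the remaining 10 hours on 1400 tons of coils, for a profit of $192,000.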

Linear programming: maximizing a linear function subject to linear constraints. Quadratic programming: maximizing a quadratic function subject to linear constraints. Mathematical programming: maximizing general functions subject to general constraints.

Approaches to Linear Programming
1. (Simplex; Dantzig, 1940s) The solution lies on the boundary of the feasible region, so go from vertex to vertex, continuing to increase the objective function. Each iteration involves solving a linear system: O(n^3) multiplications. As one jumps to the next vertex, the linear system loses one row and column and gains one row and column, so it can be updated in O(n^2) multiplications (Golub/Bartels, 1970).
2. (Karmarkar, 1984) Scale the steepest-ascent direction by the distance to the constraints and go almost to the boundary. This requires fewer iterations, and the structure of the system does not change.