Consecutive Reaction: A --k_1--> B --k_2--> C

d[A]/dt = -k_1 [A]
d[B]/dt = k_1 [A] - k_2 [B]
d[C]/dt = k_2 [B]

[A] = [A]_0 exp(-k_1 t)
[B] = [A]_0 (k_1/(k_2 - k_1)) (exp(-k_1 t) - exp(-k_2 t))
[C] = [A]_0 - [A] - [B]

A three-component system.
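The closed-form profiles above are what every iteration of the fit below evaluates first. A minimal MATLAB sketch (a hypothetical helper, not the Kinfit4.m script shown later) that builds the concentration matrix C on a column time vector t, assuming an initial concentration A0 and k_1 ≠ k_2:

```matlab
% Hypothetical helper: concentration profiles for A -> B -> C.
% k = [k1; k2]; t is a column time vector; A0 is the initial concentration of A.
function C = conc_profiles(k, t, A0)
    k1 = k(1);  k2 = k(2);                 % assumes k1 ~= k2
    cA = A0 * exp(-k1 * t);
    cB = A0 * (k1 / (k2 - k1)) * (exp(-k1 * t) - exp(-k2 * t));
    cC = A0 - cA - cB;
    C  = [cA, cB, cC];                     % columns in the order [A B C]
end
```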

[Figure: simulated concentration profiles for k_1 = 5, k_2 = 0.5]

Kinfit4.m

Multiple Solutions: [Figure: fits obtained with k_1 = 5, k_2 = 0.5 and with the exchanged pair k_1 = 0.5, k_2 = 5]

Non-negativity constraints for elimination of multiple solutions in fitting of multivariate kinetic models to spectroscopic data. Joaquim Jaumot, Paul J. Gemperline and Alexandra Stang, J. Chemometrics, 2005, 19.

Model-based non-linear fitting

The task of model-based data fitting is, for a given matrix A, to determine the best parameters defining the matrix C, as well as the best pure responses collected in the matrix E:

A = C E + R

[Figure: pictorial decomposition of A into C E plus R]

The quality of the fit is represented by the matrix of residuals R. Assuming white noise, the sum of squares, ssq, of all elements r_{i,j} is statistically the best measure to be minimized:

ssq = Σ_i Σ_j r_{i,j}^2

Linear and non-linear parameters

There are two fundamentally different kinds of parameters: a small number of model constants, which are the non-linear parameters, and the large number of response coefficients, which are the linear parameters. For a given C, the linear parameters are obtained by linear least squares:

E = C^+ A = (C^T C)^-1 C^T A,  or  E = C\A (MATLAB notation)

R = A - C E = A - C C^+ A = f(A, model, k)
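In MATLAB the elimination of the linear parameters is a one-liner. A minimal sketch, assuming A holds the measured data and C comes from the model (e.g. the conc_profiles helper sketched above):

```matlab
E   = C \ A;            % linear least squares, equivalent to inv(C'*C)*C'*A
R   = A - C * E;        % residual matrix
ssq = sum(R(:).^2);     % sum of squared residuals
```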

Newton-Gauss Algorithm

Consecutive reaction A --k_1--> B --k_2--> C:

d[A]/dt = -k_1 [A]
d[B]/dt = k_1 [A] - k_2 [B]
d[C]/dt = k_2 [B]

[A] = [A]_0 exp(-k_1 t)
[B] = [A]_0 (k_1/(k_2 - k_1)) (exp(-k_1 t) - exp(-k_2 t))
[C] = [A]_0 - [A] - [B]

A = C E + R

Measured data: A. Model: consecutive reaction. Non-linear parameters: k_1, k_2. Linear parameters: E.

Newton-Gauss Algorithm
i) Estimate of the non-linear parameters (k_1, k_2): k_1 = 0.3, k_2 = 0.15

Newton-Gauss Algorithm
ii) Calculation of the residuals and ssq according to the model and the estimated parameters

Model:
[A] = [A]_0 exp(-k_1 t)
[B] = [A]_0 (k_1/(k_2 - k_1)) (exp(-k_1 t) - exp(-k_2 t))
[C] = [A]_0 - [A] - [B]

k_1 = 0.2 and k_2 = 0.15;  E = C\A

Newton-Gauss Algorithm
ii) Calculation of the residuals and ssq according to the model and the estimated parameters

Residual matrix: R = A - C (C\A)

ssq = Σ_i Σ_j r_{i,j}^2 = 1.66
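Step ii) can be wrapped as a function of the non-linear parameters alone, which is also the quantity the derivative step below differentiates. A minimal sketch reusing the hypothetical conc_profiles helper; t, A0 and the data matrix A are assumed available:

```matlab
% Hypothetical helper: residuals and ssq for a given set of rate constants.
function [R, ssq] = residuals(k, t, A0, A)
    C   = conc_profiles(k, t, A0);   % model: consecutive reaction
    R   = A - C * (C \ A);           % linear parameters eliminated via E = C\A
    ssq = sum(R(:).^2);
end
```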

Newton-Gauss Algorithm
iii) Calculation of the parameter-shift vector: the Taylor series expansion

R = f(parameters), ssq = f(parameters)

A shift in the non-linear parameters should be calculated in such a way that ssq moves towards its minimum value:

R(p) = A - C (C\A);  find Δp such that R(p + Δp) is at its minimum.

Taylor series expansion:
f(x + Δx) = f(x) + (df(x)/dx) Δx + (1/2!) (d^2 f(x)/dx^2) (Δx)^2 + ... + (1/n!) (d^n f(x)/dx^n) (Δx)^n

Truncated after the first-order term:
f(x + Δx) ≈ f(x) + (df(x)/dx) Δx

Newton-Gauss Algorithm
iii) Calculation of the parameter-shift vector: the Taylor series expansion

R(p + Δp) = R(p) + (dR(p)/dp) Δp

The goal is to determine the vector of parameter shifts Δp that moves R(p_0 + Δp) towards zero:

R(p_0) = - (dR(p_0)/dp) Δp

R(p_0) = - (dR(p_0)/dp_1) Δp_1 - (dR(p_0)/dp_2) Δp_2 - ... - (dR(p_0)/dp_np) Δp_np

Newton-Gauss Algorithm
iii) Calculation of the parameter-shift vector: calculation of the partial derivatives

It is always possible to approximate the partial derivatives numerically by the method of finite differences. In the limit as Δp_i approaches zero, the derivative of R(p_0) with respect to p_i can be approximated as:

dR(p_0)/dp_i ≈ (R(p_0 + Δp_i) - R(p_0)) / Δp_i

For example, with p_0 = [0.3, 0.15]:
dR([k_1, k_2])/dk_1 ≈ (R([0.3 + Δ, 0.15]) - R([0.3, 0.15])) / Δ,  with Δ = Δp_1 a small increment.
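A minimal sketch of the finite-difference Jacobian, with each partial derivative already unfolded into a column vector (the increment delta and the calling names are assumptions; residuals is the hypothetical helper above):

```matlab
delta = 1e-6;                            % small finite-difference increment
p0    = [0.3; 0.15];                     % current estimate of [k1; k2]
R0    = residuals(p0, t, A0, A);         % residuals at p0
J     = zeros(numel(R0), numel(p0));     % Jacobian, one column per parameter
for i = 1:numel(p0)
    p      = p0;
    p(i)   = p(i) + delta;
    Ri     = residuals(p, t, A0, A);
    J(:,i) = (Ri(:) - R0(:)) / delta;    % vectorised dR/dp_i
end
```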

Newton-Gauss Algorithm
iii) Calculation of the parameter-shift vector

R(p_0) = - (dR(p_0)/dp_1) Δp_1 - (dR(p_0)/dp_2) Δp_2 - ... - (dR(p_0)/dp_np) Δp_np

The partial derivatives dR(p_0)/dp_1, dR(p_0)/dp_2, ... are matrices (the columns of the Jacobian). How can the Δp vector be calculated? One solution to this problem is to vectorise (unfold into long column vectors) the residuals and the partial-derivative matrices.
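With the residuals and the partial derivatives unfolded into long column vectors, the shift vector follows from a single least-squares solve. A minimal sketch of one Newton-Gauss step, continuing from the Jacobian sketch above (the convergence test is an assumption):

```matlab
dp = -J \ R0(:);        % solve R(p0) = -J*dp in the least-squares sense
p0 = p0 + dp;           % shifted non-linear parameters
% Repeat: recompute R0, ssq and J at the new p0 and iterate until ssq
% no longer decreases significantly (e.g. relative change below a small tolerance).
```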