Chapter 6: LSA by CAS


Chapter 6: LSA by CAS
- CAS: Computer Algebra Systems, ideal for heavy yet routine analytical derivation (also useful for numerical/programming tasks)
- An independent method for checking spreadsheet results
- Mathematics involved: Taylor-series expansion of vector functions
- Analytical, calculus-based theory of LSA

Taylor Series Expansion
Taylor's Theorem gives an approximation of f(x) at x near x0, where x = x0 + Δx. It requires: (i) the values of f and its (various) derivatives f', all evaluated at x0, and (ii) small quantities Δx:

f(x) = f(x0 + Δx) = f(x0) + f'(x0) Δx + H.O.T.  (6.1)

H.O.T. = "Higher-Order Terms"

To approximate m multivariate functions f_1(x), f_2(x), ..., f_m(x): view them collectively as the components of a vector function f(x); then

f(x) = f(x0 + Δx) = f(x0) + A Δx + H.O.T.  (6.2a)

Define: A_ij = ∂f_i/∂x_j, evaluated at x0  (6.2b)
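Equation (6.1) can be checked numerically: dropping the H.O.T. leaves an error of order Δx². A minimal sketch in Python (sin and the step size are arbitrary illustrative choices; the chapter's f would be a vector of computed angles or distances):

```python
import math

# First-order Taylor check of (6.1): the neglected H.O.T. are O(Δx^2).
def taylor1(f, fprime, x0, dx):
    return f(x0) + fprime(x0) * dx

x0, dx = 0.5, 0.01
exact = math.sin(x0 + dx)
approx = taylor1(math.sin, math.cos, x0, dx)
err = abs(exact - approx)
print(err)  # on the order of dx**2 / 2, i.e. well below 1e-4 here
```

Halving dx should roughly quarter the error, confirming the quadratic size of the neglected terms.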

Variation of Coordinates via Series Expansion
Resection with redundant targets: many (m) angles are measured.
Objective: obtain the best set of (n) coordinates (i.e. E, N) for the unknown station(s) that fits the m observed data as closely as possible. Assume m > n.
Arrange the observed data into a column vector: ℓ = (ℓ_1, ℓ_2, ..., ℓ_m)^T

Apply the least-squares (LS) condition: ℓ ≈ f(x)
x = the LS solution for the coordinates, e.g. x = [E_U, N_U]^T as in the earlier resection section (n = 2)
f(x): the calculated version of the measured angles and/or distances, computed using values of the coordinates x (what is the best x?)

Example:
f_1 = the calculated angle A-U-B in Fig. 3-13; hence f_1, as a function of the unknown coordinates x, is given by Eq. (6.4).
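As a hedged illustration of the idea behind Eq. (6.4): an angle at station U between targets A and B can be computed as the difference of two azimuths, each an arctangent of coordinate differences. The coordinates below are invented, and the azimuth convention (clockwise from north, atan2(ΔE, ΔN)) is an assumption, not taken from Fig. 3-13:

```python
import math

# Angle A-U-B from (E, N) coordinates, as the difference of two azimuths.
# Convention assumed: azimuth measured clockwise from north = atan2(dE, dN).
def azimuth(frm, to):
    return math.atan2(to[0] - frm[0], to[1] - frm[1])  # points are (E, N)

def angle_AUB(U, A, B):
    return (azimuth(U, B) - azimuth(U, A)) % (2 * math.pi)

U, A, B = (0.0, 0.0), (0.0, 100.0), (100.0, 0.0)  # A due north, B due east of U
deg = math.degrees(angle_AUB(U, A, B))
print(deg)  # approximately 90
```

Because the angle depends on the unknown coordinates only through arctangents, its partial derivatives (the A_ij of 6.2b) are straightforward for a CAS to produce symbolically.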

How to find the best solution x?
Utilize the fact: x = x0 + Δx, where x0 is some approximate solution. Thus ℓ ≈ f(x0 + Δx).
Applying (6.2a)-(6.2b):

ℓ ≈ f(x0) + A Δx + H.O.T.

Hence

A Δx - [ℓ - f(x0)] + H.O.T. ≈ 0  (6.5)

Note: Δx is the only unknown in this problem.
Rephrasing (6.5): minimize ||A Δx - k + H.O.T.||^2, where k ≡ ℓ - f(x0) (a weighted problem, with weight matrix W).
** If we modify the problem only very slightly (by dropping the H.O.T.), the solution should also differ only slightly. **

First obtain an approximate solution (really: the Δx that minimizes ||A Δx - k||^2, weighted by W):

Δx = (A^T W A)^(-1) A^T W k  (6.7)

The solution is then improved to x_new = x0 + Δx  (6.8)

This updated (still approximate) solution provides a new (better) "x0".
Fig. 6.1: Improving the provisional coordinates by an (approximate) Δx.
Use the new x0 to repeat the procedure until convergence is reached.
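A minimal sketch of one update (6.7) in pure Python, with an invented 3-observation, 2-unknown example (the entries of A, W, and k are made-up numbers, not from the text):

```python
# One weighted LS update, Δx = (A^T W A)^(-1) A^T W k, in pure Python.
def transpose(M):
    return [list(r) for r in zip(*M)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def inv2(M):  # inverse of a 2x2 matrix
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]                  # design matrix (m=3, n=2)
W = [[2.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]  # weight matrix
k = [[0.1], [0.2], [0.35]]                                # misclosure k = ℓ - f(x0)
At = transpose(A)
N = matmul(matmul(At, W), A)                              # normal matrix A^T W A
dx = matmul(matmul(matmul(inv2(N), At), W), k)            # Δx (n x 1)
print(dx)  # [[0.11], [0.22]] up to float rounding
```

For a nonlinear f, this dx would then be added to x0 and the step repeated, as in (6.8).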

Calculation of the derivatives (6.2b) for the matrix elements A_ij (i = 1 to m, j = 1 to n):
- By hand: lengthy (m can be >> 1; n also) and error-prone
- The symbolic expression must be evaluated numerically, repeatedly, by substituting x0; likewise for k = ℓ - f(x0)
- Seek help from CAS tools: Maple V, Mathematica ("Mtka"), REDUCE, DERIVE, MACSYMA, muMath, MathCAD, etc.
- URL for the free Mtka download (save-disallowed):
- CAS calculators
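Besides CAS symbolic differentiation, a central-difference quotient gives an independent numerical check of the A_ij. A sketch (the step size h and the test function are assumed choices):

```python
import math

# Central-difference approximation of A_ij = ∂f_i/∂x_j, usable as an
# independent numerical check of CAS-derived symbolic derivatives.
def jacobian(f, x, h=1e-6):
    m, n = len(f(x)), len(x)
    A = [[0.0] * n for _ in range(m)]
    for j in range(n):
        xp, xm = list(x), list(x)
        xp[j] += h
        xm[j] -= h
        fp, fm = f(xp), f(xm)
        for i in range(m):
            A[i][j] = (fp[i] - fm[i]) / (2 * h)
    return A

# Test function with known derivatives: f1 = x1*x2, f2 = sin(x1)
f = lambda x: [x[0] * x[1], math.sin(x[0])]
A = jacobian(f, [1.0, 2.0])
print(A)  # approximately [[2, 1], [cos(1), 0]]
```

Agreement between this numerical Jacobian and the CAS output at a few trial points is a cheap guard against differentiation mistakes.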

Resection example:
1. Download and install the trial version of Mtka.
2. Enter Program 6.1 given in the book.
3. Press Shift + Enter to run each line.
4. The results should agree with the Solver results in Ch. 3.

Generic procedure
1. Define the unknowns x (n x 1 vector).
2. Put the observed data into ℓ (m x 1 vector).
3. Prepare the computed version of ℓ as f(x) (m x 1).
4. Prepare A_ij = D[f_i, x_j] (m x n, symbolic).
5. Prepare a reasonable provisional solution x0.
6. Compute k = ℓ - f(x0); A -> A(x0) (now numerical).
7. Δx = (A^T W A)^(-1) A^T W k.
8. Update x0 to x0 + Δx; repeat from step 6 until the solution converges.
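The eight steps above can be sketched as one loop. The example below is a hypothetical distance-resection (trilateration) problem with invented station coordinates, W = I, and the Jacobian written out analytically rather than by CAS:

```python
import math

# Invented data: three known stations; the unknown point is really (30, 40).
stations = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
truth = (30.0, 40.0)
obs = [math.hypot(truth[0] - e, truth[1] - n) for e, n in stations]  # step 2: "observed" ℓ

def f(x):  # step 3: computed version of the observations
    return [math.hypot(x[0] - e, x[1] - n) for e, n in stations]

def jac(x):  # step 4: A_ij = ∂f_i/∂x_j, here derived by hand, not by CAS
    return [[(x[0] - e) / d, (x[1] - n) / d]
            for (e, n), d in zip(stations, f(x))]

x0 = [25.0, 45.0]  # step 5: provisional solution
for _ in range(10):  # steps 6-8, with W = I
    k = [li - fi for li, fi in zip(obs, f(x0))]  # k = ℓ - f(x0)
    A = jac(x0)
    # solve the 2x2 normal equations (A^T A) Δx = A^T k directly
    n11 = sum(a[0] * a[0] for a in A); n12 = sum(a[0] * a[1] for a in A)
    n22 = sum(a[1] * a[1] for a in A)
    b1 = sum(a[0] * ki for a, ki in zip(A, k))
    b2 = sum(a[1] * ki for a, ki in zip(A, k))
    det = n11 * n22 - n12 * n12
    dx = [(n22 * b1 - n12 * b2) / det, (n11 * b2 - n12 * b1) / det]
    x0 = [x0[0] + dx[0], x0[1] + dx[1]]
    if max(abs(d) for d in dx) < 1e-10:
        break
print(x0)  # converges to approximately [30.0, 40.0]
```

With exact observations the loop converges in a few iterations; with noisy observations the same loop yields the weighted LS estimate instead of the true point.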

Potential applications
- Recovering the missing parameters of a circle from (4 or more) observed points
- Locating the center and the major & minor axes of an ellipse from observed points
- Determining the parameters of a comet trajectory from observed data
- Etc.
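The first application can be sketched with the same generic procedure: recover a circle's center (a, b) and radius r from four observed points, taking f_i = dist(point_i, center) - r with "observed" value 0. The points below are invented and lie exactly on a circle:

```python
import math

# Circle fit by iterative LS: unknowns p = (a, b, r), residual per point
# f_i = hypot(x_i - a, y_i - b) - r, target value 0. Points are invented.
pts = [(7.0, 1.0), (2.0, 6.0), (-3.0, 1.0), (2.0, -4.0)]  # circle: center (2,1), r = 5

def residuals(p):
    a, b, r = p
    return [math.hypot(x - a, y - b) - r for x, y in pts]

def jac(p):
    a, b, r = p
    rows = []
    for x, y in pts:
        d = math.hypot(x - a, y - b)
        rows.append([-(x - a) / d, -(y - b) / d, -1.0])
    return rows

def solve3(M, v):  # Gaussian elimination for a 3x3 system
    M = [row[:] + [vi] for row, vi in zip(M, v)]
    for c in range(3):
        piv = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(3):
            if r != c:
                fct = M[r][c] / M[c][c]
                M[r] = [mr - fct * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][3] / M[i][i] for i in range(3)]

p = [0.0, 0.0, 4.0]  # provisional (a, b, r)
for _ in range(20):
    k = [-ri for ri in residuals(p)]  # k = ℓ - f(x0), with ℓ = 0
    A = jac(p)
    N = [[sum(A[i][r] * A[i][c] for i in range(len(pts))) for c in range(3)] for r in range(3)]
    rhs = [sum(A[i][r] * k[i] for i in range(len(pts))) for r in range(3)]
    dp = solve3(N, rhs)
    p = [pi + di for pi, di in zip(p, dp)]
    if max(abs(d) for d in dp) < 1e-12:
        break
print(p)  # approximately [2.0, 1.0, 5.0]
```

The ellipse and trajectory applications follow the same pattern; only the residual function and its Jacobian change.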