Geology 5670/6670 Inverse Theory 27 Feb 2015 © A.R. Lowry 2015 Read for Wed 25 Feb: Menke Ch 9 (163-188) Last time: The Sensitivity Matrix (Revisited)

The Sensitivity Matrix, or Kernel Matrix, G can be thought of as the matrix of derivatives:

G_{ij} = \partial F_i(m) / \partial m_j

and this holds true regardless of the nature of the model equation/operator F! If F is linear, the derivatives are independent of the model parameters m and the second derivatives are zero, so the minimum-error solution is found in a single step. If F is nonlinear but the derivatives can be evaluated analytically, the derivatives depend on m and gradient-search methods iteratively seek the minimum error. If analytical derivatives can't be found, the derivatives can be evaluated numerically using a finite-difference approximation such as:

\partial F_i / \partial m_j \approx [ F_i(m + \Delta m_j e_j) - F_i(m) ] / \Delta m_j

(where e_j is the unit vector in the j-th parameter direction and \Delta m_j is a small perturbation of that parameter).
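A minimal sketch (not from the lecture; the forward operator F, the step sizes, and the example decay model are illustrative assumptions) of that forward-difference Jacobian in Python/NumPy:

```python
import numpy as np

def numerical_jacobian(F, m, dm=1e-6):
    """Forward-difference estimate of the sensitivity matrix G_ij = dF_i/dm_j.

    F  : callable mapping an n-vector of model parameters to an N-vector of predicted data
    m  : current model parameter vector (length n)
    dm : scalar or length-n array of parameter perturbations
    """
    m = np.asarray(m, dtype=float)
    dm = np.broadcast_to(np.asarray(dm, dtype=float), m.shape)
    F0 = np.asarray(F(m), dtype=float)
    G = np.empty((F0.size, m.size))
    for j in range(m.size):
        m_pert = m.copy()
        m_pert[j] += dm[j]                 # perturb one parameter at a time
        G[:, j] = (np.asarray(F(m_pert), dtype=float) - F0) / dm[j]
    return G

# Illustration with a simple nonlinear forward model: exponential decay sampled at times t
t = np.linspace(0.0, 10.0, 20)
F = lambda m: m[0] * np.exp(-m[1] * t)
G = numerical_jacobian(F, [2.0, 0.3])      # 20 x 2 sensitivity matrix
```

Each column j is the response of the predicted data to a small change in parameter j, which is exactly what a gradient-search step needs when analytical derivatives aren't available.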

For inequality constraints on an L2 problem, use Quadratic Programming. The general statement of the QP problem is something like:

Minimize (1/2) x^T H x + c^T x subject to A x \le b

So we want to express our problem in these forms. Note that:

||d - Gm||_2^2 = (d - Gm)^T (d - Gm) = m^T G^T G m - 2 d^T G m + d^T d

So our objective function to minimize is

(1/2) m^T (2 G^T G) m + (-2 G^T d)^T m

(i.e. let H = 2 G^T G ; c = -2 G^T d ), since the constant term d^T d doesn't affect the minimizer. If our constraints are of the form A m \le b, then like with linear programming, we can treat quadratic programming as a black box and find a suitable algorithm (e.g. quadprog in Matlab) to solve.
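A minimal sketch (the arrays and the choice of solver are assumptions for illustration, not part of the lecture) of an inequality-constrained L2 problem posed this way in Python; SciPy's SLSQP minimizer is used as the "black box" since the objective is quadratic and the constraints are linear:

```python
import numpy as np
from scipy.optimize import minimize

# Made-up linear problem d = G m + noise, with the inequality constraint m >= 0
rng = np.random.default_rng(0)
G = rng.normal(size=(20, 3))
m_true = np.array([1.0, 0.5, 2.0])
d = G @ m_true + 0.05 * rng.normal(size=20)

# QP ingredients: minimize 1/2 m^T H m + c^T m  subject to  A m <= b
H = 2.0 * G.T @ G
c = -2.0 * G.T @ d
A = -np.eye(3)      # -m <= 0, i.e. m >= 0
b = np.zeros(3)

objective = lambda m: 0.5 * m @ H @ m + c @ m
constraints = {"type": "ineq", "fun": lambda m: b - A @ m}   # SLSQP expects g(m) >= 0

result = minimize(objective, x0=np.zeros(3), method="SLSQP", constraints=constraints)
m_qp = result.x
```

A dedicated QP routine (quadprog in Matlab, or packages such as cvxopt or quadprog in Python) exploits the quadratic structure directly; SLSQP is shown here only because it ships with SciPy.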

Stochastic Inversion: Suppose we know (or expect) something about the model parameters m before we begin the inversion, i.e., we know the expected value ⟨m⟩ and an a priori covariance matrix

C_m = ⟨(m - ⟨m⟩)(m - ⟨m⟩)^T⟩

We can express this problem as

d - G⟨m⟩ = G(m - ⟨m⟩) + e, or d' = G m' + e

in which the remainder part of the model, m' = m - ⟨m⟩, has ⟨m'⟩ = 0 and covariance C_m (and the data errors e have covariance C_ee). We seek to find a generalized inverse G^+ that minimizes the mean square error:

⟨(m' - G^+ d')^T (m' - G^+ d')⟩

of the estimate m'_est = G^+ d'.

Expanding and re-writing using a few of our math tricks, minimizing this is equivalent to minimizing:

(d' - G m')^T C_ee^{-1} (d' - G m') + m'^T C_m^{-1} m'

You can convince yourself (if so inclined) that the solution to this minimization problem is:

G^+ = (G^T C_ee^{-1} G + C_m^{-1})^{-1} G^T C_ee^{-1}

or:

G^+ = C_m G^T (G C_m G^T + C_ee)^{-1}

(Does this remind you of anything we've seen before?)
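A minimal numerical sketch (the kernel, prior, and covariances are made-up examples, and the equivalence check is an addition, not from the lecture) of the stochastic inverse in Python/NumPy, verifying that the two forms of G^+ agree:

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.normal(size=(15, 4))                  # kernel matrix
m_prior = np.array([0.5, -1.0, 2.0, 0.0])     # expected value <m>
Cm = 0.5 * np.eye(4)                          # a priori model covariance
Cee = 0.1 * np.eye(15)                        # data error covariance

# Synthetic data consistent with the prior
m_true = m_prior + rng.multivariate_normal(np.zeros(4), Cm)
d = G @ m_true + rng.multivariate_normal(np.zeros(15), Cee)

# Remainder problem: d' = G m' + e
d_rem = d - G @ m_prior

# Two equivalent forms of the stochastic (minimum mean-square-error) inverse
Gplus_a = np.linalg.solve(G.T @ np.linalg.solve(Cee, G) + np.linalg.inv(Cm),
                          G.T @ np.linalg.inv(Cee))
Gplus_b = Cm @ G.T @ np.linalg.inv(G @ Cm @ G.T + Cee)
assert np.allclose(Gplus_a, Gplus_b)

m_est = m_prior + Gplus_a @ d_rem             # add the prior mean back in
```

The first form has the same structure as the weighted, damped least-squares estimator, which is presumably what the closing question on the slide is hinting at.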