458 Interlude (Optimization and other Numerical Methods) Fish 458, Lecture 8

458 Numerical Methods Most fisheries assessment problems are mathematically too complicated to solve analytically, so we often have to resort to numerical methods. The two most frequent questions requiring numerical solutions are: find the values for a set of parameters so that they satisfy a set of (non-linear) equations; and find the values for a set of parameters that minimize some function. Note: numerical methods can (and do) fail – you need to know enough about them to be able to check for failure.

458 Minimizing a Function The problem: find the vector x so that the function f(x) is minimized (note: maximizing f(x) is the same as minimizing -f(x)). We may place bounds on the values for the elements of x (e.g. some must be positive). By definition, at a minimum the derivatives with respect to every element of x are zero: ∂f/∂x_i = 0 for all i.

458 Analytic Approaches Sometimes it is possible to solve the derivative equations directly: set the derivatives of the function to zero and solve analytically for the parameters. The classic example is linear least squares, where the conditions for a minimum can be solved in closed form.

458 Linear Models – The General Case Generalizing to a linear model y = Xb + e, the sum of squares is minimized by the normal-equations solution b = (X'X)^-1 X'y. Exercise: check that this reduces to the familiar intercept and slope formulas for the simple straight-line case.
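A minimal numerical sketch of that solution (assuming the least-squares reading above; the data and variable names here are made up for illustration):

```python
import numpy as np

# Hypothetical straight-line data: y = 2 + 3x plus noise
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 20)
y = 2 + 3 * x + rng.normal(0, 0.5, x.size)

# Design matrix with a column of ones for the intercept
X = np.column_stack([np.ones_like(x), x])

# Normal-equations solution b = (X'X)^-1 X'y
b_normal = np.linalg.solve(X.T @ X, X.T @ y)

# The same fit via a numerically safer library routine
b_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)

print(b_normal, b_lstsq)  # both should be close to [2, 3]
```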

458 Analytical Approaches-II Use analytical approaches whenever possible. Finding analytical solutions for some of the parameters of a complicated model can substantially speed up the process of minimizing the function. For example, the catchability coefficient q of the dynamic Schaefer model has a closed-form estimate: assuming the index is proportional to biomass with log-normal observation error, ln q = (1/n) Σ_t ln(I_t / B_t), where I_t is the observed index and B_t the model-predicted biomass.
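A hedged sketch of that closed form (the function name and data are illustrative, not the lecture's code; the log-normal error assumption follows the formula above):

```python
import numpy as np

def analytic_q(index, biomass):
    """Closed-form catchability estimate, assuming I_t = q * B_t with
    log-normal observation error (a common choice; illustrative here)."""
    return np.exp(np.mean(np.log(np.asarray(index) / np.asarray(biomass))))

# Hypothetical survey index and model-predicted biomass
index = [1.20, 1.05, 0.90, 0.80]
biomass = [2400.0, 2100.0, 1850.0, 1600.0]
print(analytic_q(index, biomass))  # roughly 5e-4
```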

458 Numerical Methods (Newton’s Method) Single variable version-I We wish to find the value of x such that f(x) is at a minimum. 1. Guess a value for x. 2. Determine whether increasing or decreasing x will lead to a lower value for f(x) (based on the derivative). 3. Assess the slope and its change (first and second derivatives of f) to determine how far to move from the current value of x. 4. Change x based on step 3. 5. Repeat steps 2-4 until no further progress is made.

458 Numerical Methods (Newton’s Method) Single variable version-II Formally, the update is x_{n+1} = x_n - f'(x_n)/f''(x_n), repeated until the change in x is negligible. Note: Newton’s method may diverge rather than converge!

458 Minimize: f(x) = 2+(x-2)^4-x^2. This is actually quite a nasty function – differentiate it and see! Convergence took 17 steps in this case; the minimum is at x = 3.165.
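A minimal sketch of the Newton iteration applied to this function (newton_minimize is an illustrative helper, not the lecture's code; from the starting point used here it converges in far fewer than 17 steps, which presumably reflects a different starting value or stopping rule on the slide):

```python
def newton_minimize(f1, f2, x0, tol=1e-8, max_iter=100):
    """Newton's method for a one-dimensional minimum: x <- x - f'(x)/f''(x).
    f1 and f2 are the first and second derivatives of the objective."""
    x = x0
    for _ in range(max_iter):
        step = f1(x) / f2(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton's method did not converge (it can diverge!)")

# The lecture's example: f(x) = 2 + (x - 2)**4 - x**2
f1 = lambda x: 4 * (x - 2) ** 3 - 2 * x   # first derivative
f2 = lambda x: 12 * (x - 2) ** 2 - 2      # second derivative

print(newton_minimize(f1, f2, x0=4.0))  # about 3.165
# Starting near x = 2, where f''(x) < 0, the iteration can head the wrong way.
```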

458 Numerical Methods (Newton’s Method) If the function is quadratic (i.e. the third derivative is zero), Newton’s method will get to the solution in one step (but then you could solve the equation by hand!). If the function is non-quadratic, iteration will be needed. Multi-parameter extensions to Newton’s method exist, but most people prefer gradient-free methods (e.g. the Nelder-Mead SIMPLEX) to deal with problems like this.
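For illustration, a gradient-free (simplex) minimization of a two-parameter sum of squares using SciPy's Nelder-Mead implementation (the data are invented; this is a sketch, not the course's SOLVER workflow):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical data and a two-parameter sum-of-squares objective
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.9, 5.1, 7.2, 8.8, 11.1])

def ssq(params):
    a, b = params
    return np.sum((y - (a + b * x)) ** 2)

# Gradient-free (simplex) minimization; no derivatives required
result = minimize(ssq, x0=[0.0, 1.0], method="Nelder-Mead")
print(result.x)  # roughly [1.0, 2.0]
```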

458 Calculating Derivatives Numerically-I Many numerical methods (e.g. Newton’s method) require derivatives. You should generally calculate the derivatives analytically. However, sometimes this gets very tedious – for the dynamic Schaefer model, for example, the derivative expressions quickly become enormous. I think you get the picture…

458 Calculating Derivatives Numerically-II A derivative can be approximated by finite differences, e.g. the forward difference f'(x) ≈ [f(x+Δx) - f(x)]/Δx or the central difference f'(x) ≈ [f(x+Δx) - f(x-Δx)]/(2Δx). The accuracy of the approximation depends on the number of terms used and on the size of Δx (smaller – but not too small – is better).
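A small sketch of both approximations (function names are mine; the test function is arbitrary):

```python
def forward_diff(f, x, dx=1e-6):
    """Forward-difference approximation to f'(x)."""
    return (f(x + dx) - f(x)) / dx

def central_diff(f, x, dx=1e-6):
    """Central-difference approximation to f'(x); more accurate for the same dx."""
    return (f(x + dx) - f(x - dx)) / (2 * dx)

# Check against a function with a known derivative: d/dx x**3 = 3x**2 = 12 at x = 2
f = lambda x: x ** 3
print(forward_diff(f, 2.0), central_diff(f, 2.0))  # both near 12; central is closer
```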

458 Calculating Derivatives Numerically-III (back to Cape Hake)

458 Optimization – some problems-I Local minima: the algorithm may converge to a local minimum rather than the global minimum. [Figure: an objective function with a global minimum and a local minimum.]

458 Optimization – some problems-II Problems calculating numerical derivatives. Integer parameters. Bounds [either on the function itself (extinction of Cape Hake hasn’t happened - yet) or on the values for the parameters].

458 Optimization – Tricks of the Trade-I To keep a parameter x constrained between a and b, estimate an unconstrained parameter y and transform it: x = a + (0.5 + arctan(y)/π)(b - a). For example, x = 1 + (0.5 + arctan(y)/π) keeps x in the interval [1, 2].
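A minimal sketch of that transform (the helper name bounded is mine):

```python
import math

def bounded(y, a, b):
    """Map an unconstrained value y onto the interval (a, b)
    using the arctangent transform from the slide."""
    return a + (0.5 + math.atan(y) / math.pi) * (b - a)

# y can wander anywhere on the real line; x stays between 1 and 2
for y in (-1e6, -1.0, 0.0, 1.0, 1e6):
    print(y, bounded(y, 1.0, 2.0))
```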

458 Optimization – Tricks of the Trade-II Test the code for the model by fitting to data where the answer is known! Look at the fit graphically (have you maximized rather than minimized the function?). Minimize the function manually; restart the optimization algorithm from the final value; restart the optimization algorithm from different starting values. When fitting n parameters that must add to 1, fit n-1 parameters and set the last to 1 minus the sum of the others (see the sketch below). In SOLVER, use automatic scaling and set the convergence criterion smaller.
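As a small illustration of the sum-to-one tip, a hedged sketch (full_proportions is an illustrative helper; in practice a logit-type transform is often used to keep the proportions valid):

```python
import numpy as np

def full_proportions(free_params):
    """Recover n proportions that sum to 1 from n-1 free parameters.
    (Simple difference parameterization; assumes the free values stay in [0, 1]
    and sum to no more than 1.)"""
    p = np.asarray(free_params, dtype=float)
    return np.append(p, 1.0 - p.sum())

print(full_proportions([0.2, 0.3, 0.1]))  # -> [0.2 0.3 0.1 0.4]
```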

458 Solving an Equation (the Bisection Method) Often we have to solve the equation f(x) = 0 (e.g. the Lotka equation). This can be treated as an optimization problem (i.e. minimize f(x)^2). However, it is usually more efficient to use a numerical method specifically developed for this purpose. We illustrate the bisection method here, but there are many others.

458 Solving an Equation (the Bisection Method)

458 Solving an Equation (the Bisection Method) Find x such that 10+20(x-2)^3+100(x-2)^2=0
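A minimal bisection sketch applied to the slide's equation (bisect is an illustrative implementation, not the lecture's code; the bracket [-4, -3] was chosen because the function changes sign there):

```python
def bisect(g, lo, hi, tol=1e-8, max_iter=200):
    """Bisection: repeatedly halve a bracket [lo, hi] over which g changes sign."""
    g_lo = g(lo)
    if g_lo * g(hi) > 0:
        raise ValueError("g(lo) and g(hi) must have opposite signs")
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        g_mid = g(mid)
        if abs(g_mid) < tol or (hi - lo) < tol:
            return mid
        if g_lo * g_mid < 0:
            hi = mid
        else:
            lo, g_lo = mid, g_mid
    return 0.5 * (lo + hi)

# The slide's example: 10 + 20(x - 2)^3 + 100(x - 2)^2 = 0
g = lambda x: 10 + 20 * (x - 2) ** 3 + 100 * (x - 2) ** 2
print(bisect(g, -4.0, -3.0))  # the root lies near x = -3.02
```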

458 Note The use of numerical methods (particularly optimization) is an art. The only way to get it right is to practice (probably more so than for anything else in this course).

458 Readings Hilborn and Mangel (1997), Chapter 11; Press et al. (1988), Chapter 10.