Fin500J: Mathematical Foundations in Finance


Fin500J: Mathematical Foundations in Finance
Topic 5: Numerical Methods for Optimization
Philip H. Dybvig
Reference: Optimization Toolbox User's Guide in Matlab, 2008, The MathWorks, Inc. Slides designed by Yajun Wang.
Fin500J Topic 5, Fall 2010, Olin Business School

Recall: One-Dimensional Unconstrained Optimization
For a one-dimensional unconstrained optimization problem, we solve the first-order condition f'(x) = 0 and then check the second-order condition: if f''(x*) < 0, then x* is a maximum of f; if f''(x*) > 0, then x* is a minimum.

One-Dimensional Unconstrained Optimization (Example)
Find the maximum of f(x) = 2 sin(x) − x²/10. We solve the root problem for f'(x) = 2 cos(x) − x/5 = 0; the second-order condition f''(x) = −2 sin(x) − 1/5 < 0 is satisfied at the root, so the root is a maximum.

One-Dimensional Unconstrained Optimization (Example, cont.)
We can solve f'(x) = 0 by bisection with initial interval [1, 2], by Newton's method with initial point 1.2, or by the secant method with initial points 1.2 and 2, as presented in Topic 3. We can also solve it in Matlab:
>> f=@(x) 2*cos(x)-1/5*x;
>> fzero(f,[1,2])
ans = 1.4276
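As a quick illustration of the Newton alternative mentioned above, here is a minimal sketch (not from the original slides; the names g and dg are ours, and f''(x) = −2 sin(x) − 1/5 is computed by hand):

g  = @(x) 2*cos(x) - x/5;    % g = f', whose root we want
dg = @(x) -2*sin(x) - 1/5;   % g' = f''
x = 1.2;                     % initial point from the slide
for k = 1:10
    x = x - g(x)/dg(x);      % Newton update
end
x                            % converges to about 1.4276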

Objectives: Using the Optimization Toolbox in Matlab to
- solve unconstrained optimization problems with multiple variables,
- solve linear programming problems,
- solve quadratic programming problems (for example: optimal portfolio),
- solve nonlinear optimization problems with constraints.
Mostly, we will focus on minimization in this topic; max f(x) is equivalent to min −f(x).

Linear Programming / Quadratic Programming / Nonlinear Programming
If f(x) and the constraints are linear, we have a linear program. If f(x) is quadratic and the constraints are linear, we have a quadratic program. If f(x) is not linear or quadratic, and/or the constraints are nonlinear, we have a nonlinear program.

Recall the Optimality Conditions for Unconstrained Minimization with Multiple Variables
Problem: min f(x1, x2, …, xn). Optimality conditions: x* is a local minimum if the gradient of f at x* is zero (first-order condition) and the Hessian matrix of f at x* is positive definite (second-order condition). Example: min f(x1, x2) = exp(x1+x2−1) + exp(x1−x2−1) + exp(−x1−1). In general, the critical point (x1*, x2*) must be found numerically.
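For this example the first-order conditions can be written out explicitly (a worked sketch added here; the same gradient appears in the M-file on the next slide):

∂f/∂x1 = exp(x1+x2−1) + exp(x1−x2−1) − exp(−x1−1) = 0
∂f/∂x2 = exp(x1+x2−1) − exp(x1−x2−1) = 0

Although in general such systems are solved numerically, this one can be solved by hand: the second equation forces x2 = 0, and substituting into the first gives 2 exp(x1−1) = exp(−x1−1), i.e., x1 = −(ln 2)/2 ≈ −0.3466. The Hessian at this point is positive definite, so it is a minimum.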

Unconstrained Optimization with Multiple Variables in Matlab
Step 1: Write an M-file objfun.m and save it under the work path of Matlab:
function f=objfun(x)
f=exp(x(1)+x(2)-1)+exp(x(1)-x(2)-1)+exp(-x(1)-1);
Step 2: Type >> optimtool in the command window to open the Optimization Toolbox.
To plot the 3D graph of this example:
>> X=-6:0.1:2;
>> Y=-4:0.1:4;
>> [XX,YY]=meshgrid(X,Y);
>> Z=exp(XX+YY-1)+exp(XX-YY-1)+exp(-XX-1);
>> surf(XX,YY,Z)
Note: [X,Y] = meshgrid(x,y) transforms the domain specified by vectors x and y into arrays X and Y, which can be used to evaluate functions of two variables and to draw three-dimensional mesh/surface plots. The rows of the output array X are copies of the vector x; the columns of the output array Y are copies of the vector y.

Unconstrained Optimization with Multiple Variables in Matlab (cont.)
We use the function fminunc to solve the unconstrained optimization problem with objective function objfun. For our class, we choose the 'Medium Scale' algorithm. Medium-scale is not a standard term; it is used only to differentiate these algorithms from the large-scale algorithms, which are designed to handle large problems efficiently. For unconstrained minimization, the medium-scale algorithm is the BFGS quasi-Newton method; for constrained minimization, sequential quadratic programming is used. The large-scale algorithms use trust-region methods.
Gradient calculations: gradients are calculated by finite differences unless they are supplied in a function. Analytical expressions for the gradient (and Hessian) of the objective can be incorporated by returning them as additional output arguments. To minimize this function with the gradient provided, modify the M-file objfun.m so the gradient is the second output argument:
function [f,g,H]=objfun(x)
f=exp(x(1)+x(2)-1)+exp(x(1)-x(2)-1)+exp(-x(1)-1);
if nargout>1 % gradient required
  g=[exp(x(1)+x(2)-1)+exp(x(1)-x(2)-1)-exp(-x(1)-1);
     exp(x(1)+x(2)-1)-exp(x(1)-x(2)-1)];
  if nargout>2 % Hessian required (analytic second derivatives of f)
    H=[exp(x(1)+x(2)-1)+exp(x(1)-x(2)-1)+exp(-x(1)-1), exp(x(1)+x(2)-1)-exp(x(1)-x(2)-1);
       exp(x(1)+x(2)-1)-exp(x(1)-x(2)-1),              exp(x(1)+x(2)-1)+exp(x(1)-x(2)-1)];
  end
end
nargout checks the number of output arguments that the calling function requests; see the Matlab help on checking the number of input and output arguments.
Termination tolerances: TolX is a lower bound on the size of a step, meaning the norm of (xi − xi+1); if the solver attempts to take a step smaller than TolX, the iterations end. TolX is sometimes used as a relative bound, meaning iterations end when |xi − xi+1| < TolX·(1 + |xi|), or a similar relative measure. TolFun is a lower bound on the change in the value of the objective function during a step; if |f(xi) − f(xi+1)| < TolFun, the iterations end. TolFun is also sometimes used as a relative bound, meaning iterations end when |f(xi) − f(xi+1)| < TolFun·(1 + |f(xi)|), or a similar relative measure.
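Instead of optimtool, the same problem can be run from the command line (a sketch, assuming the gradient-enabled objfun.m above; option names follow the 2008-era optimset interface):

>> options = optimset('LargeScale','off','GradObj','on');
>> x0 = [-1; 1];                      % an arbitrary starting point
>> [x,fval] = fminunc(@objfun,x0,options)

This should return x ≈ (−0.3466, 0) with fval ≈ 1.04, matching the critical point computed earlier.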

Quasi-Newton Method: an Algorithm Used in the Function fminunc
The quasi-Newton method is based on Newton's method, which is the basis for many techniques for solving optimization problems. Recall Newton's method first: starting from a point x_k, the update is x_{k+1} = x_k − [∇²f(x_k)]⁻¹ ∇f(x_k).

Recall the Algorithm of Newton's Method
Newton's method stops when the gradient is sufficiently close to zero: stop when ||∇f(x_k)|| = sqrt((∂f/∂x1)² + (∂f/∂x2)² + … + (∂f/∂xn)²) ≤ ε. Note that Newton's method may fail if the Hessian is not positive definite.

Quasi-Newton Methods Replace the Hessian with a Positive Definite Matrix H_k
The function fminunc uses the BFGS (Broyden, Fletcher, Goldfarb and Shanno) Hessian update in its quasi-Newton algorithm. The BFGS update is
H_{k+1} = H_k + (q_k q_k')/(q_k' s_k) − (H_k s_k s_k' H_k)/(s_k' H_k s_k),
where s_k = x_{k+1} − x_k and q_k = ∇f(x_{k+1}) − ∇f(x_k). A large number of Hessian updating methods have been developed, but the BFGS formula is thought to be the most effective for use in a general-purpose method (see the Optimization Toolbox Guide, p. 5-8).
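A minimal Matlab sketch of one BFGS update (added for illustration; the helper name bfgs_update is ours, and H, s, q are as defined above):

function Hnew = bfgs_update(H, s, q)
% One BFGS Hessian update: H is the current approximation,
% s = x_{k+1} - x_k, q = grad f(x_{k+1}) - grad f(x_k).
Hnew = H + (q*q')/(q'*s) - (H*(s*s')*H)/(s'*H*s);

The update preserves positive definiteness as long as the curvature condition q_k'·s_k > 0 holds, which a proper line search guarantees.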

Linear Programming
Both the objective function and the constraints are linear; examples are maximizing profit or minimizing cost.
Objective function: max or min Z = c1x1 + c2x2 + … + cnxn, where cj is the payoff of each unit of the jth activity and xj is the magnitude of the jth activity.
The constraints can be represented by ai1x1 + ai2x2 + … + ainxn ≤ bi, where aij is the amount of the ith resource that is consumed by each unit of the jth activity and bi is the amount of the ith resource available.
Finally, we add the constraint that all activities have a nonnegative value: xi ≥ 0.

Example
A gas processing plant receives a fixed amount of raw gas each week and can process two grades of heating gas (regular and premium). Demand for the product is high (i.e., production is guaranteed to sell), each grade yields a different profit, and each grade has different production-time and on-site storage constraints. The facility is only open 120 hrs/week. Develop a linear programming formulation to maximize profits for this operation.
Let x1 = amount of regular and x2 = amount of premium, so total profit = 150x1 + 175x2.
Objective function: maximize Z = 150x1 + 175x2
subject to
7x1 + 11x2 ≤ 77 (material constraint)
10x1 + 8x2 ≤ 120 (time constraint)
x1 ≤ 9 (storage constraint)
x2 ≤ 6 (storage constraint)
x1, x2 ≥ 0 (positivity constraint)

Graphical Solution (1)
Plot each constraint as a line and shade the feasible region; for example, 7x1 + 11x2 ≤ 77 → x2 ≤ −(7/11)x1 + 7.

Graphical Solution (2)
Now we add the objective function to the plot. Start with Z = 0 (0 = 150x1 + 175x2) and Z = 500 (500 = 150x1 + 175x2), then push the contour up through Z = 1200 and Z = 1550, which is still in the feasible region. The optimum is at the vertex where the material constraint and the storage constraint x1 ≤ 9 intersect: x1* = 9, x2* = 14/11 ≈ 1.3, giving Z* ≈ 1573.

Linear Programming in Matlab
Example: the gas-processing problem above.
Step 1: Type >> optimtool in the command window to open the Optimization Toolbox.
Step 2: Define the matrices A, Aeq and the vectors f, b, lb, ub.
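Equivalently, the LP can be solved directly with linprog from the command line (a sketch using the gas-processing data above; linprog minimizes, so the profit coefficients are negated):

>> f  = [-150; -175];            % minimize -Z = -(150 x1 + 175 x2)
>> A  = [7 11; 10 8];            % material and time constraints
>> b  = [77; 120];
>> lb = [0; 0];                  % positivity
>> ub = [9; 6];                  % storage limits
>> [x,fval] = linprog(f,A,b,[],[],lb,ub);
>> Zmax = -fval                  % about 1573 at x = (9, 14/11)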

Linear Programming in Matlab (Example)
File → Export to Workspace can export the results, including the Lagrange multipliers (lambda), etc.
The simplex method. Three facts about linear programs:
(1) If there is exactly one optimal point, it must be at a feasible vertex (a vertex is a point where constraints intersect each other); if there are multiple optimal points, at least two must be at adjacent vertices.
(2) There is a finite number of feasible vertices. So we could find the optimal point by evaluating the objective at every feasible vertex, but this is not efficient: the number of vertices grows exponentially with the number of constraints and variables.
(3) If the objective function evaluated at a feasible vertex is lower (higher) than or equal to its value at all adjacent feasible vertices, then that vertex is an optimal point for the minimization (maximization) problem.
The simplex method traverses feasible vertices until no adjacent feasible vertex improves the objective function: it starts with a basic feasible solution, then moves through a sequence of other basic feasible solutions that successively improve the value of the objective. The simplex method is widely used today.

Quadratic Programming in Matlab
Recall that in a quadratic programming problem the objective function is quadratic and the constraints are linear. The standard form in Matlab, solved by quadprog, is
min (1/2)x'Hx + f'x subject to A·x ≤ b, Aeq·x = beq, lb ≤ x ≤ ub,
where H is a symmetric matrix, A·x ≤ b are the inequality constraints, and Aeq·x = beq are the equality constraints. If we have only inequality constraints, we set Aeq and beq empty; lb is the vector of lower bounds and ub the vector of upper bounds.
Step 1: Type >> optimtool in the command window to open the Optimization Toolbox.
Step 2: Define the matrices H, A and the vectors f, b.

Quadratic Programming in Matlab (Example: Portfolio Optimization)
The data on the next slide describe a three-asset portfolio problem: minimize the portfolio risk (1/2)x'Hx, where H is the covariance matrix of returns, subject to linear budget, minimum-return, and nonnegativity constraints.

Quadratic Programming in Matlab (quadprog)
H=[0.017087987 0.003298885 0.001224849;
   0.003298885 0.005900944 0.004488271;
   0.001224849 0.004488271 0.063000818]
f=[0; 0; 0]
A=[1 1 1;
   -0.026 -0.008 -0.074;
   -1 0 0;
   0 -1 0;
   0 0 -1]
b=[1000; -50; 0; 0; 0]
The function quadprog uses an active-set strategy. The first phase involves the calculation of a feasible point; the second phase involves the generation of an iterative sequence of feasible points that converge to the solution (see Chapter 16 of Nocedal and Wright, Numerical Optimization, 2006, on active-set methods). If we have only equality constraints, we can use the large-scale method; if we have both equalities and bounds, we must use the medium-scale method.
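With these matrices in the workspace, the corresponding command-line call is a one-liner (a sketch, assumed equivalent to the optimtool setup; the rows of A encode the budget, minimum-return, and nonnegativity constraints in A·x ≤ b form):

>> [x, risk] = quadprog(H, f, A, b)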

Nonlinear Programming in Matlab (Constrained Nonlinear Optimization)
Formulation: finally, we learn how to solve nonlinear programs in Matlab. The standard form is
min f(x) subject to c(x) ≤ 0, ceq(x) = 0, A·x ≤ b, Aeq·x = beq, lb ≤ x ≤ ub,
where A is the matrix for linear inequality constraints, Aeq is the matrix for linear equality constraints, b and beq are the corresponding vectors, lb and ub are the vectors of lower and upper bounds, c(x) gives the nonlinear inequalities, and ceq(x) gives the nonlinear equalities.

Nonlinear Programming in Matlab (Example)
Find x that solves
min f(x) = exp(x1)·(4x1² + 2x2² + 4x1x2 + 2x2 + 1)
subject to x1x2 − x1 − x2 ≤ −1.5 and x1x2 ≥ −10.
Step 1: Write an M-file objfunc.m for the objective function:
function f=objfunc(x)
f=exp(x(1))*(4*x(1)^2+2*x(2)^2+4*x(1)*x(2)+2*x(2)+1);
Step 2: Write an M-file confun.m for the constraints:
function [c, ceq]=confun(x)
% Nonlinear inequality constraints
c=[1.5+x(1)*x(2)-x(1)-x(2); -x(1)*x(2)-10];
% Nonlinear equality constraints
ceq=[];
Step 3: Type >> optimtool to open the Optimization Toolbox. (This example has no linear constraints.)
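The same problem can also be run from the command line (a sketch, assumed equivalent to the optimtool setup; x0 = [-1, 1] is just a starting guess, and the empty matrices stand for the absent linear constraints and bounds):

>> x0 = [-1, 1];
>> [x,fval] = fmincon(@objfunc,x0,[],[],[],[],[],[],@confun)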

Nonlinear Programming in Matlab (Example, cont.)
(Optimtool results for the example.)

Sequential Quadratic Programming: the Algorithm Used in the Function fmincon (Basic Idea)
The basic idea is analogous to Newton's method for unconstrained optimization: at each step, a local model of the optimization problem is constructed and solved, yielding a step toward the solution of the original problem. In unconstrained optimization only the objective function must be approximated; in a nonlinear program, both the objective and the constraints must be modeled. A sequential quadratic programming (SQP) method uses a quadratic model of the objective and a linear model of the constraints, i.e., it solves a quadratic program at each iteration:
f(x_k + p) ≈ f(x_k) + ∇f(x_k)'p + (1/2)p'H_k p,
g_i(x_k + p) ≈ g_i(x_k) + ∇g_i(x_k)'p, so the constraint g_i(x) = 0 is replaced by ∇g_i(x_k)'p + g_i(x_k) = 0,
where H_k is a positive definite approximation of the Hessian matrix of the Lagrangian function. H_k can be updated by any of the quasi-Newton methods, for example the BFGS method. The new iterate is x_{k+1} = x_k + α_k p_k, where the step-length parameter α_k is determined by an appropriate line-search procedure. (Remember: Newton's method may not converge to a solution if we move too far between estimates; this can be remedied by limiting how far we move, and α_k determines how far to move.)