Introduction to Optimization Methods

Introduction to Optimization Methods MATLAB Optimization Toolbox

Presentation Outline
Introduction
Function Optimization
Optimization Toolbox – routines / algorithms available
Minimization Problems – unconstrained and constrained, with an example and algorithm description
Multiobjective Optimization
Optimal PID Control Example

Function Optimization
Optimization concerns the minimization or maximization of functions. The standard optimization problem is: minimize the objective function f(x), subject to equality constraints, inequality constraints, and side constraints, where f(x) is the objective function, which measures and evaluates the performance of the system, and x is a column vector of design variables that affect the performance of the system. In the standard problem we minimize the function; maximization is equivalent to minimizing the negative of the objective function.
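
As a tiny illustration of that last point (g is a hypothetical one-variable objective, not from the slides), maximization is carried out by minimizing the negated function:
g    = @(x) -(x-2).^2 + 3;   % hypothetical objective to maximize
negg = @(x) -g(x);           % MATLAB's minimizers work on -g
xmax = fminsearch(negg, 0);  % maximizer, approximately 2
gmax = g(xmax);              % maximum value, approximately 3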

Function Optimization (Cont.)
Constraints are limitations on the design space. They can be linear or nonlinear, explicit or implicit functions.
Equality constraints
Inequality constraints (most algorithms require the less-than-or-equal form!)
Side constraints (lower and upper bounds on the design variables)

Optimization Toolbox
The Optimization Toolbox is a collection of functions that extends the capability of MATLAB. The toolbox includes routines for:
Unconstrained optimization
Constrained nonlinear optimization, including goal attainment, minimax, and semi-infinite minimization problems
Quadratic and linear programming
Nonlinear least squares and curve fitting
Solution of nonlinear systems of equations
Constrained linear least squares
Specialized algorithms for large-scale problems

Minimization Algorithm

Minimization Algorithm (Cont.)

Equation Solving Algorithms

Least-Squares Algorithms

Implementing the Optimization Toolbox
Most of these optimization routines require the definition of an M-file containing the function, f, to be minimized. Maximization is achieved by supplying the routines with -f. Default optimization parameters can be changed through an options structure that is passed to the routines.

Unconstrained Minimization
Consider the problem of finding the pair of values [x1 x2]T that minimizes f(x) = exp(x1)*(4*x1^2 + 2*x2^2 + 4*x1*x2 + 2*x2 + 1).
Steps:
1. Create an M-file that returns the function value (the objective function); call it objfun.m.
2. Invoke the unconstrained minimization routine fminunc.

Step 1 – Objective Function (objfun.m)
function f = objfun(x)
% objective function
f = exp(x(1))*(4*x(1)^2 + 2*x(2)^2 + 4*x(1)*x(2) + 2*x(2) + 1);

Step 2 – Invoke Routine
x0 = [-1,1];                              % starting guess
options = optimset('LargeScale','off');   % optimization parameter settings (input argument)
[xmin,feval,exitflag,output] = fminunc('objfun',x0,options);   % output arguments on the left, input arguments on the right

Results
xmin =
    0.5000   -1.0000
feval =
   1.3028e-010
exitflag =
     1
output =
       iterations: 7
        funcCount: 40
         stepsize: 1
    firstorderopt: 8.1998e-004
        algorithm: 'medium-scale: Quasi-Newton line search'
xmin is the minimum point of the design variables and feval the objective function value there. exitflag tells whether the algorithm converged: if exitflag > 0, a local minimum was found. output contains some other information about the run.

More on fminunc – Input
[xmin,feval,exitflag,output,grad,hessian] = fminunc(fun,x0,options,P1,P2,...)
fun: the objective function (function name or handle).
x0: the initial guess; a vector whose length equals the number of design variables.
options: sets some of the optimization parameters (more in a few slides).
P1,P2,...: additional parameters passed through to the objective function.
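
A brief sketch of the parameter-passing idea: the P1,P2,... mechanism hands extra values straight to the objective function, and in newer MATLAB releases the same effect is usually obtained with an anonymous function. Here scale and objfun_p are hypothetical, introduced only for illustration:
function f = objfun_p(x, scale)   % hypothetical objective that takes an extra parameter
f = scale*exp(x(1))*(4*x(1)^2 + 2*x(2)^2 + 4*x(1)*x(2) + 2*x(2) + 1);

scale = 2;
xmin = fminunc(@(x) objfun_p(x,scale), [-1,1], optimset('LargeScale','off'));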

More on fminunc – Output
[xmin,feval,exitflag,output,grad,hessian] = fminunc(fun,x0,options,P1,P2,...)
xmin: vector of the minimum (optimal) point; its length equals the number of design variables.
feval: the objective function value at the optimal point.
exitflag: a value showing whether the optimization routine terminated successfully (converged if > 0).
output: a structure giving more details about the optimization.
grad: the gradient at the optimal point.
hessian: the Hessian at the optimal point.
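
A quick sketch, reusing objfun and options from the earlier example, of requesting the last two outputs to check optimality:
[xmin,feval,exitflag,output,grad,hessian] = fminunc('objfun',[-1,1],options);
grad      % close to zero at a local minimum
hessian   % positive definite at a strict local minimum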

Options Setting – optimset
optimset('param1',value1,'param2',value2,...)
The routines in the Optimization Toolbox have a set of default optimization parameters. However, the toolbox allows you to alter some of them, for example the tolerances, the step size, user-supplied gradient or Hessian values, the maximum number of iterations, etc. There are also features such as displaying the values at each iteration and checking a user-supplied gradient or Hessian. You can also choose which algorithm to use.

Options Setting (Cont.)
optimset('param1',value1,'param2',value2,...)
Type help optimset in the command window and a list of the available option settings will be displayed. How to read it? For example:
LargeScale - Use large-scale algorithm if possible [ {on} | off ]
Here LargeScale is the parameter (param1), on/off are the possible values (value1), and the default is the one shown in { }.

Options Setting (Cont.)
optimset('param1',value1,'param2',value2,...)
LargeScale - Use large-scale algorithm if possible [ {on} | off ]
Since the default is on, to turn it off we just type
options = optimset('LargeScale','off')
and pass options as an input to fminunc.

Useful Option Settings (highly recommended!)
Display - Level of display [ off | iter | notify | final ]
MaxIter - Maximum number of iterations allowed [ positive integer ]
TolCon - Termination tolerance on the constraint violation [ positive scalar ]
TolFun - Termination tolerance on the function value [ positive scalar ]
TolX - Termination tolerance on X [ positive scalar ]
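
For example, the recommended settings can be combined in one optimset call (a short sketch reusing objfun from the earlier example; the tolerance values are illustrative):
options = optimset('Display','iter','MaxIter',200,'TolFun',1e-6,'TolX',1e-6);
[xmin,feval] = fminunc('objfun',[-1,1],options);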

fminunc and fminsearch
fminunc uses gradient and Hessian information and has two modes: Large-Scale (interior-reflective Newton) and Medium-Scale (quasi-Newton BFGS). It is not preferred for solving highly discontinuous functions, and it may only give local solutions.
fminsearch is generally less efficient than fminunc for problems of order greater than two. However, when the problem is highly discontinuous, fminsearch may be more robust. It is a direct search method that does not use numerical or analytic gradients as fminunc does. It too may only give local solutions.
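
For comparison, a short sketch of solving the same objfun problem with the derivative-free fminsearch:
x0 = [-1,1];
[xmin_ns,feval_ns] = fminsearch(@objfun,x0)   % Nelder-Mead direct search, no gradients
% On this smooth problem both solvers reach approximately xmin = [0.5 -1.0];
% fminsearch typically needs more function evaluations than fminunc.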

Constrained Minimization
[xmin,feval,exitflag,output,lambda,grad,hessian] = fmincon(fun,x0,A,B,Aeq,Beq,LB,UB,NONLCON,options,P1,P2,...)
lambda holds the Lagrange multipliers at the optimal point.

Example
Minimize f(x) = -x1*x2*x3:
function f = myfun(x)
f = -x(1)*x(2)*x(3);
Subject to (as implemented in the code on the following slides): 0 <= x1 + 2*x2 + 2*x3 <= 72, the nonlinear inequality 2*x1^2 + x2 <= 0, and the bounds 0 <= x1, x2, x3 <= 30.

Example (Cont.)
For the nonlinear constraint, create a function called nonlcon which returns the two constraint vectors [C,Ceq]:
function [C,Ceq] = nonlcon(x)
C = 2*x(1)^2 + x(2);   % nonlinear inequality, C(x) <= 0
Ceq = [];              % remember to return an empty matrix if the constraint does not apply

Example (Cont.)
fmincon(fun,x0,A,B,Aeq,Beq,LB,UB,NONLCON,options,P1,P2,...)
x0 = [10;10;10];           % initial guess (3 design variables)
A = [-1 -2 -2; 1 2 2];     % linear inequality constraints A*x <= B
B = [0 72]';
LB = [0 0 0]';             % lower bounds
UB = [30 30 30]';          % upper bounds
[x,feval] = fmincon(@myfun,x0,A,B,[],[],LB,UB,@nonlcon)
CAREFUL: the arguments must be supplied in exactly the sequence shown above; pass [] for any constraint that does not apply (here Aeq and Beq).

Example (Cont.)
Warning: Large-scale (trust region) method does not currently solve this type of problem, switching to medium-scale (line search).
Optimization terminated successfully: magnitude of directional derivative in search direction less than 2*options.TolFun and maximum constraint violation is less than options.TolCon.
Active Constraints: 2 9
x =
   0.00050378663220
   0.00000000000000
  30.00000000000000
feval =
  -4.657237250542452e-035
The nine constraints of this example are numbered in the sequence A, B, Aeq, Beq, LB, UB, C, Ceq, which is how the active-constraint indices above should be read.
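
As a small follow-up sketch (not on the original slide), the Lagrange multipliers can also be requested; fmincon returns them as a structure:
[x,feval,exitflag,output,lambda] = fmincon(@myfun,x0,A,B,[],[],LB,UB,@nonlcon);
lambda.ineqlin      % multipliers of the linear inequalities A*x <= B
lambda.lower        % multipliers of the lower bounds LB
lambda.upper        % multipliers of the upper bounds UB
lambda.ineqnonlin   % multiplier of the nonlinear inequality C(x) <= 0
% A nonzero multiplier marks the corresponding constraint as active at the solution.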

Multiobjective Optimization
The previous examples involved problems with a single objective function. Now let us look at solving a problem with a multiobjective (vector-valued) function using lsqnonlin. The example is data curve fitting: each data point contributes its own error term, so the residuals form a vector of objectives that are reduced together.

lsqnonlin in MATLAB – Curve Fitting
The model is Rcal = 1./(1 + exp(1.0986/b1*(x - b2))); the task is to find the parameters b1 and b2 that best fit the experimental data.

%recfit.m
clc;
clear;
global data;
data = [ 0.6000 0.999
         0.6500 0.998
         0.7000 0.997
         0.7500 0.995
         0.8000 0.982
         0.8500 0.975
         0.9000 0.932
         0.9500 0.862
         1.0000 0.714
         1.0500 0.520
         1.1000 0.287
         1.1500 0.134
         1.2000 0.0623
         1.2500 0.0245
         1.3000 0.0100
         1.3500 0.0040
         1.4000 0.0015
         1.4500 0.0007
         1.5000 0.0003 ];   % experimental data, 1st column x, 2nd column R
x = data(:,1);
Rexp = data(:,2);
plot(x,Rexp,'ro');           % plot the experimental data
hold on
b0 = [1.0 1.0];              % start values for the parameters
b = lsqnonlin('recfun',b0)   % run lsqnonlin with start value b0; returned parameter values stored in b
Rcal = 1./(1+exp(1.0986/b(1)*(x-b(2))));   % calculate the fitted values with parameters b
plot(x,Rcal,'b');            % plot the fitted curve on the same graph

%recfun.m
function y = recfun(b)
global data;
x = data(:,1);
Rexp = data(:,2);
Rcal = 1./(1+exp(1.0986/b(1)*(x-b(2))));   % value calculated from the model
%y = sum((Rcal-Rexp).^2);    % not needed: lsqnonlin squares and sums the residuals itself
y = Rcal - Rexp;             % vector of differences between calculated and experimental values

Running the script returns the fitted parameters:
>> recfit
>> b = 0.0603 1.0513

Numerical Optimization
1. Newton-Raphson method: root solver for systems of nonlinear equations (MATLAB Optimization Toolbox: fsolve).
2. One-dimensional solver (MATLAB Optimization Toolbox).
Steepest gradient ascent/descent methods: from the current point, step along the gradient, x(i+1) = x(i) +/- h*grad f(x(i)), where h is the magnitude of the step along the search direction. The minus sign achieves steepest descent; the plus sign achieves steepest ascent.
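
The worked example and the chapra14.5 script on the following slides solve for the optimal step h* at every iteration. As a simpler illustration, here is a minimal fixed-step sketch of the same ascent update applied to the example function f(w1,w2) = -1.5*w1^2 + 2.25*w1*w2 - 2*w2^2 + 1.75*w2 (the fixed step size is an assumption, chosen small enough to converge):
gradf = @(w) [-3*w(1) + 2.25*w(2); 2.25*w(1) - 4*w(2) + 1.75];   % gradient of the example function
w = [1; 1];    % initial guess
h = 0.1;       % fixed step size (assumed); the slides instead solve for the optimal h*
for k = 1:100
    w = w + h*gradf(w);   % '+' climbs the gradient (steepest ascent); use '-' for descent
end
w   % approaches roughly [0.57; 0.76], the unconstrained maximizer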

Steepest Gradient
Solve the following for two steps of steepest ascent: maximize f(x,y) = -1.5*x^2 + 2.25*x*y - 2*y^2 + 1.75*y, starting from the initial guess x = 1, y = 1 (the same function coded in chaprafun on the next slide).
The partial derivatives evaluated at the initial guess x = 1 and y = 1 are df/dx = -3x + 2.25y = -0.75 and df/dy = 2.25x - 4y + 1.75 = 0. Therefore, the search direction is -0.75i. Substituting the step into f, differentiating with respect to h, and setting the result equal to zero gives h* = 0.33333. Therefore, the result for the first iteration is x = 1 - 0.75(0.3333) = 0.75 and y = 1 + 0(0.3333) = 1.
For the second iteration, the partial derivatives at (0.75, 1) are df/dx = 0 and df/dy = -0.5625, so the search direction is -0.5625j. Differentiating with respect to h and setting the result equal to zero gives h* = 0.25. Therefore, the result for the second iteration is x = 0.75 + 0(0.25) = 0.75 and y = 1 + (-0.5625)(0.25) = 0.859375.

%chapra14.5.m - contour plot and two-variable steepest ascent with symbolically optimized step
clear
clc
clf
ww1 = 0:0.01:1.2;
ww2 = ww1;
[w1,w2] = meshgrid(ww1,ww2);
J = -1.5*w1.^2 + 2.25*w2.*w1 - 2*w2.^2 + 1.75*w2;
cs = contour(w1,w2,J,70);
%clabel(cs);
hold
grid
w1 = 1; w2 = 1; h = 0;
for i = 1:10
    syms h
    dfw1 = -3*w1(i) + 2.25*w2(i);             % partial derivative with respect to w1
    dfw2 = 2.25*w1(i) - 4*w2(i) + 1.75;       % partial derivative with respect to w2
    fw1 = -1.5*(w1(i)+dfw1*h).^2 + 2.25*(w2(i)+dfw2*h).*(w1(i)+ ...
          dfw1*h) - 2*(w2(i)+dfw2*h).^2 + 1.75*(w2(i)+dfw2*h);   % f along the search direction
    J = -1.5*w1(i)^2 + 2.25*w2(i)*w1(i) - 2*w2(i)^2 + 1.75*w2(i)
    g = solve(fw1);    % roots of the quadratic in h
    h = sum(g)/2;      % their average is the vertex, i.e. the optimal step h*
    w1(i+1) = w1(i) + dfw1*h;
    w2(i+1) = w2(i) + dfw2*h;
    plot(w1,w2)
    pause(0.05)
end

MATLAB Optimization Toolbox solution of the same problem (chaprafun.m and startchapra.m):

%chaprafun.m
function J = chaprafun(x)
w1 = x(1);
w2 = x(2);
J = -(-1.5*w1^2 + 2.25*w2*w1 - 2*w2^2 + 1.75*w2);   % negated because fminunc minimizes and we want the maximum

%startchapra.m
clc
clear
x0 = [1 1];
options = optimset('LargeScale','off','Display','iter','MaxIter',20, ...
                   'MaxFunEvals',100,'TolX',1e-3,'TolFun',1e-3);
[x,fval] = fminunc(@chaprafun,x0,options)

Newton-Raphson – Four Bar Mechanism
Loop-closure equations written in terms of the sine and cosine components of each link angle, with all angles referenced from the global x-axis:
r2*cos(theta2) + r3*cos(theta3) - r1*cos(theta1) - r4*cos(theta4) = 0
r2*sin(theta2) + r3*sin(theta3) - r1*sin(theta1) - r4*sin(theta4) = 0
In the above equations theta1 = 0, since link 1 lies along the x-axis, and the other three angles are time varying. Differentiating once with respect to time gives the angular velocity relations.

If the input is applied to link 2 by a DC motor, then omega2 is the input to the system, and the velocity equations can be written in matrix form and solved for the remaining angular velocities.
2. Numerical Solution of the Nonlinear Algebraic Equations
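
Before handing the position equations to fsolve on the next slide, here is a hand-coded Newton-Raphson iteration in matrix form, offered as a sketch. It uses the same link lengths as fouropt.m but a different starting guess, chosen because the Jacobian is singular whenever theta3 = theta4:
% Newton-Raphson for the four-bar position equations, x = [theta3; theta4]
r1 = 12; r2 = 4; r3 = 10; r4 = 7; the2 = 0;   % link lengths and input angle from fouropt.m
f = @(x) [r2*cos(the2) + r3*cos(x(1)) - r1 - r4*cos(x(2)); ...
          r2*sin(the2) + r3*sin(x(1)) - r4*sin(x(2))];
J = @(x) [-r3*sin(x(1)),  r4*sin(x(2)); ...   % Jacobian with respect to theta3 and theta4
           r3*cos(x(1)), -r4*cos(x(2))];
x = [0.5; 1.0];             % assumed starting guess with theta3 ~= theta4
for k = 1:20
    dx = -J(x)\f(x);        % Newton step: solve J*dx = -f
    x = x + dx;
    if norm(dx) < 1e-8, break; end
end
theta3 = x(1)*180/pi, theta4 = x(2)*180/pi   % roughly 44 and 97 degrees for these link lengths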

Newton-Raphson (using fsolve)
%fouropt.m
function f = fouropt(x)
the = 0;
r1 = 12; r2 = 4; r3 = 10; r4 = 7;
f = -[r2*cos(the) + r3*cos(x(1)) - r1*cos(0) - r4*cos(x(2)); ...
      r2*sin(the) + r3*sin(x(1)) - r1*sin(0) - r4*sin(x(2))];

%startfouropt.m
clc
clear
x0 = [0.1 0.1];
options = optimset('LargeScale','off','Display','iter','MaxIter',200, ...
                   'MaxFunEvals',100,'TolX',1e-8,'TolFun',1e-8);
[x,fval] = fsolve(@fouropt,x0,options);
theta3 = x(1)*57.3   % convert from radians to degrees
theta4 = x(2)*57.3

See also Foursimmechm.m, foursimmech.mdl and possol4.m.

Simulink PID Controller Optimization UNIM513tune1.mdl