Non-Linear Programming
© 2011 Daniel Kirschen and University of Washington

Motivation

Method of Lagrange multipliers:
– Provides very useful insight into solutions
– Analytical solution is practical only for small problems
– Direct application is not practical for real-life problems because these problems are too large
– Runs into difficulties when the problem is non-convex

We therefore often need to search for the solution of practical optimization problems using:
– The objective function only, or
– The objective function and its first derivative, or
– The objective function and its first and second derivatives
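As a concrete illustration of the multiplier method (this small example is added here and is not from the original slides): to minimize f(x1, x2) = x1² + x2² subject to x1 + x2 = 1, form the Lagrangian L = x1² + x2² + λ(1 − x1 − x2). Setting ∂L/∂x1 = ∂L/∂x2 = 0 gives x1 = x2 = λ/2, and substituting into the constraint yields λ = 1, so x1 = x2 = 1/2. This analytical approach works nicely here but does not scale to problems with thousands of variables and constraints.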

Naïve One-Dimensional Search

Suppose:
– That we want to find the value of x that minimizes f(x)
– That the only thing we can do is calculate the value of f(x) for any value of x

We could calculate f(x) for a range of values of x and choose the one that minimizes f(x).

Naïve One-Dimensional Search

[Figure: f(x) evaluated at evenly spaced values of x]

This requires a considerable amount of computing time if the range is large and good accuracy is needed.
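A minimal sketch of this naïve search in Python (the test function and search range are illustrative assumptions, not from the slides):

```python
def naive_search(f, x_min, x_max, n_points=1000):
    """Evaluate f on a uniform grid and return the best point found."""
    step = (x_max - x_min) / (n_points - 1)
    best_x, best_f = x_min, f(x_min)
    for i in range(1, n_points):
        x = x_min + i * step
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# Example: minimize f(x) = (x - 2)^2 over [0, 5]
x_star, f_star = naive_search(lambda x: (x - 2) ** 2, 0.0, 5.0)
```

The accuracy is limited by the grid spacing, which is exactly why the cost blows up when the range is large and high accuracy is required.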

One-Dimensional Search

[Figure: f(x) evaluated at successive points x0, x1, x2, x3, x4, ...]

– Evaluate f(x) at a first point x0, then at a second point x1.
– Evaluate f(x) at a third point x2. If the function is convex, we have bracketed the optimum: it lies inside the current search range.
– Evaluate f(x) at a point x3 inside the bracket. The optimum is between x1 and x2, so we do not need to consider x0 anymore.
– Repeat the process (x4, ...) until the range is sufficiently small.

The procedure is valid only if the function is convex!
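A sketch of this bracket-shrinking idea in the form of a golden-section search (a standard refinement of the scheme on the slides; the convexity/unimodality assumption still applies, and the test function is illustrative):

```python
import math

def golden_section_search(f, a, b, tol=1e-6):
    """Shrink the bracket [a, b] around the minimum of a convex
    (unimodal) function f until it is smaller than tol."""
    inv_phi = (math.sqrt(5) - 1) / 2  # 1/golden ratio, ~0.618
    x1 = b - inv_phi * (b - a)        # left interior point
    x2 = a + inv_phi * (b - a)        # right interior point
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 < f2:          # minimum is in [a, x2]: drop the right end
            b, x2, f2 = x2, x1, f1
            x1 = b - inv_phi * (b - a)
            f1 = f(x1)
        else:                # minimum is in [x1, b]: drop the left end
            a, x1, f1 = x1, x2, f2
            x2 = a + inv_phi * (b - a)
            f2 = f(x2)
    return (a + b) / 2

# Example: minimize f(x) = (x - 2)^2 over the bracket [0, 5]
x_star = golden_section_search(lambda x: (x - 2) ** 2, 0.0, 5.0)
```

Each iteration discards a fixed fraction of the bracket and reuses one previous function evaluation, so far fewer evaluations are needed than in the naïve grid search.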

Multi-Dimensional Search

– The unidirectional search of the previous slides is not directly applicable.
– A naïve search becomes totally impossible as the dimension of the problem increases.
– If we can calculate the first derivatives of the objective function, much more efficient searches can be developed.
– The gradient of a function gives the direction in which it increases fastest (and its negative, the direction in which it decreases fastest).
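When analytical derivatives are unavailable, the gradient can also be approximated numerically; a minimal central-difference sketch (the step size h and the test function are illustrative choices):

```python
import numpy as np

def gradient(f, x, h=1e-6):
    """Central-difference approximation of the gradient of f at x."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

# Example: f(x) = x1^2 + 3*x2^2 has gradient (2*x1, 6*x2)
g = gradient(lambda x: x[0] ** 2 + 3 * x[1] ** 2, [1.0, 1.0])
```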

Steepest Ascent/Descent Algorithm

[Figure: contours of the objective function in the (x1, x2) plane. Starting from an initial point, the gradient at that point gives the search direction.]

Unidirectional Search

[Figure: the objective function plotted along the gradient direction; a unidirectional (line) search locates the best point along that line.]

Steepest Ascent/Descent Algorithm (continued)

[Figure: at each new point the gradient is recomputed and another unidirectional search is performed; the process repeats until the iterates converge on the optimum.]
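Putting the two ideas together, a minimal steepest-descent sketch (a simple backtracking rule stands in for the unidirectional search; the function, starting point, and tolerances are illustrative assumptions):

```python
import numpy as np

def steepest_descent(f, grad, x0, tol=1e-6, max_iter=500):
    """Minimize f by repeatedly searching along the negative gradient.
    Backtracking plays the role of the unidirectional search."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:      # gradient ~ 0: stationary point
            break
        step = 1.0
        while f(x - step * g) >= f(x):   # backtrack until f improves
            step *= 0.5
            if step < 1e-12:
                return x
        x = x - step * g
    return x

# Example: minimize f(x) = x1^2 + 3*x2^2, whose minimum is at (0, 0)
f = lambda x: x[0] ** 2 + 3 * x[1] ** 2
grad = lambda x: np.array([2 * x[0], 6 * x[1]])
x_star = steepest_descent(f, grad, [4.0, 2.0])
```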

Choosing a Direction

– The direction of steepest ascent/descent is not always the best choice.
– Other techniques have been used with varying degrees of success.
– In particular, the direction chosen must be consistent with the equality constraints.
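One standard way of keeping the direction consistent with linear equality constraints Ax = b is to project the negative gradient onto the null space of A; a sketch of that idea (projection is a common technique, not necessarily the one used in this course, and the example data are illustrative):

```python
import numpy as np

def projected_gradient_direction(grad_x, A):
    """Project the negative gradient onto the null space of A so that
    moving along the returned direction keeps A x = b satisfied."""
    # Projector onto null(A): P = I - A^T (A A^T)^{-1} A
    AAt_inv = np.linalg.inv(A @ A.T)
    P = np.eye(A.shape[1]) - A.T @ AAt_inv @ A
    return -P @ grad_x

# Example: constraint x1 + x2 = 1 (A = [[1, 1]]) and gradient (2, 6).
# The result is d ~ (2, -2): its components sum to zero, so moving
# along d leaves x1 + x2 unchanged.
d = projected_gradient_direction(np.array([2.0, 6.0]), np.array([[1.0, 1.0]]))
```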

How far to go in that direction?

– Unidirectional searches can be time-consuming.
– Second-order techniques, which use information about the second derivative of the objective function, can be used to speed up the process.
– Problem with the inequality constraints:
  – There may be a lot of inequality constraints.
  – Checking all of them every time we move in one direction can take an enormous amount of computing time.
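A sketch of the second-order idea in one dimension: Newton's method uses f'' to jump directly towards the point where f' vanishes (the starting point and test function are illustrative):

```python
def newton_1d(df, d2f, x0, tol=1e-8, max_iter=50):
    """Find a stationary point of f by applying Newton's method to f'."""
    x = x0
    for _ in range(max_iter):
        step = df(x) / d2f(x)   # Newton step: f'(x) / f''(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: f(x) = (x - 2)^2, so f'(x) = 2(x - 2) and f''(x) = 2.
# Because f is quadratic, Newton's method lands on x = 2 in one step.
x_star = newton_1d(lambda x: 2 * (x - 2), lambda x: 2.0, x0=10.0)
```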

Handling of inequality constraints

[Figure: contours in the (x1, x2) plane. Move in the direction of the gradient. How do I know that I have to stop here?]

Handling of inequality constraints

[Figure: a further step in the (x1, x2) plane. Again: how do I know that I have to stop here?]

Penalty functions

[Figure: penalty as a function of x; the penalty is zero between x min and x max and grows outside those limits]

Penalty functions

Replace the enforcement of inequality constraints by the addition of penalty terms to the objective function, for example K(x − x max)² when x exceeds x max.

[Figure: quadratic penalty K(x − x max)² rising beyond x max, and similarly below x min]
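A minimal sketch of such a quadratic penalty for the bound x min ≤ x ≤ x max (the penalty weight K, bounds, and test function are illustrative):

```python
def penalized_objective(f, x, x_min, x_max, K=100.0):
    """Objective plus quadratic penalties for violating x_min <= x <= x_max.
    The penalty is zero whenever the bound is satisfied."""
    penalty = 0.0
    if x > x_max:
        penalty += K * (x - x_max) ** 2
    if x < x_min:
        penalty += K * (x_min - x) ** 2
    return f(x) + penalty

# Example: minimize f(x) = (x - 5)^2 subject to x <= 3. The unconstrained
# optimum (x = 5) violates the bound, so the penalized minimum sits
# slightly above x = 3; increasing K shrinks that violation.
value = penalized_objective(lambda x: (x - 5) ** 2, 3.2, 0.0, 3.0)
```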

Problem with penalty functions

– The stiffness of the penalty function must be increased progressively to enforce the constraints tightly enough.
– Not a very efficient method.

[Figure: increasingly stiff penalty curves around x min and x max]

Barrier functions

[Figure: barrier cost rising towards infinity as x approaches x min or x max from inside the feasible range]

Barrier functions

– The barrier must be moved progressively closer to the limit.
– Works better than penalty functions.
– This is the basis of interior point methods.

[Figure: successively sharper barrier costs approaching x min and x max]
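A minimal log-barrier sketch (the barrier parameter mu is driven towards zero to move the barrier closer to the limits; the function, bounds, and mu schedule are illustrative):

```python
import math

def barrier_objective(f, x, x_min, x_max, mu):
    """Objective plus logarithmic barriers that blow up at the bounds.
    Only defined for x strictly inside (x_min, x_max)."""
    return f(x) - mu * (math.log(x - x_min) + math.log(x_max - x))

# Interior-point flavour: re-minimize while shrinking mu towards zero,
# which pushes the barrier ever closer to the true limits.
for mu in [1.0, 0.1, 0.01]:
    value = barrier_objective(lambda x: (x - 5) ** 2, 2.9, 0.0, 3.0, mu)
```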

Non-Robustness

[Figure: contours of a non-convex problem in the (x1, x2) plane; starting points A, B, C, and D converge to different local optima X and Y]

Different starting points may lead to different solutions if the problem is not convex.
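A common practical response is a multi-start strategy: run the local search from several starting points and keep the best result. A sketch (here `local_minimize` is a hypothetical stand-in for any of the descent methods above):

```python
import random

def multi_start(f, local_minimize, bounds, n_starts=10, seed=0):
    """Run a local search from several random starting points and
    keep the best local optimum found."""
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(n_starts):
        # Draw a random starting point inside the given box bounds.
        x0 = [rng.uniform(lo, hi) for lo, hi in bounds]
        x = local_minimize(f, x0)   # hypothetical local solver
        if f(x) < best_f:
            best_x, best_f = x, f(x)
    return best_x, best_f
```

This does not guarantee the global optimum on a non-convex problem, but it reduces the dependence on any single starting point.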

Conclusions

– Very sophisticated non-linear programming methods have been developed.
– They can be difficult to use:
  – Different starting points may lead to different solutions.
  – Some problems will require a lot of iterations.
  – They may require a lot of "tuning".