Lecture 8 – Nonlinear Programming Models

Lecture 8 – Nonlinear Programming Models

Topics:
- General formulations
- Local vs. global solutions
- Solution characteristics
- Convexity and convex programming
- Examples

Nonlinear Optimization

In LP, the objective function and constraints are linear, and the problems are "easy" to solve. Many real-world engineering and business problems have nonlinear elements and are hard to solve.

General NLP

Minimize f(x)
subject to gi(x) (≤, ≥, =) bi, i = 1,…,m

where
x = (x1,…,xn) is the n-dimensional vector of decision variables,
f(x) is the objective function,
gi(x) are the constraint functions, and
bi are fixed, known constants.
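As an illustration of this general form (not part of the original slides), the sketch below solves a small made-up NLP with SciPy's `minimize`: a quadratic objective and a single linear inequality constraint, assuming SciPy is available.

```python
# Hypothetical example (not from the lecture): min (x1-2)^2 + (x2-1)^2
# subject to x1 + x2 <= 2, solved with scipy.optimize.minimize (SLSQP).
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 2)**2 + (x[1] - 1)**2
# SciPy expects inequality constraints in the form c(x) >= 0,
# so g(x) <= b is written as b - g(x) >= 0.
cons = [{"type": "ineq", "fun": lambda x: 2 - x[0] - x[1]}]

res = minimize(f, x0=np.zeros(2), constraints=cons, method="SLSQP")
print(res.x)  # constrained minimizer, the projection of (2, 1) onto x1 + x2 = 2
```

The unconstrained minimum (2, 1) violates the constraint, so the solver lands on the constraint boundary at (1.5, 0.5).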

Examples of NLPs

Example 1: Max f(x) = 3x1 + 2x2
s.t. x1² + x2² ≤ 1, x1 ≥ 0, x2 unrestricted

Example 2: Max f(x) = e^(c1x1) e^(c2x2) ⋯ e^(cnxn)
s.t. Ax = b, x ≥ 0

Example 3: Min Σ_{j=1}^n fj(xj)
s.t. Ax = b, x ≥ 0,
where each fj(xj) models "decreasing efficiencies."

[Figure: graph of fj(xj) versus xj]

Examples 2 and 3 can be reformulated as LPs.

NLP Graphical Solution Method

Max f(x1, x2) = x1x2
s.t. 4x1 + x2 ≤ 8, x1 ≥ 0, x2 ≥ 0

[Figure: feasible region in the (x1, x2) plane with objective contours f(x) = 1 and f(x) = 2]

The optimal solution lies on the line g(x) = 4x1 + x2 – 8 = 0.
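The graphical solution can also be checked numerically. This is a sketch assuming SciPy is available; it maximizes x1x2 by minimizing its negative:

```python
# Verify the graphical solution of: max x1*x2 s.t. 4*x1 + x2 <= 8, x >= 0.
import numpy as np
from scipy.optimize import minimize

neg_f = lambda x: -x[0] * x[1]          # minimize -f to maximize f
cons = [{"type": "ineq", "fun": lambda x: 8 - 4*x[0] - x[1]}]
res = minimize(neg_f, x0=[1.0, 1.0], bounds=[(0, None), (0, None)],
               constraints=cons, method="SLSQP")
print(res.x)      # ≈ [1, 4]
print(-res.fun)   # optimal objective value ≈ 4
```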

Solution Characteristics

Gradient of f: ∇f(x1, x2) = (∂f/∂x1, ∂f/∂x2)ᵀ
This gives ∂f/∂x1 = x2, ∂f/∂x2 = x1 and ∂g/∂x1 = 4, ∂g/∂x2 = 1.
At optimality we have ∇f(x1, x2) = ∇g(x1, x2), giving x1* = 1 and x2* = 4.
The solution is not a vertex of the feasible region. For this particular problem the solution is on the boundary of the feasible region, but this is not always the case.
In the more general case, ∇f(x1, x2) = λ∇g(x1, x2) with λ ≥ 0. (In this case, λ = 1.)
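The tangency condition ∇f = λ∇g can be verified directly at x* = (1, 4) with a small NumPy sketch:

```python
# Check that grad f and grad g are parallel at the optimum x* = (1, 4)
# for f = x1*x2 and g = 4*x1 + x2 - 8.
import numpy as np

x_star = np.array([1.0, 4.0])
grad_f = np.array([x_star[1], x_star[0]])  # ∇f = (x2, x1)
grad_g = np.array([4.0, 1.0])              # ∇g = (4, 1)
lam = grad_f[0] / grad_g[0]                # λ from the first component
assert np.allclose(grad_f, lam * grad_g)   # gradients parallel with λ >= 0
print(lam)  # 1.0
```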

Nonconvex Function

[Figure: graph of f(x) showing a global max, a stationary point, a local max, and two local minima]

Let S ⊆ ℝⁿ be the set of feasible solutions to an NLP.
Definition: A global minimum is any x⁰ ∈ S such that f(x⁰) ≤ f(x) for all feasible x not equal to x⁰.

Function with Unique Global Minimum at x = (1, –3)

If the constraints g1 = x1 ≥ 0 and g2 = x2 ≥ 0 are added, what is the optimum?
At (1, 0): ∇f(x1, x2) = μ1∇g1(x1, x2) + μ2∇g2(x1, x2), or
(0, 6) = μ1(1, 0) + μ2(0, 1), μ1 ≥ 0, μ2 ≥ 0, so μ1 = 0 and μ2 = 6.

Function with Multiple Maxima and Minima

Min { f(x) = sin(x) : 0 ≤ x ≤ 5π }

Constrained Function with Unique Global Maximum and Unique Global Minimum

Convexity

Convex function: if you draw a straight line between any two points on the graph of f(x), the line lies on or above the graph.
Concave function: if f(x) is convex, then –f(x) is concave.
Convexity condition for univariate f: d²f(x)/dx² ≥ 0 for all x.
Linear functions are both convex and concave.

Definition of Convexity

Let x1 and x2 be two points in S ⊆ ℝⁿ. A function f(x) is convex if and only if
f(λx1 + (1–λ)x2) ≤ λf(x1) + (1–λ)f(x2) for all 0 < λ < 1.
It is strictly convex if the inequality ≤ is replaced with <.

[Figure: one-dimensional example, with the chord between two points lying above the graph]
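The chord inequality in the definition can be spot-checked numerically. This is a sketch, not a proof; the helper name `convex_on_samples` is made up:

```python
# Sample λ in (0, 1) and test the chord inequality for a pair of points.
import numpy as np

def convex_on_samples(f, x1, x2, n=99):
    lams = np.linspace(0.01, 0.99, n)
    return all(f(l*x1 + (1-l)*x2) <= l*f(x1) + (1-l)*f(x2) + 1e-12
               for l in lams)

assert convex_on_samples(lambda x: x**2, -3.0, 5.0)  # x^2 passes
assert not convex_on_samples(np.sin, 0.0, np.pi)     # sin fails on [0, π]
```

Sampling can only refute convexity, never certify it, which is why the second-derivative and Hessian tests discussed below matter.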

Nonconvex – Nonconcave Function

[Figure: graph of f(x)]

Theoretical Result for Convex Functions

A positively weighted sum of convex functions is convex: if fk(x), k = 1,…,m, are convex and α1,…,αm ≥ 0, then f(x) = Σ_{k=1}^m αk fk(x) is convex.

Hessian of f at x: ∇²f(x) is the n × n matrix whose (i, j)th entry is ∂²f(x)/∂xi∂xj.
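A quick numerical sanity check of the weighted-sum result, using the made-up example f = 2x² + 3|x| (both terms convex, both weights positive):

```python
# Spot-check the chord inequality for a positively weighted sum of
# two convex functions: f1 = x^2 (weight 2) and f2 = |x| (weight 3).
import numpy as np

f = lambda x: 2*x**2 + 3*np.abs(x)
rng = np.random.default_rng(1)
for _ in range(200):
    x1, x2 = rng.uniform(-10, 10, 2)
    lam = rng.uniform(0, 1)
    assert f(lam*x1 + (1-lam)*x2) <= lam*f(x1) + (1-lam)*f(x2) + 1e-9
print("chord inequality holds on all samples")
```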

Determining Convexity

One-dimensional functions: A function f(x) ∈ C¹ is convex if and only if it is underestimated by linear extrapolation; i.e.,
f(x2) ≥ f(x1) + (df(x1)/dx)(x2 – x1) for all x1 and x2.

[Figure: tangent line at x1 lying below the graph of f(x)]

A function f(x) ∈ C² is convex if and only if its second derivative is nonnegative: d²f(x)/dx² ≥ 0 for all x. If the inequality is strict (>), the function is strictly convex.
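The linear-extrapolation characterization can be spot-checked for f(x) = x², where the gap f(x2) – [f(x1) + f′(x1)(x2 – x1)] works out to (x2 – x1)² ≥ 0:

```python
# Verify that linear extrapolation from any point underestimates f(x) = x^2.
import numpy as np

f = lambda x: x**2
df = lambda x: 2*x  # derivative of x^2

xs = np.linspace(-5, 5, 41)
for x1 in xs:
    for x2 in xs:
        assert f(x2) >= f(x1) + df(x1)*(x2 - x1) - 1e-9
print("linear extrapolation never overestimates f(x) = x^2")
```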

Multiple-Dimensional Functions

f(x) is convex if and only if f(x2) ≥ f(x1) + ∇ᵀf(x1)(x2 – x1) for all x1 and x2.

Definition: The Hessian matrix H(x) associated with f(x) is the n × n symmetric matrix of second partial derivatives of f(x) with respect to the components of x.

When f(x) is quadratic, H(x) has only constant entries; when f(x) is linear, H(x) is the zero matrix.

Example: f(x) = 3x1² + 4x2³ – 5x1x2 + 4x1
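For the example function, the Hessian works out by hand to H(x) = [[6, –5], [–5, 24x2]]. A NumPy sketch confirms this with central finite differences (the helper `num_hessian` is illustrative, not a library routine):

```python
# Compare the hand-derived Hessian of f = 3*x1^2 + 4*x2^3 - 5*x1*x2 + 4*x1
# against a central finite-difference approximation at x = (1, 2).
import numpy as np

f = lambda x: 3*x[0]**2 + 4*x[1]**3 - 5*x[0]*x[1] + 4*x[0]

def num_hessian(f, x, h=1e-5):
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i, e_j = np.eye(n)[i]*h, np.eye(n)[j]*h
            H[i, j] = (f(x+e_i+e_j) - f(x+e_i-e_j)
                       - f(x-e_i+e_j) + f(x-e_i-e_j)) / (4*h*h)
    return H

x = np.array([1.0, 2.0])
H_exact = np.array([[6.0, -5.0], [-5.0, 24*x[1]]])
assert np.allclose(num_hessian(f, x), H_exact, atol=1e-3)
print(H_exact)
```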

Properties of the Hessian

How can we use the Hessian to determine whether or not f(x) is convex?
H(x) is positive definite if and only if xᵀHx > 0 for all x ≠ 0.
H(x) is positive semidefinite if and only if xᵀHx ≥ 0 for all x, and there exists an x ≠ 0 such that xᵀHx = 0.
H(x) is indefinite if and only if xᵀHx > 0 for some x and xᵀHx < 0 for some other x.

Multiple-Dimensional Functions and Convexity

f(x) is strictly convex (convex) if its associated Hessian matrix H(x) is positive definite (semidefinite) for all x.
f(x) is neither convex nor concave if its associated Hessian matrix H(x) is indefinite.
The terms negative definite and negative semidefinite also apply to the Hessian and provide symmetric results for concave functions. Recall that a function f(x) is concave if –f(x) is convex.

Testing for Definiteness

Let H = [hij] be the Hessian, where hij = ∂²f(x)/∂xi∂xj.

Definition: The ith leading principal submatrix of H is the matrix formed by taking the intersection of its first i rows and first i columns. Let Hi denote the determinant of the ith leading principal submatrix.

Rules for Definiteness

H is positive definite if and only if the determinants of all the leading principal submatrices are positive; i.e., Hi > 0 for i = 1,…,n.
H is negative definite if and only if H1 < 0 and the remaining leading principal determinants alternate in sign: H2 > 0, H3 < 0, H4 > 0, …
Positive semidefiniteness and negative semidefiniteness require that all principal submatrices (not just the leading ones) satisfy the corresponding weak inequalities.
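For the two definite cases, these rules can be sketched as a small NumPy helper (the function name `classify` is made up; the semidefinite cases need all principal minors, not just the leading ones, so they are left out here):

```python
# Classify a symmetric matrix via the determinants of its leading
# principal submatrices, following the rules above.
import numpy as np

def classify(H):
    minors = [np.linalg.det(H[:i, :i]) for i in range(1, H.shape[0] + 1)]
    if all(m > 0 for m in minors):
        return "positive definite"
    if all((m < 0 if i % 2 == 0 else m > 0) for i, m in enumerate(minors)):
        return "negative definite"          # signs alternate: -, +, -, ...
    return "neither (check semidefinite/indefinite cases separately)"

print(classify(np.array([[2.0, 3.0], [3.0, 6.0]])))    # positive definite
print(classify(np.array([[-2.0, 0.0], [0.0, -3.0]])))  # negative definite
```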

Quadratic Functions

Example 1: f(x) = 3x1x2 + x1² + 3x2², so H = [[2, 3], [3, 6]], giving H1 = 2 and H2 = 12 – 9 = 3.
Conclusion: f(x) is convex because H(x) is positive definite.

Quadratic Functions

Example 2: f(x) = 24x1x2 + 9x1² + 16x2², so H = [[18, 24], [24, 32]], giving H1 = 18 and H2 = 576 – 576 = 0.
Thus H is positive semidefinite (the determinants of all principal submatrices are nonnegative), so f(x) is convex.
Note that xᵀHx = 2(3x1 + 4x2)² ≥ 0; for x1 = –4, x2 = 3, we get xᵀHx = 0, so H is not positive definite.
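Example 2's semidefiniteness claims can be checked numerically (NumPy sketch, with the Hessian H = [[18, 24], [24, 32]] computed by hand from f):

```python
# Check that x'Hx matches the factored form 2*(3*x1 + 4*x2)^2 and is
# never negative, and exhibit a nonzero x with x'Hx = 0.
import numpy as np

H = np.array([[18.0, 24.0], [24.0, 32.0]])  # Hessian of 24*x1*x2 + 9*x1^2 + 16*x2^2
rng = np.random.default_rng(0)
for x in rng.normal(size=(100, 2)):
    q = x @ H @ x
    assert q >= -1e-9                                  # H is PSD
    assert np.isclose(q, 2*(3*x[0] + 4*x[1])**2)       # matches factored form

x0 = np.array([-4.0, 3.0])
print(x0 @ H @ x0)  # 0.0 — a nonzero x with x'Hx = 0, so H is not PD
```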

Nonquadratic Functions

Example 3: f(x) = (x2 – x1²)² + (1 – x1)²
Here the Hessian, H(x) = [[12x1² – 4x2 + 2, –4x1], [–4x1, 2]], depends on the point under consideration:
At x = (1, 1), H = [[10, –4], [–4, 2]], which is positive definite.
At x = (0, 1), H = [[–2, 0], [0, 2]], which is indefinite.
Thus f(x) is not convex, although it is strictly convex near (1, 1).
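For Example 3, the Hessian works out by hand to H(x) = [[12x1² – 4x2 + 2, –4x1], [–4x1, 2]]; evaluating its eigenvalues at the two points mentioned confirms the definiteness claims (NumPy sketch):

```python
# Eigenvalue test at the two points for f = (x2 - x1^2)^2 + (1 - x1)^2.
import numpy as np

def H(x1, x2):
    # Hand-derived Hessian of f
    return np.array([[12*x1**2 - 4*x2 + 2, -4*x1],
                     [-4*x1, 2.0]])

eig_at_11 = np.linalg.eigvalsh(H(1, 1))  # both positive  -> positive definite
eig_at_01 = np.linalg.eigvalsh(H(0, 1))  # mixed signs    -> indefinite
assert np.all(eig_at_11 > 0)
assert eig_at_01[0] < 0 < eig_at_01[1]
print(eig_at_11, eig_at_01)
```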

What You Should Know About Nonlinear Programming How to identify the decision variables. How to write constraints. How to identify a convex function. The difference between a local and global solution.