Lecture 8 – Nonlinear Programming Models

Topics
- General formulations
- Local vs. global solutions
- Solution characteristics
- Convexity and convex programming
- Examples

Nonlinear Optimization

In LP, the objective function and constraints are linear and the problems are "easy" to solve. Many real-world engineering and business problems have nonlinear elements and are hard to solve.

General NLP

Minimize f(x)
s.t.  gᵢ(x) (≤, ≥, =) bᵢ,  i = 1,…,m

where
- x = (x₁,…,xₙ) is the n-dimensional vector of decision variables
- f(x) is the objective function
- gᵢ(x) are the constraint functions
- bᵢ are fixed, known constants
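(Not part of the original slides.) As a rough illustration, the general form above maps directly onto scipy.optimize.minimize, which handles smooth nonlinear objectives and constraints. The particular f and g below are made-up placeholders, chosen only to show the mapping.

```python
# Hedged sketch: a generic NLP "min f(x) s.t. g(x) <= b" in scipy.
# The objective and constraint are arbitrary stand-ins, not from the lecture.
import numpy as np
from scipy.optimize import minimize

def f(x):
    # a smooth nonlinear objective
    return (x[0] - 1)**2 + (x[1] + 3)**2

def g(x):
    # scipy expects inequality constraints as fun(x) >= 0,
    # so "x1 + x2 <= 1" becomes "1 - x1 - x2 >= 0"
    return 1 - x[0] - x[1]

res = minimize(f, x0=np.zeros(2),
               constraints=[{"type": "ineq", "fun": g}],
               method="SLSQP")
print(res.x, res.fun)   # constraint inactive here: x* = (1, -3), f* = 0
```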

Examples of NLPs

Example 1: Max f(x) = 3x₁² + x₂⁴
  s.t. x₁ + x₂ ≤ 1, x₁ ≥ 0, x₂ unrestricted

Example 2: Max f(x) = e^(c₁x₁) e^(c₂x₂) ⋯ e^(cₙxₙ)
  s.t. Ax = b, x ≥ 0

Example 3: Min Σⱼ₌₁ⁿ fⱼ(xⱼ)
  s.t. Ax = b, x ≥ 0
  where each fⱼ(xⱼ) models a problem with "decreasing efficiencies"
  [Figure: the shape of a typical fⱼ(xⱼ) as a function of xⱼ]

Examples 2 and 3 can be reformulated as LPs. For Example 2, e^(c₁x₁)⋯e^(cₙxₙ) = e^(c₁x₁ + … + cₙxₙ), and since e^y is increasing, maximizing f is equivalent to the LP max c₁x₁ + … + cₙxₙ s.t. Ax = b, x ≥ 0. Example 3 is separable, so each fⱼ can be approximated by a piecewise-linear function.

NLP Graphical Solution Method

Max f(x₁, x₂) = x₁x₂
s.t. 4x₁ + x₂ ≤ 8
     x₁ ≥ 0, x₂ ≥ 0

[Figure: contours f(x) = 1 and f(x) = 2 over the feasible region in the (x₁, x₂) plane]

The optimal solution lies on the line g(x) = 4x₁ + x₂ – 8 = 0.
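The same example can be checked numerically. This sketch (again, an addition to the transcript) asks a local solver for the maximizer; it should land near the graphical answer x* = (1, 4).

```python
# Sketch: max x1*x2 s.t. 4*x1 + x2 <= 8, x1, x2 >= 0,
# solved by minimizing the negated objective.
import numpy as np
from scipy.optimize import minimize

res = minimize(lambda x: -x[0] * x[1],
               x0=[0.5, 0.5],
               bounds=[(0, None), (0, None)],
               constraints=[{"type": "ineq",
                             "fun": lambda x: 8 - 4*x[0] - x[1]}],
               method="SLSQP")
print(res.x, -res.fun)   # approximately [1. 4.] and f* = 4
```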

Solution Characteristics

The solution is not a vertex of the feasible region. For this particular problem the solution is on the boundary of the feasible region; this is not always the case.

Gradient of f(x): ∇f(x₁, x₂) ≡ (∂f/∂x₁, ∂f/∂x₂)ᵀ

In the more general case, optimality requires ∇f(x₁, x₂) = λ∇g(x₁, x₂) with λ ≥ 0. (In this case, λ = 1.)

Here ∂f/∂x₁ = x₂ and ∂f/∂x₂ = x₁, while ∂g/∂x₁ = 4 and ∂g/∂x₂ = 1.

At optimality, ∇f(x₁, x₂) = λ∇g(x₁, x₂) together with the active constraint 4x₁ + x₂ = 8 gives λ = 1, x₁* = 1 and x₂* = 4.
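As a sanity check (added here, not in the original deck), the gradient condition is easy to verify numerically at x* = (1, 4):

```python
# Verify grad f = lambda * grad g at x* = (1, 4) for f = x1*x2, g = 4*x1 + x2 - 8.
import numpy as np

x_star = np.array([1.0, 4.0])
grad_f = np.array([x_star[1], x_star[0]])  # (df/dx1, df/dx2) = (x2, x1)
grad_g = np.array([4.0, 1.0])              # (dg/dx1, dg/dx2)

lam = grad_f[0] / grad_g[0]                # lambda from the first component
print(lam, np.allclose(grad_f, lam * grad_g))   # 1.0 True
```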

Nonconvex Function

[Figure: a nonconvex f(x) exhibiting a local min, a local max, a stationary (inflection) point, another local min, and the global max]

Let S ⊆ ℝⁿ be the set of feasible solutions to an NLP.

Definition: A global minimum is any x⁰ ∈ S such that f(x⁰) ≤ f(x) for all feasible x not equal to x⁰.

Function with Unique Global Minimum at x = (1, –3)

If the constraints g₁ = x₁ ≥ 0 and g₂ = x₂ ≥ 0 are added, what is the optimum?

At (1, 0), ∇f(x₁, x₂) = μ₁∇g₁(x₁, x₂) + μ₂∇g₂(x₁, x₂), or
(0, 6) = μ₁(1, 0) + μ₂(0, 1), μ₁ ≥ 0, μ₂ ≥ 0,
so μ₁ = 0 and μ₂ = 6.

Function with Multiple Maxima and Minima

Min { f(x) = sin(x) : 0 ≤ x ≤ 5π }
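This function is a convenient test of the local-vs-global issue. The sketch below (not part of the slides) runs a local solver from several start points; depending on the start, it returns the boundary local minimum at x = 0 (f = 0) or one of the global minima at 3π/2 and 7π/2 (f = –1).

```python
# A local solver finds different minima of sin(x) on [0, 5*pi]
# depending on where it starts.
import numpy as np
from scipy.optimize import minimize

for x0 in (1.0, 3.0, 8.0):
    res = minimize(lambda x: np.sin(x[0]), x0=[x0], bounds=[(0, 5 * np.pi)])
    print(f"start {x0:4.1f} -> x* = {res.x[0]:7.4f}, f* = {res.fun:7.4f}")
```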

Constrained Function with Unique Global Maximum and Unique Global Minimum

Convexity

Convex function: if you draw a straight line between any two points on f(x), the line will be on or above f(x).

Concave function: if f(x) is convex, then –f(x) is concave.

Linear functions are both convex and concave.

Convexity condition for univariate f:
d²f(x)/dx² ≥ 0 for all x

[Figure: a convex f(x) with the chord between x₁ and x₂ lying above the curve]

Definition of Convexity

Let x₁ and x₂ be two points (vectors) in S ⊆ ℝⁿ. A function f(x) is convex if and only if
f(λx₁ + (1 – λ)x₂) ≤ λf(x₁) + (1 – λ)f(x₂) for all 0 < λ < 1.
It is strictly convex if the inequality sign ≤ is replaced with <.

[Figure: 1-dimensional example]
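The definition can be spot-checked numerically. The sketch below (an addition, not from the slides) samples random points for f(x) = x²; note that sampling can only disprove convexity (one violation suffices), never prove it.

```python
# Randomized check of f(l*x1 + (1-l)*x2) <= l*f(x1) + (1-l)*f(x2) for f(x) = x**2.
import random

f = lambda x: x**2
for _ in range(10_000):
    x1, x2 = random.uniform(-10, 10), random.uniform(-10, 10)
    lam = random.random()
    lhs = f(lam * x1 + (1 - lam) * x2)
    rhs = lam * f(x1) + (1 - lam) * f(x2)
    assert lhs <= rhs + 1e-9, "convexity violated"
print("no violations found (consistent with convexity)")
```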

Nonconvex, Nonconcave Function

[Figure: f(x) between x₁ and x₂ that is neither convex nor concave]

Theoretical Result for Convex Functions

Hessian of f at x: ∇²f(x) = [∂²f(x)/∂xᵢ∂xⱼ], the n × n matrix whose (i, j) entry is the second partial derivative ∂²f(x)/∂xᵢ∂xⱼ. It is used to determine convexity.

A positively weighted sum of convex functions is convex: if fₖ(x) is convex for k = 1,…,m and α₁,…,αₘ ≥ 0, then f(x) = Σₖ₌₁ᵐ αₖfₖ(x) is convex.

Determining Convexity

One-Dimensional Functions:

A function f(x) ∈ C¹ is convex if and only if it is underestimated by linear extrapolation; i.e.,
f(x₂) ≥ f(x₁) + (df(x₁)/dx)(x₂ – x₁) for all x₁ and x₂.

A function f(x) ∈ C² is convex if and only if its second derivative is nonnegative:
d²f(x)/dx² ≥ 0 for all x.
If the inequality is strict (>), then f(x) is strictly convex.

[Figure: the tangent line at x₁ underestimates f at x₂]

Multiple Dimensional Functions

Definition: The Hessian matrix H(x) associated with f(x) is the n × n symmetric matrix of second partial derivatives of f(x) with respect to the components of x.

Example: f(x) = 3x₁² + 4x₂³ – 5x₁x₂ + 4x₁

When f(x) is quadratic, H(x) contains only constant terms; when f(x) is linear, H(x) is identically zero.

f(x) is convex if and only if f(x₂) ≥ f(x₁) + ∇ᵀf(x₁)(x₂ – x₁) for all x₁ and x₂.
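For the example above, the Hessian can be generated symbolically. This sketch assumes sympy is available; it is an illustration added to the transcript, not original material.

```python
# Hessian of f(x) = 3*x1**2 + 4*x2**3 - 5*x1*x2 + 4*x1 via sympy.
import sympy as sp

x1, x2 = sp.symbols("x1 x2")
f = 3*x1**2 + 4*x2**3 - 5*x1*x2 + 4*x1
H = sp.hessian(f, (x1, x2))
print(H)   # Matrix([[6, -5], [-5, 24*x2]]) -- not constant, since f is cubic
```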

Properties of the Hessian

H(x) is positive definite if and only if xᵀHx > 0 for all x ≠ 0.

H(x) is positive semidefinite if and only if xᵀHx ≥ 0 for all x and there exists an x ≠ 0 such that xᵀHx = 0.

H(x) is indefinite if and only if xᵀHx > 0 for some x and xᵀHx < 0 for some other x.

How can we use the Hessian to determine whether or not f(x) is convex?

Multiple Dimensional Functions and Convexity

f(x) is strictly convex (or just convex) if its associated Hessian matrix H(x) is positive definite (positive semidefinite) for all x.

f(x) is neither convex nor concave if its associated Hessian matrix H(x) is indefinite.

The terms negative definite and negative semidefinite also apply to the Hessian and provide symmetric results for concave functions. Recall that a function f(x) is concave if –f(x) is convex.

Testing for Definiteness

Let the Hessian be H = [hᵢⱼ], where hᵢⱼ = ∂²f(x)/∂xᵢ∂xⱼ.

Definition: The i-th leading principal submatrix of H is the matrix formed by taking the intersection of its first i rows and first i columns. Let Hᵢ denote its determinant, the i-th leading principal minor.

Rules for Definiteness

H is positive definite if and only if the determinants of all the leading principal submatrices are positive; i.e., Hᵢ > 0 for i = 1,…,n.

H is negative definite if and only if H₁ < 0 and the remaining leading principal determinants alternate in sign: H₂ > 0, H₃ < 0, …

Positive and negative semidefiniteness require the determinants of all principal submatrices (not just the leading ones) to satisfy the corresponding weak inequalities: all ≥ 0 for positive semidefinite, and the alternating pattern with ≤/≥ for negative semidefinite.
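These rules translate directly into a short test. The helper below is an added illustration (the function names are made up); it computes the leading principal minors with numpy and applies the slide's sign rules.

```python
# Classify a symmetric matrix from its leading principal minors.
import numpy as np

def leading_minors(H):
    return [np.linalg.det(H[:i, :i]) for i in range(1, H.shape[0] + 1)]

def classify(H):
    m = leading_minors(H)
    if all(d > 0 for d in m):
        return "positive definite"
    # negative definite: H1 < 0, H2 > 0, H3 < 0, ...
    if all((d > 0) if i % 2 else (d < 0) for i, d in enumerate(m)):
        return "negative definite"
    return "not definite (semidefiniteness needs all principal minors)"

print(classify(np.array([[2.0, 3.0], [3.0, 6.0]])))   # positive definite
```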

Quadratic Functions

Example 1: f(x) = 3x₁x₂ + x₁² + 3x₂²

H = [ 2  3 ]
    [ 3  6 ]

so H₁ = 2 and H₂ = 12 – 9 = 3.

Conclusion: f(x) is convex because H(x) is positive definite.

Quadratic Functions (cont’d)

Example 2: f(x) = 24x₁x₂ + 9x₁² + 16x₂²

H = [ 18  24 ]
    [ 24  32 ]

so H₁ = 18 and H₂ = 576 – 576 = 0.

Thus H is positive semidefinite (the determinants of all principal submatrices are nonnegative), so f(x) is convex. Note that xᵀHx = 2(3x₁ + 4x₂)² ≥ 0; for x₁ = –4, x₂ = 3, we get xᵀHx = 0.
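Because the leading-minor test is inconclusive here (H₂ = 0), eigenvalues give a cleaner verdict. A small check, added for illustration:

```python
# Eigenvalues of H confirm positive semidefiniteness; (-4, 3) gives x^T H x = 0.
import numpy as np

H = np.array([[18.0, 24.0], [24.0, 32.0]])
print(np.linalg.eigvalsh(H))   # [ 0. 50.] -> nonnegative, hence PSD
x = np.array([-4.0, 3.0])
print(x @ H @ x)               # 0.0
```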

Nonquadratic Functions

Example 3: f(x) = (x₂ – x₁²)² + (1 – x₁)²

H(x) = [ 12x₁² – 4x₂ + 2   –4x₁ ]
       [ –4x₁                2  ]

Thus the Hessian depends on the point under consideration:

At x = (1, 1), H = [ 10  –4 ]
                   [ –4   2 ], which is positive definite.

At x = (0, 1), H = [ –2  0 ]
                   [  0  2 ], which is indefinite.

Thus f(x) is not convex, although it is strictly convex near (1, 1).
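The point-dependence is easy to reproduce symbolically. This sketch assumes sympy and is not part of the original slides:

```python
# Hessian of f(x) = (x2 - x1**2)**2 + (1 - x1)**2 at the two points on the slide.
import sympy as sp

x1, x2 = sp.symbols("x1 x2")
f = (x2 - x1**2)**2 + (1 - x1)**2
H = sp.hessian(f, (x1, x2))
print(H.subs({x1: 1, x2: 1}))   # Matrix([[10, -4], [-4, 2]]) -> positive definite
print(H.subs({x1: 0, x2: 1}))   # Matrix([[-2, 0], [0, 2]])   -> indefinite
```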

What You Should Know About Nonlinear Programming

- How to develop models with nonlinear functions.
- The definition of convexity.
- Rules for positive and negative definiteness.
- How to identify a convex function.
- The difference between a local and a global solution.