Nonlinear Programming McCarl and Spreen Chapter 12.

Optimality Conditions
Unconstrained optimization is a calculus problem. For Y = f(X), an optimum occurs at a point where f'(X) = 0 and f''(X) meets the second-order conditions:
A relative minimum occurs where f'(X) = 0 and f''(X) > 0
A relative maximum occurs where f'(X) = 0 and f''(X) < 0
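To illustrate (a minimal sketch assuming sympy; the function f below is an arbitrary example of mine, not from the text):

import sympy as sp

X = sp.symbols('X')
f = -(X - 3)**2 + 5                      # example concave function
fp, fpp = sp.diff(f, X), sp.diff(f, X, 2)
for Xstar in sp.solve(sp.Eq(fp, 0), X):  # points where f'(X) = 0
    # classify by the sign of f''(X*) (assumes f''(X*) is nonzero)
    kind = 'relative maximum' if fpp.subs(X, Xstar) < 0 else 'relative minimum'
    print(Xstar, kind)                   # X* = 3, relative maximum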

Concavity and Second Derivative
(Figure omitted: curves illustrating how the sign of f''(x) distinguishes local maxima from local minima, and which of those are also global.)

Multivariate Case
To find an optimum point, set the first partial derivatives (all of them) to zero. At the optimum point, evaluate the matrix of second partial derivatives (the Hessian matrix) to see whether it is positive definite (minimum) or negative definite (maximum). Check the characteristic roots, or apply the determinantal test to the principal minors.
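The characteristic-root check is direct to carry out numerically (a sketch assuming numpy; the Hessian H is an arbitrary example):

import numpy as np

H = np.array([[4.0, -4.0],
              [-4.0, 8.0]])        # example Hessian matrix
eigs = np.linalg.eigvalsh(H)       # characteristic roots of a symmetric matrix
if (eigs > 0).all():
    print('positive definite: a minimum')
elif (eigs < 0).all():
    print('negative definite: a maximum')
else:
    print('indefinite or semidefinite: test inconclusive')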

Determinantal Test for a Maximum – Negative Definite Hessian
The leading principal minors must alternate in sign, starting negative:

f11 < 0

| f11 f12 |
| f21 f22 |  > 0

| f11 f12 f13 |
| f21 f22 f23 |  < 0
| f31 f32 f33 |

These would all be positive for a minimum (matrix positive definite).
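The same test is easy to automate (a sketch assuming numpy; the helper name is mine):

import numpy as np

def leading_principal_minors(H):
    """Determinants of the upper-left 1x1, 2x2, ... submatrices of H."""
    return [np.linalg.det(H[:k, :k]) for k in range(1, H.shape[0] + 1)]

H = np.array([[-2.0, 1.0],
              [1.0, -3.0]])                 # example negative definite Hessian
minors = leading_principal_minors(H)        # [-2.0, 5.0]
is_max = all((-1)**k * m > 0 for k, m in enumerate(minors, start=1))
is_min = all(m > 0 for m in minors)
print(minors, 'maximum' if is_max else 'minimum' if is_min else 'inconclusive')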

Global Optimum
A univariate function with a negative second derivative everywhere guarantees a global maximum at the point (if there is one) where f'(X) = 0. These functions are called "concave down" or sometimes just "concave." A univariate function with a positive second derivative everywhere guarantees a global minimum at the point (if there is one) where f'(X) = 0. These functions are called "concave up" or sometimes "convex."

Multivariate Global Optimum
If the Hessian matrix is positive definite (or negative definite) for all values of the variables, then any optimum point found will be a global minimum (maximum).

Constrained Optimization
Equality constraints – often solvable by calculus
Inequality constraints – sometimes solvable by numerical methods

Equality Constraints
Maximize f(X)
s.t. gi(X) = bi
Set up the Lagrangian function:
L(X, λ) = f(X) − Σi λi (gi(X) − bi)

Optimizing the Lagrangian
Differentiate the Lagrangian function with respect to X and λ. Set the partial derivatives equal to zero and solve the simultaneous equation system. Examine the bordered Hessian for concavity conditions. The "border" of this Hessian is comprised of the first partial derivatives of the constraint function with respect to X1 and X2.

Bordered Hessian
For a max, the determinant of this matrix would be positive. For a min, it would be negative. For problems with 3 or more variables, the "even" determinants are positive for a max and the "odd" ones are negative; for a min, all are negative. Note: the determinant is designated |H2|.
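For reference (my reconstruction of the matrix the slide refers to), the bordered Hessian for two variables and one constraint g(X1, X2) = b is

       | 0   g1   g2  |
|H2| = | g1  L11  L12 |
       | g2  L21  L22 |

where gj = ∂g/∂Xj and Lij = ∂²L/∂Xi∂Xj.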

Aside on Bordered Hessians
You can also set these up so that the border carries negative signs. And you can set them up so that the border runs along the bottom and the right edge, with either positive or negative signs. Be sure that the concavity condition tests match the way you set up the bordered Hessian.

Example
Minimize X1² + X2²
s.t. X1 + X2 = 10
L = X1² + X2² − λ(X1 + X2 − 10)
∂L/∂X1 = 2X1 − λ = 0
∂L/∂X2 = 2X2 − λ = 0
∂L/∂λ = −(X1 + X2 − 10) = 0

Solving
From the first two equations: X1* = X2* = λ*/2
Plugging into the third equation yields: X1* = X2* = 5 and λ* = 10
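The same system can be set up and solved symbolically (a sketch assuming sympy):

import sympy as sp

X1, X2, lam = sp.symbols('X1 X2 lam')
L = X1**2 + X2**2 - lam * (X1 + X2 - 10)        # the Lagrangian above
eqs = [sp.diff(L, v) for v in (X1, X2, lam)]    # first-order conditions
print(sp.solve(eqs, [X1, X2, lam]))             # {X1: 5, X2: 5, lam: 10}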

Second Order Conditions
For this problem to be a min, the determinant of the bordered Hessian must be negative, which it is. (It's -4.)
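Reconstructing that bordered Hessian from the problem (border of constraint first partials, then second partials of L):

       | 0  1  1 |
|H2| = | 1  2  0 | = -4
       | 1  0  2 |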

Multi-constraint Case
3 constraints: g, h, and k
3 variables: X1, X2, X3

Multiple Constraints SOC
M is the number of constraints in a given problem; N is the number of variables. The bordered principal minor that contains f22 as its last element is denoted |H2| as before. If f33 is the last element, we denote it |H3|, and so on. Evaluate |HM+1| through |HN|. For a maximum, they alternate in sign. For a min, they all take the sign (−1)^M.
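Stated compactly (my restatement, assuming the border-on-top convention used above):

For a maximum: sign |Hk| = (−1)^k,  k = M+1, ..., N
For a minimum: sign |Hk| = (−1)^M,  k = M+1, ..., N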

Additional Qualifications
Examine the Jacobian developed from the constraints (the matrix of their first partial derivatives) to see if it is of full rank. If it is not, the constraint qualification fails and the Lagrangian conditions may not characterize the optimum.

Interpreting the Lagrangian Multipliers
The values of the Lagrangian multipliers (λi) are similar to the shadow prices from LP, except they are true derivatives (λi = ∂L/∂bi) and are not usually constant over a range.
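A quick numeric check of that interpretation on the example above (a sketch assuming sympy; the right-hand side 10 is replaced by a symbol b):

import sympy as sp

X1, X2, lam, b = sp.symbols('X1 X2 lam b')
L = X1**2 + X2**2 - lam * (X1 + X2 - b)
sol = sp.solve([sp.diff(L, v) for v in (X1, X2, lam)], [X1, X2, lam], dict=True)[0]
V = (X1**2 + X2**2).subs(sol)          # optimal value as a function of b: b**2/2
print(sp.diff(V, b).subs(b, 10))       # 10, exactly lam* at b = 10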

Inequality Constraints
Maximize f(X)
s.t. g(X) ≤ b
X ≥ 0

Example
Minimize C = (X1 − 4)² + (X2 − 4)²
s.t. 2X1 + 3X2 ≥ 6
−3X1 − 2X2 ≥ −12
X1, X2 ≥ 0

Graph
(Figure omitted.) Optimum: X1* = 2 2/13, X2* = 2 10/13
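That optimum can be confirmed numerically (a sketch assuming scipy; SLSQP is one solver choice):

import numpy as np
from scipy.optimize import minimize

obj = lambda x: (x[0] - 4)**2 + (x[1] - 4)**2
cons = [{'type': 'ineq', 'fun': lambda x: 2*x[0] + 3*x[1] - 6},    # 2X1 + 3X2 >= 6
        {'type': 'ineq', 'fun': lambda x: -3*x[0] - 2*x[1] + 12}]  # -3X1 - 2X2 >= -12
res = minimize(obj, x0=[0.0, 0.0], bounds=[(0, None), (0, None)],
               constraints=cons, method='SLSQP')
print(res.x)   # approximately [2.1538, 2.7692], i.e. (2 2/13, 2 10/13)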

A Nonlinear Restriction
Maximize Profit = 2X1 + X2
s.t. −X1² + 4X1 − X2 ≤ 0
2X1 + 3X2 ≤ 12
X1, X2 ≥ 0

Graph – Profit Max Problem
(Figure omitted: the nonlinear constraint splits the feasible set into two disjoint regions, F1 and F2.) There is a local optimum at the edge of F1, but it isn't global.

The Kuhn-Tucker Conditions
∇x f(X*) − λ* ∇x g(X*) ≤ 0
[∇x f(X*) − λ* ∇x g(X*)] X* = 0
X* ≥ 0
g(X*) ≤ b
λ*(g(X*) − b) = 0
λ* ≥ 0
∇ represents the gradient vector (1st derivatives)

Economic Interpretation
fj is the marginal profit of the jth product. λi is the shadow price of the ith resource. gij is the amount of the ith resource used to produce the marginal unit of product j. The sum-product of the shadow prices of the resources and the amounts used to produce the marginal unit of product j is the imputed marginal cost. Because of complementary slackness, if j is produced, the marginal profit must equal the imputed marginal cost.

Quadratic Programming
The objective function is quadratic and the restrictions are linear. These problems are tractable because the Kuhn-Tucker conditions reduce to something close to a set of linear equations.
Standard Representation:
Maximize CX − (1/2)X'QX
s.t. AX ≤ b
X ≥ 0
(Q is positive semi-definite.)

Example
Maximize 15X1 + 30X2 + 4X1X2 − 2X1² − 4X2²
s.t. X1 + 2X2 ≤ 30
X1, X2 non-negative

C = [15 30]
Q = |  4  -4 |
    | -4   8 |
A = [1 2]
b = 30

Kuhn-Tucker Conditions
1. 15 + 4X2 − 4X1 − λ1 ≤ 0
2. X1(15 + 4X2 − 4X1 − λ1) = 0
3. 30 + 4X1 − 8X2 − 2λ1 ≤ 0
4. X2(30 + 4X1 − 8X2 − 2λ1) = 0
5. X1 + 2X2 − 30 ≤ 0
6. λ1(X1 + 2X2 − 30) = 0
7. X1, X2, λ1 ≥ 0
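As a numerical cross-check (a sketch assuming scipy; since Q here is positive definite, the local solution is also global):

import numpy as np
from scipy.optimize import minimize

obj = lambda x: -(15*x[0] + 30*x[1] + 4*x[0]*x[1] - 2*x[0]**2 - 4*x[1]**2)
cons = [{'type': 'ineq', 'fun': lambda x: 30 - x[0] - 2*x[1]}]   # X1 + 2X2 <= 30
res = minimize(obj, x0=[0.0, 0.0], bounds=[(0, None), (0, None)],
               constraints=cons, method='SLSQP')
print(res.x, -res.fun)   # approximately (12, 9) with objective 270; lam1 = 3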

Reworking Conditions
1. −4X1 + 4X2 − λ1 + s1 = −15
2. 4X1 − 8X2 − 2λ1 + s2 = −30
3. X1 + 2X2 + v1 = 30
Now condition 2 can be expressed as X1s1 = 0, condition 4 can be expressed as X2s2 = 0, and condition 6 becomes λ1v1 = 0. We can make one constraint: X1s1 + X2s2 + λ1v1 = 0

A Convenient Form
4X1 − 4X2 + λ1 − s1 = 15
−4X1 + 8X2 + 2λ1 − s2 = 30
X1 + 2X2 + v1 = 30
X1s1 + X2s2 + λ1v1 = 0

Modified Simplex Method
A modified simplex method can be used to solve the transformed problem. The modification involves the "restricted-entry rule": when choosing an entering basic variable, exclude from consideration any nonbasic variable whose complementary variable is already basic.
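The restricted-entry simplex itself is more than a short sketch, but the complementarity structure can be illustrated by brute force (my own illustration, assuming numpy): fix one member of each complementary pair at zero, solve the remaining 3x3 linear system from the convenient form above, and keep any nonnegative solution.

import itertools
import numpy as np

# Columns: X1, X2, lam1, s1, s2, v1 (coefficients from the convenient form).
A = np.array([[ 4.0, -4.0, 1.0, -1.0,  0.0, 0.0],
              [-4.0,  8.0, 2.0,  0.0, -1.0, 0.0],
              [ 1.0,  2.0, 0.0,  0.0,  0.0, 1.0]])
b = np.array([15.0, 30.0, 30.0])
pairs = [(0, 3), (1, 4), (2, 5)]     # (X1,s1), (X2,s2), (lam1,v1)

for zeros in itertools.product(*pairs):       # choose one of each pair to be 0
    keep = [j for j in range(6) if j not in zeros]
    try:
        sol = np.linalg.solve(A[:, keep], b)
    except np.linalg.LinAlgError:
        continue                              # singular basis, skip
    if (sol >= -1e-9).all():
        x = np.zeros(6)
        x[keep] = sol
        print(x)   # only X1=12, X2=9, lam1=3 (slacks zero) survives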

Example
Maximize 10X1 + 20X2 + 5X1X2 − 3X1² − 2X2²
s.t. X1 + 2X2 ≤ 10
X1 ≤ 7
X1, X2 non-negative

Kuhn-Tucker Conditions
a) Derive the Kuhn-Tucker conditions for this problem.
L = Z(X1, X2) + Σi λi gi(X1, X2)
∂L/∂X1 ≤ 0
∂L/∂X2 ≤ 0
X1(∂L/∂X1) = 0
X2(∂L/∂X2) = 0
There are two constraints in this problem.

Kuhn-Tucker
The Kuhn-Tucker conditions for the above problem; F.O.C. with respect to X1 and X2:
10 − 6X1 + 5X2 − λ1 − λ2 ≤ 0 (these λ1 and λ2 come from the two inequalities)
X1(10 − 6X1 + 5X2 − λ1 − λ2) = 0
20 − 4X2 + 5X1 − 2λ1 ≤ 0
X2(20 − 4X2 + 5X1 − 2λ1) = 0
From the constraint inequalities:
X1 + 2X2 ≤ 10 or λ1(X1 + 2X2 − 10) = 0
X1 ≤ 7 or λ2(X1 − 7) = 0
X1, X2, λ1, λ2 ≥ 0
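A numerical check (a sketch assuming scipy; note the 5X1X2 cross term makes this objective non-concave, so a local solver such as SLSQP only guarantees a KKT point, not a global maximum):

import numpy as np
from scipy.optimize import minimize

obj = lambda x: -(10*x[0] + 20*x[1] + 5*x[0]*x[1] - 3*x[0]**2 - 2*x[1]**2)
cons = [{'type': 'ineq', 'fun': lambda x: 10 - x[0] - 2*x[1]},   # X1 + 2X2 <= 10
        {'type': 'ineq', 'fun': lambda x: 7 - x[0]}]             # X1 <= 7
res = minimize(obj, x0=[1.0, 1.0], bounds=[(0, None), (0, None)],
               constraints=cons, method='SLSQP')
print(res.x)   # a KKT point near (2.92, 3.54), with lam1 > 0 and lam2 = 0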