
Chapter 1 Nonlinear Programming


1 Chapter 1 Nonlinear Programming

2 1.1 INTRODUCTION TO OPTIMIZATION
Optimization is the act of obtaining the best result under given circumstances. The ultimate goal of all such decisions is either to minimize the cost required or to maximize the desired benefit. Since the cost required or the benefit desired can be expressed as a function of certain decision variables, optimization can be defined as the process of finding the conditions that give the maximum or minimum value of a function.

3 It can be seen from Fig. 1.1 that if a point x* corresponds to the minimum value of a function f(x), the same point also corresponds to the maximum value of the negative of the function, −f(x). Thus, without loss of generality, optimization can be taken to mean minimization.

4 There is no single method available for solving all optimization problems efficiently. Hence a number of optimization methods have been developed for solving different types of optimization problems. The optimum seeking methods are also known as mathematical programming techniques and are generally studied as a part of operations research. Operations research is a branch of mathematics concerned with the application of scientific methods and techniques to decision making problems and with establishing the best or optimal solutions. Table 1.1 lists various mathematical programming techniques together with other well-defined areas of operations research. The classification given in Table 1.1 is not unique; it is given mainly for convenience.

5 TABLE 1.1 Methods of Operations Research
Statistical methods: regression analysis; cluster analysis, pattern recognition; design of experiments; discriminant analysis (factor analysis).
Stochastic process techniques: statistical decision theory, Markov processes, queueing theory, renewal theory, simulation methods, reliability theory, stochastic programming.
Mathematical programming techniques: calculus methods, calculus of variations, nonlinear programming, geometric programming, quadratic programming, linear programming (LP), special methods of LP, integer programming, dynamic programming, separable programming, multiobjective programming, goal programming, network methods (CPM-PERT), game theory, simulated annealing, genetic algorithms, neural networks, inventory control.

6 Methods of Operations Research
Mathematical programming techniques are useful in finding the minimum of a function of several variables under a prescribed set of constraints. Stochastic process techniques can be used to analyze problems described by a set of random variables having known probability distributions. Statistical methods enable one to analyze the experimental data and build empirical models to obtain the most accurate representation of the physical situation.

7 1.2 STATEMENT OF AN OPTIMIZATION PROBLEM
An optimization or a mathematical programming problem can be stated as follows:
Find X = (x1, x2, ..., xn)^T which minimizes f(X)
subject to the constraints
gj(X) ≤ 0, j = 1, 2, ..., m,
lj(X) = 0, j = 1, 2, ..., p,  .... (1.1)
where X is an n-dimensional vector called the decision vector, f(X) is termed the objective function, and gj(X) and lj(X) are known as the inequality and equality constraints, respectively. The number of variables n and the number of constraints m and/or p need not be related in any way.
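As an illustration of the form (1.1), the sketch below poses a small problem to a numerical solver. The objective, both constraints, and the use of SciPy's minimize are assumptions made for illustration; they are not taken from the chapter.

# Minimal sketch (hypothetical f, g, l) of posing the general problem (1.1):
# minimize f(X) subject to g_j(X) <= 0 and l_j(X) = 0.
import numpy as np
from scipy.optimize import minimize

def f(x):                      # objective function f(X)
    return (x[0] - 1.0)**2 + (x[1] - 2.0)**2

def g1(x):                     # inequality constraint g1(X) <= 0
    return x[0] + x[1] - 3.0

def l1(x):                     # equality constraint l1(X) = 0
    return x[0] - 0.5 * x[1]

constraints = [
    # SciPy's 'ineq' convention is fun(x) >= 0, so pass -g1 for g1(X) <= 0.
    {"type": "ineq", "fun": lambda x: -g1(x)},
    {"type": "eq",   "fun": l1},
]

x0 = np.array([0.0, 0.0])      # starting decision vector X
res = minimize(f, x0, constraints=constraints, method="SLSQP")
print(res.x, res.fun)          # optimum point X* (= [1, 2]) and f(X*) (= 0)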

8 Some optimization problems do not involve any constraints and can be stated as:
Find X = (x1, x2, ..., xn)^T which minimizes f(X).
Such problems are called unconstrained optimization problems.

9 1.2.1 Decision Variables Any system or component is defined by a set of quantities, some of which are viewed as variables during the design process. In general, certain quantities are fixed at the outset; these are called preassigned parameters. All the other quantities are treated as variables in the design process and are called decision variables xi, i = 1, 2, ..., n. The decision variables are collectively represented as a decision vector X = (x1, x2, ..., xn)^T.

10 1.2.2 Constraints In many practical problems, the decision variables cannot be chosen arbitrarily; rather, they have to satisfy certain specified functional and other requirements. The restrictions that must be satisfied to produce an acceptable design are collectively called constraints. Constraints that represent limitations on the behavior or performance of the system are termed behavior or functional constraints. Constraints that represent physical limitations on design variables such as availability, fabricability, and transportability are known as geometric or side constraints.

11 1.2.3 Constraint Surface For illustration, consider an optimization problem with only inequality constraints gj(X) ≤ 0. The set of values of X that satisfy the equation gj(X) = 0 forms a hypersurface in the design space and is called a constraint surface. Note that this is an (n − 1)-dimensional subspace, where n is the number of design variables. The constraint surface divides the design space into two regions: one in which gj(X) < 0 and the other in which gj(X) > 0. Thus the points lying on the hypersurface satisfy the constraint gj(X) critically, the points lying in the region where gj(X) > 0 are infeasible or unacceptable, and the points lying in the region where gj(X) < 0 are feasible or acceptable. The collection of all the constraint surfaces gj(X) = 0, j = 1, 2, ..., m, which separates the acceptable region, is called the composite constraint surface.

12 Figure 1.2 shows a hypothetical two-dimensional design space where the infeasible region is indicated by hatched lines. A design point that lies on one or more than one constraint surface is called a bound point, and the associated constraint is called an active constraint. Design points that do not lie on any constraint surface are known as free points. Depending on whether a particular design point belongs to the acceptable or unacceptable region, it can be identified as one of the following four types: 1. Free and acceptable point 2. Free and unacceptable point 3. Bound and acceptable point 4. Bound and unacceptable point All four types of points are shown in Fig. 1.2.
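The four point types can be checked mechanically. The sketch below, with hypothetical constraint functions (not those of any example in the chapter), labels a design point according to the definitions above.

# Sketch: classify a design point per the four types above, given a list of
# inequality constraints g_j(X) <= 0.  The constraints here are hypothetical.
def classify(x, constraints, tol=1e-8):
    values = [g(x) for g in constraints]
    acceptable = all(v <= tol for v in values)     # g_j(X) <= 0 for all j
    bound = any(abs(v) <= tol for v in values)     # lies on some surface g_j(X) = 0
    kind = "bound" if bound else "free"
    status = "acceptable" if acceptable else "unacceptable"
    return f"{kind} and {status} point"

g = [lambda x: x[0] + x[1] - 4.0,   # g1(X) = x1 + x2 - 4 <= 0
     lambda x: -x[0],               # g2(X) = -x1 <= 0  (side constraint x1 >= 0)
     lambda x: -x[1]]               # g3(X) = -x2 <= 0

print(classify((1.0, 1.0), g))   # free and acceptable point
print(classify((2.0, 2.0), g))   # bound and acceptable point (g1 active)
print(classify((5.0, 0.0), g))   # bound and unacceptable point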

13 Figure 1.2: Constraint surfaces in a hypothetical two-dimensional design space.

14 1.2.4 Objective Function In general, there will be more than one acceptable design, and the purpose of optimization is to choose the best one. Thus a criterion has to be chosen for comparing the different alternatives. The criterion, when expressed as a function of the design variables, is known as the criterion, merit, or objective function. In engineering designs, the objective is usually taken as the minimization of cost; in other situations, the maximization of profit or efficiency is the obvious choice.

15 In some situations, there may be more than one criterion to be satisfied simultaneously. An optimization problem involving multiple objective functions is known as a multiobjective programming problem. With multiple objectives there arises a possibility of conflict, and one simple way to handle the problem is to construct an overall objective function as a linear combination of the conflicting multiple objective functions.

16 1.2.5 Objective Function Surfaces
The locus of all points satisfying f(X) = c = constant forms a hypersurface in the design space, and for each value of c there corresponds a different member of a family of surfaces. These surfaces, called objective function surfaces, are shown in a hypothetical two-dimensional design space in Fig. 1.3. Once the objective function surfaces are drawn along with the constraint surfaces, the optimum point can be determined without much difficulty. But the main problem is that, as the number of design variables exceeds two or three, the constraint and objective function surfaces become too complex even to visualize, and the problem has to be solved purely as a mathematical problem.

17 Figure 1.3: Contours of the objective function.

18 1.3 Graphical Solution Consider the problem: minimize f(x1, x2) subject to:
g1(X) = x1x2 − 1.593 ≥ 0
g2(X) = x1x2(x1² + x2²) − 47.3 ≥ 0

19 g1(X) = x1x2 − 1.593 ≥ 0, or x1x2 ≥ 1.593. Thus the curve x1x2 = 1.593 represents the constraint surface g1(X) = 0. This curve can be plotted by finding several points on it: give a series of values to x1 and find the corresponding values of x2 that satisfy the relation x1x2 = 1.593, that is, x2 = 1.593/x1. (The table of computed (x1, x2) values is not reproduced here.)

20 These points are plotted and a curve P1Q1 passing through them is drawn as shown in Fig. 1.4; the infeasible region, g1(X) < 0 (i.e., x1x2 < 1.593), is shown by hatched lines. The boundary of the second constraint, g2(X) = 0, is treated in the same way: pairs (x1, x2) on the curve are computed, plotted as curve P2Q2, the feasible region is identified, and the infeasible region is shown by hatched lines in Fig. 1.4.

21 Points on g2(X) = 0: Let x1 = 2, for example; then 2x2(4 + x2²) = 47.3.
Solving this cubic equation (the slides use an online polynomial-roots calculator) gives x2 ≈ 2.41. Other values of x1 are then assumed and the corresponding values of x2 are calculated in the same way.
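The same root can be checked numerically; the short sketch below uses numpy.roots (an assumption; the slides use an online solver) on the cubic 2x2³ + 8x2 − 47.3 = 0.

# Check the root of 2*x2*(4 + x2**2) = 47.3, i.e. 2*x2**3 + 8*x2 - 47.3 = 0.
import numpy as np

roots = np.roots([2.0, 0.0, 8.0, -47.3])        # coefficients of 2x^3 + 0x^2 + 8x - 47.3
real = roots[np.isclose(roots.imag, 0.0)].real  # keep the single real root
print(real)                                     # ~2.41; the other two roots are complex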

22-25 (Screenshots of the cubic-equation solver output: for each trial value of x1 the solver returns one real root and a pair of complex roots, i = √−1.)

26 Polynomials of Degree Three
The solution of ax² + bx + c = 0 is x = [−b ± √(b² − 4ac)] / (2a). There is an analogous formula for polynomials of degree three. The solution of ax³ + bx² + cx + d = 0 is, briefly,
x = {q + [q² + (r − p²)³]^(1/2)}^(1/3) + {q − [q² + (r − p²)³]^(1/2)}^(1/3) + p,
where p = −b/(3a), q = p³ + (bc − 3ad)/(6a²), r = c/(3a).
It is not recommended to memorize these formulas. A cubic has either 3 real roots, or 1 real and 2 complex roots.
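The sketch below implements the closed-form expression quoted above; numpy is an assumption. Real cube roots are used, so the closed form is applied only when q² + (r − p²)³ ≥ 0 (one real root), with a numerical fallback otherwise.

# Sketch of the closed-form cubic solution quoted above:
#   x = cbrt(q + s) + cbrt(q - s) + p,  s = sqrt(q**2 + (r - p**2)**3),
#   p = -b/(3a), q = p**3 + (b*c - 3*a*d)/(6*a**2), r = c/(3a).
import numpy as np

def cubic_real_root(a, b, c, d):
    p = -b / (3.0 * a)
    q = p**3 + (b * c - 3.0 * a * d) / (6.0 * a**2)
    r = c / (3.0 * a)
    disc = q**2 + (r - p**2)**3
    if disc < 0.0:                         # three real roots: fall back to numpy
        roots = np.roots([a, b, c, d])
        return roots[np.isclose(roots.imag, 0.0)].real  # array of real roots
    s = np.sqrt(disc)
    return np.cbrt(q + s) + np.cbrt(q - s) + p

# The constraint cubic from the example: 2*x2**3 + 8*x2 - 47.3 = 0  ->  x2 ~ 2.41
print(cubic_real_root(2.0, 0.0, 8.0, -47.3))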

27 The remaining (x1, x2) points on g2(X) = 0 are computed in the same way, plotted as curve P2Q2, the feasible region is identified, and the infeasible region is shown by hatched lines in Fig. 1.4.

28 The plotting of side constraints is very simple since they represent straight lines.
After plotting all six constraints, the feasible region is seen to be the bounded area ABCDEA.

29 Next, the contours of the objective function are to be plotted before finding the optimum point. For this, we plot the curves given by f(X) = c = constant for a series of values of c. By giving different values to c, the contours of f can be plotted with the help of the following points:

30 (Table of computed contour points; not reproduced here.)

31 These contours are shown in Fig. 1.4, and it can be seen that the objective function cannot be reduced below a value of 26.53 (corresponding to point B) without violating some of the constraints. Thus the optimum solution is given by point B, with f* = 26.53.

32 Figure 1.4 Graphical optimization of the Example

33 1.4 Nonlinear Programming Problem
If any of the functions among the objective and constraint functions in Eq. (1.1) is nonlinear, the problem is called a nonlinear programming (NLP) problem. This is the most general programming problem and all other problems can be considered as special cases of the NLP problem.

34 1.5 CLASSICAL OPTIMIZATION THEORY
The classical methods of optimization are useful in finding the optimum solution of continuous and differentiable functions. These methods are analytical and make use of the techniques of differential calculus in locating the optimum points. The methods may not be suitable for efficient numerical computations. Moreover, since some practical problems involve objective functions that are not continuous and/or differentiable, the classical optimization techniques have limited scope in practical applications. However, a study of the calculus methods of optimization forms a basis for developing most of the numerical techniques of optimization.

35 1.6 UNCONSTRAINED PROBLEMS
1.6.1 SINGLE-VARIABLE OPTIMIZATION WITH NO CONSTRAINTS
A function of one variable f(x) is said to have a relative or local minimum at x = x* if f(x*) ≤ f(x* + h) for all sufficiently small positive and negative values of h. Similarly, a point x* is called a relative or local maximum if f(x*) ≥ f(x* + h) for all values of h sufficiently close to zero. A function f(x) is said to have a global or absolute minimum at x* if f(x*) ≤ f(x) for all x in the domain over which f(x) is defined, and not just for all x close to x*. Similarly, a point x* will be a global maximum of f(x) if f(x*) ≥ f(x) for all x in the domain.

36 Figure 1.5 shows the difference between the relative (local) and global optimum points.
Figure 1.5 Relative and global minima.

37 A single-variable optimization problem is one in which the value of x = x* is to be found in the interval [a,b] such that x* minimizes f(x). The following two theorems provide the necessary and sufficient conditions for the relative (local) minimum of a function of a single variable.

38 Theorem 1.1: Necessary Condition
If a function f(x) is defined in the interval a < x < b and has a relative minimum at x = x*, where a < x* < b, and if the derivative df(x)/dx = f′(x) exists as a finite number at x = x*, then f′(x*) = ∇f(x*) = 0. (The symbol ∇, nabla, denotes the gradient.)

39 Notes: 1. This theorem can be proved even if x* is a relative maximum. 2. The theorem does not say what happens if a minimum or maximum occurs at a point x* where the derivative fails to exist. For example, in Fig. 1.6, f′(x*) does not exist, and the theorem is not applicable.

40 Figure 1.6 Derivative undefined at x*.

41 3. The theorem does not say that the function necessarily will have a minimum or maximum at every point where the derivative is zero. For example, the derivative f′(x) = 0 at x = 0 for the function shown in Fig. 1.7; however, this point is neither a minimum nor a maximum. In general, a point x* at which f′(x*) = 0 is called a stationary point; when it is neither a minimum nor a maximum, as in Fig. 1.7, it is an inflection point.

42 Figure 1.7 Stationary (inflection) point.

43 Theorem 1.2: Sufficient Condition
Let f′(x*) = f″(x*) = ... = f^(n−1)(x*) = 0, but f^(n)(x*) ≠ 0. Then f(x*) is:
(i) a minimum value of f(x) if f^(n)(x*) > 0 and n is even;
(ii) a maximum value of f(x) if f^(n)(x*) < 0 and n is even;
(iii) neither a maximum nor a minimum if n is odd; in this case the point x* is a point of inflection.

44 Example 1
Determine the maximum and minimum values of the function f(x) = 12x⁵ − 45x⁴ + 40x³ + 5.
Solution: Taking the first derivative of the function yields
f′(x) = 60x⁴ − 180x³ + 120x² = 60x²(x² − 3x + 2) = 60x²(x − 1)(x − 2).
The first derivative is zero at x = 0, x = 1, and x = 2. In the next step, take the second derivative:
f″(x) = 240x³ − 540x² + 240x.
At x = 0: f″(0) = 0, and f‴(x) = 720x² − 1080x + 240 gives f‴(0) = 240 ≠ 0; since n = 3 is odd, x = 0 is an inflection point.
At x = 1: f″(1) = −60 < 0, so x = 1 is a relative (local) maximum with f(1) = 12.
At x = 2: f″(2) = 240 > 0, so x = 2 is a relative (local) minimum with f(2) = −11.
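The derivative test of Theorem 1.2 can be automated; the sketch below (sympy assumed) reproduces the conclusions of Example 1.

# Sketch: apply the nth-derivative sufficient condition (Theorem 1.2) to
# f(x) = 12x^5 - 45x^4 + 40x^3 + 5 using sympy.
import sympy as sp

x = sp.symbols('x')
f = 12*x**5 - 45*x**4 + 40*x**3 + 5

for x0 in sp.solve(sp.diff(f, x), x):          # stationary points: 0, 1, 2
    n, fn = 1, sp.diff(f, x)
    while fn.subs(x, x0) == 0:                 # find the first nonvanishing derivative
        n, fn = n + 1, sp.diff(fn, x)
    value = fn.subs(x, x0)
    if n % 2 == 1:
        kind = "inflection point"
    else:
        kind = "relative minimum" if value > 0 else "relative maximum"
    print(x0, kind, "f =", f.subs(x, x0))
# 0 inflection point, 1 relative maximum (f = 12), 2 relative minimum (f = -11)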

45 Second Order Equations (worked quadratic-equation slide; the formula shown is not reproduced.)

46 Example 2
Determine the maximum and minimum values of the function
f(x) = 10x⁶ − 48x⁵ + 15x⁴ + 200x³ − 120x² − 480x + 100.
Solution:
f′(x) = 60x⁵ − 240x⁴ + 60x³ + 600x² − 240x − 480 = 0, i.e.
x⁵ − 4x⁴ + x³ + 10x² − 4x − 8 = 0.
This quintic is solved using the Polynomial Roots Calculator.

47-48 Polynomial Roots Calculator (solver screenshots; not reproduced.)

49 The solver gives f′(x) = 0 at x = −1 and x = 2 (the quintic factors as (x + 1)²(x − 2)³). Dividing out the common factor 60 and differentiating x⁵ − 4x⁴ + x³ + 10x² − 4x − 8 repeatedly:
f″(x)/60 = 5x⁴ − 16x³ + 3x² + 20x − 4, which is zero at x = −1;
f‴(x)/60 = 20x³ − 48x² + 6x + 20, which at x = −1 equals −20 − 48 − 6 + 20 = −54 ≠ 0.
The first nonvanishing derivative at x = −1 is of odd order (n = 3), so x = −1 is an inflection point.
At x = 2, both f″(2) = 0 and f‴(2) = 0, while f⁽⁴⁾(x)/60 = 60x² − 96x + 6 gives 240 − 192 + 6 = 54 ≠ 0 at x = 2.
Here n = 4 is even and f⁽⁴⁾(2) > 0, so x = 2 is a relative minimum, with fmin = f(2) = −396.
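A sketch (sympy assumed) that checks the factorization of f′(x) and the higher-derivative test at x = −1 and x = 2 for the polynomial reconstructed above:

# Sketch: verify Example 2 -- roots of f'(x) and the higher-derivative test.
import sympy as sp

x = sp.symbols('x')
f = 10*x**6 - 48*x**5 + 15*x**4 + 200*x**3 - 120*x**2 - 480*x + 100

print(sp.factor(sp.diff(f, x)))            # 60*(x - 2)**3*(x + 1)**2

for x0 in (-1, 2):
    n, d = 1, sp.diff(f, x)
    while d.subs(x, x0) == 0:              # order of first nonvanishing derivative
        n, d = n + 1, sp.diff(d, x)
    print(x0, "order:", n, "value:", d.subs(x, x0), "f:", f.subs(x, x0))
# x = -1: order 3 (odd)            -> inflection point
# x =  2: order 4 (even, positive) -> relative minimum, f(2) = -396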

50 Solving Using Excel Function:
f(x) = 10x⁶ − 48x⁵ + 15x⁴ + 200x³ − 120x² − 480x + 100
The powers of x are built up in separate columns, e.g. B3: =G3^6, C3: =G3^5, D3: =G3^4, E3: =G3^3, and so on, and then combined with the coefficients to evaluate f(x).

51-52 (Excel worksheet screenshots for the tabulated values of f(x); not reproduced.)

53 1.6.2 MULTIVARIABLE OPTIMIZATION WITH NO CONSTRAINTS
Theorem 1.3 (Necessary Condition): If f(X) has an extreme point (maximum or minimum) at X = X* and if the first partial derivatives of f(X) exist at X*, then ∇f(X*) = 0.
Theorem 1.4 (Sufficient Condition): A sufficient condition for a stationary point X* to be an extreme point is that the matrix of second partial derivatives (Hessian matrix) of f(X) evaluated at X* is:
(i) positive definite when X* is a relative minimum point;
(ii) negative definite when X* is a relative maximum point.

54 Hessian matrix The Hessian matrix (or simply the Hessian) is the square matrix of second-order partial derivatives of a function. It was developed in the 19th century by Ludwig Otto Hesse and later named after him. Given a real-valued function f(x1, x2, ..., xn), if all second partial derivatives of f exist, then the Hessian matrix of f is the matrix H(f)ij(x) = DiDjf(x), where x = (x1, x2, ..., xn) and Di is the differentiation operator with respect to the ith argument; that is, the (i, j) entry of the Hessian is ∂²f/∂xi∂xj.

55 (The n × n Hessian matrix written out in full, with entries ∂²f/∂xi∂xj.)

56 A test that can be used to check the positive definiteness of a matrix A of order n involves evaluation of the leading principal minors A1 = |a11|, A2 = |a11 a12; a21 a22|, ..., An = |A|. The matrix A will be positive definite if and only if all the values A1, A2, ..., An are positive. If some of the Aj are positive and the remaining Aj are zero, the matrix A will be positive semidefinite. The matrix A will be negative definite if and only if the sign of Aj is (−1)^j for j = 1, 2, ..., n (i.e., A1 is negative, A2 is positive, A3 is negative, and so on).
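The principal-minor test is easy to code; the sketch below (numpy assumed) applies it to the Hessian of f(x,y) = x² − y², the saddle-point example discussed a few slides further on.

# Sketch: test a symmetric matrix for definiteness via the leading principal
# minors A1, A2, ..., An described above.
import numpy as np

def definiteness(A):
    n = A.shape[0]
    minors = [np.linalg.det(A[:k, :k]) for k in range(1, n + 1)]
    if all(m > 0 for m in minors):
        return "positive definite"
    if all((-1) ** k * m > 0 for k, m in enumerate(minors, start=1)):
        return "negative definite"
    return "indefinite or semidefinite (test inconclusive)"

H = np.array([[2.0, 0.0],
              [0.0, -2.0]])        # Hessian of f(x, y) = x**2 - y**2
print(definiteness(H))             # indefinite: A1 = 2 > 0, A2 = -4 < 0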

57 Depending on the second-derivative matrix Hf(a,b), the graph of f(x,y) might look like an elliptic paraboloid pointing upward, centered at the point (a,b) (shown by the blue dot in Fig. 1.8). In this case, we say that Hf(a,b) is positive definite, and f has a local minimum at (a,b). Figure 1.8

58 Alternatively, the graph of f(x,y) might look like an elliptic paraboloid pointing downward, centered at the point (a,b) (shown by the red dot in Fig. 1.9). In this case, we say that Hf(a,b) is negative definite, and f has a local maximum at (a,b). Figure 1.9

59 Saddle Point In the case of a function of two variables, f(x,y), the Hessian matrix may be neither positive nor negative definite at a point (x*,y*) at which ∂f/∂x = ∂f/∂y = 0. In such a case, the point (x*,y*) is called a saddle point. The characteristic of a saddle point is that it corresponds to a relative minimum or maximum of f(x,y) with respect to one variable, say x (the other variable being fixed at y = y*), and a relative maximum or minimum of f(x,y) with respect to the second variable, say y (the other variable being fixed at x = x*).

60 Figure 1.10

61 Figure 1.11: Saddle point of the function f(x,y) = x² − y².
f(x, y*) = f(x, 0) has a relative minimum and f(x*, y) = f(0, y) has a relative maximum at the saddle point (x*, y*).

62 Example 1
Consider the function f(x,y) = x² − y². The first partial derivatives are ∂f/∂x = 2x and ∂f/∂y = −2y, which are zero at x* = 0 and y* = 0. The Hessian matrix of f is
H = [ 2   0
      0  −2 ],
which is the same at every point, in particular at (x*, y*) = (0, 0). The determinant H1 = 2 (positive), and the determinant H2 = (2)(−2) − (0)(0) = −4 (negative), so H is indefinite. Since the matrix is neither positive definite nor negative definite, the point (x* = 0, y* = 0) is a saddle point.

63 Example 2
Find the critical points of the function f(x1, x2).
SOLUTION: The necessary conditions for the existence of an extreme point are ∂f/∂x1 = 0 .... (1) and ∂f/∂x2 = 0 .... (2). From (1), x1 = 0 or ( ), and from (2), x2 = 0 or ( ). These equations are therefore satisfied at the four points (0, 0), (0, ), ( , 0), and ( , ).

64 To find the nature of these extreme points, we have to use the sufficiency condition: the second-order partial derivatives of f are computed, the Hessian matrix H of f is formed, and its definiteness is checked at each of the four candidate points.

65 (The Hessian H evaluated at each of the four candidate points; not reproduced.)

66 1.7 CONSTRAINED PROBLEMS
1.7.1 MULTIVARIABLE OPTIMIZATION WITH EQUALITY CONSTRAINTS
We consider the optimization of continuous functions subject to equality constraints:
Minimize f(X) subject to gj(X) = 0, j = 1, 2, ..., m, where X = (x1, x2, ..., xn)^T.
Here m is less than or equal to n; otherwise (if m > n), the problem becomes overdefined and, in general, there will be no solution. There are several methods available for the solution of this problem: constrained variation, the Jacobian method, direct substitution, and Lagrange multipliers.

67 1.7.1.1 Method of Direct Substitution
For a problem with n variables and m equality constraints, it is theoretically possible to solve the m equality constraints simultaneously and express any set of m variables in terms of the remaining n − m variables. When these expressions are substituted into the original objective function, there results a new objective function involving only n − m variables. The new objective function is not subjected to any constraint, and hence its optimum can be found by using unconstrained optimization techniques, as illustrated in the sketch below.
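A minimal sketch of direct substitution (sympy assumed) on a hypothetical problem, minimize x1² + x2² subject to x1 + x2 = 4 (not the slides' example):

# Sketch of direct substitution with sympy on a hypothetical problem:
# minimize x1**2 + x2**2 subject to x1 + x2 - 4 = 0.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = x1**2 + x2**2                       # objective
g = x1 + x2 - 4                         # equality constraint g = 0

x1_expr = sp.solve(g, x1)[0]            # express x1 in terms of x2: x1 = 4 - x2
f_sub = f.subs(x1, x1_expr)             # unconstrained function of x2 only

x2_star = sp.solve(sp.diff(f_sub, x2), x2)[0]          # first-order condition
assert sp.diff(f_sub, x2, 2).subs(x2, x2_star) > 0     # second derivative > 0 -> minimum
x1_star = x1_expr.subs(x2, x2_star)
print(x1_star, x2_star, f.subs({x1: x1_star, x2: x2_star}))   # 2, 2, 8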

68 Example 1
In this example either x1 or x2 can be eliminated without difficulty. Solving the constraint for x1 gives relation (1); substituting this expression for x1 into the objective function yields a new, equivalent objective function in terms of the single variable x2. The constraint in the original problem has now been eliminated, and f(x2) is an unconstrained function of one independent variable.

69 We can now minimize the new objective function by setting its first derivative with respect to x2 equal to zero and solving for the optimal value x2*. Since f″(x2) = 28 (positive), X* is a local minimum. Once x2* is obtained, x1* follows directly from relation (1).

70 Solving Using Excel. Problem formulas: F4: =SUMPRODUCT(B4:C4;B3:C3); F6: =SUMPRODUCT(D6:E6;D3:E3).

71-75 (Excel worksheet screenshots for this example; not reproduced.)

76 Example 2
The profit analysis model: maximize the profit z = vp − cf − vcv ........ (1)
The demand is represented by v = 1,500 − 24.6p ........ (2)
where v = volume (quantity), p = price, cf = fixed cost = $10,000, and cv = variable cost = $8 per unit.
Substituting the values of cf and cv into (1): z = vp − 10,000 − 8v ........ (3)
Substituting (2) into (3): z = 1,500p − 24.6p² − 10,000 − 8(1,500 − 24.6p), i.e.
z = 1,696.8p − 24.6p² − 22,000 ........ (4)
dz/dp = 1,696.8 − 49.2p = 0 at the critical point, so p* = 34.49.
d²z/dp² = −49.2 (negative), so p* is a local maximum.
Substituting into (2): v* = 1,500 − 24.6(34.49) = 651.55.
Substituting into (3): zmax = (651.55)(34.49) − 10,000 − 8(651.55) = 7,259.56.
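The arithmetic above can be reproduced in a few lines; the sketch below (plain Python) recomputes p*, v*, and zmax from the model's data. Small differences from the slide come from rounding p* to 34.49.

# Sketch: reproduce the profit-model arithmetic above.
cf, cv = 10_000.0, 8.0                 # fixed and variable cost
a, b = 1_500.0, 24.6                   # demand: v = a - b*p

# z(p) = (a - b*p)*p - cf - cv*(a - b*p) = (a + b*cv)*p - b*p**2 - (cf + cv*a)
p_star = (a + b * cv) / (2.0 * b)      # dz/dp = (a + b*cv) - 2*b*p = 0
v_star = a - b * p_star
z_max = v_star * p_star - cf - cv * v_star
print(round(p_star, 2), round(v_star, 2), round(z_max, 2))   # ~34.49, ~651.6, ~7259.5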

77 Lagrange Method The basic features of the Lagrange multiplier method are presented first for a simple problem of two variables with one constraint. The extension of the method to a general problem of n variables with m constraints is given later.

78 Problem with Two Variables and One Constraint.
Consider the problem: minimize f(x1, x2) subject to g(x1, x2) = 0.
Define the Lagrange function L(x1, x2, λ) = f(x1, x2) + λ g(x1, x2), where λ is called the Lagrange multiplier. L is treated as a function of the three variables x1, x2, and λ. The necessary conditions for an extremum are:
∂L/∂x1 = 0, ∂L/∂x2 = 0, ∂L/∂λ = g(x1, x2) = 0.

79 Theorem: Sufficient Condition
A sufficient condition for f(X) to have a relative minimum at X* is that the quadratic form Q, defined by
Q = Σi Σj (∂²L/∂xi∂xj) dxi dxj,
evaluated at X = X*, be positive definite for all values of dX for which the constraints are satisfied. If Q is negative definite for all choices of the admissible variations dX, X* will be a constrained maximum of f(X). It has been shown by Hancock that a necessary condition for the quadratic form Q to be positive (negative) definite for all admissible variations dX is that each root z of the polynomial defined by the following determinantal equation be positive (negative):

80 For the two-variable, one-constraint case the determinantal equation is
| L11 − z   L12       g1 |
| L21       L22 − z   g2 |  = 0,
| g1        g2        0  |
where Lij = ∂²L/∂xi∂xj and gi = ∂g/∂xi, all evaluated at (X*, λ*).

81 Example
Find the solution of the following problem using the Lagrange multiplier method: f(x,y) = x⁻¹y⁻², subject to g(x,y) = x² + y² − 4 = 0.
The Lagrange function is L(x,y,λ) = f(x,y) + λg(x,y) = x⁻¹y⁻² + λ(x² + y² − 4).
The necessary conditions for the extremum of f(x,y) give:
∂L/∂x = −x⁻²y⁻² + 2λx = 0 ........ (1)
∂L/∂y = −2x⁻¹y⁻³ + 2λy = 0 ........ (2)
∂L/∂λ = x² + y² − 4 = 0 ........ (3)
From (1): λ = (1/2) x⁻³y⁻² ........ (4)
From (2): λ = x⁻¹y⁻⁴ ........ (5)
From (4) and (5): (1/2) x⁻³y⁻² = x⁻¹y⁻⁴.

82 From (1/2) x⁻³y⁻² = x⁻¹y⁻⁴, multiplying both sides by x³y⁴ gives (1/2) y² = x², or x² = (1/2) y² ........ (6)
Substituting (6) into (3): (1/2) y² + y² = 4, so y² = 8/3 and y* = √(8/3) ≈ 1.633.
From (6): x* = 2/√3 ≈ 1.155.
From (5): λ* = x*⁻¹ y*⁻⁴ = (√3/2)(3/8)² = 9√3/128 ≈ 0.122.

83 At (X*, λ*) the second derivatives are L11 = ∂²L/∂x² = 2x⁻³y⁻² + 2λ ≈ 0.731, L12 = ∂²L/∂x∂y = 2x⁻²y⁻³ ≈ 0.344, and L22 = ∂²L/∂y² = 6x⁻¹y⁻⁴ + 2λ ≈ 0.974, while the constraint gradients are g1 = 2x* ≈ 2.309 and g2 = 2y* ≈ 3.266. The determinantal equation is therefore
| 0.731 − z   0.344       2.309 |
| 0.344       0.974 − z   3.266 |  = 0.
| 2.309       3.266       0     |

84 Expanding along the first row:
(0.731 − z)[(0.974 − z)(0) − (3.266)(3.266)] − (0.344)[(0.344)(0) − (3.266)(2.309)] + (2.309)[(0.344)(3.266) − (0.974 − z)(2.309)] = 0,
which is linear in z and gives z ≈ 0.49. Since the single root is positive, Q is positive definite for the admissible variations, and (x*, y*) is a constrained relative minimum, with f*(x,y) = x*⁻¹y*⁻² = 3√3/16 ≈ 0.325.
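The whole computation can be checked symbolically. The sketch below (sympy assumed) solves the necessary conditions (1)-(3) and then evaluates the determinantal equation in z.

# Sketch: solve the Lagrange conditions for f = 1/(x*y**2) subject to
# x**2 + y**2 = 4, then evaluate the determinantal equation in z.
import sympy as sp

x, y, lam = sp.symbols('x y lam', positive=True)
z = sp.symbols('z')

f = 1 / (x * y**2)                      # f(x, y) = x^-1 y^-2
g = x**2 + y**2 - 4                     # constraint g(x, y) = 0
L = f + lam * g                         # Lagrange function

sol = sp.solve([sp.diff(L, x), sp.diff(L, y), g], [x, y, lam], dict=True)[0]
print(sol)                              # x* = 2/sqrt(3), y* = sqrt(8/3), lam* = 9*sqrt(3)/128

# Bordered determinantal equation  | Lxx - z  Lxy      gx |
#                                  | Lxy      Lyy - z  gy |  = 0
#                                  | gx       gy       0  |
Lxx = sp.diff(L, x, 2).subs(sol)
Lxy = sp.diff(L, x, y).subs(sol)
Lyy = sp.diff(L, y, 2).subs(sol)
gx, gy = sp.diff(g, x).subs(sol), sp.diff(g, y).subs(sol)

D = sp.Matrix([[Lxx - z, Lxy, gx],
               [Lxy, Lyy - z, gy],
               [gx, gy, 0]])
root = sp.solve(D.det(), z)
print([float(r) for r in root])         # one root, ~0.49 > 0 -> X* is a relative minimum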

85-87 Determinant Calculation (side example, shown as screenshots, of evaluating a 3 × 3 determinant by row reduction and cofactor expansion, e.g. R2 → R2 + 8R1, giving −3[1(1) − 2(0)] = −3.)

88 Necessary Conditions for a General Problem
The equations can be extended to the case of a general problem with n variables and m equality constraints:
Minimize f(X) = f(x1, x2, ..., xn) subject to gj(X) = 0, j = 1, 2, ..., m.
The Lagrange function L in this case is defined by introducing one Lagrange multiplier λj for each constraint gj(X):
L(X, λ) = f(X) + λ1 g1(X) + λ2 g2(X) + ... + λm gm(X).
The necessary conditions for the extremum of L are
∂L/∂xi = 0, i = 1, 2, ..., n, and ∂L/∂λj = gj(X) = 0, j = 1, 2, ..., m.

89 Sufficient Condition A sufficient condition for f(X) to have a relative minimum at X* is that the quadratic form Q, defined by
Q = Σi Σj (∂²L/∂xi∂xj) dxi dxj,
evaluated at X = X*, be positive definite for all values of the variations dX for which the constraints are satisfied. If Q is negative definite for all choices of the admissible variations dxi, X* will be a constrained maximum of f(X). It has been shown by Hancock that a necessary condition for the quadratic form Q to be positive (negative) definite for all admissible variations dX is that each root of the polynomial in z, defined by the following determinantal equation, be positive (negative):

90 Here Lij = ∂²L/∂xi∂xj evaluated at (X*, λ*) and gij = ∂gi/∂xj evaluated at X*; the determinantal equation is the bordered determinant of order n + m whose top-left n × n block has entries Lij − z δij, bordered by the gij, with a zero m × m block, set equal to zero. On expansion, this equation leads to an (n − m)th-order polynomial in z. If some of the roots of this polynomial are positive while the others are negative, the point X* is not an extreme point.

91 1.7.2 MULTIVARIABLE OPTIMIZATION WITH INEQUALITY CONSTRAINTS
Consider the following problem: minimize f(X) subject to gj(X) ≤ 0, j = 1, 2, ..., m.
Kuhn-Tucker Conditions: these are the conditions that must be satisfied at a constrained minimum point X*. They are, in general, not sufficient to ensure a relative minimum. However, there is a class of problems, called convex programming problems, for which the Kuhn-Tucker conditions are necessary and sufficient for a global minimum.

92 Kuhn-Tucker Conditions
The Kuhn-Tucker conditions can be stated as follows:
∂f/∂xi + Σj λj ∂gj/∂xi = 0, i = 1, 2, ..., n,
λj gj(X) = 0, j = 1, 2, ..., m,
gj(X) ≤ 0, j = 1, 2, ..., m,
λj ≥ 0, j = 1, 2, ..., m.
Note that if the problem is one of maximization or if the constraints are of the type gj ≥ 0, the λj have to be nonpositive. On the other hand, if the problem is one of maximization with constraints in the form gj ≥ 0, the λj have to be nonnegative.
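The conditions can be verified numerically at a candidate point. The sketch below uses a small hypothetical convex problem (not from the slides) and numpy; it checks stationarity, feasibility, nonnegativity of the λj, and complementary slackness.

# Sketch: check the Kuhn-Tucker conditions stated above for a hypothetical
# convex problem: minimize (x1 - 2)**2 + (x2 - 1)**2 subject to
# g1 = x1 + x2 - 2 <= 0 and g2 = -x1 <= 0.
import numpy as np

def grad_f(x):  return np.array([2*(x[0] - 2), 2*(x[1] - 1)])
def g(x):       return np.array([x[0] + x[1] - 2, -x[0]])
def grad_g(x):  return np.array([[1.0, 1.0], [-1.0, 0.0]])   # rows: grad g1, grad g2

def kkt_satisfied(x, lam, tol=1e-6):
    stationarity  = np.allclose(grad_f(x) + lam @ grad_g(x), 0, atol=tol)
    feasibility   = np.all(g(x) <= tol)
    nonnegativity = np.all(lam >= -tol)
    complementary = np.allclose(lam * g(x), 0, atol=tol)
    return stationarity and feasibility and nonnegativity and complementary

x_star = np.array([1.5, 0.5])      # candidate minimum
lam    = np.array([1.0, 0.0])      # multipliers: g1 active, g2 inactive
print(kkt_satisfied(x_star, lam))  # True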


