EASTERN MEDITERRANEAN UNIVERSITY, Department of Industrial Engineering. Nonlinear Optimization, Spring 2014-15. Instructor: Prof. Dr. Sahand Daneshvar.

1 APPLICATIONS OF THE JACOBIAN METHOD. Submitted by: AAKASH AHMED (Student number: 145322)

2 Constrained Derivatives (Jacobian) Method. Minimize z = f(X) subject to g(X) = 0, where g(X) = (g1(X), g2(X), ..., gm(X)) and X = (x1, x2, ..., xn). The functions f(X) and gi(X), i = 1, 2, ..., m, are twice continuously differentiable. The idea of constrained derivatives is to develop a closed-form expression for the first partial derivatives of f(X) at all points that satisfy the constraints g(X) = 0. The corresponding stationary points are identified as the points at which these partial derivatives vanish. The sufficiency conditions introduced earlier can then be used to check the identity of the stationary points.

3 To clarify the proposed concept, consider f(x1, x2) as illustrated in Figure 18.4. This function is to be minimized subject to the constraint g(x1, x2) = x2 − b = 0, where b is a constant. In the figure, the curve passing through the three points A, B, and C represents the values of f(x1, x2) for which the constraint is always satisfied. The constrained derivatives method defines the gradient of f(x1, x2) at any point on the curve ABC. Point B, at which the constrained derivative vanishes, is a stationary point of the constrained problem. The method is now developed mathematically. By Taylor's theorem, for X + ∂X in the feasible neighborhood of X, we have

f(X + ∂X) − f(X) = ∇f(X) ∂X + O(∂xj²)

and

g(X + ∂X) − g(X) = ∇g(X) ∂X + O(∂xj²)

4 Demonstration of the idea of the Jacobian method

5 For feasibility, we must have g(X + ∂X) = 0 as ∂X → 0, and it follows that

∂f(X) = ∇f(X) ∂X
∂g(X) = ∇g(X) ∂X = 0

This gives (m + 1) equations in (n + 1) unknowns, ∂f(X) and ∂X. Note that ∂f(X) is a dependent variable, and hence is determined as soon as ∂X is known. This means that, in effect, we have m equations in n unknowns. If m > n, at least (m − n) equations are redundant. Eliminating the redundancy, the system reduces to m ≤ n. If m = n, the solution is ∂X = 0, and X has no feasible neighborhood, which means that the solution space consists of one point only. The remaining case, where m < n, requires further elaboration.

6 Define X = (Y, Z) such that Y = (y1, y2, ..., ym) and Z = (z1, z2, ..., zn−m). The vectors Y and Z are called the dependent and independent variables, respectively. Rewriting the gradient vectors of f and g in terms of Y and Z, we get

∇f(Y, Z) = (∇Y f, ∇Z f)
∇g(Y, Z) = (∇Y g, ∇Z g)

7 Define

J = ∇Y g (an m × m matrix),  C = ∇Z g (an m × (n − m) matrix)

J is called the Jacobian matrix and C the control matrix. The Jacobian J is assumed nonsingular. This is always possible because the given m equations are independent by definition; the components of the vector Y must thus be selected from among those of X such that J is nonsingular. The original set of equations in ∂f(X) and ∂g(X) may be written as

∂f(X) = ∇Y f ∂Y + ∇Z f ∂Z
J ∂Y = −C ∂Z

8 Because J is nonsingular, its inverse J⁻¹ exists. Hence,

∂Y = −J⁻¹ C ∂Z

Substituting for ∂Y in the equation for ∂f(X) gives ∂f as a function of ∂Z, that is,

∂f(X) = (∇Z f − ∇Y f J⁻¹ C) ∂Z

From this equation, the constrained derivative with respect to the independent vector Z is given by

∇c f = ∂f(Y, Z) / ∂Z = ∇Z f − ∇Y f J⁻¹ C
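As a sanity check, the formula ∇c f = ∇Z f − ∇Y f J⁻¹ C can be evaluated numerically. The sketch below uses a small made-up problem (not one from the slides): minimize f = x1² + 3x2² subject to x1 + x2 − 1 = 0, with Y = (x1) dependent and Z = (x2) independent, and compares the constrained derivative against a finite difference of the reduced one-variable function.

```python
import numpy as np

# Hedged sketch (toy problem, not from the slides):
#   minimize f(x1, x2) = x1^2 + 3*x2^2  subject to  g = x1 + x2 - 1 = 0,
# with Y = (x1) dependent and Z = (x2) independent.

def constrained_gradient(x1, x2):
    grad_Y_f = np.array([[2.0 * x1]])   # df/dx1
    grad_Z_f = np.array([[6.0 * x2]])   # df/dx2
    J = np.array([[1.0]])               # dg/dx1 (m x m Jacobian)
    C = np.array([[1.0]])               # dg/dx2 (m x (n-m) control matrix)
    return grad_Z_f - grad_Y_f @ np.linalg.inv(J) @ C  # grad_c f

# Compare with the derivative of the reduced function phi(z) = f(1 - z, z)
# by central finite differences at the feasible point (x1, x2) = (0.6, 0.4).
z, h = 0.4, 1e-6
phi = lambda z: (1.0 - z) ** 2 + 3.0 * z ** 2
fd = (phi(z + h) - phi(z - h)) / (2.0 * h)
cg = constrained_gradient(1.0 - z, z)[0, 0]
print(cg, fd)   # both equal 6*0.4 - 2*0.6 = 1.2

# Setting grad_c f = 0 with x1 = 1 - x2 gives 8*x2 - 2 = 0,
# i.e. the stationary point (x1, x2) = (3/4, 1/4).
```

The finite difference of the reduced function and the Jacobian-method formula agree, which is exactly the point of the derivation: the constrained derivative is the ordinary derivative of f viewed as a function of the independent variables alone.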

9 The sufficiency conditions are similar to those developed earlier. The Hessian matrix will correspond to the independent vector Z, and the elements of the Hessian matrix must be the constrained second derivatives. To show how this is obtained, let

∇c f = ∇Z f − W C, where W = ∇Y f J⁻¹

It thus follows that the ith row of the (constrained) Hessian matrix is ∂(∇c f) / ∂zi. Notice that W is a function of Y, and Y is a function of Z. Thus, the partial derivative of ∇c f with respect to zi is based on the following chain rule:

∂wj / ∂zi = Σk (∂wj / ∂yk)(∂yk / ∂zi)
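The chain-rule point, that Y must be updated along with Z when differentiating, can be illustrated numerically. The sketch below (same toy problem as above, assumed for illustration: f = x1² + 3x2², x1 + x2 − 1 = 0) differences ∇c f along z while restoring feasibility x1 = 1 − z at each step, producing the constrained second derivative.

```python
# Hedged sketch: constrained second derivative for the toy problem
#   f = x1^2 + 3*x2^2, g = x1 + x2 - 1 = 0, Y = (x1), Z = (x2).
# Because Y is a function of Z, feasibility (x1 = 1 - z) is restored at
# every evaluation point before differencing grad_c f along z.

def constrained_gradient(z):
    x1 = 1.0 - z                  # dependent variable recovered from g = 0
    return 6.0 * z - 2.0 * x1     # grad_c f = grad_Z f - grad_Y f J^-1 C

z, h = 0.25, 1e-5
hess_c = (constrained_gradient(z + h) - constrained_gradient(z - h)) / (2.0 * h)
print(hess_c)   # 8 > 0, so z = 1/4 (x1 = 3/4) is a constrained minimum
```

The positive constrained second derivative confirms, per the sufficiency conditions, that the stationary point found earlier is a constrained minimum.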

10 Example 1: Consider the following problem:

11

12 Hence, the incremental value of constrained f is given as

13

14 Example 2: Application of the Jacobian Method to an LP Problem. Consider the linear program

Maximize z = 2x1 + 3x2
subject to
x1 + x2 + x3 = 5
x1 − x2 + x4 = 3
x1, x2, x3, x4 ≥ 0

To account for the nonnegativity constraints, substitute xj = wj², j = 1, 2, 3, 4. With this substitution, the nonnegativity conditions become implicit and the original problem becomes

Maximize z = 2w1² + 3w2²
subject to
w1² + w2² + w3² = 5
w1² − w2² + w4² = 3

15 To apply the Jacobian method, let Y = (w1, w2) and Z = (w3, w4). (In the terminology of linear programming, Y and Z correspond to the basic and nonbasic variables, respectively.) Thus

J = ∇Y g = | 2w1    2w2 |
           | 2w1   −2w2 |

C = ∇Z g = | 2w3    0   |
           |  0    2w4  |

so that

∇c f = ∇Z f − ∇Y f J⁻¹ C
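The matrices above can be evaluated at a concrete feasible point to see the constrained gradient at work. The sketch below uses the point x = (1, 1, 3, 3), i.e. w = (1, 1, √3, √3), which is an assumed illustrative choice satisfying both constraints, not a point taken from the slides.

```python
import numpy as np

# Hedged sketch: constrained gradient of the substituted LP
#   max z = 2*w1^2 + 3*w2^2,  w1^2 + w2^2 + w3^2 = 5,  w1^2 - w2^2 + w4^2 = 3,
# with Y = (w1, w2) basic and Z = (w3, w4) nonbasic, at the (assumed)
# feasible point x = (1, 1, 3, 3), i.e. w = (1, 1, sqrt(3), sqrt(3)).

w1, w2, w3, w4 = 1.0, 1.0, np.sqrt(3.0), np.sqrt(3.0)

grad_Y_f = np.array([4.0 * w1, 6.0 * w2])   # dz/dw1, dz/dw2
grad_Z_f = np.array([0.0, 0.0])             # z contains no w3, w4 terms
J = np.array([[2.0 * w1,  2.0 * w2],
              [2.0 * w1, -2.0 * w2]])       # grad_Y g
C = np.array([[2.0 * w3, 0.0],
              [0.0, 2.0 * w4]])             # grad_Z g

grad_c = grad_Z_f - grad_Y_f @ np.linalg.inv(J) @ C
print(grad_c)   # [-5*sqrt(3), sqrt(3)], approx [-8.660, 1.732]
```

The constrained gradient is nonzero at this point, so it is not stationary; in the Jacobian-method iteration for this LP, the nonzero components signal that moving the nonbasic variables (changing the basis) can still improve the objective.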

16

17

18 The corresponding dual objective value is 5u1 + 3u2 = 15, which equals the optimal primal objective value. The given solution also satisfies the dual constraints and hence is optimal and feasible. This shows that the sensitivity coefficients are the same as the dual variables; in fact, both have the same interpretation.

Figure: Extreme points of the solution space of the linear program
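The duality claim is easy to verify directly. The sketch below enumerates the extreme points of the two-variable solution space and checks the dual solution u = (3, 0), an assumed value consistent with the slide's dual objective 5u1 + 3u2 = 15 (it is the unique dual optimum for this LP).

```python
# Hedged sketch: verifying strong duality for
#   max z = 2*x1 + 3*x2,  x1 + x2 <= 5,  x1 - x2 <= 3,  x1, x2 >= 0.
# The dual values u = (3, 0) are assumed/derived here, not read off a slide.

vertices = [(0, 0), (3, 0), (4, 1), (0, 5)]    # extreme points of the region
z_opt = max(2 * x1 + 3 * x2 for x1, x2 in vertices)
print(z_opt)                                   # 15, attained at (0, 5)

u1, u2 = 3.0, 0.0
assert 5 * u1 + 3 * u2 == z_opt                # dual objective = primal optimum
assert u1 + u2 >= 2 and u1 - u2 >= 3           # dual feasibility
```

Because the dual objective at u = (3, 0) equals the best primal objective over all extreme points, the pair is optimal, matching the slide's conclusion that the sensitivity coefficients and the dual variables coincide.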

19

