Introduction to Optimization (Part 1)


1 Introduction to Optimization (Part 1)
Daniel Kirschen

2 Economic dispatch problem
Several generating units serving the load.
What share of the load should each generating unit produce?
Consider the limits of the generating units.
Ignore the limits of the network.
© 2011 D. Kirschen and University of Washington

3 Characteristics of the generating units
Thermal generating units: consider the running costs only.
Input/output curve: fuel input (J/h) vs. electric power output (MW).
Fuel consumption is measured by its energy content.
Upper and lower limits (Pmin, Pmax) on the output of the generating unit.
[Figure: boiler–turbine–generator block diagram and input/output curve]

4 Cost Curve
Multiply the fuel input by the fuel cost.
No-load cost: the cost of keeping the unit running if it could produce zero MW.
[Figure: cost curve — cost ($/h) vs. output (MW) between Pmin and Pmax, with the no-load cost at zero output]

5 Incremental Cost Curve
Derivative of the cost curve, in $/MWh: the cost of the next MWh, ΔF/ΔP.
[Figure: cost curve ($/h vs. MW) and the corresponding incremental cost curve ($/MWh vs. MW)]
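The relationship between the cost curve and the incremental cost curve can be sketched numerically, assuming a hypothetical quadratic cost curve (the coefficients below are illustrative, not from the lecture):

```python
# Hypothetical quadratic cost curve: C(P) = a + b*P + c*P^2   ($/h)
# Its derivative, the incremental cost, is dC/dP = b + 2*c*P  ($/MWh).

A, B, C = 100.0, 20.0, 0.05   # illustrative coefficients (assumed)

def cost(p_mw):
    """Hourly running cost ($/h) at output p_mw (MW)."""
    return A + B * p_mw + C * p_mw ** 2

def incremental_cost(p_mw):
    """Derivative of the cost curve ($/MWh): the cost of the next MWh."""
    return B + 2 * C * p_mw

# The incremental cost approximates the cost of producing one more MWh:
extra = cost(201.0) - cost(200.0)   # close to incremental_cost(200.0)
```

The approximation is good because the cost curve changes slope slowly: over one MW the derivative barely moves.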

6 Mathematical formulation
Objective function: minimize the total cost C_A(P_A) + C_B(P_B) + C_C(P_C) of units A, B, C.
Constraints:
Load/generation balance: P_A + P_B + P_C = L
Unit constraints: Pmin_i ≤ P_i ≤ Pmax_i for each unit i
This is an optimization problem.

7 Introduction to Optimization

8 “An engineer can do with one dollar what any bungler can do with two” A. M. Wellington (1847-1895)

9 Objective Most engineering activities have an objective:
Achieve the best possible design.
Achieve the most economical operating conditions.
This objective is usually quantifiable.
Examples: minimize the cost of building a transformer; minimize the cost of supplying power; minimize the losses in a power system; maximize the profit from a bidding strategy.

10 Decision Variables
The value of the objective is a function of some decision variables.
Examples of decision variables: the dimensions of a transformer; the output of generating units and the position of taps; the parameters of bids for selling electrical energy.

11 Optimization Problem
What values should the decision variables take so that the objective function f(x1, x2, …, xn) is minimum or maximum?

12 Example: function of one variable
[Figure: f(x) reaching its largest value f(x*) at x = x*]
f(x) is maximum for x = x*.

13 Minimization and Maximization
[Figure: f(x) and its mirror image -f(x); the maximum of f(x) at x* is the minimum of -f(x)]
If x = x* maximizes f(x), then it minimizes -f(x).

14 Minimization and Maximization
Maximizing f(x) is thus the same thing as minimizing g(x) = -f(x).
Minimization and maximization problems are therefore interchangeable.
Depending on the problem, the optimum is either a maximum or a minimum.

15 Necessary Condition for Optimality
[Figure: f(x) with a horizontal tangent at the maximum x*]
At the optimum x*, the derivative must vanish: df/dx = 0.

16 Necessary Condition for Optimality
[Figure: f(x) with df/dx = 0 at x = x*]

17 Example
For what values of x is df/dx = 0?
In other words, for what values of x is the necessary condition for optimality satisfied?
[Figure: a function f(x) with several stationary points]

18 Example
A, B, C, D are stationary points.
A and D are maxima.
B is a minimum.
C is an inflexion point.
[Figure: f(x) with stationary points A, B, C, D]

19 How can we distinguish minima and maxima?
The objective function is concave around a maximum.

20 How can we distinguish minima and maxima?
The objective function is convex around a minimum.

21 How can we distinguish minima and maxima?
The objective function is flat around an inflexion point.

22 Necessary and Sufficient Conditions of Optimality
Necessary condition: df/dx = 0
Sufficient condition:
For a maximum: d²f/dx² < 0
For a minimum: d²f/dx² > 0
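The two conditions can be applied mechanically. The function f(x) = x³ − 3x used below is an illustrative choice, not one of the lecture's examples; its stationary points are x = ±1:

```python
# Illustrative function: f(x) = x^3 - 3x
# f'(x) = 3x^2 - 3, so the stationary points are x = -1 and x = +1.
# f''(x) = 6x decides between maximum and minimum.

def f_prime(x):
    return 3 * x ** 2 - 3

def f_second(x):
    return 6 * x

def classify(x, tol=1e-9):
    """Necessary condition first, then the sufficient condition."""
    if abs(f_prime(x)) > tol:
        return "not stationary"
    if f_second(x) > 0:
        return "minimum"        # d2f/dx2 > 0
    if f_second(x) < 0:
        return "maximum"        # d2f/dx2 < 0
    return "inconclusive"       # d2f/dx2 = 0 (e.g. an inflexion point)
```

Here classify(-1.0) reports a maximum and classify(1.0) a minimum, matching the sign of the second derivative at each stationary point.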

23 Isn’t all this obvious? Can’t we tell all this by looking at the objective function?
Yes, for a simple, one-dimensional case where we know the shape of the objective function.
For complex, multi-dimensional cases (i.e. with many decision variables) we cannot visualize the shape of the objective function.
We must then rely on mathematical techniques.

24 Feasible Set
The values that the decision variables can take are usually limited.
Examples:
The physical dimensions of a transformer must be positive.
The active power output of a generator may be limited to a certain range (e.g. 200 MW to 500 MW).
The reactive power output of a generator may be limited to a certain range (e.g. … MVAr to 150 MVAr).

25 Feasible Set
[Figure: f(x) restricted to the feasible set xMIN ≤ x ≤ xMAX, containing the stationary points A and D]
The values of the objective function outside the feasible set do not matter.

26 Interior and Boundary Solutions
A and D are interior maxima.
B and E are interior minima.
xMIN is a boundary minimum; xMAX is a boundary maximum.
Boundary solutions do not satisfy the optimality conditions!
[Figure: f(x) on the feasible set xMIN ≤ x ≤ xMAX with interior points A, B, D, E]

27 Two-Dimensional Case
[Figure: surface f(x1, x2) with its minimum at (x1*, x2*)]
f(x1, x2) is minimum for x1 = x1*, x2 = x2*.

28 Necessary Conditions for Optimality
At the optimum (x1*, x2*), both partial derivatives must vanish:
∂f/∂x1 = 0 and ∂f/∂x2 = 0

29 Multi-Dimensional Case
At a maximum or minimum of f(x1, x2, …, xn) we must have:
∂f/∂xi = 0 for i = 1, …, n
A point where these conditions are satisfied is called a stationary point.

30 Sufficient Conditions for Optimality
[Figure: surfaces f(x1, x2) with a minimum and with a maximum]

31 Sufficient Conditions for Optimality
[Figure: surface f(x1, x2) with a saddle point]

32 Sufficient Conditions for Optimality
Calculate the Hessian matrix at the stationary point: the matrix of second partial derivatives, with element (i, j) equal to ∂²f/∂xi∂xj.

33 Sufficient Conditions for Optimality
Calculate the eigenvalues of the Hessian matrix at the stationary point.
If all the eigenvalues are positive: the matrix is positive definite and the stationary point is a minimum.
If all the eigenvalues are negative: the matrix is negative definite and the stationary point is a maximum.
If some of the eigenvalues are positive and others are negative: the stationary point is a saddle point.
(If some eigenvalues are zero, the test is inconclusive.)
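For a function of two variables this test can be written out explicitly, since the eigenvalues of a symmetric 2×2 Hessian [[a, b], [b, c]] have a closed form. The sketch below uses two illustrative functions, not the lecture's examples:

```python
import math

# Eigenvalues of the symmetric 2x2 Hessian [[a, b], [b, c]]:
#   ((a + c) +/- sqrt((a - c)^2 + 4*b^2)) / 2   (always real).

def hessian_eigenvalues(a, b, c):
    s = math.sqrt((a - c) ** 2 + 4 * b ** 2)
    return (a + c - s) / 2, (a + c + s) / 2

def classify(a, b, c):
    lo, hi = hessian_eigenvalues(a, b, c)
    if lo > 0:
        return "minimum"        # all eigenvalues positive
    if hi < 0:
        return "maximum"        # all eigenvalues negative
    if lo < 0 < hi:
        return "saddle point"   # mixed signs
    return "inconclusive"       # a zero eigenvalue

# f = x1^2 + x2^2 has Hessian [[2, 0], [0, 2]]  -> minimum
# f = x1^2 - x2^2 has Hessian [[2, 0], [0, -2]] -> saddle point
```

In higher dimensions the same logic applies, but the eigenvalues are computed numerically rather than in closed form.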

34 Contours
[Figure: surface f(x1, x2) and its contours F1, F2 projected onto the (x1, x2) plane]

35 Contours
A contour is the locus of all the points that give the same value to the objective function.
[Figure: nested contours around a minimum or maximum]

36 Example 1 is a stationary point

37 Example 1
Sufficient conditions for optimality: the Hessian matrix must be positive definite (i.e. all its eigenvalues must be positive).
The stationary point is a minimum.

38 Example 1
[Figure: contours C = 0, 1, 4, 9 around the minimum]

39 Example 2 is a stationary point

40 Example 2
Sufficient conditions for optimality: the Hessian has both positive and negative eigenvalues.
The stationary point is a saddle point.

41 Example 2
This stationary point is a saddle point.
[Figure: contours C = 0, 4, 9 around the saddle point]

42 Optimization with Constraints

43 Optimization with Equality Constraints
There are usually restrictions on the values that the decision variables can take.
Objective function: minimize f(x1, …, xn)
Equality constraints: ωi(x1, …, xn) = 0, i = 1, …, m

44 Number of Constraints N decision variables M equality constraints
If M > N, the problem is over-constrained: there is usually no solution.
If M = N, the problem is determined: there may be a solution.
If M < N, the problem is under-constrained: there is usually room for optimization.

45 Example 1
[Figure: contours of the objective function and the constraint, with the constrained minimum marked]

46 Example 2: Economic Dispatch
C1(x1): cost of running unit 1; C2(x2): cost of running unit 2.
Total cost: C1(x1) + C2(x2)
Optimization problem: minimize C1(x1) + C2(x2) subject to x1 + x2 = L
[Figure: units G1 and G2 supplying the load L]

47 Solution by substitution
Substitute the constraint x2 = L − x1 into the objective function:
minimize C1(x1) + C2(L − x1)
This is an unconstrained minimization with respect to x1.

48 Solution by substitution
Difficult, and usually impossible when the constraints are non-linear.
Provides little or no insight into the solution.
Alternative: solution using Lagrange multipliers.

49 Gradient
The gradient of f(x1, …, xn) is the vector of its partial derivatives:
∇f = (∂f/∂x1, …, ∂f/∂xn)

50 Properties of the Gradient
Each component of the gradient vector indicates the rate of change of the function in that direction.
The gradient indicates the direction in which a function of several variables increases most rapidly.
The magnitude and direction of the gradient usually depend on the point considered.
At each point, the gradient is perpendicular to the contour of the function.
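These properties are easy to verify numerically. The function f(x, y) = x² + 2y² below is an illustrative choice, not from the lecture:

```python
# Illustrative function f(x, y) = x^2 + 2*y^2, with gradient (2x, 4y).

def f(x, y):
    return x ** 2 + 2 * y ** 2

def grad(x, y):
    return 2 * x, 4 * y

gx, gy = grad(1.0, 1.0)        # gradient at (1, 1) is (2, 4)
tangent = (gy, -gx)            # a direction tangent to the contour

# Perpendicularity: the dot product of gradient and tangent is zero.
dot = gx * tangent[0] + gy * tangent[1]

# Steepest ascent: a small step along the gradient increases f more
# than an equal-length step along the tangent direction.
eps = 1e-3
norm = (gx ** 2 + gy ** 2) ** 0.5
along_grad = f(1.0 + eps * gx / norm, 1.0 + eps * gy / norm)
along_tangent = f(1.0 + eps * tangent[0] / norm, 1.0 + eps * tangent[1] / norm)
```

The tangent step leaves f unchanged to first order, which is exactly why the gradient is perpendicular to the contour.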

51 Example 3
[Figure: gradient vectors of a function at the points A, B, C, D]

52 Example 4
[Figure: gradient field and contours of a function in the (x, y) plane]

53 Lagrange multipliers

54 Lagrange multipliers

55 Lagrange multipliers

56 Lagrange multipliers
The solution must be on the constraint.
To reduce the value of f, we must move in a direction opposite to the gradient, while staying on the constraint.
[Figure: moving from A towards B along the constraint]

57 Lagrange multipliers
We stop when the gradient of the function is perpendicular to the constraint, because moving further would increase the value of the function.
At the optimum, the gradient of the function is parallel to the gradient of the constraint.
[Figure: points A, B and the optimum C on the constraint]

58 Lagrange multipliers
At the optimum, we must have: ∇f parallel to ∇ω
Which can be expressed as: ∇f + λ∇ω = 0
In terms of the co-ordinates:
∂f/∂x1 + λ ∂ω/∂x1 = 0
∂f/∂x2 + λ ∂ω/∂x2 = 0
The constraint must also be satisfied: ω(x1, x2) = 0
λ is called the Lagrange multiplier.

59 Lagrangian function
To simplify the writing of the conditions for optimality, it is useful to define the Lagrangian function:
ℓ(x1, x2, λ) = f(x1, x2) + λ ω(x1, x2)
The necessary conditions for optimality are then given by the partial derivatives of the Lagrangian:
∂ℓ/∂x1 = ∂f/∂x1 + λ ∂ω/∂x1 = 0
∂ℓ/∂x2 = ∂f/∂x2 + λ ∂ω/∂x2 = 0
∂ℓ/∂λ = ω(x1, x2) = 0
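As a worked sketch on an assumed toy problem (not the lecture's example), take f(x1, x2) = x1² + x2² with the constraint ω(x1, x2) = 1 − x1 − x2 = 0:

```python
# Lagrangian: l(x1, x2, lam) = x1^2 + x2^2 + lam * (1 - x1 - x2)
# Conditions: dl/dx1 = 2*x1 - lam = 0
#             dl/dx2 = 2*x2 - lam = 0
#             dl/dlam = 1 - x1 - x2 = 0
# The first two give x1 = x2 = lam/2; the constraint then gives lam = 1.

lam = 1.0
x1 = lam / 2
x2 = lam / 2

# Verify all three optimality conditions hold at the solution:
assert abs(2 * x1 - lam) < 1e-12    # dl/dx1 = 0
assert abs(2 * x2 - lam) < 1e-12    # dl/dx2 = 0
assert abs(1 - x1 - x2) < 1e-12     # constraint satisfied
```

Because the objective is quadratic and the constraint linear, the optimality conditions are a linear system and can be solved in closed form, as done here.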

60 Example

61 Example

62 Example
[Figure: contours and the constraint line, with the constrained minimum marked]

63 Important Note!
If the constraint is of the form: ω(x1, x2) = L
It must be included in the Lagrangian as follows: ℓ = f(x1, x2) + λ [L − ω(x1, x2)]
And not as follows: ℓ = f(x1, x2) + λ [ω(x1, x2) − L]

64 Application to Economic Dispatch
Minimize C1(x1) + C2(x2) subject to x1 + x2 = L.
Lagrangian: ℓ(x1, x2, λ) = C1(x1) + C2(x2) + λ(L − x1 − x2)
Conditions: ∂ℓ/∂x1 = dC1/dx1 − λ = 0, ∂ℓ/∂x2 = dC2/dx2 − λ = 0, ∂ℓ/∂λ = L − x1 − x2 = 0
Hence dC1/dx1 = dC2/dx2 = λ: the equal incremental cost solution.
[Figure: units G1 and G2 with outputs x1 and x2 supplying the load]

65 Equal incremental cost solution
[Figure: cost curves and incremental cost curves of the two units; the optimal outputs x1 and x2 are where the incremental costs are equal]
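A minimal sketch of the equal incremental cost solution, assuming quadratic cost curves C_i(x_i) = b_i·x_i + c_i·x_i² (the coefficients and the load below are illustrative, and the unit limits are ignored, as in this part of the lecture):

```python
def dispatch(load, b1=20.0, c1=0.05, b2=25.0, c2=0.10):
    """Two-unit economic dispatch with quadratic costs, limits ignored.

    Setting b1 + 2*c1*x1 = b2 + 2*c2*x2 = lam and x1 + x2 = load
    gives x_i = (lam - b_i) / (2*c_i); lam then follows from the
    load/generation balance.
    """
    inv1, inv2 = 1.0 / (2.0 * c1), 1.0 / (2.0 * c2)
    lam = (load + b1 * inv1 + b2 * inv2) / (inv1 + inv2)
    x1 = (lam - b1) * inv1
    x2 = (lam - b2) * inv2
    return x1, x2, lam

x1, x2, lam = dispatch(300.0)
# Both units end up at the same incremental cost lam,
# and together they supply the load.
```

Because the cheaper, flatter unit (smaller b and c) has the lower incremental cost curve, it picks up the larger share of the load at the optimum.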

66 Interpretation of this solution
Choose a value of λ; each unit responds with the output at which its incremental cost equals λ.
If L − (x1 + x2) < 0, reduce λ.
If L − (x1 + x2) > 0, increase λ.
[Figure: feedback loop comparing x1 + x2 with the load L and adjusting λ]

67 Physical interpretation
The incremental cost is the cost of one additional MW for one hour.
This cost depends on the output of the generator.

68 Physical interpretation

69 Physical interpretation
It pays to increase the output of unit 2 and decrease the output of unit 1 until we have: dC1/dx1 = dC2/dx2 = λ
The Lagrange multiplier λ is thus the cost of one more MW at the optimal solution.
This is a very important result with many applications in economics.
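The marginal-cost interpretation of λ can be checked numerically: re-solve the dispatch with one extra MW of load and compare the increase in the optimal cost with λ. The two units and their quadratic costs C_i = b_i·x_i + c_i·x_i² below are assumed, illustrative values:

```python
def optimal_cost(load, b1=20.0, c1=0.05, b2=25.0, c2=0.10):
    """Optimal total cost ($/h) and lambda for a two-unit dispatch
    with quadratic costs C_i = b_i*x_i + c_i*x_i^2 (limits ignored)."""
    inv1, inv2 = 1.0 / (2.0 * c1), 1.0 / (2.0 * c2)
    lam = (load + b1 * inv1 + b2 * inv2) / (inv1 + inv2)
    x1, x2 = (lam - b1) * inv1, (lam - b2) * inv2
    total = b1 * x1 + c1 * x1 ** 2 + b2 * x2 + c2 * x2 ** 2
    return total, lam

# Serving one more MW for one hour raises the optimal cost by ~lambda:
cost_l, lam = optimal_cost(300.0)
cost_l1, _ = optimal_cost(301.0)
marginal = cost_l1 - cost_l     # close to lam
```

The small remaining gap between `marginal` and `lam` comes from the curvature of the cost curves over the one-MW step; the derivative dC*/dL is exactly λ.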

70 Generalization
Lagrangian: ℓ(x1, …, xn, λ1, …, λm) = f(x1, …, xn) + Σ_{i=1}^{m} λi ωi(x1, …, xn)
One Lagrange multiplier for each constraint.
n + m variables: x1, …, xn and λ1, …, λm

71 Optimality conditions
∂ℓ/∂xj = 0, j = 1, …, n (n equations)
∂ℓ/∂λi = ωi(x1, …, xn) = 0, i = 1, …, m (m equations)
n + m equations in n + m variables

