Presentation on theme: "Optimal Control" — Presentation transcript:

1 Optimal Control

2 Optimization
Optimization is a search process that seeks to optimize (maximize or minimize) a mathematical function of several variables, subject to equality or inequality constraints. It is of two types:
Static optimization
Dynamic optimization

3 Static Optimization
Static optimization is concerned with design variables (those appearing in the objective function) that do not change with time. There are three techniques to solve such problems:
Ordinary calculus
Lagrange multipliers
Linear & non-linear programming

4 Dynamic Optimization
Dynamic optimization is concerned with design variables that change with time, so time appears in the problem statement. There are three techniques to solve such problems:
Calculus of variations
Dynamic programming
Convex optimization

5 How to get a mathematical model for an optimal design problem
Example: Design a can that can contain at most 200 ml of liquid, subject to the following constraints:
Radius (cm): 3 ≤ r ≤ 5
Height (cm): 5 ≤ h ≤ 22
Ratio: h ≥ 3r (for comfortable handling, gripping, etc.)
The objective is to minimize fabrication cost; assume the material used to fabricate the can costs Rs c per cm².

6 Example 1…
Let x1 = r, x2 = h (x1 and x2 are known as design variables).
Minimize fabrication cost (fabrication cost is now the objective function):
f(x1, x2) = (2πrh + 2πr²)c = (2πx1x2 + 2πx1²)c ---(1) (non-linear)
Subject to:
πr²h = 200, i.e. πx1²x2 = 200 ---(2) (equality constraint, non-linear)
h ≥ 3r, i.e. 3x1 − x2 ≤ 0 ---(3) (inequality constraint, linear)
Note: the objective function is a scalar function.

7 Example 1…
Side constraints: 3 ≤ x1 ≤ 5 and 5 ≤ x2 ≤ 22. Side constraints are a necessary part of the solution technique.
x1 and x2 do not change with time, so this is a static optimization problem. Equations (1) and (2) are non-linear while equation (3) is linear, so it is a non-linear static optimization problem. Hence an optimization problem may be:
A linear optimization problem: objective function and constraints are all linear
A non-linear optimization problem: otherwise
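
To make the formulation concrete, here is a minimal sketch of slide 6's model in SciPy (an assumption: the slides prescribe no solver, and SLSQP, the starting point x0, and c = 1 are illustrative choices). SciPy's "ineq" convention requires g(x) ≥ 0, so constraint (3) is entered as x2 − 3x1 ≥ 0; res.success reports whether the solver found a point satisfying every constraint.

```python
# Sketch of the can problem (slides 5-7); SLSQP, x0 and c = 1 are illustrative.
import numpy as np
from scipy.optimize import minimize

c = 1.0  # material cost in Rs per cm^2 (placeholder value)

def cost(x):
    # Objective (1): fabrication cost f(x1, x2) = (2*pi*x1*x2 + 2*pi*x1^2) * c
    return (2 * np.pi * x[0] * x[1] + 2 * np.pi * x[0] ** 2) * c

constraints = [
    # Equality constraint (2): pi * x1^2 * x2 = 200
    {"type": "eq", "fun": lambda x: np.pi * x[0] ** 2 * x[1] - 200.0},
    # Inequality constraint (3): 3*x1 - x2 <= 0, i.e. x2 - 3*x1 >= 0 for SciPy
    {"type": "ineq", "fun": lambda x: x[1] - 3 * x[0]},
]
bounds = [(3.0, 5.0), (5.0, 22.0)]  # side constraints on x1 = r and x2 = h

res = minimize(cost, x0=[3.0, 9.0], method="SLSQP",
               bounds=bounds, constraints=constraints)
print(res.success, res.x, res.fun)  # res.success flags whether all constraints were met
```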

8 Example 1… Graphical representation
[Figure: the line 3x1 − x2 = 0 plotted in the (x1, x2) plane, with x1 = r on the horizontal axis (ticks 2 to 5) and x2 = h on the vertical axis (ticks 1 to 25); the line separates the half-planes 3x1 − x2 ≤ 0 and 3x1 − x2 ≥ 0, and the half-plane 3x1 − x2 ≤ 0, intersected with the side constraints, forms the search region where the design variables lie.]

9 General optimization problem statement (single-objective optimization problem)
Given n design variables (x1, x2, x3, …, xn), optimize (maximize or minimize) an objective function (cost function) subject to constraints and side constraints:
Objective function: f(x1, x2, x3, …, xn) ---(1)
Subject to:
hi(x1, x2, x3, …, xn) = 0, i = 1, 2, 3, …, p ---(2) (equality constraints)
gj(x1, x2, x3, …, xn) ≤ 0, j = 1, 2, 3, …, m ---(3) (inequality constraints)
xiL ≤ xi ≤ xiU, i = 1, 2, 3, …, n ---(4) (side constraints; L = lower bound, U = upper bound)

10 General optimization problem statement (single-objective optimization problem)
In more compact form, let x be the n×1 vector x = [x1, x2, x3, …, xn]T. Then:
Objective function: f(x) ---(1)
Subject to:
hi(x) = 0, i = 1, 2, 3, …, p ---(2) (equality constraints)
gj(x) ≤ 0, j = 1, 2, 3, …, m ---(3) (inequality constraints)
xiL ≤ xi ≤ xiU, i = 1, 2, 3, …, n ---(4) (side constraints; L = lower bound, U = upper bound)

11 Multi-objective optimization problem
As seen on the previous slide, there was a single objective function f(x1, x2, x3, …, xn) = f(x). We may instead have multiple objective functions, e.g. f1(x1, x2, …, xn), f2(x1, x2, …, xn), etc., where perhaps one function is to be maximized and another minimized. This is known as a multi-objective optimization problem.

12 Optimality conditions

13 Theorem: Necessary & sufficient conditions for optimality
For a function f(x) of n variables, x = [x1, x2, …, xn]T, the conditions for an optimum (minimum or maximum) are:
Necessary condition: ∇f(x) = ∂f(x)/∂x = 0
Sufficient condition:
H = ∇²f(x)|x=x* > 0 ⇒ the function f(x) has a minimum value at x = x*
H = ∇²f(x)|x=x* < 0 ⇒ the function f(x) has a maximum value at x = x*
The matrix H is known as the Hessian matrix.

14 Example 1: Find the optimum value of the function
f(x) = 2x1² + 4x1x2 + 4x2² − 4x1 + 2x2 + 16
Solution: Necessary condition
∇f(x) = ∂f(x)/∂x = [∂f(x)/∂x1, ∂f(x)/∂x2]T = [4x1 + 4x2 − 4, 4x1 + 8x2 + 2]T = 0
Solving the above equations: x1 = x1* = 2.5 and x2 = x2* = −1.5

15 Example 1… Sufficient condition
H = ∇²f(x) = [∂²f/∂x1², ∂²f/∂x1∂x2; ∂²f/∂x2∂x1, ∂²f/∂x2²] evaluated at x1 = x1* = 2.5, x2 = x2* = −1.5
= [4, 4; 4, 8]
Now check whether H is positive definite, positive semi-definite, negative definite, or negative semi-definite.
H is positive definite (H > 0) because:
All diagonal elements are > 0
Leading principal minor of order one = 4 > 0
Leading principal minor of order two = 32 − 16 = 16 > 0
Hence the function f(x) has its minimum value at x1 = x1* = 2.5 and x2 = x2* = −1.5
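
As a quick numerical cross-check of slides 14-15 (a sketch assuming NumPy; not part of the original slides), the stationary point can be found by solving the linear system ∇f(x) = 0, and the leading principal minors read off the constant Hessian:

```python
# Cross-check of Example 1: f(x) = 2*x1**2 + 4*x1*x2 + 4*x2**2 - 4*x1 + 2*x2 + 16
import numpy as np

H = np.array([[4.0, 4.0],   # Hessian of f; constant because f is quadratic
              [4.0, 8.0]])

# Necessary condition grad f = 0 is the linear system H @ x = [4, -2]
x_star = np.linalg.solve(H, np.array([4.0, -2.0]))
print(x_star)                      # -> [ 2.5 -1.5]

# Sufficient condition: leading principal minors of H
print(H[0, 0], np.linalg.det(H))   # -> 4.0 and 16.0, both > 0 => minimum
```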

16 Unconstrained Optimization problem, Numerical Techniques

17 Unconstrained optimization problem
An unconstrained optimization problem can be solved by two kinds of methods:
Numerical methods: iterative processes
Analytical methods: direct solution of the optimality conditions

18 Unconstrained optimization problem…
Numerical methods:
Steepest descent method: convergence order is 1
Gradient method: steepest descent method with a predetermined step size
Conjugate gradient method: convergence order is between 1 and 2
Newton's method: convergence order is 2

19 Steepest Descent Method Algorithm
Let f(x), with x an n×1 vector, be the function to be minimized.
Step 1: Choose a starting point x(0) and let k = 0. Let ε1, ε2, ε3 be the stopping criteria for the algorithm (ε1, ε2, ε3 are predetermined very small positive values).
Step 2: At the kth iteration, determine the gradient of the objective function f(x), i.e. ∇f(x(k)).
Step 3: Find the search direction d(k) = −∇f(x(k)).
Step 4: Find the optimum step size
λk = λk* = −[∇f(x(k))]T d(k) / [(d(k))T ∇²f(x(k)) d(k)]

20 Steepest Descent Method Algorithm…
Step 5: Find the next iterate x(k+1) = x(k) + λk d(k).
Step 6: Find Δf = f(x(k+1)) − f(x(k)) and Δx = x(k+1) − x(k).
If |Δf| < ε1 then STOP (the function value is not changing; ε1 is a very small predetermined positive value).
If ‖Δx‖² = ΔxT Δx < ε2 then STOP (the design-variable values are not changing; ε2 is a very small predetermined positive value).
There may be functions for which |Δf| is large but ‖Δx‖² is very small, or vice versa; in that situation both conditions above are used to stop the execution.
Note: Δf is a scalar, so its absolute value |Δf| is used, while Δx is a vector, so the norm ‖Δx‖² is taken.

21 Steepest Descent Method Algorithm…
If [∇f(x(k+1))]T ∇f(x(k+1)) < ε3 then STOP (the gradient has converged; ε3 is a very small predetermined positive value).
Otherwise set k = k + 1 and go to Step 2.
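
A minimal sketch of Steps 1-6 in NumPy (an assumption; the slides give no code). The step-size rule of Step 4 uses the Hessian, so the gradient and Hessian are passed in as functions, and Example 1 from slide 14 serves as the test problem:

```python
import numpy as np

def steepest_descent(f, grad, hess, x0, eps1=1e-12, eps2=1e-12, eps3=1e-12,
                     max_iter=1000):
    """Steps 1-6 of the steepest descent algorithm (slides 19-21)."""
    x = np.asarray(x0, dtype=float)             # Step 1: starting point, k = 0
    for k in range(max_iter):
        g = grad(x)                             # Step 2: gradient at x^(k)
        if g @ g < eps3:                        # third stopping test (slide 21)
            break
        d = -g                                  # Step 3: search direction
        lam = -(g @ d) / (d @ hess(x) @ d)      # Step 4: optimal step size
        x_new = x + lam * d                     # Step 5: next iterate
        if abs(f(x_new) - f(x)) < eps1:         # Step 6: |df| test
            return x_new
        if (x_new - x) @ (x_new - x) < eps2:    # Step 6: ||dx||^2 test
            return x_new
        x = x_new
    return x

# Example 1 (slide 14) as the test problem
f    = lambda x: 2*x[0]**2 + 4*x[0]*x[1] + 4*x[1]**2 - 4*x[0] + 2*x[1] + 16
grad = lambda x: np.array([4*x[0] + 4*x[1] - 4, 4*x[0] + 8*x[1] + 2])
hess = lambda x: np.array([[4.0, 4.0], [4.0, 8.0]])
print(steepest_descent(f, grad, hess, [0.0, 0.0]))  # -> approx [ 2.5 -1.5]
```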

22 Conjugate Gradient Method
Algorithm: Let f(x), with x an n×1 vector, be the function to be minimized.
Step 1: Same as the steepest descent method.
Step 2: Same.
Step 3: Compute the new conjugate search direction
d(k) = −∇f(x(k)) + βk d(k−1)
where d(k−1) is the search direction of the previous iteration, βk > 0 is a constant, and βk d(k−1) is the scaled previous search direction, with
βk = [∇f(x(k))]T ∇f(x(k)) / [∇f(x(k−1))]T ∇f(x(k−1)) = ‖∇f(x(k))‖² / ‖∇f(x(k−1))‖²

23 Conjugate Gradient Method…
Step 4: Same.
Step 5: Same.
Step 6: Same.
Note: [∇f(x(k))]T d(k) = −[∇f(x(k))]T ∇f(x(k)) + βk [∇f(x(k))]T d(k−1) < 0 (with the exact line search of Step 4, the second term [∇f(x(k))]T d(k−1) vanishes), so we are moving in the right direction in this method too, i.e. f(x(k+1)) < f(x(k)) (d(k) is a direction of descent).
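
The same loop with only Step 3 changed gives a sketch of the conjugate gradient method (NumPy assumed; the βk formula is the Fletcher-Reeves ratio from slide 22):

```python
import numpy as np

def conjugate_gradient(grad, hess, x0, eps3=1e-12, max_iter=100):
    """Conjugate gradient with the Fletcher-Reeves beta_k of slide 22."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                  # first direction = steepest descent
    for k in range(max_iter):
        if g @ g < eps3:                    # gradient stopping test
            break
        lam = -(g @ d) / (d @ hess(x) @ d)  # Step 4: same step-size rule
        x = x + lam * d                     # Step 5
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)    # beta_k = ||grad_k||^2 / ||grad_{k-1}||^2
        d = -g_new + beta * d               # Step 3: new conjugate direction
        g = g_new
    return x

grad = lambda x: np.array([4*x[0] + 4*x[1] - 4, 4*x[0] + 8*x[1] + 2])
hess = lambda x: np.array([[4.0, 4.0], [4.0, 8.0]])
print(conjugate_gradient(grad, hess, [0.0, 0.0]))  # exact in <= 2 steps for n = 2
```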

24 Newton's Method Algorithm
Let f(x), with x an n×1 vector, be the function to be minimized.
Step 1: Same as the steepest descent method algorithm.
Step 2: Same.
Step 3: Compute the Hessian matrix P = ∇²f(x(k)).
If P > 0 (positive definite), then d(k) = −P⁻¹ ∇f(x(k));
else d(k) = −[M(k)]⁻¹ ∇f(x(k)), where M(k) is a positive-definite matrix used in place of P.
Step 4: Find the optimum step size λk.
If P > 0, then λk = λk* = −[∇f(x(k))]T d(k) / [(d(k))T ∇²f(x(k)) d(k)];
else λk = λk* = −[∇f(x(k))]T d(k) / [(d(k))T M(k) d(k)]

25 Newton's Method Algorithm…
Step 5: Same.
Step 6: Same.
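
A sketch of slides 24-25 in NumPy. The slides leave M(k) unspecified, so the identity matrix is used here as one simple positive-definite fallback (an assumption); with that choice the else-branch reduces to steepest descent:

```python
import numpy as np

def newton(grad, hess, x0, eps3=1e-12, max_iter=50):
    """Newton's method with the positive-definiteness check of slide 24."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        g = grad(x)
        if g @ g < eps3:
            break
        P = hess(x)                            # Step 3: Hessian at x^(k)
        if np.all(np.linalg.eigvalsh(P) > 0):  # P > 0: all eigenvalues positive
            M = P
        else:
            M = np.eye(len(x))                 # fallback M^(k): identity (assumed)
        d = -np.linalg.solve(M, g)             # d^(k) = -M^{-1} grad f
        lam = -(g @ d) / (d @ M @ d)           # Step 4 with the matrix actually used
        x = x + lam * d                        # Step 5
    return x

grad = lambda x: np.array([4*x[0] + 4*x[1] - 4, 4*x[0] + 8*x[1] + 2])
hess = lambda x: np.array([[4.0, 4.0], [4.0, 8.0]])
print(newton(grad, hess, [0.0, 0.0]))          # one Newton step: [ 2.5 -1.5]
```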

26 Constrained Optimization Problem

27 Constrained Optimization Problem
Generalized constrained optimization problem:
Minimize f(x) = f(x1, x2, …, xn) ---(1)
Subject to:
hi(x1, x2, …, xn) = 0, i = 1, 2, …, p ---(2) (p equality constraints)
gj(x1, x2, …, xn) ≤ 0, j = 1, 2, …, m ---(3) (m inequality constraints)
There may or may not be side constraints.

28 Any inequality-constrained optimization problem can be converted into an equality-constrained optimization problem (for example, by introducing slack variables), and that equality-constrained problem can in turn be converted into an unconstrained optimization problem by the Lagrange-multiplier approach, which can then be solved by the previous methods (numerical or analytical).
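
A small sketch of the second conversion, the Lagrange-multiplier step, using SymPy (an assumption) on an illustrative equality-constrained problem: minimize x1² + x2² subject to x1 + x2 − 2 = 0. The stationary points of the Lagrangian L = f + μh are found by unconstrained calculus, exactly as described above:

```python
# Lagrange-multiplier sketch: minimize f = x1^2 + x2^2 subject to h = x1 + x2 - 2 = 0.
# The equality-constrained problem becomes the unconstrained search for stationary
# points of the Lagrangian L(x, mu) = f(x) + mu * h(x).
import sympy as sp

x1, x2, mu = sp.symbols("x1 x2 mu")
f = x1**2 + x2**2
h = x1 + x2 - 2
L = f + mu * h

stationary = sp.solve([sp.diff(L, v) for v in (x1, x2, mu)], [x1, x2, mu])
print(stationary)   # -> {x1: 1, x2: 1, mu: -2}: the constrained minimum is at (1, 1)
```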

