
1 Self-optimizing control: Simple implementation of optimal operation
Sigurd Skogestad, Department of Chemical Engineering, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
ETH, Zürich, January 2009 (based on a presentation at the 2008 Benelux control meeting)
Alternate title: Effective implementation of optimal operation using off-line computations

2 Bio: Sigurd Skogestad received his Ph.D. degree from the California Institute of Technology, Pasadena, USA in 1987. He has been a full professor at the Norwegian University of Science and Technology (NTNU), Trondheim, Norway since 1987 and Head of the Department of Chemical Engineering since 1999. He is the principal author, together with Prof. Ian Postlethwaite, of the book "Multivariable Feedback Control", published by Wiley in 1996 (first edition) and 2005 (second edition). He received the Ted Peterson Award from AIChE in 1989, the George S. Axelby Outstanding Paper Award from IEEE in 1990, the O. Hugo Schuck Best Paper Award from the American Automatic Control Council in 1992, and the Best Paper Award 2004 from Computers and Chemical Engineering. He was an Editor of Automatica during the period 1996-2002. His research interests include the use of feedback as a tool to make the system well-behaved (including self-optimizing control), limitations on performance in linear systems, control structure design and plantwide control, interactions between process design and control, and distillation column design, control and dynamics.

Title: Self-optimizing control: Effective implementation of optimal operation using off-line computations

Abstract: The computational effort involved in the solution of real-time dynamic optimization problems (paradigm 1) can be very demanding. Hence, simple but effective implementations of close-to-optimal policies are attractive. The main idea is to use off-line calculations and analysis to determine the structure and properties of the optimal solution (paradigm 2), and from this to determine alternate representations of the optimal solution that are more suitable for implementation. In essence, paradigm 2 includes the use of feedback control with a precomputed controller K. However, standard feedback control assumes that the objective is to control signals (controlled variables) at or close to given values (setpoints). In particular, it does not deal with the very important decision of "what to control" (selection of controlled variables), which up to now has been the focus of "self-optimizing control". The idea is that the controlled variables should be selected such that keeping them constant (or at a precomputed trajectory) by itself gives close-to-optimal operation, without the need for on-line reoptimization. Another related issue, not normally dealt with by feedback control, is the selection of optimal switching policies. One approach here is "explicit MPC", but for most realistic problems this cannot be used. Thus, the goal is to derive simpler policies. The problem is to find how to do this in a systematic manner; some ideas are presented in the talk.

3 Outline
Implementation of optimal operation
Paradigm 1: On-line optimizing control
Paradigm 2: "Self-optimizing" control schemes
– Precomputed (off-line) solution
Control of optimal measurement combinations
– Nullspace method
– Exact local method
Link to optimal control / explicit MPC
Current research issues

4 Optimal operation
A typical dynamic optimization problem (see the formulation below)
Implementation: "Open-loop" solutions are not robust to disturbances or model errors
Want to introduce feedback
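
A minimal sketch of such a problem, in generic form (the specific objective and model on the original slide are not recoverable from the transcript):

$$
\min_{u(t)} \; J = \phi(x(T)) + \int_0^T L(x, u, d)\, dt
$$
$$
\text{subject to} \quad \dot{x} = f(x, u, d), \quad g(x, u) \le 0, \quad x(0) = x_0
$$

An open-loop implementation applies the precomputed trajectory $u^{opt}(t)$ directly; any disturbance $d$ or model error then makes the applied inputs suboptimal, which motivates feedback.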

5 Implementation of optimal operation
Paradigm 1: On-line optimizing control, where measurements are used to update the model and states
Paradigm 2: "Self-optimizing" control scheme found by exploiting properties of the solution
– Control structure design
– Usually feedback solutions
– Use off-line analysis/optimization to find "properties of the solution"
– "Self-optimizing" = "inherent optimal operation"

6 Implementation: Paradigm 1
Paradigm 1: On-line optimizing control
Measurements are primarily used to update the model
The optimization problem is re-solved on-line to compute new inputs
Example: Conventional MPC
This is the "obvious" approach (for someone who does not know control)

7 Example: Runner
One degree of freedom (u): power
Cost to be minimized: J = T (time)
Constraints:
– u ≤ u_max
– Follow the track
– Fitness (body model)
Optimal operation: minimize J with respect to u(t)
ISSUE: How to implement optimal operation?

8 Example paradigm 1: On-line optimizing control of marathon runner
Even getting a reasonable model requires more than 10 PhDs... and the model has to be fitted to each individual.
Clearly impractical!

9 Implementation: Paradigm 2
Paradigm 2: Precomputed solutions based on off-line optimization
Find properties of the solution suited for simple and robust on-line implementation
Examples:
– Marathon runner
– Hierarchical decomposition
– Optimal control
– Explicit MPC

10 Example paradigm 2: Marathon runner
Select c = heart rate (simplest case: a single measurement)
Simple and robust implementation
Disturbances are indirectly handled by keeping a constant heart rate
May have infrequent adjustment of the setpoint (heart rate)

11 Example paradigm 2: Optimal operation of chemical plant
Hierarchical decomposition based on time-scale separation
Self-optimizing control: acceptable operation (= acceptable loss) achieved using constant setpoints (c_s) for the controlled variables c
Controlled variables c:
1. Active constraints
2. "Self-optimizing" variables c for the remaining unconstrained degrees of freedom (u)
No or infrequent on-line optimization. The controlled variables c are found based on off-line analysis.

12 Example paradigm 2: Feedback implementation of optimal control (LQ)
Optimal solution to an infinite-time dynamic optimization problem
Originally formulated as an "open-loop" optimization problem (no feedback)
"By chance" the optimal u can be generated by simple state feedback u = K_LQ x
K_LQ is obtained off-line by solving Riccati equations
Explicit MPC: extension using a different K_LQ in each constraint region
Summary, two paradigms for MPC:
1. Conventional MPC: on-line optimization
2. Explicit MPC: off-line calculation of K_LQ for each region (must determine the region on-line)
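
A minimal sketch of the off-line computation for a discrete-time LQ problem (the system matrices below are invented for illustration):

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Hypothetical discrete-time system x[k+1] = A x[k] + B u[k]
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
Q = np.eye(2)          # state weight
R = np.array([[0.1]])  # input weight

# Off-line: solve the discrete algebraic Riccati equation for P,
# then form the constant feedback gain (u = -K_lq x)
P = solve_discrete_are(A, B, Q, R)
K_lq = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# On-line: the "controller" is just a matrix-vector product
x = np.array([1.0, 0.0])
u = -K_lq @ x
```

The point of the paradigm is visible here: all the optimization effort sits in the off-line Riccati solve, while the on-line implementation is a fixed linear feedback.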

13 Example paradigm 2: Explicit MPC
MPC: model predictive control
Note: many regions because of future constraints
A. Bemporad, M. Morari, V. Dua and E.N. Pistikopoulos, "The explicit linear quadratic regulator for constrained systems", Automatica, vol. 38, no. 1, pp. 3-20 (2002).

14 Issues
Paradigm 2: Precomputed on-line solutions based on off-line optimization
Issues (expected research results for a specific application):
1. Find analytical or precomputed solutions suitable for on-line implementation
2. Find the structure of the optimal solution for specific problems; typically, identify the regions where different sets of constraints are active
3. Find good "self-optimizing" variables c to control in each region: active constraints, plus good variables or variable combinations for the remaining unconstrained degrees of freedom
4. Find optimal values (or trajectories) for the unconstrained variables
5. Determine a switching policy between the different regions

15 Unconstrained degrees of freedom
Operational objective: minimize the cost function J(u,d)
The ideal "self-optimizing" variable is the gradient (first-order optimality condition; ref: Bonvin and coworkers): c = J_u, with optimal setpoint = 0
BUT: the gradient cannot be measured in practice
Possible approach: estimate the gradient J_u based on measurements y
Alternative approach used here: find the optimal linear measurement combination which, when kept constant (± n), minimizes the effect of d on the loss
Loss = J(u,d) − J(u_opt,d), where the input u is used to keep c = constant ± n
Candidate measurements (y): also include the inputs u
How to find "self-optimizing" variable combinations in a systematic manner?
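
In symbols (restating the definitions above):

$$
c_{ideal} = J_u = \frac{\partial J}{\partial u}, \qquad c_s = 0
$$
$$
L(d, n) = J(u, d) - J(u^{opt}(d), d), \quad \text{with } u \text{ adjusted so that } c = c_s \pm n
$$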

16 Optimal measurement combination H (unconstrained degrees of freedom)
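
The equation on this slide is an image and is not recoverable from the transcript; restated from the surrounding slides, the idea is to control a linear combination of the measurements:

$$
c = H y, \qquad c \in \mathbb{R}^{n_u}, \; y \in \mathbb{R}^{n_y}
$$

where the constant matrix $H$ is computed off-line so that keeping $c$ at its setpoint gives close-to-optimal operation despite disturbances $d$ and measurement (implementation) errors $n$.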

17 Optimal measurement combination (unconstrained degrees of freedom)
1. Nullspace method for n = 0 (Alstad and Skogestad, 2007)
("Amazingly simple!" – Sigurd is told how easy it is to find H.)
Basis: want the optimal value of c to be independent of disturbances
Find the optimal solution as a function of d: u_opt(d), y_opt(d)
Linearize this relationship: Δy_opt = F Δd
Want Δc_opt = H Δy_opt = HF Δd = 0 for all values of Δd, i.e. HF = 0
It is always possible to find an H that satisfies HF = 0 provided n_y ≥ n_u + n_d
Optimal when we disregard the implementation error (n)
V. Alstad and S. Skogestad, "Null space method for selecting optimal measurement combinations as controlled variables", Ind. Eng. Chem. Res., 46 (3), 846-853 (2007).
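
A minimal numerical sketch of the nullspace method, with an invented sensitivity matrix F (here n_y = 4 and n_d = 2, so with n_u = 2 the condition n_y ≥ n_u + n_d holds):

```python
import numpy as np
from scipy.linalg import null_space

# Hypothetical optimal sensitivity matrix F = d(y_opt)/d(d),
# ny = 4 measurements, nd = 2 disturbances (numbers invented)
F = np.array([[ 1.0,  0.5],
              [ 0.2, -1.0],
              [-0.8,  0.3],
              [ 0.1,  0.9]])

# Nullspace method: any H whose rows span the left nullspace of F
# satisfies H F = 0, so c = H y is locally insensitive to disturbances.
H = null_space(F.T).T          # shape (ny - nd, ny) = (2, 4) here
print(np.allclose(H @ F, 0))   # True
```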

18 Nullspace method continued (unconstrained degrees of freedom)
Which measurements y should we combine (c = Hy)?
To handle implementation error: use "sensitive" measurements, with information about all the independent variables (u and d)

19 Optimal measurement combination (unconstrained degrees of freedom)
2. "Exact local method" (combined disturbances and implementation errors)
Loss L = J(u,d) − J_opt(d). Keep c = Hy constant, where y = G^y u + G^y_d d + n^y
Optimization problem for the optimal combination: minimize the worst-case loss over H
Theorem 1. Worst-case loss for a given H (Halvorsen et al., 2003); applies to any H (selection or combination). See the formula below.
I.J. Halvorsen, S. Skogestad, J.C. Morud and V. Alstad, "Optimal selection of controlled variables", Ind. Eng. Chem. Res., 42 (14), 3273-3284 (2003).
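
The loss expression itself is an image on the original slide; in the compact notation used in the follow-up papers it reads (with W_d and W_n the expected magnitudes of the disturbances and implementation errors):

$$
L_{wc} = \tfrac{1}{2}\,\bar{\sigma}(M)^2, \qquad
M = J_{uu}^{1/2}\,(H G^y)^{-1} H\,Y, \qquad
Y = [\,F W_d \;\; W_n\,]
$$

where $F = dy^{opt}/dd$ is the optimal sensitivity matrix and $\bar{\sigma}$ denotes the maximum singular value.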

20 Optimal measurement combination (unconstrained degrees of freedom)
2. Exact local method for combined disturbances and implementation errors
F = optimal sensitivity matrix = dy_opt/dd
Theorem 2. Explicit formula for the optimal H in Theorem 1 (Alstad et al., 2009); see below.
Theorem 3 (Kariwala et al., 2008); see below.
V. Alstad, S. Skogestad and E.S. Hori, "Optimal measurement combinations as controlled variables", Journal of Process Control, 19, 138-148 (2009).
V. Kariwala, Y. Cao and S. Janardhanan, "Local self-optimizing control with average loss minimization", Ind. Eng. Chem. Res., in press (2008).
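
The formulas are images on the original slides; as commonly stated in the cited papers, the explicit solution of Theorem 2 is

$$
H^T = (Y Y^T)^{-1} G^y \left( (G^y)^T (Y Y^T)^{-1} G^y \right)^{-1} J_{uu}^{1/2},
\qquad Y = [\,F W_d \;\; W_n\,]
$$

and Theorem 3 (Kariwala et al.) states that the H minimizing the average loss is "super-optimal": it simultaneously minimizes the worst-case loss of Theorem 1.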

21 Toy example

22 Toy example: Single measurements
Want loss < 0.1: consider variable combinations
Constant input: c = y_4 = u

23 Toy example: Measurement combinations

24 Toy example: 3. Nullspace method (no noise)
Loss caused by measurement error only
Recall the ranking of the single measurements: y_3 > y_2 > y_4 > y_1

25 Toy example: 4. Exact local method (with noise)

26 Toy example: 4. Exact local method, 2 measurements
Combined loss for disturbances and measurement errors

27 Toy example: 4. Exact local method, all 4 measurements
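
The numbers in the toy example are given as images on the slides; the sketch below reproduces the machinery with invented data (a scalar quadratic cost and four hypothetical measurements), comparing a single measurement against the optimal full combination:

```python
import numpy as np

# Hypothetical quadratic toy problem (all numbers invented for illustration):
# J(u,d) = (u - d)^2  =>  Juu = 2 and u_opt(d) = d
Juu = np.array([[2.0]])
Gy  = np.array([[0.1], [20.0], [10.0], [1.0]])  # dy/du for y1..y4
Gyd = np.array([[-0.1], [0.0], [-5.0], [0.0]])  # dy/dd
Wd  = np.array([[1.0]])                         # expected |d|
Wn  = np.diag([0.1, 1.0, 0.5, 0.1])             # expected measurement errors

F = Gy + Gyd                 # dy_opt/dd, since du_opt/dd = 1 here
Y = np.hstack([F @ Wd, Wn])

def worst_case_loss(H):
    """Exact local method: L_wc = 0.5 * sigma_max(M)^2 (Halvorsen et al., 2003)."""
    M = np.sqrt(Juu) @ np.linalg.inv(H @ Gy) @ H @ Y
    return 0.5 * np.linalg.svd(M, compute_uv=False)[0] ** 2

# Single measurement c = y4 = u (constant input)
H4 = np.array([[0.0, 0.0, 0.0, 1.0]])
print("c = y4:    ", worst_case_loss(H4))

# Optimal full combination from the explicit formula (Alstad et al., 2009)
YYt = Y @ Y.T
Hopt = (np.linalg.solve(YYt, Gy)
        @ np.linalg.inv(Gy.T @ np.linalg.solve(YYt, Gy))
        @ np.sqrt(Juu)).T
print("c = Hopt y:", worst_case_loss(Hopt))  # smaller loss by construction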

28 Example: CO2 refrigeration cycle
One unconstrained degree of freedom (u). Control what? c = ?
(The figure on the slide indicates the high pressure p_h as one candidate.)

29 CO2 refrigeration cycle
Step 1. One (remaining) degree of freedom: u = z
Step 2. Objective function: J = W_s (compressor work)
Step 3. Optimize operation for disturbances (d_1 = T_C, d_2 = T_H, d_3 = UA); the optimum is always unconstrained
Step 4. Implementation of optimal operation:
– No good single measurements (all give large losses): p_h, T_h, z, ...
– Nullspace method: need to combine n_u + n_d = 1 + 3 = 4 measurements to get zero disturbance loss
– Simpler: try combining two measurements. Exact local method: c = h_1 p_h + h_2 T_h = p_h + k T_h, with k = −8.53 bar/K
– Nonlinear evaluation of loss: OK!

30 Refrigeration cycle: Proposed control structure
Control c = "temperature-corrected high pressure"

31 Summary: Procedure for selecting controlled variables
1. Define the economics (cost J) and operational constraints
2. Identify degrees of freedom and important disturbances
3. Optimize for various disturbances
4. Identify regions of active constraints (off-line calculations)
For each active-constraint region, do steps 5-6:
5. Identify "self-optimizing" controlled variables for the remaining degrees of freedom
6. Identify switching policies between regions

32 Example switching policies – 10 km race
1. "Startup": given speed, or follow the "hare"
2. When heart rate > max or pain > max: switch to a slower speed
3. When close to the finish: switch to maximum power

33 Current research 1 (Sridharakumar Narasimhan and Henrik Manum): Conditions for switching between regions of active constraints
Idea: within each region it is optimal to
1. Control the active constraints at their constraint values c_{s,a}
2. Control the self-optimizing variables at their optimal setpoints c_{s,so}
Define c_i in each region i, and keep track of c_i (active constraints and "self-optimizing" variables) in all regions i
Switch to region i when an element of c_i changes sign
Research issue: can we get lost?

34 Current research 2 (Håkon Dahl-Olsen): Extension to dynamic systems
Basis, from dynamic optimization: the Hamiltonian should be minimized along the trajectory
Generalize the steady-state local methods:
– Generalize the maximum gain rule
– Generalize the nullspace method (n = 0)
– Generalize the "exact local method"

35 Current research 3 (Johannes Jäschke): Extension of the noise-free case (nullspace method) to nonlinear systems
Idea: the ideal self-optimizing variable is the gradient J_u, with optimal setpoint = 0
For certain problems (e.g. polynomial):
– Find an analytic expression for J_u in terms of u and d
– Derive J_u as a function of the measurements y (eliminate the disturbances d)
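
A minimal invented illustration of the idea: for $J(u,d) = (u-d)^2$ the gradient is $J_u = 2(u-d)$; if a measurement $y = u - d$ happens to be available, the disturbance is eliminated and $c = J_u = 2y$ is a self-optimizing variable with setpoint $c_s = 0$.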

36 Current research 4 (Henrik Manum): Self-optimizing control and explicit MPC
Our results on optimal measurement combinations (keep c = Hy constant):
– Nullspace method for n = 0 (Alstad and Skogestad, 2007)
– Explicit expression ("exact local method") for n ≠ 0 (Alstad et al., 2009)
Observation 1: both results are exact for quadratic optimization problems
Observation 2: MPC can be written as a quadratic optimization problem, and the optimal solution is to keep c = u − Kx constant
There must be some link!

37 Quadratic optimization problems: Noise-free case (n = 0)
Reformulation of the nullspace method of Alstad and Skogestad (2007):
– Linear constraints (c = Hy) can be added to the quadratic problem with no loss
– Need n_y ≥ n_u + n_d; H is unique if n_y = n_u + n_d (i.e. n_ym = n_d)
– H may be computed from the nullspace method
V. Alstad and S. Skogestad, "Null space method for selecting optimal measurement combinations as controlled variables", Ind. Eng. Chem. Res., 46 (3), 846-853 (2007).

38 Quadratic optimization problems: With noise / implementation error (n ≠ 0)
Reformulation of the exact local method of Alstad et al. (2009):
– Linear constraints (c = Hy) can be added with minimum loss
– Have an explicit expression for H from the "exact local method"
V. Alstad, S. Skogestad and E.S. Hori, "Optimal measurement combinations as controlled variables", Journal of Process Control, 19, 138-148 (2009).

39 Optimal control / explicit MPC
Treat the initial state x_0 as the disturbance d. In each active-constraint region, the discrete-time constrained MPC problem becomes an unconstrained quadratic optimization problem, so the above results can be used to find the linear constraints.
1. State feedback with no noise (LQ problem)
Measurements: y = [u x]
Linear constraints: c = Hy = u − Kx
n_x = n_d: no loss (the solution is unchanged) by keeping c = 0, so u = Kx is optimal!
The optimal feedback K can be found from the "nullspace method" – the same result as solving the Riccati equations (a numerical check is sketched below)
NEW INSIGHT FOR EXPLICIT MPC: use the change in sign of c for neighboring regions to decide when to switch regions
H. Manum, S. Narasimhan and S. Skogestad, "A new approach to explicit MPC using self-optimizing control", ACC, Seattle, June 2008.
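
A minimal numerical check of this equivalence, with an invented second-order system: the finite-horizon quadratic problem is solved analytically as a function of x_0 (playing the role of d), and the resulting first-move gain is compared with the Riccati-based LQ gain.

```python
import numpy as np
from scipy.linalg import solve_discrete_are

# Invented second-order system x[k+1] = A x[k] + B u[k]
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
Q = np.eye(2)
R = np.array([[0.1]])
nx, nu = B.shape

# Route 1: infinite-horizon LQ gain from the Riccati equation (u = -K x)
P = solve_discrete_are(A, B, Q, R)
K_lq = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

# Route 2: treat x0 as the disturbance d and solve the finite-horizon
# quadratic problem analytically. Stacking x_1..x_N as X = Phi x0 + Gamma U,
# the optimal input sequence is linear in x0: U_opt = S x0 (this is the
# sensitivity the nullspace method exploits when keeping c = u - Kx constant).
N = 200  # long horizon approximates the infinite-horizon problem
Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
Gamma = np.zeros((nx * N, nu * N))
for i in range(N):
    for j in range(i + 1):
        Gamma[i*nx:(i+1)*nx, j*nu:(j+1)*nu] = np.linalg.matrix_power(A, i - j) @ B
Qbar = np.kron(np.eye(N), Q)
Rbar = np.kron(np.eye(N), R)
S = -np.linalg.solve(Gamma.T @ Qbar @ Gamma + Rbar, Gamma.T @ Qbar @ Phi)

K_qp = -S[:nu, :]  # first-move gain: u_0 = -K_qp x0
print(np.allclose(K_lq, K_qp, atol=1e-5))  # True: same feedback law
```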

40 Explicit MPC with state feedback: simulation of a second-order system (time responses and phase-plane trajectory shown on the slide)

41 Optimal control / explicit MPC
2. Output feedback (not all states measured), no noise
Option 1: state estimator
Option 2: direct use of measurements for feedback
"Measurements": y = [u y_m]
Linear constraints: c = Hy = u − K y_m
No loss (the solution is unchanged) by keeping c = 0 (constant), so u = K y_m is optimal, provided we have enough independent measurements: n_y ≥ n_u + n_d, i.e. n_ym ≥ n_d
The optimal feedback K can be found from the "self-optimizing nullspace method"
PROBLEM: for the feedback gain K to be constant we need as many measurements as states (so this is then state feedback...)
Previous measurements can be used instead, but then there is some loss due to causality (we cannot affect past outputs)
H. Manum, S. Narasimhan and S. Skogestad, "Explicit MPC with output feedback using self-optimizing control", IFAC World Congress, Seoul, July 2008.

42 Explicit MPC with output feedback: simulation of the same second-order system, compared with state feedback (time responses shown on the slide)

43 Optimal control / explicit MPC
3. Further extension: output feedback with noise
Option 1: state estimator
Option 2: direct use of measurements for feedback
"Measurements": y = [u y_m]
Linear constraints: c = Hy = u − K y_m
The loss from using this feedback law (adding these constraints) is minimized by computing the feedback K using the "exact local method"
H. Manum, S. Narasimhan and S. Skogestad, "Explicit MPC with output feedback using self-optimizing control", IFAC World Congress, Seoul, July 2008.

44 Conclusion
Simple control policies are always preferred in practice (if they exist and can be found)
Paradigm 2: use off-line optimization and analysis to find simple near-optimal control policies suitable for on-line implementation
Current research, several interesting extensions:
– Optimal region switching
– Dynamic optimization
– Explicit MPC
Acknowledgements: Sridharakumar Narasimhan, Henrik Manum, Håkon Dahl-Olsen, Vinay Kariwala

