1 Effective Implementation of Optimal Operation using Self-Optimizing Control. Sigurd Skogestad, Institutt for kjemisk prosessteknologi, NTNU, Trondheim.



2 Classification of variables
Independent variables ("the cause"):
(a) Inputs (MV, u): variables we can adjust (valves)
(b) Disturbances (DV, d): variables outside our control
Dependent (output) variables ("the effect or result"):
(c) Primary outputs (CV1, y1): variables we want to keep at a given setpoint (economics)
(d) Secondary outputs (CV2, y2): extra outputs that we control to improve control (e.g., stabilizing loops)

3 Idealized view of control ("PhD control"). Control: use the inputs (MVs, u) to counteract the effect of the disturbances (DVs, d) such that the outputs (CV = y) are kept close to their setpoints ys, i.e., the error e = y - ys is kept small.
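The idea on this slide can be sketched in a few lines of Python. This is a toy illustration only: the one-line "plant" y = u + d, the integral gain, and the numbers are all made up, not from the slides.

```python
# Toy illustration of feedback: adjust the input u so that the output
# y = u + d stays at its setpoint ys despite an unknown disturbance d.
# A real loop would use a properly tuned PI(D) controller.

def run_feedback(ys, d, ki=0.5, steps=50):
    u, y = 0.0, 0.0
    for _ in range(steps):
        y = u + d              # plant: disturbance adds directly to the output
        u += ki * (ys - y)     # integral action drives the error e = ys - y to zero
    return y

y_final = run_feedback(ys=1.0, d=0.3)
# y converges to ys = 1.0 even though d = 0.3 is never measured
```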

4 Practice: Tennessee Eastman challenge problem (Downs, 1991) ("PID control"). Where should the loops (TC, PC, LC, CC, SRC) be placed?

5 How do we design a control system for a complete chemical plant? How do we get from PID control to PhD control? Where do we start? What should we control, and why?

6 Plantwide control = control structure design
Not the tuning and behavior of each control loop, but rather the control philosophy of the overall plant, with emphasis on the structural decisions:
– Selection of controlled variables ("outputs")
– Selection of manipulated variables ("inputs")
– Selection of (extra) measurements
– Selection of control configuration (structure of the overall controller that interconnects the controlled, manipulated and measured variables)
– Selection of controller type (LQG, H-infinity, PID, decoupler, MPC, etc.)
That is: control structure design includes all the decisions we need to make to get from "PID control" to "PhD control".

7 Main objectives of a control system
1. Economics: implementation of acceptable (near-optimal) operation
2. Regulation: stable operation
Are these objectives conflicting? Usually NOT:
– They act on different time scales (stabilization on the fast time scale)
– Stabilization doesn't "use up" any degrees of freedom: the reference value (setpoint) remains available for the layer above. But it does "use up" part of the time window (frequency range).

8 Optimal operation (economics): examples of systems we want to operate optimally
– Process plant: minimize J = economic cost
– Runner: minimize J = time
– "Green" process plant: minimize J = environmental impact (at a given economic cost)
– General multiobjective: minimize J (scalar cost, often $) subject to satisfying constraints (environment, resources)

9 Theory: optimal operation with a centralized optimizer. Given the objectives, a model of the overall system, and an estimate of the present state, optimize all (physical) degrees of freedom. In theory, process control looks like an excellent candidate for centralized control.
Problems:
– Model not available
– Optimization complex
– Not robust (difficult to handle uncertainty)
– Slow response time

10 Practice: engineering systems. Most (all?) large-scale engineering systems are controlled using hierarchies of quite simple controllers: a large-scale chemical plant (refinery) or a commercial aircraft has hundreds of loops built from simple components (on-off and PI control, nonlinear fixes, some feedforward). The same holds in biological systems.

11 Practical operation: hierarchical structure (our paradigm). Manager → process engineer → operator/RTO → operator/"advanced control"/MPC → PID control → u = valves.

12 Translate optimal operation into simple control objectives: what should we control? CV1 = c = ? (economics); CV2 = ? (stabilization).

13 Outline: Skogestad procedure for control structure design
I. Top-down
– Step S1: Define operational objectives (cost) and constraints
– Step S2: Identify degrees of freedom and optimize operation for disturbances
– Step S3: Implementation of optimal operation: what to control? (primary CVs; self-optimizing control)
– Step S4: Where to set the production rate? (inventory control)
II. Bottom-up
– Step S5: Regulatory control: what more to control? (secondary CVs)
– Step S6: Supervisory control
– Step S7: Real-time optimization

14 Step S1. Define optimal operation (economics). What are we going to use our degrees of freedom (u = MVs) for? Typical cost function in the process industry*: J = cost of feed + cost of energy - value of products. Note: J = -P, where P = operational profit. (*No need to include fixed costs such as capital, operators and maintenance at "our" time scale of hours.)
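As a minimal sketch, the cost function on this slide can be written directly in Python. All prices and flows below are made-up numbers for illustration only.

```python
# J [$/h] = (feed price * feed rate) + (energy price * energy use)
#           - sum(product price * product rate); note J = -P (profit).

def operating_cost(feed_rate, energy_use, product_rates,
                   feed_price, energy_price, product_prices):
    value_products = sum(p * r for p, r in zip(product_prices, product_rates))
    return feed_price * feed_rate + energy_price * energy_use - value_products

J = operating_cost(feed_rate=100.0, energy_use=50.0,
                   product_rates=[60.0, 35.0],
                   feed_price=2.0, energy_price=0.5,
                   product_prices=[5.0, 1.0])
# J = 200 + 25 - (300 + 35) = -110, i.e., a profit P = 110 $/h
```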

15 Step S2. Optimize: (a) identify degrees of freedom; (b) optimize for expected disturbances. Needs a good model, usually steady-state. Optimization is time-consuming, but it is done offline. Main goal: identify the ACTIVE CONSTRAINTS. A good engineer can often guess the active constraints.

16 Step S3. Implementation of optimal operation. We have found the optimal way of operating; how should it be implemented? What to control (primary CVs, c = Hy)?
1. Active constraints
2. Self-optimizing variables (for the unconstrained degrees of freedom)

17 Optimal operation: runner. Cost to be minimized: J = T (time). One degree of freedom (u = power). What should we control?

18 1. Optimal operation of a sprinter (100 m, J = T). Active constraint control: maximum speed ("no thinking required"): CV = power (at max).

19 2. Optimal operation of a marathon runner (40 km, J = T, u = power). Unconstrained optimum: what should we control? CV = ?

20 Self-optimizing control: marathon (40 km). Any self-optimizing variable (to control at a constant setpoint)?
c1 = distance to the leader of the race
c2 = speed
c3 = heart rate
c4 = level of lactate in muscles

21 Implementation: feedback control of the marathon runner (J = T). Simplest case: select one measurement, c = heart rate. Simple and robust implementation; disturbances are handled indirectly by keeping a constant heart rate; the setpoint (heart rate) may be adjusted infrequently based on measurements. "Self-optimizing control": a constant setpoint for c gives acceptable loss.

22 Constant setpoint policy: loss for disturbances. Acceptable loss ⇒ self-optimizing control.

23 Unconstrained optimum: NEVER try to control the cost J (or any variable that reaches a maximum or minimum at the optimum). Assume we want to minimize J (e.g., J = V = energy) and we make the poor choice of selecting CV = V = J. Then setting the setpoint J < Jmin gives infeasible operation (the setpoint cannot be reached), while setting J > Jmin forces us to be nonoptimal (two steady states; may require strange operation).

24 Unconstrained degrees of freedom: ideal "self-optimizing" variables. Operational objective: minimize the cost function J(u, d). The ideal self-optimizing variable is the gradient Ju (the first-order optimality condition; Halvorsen and Skogestad, 1997; Bonvin et al., 2001; Cao, 2003), with optimal setpoint Ju = 0. BUT: the gradient cannot be measured in practice. Possible approach: estimate the gradient Ju from measurements y. Approach here: look directly for c as a function of the measurements (c = Hy) without going via the gradient.
References: I.J. Halvorsen and S. Skogestad, "Optimizing control of Petlyuk distillation: understanding the steady-state behavior", Comp. Chem. Engng., 21, S249-S254 (1997). I.J. Halvorsen and S. Skogestad, "Indirect on-line optimization through setpoint control", AIChE Annual Meeting, 16-21 Nov. 1997, Los Angeles, paper 194h.

25 Unconstrained degrees of freedom: controlling the gradient to zero, Ju = 0. All of these methods are optimal only for the assumed disturbances.
A. Standard optimization (RTO), offline/online: (1) estimate disturbances and present state; (2) use the model to find the optimal operation (usually numerically); (3) implement the optimal point, either as optimal inputs, as setpoints cs for the control layer (most common), or by controlling the computed gradient (setpoint = 0). Complex.
B. Analytic elimination (Jäschke method): (1) find an analytic (nonlinear) expression for the gradient; (2) eliminate unmeasured variables, Ju = f(y); (3) control the gradient to zero: the control layer adjusts u until Ju = f(y) = 0. Not generally possible, especially step 2.
C. Local loss method (local self-optimizing control): (1) find a linear expression for the gradient, Ju = c = Hy; (2) control c to zero. Simple, but assumes the nominal point is optimal.

26 Unconstrained degrees of freedom: linear measurement combinations, c = Hy (c = Hy is an approximate gradient Ju). Two approaches:
1. Nullspace method (HF = 0): simple, but has limitations — it needs many measurements if there are many disturbances (ny = nu + nd), and it does not handle measurement noise.
2. Generalization, the exact local method: works for any measurement set y and handles measurement error/noise, but must assume that the nominal point is optimal.

27 Unconstrained variables. Ideal: c = Ju. In practice: c = Hy, where H combines the measurements, and the loop is subject to disturbances, measurement noise and steady-state control error.

28 What are good "self-optimizing" variables? Intuition: "dominant variables" (Shinnar). Is there a systematic procedure?
A. Sensitive variables: "max. gain rule" (gain = minimum singular value)
B. "Brute force" loss evaluation
C. Optimal linear combination of measurements, c = Hy

29 "Brute force" analysis: what to control? Define optimal operation: minimize the cost function J. For each candidate variable c: with constant setpoints cs, compute the loss L for the expected disturbances d and implementation errors n. Select the variable c with the smallest loss.
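A hedged sketch of this brute-force procedure on a toy problem. The model J(u, d) = (u - d)^2, the two candidate measurements, and the disturbance set are all made up for illustration.

```python
# Toy brute-force CV screening. Model: J(u, d) = (u - d)^2, so u_opt(d) = d
# and J_opt(d) = 0. Two candidate CVs, each held at its nominal (d = 0)
# optimal setpoint of 0: c = y1 = u, and c = y2 = u - 0.9*d.

def loss(candidate, d):
    # Input u that keeps the candidate CV at its setpoint 0:
    u = 0.0 if candidate == "y1" else 0.9 * d
    return (u - d) ** 2            # L = J(u, d) - J_opt(d)

disturbances = (-1.0, 1.0)
worst = {c: max(loss(c, d) for d in disturbances) for c in ("y1", "y2")}
# worst-case loss: 1.0 for c = y1, 0.01 for c = y2 -> select c = y2
```

The point of the toy: the optimal value of y2 barely moves with d, so holding it constant keeps u close to u_opt(d).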

30 C. Local loss method. (Figure: cost J versus the controlled variable c at an unconstrained optimum, with minimum Jopt at copt.)

31 Two problems at an unconstrained optimum:
1. The optimum moves because of disturbances d: copt(d) (loss 1)
2. Implementation error: c = copt + n (loss 2)

32 Candidate controlled variables c for self-optimizing control (intuitive requirements):
1. The optimal value of c should be insensitive to disturbances (avoids problem 1)
2. The optimum should be flat (avoids problem 2, implementation error). Equivalently: the value of c should be sensitive to the degrees of freedom u — "want large gain" |G|, or more generally, maximize the minimum singular value σ(G).

33 Quantitative steady-state: maximum gain rule (Skogestad and Postlethwaite, 1996). Look for variables that maximize the scaled gain σ(Gs), the minimum singular value of the appropriately scaled steady-state gain matrix Gs from u to c.

34 Why is a large gain good? With Δc = G Δu, a large gain G means that a large implementation error n (in c) translates into only a small deviation of u from uopt(d), leading to lower loss.
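A hedged numeric sketch of the maximum gain rule described on these two slides. The gains and spans are made-up toy numbers, and only the output scaling is shown; the full scaling on the slides also involves Juu^(-1/2).

```python
# Maximum gain rule (sketch): among candidate CVs, prefer the one with the
# largest minimum singular value of the scaled steady-state gain Gs,
# where each output is scaled by 1/span (expected variation + noise).
import numpy as np

def scaled_min_sv(G, span):
    Gs = np.diag(1.0 / np.asarray(span)) @ np.atleast_2d(G)
    return np.linalg.svd(Gs, compute_uv=False).min()

# Two scalar candidates with the same span but different gains from u:
sv_c1 = scaled_min_sv(np.array([[10.0]]), [1.0])   # sensitive candidate
sv_c2 = scaled_min_sv(np.array([[0.1]]), [1.0])    # insensitive candidate
# sv_c1 = 10 >> sv_c2 = 0.1: pick c1. The same implementation error n in c
# maps to a 100x smaller deviation in u, hence a much lower loss.
```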

35-36 (Figure slides: the measurement combination matrix H.)

37 Optimal measurement combination c = Hy. Candidate measurements y: also include the inputs u.

38 Nullspace method (no measurement noise, ny = 0).

39 CV = measurement combination: the nullspace method (HF = 0) gives Ju = 0. Proof: Appendix B in Jäschke and Skogestad, "NCO tracking and self-optimizing control in the context of real-time optimization", Journal of Process Control, 1407-1416 (2011).

40 Proof of the nullspace method (no measurement noise, ny = 0). Basis: we want the optimal value of c to be independent of disturbances. Find the optimal solution as a function of d, uopt(d) and yopt(d), and linearize this relationship: Δyopt = F Δd. We want Δcopt = H Δyopt = HF Δd = 0 for all Δd, i.e., HF = 0; for such an H to exist we need enough measurements (ny = nu + nd). This is optimal when we disregard the implementation error n. (Cartoon: "Amazingly simple!" — Sigurd is told how easy it is to find H.) Reference: V. Alstad and S. Skogestad, "Null Space Method for Selecting Optimal Measurement Combinations as Controlled Variables", Ind. Eng. Chem. Res., 46 (3), 846-853 (2007).

41 Example: nullspace method for the marathon runner. u = power, d = slope [degrees]; y1 = hr [beats/min], y2 = v [m/s]. F = dyopt/dd = [0.25, -0.2]'. With H = [h1, h2], HF = 0 gives h1 f1 + h2 f2 = 0.25 h1 - 0.2 h2 = 0. Choosing h1 = 1 gives h2 = 0.25/0.2 = 1.25. Conclusion: c = hr + 1.25 v. Controlling c = constant means hr increases when v decreases (OK uphill!).
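The slide's calculation can be checked numerically in a few lines (the numbers are exactly those on the slide):

```python
# Nullspace method for the marathon runner: F = dy_opt/dd for y = (hr, v)
# and d = slope; choose H = [h1, h2] in the left nullspace of F.
import numpy as np

F = np.array([[0.25], [-0.2]])     # optimal sensitivities from the slide
h1 = 1.0
h2 = -h1 * F[0, 0] / F[1, 0]       # enforce h1*f1 + h2*f2 = 0 -> h2 = 1.25
H = np.array([[h1, h2]])
assert np.allclose(H @ F, 0.0)     # HF = 0: c_opt is insensitive to the slope
# c = hr + 1.25*v, exactly the combination on the slide
```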

42 Optimal H ("exact local method"): problem definition.

43 Loss evaluation (with measurement noise). The loss with c = Hym = 0 is due to (i) disturbances d and (ii) measurement noise ny. (Controlled variables held at cs = constant; block diagram with controller K and combination H.) See Eq. (10.12) in the book; proof in the handwritten notes.

44 Optimal H (with measurement noise).
1. Kariwala: can minimize the 2-norm (Frobenius norm) instead of the singular value.
2. BUT this is a seemingly non-convex optimization problem (Halvorsen et al., 2003; see book p. 397). Improvement (Alstad et al., 2009): there are extra degrees of freedom (D = any non-singular matrix gives an equivalent DH), which make it a convex optimization problem with a global solution; it does not need Juu. An analytical solution applies when YYT has full rank (with measurement noise).
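A hedged numeric sketch of the analytical solution referred to on this slide, in the form H' = (YY')^(-1) Gy with Y = [F Wd, Wny] (following Alstad et al., 2009; any non-singular D gives an equivalent DH). All matrices below are small made-up toy values, not from the slides.

```python
# Exact local method, analytical solution (sketch): H^T = (Y Y^T)^{-1} G^y,
# with Y = [F Wd, Wny]. Requires Y Y^T to have full rank.
import numpy as np

def exact_local_H(F, Wd, Wny, Gy):
    Y = np.hstack([F @ Wd, Wny])            # disturbance + noise "span" matrix
    return np.linalg.solve(Y @ Y.T, Gy).T   # H^T = (Y Y^T)^{-1} G^y

F = np.array([[0.5], [1.0], [-0.3]])    # optimal sensitivities dy_opt/dd
Wd = np.array([[1.0]])                  # disturbance magnitude
Wny = 0.1 * np.eye(3)                   # measurement-noise magnitudes
Gy = np.array([[2.0], [1.0], [0.5]])    # steady-state gains from u to y
H = exact_local_H(F, Wd, Wny, Gy)       # 1x3 combination matrix: c = Hy

# Sanity check against the special case on the next slide: with Wd = 0 and
# Wny = I, Y Y^T = I, so H^T reduces to G^y.
H0 = exact_local_H(F, np.zeros((1, 1)), np.eye(3), Gy)
```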

45 Special cases.
– No noise (ny = 0, Wny = 0): the optimum is HF = 0 (nullspace method); but if there are many measurements, the solution is not unique.
– No disturbances (d = 0, Wd = 0) and the same noise for all measurements (Wny = I): the optimum is HT = Gy ("control sensitive measurements").
Proof: use the analytic expression.

46 Relationship to the maximum gain rule (maximize σ(S1 G Juu^(-1/2)), with G = HGy): the "minimize" part corresponds to the scaling S1^(-1) in the max. gain rule, and the "= 0" part corresponds to the nullspace method (no noise).

47 Special case: indirect control = estimation using a "soft sensor".
– Indirect control: control measurement combinations c = Hy such that the primary output y1 is constant; the optimal sensitivity is F = dyopt/dd = (dy/dd) at constant y1. Distillation example: y = temperature measurements, y1 = product composition.
– Estimation: estimate the primary variables, y1 = c = Hy, with y1 = G1 u and y = Gy u. This is the same as indirect control if we use the extra degrees of freedom (H1 = DH) such that H1 Gy = G1.
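A hedged sketch of the estimation view on this slide: choose H so that the combination c = Hy reproduces the primary variable, i.e., H Gy = G1 when y1 = G1 u and y = Gy u. Solved here with a pseudoinverse; the gain matrices are made-up toy values.

```python
# Soft sensor sketch: find H with H @ Gy = G1, so that c = Hy estimates y1.
import numpy as np

G1 = np.array([[1.0]])                  # gain from u to primary output y1
Gy = np.array([[2.0], [1.0], [0.5]])    # gains from u to the 3 measurements
H = G1 @ np.linalg.pinv(Gy)             # soft sensor: y1_hat = H @ y
# With noise-free y = Gy*u the estimate is exact: H @ Gy == G1
```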

48 Summary 1: systematic procedure for selecting controlled variables for optimal operation
1. Define the economics (cost J) and operational constraints
2. Identify degrees of freedom and important disturbances
3. Optimize for various disturbances
4. Identify active-constraint regions (off-line calculations)
For each active-constraint region, do steps 5-6:
5. Identify "self-optimizing" controlled variables for the remaining unconstrained degrees of freedom: (A) "brute force" loss evaluation; (B) sensitive variables, the "max. gain rule" (gain = minimum singular value); (C) optimal linear combination of measurements, c = Hy
6. Identify switching policies between regions

49 Summary 2. A systematic approach for finding CVs, c = Hy; can also be used for estimation.
– S. Skogestad, "Control structure design for complete chemical plants", Computers and Chemical Engineering, 28 (1-2), 219-234 (2004).
– V. Alstad and S. Skogestad, "Null Space Method for Selecting Optimal Measurement Combinations as Controlled Variables", Ind. Eng. Chem. Res., 46 (3), 846-853 (2007).
– V. Alstad, S. Skogestad and E.S. Hori, "Optimal measurement combinations as controlled variables", Journal of Process Control, 19, 138-148 (2009).
– R. Yelchuru, S. Skogestad and H. Manum, "MIQP formulation for controlled variable selection in self-optimizing control", IFAC symposium DYCOPS-9, Leuven, Belgium, 5-7 July 2010.
Download papers: Google "Skogestad".

50 Example: CO2 refrigeration cycle. One unconstrained DOF (u): what should we control? c = ? (e.g., the high pressure ph).

51 CO2 refrigeration cycle.
Step 1: one (remaining) degree of freedom (u = z).
Step 2: objective function J = Ws (compressor work).
Step 3: optimize operation for the disturbances (d1 = TC, d2 = TH, d3 = UA); the optimum is always unconstrained.
Step 4: implementation of optimal operation. No single measurement is good (all give large losses): ph, Th, z, … The nullspace method would need to combine nu + nd = 1 + 3 = 4 measurements to give zero disturbance loss. Simpler: try combining two measurements with the exact local method: c = h1 ph + h2 Th = ph + k Th, with k = -8.53 bar/K. A nonlinear evaluation of the loss confirms this is OK.
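The combined CV on this slide is just a linear combination, so it is easy to illustrate. The operating points below are made-up numbers; the setpoint cs absorbs any constant offset in c.

```python
# Combined CV from the slide: c = ph + k*Th with k = -8.53 bar/K
# ("temperature-corrected high pressure").

def corrected_high_pressure(p_h_bar, T_h, k=-8.53):
    return p_h_bar + k * T_h

# Keeping c constant means ph must rise by 8.53 bar for every 1 K rise in Th:
c1 = corrected_high_pressure(100.0, 35.0)    # made-up nominal point
c2 = corrected_high_pressure(108.53, 36.0)   # Th +1 K, ph +8.53 bar: same c
```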

52 Refrigeration cycle: proposed control structure. Control c = "temperature-corrected high pressure".

53 Conclusion: optimal operation.
ALWAYS:
1. Control the active constraints, and control them tightly! (Good times: maximize throughput → tight control of the bottleneck.)
2. Identify "self-optimizing" CVs for the remaining unconstrained degrees of freedom.
Use offline analysis to find the expected operating regions and prepare the control system for them: one control policy when prices are low (nominal, unconstrained optimum), another when prices are high (constrained optimum = bottleneck).
ONLY if necessary: consider RTO on top of this.

