Feedback: Still the simplest and best solution. Sigurd Skogestad, Department of Chemical Engineering, Norwegian University of Science and Technology (NTNU)

1 Feedback: Still the simplest and best solution. Sigurd Skogestad, Department of Chemical Engineering, Norwegian University of Science and Technology (NTNU), Trondheim. Applications to 1) robustness, 2) economics (self-optimizing control), and 3) stabilization of new operating regimes.

2 Feedback: Still the simplest and best solution. Applications to self-optimizing control and stabilization of new operating regimes. Sigurd Skogestad, Department of Chemical Engineering, Norwegian University of Science and Technology (NTNU), Trondheim. Xi’an May 2009 + Statoil Nov. 2009 + Slovakia Feb. 2010.

3 Abstract. Feedback: Still the simplest and best solution. Applications to self-optimizing control and stabilization of new operating regimes. Sigurd Skogestad, NTNU, Trondheim, Norway.
Most engineers are (indirectly) trained to be “feedforward thinkers” and immediately think of “model inversion” when it comes to doing control. Thus, they prefer to rely on models instead of data, although simple feedback solutions are in many cases much simpler and certainly more robust. The seminar starts with a simple comparison of feedback and feedforward control and their sensitivity to uncertainty. Then two nice applications of feedback are considered:
1. Implementation of optimal operation by “self-optimizing control”. The idea is to turn optimization into a setpoint control problem, and the trick is to find the right variable to control. Applications include process control, pizza baking, marathon running, biology and the central bank of a country.
2. Stabilization of desired operating regimes. Here feedback control can lead to completely new and simple solutions. One example would be stabilization of laminar flow at conditions where we normally have turbulent flow. In the seminar a nice application to anti-slug control in multiphase pipeline flow is discussed.

4 Trondheim, Norway. Xi’an.

5 [Map: Trondheim, Oslo, the North Sea; UK, NORWAY, DENMARK, GERMANY, SWEDEN; Arctic circle.]

6 NTNU, Trondheim.

7 Research, Sigurd Skogestad. Graduated PhDs since 2000:
1. Truls Larsson, Studies on plantwide control, Aug. 2000. (Aker Kværner, Stavanger)
2. Eva-Katrine Hilmen, Separation of azeotropic mixtures, Dec. 2000. (ABB, Oslo)
3. Ivar J. Halvorsen, Minimum energy requirements in distillation, May 2001. (SINTEF)
4. Marius S. Govatsmark, Integrated optimization and control, Sept. 2003. (Statoil, Haugesund)
5. Audun Faanes, Controllability analysis and control structures, Sept. 2003. (Statoil, Trondheim)
6. Hilde K. Engelien, Process integration for distillation columns, March 2004. (Aker Kværner)
7. Stathis Skouras, Heteroazeotropic batch distillation, May 2004. (StatoilHydro, Haugesund)
8. Vidar Alstad, Studies on selection of controlled variables, June 2005. (Statoil, Porsgrunn)
9. Espen Storkaas, Control solutions to avoid slug flow in pipeline-riser systems, June 2005. (ABB)
10. Antonio C.B. Araujo, Studies on plantwide control, Jan. 2007. (Un. Campina Grande, Brazil)
11. Tore Lid, Data reconciliation and optimal operation of refinery processes, June 2007. (Statoil)
12. Federico Zenith, Control of fuel cells, June 2007. (Max Planck Institute, Magdeburg)
13. Jørgen B. Jensen, Optimal operation of refrigeration cycles, May 2008. (ABB, Oslo)
14. Heidi Sivertsen, Stabilization of desired flow regimes (no slug), Dec. 2008. (Statoil, Stjørdal)
15. Elvira M.B. Aske, Plantwide control systems with focus on max throughput, Mar 2009. (Statoil)
Current research: Restricted-complexity control (self-optimizing control): off-line and analytical solutions to optimal control (incl. explicit MPC & explicit RTO); multivariable PID; batch processes. Plantwide control. Applications: LNG, GTL.

8 Outline
1. Why feedback (and not feedforward)? The feedback amplifier
2. Self-optimizing control: How do we link optimization and feedback? What should we control?
3. Stabilizing feedback control: Anti-slug control
Conclusion

9 Example: AMPLIFIER. Amplifier G with input r and output y. Want: y(t) = α r(t).
Solution 1 (feedforward): G = α (adjust amplifier gain). Very difficult in practice:
– Cannot get exact value of α
– Cannot easily adjust α online
– Do not get same amplification at all frequencies
– Problems with distortion and nonlinearity

10 Want: y(t) = α r(t). Solution 2 (feedback): K = 1/α (adjustable), with K measuring y. Closed-loop response: MAGIC! Independent of G, provided GK >> 1. Black’s feedback amplifier (1927).
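The “magic” can be checked with a few lines of arithmetic. A minimal numerical sketch, with assumed gains (raw amplifier gain G = 10^4, desired amplification α = 10): the closed-loop gain y/r = G/(1 + GK) stays within about 0.2% of α = 1/K even when G itself changes by 50%.

```python
# Black's amplifier: feedback gain K = 1/alpha in the measurement path,
# closed-loop gain y/r = G / (1 + G*K) -> alpha whenever G*K >> 1.
def closed_loop_gain(G, K):
    return G / (1.0 + G * K)

alpha = 10.0                            # desired amplification
K = 1.0 / alpha                         # adjustable feedback gain

g_nominal = closed_loop_gain(1e4, K)    # assumed raw gain G = 10^4
g_halved = closed_loop_gain(5e3, K)     # raw gain drops by 50%
# Both closed-loop gains stay within about 0.2% of alpha = 10:
# the closed loop is nearly independent of G.
```

The two closed-loop gains differ by roughly 0.1% even though G changed by a factor of two, which is the whole point of the high-gain feedback solution.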

11 Example: disturbance rejection. Plant (uncontrolled system): output y, input u, disturbance d, transfer functions G and Gd. [Plot: open-loop disturbance response, gain k = 10, time axis 0–25.]

12 1. Feedforward control (measure d). ”Perfect” feedforward control: u = -G⁻¹ Gd d. Our case: G = Gd → use u = -d.

13 1. Feedforward control: Nominal (perfect model). [Block diagram: G, Gd, u, d, y.]

14 2. Feedback control. [Block diagram: controller C with setpoint ys and error e; plant G, Gd, input u, disturbance d, output y.]

15 2. Feedback PI-control: Nominal case. [Plots: input u and resulting output y.] Feedback generates the inverse!

16 Robustness comparison. Gain error: k = 5, 10 (nominal), 20. Time constant error: τ = 5, 10 (nominal), 20. Time delay error: θ = 0 (nominal), 1, 2, 3.

17 Robustness: Gain error, k = 5, 10 (nominal), 20. 1. FEEDFORWARD 2. FEEDBACK

18 Robustness: Time constant error, τ = 5, 10 (nominal), 20. 1. FEEDFORWARD 2. FEEDBACK

19 Robustness: Time delay error, θ = 0 (nominal), 1, 2, 3. 1. FEEDFORWARD 2. FEEDBACK
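The gain-error comparison can be reproduced with a small simulation. This is only a sketch under assumed conditions, not the tuning behind the slides: nominal G = Gd = 10/(10s+1), a unit step disturbance, the gain error entering only the manipulated-input path G, and an assumed PI tuning (Kc = 1, τI = 10). Feedforward keeps u = -d from the nominal model; feedback reacts to the measured output.

```python
# Compare feedforward and PI feedback for disturbance rejection when the
# plant gain is wrong (nominal k = 10, actual k = 20).  First-order G and
# Gd are integrated with explicit Euler steps.
def simulate(k_actual, controller, t_end=100.0, dt=0.01):
    tau, d = 10.0, 1.0                       # time constant, unit step disturbance
    yG, yGd, integ = 0.0, 0.0, 0.0           # plant states and PI integral
    for _ in range(int(t_end / dt)):
        y = yG + yGd                         # measured output
        u, integ = controller(y, integ, dt)
        yG += dt * (-yG + k_actual * u) / tau     # G with the (wrong) gain
        yGd += dt * (-yGd + 10.0 * d) / tau       # Gd keeps the nominal gain 10
    return yG + yGd

def feedforward(y, integ, dt):
    return -1.0, integ                       # u = -d from the nominal inverse

def pi_feedback(y, integ, dt, Kc=1.0, tauI=10.0):
    e = 0.0 - y                              # setpoint ys = 0
    integ += e * dt
    return Kc * (e + integ / tauI), integ

y_ff = simulate(20.0, feedforward)           # feedforward, 2x gain error
y_fb = simulate(20.0, pi_feedback)           # feedback, same gain error
```

With the factor-2 gain error, feedforward leaves a large steady-state offset (y ≈ -10 here), while the PI feedback still drives y back to zero; with the nominal gain both schemes reject the disturbance.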

20 Comment. Time delay error in the disturbance model (Gd): No effect (!) with feedback (except a time shift). Feedforward: Similar effect as a time delay error in G.

21 Conclusion: Why feedback? (and not feedforward control). Simple: High gain feedback!
– Counteract unmeasured disturbances
– Reduce effect of changes / uncertainty (robustness)
– Change system dynamics (including stabilization)
– Linearize the behavior
– No explicit model required
MAIN PROBLEM: Potential instability (may occur “suddenly”) with time delay or an unstable (RHP) zero. An unstable (RHP) zero is a fundamental problem with feedback: a detailed model plus state estimator (Kalman filter) does not help…

22 Outline. I. Why feedback (and not feedforward)? II. Self-optimizing feedback control: How do we link optimization and feedback? What should we control? III. Stabilizing feedback control: Anti-slug control. Conclusion.

23 Optimal operation (economics). Define a scalar cost function J(u0, x, d), where u0 = degrees of freedom, d = disturbances, x = states (internal variables). Optimal operation for given d is a dynamic optimization problem:
min over u0 of J(u0, x, d)
subject to the model f(u0, x, d) = 0 and the constraints g(u0, x, d) ≤ 0.
Here: How do we implement optimal operation?
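To make the problem concrete, here is a tiny made-up steady-state instance (all numbers purely illustrative): scalar input u, scalar disturbance d, quadratic cost J(u, d) = (u - d)^2 + 0.1u, and one constraint u ≤ 2. Model-based re-optimization computes a new u_opt(d) for each disturbance estimate.

```python
# Toy steady-state optimal operation: for a quadratic cost with one bound
# constraint, the optimizer is the unconstrained minimizer projected onto
# the feasible set.  dJ/du = 2(u - d) + 0.1 = 0 gives u = d - 0.05.
def J(u, d):
    return (u - d) ** 2 + 0.1 * u

def u_opt(d):
    u_unc = d - 0.05            # unconstrained minimizer
    return min(u_unc, 2.0)      # project onto the constraint u <= 2

u1 = u_opt(1.0)                 # small disturbance: interior optimum
u5 = u_opt(5.0)                 # large disturbance: constraint u <= 2 active
```

For d = 5 the constraint is active (u_opt = 2), illustrating the later point that most degrees of freedom are typically used to satisfy active constraints.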

24 1. ”Obvious” solution: Optimizing control = ”feedforward”. Estimate d and compute a new u_opt(d). Problem: Complicated and sensitive to uncertainty.

25 2. In practice: Feedback implementation. Issue: What should we control?

26 Process control hierarchy: RTO → MPC → PID. y1 = c? (economics)

27 What should we control?
CONTROL ACTIVE CONSTRAINTS!
– The optimal solution is usually at constraints, that is, most of the degrees of freedom are used to satisfy “active constraints”, g(u0, d) = 0.
– Implementation of active constraints is usually simple.
WHAT MORE SHOULD WE CONTROL?
– But what about the remaining unconstrained degrees of freedom?
– Look for “self-optimizing” controlled variables!

28 Self-optimizing control, c = cs. Definition: Self-optimizing control is when acceptable operation (= acceptable loss) can be achieved using constant setpoints (cs) for the controlled variables c (without the need to re-optimize when disturbances occur).

29 Optimal operation – Runner. Cost: J = T. One degree of freedom (u = power). Optimal operation?

30 Optimal operation – Runner. Solution 1: Optimizing control. Even getting a reasonable model requires > 10 PhDs … and the model has to be fitted to each individual…. Clearly impractical!

31 Optimal operation – Runner. Solution 2: Feedback (self-optimizing control). What should we control?

32 Optimal operation – Runner. Self-optimizing control: Sprinter (100 m). 1. Optimal operation of sprinter, J = T. Active constraint control: Maximum speed (”no thinking required”).

33 Optimal operation – Runner. Self-optimizing control: Marathon (40 km). Optimal operation of marathon runner, J = T. Any self-optimizing variable c (to control at constant setpoint)?
c1 = distance to leader of race
c2 = speed
c3 = heart rate
c4 = level of lactate in muscles

34 Optimal operation – Runner. Conclusion marathon runner: c = heart rate (select one measurement). Simple and robust implementation. Disturbances are indirectly handled by keeping a constant heart rate. May have infrequent adjustment of the setpoint (heart rate).

35 Further examples
– Central bank. J = welfare. u = interest rate. c = inflation rate (2.5%)
– Cake baking. J = nice taste. u = heat input. c = temperature (200 °C)
– Business. J = profit. c = ”key performance indicator (KPI)”, e.g. response time to order, energy consumption per kg or unit, number of employees, research spending. Optimal values obtained by ”benchmarking”.
– Investment (portfolio management). J = profit. c = fraction of investment in shares (50%)
– Biological systems: ”Self-optimizing” controlled variables c have been found by natural selection. Need to do ”reverse engineering”: find the controlled variables used in nature; from this, possibly identify what overall objective J the biological system has been attempting to optimize.

36 Optimal operation, unconstrained optimum. [Plot: cost J versus controlled variable c, with optimum (c_opt, J_opt).]

37 Optimal operation, unconstrained optimum. Two problems: 1. The optimum moves because of disturbances d: c_opt(d). 2. Implementation error, c = c_opt + n.

38 Effect of implementation error, unconstrained optimum. [Plots: flat optimum (good) versus sharp optimum (BAD).]

39 Candidate controlled variables c for self-optimizing control (unconstrained optimum). Intuitive:
1. The optimal value of c should be insensitive to disturbances (avoid problem 1). The ideal self-optimizing variable is the gradient, c = Ju.
2. The optimum should be flat (avoid problem 2, implementation error). Equivalently: want a large gain |G| from u to c.
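The gradient idea can be checked on a minimal example with an assumed quadratic cost J(u, d) = (u - 2d)^2 + d^2, so the optimum u_opt(d) = 2d moves with the disturbance. The gradient c = Ju = 2(u - 2d) is the ideal self-optimizing variable: holding c at the constant setpoint 0 tracks the moving optimum with zero loss.

```python
# Assumed quadratic cost: J(u, d) = (u - 2d)^2 + d^2, u_opt(d) = 2d.
def J(u, d):
    return (u - 2.0 * d) ** 2 + d ** 2

def u_for_constant_c(c_s, d):
    # Solve c = Ju = 2*(u - 2d) = c_s for u (stand-in for the feedback loop
    # that would hold c at its setpoint in practice).
    return c_s / 2.0 + 2.0 * d

# Loss = J - J_opt when c is held at the constant setpoint c_s = 0,
# evaluated for several disturbances: the loss is exactly zero each time.
losses = [J(u_for_constant_c(0.0, d), d) - J(2.0 * d, d)
          for d in (-1.0, 0.5, 3.0)]
```

Of course the gradient is rarely measurable directly, which is why the following slides look for measurable variables (or combinations) that approximate it.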

40 Quantitative steady-state: Maximum gain rule (Skogestad and Postlethwaite, 1996). For an unconstrained optimum, look for variables that maximize the scaled gain σ(Gs), the minimum singular value of the appropriately scaled steady-state gain matrix Gs from u to c.
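The rule reduces to a singular-value computation. A sketch with made-up scaled gain matrices for two hypothetical candidate sets of controlled variables (the numbers are purely illustrative):

```python
import numpy as np

# Hypothetical scaled steady-state gain matrices Gs from u to c for two
# candidate sets of controlled variables.
G1 = np.array([[10.0, 0.0],
               [0.0,  0.1]])       # one direction has almost no gain
G2 = np.array([[5.0, 1.0],
               [1.0, 5.0]])        # well-conditioned in all directions

# Maximum gain rule: prefer the candidate with the larger minimum
# singular value of the scaled gain matrix.
sigma1 = np.linalg.svd(G1, compute_uv=False).min()   # 0.1
sigma2 = np.linalg.svd(G2, compute_uv=False).min()   # 4.0
best = "G2" if sigma2 > sigma1 else "G1"
```

Here G2 wins by a wide margin: its weakest direction still has gain 4, so implementation errors in c translate into much smaller input (and hence loss) errors than for G1.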

41 Proof: Local analysis (unconstrained optimum). [Plot: cost J versus input u around u_opt; linearized measurement model c = G u.]

42 Optimal measurement combinations (unconstrained optimum). Exact solutions for quadratic optimization problems:
1. Nullspace method: No loss for disturbances (d).
2. Generalized (with noise n): Exact local method.
c = Hy can be considered as linear invariants for the quadratic optimization problem, which can be used for a feedback implementation of the optimal solution! Example: Explicit MPC.
* V. Alstad, S. Skogestad and E.S. Hori, Optimal measurement combinations as controlled variables, Journal of Process Control, 19, 138-148 (2009).
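The nullspace method is easy to demonstrate with made-up numbers: nu = 1 input, nd = 1 disturbance, ny = 2 measurements (so ny ≥ nu + nd as the method requires). With F = ∂y_opt/∂d the optimal sensitivity matrix, any H satisfying H F = 0 makes c = Hy invariant to disturbances at the optimum.

```python
import numpy as np

# Assumed optimal sensitivity F = d(y_opt)/dd (ny x nd = 2 x 1).
F = np.array([[2.0],
              [1.0]])

# H spans the left null space of F, found from the SVD: the columns of U
# beyond rank(F) are orthogonal to the range of F, so H @ F = 0.
U, s, Vt = np.linalg.svd(F)
H = U[:, 1:].T                    # shape (1, 2)

# Optimal measurements move with d as y_opt(d) = y0 + F d (y0 assumed),
# but the combination c = H y_opt(d) is the same for every d: zero loss
# with a constant setpoint for c.
y0 = np.array([0.5, -0.2])
c_values = [float((H @ (y0 + F[:, 0] * d))[0]) for d in (0.0, 1.0, -3.0)]
```

The same construction scales to the CO2 example on the next slide, where zero disturbance loss would need nu + nd = 4 combined measurements.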

43 Example: CO2 refrigeration cycle. J = Ws (work supplied). DOF = u (valve opening, z). Main disturbances: d1 = TH, d2 = TCs (setpoint), d3 = UAloss. What should we control? (High pressure ph.)

44 CO2 refrigeration cycle.
Step 1. One (remaining) degree of freedom (u = z).
Step 2. Objective function: J = Ws (compressor work).
Step 3. Optimize operation for disturbances (d1 = TC, d2 = TH, d3 = UA). Optimum always unconstrained.
Step 4. Implementation of optimal operation. No good single measurements (all give large losses): ph, Th, z, … Nullspace method: need to combine nu + nd = 1 + 3 = 4 measurements to have zero disturbance loss. Simpler: try combining two measurements. Exact local method: c = h1 ph + h2 Th = ph + k Th, with k = -8.53 bar/K. Nonlinear evaluation of loss: OK!

45 CO2 cycle: Maximum gain rule.

46 Conclusion CO2 refrigeration cycle: Self-optimizing c = “temperature-corrected high pressure”.

47 Outline. I. Why feedback (and not feedforward)? II. Self-optimizing feedback control: What should we control? III. Stabilizing feedback control: Anti-slug control. IV. Conclusion.

48 Application of stabilizing feedback control: Anti-slug control. Two-phase pipe flow (liquid and vapor); slug (liquid) buildup.

49 Slug cycle (stable limit cycle). Experiments performed by the Multiphase Laboratory, NTNU.

51 Experimental mini-loop.

52 Experimental mini-loop: valve opening (z) = 100%. [Measured pressures p1, p2.]

53 Experimental mini-loop: valve opening (z) = 25%. [Measured pressures p1, p2.]

54 Experimental mini-loop: valve opening (z) = 15%. [Measured pressures p1, p2.]

55 Experimental mini-loop: bifurcation diagram. [Pressure versus valve opening z [%]: no-slug branch and slugging branch.]

56 Avoid slugging?
1. Operate away from the optimal point
2. Design changes
3. Feedforward control?
4. Feedback control?

57 Avoid slugging: 1. Close valve (but increases pressure). [Bifurcation diagram: no slugging when the valve is closed.] (Design change)

58 Avoid slugging: 2a. Design change to avoid slugging.

59 Minimize the effect of slugging: 2b. Build a large slug-catcher. Most common strategy in practice. (Design change)

60 Avoid slugging: 4. Feedback control? Comparison with a simple 3-state model: predicted smooth flow is desirable but open-loop unstable.

61 Avoid slugging: 4. ”Active” feedback control. Simple PI-controller: pressure transmitter (PT) measures p1; pressure controller (PC) with reference (ref) adjusts the valve opening z.
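A toy sketch of the idea, NOT the real multiphase model: linearize the riser pressure around the desired (open-loop unstable) smooth-flow regime as dp/dt = a·p + b·z with an assumed a = 0.5 > 0 (unstable) and b = 1, and close the PT → PC → valve loop with a simple PI controller. All numbers are made up for illustration.

```python
# Open-loop unstable first-order "pressure" dynamic stabilized by PI
# feedback on the valve opening z, integrated with explicit Euler steps.
def simulate(Kc, tauI, p_ref=1.0, t_end=50.0, dt=0.01):
    a, b = 0.5, 1.0                      # assumed unstable linearization
    p, integ = 0.2, 0.0                  # pressure starts off the setpoint
    for _ in range(int(t_end / dt)):
        e = p_ref - p
        integ += e * dt
        z = Kc * (e + integ / tauI)      # PI control law for the valve
        p += dt * (a * p + b * z)        # Euler step of the unstable plant
    return p

p_open = simulate(Kc=0.0, tauI=5.0)      # no control: pressure diverges
p_closed = simulate(Kc=2.0, tauI=5.0)    # PI feedback: settles at p_ref
```

With the controller off the open-loop instability grows exponentially; with the assumed PI tuning (Kc = 2, τI = 5) the loop is stable (b·Kc > a) and the pressure settles at the setpoint, which is exactly the role of the PC loop on the slide.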

62 Anti-slug control: Mini-loop experiments. [Plots of p1 [bar] and z [%]: controller OFF, then controller ON.]

63 Anti-slug control: Full-scale offshore experiments at the Hod-Vallhall field (Havre, 1999).

64 Analysis: Poles and zeros. Candidate measurements: P1 [bar], DP [bar], ρT [kg/m3], FQ [m3/s], FW [kg/s]; topside measurements: DP, ρT, FT. [Table of zeros of each measurement at the two operating points z = 0.175 and z = 0.25; poles at z = 0.175: -6.11, 0.0008 ± 0.0067i; at z = 0.25: -6.21, 0.0027 ± 0.0092i.] Topside measurements: Ooops.... RHP-zeros or zeros close to the origin.

65 Stabilization with topside measurements: Avoid RHP-zeros by using 2 measurements. Model-based control (LQG) with 2 top measurements: DP and density ρT.

66 Summary anti-slug control. Stabilization of the smooth flow regime = $$$$! Stabilization using downhole pressure is simple; stabilization using topside measurements is possible. Control can make a difference! Thanks to: Espen Storkaas, Heidi Sivertsen, Håkon Dahl-Olsen and Ingvald Bårdsen.

67 Conclusions. Feedback is an extremely powerful tool: simple and robust. Complex systems can be controlled by hierarchies (cascades) of single-input-single-output (SISO) control loops. Control the right variables to achieve ”self-optimizing control”. Feedback can make new things possible: stabilization (anti-slug). More details in the paper available at my home page: S. Skogestad, ”Feedback: Still the simplest and best solution”, MIC Journal, 2010.

68 Engineering systems. Most (all?) large-scale engineering systems are controlled using hierarchies of quite simple single-loop controllers, e.g. commercial aircraft and large-scale chemical plants (refineries) with 1000’s of loops. Simple components: on-off + P-control + PI-control + nonlinear fixes + some feedforward. Same in biological systems.

69 Self-optimizing control: Recycle process. J = V (minimize energy). Nm = 5, giving 3 economic (steady-state) DOFs for given feedrate F0 and column pressure. Constraints: Mr < Mr,max; xB > xB,min = 0.98. (DOF = degree of freedom)

70 Recycle process: Control active constraints. Active constraint Mr = Mr,max. Active constraint xB = xB,min. Remaining DOF: L. One unconstrained DOF left for optimization: What more should we control?

71 Maximum gain rule: Steady-state gain. Luyben snow-ball rule (c = F): Not promising economically. Conventional (c = L/F): Looks good.

72 Recycle process: Loss with constant setpoint cs. Large loss with c = F (Luyben rule). Negligible loss with c = L/F or c = temperature.

73 Recycle process: Proposed control structure for the case J = V (minimize energy). Active constraint Mr = Mr,max; active constraint xB = xB,min; self-optimizing loop: adjust L such that L/F is constant.

