Advanced Process Control Training Presentation


1 Advanced Process Control Training Presentation
Advanced Process Control (also called Advanced Quality Control): reducing process variability, producing more consistent products, increasing process capacity, and offering the potential to significantly increase profitability. Lee Smith, March 29, 2006

2 Contents
Advanced Process Control (APC) Defined
Applications, Advantages & Limitations
Basic Process Control Discussed
Feedback Control
Feedforward Control
Advanced Process Control Discussed
Real World Examples
Process Control Exercise (PID Control)
Summary
Readings List

3 Advanced Process Control
State-of-the-art in Modern Control Engineering Appropriate for Process Systems and Applications APC: a systematic approach to choosing relevant techniques and their integration into a management and control system to enhance operation and profitability Goal of advanced process control: a systematic, studied approach to choosing relevant techniques and integrating them into a co-operative management and control system that will significantly enhance plant operation and profitability. As an improvement over typical process control, APC improves the performance of control strategies, resulting in more consistent production, process optimization, better product quality, less re-processing of products and less waste. Process models underpin most modern control approaches. Even the prevalent Proportional+Integral+Derivative (PID) algorithms can be designed from a model-based perspective. The performance capabilities of PID algorithms are generally limited to on-line adjustment of parameters to enhance the stability of the subprocess controlled. More sophisticated strategies, such as adaptive algorithms and predictive controllers tied to predictive models, are being used to improve process control. Optimizing the overall plant according to management objectives via APC links plant business objectives with local unit operations. The result is an environment that is conducive to more consistent production.

4 Advanced Process Control
APC is a step beyond Process Control Built on foundation of basic process control loops Process Models predict output from key process variables online and real-time Optimize Process Outputs relative to quality and profitability goals Key process variables Management Objectives APC describes an approach that draws on elements from many disciplines, ranging from Control Engineering, Signal Processing, Statistics, Decision Theory and Artificial Intelligence to hardware and software engineering. Central to this philosophy is the requirement for an engineering appreciation of the problem: an understanding of process plant behavior coupled with the use of control technologies (not necessarily state-of-the-art). If an accurate model of the process is available, and if its inverse exists, then process dynamics can be cancelled by the inverse model. As a result, the output of the process will always be equal to the desired output. Therefore, model-based control design has the potential to provide perfect control. The first task in the implementation of modern advanced control is to obtain a model of the process to be controlled. However, given that constraints exist on all process operations, that all models contain some degree of error and that not all models are invertible, perfect control is very difficult to realize. These are issues that modern control techniques aim to address, directly or indirectly. Recent efforts are focused on developing practicable nonlinear controllers, in recognition of the fact that many real processes are nonlinear and that adaptive systems may not be able to cope with significant nonlinearities. These efforts follow two approaches. One attempts to design control strategies based on nonlinear black-box models (nonlinear time-series or neural networks). The other relies on an analytical approach, making use of a physical-chemical model of the process. However, there are indications that the two approaches can be reconciled.
Cheap powerful computers and advances in the field of artificial intelligence are making an impact. Local controls are increasingly being supplemented with monitoring, supervision and optimization schemes—roles that traditionally were undertaken by plant personnel. These reside at a higher level in the information management and process control hierarchy.

5 How Can APC Be Used? APC can be applied to any system or process where outputs can be optimized on-line and in real-time Model of process or system exists or can be developed Typical applications: Petrochemical plants and processes Semiconductor wafer manufacturing processes Also applicable to a wide variety of other systems including aerospace, robotics, radar tracking, vehicle guidance systems, etc. How can Advanced Process Control be used in your organization? Applications are limited to those that are process or system oriented and where one can develop relevant models that accurately describe the state and output of the process or system. Models could include a “black box” neural network model. Most current APC applications can be found in the petrochemical industry, where process control of relevant systems is almost 100 years old. APC is also very prevalent in semiconductor wafer manufacturing fabs. Most current research is being done by these two industries, but applications can be adapted to almost any process or system that meets the basic criteria: outputs can be measured in real-time and on-line, these outputs can be controlled through applicable control points, and models can be developed to describe the process or system. APC was developed on top of typical PID process control beginning in the 1970s, with the advent of low-cost powerful computers and advances in the electronics industry that allowed processes to be more accurately measured and controlled.

6 Advantages and Benefits
Production quality can be controlled and optimized to management constraints APC can accomplish the following: improve product yield, quality and consistency reduce process variability, allowing plants to be operated at designed capacity operate at true and optimal process constraints, with controlled variables pushed against a limit reduce energy consumption exceed design capacity while reducing product giveaway increase responsiveness to desired changes (mitigate deadtime) improve process safety and reduce environmental emissions Profitability of implementing APC: benefits ranging from 2% to 6% of operating costs reported Petrochemical plants reporting up to 3% product yield improvements 10-15% improved ROI at some semiconductor plants Profitability improvements from implementing APC can be enormous, typically achieved by reducing process variability and allowing plants to be operated at their designed capacity or to exceed design constraints. For example, a 3% product yield improvement is huge to a petrochemical plant, with a payout for implementing APC measured in weeks. APC is particularly useful with the following control problems: long deadtimes (a major control difficulty); inverse response and overshoot, which lead initially to wrong action by a reactive controller; and strong interactions between variables.

7 Limitations Implementation of an APC system is time consuming, costly and complex May require better control hardware than currently installed High level of technical competency required Usually installed and maintained by vendors & consultants Must have a very good understanding of the process prior to implementation High training requirements Difficult to use and operate after implementation Requires large-capacity operations to justify the effort and expense New APC applications are more difficult, time consuming and costly Off-the-shelf APC products must be customized APC is not for every process or every potential application. The cost of implementation limits APC to a few key processes that can justify the effort and expense. One should view implementing an APC system the same way one would evaluate implementing a new ERP system or other enterprise-wide software system. This is one of the reasons that applications of APC have been readily developed for the petrochemical industry, with major processing units of huge capacity where even a relatively small improvement in yield or capacity can produce very large economic benefits and pay for the installation of an APC system very quickly.


9 What is Basic Process Control?
Process control loop: a control component monitors desired output results and changes input variables to obtain the result. Example: thermostat controller Furnace House is too cold furnace turns on heats the house This is an example of what is commonly called a feedback control loop. The thermostat has a temperature gauge that senses when the house is cold. It sends a signal to the furnace to turn on and heat the house. When the temperature sensors reach a pre-determined set point, the thermostat sends a signal to turn off the furnace until the house gets too cold again, and the process repeats itself. This is a feedback loop since the controlled variable, in this case the house temperature, is fed back through the thermostat controller to the furnace. Another common example of a process control loop is the cruise control on a car. In this case, the process or system is the car. The goal of cruise control is to keep the car at a constant speed, and the output variable of the process is the speed of the car. The primary means to control the speed of the car is the fuel fed into the engine. Is the house too cold? yes Thermostat Controller recognizes the house is too cold sends signal to the furnace to turn on and heat the house

10 Thermostat Controller
Basic Control Controlled variable: temperature (desired output) Input variable: temperature (measured by thermometer in thermostat) Setpoint: user-defined desired setting (temperature) Manipulated variable: natural gas valve to furnace (subject to control) Furnace House is too cold furnace turns on heats the house natural gas Heating up the temperature in a house is a process that has the specific, desired outcome to reach and maintain a defined temperature (for example, 72°F), kept constant over time. The temperature is the controlled variable and it is also the input variable since it is measured by a thermometer in the thermostat controller and used to decide whether to heat or not to heat. The desired temperature (72°F) is the setpoint. The state of the furnace (i.e. the setting of the valve that allows natural gas into the combustion chamber to heat the air that heats the house) is the manipulated variable since it is subject to control actions. house temperature measured is temperature below setpoint? Thermostat Controller recognizes the house is too cold sends signal to the furnace to turn on and heat the house setpoint = 72°F
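The thermostat loop above can be sketched in code. This is a minimal on/off (bang-bang) controller, not code from the presentation; the setpoint, deadband and heating/loss rates are illustrative assumptions.

```python
# Minimal on/off (bang-bang) thermostat sketch; all numbers are illustrative.

def thermostat_step(temp, setpoint=72.0, deadband=1.0, furnace_on=False):
    """Decide the new furnace state for the measured house temperature."""
    if temp < setpoint - deadband:   # house too cold: turn the furnace on
        return True
    if temp > setpoint + deadband:   # warm enough: turn the furnace off
        return False
    return furnace_on                # inside the deadband: keep current state

# Simple simulation: the furnace adds heat, the house loses heat outdoors.
temp, furnace = 65.0, False
history = []
for _ in range(50):
    furnace = thermostat_step(temp, furnace_on=furnace)
    temp += (1.5 if furnace else 0.0) - 0.5   # heating minus heat loss per step
    history.append(temp)
```

The deadband keeps the furnace from rapidly switching on and off around the setpoint; the temperature cycles in a band around 72°F, which is exactly the oscillation that motivates the PID discussion later.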

11 Feedback Control Theory
Output of the system y(t) is fed back and compared with the reference value r(t) through the measurement of a sensor Controller C takes the difference between the reference and the output to determine the error e Controller C changes the inputs u to the Process under control P based on the error e A control loop consists of three parts: (1) Measurement by a sensor connected to the process (2) Decision in a controller element (3) Action through an output device ("actuator") such as a control valve. This is a so-called single-input-single-output (SISO) control system; systems where two or more input and/or output variables must be handled (MIMO -- Multi-Input-Multi-Output) are frequent. In such cases variables are represented through vectors instead of simple scalar values. For some distributed parameter systems the vectors may be infinite-dimensional, typically functions. Usually the controller, C, and the plant, P, are linear and time-invariant (i.e. the elements of their transfer functions C(s) and P(s) do not depend on time). The system is usually analyzed via the Laplace transform on the variables to determine the correct response to a change in setpoint. One of the first ideas people usually have about designing an automatic process controller is called "proportional": if the difference between the PV (process variable) and SP (setpoint) is small, then make a small correction to the output; if the difference between the PV and SP is large, then make a larger correction to the output.

12 PID Control Error is found by subtracting the measured quantity from the setpoint. Proportional - To handle the present, the error is multiplied by a negative constant P and added to the controlled quantity. Note that when the error is zero, a proportional controller's output is zero. Integral - To handle the past, the error is integrated (added up) over a time period, multiplied by a negative constant I and added to the controlled quantity. I finds the process output's average error from the setpoint. A simple proportional system oscillates around the setpoint, because there's nothing to remove the error. By adding a negative proportion of the average error to the process input, the average difference between the process output and the setpoint is always reduced and the process output will settle at the setpoint. Derivative - To handle the future, the first derivative (slope) of the error is calculated, multiplied by a negative constant D, and added to the controlled quantity. The larger this derivative term, the more rapidly the controller responds to changes in the process output. The D term dampens a controller's response to short-term changes. The PID loop is a common feedback loop component in industrial control applications, installed in about 80% of feedback control devices. The controller compares a measured value from a process with a reference setpoint value. The difference or "error" signal is then processed to calculate a new value for a manipulated process input, which brings the process measured value back to its desired setpoint. Unlike simpler control algorithms, the PID controller can adjust process inputs based on the history and rate of change of the error signal, which gives more accurate and stable control. It can be shown mathematically that a PID loop will produce accurate, stable control in cases where other control algorithms would either have a steady-state error or would cause the process to oscillate. The controller reads a sensor.
Then it subtracts this measurement from a desired value, the "setpoint," to determine an "error". It then uses the error to calculate a correction to the process's input variable, and adds the correction so that it will remove the error from the process's output measurement. In a PID loop, the correction is calculated from the error in three ways: to cancel out the present error, average out past errors, and anticipate the future a bit from the slope of the error(s) over time. Proportional - To handle the present, the error is multiplied by a (negative) constant P and added to (subtracting error from) the controlled quantity. P is only valid in the band over which a controller's output is proportional to the error of the system. For example, for a heater, a controller with a proportional band of 10 °C and a setpoint of 20 °C would have an output of 100% at 10 °C, 50% at 15 °C and 10% at 19 °C. Note that when the error is zero, a proportional controller's output is zero. Integral - To handle the past, the error is integrated (added up) over a period of time, and then multiplied by a (negative) constant I (making an average), and added to (subtracting error from) the controlled quantity. I averages the measured error to find the process output's average error from the setpoint. A simple proportional system oscillates, moving back and forth around the setpoint, because there's nothing to remove the error when it overshoots. By adding a negative proportion of (i.e. subtracting part of) the average error from the process input, the average difference between the process output and the setpoint is always being reduced. Therefore, eventually, a well-tuned PID loop's process output will settle down at the setpoint.
Derivative - To handle the future, the first derivative (the slope of the error) over time is calculated, and multiplied by another (negative) constant D, and also added to (subtracting error from) the controlled quantity. The derivative term controls the response to a change in the system. The larger the derivative term, the more rapidly the controller responds to changes in the process's output. Its D term is the reason a PID loop is also called a "Predictive Controller". The D term is a good thing to reduce when trying to dampen a controller's response to short term changes. Practical controllers for slow processes can even do without D.
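The three terms above can be sketched as a textbook discrete PID controller in positional form. This is a generic sketch, not code from the presentation; the gains and the simple first-order process used to exercise it are illustrative assumptions. Note that the error here is computed as setpoint minus measurement, so the gains appear as positive constants.

```python
# Textbook positional-form PID sketch; gains are illustrative, not tuned
# for any real plant.

class PID:
    def __init__(self, kp, ki, kd, setpoint, dt=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint, self.dt = setpoint, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measurement):
        error = self.setpoint - measurement               # present
        self.integral += error * self.dt                  # past (accumulated error)
        derivative = (error - self.prev_error) / self.dt  # future (slope)
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# First-order process: the output moves in proportion to the controller output.
pid = PID(kp=0.6, ki=0.1, kd=0.05, setpoint=72.0)
y = 65.0
for _ in range(100):
    u = pid.update(y)
    y += 0.2 * u      # simple process response to the controller output
```

The integral term is what removes the steady-state error: with P alone the output would settle short of (or oscillate around) 72, while the accumulated error keeps pushing until the measurement sits on the setpoint.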

13 Goals of PID Control Quickly respond to changes in setpoint
Stability of control Dampen oscillation Problems: Deadtime: lag in system response to changes in setpoint Deadtime can introduce significant instability into the controlled system Deadtime is the lag or delay in measuring a change in the output after a change has been introduced to the inputs of the process.

14 PI Control Example There are no instability problems when there is no deadtime or delay in the response to the controller's action. The most important action of the controller is to get the process up to the setpoint as quickly as possible. I = 1.4 gives the best response: it quickly brings the process to the setpoint without oscillation

15 PI Control Example I = 0.6 gives the best response
With some deadtime in the system, the system shows some instability. The oscillation represents poor control at anything but I = 0.6. This is why you want to start with very small P, I, and D constants and increase them to improve performance. If you start with large constants, bad things can happen. Dead time refers to the delay between making a change in the process input and seeing the change reflected in the PV (process variable) of the process output. The classic example is getting an oven to the right temperature. When the oven is first turned on, it takes a while for the oven to "heat up". This is the dead time. If you set an initial temperature, wait for the oven to reach the initial temperature, and then determine that you set the wrong temperature, then it will take a while for the oven to reach the new temperature setpoint. I = 0.6 gives the best response; I = 1.1 borders on instability

16 PID Control Example I = 0.6 gives the best response
With more deadtime in the response to the controller, the system shows instability at higher integral factors. The oscillation would be considered very bad control. In a PID control environment, the derivative part of the controller (the “D” in “PID”) would help to dampen these oscillations. Deadtime, or lag, is a difficult condition for PID feedback controllers to handle. The P & I parameters that work for one dead time are not necessarily optimal for another dead time. In other words, for each process element (valve, motor, pump, heater, chiller, etc.) you are trying to control, you will have different process characteristics and will have to determine the optimal P, I, and possibly D constants. Determining what these constants should be is called "tuning". Theoretically, a control engineer will want to minimize the sum of absolute errors in order to get the best response from a controller. Derivative control takes into consideration that if you change the output, then it takes time for that change to be reflected in the process output. For example, take heating of the oven. If we start turning up the gas flow, it will take time for the heat to be produced, the heat to flow around the oven, and for the temperature sensor to detect the increased heat. Derivative control "holds back" the PID controller because some increase in temperature will occur without needing to increase the output further. Setting the derivative constant correctly allows you to become more aggressive with the P & I constants. I = 0.6 gives the best response; I = 1.2 & 1.4 are unstable
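The effect of dead time discussed in these examples can be sketched by running the same PI controller against the same simple process, with and without a transport delay. The gains, process constant and delay length are illustrative assumptions, not the values from the exercise.

```python
# Sketch of why dead time destabilizes a PI loop: identical controller and
# process, with and without a transport delay. Numbers are illustrative.
from collections import deque

def run_pi(delay_steps, kp=0.8, ki=0.4, setpoint=1.0, n=200):
    y, integral = 0.0, 0.0
    pipeline = deque([0.0] * delay_steps)    # models the dead time
    trace = []
    for _ in range(n):
        error = setpoint - y
        integral += error
        u = kp * error + ki * integral
        pipeline.append(u)                   # controller output enters the pipe
        y += 0.2 * (pipeline.popleft() - y)  # first-order process sees delayed u
        trace.append(y)
    return trace

no_delay = run_pi(delay_steps=0)
with_delay = run_pi(delay_steps=8)
```

While the delayed loop waits for its first correction to arrive, the integral term keeps winding up on an error it cannot yet see, so the delayed response overshoots far more than the undelayed one. This is the windup-plus-dead-time behavior the slides describe.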

17 Limitations of Feedback Control
Feedback control is not predictive Requires management or operators to change set points to optimize the system Changes can bring instability into the system Optimization of many input and output variables almost impossible Most processes are non-linear and change according to the state of the process Control loops are local A Proportional+Integral controller is optimal for a first-order linear process without time-delays. Similarly, the PID controller is optimal for a second-order linear process without time-delays. In practice, process characteristics are nonlinear and can change with time. Thus the linear model used for initial controller design may not be applicable when process conditions change or when the process is operated in another region. One solution is to have a series of stored controller settings, each pertinent to a specific operating zone. Once it is detected that the operating regime has changed, the appropriate settings are switched in. This strategy (called parameter- or gain-scheduled control) is sometimes used in applications where the operating regions are changed according to a preset and constant pattern. In applications to continuous systems, however, the technique is not so effective. A more elegant technique is to implement the controller within an adaptive framework. Here the parameters of a linear model are updated regularly to reflect current process characteristics. The settings of the controller can be updated continuously according to changes in process characteristics. Such devices are therefore called auto-tuning/adaptive/self-tuning controllers. In some formulations, the controller settings are directly identified. A faster algorithm results because the model-building stage has been avoided. Currently, many commercial auto-tuning PID controllers are available from major control and instrumentation manufacturers. The simplest forms are those based upon the use of linear time-series models.
Some PID controllers are also auto-tuned using pattern recognition methods. Nevertheless, there are instances when the adaptive mechanism may not be fast enough to capture changes in process characteristics due to system nonlinearities. Under such circumstances, the use of a nonlinear model may be more appropriate for PID controller design. Nonlinear time-series, and recently neural networks, have been used to combat these problems. A nonlinear PID controller may also be automatically tuned using an appropriate strategy, by posing the problem as an optimization problem. This may be necessary when the nonlinear dynamics of the plant are time-varying. Again, the strategy is to make use of controller settings most appropriate to the current characteristics of the controlled process.

18 Feedforward Control
Feedforward: Recognize the window is open and the house will get cold in the future. Someone reacts and changes the controller setpoint to turn on the furnace preemptively: the furnace turns on and heats the house (natural gas) even though the house temperature is currently OK. Feedforward control preemptively makes changes to the process inputs to counter the anticipated effects of a disturbance. These changes are based on a model or a prediction of the effects of the disturbance and the appropriate variables to manipulate to counteract this disturbance before it is detected in the process output. Decrease setpoint to turn the furnace on: a pre-emptive move to prevent the house from getting cold

19 Feedforward Control Feedforward control avoids the slowness of feedback control Disturbances are measured and accounted for before they have time to affect the system In the house example, a feedforward system measures the fact that the window is opened As a result, it automatically turns on the furnace before the house can get too cold Difficulty with feedforward control: the effects of disturbances must be perfectly predicted There must not be any surprise effects of disturbances
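The open-window example can be sketched as a combined feedforward/feedback loop: the measured disturbance (heat loss through the window) is cancelled before it shows up in the temperature, while proportional feedback trims anything the prediction misses. All numbers are illustrative assumptions.

```python
# Feedforward + feedback sketch for the open-window example; illustrative numbers.

def simulate(feedforward, n=60):
    """House temperature under proportional feedback, with optional feedforward."""
    temp, setpoint, kp = 72.0, 72.0, 0.8
    trace = []
    for k in range(n):
        window_loss = 2.0 if k >= 10 else 0.0   # window opens at step 10
        u = kp * (setpoint - temp)              # feedback part (reacts after the fact)
        if feedforward:
            u += window_loss                    # cancel the measured disturbance
        temp += 0.5 * (u - window_loss)         # process: heat in minus heat lost
        trace.append(temp)
    return trace

fb_only = simulate(feedforward=False)
ff_fb = simulate(feedforward=True)
```

With feedforward, the disturbance is cancelled exactly and the temperature never moves; with feedback alone the house has to get cold first, and a pure proportional controller even leaves a permanent offset. This also illustrates the caveat in the slide: the cancellation is perfect only because the disturbance model here is perfect.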

20 Combined Feedforward/Feedback
Combinations of feedback and feedforward control are used Benefits of feedback control: controlling unknown disturbances and not having to know exactly how a system will respond Benefits of feedforward control: responding to disturbances before they can affect the system Generalized Minimum Variance (GMV) controller: minimizes the squared weighted difference between the desired value and the predicted output with a prediction horizon that is the time-delay of the system. GMV control, however, cannot effectively cope with variable time-delays and process constraints. This led to the development of long-range predictive controllers, the Generalized Predictive Controller (GPC) and Dynamic Matrix Control (DMC). A model is used to provide predictions of the output over a range of time-horizons into the future, usually the range is between the smallest and largest expected delays. This alleviates the problem of varying time-delays and hence enhances robustness. Calculation of the control signal is essentially an optimization problem with economic objectives as well as process constraints included. Today, 'predictive control' refers to the application of long-range predictive controllers that may be designed using linear or nonlinear models.

21 Multivariable Control
Most complex processes have many variables that have to be regulated To control multiple variables, multiple control loops must be used An example is a reactor with at least three control loops: temperature, pressure and level (flow rate) Multiple control loops often interact, causing process instability Multivariable controllers account for loop interaction Models can be developed to provide feedforward control strategies applied to all control loops simultaneously Previously, one manipulated input and one controlled output in a single-input single-output (SISO) loop has been considered. With most processes, there are many variables that have to be regulated. A chemical reactor is a good example where level, temperature, pressure and flow rates have to be kept at design or controlled values, and there are at least three control loops. If the actions of one controller affect other loops in the system, control-loop interaction exists that can lead to process instability. If each controller has been individually tuned to provide maximum performance, then, depending on the severity of the interactions, system instability may occur when all the loops are treated as independent SISO loops. Single SISO controllers, whether adaptive, linear or nonlinear strategies, may not be applicable to these processes. Models used in the design of SISO controllers do not contain information about the effects of loop interactions. Therefore, for a multi-loop strategy to work, individual SISO controllers are usually detuned (made less sensitive), resulting in sluggish performance for some or all loops. The solution is that multivariable controllers should be applied to systems where interactions occur. As opposed to multi-loop control, multivariable controllers take into account loop interactions and their destabilizing effects. By regarding loop interactions as feed-forward disturbances, the multiple control loops are included in the controller's model description.
Multivariable controllers that provide time-delay compensation and handle process constraints can be developed by building a model of the process on-line resulting in adaptive multivariable control strategies.
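The loop-interaction problem above can be sketched with a static 2x2 gain matrix: moving one input to hit one target disturbs the other output, while inverting the gain matrix (a simple static decoupler, one ingredient of multivariable control) hits both targets at once. The gain values are illustrative assumptions.

```python
# Static decoupling sketch for a 2x2 interacting process; gains are illustrative.

# 2x2 static gain process: each input affects both outputs.
G = [[1.0, 0.4],
     [0.5, 1.0]]

def outputs(u1, u2):
    """Steady-state outputs of the interacting process."""
    return (G[0][0] * u1 + G[0][1] * u2,
            G[1][0] * u1 + G[1][1] * u2)

# Naive SISO move: push u1 alone to drive y1 to 1.0 -> y2 gets disturbed.
y1, y2 = outputs(1.0, 0.0)

# Decoupled move: invert G so both targets are met simultaneously.
t1, t2 = 1.0, 0.0                      # targets for y1, y2
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
u1 = ( G[1][1] * t1 - G[0][1] * t2) / det
u2 = (-G[1][0] * t1 + G[0][0] * t2) / det
y1d, y2d = outputs(u1, u2)
```

In the naive move, y2 is pushed off target purely by interaction, which is what forces independently tuned SISO loops to be detuned; the matrix inverse computes the coordinated input move that a multivariable controller would make.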

22 Internal Model-Based Control
Process models have some uncertainty A sensitive multivariable controller will also be sensitive to uncertainties and can cause instability A filter attenuates unknowns in the feedback loop (the difference between process and model outputs) and moderates excessive control This strategy is powerful and is the framework of model-based control Robust control involves quantifying the uncertainties (errors) in a nominal process model that will essentially give a description of the process under all possible operating conditions. The next stage involves the design of a controller that will maintain stability as well as achieve specified performance over this range of operating conditions. A sensitive controller is required to achieve performance objectives. Unfortunately, such a controller will also be sensitive to process uncertainties and can cause instability problems. On the other hand, a controller that is insensitive to process uncertainties will have a sluggish, poorer performance. The robust control problem is therefore formulated as a compromise between achieving performance and ensuring stability under assumed process uncertainties; performance objectives may be sacrificed in favor of stability objectives. If the process model is invertible, then the controller is simply the inverse of the model. If the model is accurate and there are no disturbances, then perfect control is achieved if a filter is not present. This also implies that if an applications engineer knows the behavior of the process exactly, then feedback is not necessary. The primary role of a filter is to attenuate uncertainties in the feedback, generated by the difference between process and model outputs, and to moderate excessive control effort. This strategy is very powerful and is the essence of model-based control. All model-based controllers are designed within this framework.
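The internal-model structure described above can be sketched for a first-order process: the controller inverts the internal model, a first-order filter moderates the control effort, and the fed-back difference between process and model outputs corrects for model error. The process/model mismatch and filter constant are illustrative assumptions.

```python
# Internal Model Control sketch for a first-order process. The process gain is
# deliberately mismatched from the model to show how the structure copes with
# model uncertainty. All numbers are illustrative.

a_p, b_p = 0.9, 0.12    # "true" process (gain 20% higher than the model's)
a_m, b_m = 0.9, 0.10    # internal model used by the controller
alpha = 0.3             # filter constant: smaller = more robust but slower
r = 1.0                 # setpoint

y = ym = rf = 0.0
for _ in range(200):
    d = y - ym                      # feedback: model error plus disturbances
    rf += alpha * ((r - d) - rf)    # first-order filter on the corrected target
    u = (rf - a_m * ym) / b_m       # model inverse: drives the model to rf
    ym = a_m * ym + b_m * u         # internal model prediction
    y = a_p * y + b_p * u           # actual process response
```

Even though the model's gain is wrong by 20%, the process output still settles on the setpoint, because the filtered feedback of the process-model difference keeps correcting the target handed to the inverse. With a perfect model that difference would be zero, which is the "feedback is not necessary" observation in the notes.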

24 Important Data Issues Inputs to advanced control systems require accurate, clean and consistent process data (“garbage in, garbage out”) Many key product qualities cannot be measured on-line but require laboratory analyses Inferential estimation techniques use available process measures, combined with delayed lab results, to infer product qualities on-line Available sensors may have to be filtered to attenuate noise Time-lags may be introduced Algorithms using SPC concepts have proven very useful to validate and condition process measurements With many variables to manipulate, control strategy and design is critical to limit control loop interaction The quality and consistency of the data input from process sensors are critical to achieving success with advanced process control algorithms. A major problem is the lack of on-line instrumentation to measure quantities that define product quality, e.g. stickiness of adhesives, smoothness of sheet material, melt flow index of polymers, flash points of fuels, etc. These are often provided by laboratory analyses, resulting in infrequent feedback and substantial measurement delays, rendering automatic process control impossible. On-line component measurements are available in some cases (such as from on-line FTIR instruments), but these instruments are often found to be unreliable and require frequent off-line downtime to calibrate and maintain the instrument. Inferential estimation is one method that has been designed to overcome this problem. The technique has also been called 'sensor-data fusion' and 'soft-sensing'. There are usually other variables such as temperatures, pressures, flows, etc., associated with a process that are indicative of changes in product quality. Thus, by monitoring suitable secondary variables, it is often possible to 'infer' the state of the quality variable.
Inferential estimation uses obtainable on-line measurements known to influence product quality, together with measurements of product quality when available, to generate estimates of product quality. Even with readily available instrumentation and sensors, the resulting data may not be of sufficient quality to be useful to advanced process control models. Signals from the plant are often corrupted by noise of varying magnitudes. All control methods are data driven: if appropriate measures are not taken to condition and validate the measured signals, then even the most sophisticated scheme will fail. Redundancy (configuring software or hardware sensors in duplicate or triplicate) can be used in safety-critical applications. In less critical applications, duplex or triplex redundancy configurations are not cost effective. Therefore, noisy signals are filtered to attenuate noise, with the penalty of time-lags introduced into the filtered signal. These time-lags may be reduced by employing 'logic' filters, which combine conventional filter algorithms with SPC concepts to validate and condition process measurements. This integrated approach has been shown to be very effective. Even if 'clean' data is available, there may be many variables associated with a particular process unit, and the specification of an appropriate control strategy and controller design becomes complicated. Which variable should be manipulated to control another? What is the effect of this choice of manipulated-input controlled-output pairings? An inappropriate choice of input-output pairs exacerbates the problem of loop interactions. If interactions are significant, then a multivariable control design is necessary. If the input-output relations indicate nonlinear behavior, then nonlinear controllers may have to be applied.
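One way to realize the 'logic' filter idea described above is to combine a conventional exponential (first-order) filter with an SPC-style validity check that rejects gross outliers before they enter the filter. This is only a sketch: the filter constant, sigma estimate, and spike-rejection rule are illustrative assumptions, not the specific algorithm referenced in the slide.

```python
# Sketch of a 'logic' filter: EWMA noise attenuation plus an SPC-style
# validity check. Thresholds and gains are illustrative assumptions.
def make_logic_filter(alpha=0.2, spc_limit=3.0, noise_sigma=1.0):
    state = {"y": None}
    def step(measurement):
        if state["y"] is None:           # initialize on the first sample
            state["y"] = measurement
            return state["y"]
        # SPC validation: reject samples outside +/- spc_limit sigma of the
        # current filtered value; hold the last good estimate instead.
        if abs(measurement - state["y"]) > spc_limit * noise_sigma:
            return state["y"]
        # Conventional first-order (EWMA) filter; alpha trades noise
        # attenuation against the time-lag it introduces.
        state["y"] += alpha * (measurement - state["y"])
        return state["y"]
    return step

f = make_logic_filter()
readings = [10.0, 10.2, 9.9, 55.0, 10.1]   # 55.0 is a sensor spike
filtered = [f(r) for r in readings]
```

The spike at 55.0 is held out by the SPC check rather than being averaged in, so the filtered signal stays near 10 without needing a smaller (and laggier) alpha.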

24 Distillation Tower Example
Simple distillation column with APC Column objective is to remove pentanes and lighter components from bottom naphtha product APC input: Column top tray temperature Top and bottom product component laboratory analyses Column pressures Unit optimization objectives APC controlled process variables Temperature of column overhead by manipulating fuel gas control valve Overhead reflux flow rate Bottom reboiler outlet temperature by manipulating steam (heat) input control valve Note that product flow rates not controlled Overhead product controlled by overhead drum level Bottoms product controlled by level in the tower bottom APC anticipates changes in stabilized naphtha product due to input variables and adjusts relevant process variables to compensate This is a relatively simple example of APC applied to a very common process application in the petrochemical industry. The distillation column is designed with internal trays that facilitate a temperature gradient from bottom to top. This temperature gradient allows the lighter components that boil at a lower temperature to be separated from heavier components that boil at a higher temperature. Heat is applied to the bottom of the tower by recycling the bottoms product through a steam heat exchanger and back into the bottom of the tower. Heat is extracted from the top of the tower by a water cooler, and the cool overhead product is recycled or “refluxed” back into the top of the tower. Therefore, the bottom of the tower is hot and the top of the tower is cool. The APC controller adjusts this tower temperature gradient by manipulating the overhead reflux rate, by manipulating the bottoms steam heat input rate, and by manipulating the overhead vapor product temperature (before the water-cooled exchanger) via the fuel gas control valve (this effectively adjusts the pressure of the tower, which directly affects the temperature). 
These three control loops are the only way to affect the distillation tower’s product qualities. The APC is constrained by the number of trays in the tower and by the efficiency of those trays (in effect, this constraint is the design of the tower and the design of the trays and the current cleanliness of those trays).
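The choice of which manipulated variable (reflux, steam, fuel gas) should control which temperature is often screened with the Relative Gain Array (RGA). The sketch below uses a purely hypothetical 2x2 steady-state gain matrix, not data from this tower, just to show the calculation and how pairings are read off.

```python
import numpy as np

# Relative Gain Array (RGA) sketch for screening input-output pairings.
# The steady-state gain matrix K below is purely hypothetical:
# rows = controlled variables (top temperature, bottom temperature),
# cols = manipulated variables (reflux rate, reboiler steam).
K = np.array([[-0.8, 0.3],
              [ 0.2, 0.9]])

# RGA = elementwise product of K and the transpose of its inverse
RGA = K * np.linalg.inv(K).T

# Pair each controlled variable with the manipulated variable whose
# relative gain is closest to 1; elements near 0.5 warn of strong loop
# interaction and suggest a multivariable design instead.
pairing = RGA.argmax(axis=1)   # here: top temp <-> reflux, bottom temp <-> steam
```

For this assumed gain matrix the diagonal relative gains are near 1, so independent loops with that pairing would interact only mildly; a matrix with off-diagonal relative gains near 0.5 would argue for the multivariable APC approach discussed later.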

25 Distillation Tower APC Results
Note that the application of APC to the distillation tower significantly reduced the product variability. Also note that product composition moved much closer to the theoretical maximum limit. A higher pentane (C5) composition in the tower overhead product means that these lighter pentanes have been removed from the bottoms naphtha product, yielding a better quality product with a lower RVP. The poor performance of the pre-APC scheme of independent PID controllers was likely due to interactions between the controllers that caused the process to become unstable (note that simple distillation towers such as this can usually be adequately controlled without APC). What is not shown is the effect of APC on the tower feedrate. It is likely that the feedrate was able to be increased substantially due to the better control offered by APC. Further, it is probable that the feedrate limitations of this tower before the application of APC were a system bottleneck. The type of problems seen before the application of APC can be caused by flow rates pushed beyond the ability of the control loops to handle the resulting tower instability. Therefore, the success of APC in substantially improving product quality was likely coupled with increased feedrates, which would give very large financial incentives and make the APC application a significant success.

26 APC Application in Wafer Fab
Source: Carl Fiorletta, “Capabilities and Lessons from 10 Years of APC Success,” Solid State Technology, February 2004, pg

27 Exercise in PID Control
To give a better understanding of problems encountered in typical control schemes Use the embedded Excel spreadsheet on the next slide to investigate the response to a change in setpoint Double click on the graph to open Graph shows controller output after a maximum of 50 iterations Simulates the response of a PI (proportional + integral) controller Performance of the control parameters is given by the sum of errors between controller output and setpoint after 50 iterations Deadtime is the process delay in observing an output response to the controller input SP is the setpoint change
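For readers without the embedded spreadsheet, a rough Python stand-in for the exercise is sketched below. The process model (a first-order lag) and its coefficients are assumptions; the actual spreadsheet model is not specified in the slides, so the Sum of Errors values will not match the answer key, but the qualitative behavior (deadtime degrading performance) carries over.

```python
# Rough stand-in for the spreadsheet exercise (assumed first-order process).
def pi_sum_of_errors(P, I, deadtime, sp=1.0, n=50):
    """Simulate a PI loop and return the Sum of Errors over n iterations."""
    y_hist = [0.0] * (deadtime + 1)    # buffer of past outputs for the deadtime
    y, integral, sse = 0.0, 0.0, 0.0
    for _ in range(n):
        err = sp - y_hist[0]           # controller sees a delayed measurement
        integral += err
        u = P * err + I * integral     # PI law (no derivative term)
        y = 0.8 * y + 0.2 * u          # assumed first-order process response
        y_hist = y_hist[1:] + [y]      # shift the deadtime buffer
        sse += abs(sp - y)
    return sse
```

Calling `pi_sum_of_errors(0.4, 0.5, 0)` and then the same tuning with `deadtime=3` shows the Sum of Errors growing with deadtime, which is the pattern the exercise questions explore.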

28 Exercise in PID Control
Questions:
1. Set Deadtime = 0
- With P = 0.4, what is the optimal I to obtain the optimal controller response (minimum Sum of Errors)?
- With P = 1.0, what is the optimal I to obtain the optimal controller response?
2. Set Deadtime = 1
- With P = 0.4, what is the optimal I to obtain the optimal controller response?
- What are the optimum values for P and I to obtain the optimal controller response?
- Is the controller always stable (are there values of P and I that make the controller response unstable)?
3. Set Deadtime = 3
- How does increasing the deadtime affect the capability of the controller?
- What control schemes are available to optimize controller capability?
Answers:
1. a. I = for a Sum of Errors = 184; b. I = 1.0 for a Sum of Errors = 100
2. a. I = for a Sum of Errors = 295; b. I = for a Sum of Errors = 517; c. P = 0.59 and I = 0.71 for a Sum of Errors = 257; d. Yes; for example, with P = 0.4 any I > 2 will make the system unstable, and with P = 1 any I > 1 will make the system unstable
3. a. I = for a Sum of Errors = 544; b. I = for a Sum of Errors = 1824; c. P = 0.45 and I = 0.42 for a Sum of Errors = 539; d. Yes; for example, with P = 0.4 any I > 1 will make the system unstable, and with P = 1 any I > 0.3 will make the system unstable
4. Deadtime increases the sensitivity of the system to become unstable. Also note that the sum of errors increases substantially with increasing deadtime, which indicates increasingly poor performance of the controller.
5. Control schemes could include adding the derivative (D) term for full PID control logic. The D term will dampen the system's response and decrease the stability problems, but at a cost of lower performance. Other schemes could include more advanced algorithms to automatically optimize the PID parameters with a change in deadtime. Further, APC would integrate an individual controller into an overall control philosophy that would also take into account the individual controller's response to deadtime.


30 Summary Local PID controllers only concerned with optimizing response of one setpoint in one variable APC manipulates local controller setpoints according to future predictions of embedded process model Hierarchical and multiobjective controller philosophy Optimizes local controller interactions and parameters Optimized to multiple economic objectives Benefits of APC: ability to reduce process variation and optimize multiple variables simultaneously Maximize the process capacity to unit constraints Reduce quality giveaway as products closer to specifications Ability to offload optimization responsibility from operator Local control is implemented by using appropriate controllers (usually PID controllers) to keep the process operating at desired conditions or setpoints. Although it is easier to tune and maintain simple controllers, some processes require control by more sophisticated techniques and algorithms. However, unless such sophisticated controllers are installed and maintained by well-trained personnel, they can be prone to failure. Until recently, higher level tasks of monitoring, optimization and supervision were mainly carried out by operators or management. Due to the advent of modern technology and advances in the field of artificial intelligence, these processes can be automated and supervised by higher level systems. APC manipulates the setpoints of local controllers to control the process according to high-level process measurements that are representative of the process's overall performance. The control logic used for APC is far more complex than that used for local PID controllers. APC loops typically control and manipulate multiple variables concurrently in a multivariable control algorithm. APC calculates outputs based on future predictions of the controlled variables from a dynamic process model embedded in the control loop. Therefore, APC controllers are model predictive controllers. 
An advantage of APC is that the algorithm can control variables within limit values versus controlling to setpoints, and can manipulate the output of the process considering multiple economic objectives. Therefore, APC controllers are hierarchical, multiobjective controllers. APC loops are designed to emulate the supervisory actions of console operators. These loops will typically outperform the operators, depending on the accuracy of the embedded model and the ability of the control system to maintain the local controllers at the desired setpoints. Typically, the embedded model should be at least as good at predicting the process output as an operator. In addition, an APC model can evaluate a large number of process variables simultaneously. The value of APC comes from its ability to maximize the capacity of the process by operating the unit at or near operating (design) constraints, and from its ability to reduce product quality giveaway by controlling products closer to their specifications. Both benefits come from APC's ability to reduce the variability of the process. Another benefit of APC comes from its ability to offload daily optimization responsibility from the console operators. This is a relevant benefit considering the current trend to consolidate operator manpower, often to remote locations.
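The model-predictive idea summarized above can be illustrated with a toy receding-horizon controller. Everything here is an assumption for illustration: a first-order embedded model, a brute-force search over candidate input moves (a real MPC solves a constrained optimization), and arbitrary weights and limits.

```python
# Toy receding-horizon (model predictive) controller sketch.
a, b = 0.9, 0.1              # assumed embedded model: y[k+1] = a*y[k] + b*u[k]
horizon, move_wt, sp = 5, 0.05, 1.0

def best_move(y, u_prev):
    """Search candidate input moves; a real MPC would solve a QP instead."""
    best_u, best_cost = u_prev, float("inf")
    for du in [x / 50 for x in range(-100, 101)]:      # moves in [-2, 2]
        u = u_prev + du
        yp, cost = y, move_wt * du * du                # penalize large moves
        for _ in range(horizon):
            yp = a * yp + b * u                        # predict with the model
            cost += (sp - yp) ** 2                     # predicted tracking error
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

y, u = 0.0, 0.0
for _ in range(40):
    u = best_move(y, u)      # re-optimize every step (receding horizon)
    y = a * y + b * u        # here the plant happens to match the model
```

Limit handling, the feature highlighted in the summary, would appear here as bounds on the candidate `u` values and as penalties only when predicted outputs leave an allowed band, rather than tracking a single setpoint.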

31 Recommended References
Camacho E F & Bordons C, Model Predictive Control, Springer, 1999. Dutton K, Thompson S & Barraclough B, The Art of Control Engineering, Addison Wesley, 1997. Marlin T, Process Control: Designing Processes and Control Systems for Dynamic Performance, McGraw Hill, 1995. Ogunnaike B A & Ray W H, Process Dynamics, Modelling and Control, Oxford University Press, 1994.

32 Useful Websites There are a number of companies such as AspenTech, Hyperion and Honeywell that specialize in all aspects of advanced process control consulting, design, implementation and maintenance. In addition, a large number of individual consultants or consulting firms exist to provide expertise in advanced process control.
