
1 1 DYNAMIC BEHAVIOR OF PROCESSES : Development of Empirical Models from Process Data ERT 210 Process Control & Dynamics

2 2 Chapter 7 COURSE OUTCOME 1 (CO1)
1. Theoretical Models of Chemical Processes
2. Laplace Transform
3. Transfer Function Models
4. Dynamic Behavior of First-order and Second-order Processes
5. Dynamic Response Characteristics of More Complicated Processes
6. Development of Empirical Models from Process Data: DEFINE, REPEAT and DEVELOP empirical steady-state and dynamic models for processes; DEFINE and IDENTIFY the concept of discrete-time models; APPLY steady-state and dynamic empirical models for both continuous and discrete-time models; ANALYZE first-order and second-order dynamic models from step response data using analytical solutions.

3 3 Chapter 7 Development of Empirical Models from Process Data
1. Model Development Using Linear or Nonlinear Regression
2. Fitting First- and Second-Order Models Using Step Tests
3. Neural Network Models
4. Development of Discrete-Time Dynamic Models
5. Identifying Discrete-Time Models from Experimental Data

4 4 Chapter 7 1. Model Development Using Linear or Nonlinear Regression

5 5 Chapter 7 1.1 Model Building Procedure

6 6 1.2 Simple Linear Regression: Steady-State Model
Regression is a statistical analysis that can be used to estimate unknown model parameters and to specify the uncertainty associated with the empirical model. Figure 7.2 Three models for scattered data. In Fig. 7.2, the straight-line model (Model 1) provides a reasonable fit, while the higher-order polynomial models (Models 2 and 3) give smaller errors between the data and the curve representing the empirical model. For a linear model, the least-squares approach is widely used to estimate the model parameters.

7 7 1.2 Simple Linear Regression: Steady-State Model (continued)
As an illustrative example, consider a simple linear model between an output variable y and an input variable u,

y = β_1 + β_2 u + ε   (7-1)

where β_1 and β_2 are the unknown model parameters to be estimated and ε is a random error. Predictions of y can be made from the regression model,

ŷ = β̂_1 + β̂_2 u

where β̂_1 and β̂_2 denote the estimated values of β_1 and β_2, and ŷ denotes the predicted value of y. Let Y denote the measured value of y. Each pair of (u_i, Y_i) observations satisfies:

Y_i = β_1 + β_2 u_i + ε_i

8 8 The Least Squares Approach
The least squares method is widely used to calculate the values of β_1 and β_2 that minimize the sum of the squares of the errors S for an arbitrary number of data points, N:

S = ∑ ε_i² = ∑ (Y_i − β_1 − β_2 u_i)²   (7-2)

where the sums run over the N data points, i = 1, …, N. Replacing the unknown values of β_1 and β_2 in (7-2) by their estimates, and using (7-3), S can be written as:

S = ∑ (Y_i − β̂_1 − β̂_2 u_i)²

This approach can be extended to more general models with:
1. More than one input or output variable.
2. Functions of the input variables u, such as polynomials and exponentials, provided that the unknown parameters appear linearly.

9 9 The Least Squares Approach (continued)
The least squares solution that minimizes the sum of squared errors, S, is given by (Eqs. 7-5 and 7-6):

β̂_1 = (S_uu S_y − S_uy S_u) / (N S_uu − S_u²)
β̂_2 = (N S_uy − S_u S_y) / (N S_uu − S_u²)

where:

S_u = ∑ u_i,   S_y = ∑ Y_i,   S_uu = ∑ u_i²,   S_uy = ∑ u_i Y_i
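As a sanity check (a sketch, not part of the original slides), these closed-form estimates are easy to evaluate numerically. Python with NumPy is assumed, and the data below are illustrative only.

```python
import numpy as np

def simple_linear_fit(u, Y):
    """Least-squares estimates (beta1, beta2) for y = beta1 + beta2*u,
    written in terms of the sums S_u, S_y, S_uu, S_uy defined above."""
    N = len(u)
    S_u, S_y = np.sum(u), np.sum(Y)
    S_uu, S_uy = np.sum(u * u), np.sum(u * Y)
    denom = N * S_uu - S_u**2
    beta2 = (N * S_uy - S_u * S_y) / denom     # slope
    beta1 = (S_uu * S_y - S_uy * S_u) / denom  # intercept
    return beta1, beta2

# Illustrative check: data generated from y = 1 + 2u should return (1, 2)
u = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
Y = 1.0 + 2.0 * u
print(simple_linear_fit(u, Y))
```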

10 10 Extensions of the Least Squares Approach
A general nonlinear steady-state model which is linear in the parameters has the form

y = ∑ β_j X_j,   j = 1, …, p   (7-7)

The p unknown parameters (β_j) are to be estimated, and the X_j are known functions of the input u. The unknown parameters β_j appear linearly in the model, and a constant term can be included by setting X_1 = 1. The sum-of-squares function analogous to (7-2) is

S = ∑ (Y_i − ∑ β_j X_ij)²

where the outer sum runs over the N data points and the inner sum over the p parameters. For X_ij, the first subscript i corresponds to the ith data point and the second subscript j refers to the jth function of u.

11 11 Chapter 7 This expression can be written in matrix notation as

S = (Y − Xβ)^T (Y − Xβ)

where the superscript T denotes the matrix transpose, Y is the N × 1 vector of measurements, β is the p × 1 vector of parameters, and X is the N × p matrix whose (i, j) element is X_ij. The least squares estimate β̂ is given by

β̂ = (X^T X)^(−1) X^T Y   (7-10)

provided that the matrix X^T X is nonsingular so that its inverse exists. Note that the matrix X is comprised of functions of u; for example, if y = β_1 + β_2 u + β_3 u², then X_1 = 1, X_2 = u, and X_3 = u².
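Eq. 7-10 translates directly into a few lines of NumPy. A minimal sketch (the regressors and data below are illustrative choices, not from the slides):

```python
import numpy as np

def linear_least_squares(X, Y):
    """Least-squares estimate beta_hat = (X^T X)^{-1} X^T Y (Eq. 7-10).
    X is the N x p regressor matrix, Y the N-vector of measurements."""
    # Solving the normal equations is preferred to forming the explicit
    # inverse; numpy raises LinAlgError if X^T X is singular, mirroring
    # the nonsingularity requirement noted above.
    return np.linalg.solve(X.T @ X, X.T @ Y)

# Illustrative quadratic case, X_1 = 1, X_2 = u, X_3 = u^2:
u = np.array([1.0, 2.0, 3.0, 4.0, 5.0])          # hypothetical inputs
Y = np.array([1.1, 2.9, 7.2, 12.8, 20.1])        # hypothetical measurements
X = np.column_stack([np.ones_like(u), u, u**2])  # N x 3 regressor matrix
print(linear_least_squares(X, Y))
```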

12 12 Chapter 7 Conclusions:
1. If the number of data points is equal to the number of model parameters (N = p), Eq. 7-10 provides a unique solution to the parameter estimation problem, one that gives perfect agreement with the data points, as long as X^T X is nonsingular.
2. For N > p, a least-squares solution results that minimizes the sum of the squared deviations between each of the data points and the model predictions.
3. The least-squares solution in Eq. 7-10 provides a point estimate for the unknown model parameters β_i, but it does not indicate how accurate the estimates are.
4. The degree of accuracy is expressed by confidence intervals of the form β̂_i ± Δβ_i. The Δβ_i are calculated from the (u, Y) data for a specified confidence level.

13 13 Chapter 7 Example 7.1
An experiment has been performed to determine the steady-state power delivered by a gas turbine-driven generator as a function of fuel flow rate. The following normalized data were obtained:

Fuel Flow Rate, u_i    Power Generated, Y_i
1.0                    2.0
2.3                    4.4
2.9                    5.4
4.0                    7.5
4.9                    9.1
5.8                    10.8
6.5                    12.3
7.7                    14.3
8.4                    15.8
9.0                    16.8

The linear model in (7-1) should be satisfactory because a plot of the data reveals a generally monotonic pattern. Find the best linear model and compare it with the best quadratic model.

14 14 Chapter 7 SOLUTION
To solve for the linear model, Eqs. 7-5 and 7-6 could be applied directly. However, to illustrate the use of Eq. 7-7, first define the terms in the linear model: X_1 = 1 and X_2 = u. The following matrices are then obtained: X is the 10 × 2 matrix whose first column is all ones and whose second column contains the fuel flow rates u_i, and Y is the 10 × 1 vector of the measured power values Y_i. The parameter estimates then follow from Eq. 7-10.

15 15 Chapter 7 Solving for β̂_1 and β̂_2 using Eq. 7-10 yields the same estimates (with 95% confidence limits) as Eqs. 7-5 and 7-6. To determine how much the model accuracy can be improved by using a quadratic model, Eq. 7-10 is applied again, this time with X_1 = 1, X_2 = u, and X_3 = u²; the estimated parameters and their 95% confidence limits are obtained in the same way. The predicted values of ŷ are compared with the measured values (actual data) in Table 7.1 for both the linear and quadratic models. It is evident from this comparison that the linear model is adequate and that little improvement results from the more complicated quadratic model.

16 16 Chapter 7 Table 7.1 A Comparison of Model Predictions from Example 7.1

u_i     y_i      Linear Model Prediction    Quadratic Model Prediction
1.0     2.0      1.94                       1.99
2.3     4.4      4.36
2.9     5.4      5.47                       5.46
4.0     7.5      7.52                       7.49
4.9     9.1      9.19                       9.16
5.8     10.8     10.86                      10.83
6.5     12.3     12.16                      12.14
7.7     14.3     14.40
8.4     15.8     15.70                      15.72
9.0     16.8     16.81                      16.85
                 (S = 0.0613)               (S = 0.0540)

S = sum of the squares of the errors.
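The fits in Table 7.1 can be cross-checked with a short script. This sketch assumes Python/NumPy and uses NumPy's polynomial least-squares routine instead of evaluating Eq. 7-10 by hand; the sums of squared errors should come out close to the values quoted in the table (0.0613 linear, 0.0540 quadratic).

```python
import numpy as np

# Data from Example 7.1
u = np.array([1.0, 2.3, 2.9, 4.0, 4.9, 5.8, 6.5, 7.7, 8.4, 9.0])
Y = np.array([2.0, 4.4, 5.4, 7.5, 9.1, 10.8, 12.3, 14.3, 15.8, 16.8])

for degree in (1, 2):                      # linear and quadratic models
    coeffs = np.polyfit(u, Y, degree)      # least-squares polynomial fit
    Y_hat = np.polyval(coeffs, u)          # model predictions
    S = np.sum((Y - Y_hat) ** 2)           # sum of squared errors
    print(f"degree {degree}: coefficients = {coeffs}, S = {S:.4f}")
```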

17 17 Chapter 7 1.3 Nonlinear Regression
If the empirical model is nonlinear, then nonlinear regression must be used. Why nonlinear:
- For example, suppose that a reaction rate expression of the form r_A = k c_A^n is to be fit to experimental data, where r_A is the reaction rate of component A, c_A is the reactant concentration, and k and n are model parameters.
- This model is linear with respect to the rate constant k but is nonlinear with respect to the reaction order n.
A general nonlinear model can be written as

y = f(u_1, u_2, u_3, …, β_1, β_2, β_3, …)   (7-11)

where y is the model output, the u_j are inputs, and the β_j are the model parameters to be estimated.

18 18 Chapter 7 1.3 Nonlinear Regression (continued)
In this case, the β_j do not appear linearly in the model. However, we can define a sum-of-squares error criterion to be minimized by selecting the parameter set β_j:

S = ∑ (Y_i − ŷ_i)²   (7-12)

where Y_i and ŷ_i denote the ith output measurement and the model prediction corresponding to the ith data point, respectively. The least-squares estimates are again denoted by β̂_j. For fitting the step response of a higher-order model, nonlinear regression requires an iterative numerical solution. As an alternative, a number of graphical correlations can be applied quickly to find approximate values of τ_1 and τ_2 in a second-order model (refer to Eq. 5-48, page 117).
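Nonlinear regression is usually done with an iterative solver. The sketch below assumes Python with SciPy and fits the rate expression r_A = k c_A^n; the data points are hypothetical values chosen only to illustrate the call.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical rate data (c_A, r_A) -- illustrative values only
c_A = np.array([0.5, 1.0, 2.0, 3.0, 4.0])
r_A = np.array([0.28, 0.80, 2.20, 4.10, 6.30])

def rate_model(c, k, n):
    """Power-law rate expression r_A = k * c_A**n."""
    return k * c**n

# Iterative nonlinear least squares: minimizes S = sum (Y_i - yhat_i)^2
(k_hat, n_hat), cov = curve_fit(rate_model, c_A, r_A, p0=(1.0, 1.0))
print(f"k = {k_hat:.3f}, n = {n_hat:.3f}")
```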

19 19 Chapter 7 2. Fitting First- and Second-Order Models Using Step Tests

20 20 Chapter 7 2.1 Graphical Techniques for First- and Second-Order Models
Simple transfer function models can be obtained graphically from step response data. A plot of the output response of a process to a step change in input is sometimes referred to as a process reaction curve. If the process of interest can be approximated by a first- or second-order linear model, the model parameters can be obtained by inspection of the process reaction curve. The response of a first-order model, Y(s)/U(s) = K/(τs + 1), to a step change of magnitude M is:

y(t) = KM(1 − e^(−t/τ))

21 21 Chapter 7 The initial slope of the normalized response is given by:

d(y/KM)/dt at t = 0 is equal to 1/τ

The gain can be calculated from the steady-state changes in u and y:

K = Δy/Δu = Δy/M,   with Δy = KM

22 22 Chapter 7 Figure 7.3 Step response of a first-order system and graphical constructions used to estimate the time constant, τ.
Conclusions:
1. The intercept of the tangent at t = 0 with the horizontal line y/KM = 1 occurs at t = τ.
2. As an alternative, τ can be estimated from a step response plot using the value of t at which the response is 63.2% complete.

23 23 Chapter 7 Example 7.2
Figure 7.4 shows the response of the temperature T in a continuous stirred-tank reactor to a step change in feed flow rate w from 120 to 125 kg/min. Find an approximate first-order model for the process at these operating conditions. Figure 7.4 Temperature response of a stirred-tank reactor for a step change in feed flow rate.

24 24 Chapter 7 Example 7.2 (cont'd) SOLUTION
First note that Δw = M = 125 − 120 = 5 kg/min. Because ΔT = T(∞) − T(0) = 160 − 140 = 20 °C, the steady-state gain is

K = ΔT/Δw = 20 °C / 5 kg/min = 4 °C/(kg/min)

The time constant obtained from the graphical construction shown in Fig. 7.4 is τ = 5 min. Note that this result is consistent with the "63.2% method", because the response reaches 63.2% of its total change, 140 + 0.632(20) = 152.6 °C, at about t = 5 min. Consequently, the resulting process model is

T′(s)/W′(s) = 4/(5s + 1)

where the steady-state gain is 4 °C/(kg/min).
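The same calculation is easy to automate. The sketch below (Python/NumPy assumed) works on a temperature response simulated from the fitted model K = 4, τ = 5 min rather than on the actual data of Fig. 7.4, so it simply recovers those values.

```python
import numpy as np

# Simulated response to the step in w at t = 0 (mimics Example 7.2)
t = np.linspace(0.0, 30.0, 301)                # time, min
T = 140.0 + 20.0 * (1.0 - np.exp(-t / 5.0))    # reactor temperature, degC

M = 5.0                         # step magnitude, kg/min
delta_y = T[-1] - T[0]          # steady-state output change, degC
K = delta_y / M                 # steady-state gain, degC/(kg/min)

# 63.2% method: tau is the time at which 63.2% of the change is complete
y_632 = T[0] + 0.632 * delta_y
tau = np.interp(y_632, T, t)    # valid because T increases monotonically

print(f"K = {K:.2f} degC/(kg/min), tau = {tau:.2f} min")
```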

25 25 Chapter 7 Conclusions: Very few experimental step responses exhibit exactly first-order behavior because:
1. The true process model is usually neither first order nor linear. Only the simplest processes exhibit such ideal dynamics.
2. The output data are usually corrupted with noise; that is, the measurements contain a random component.
3. Another process input (disturbance) may change in an unknown manner during the duration of the step test. In the CSTR example, undetected changes in inlet composition or temperature are examples of such disturbances.
4. It can be difficult to generate a perfect step input. Process equipment, such as the pumps and control valves discussed earlier, cannot be changed instantaneously from one setting to another but must be ramped over a period of time.

26 26 Chapter 7 In order to account for higher-order dynamics that are neglected in a first-order model, a time-delay term can be included. This modification can improve the agreement between the model and the experimental response. The fitting of a first-order-plus-time-delay (FOPTD) model to the actual step response can be used for this purpose, as shown in Figure 7.5. Figure 7.5 Graphical analysis of the process reaction curve to obtain parameters of a first-order plus time delay model.

27 27 Chapter 7 First-Order Plus Time Delay (FOPTD) Model
For this FOPTD model, we note the following characteristics of its step response:
1. The response attains 63.2% of its final value at time t = θ + τ.
2. The line drawn tangent to the response at maximum slope (t = θ) intersects the y/KM = 1 line at t = θ + τ.
3. The step response is essentially complete at t = 5τ. In other words, the settling time is t_s = 5τ.


29 29 Chapter 7 There are two generally accepted graphical techniques for determining the model parameters θ, τ, and K.
Method 1: Slope-intercept method. First, a slope is drawn through the inflection point of the process reaction curve in Fig. 7.5. Then θ and τ are determined by inspection. Alternatively, τ can be found from the time at which the normalized response is 63.2% complete or from determination of the settling time, t_s; then set τ = t_s/5.
Method 2: Sundaresan and Krishnaswamy's method. This method avoids use of the point-of-inflection construction entirely to estimate the time delay (the inflection point can be difficult to locate as a result of measurement noise and small-scale recorder charts or computer displays).

30 30 Chapter 7 Sundaresan and Krishnaswamy's Method
They proposed that two times, t_1 and t_2, be estimated from a step response curve, corresponding to the 35.3% and 85.3% response times, respectively. The time delay and time constant are then estimated from the following equations:

θ = 1.3 t_1 − 0.29 t_2
τ = 0.67 (t_2 − t_1)

These values of θ and τ approximately minimize the difference between the measured response and the model, based on a correlation for many data sets.
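A sketch of the procedure in Python/NumPy. The correlation coefficients are the ones quoted above, and the noise-free FOPTD test response (K = 2, τ = 4, θ = 1) is an arbitrary illustrative choice used to check that the estimates come back close to the true values.

```python
import numpy as np

def fit_foptd_sk(t, y, u_step, y0):
    """Estimate (K, tau, theta) of a FOPTD model from step-response data
    using the 35.3% / 85.3% response times t1 and t2 and the
    Sundaresan-Krishnaswamy correlations given above."""
    delta_y = y[-1] - y0
    K = delta_y / u_step
    t1 = np.interp(y0 + 0.353 * delta_y, y, t)   # 35.3% response time
    t2 = np.interp(y0 + 0.853 * delta_y, y, t)   # 85.3% response time
    tau = 0.67 * (t2 - t1)
    theta = 1.3 * t1 - 0.29 * t2
    return K, tau, theta

# Self-check on a noise-free FOPTD response with K = 2, tau = 4, theta = 1
t = np.linspace(0.0, 40.0, 4001)
y = 2.0 * (1.0 - np.exp(-(t - 1.0) / 4.0)) * (t >= 1.0)
print(fit_foptd_sk(t, y, u_step=1.0, y0=0.0))    # ~ (2.0, 4.0, 1.0)
```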

31 31 Chapter 7 Graphical Techniques for Second-Order Models
In general, a better approximation to an experimental step response can be obtained by fitting a second-order model to the data. Figure 7.6 shows the range of shapes that can occur for the step response of the second-order model

G(s) = K / [(τ_1 s + 1)(τ_2 s + 1)]

Figure 7.6 includes two limiting cases: τ_2/τ_1 = 0, where the system becomes first order, and τ_2/τ_1 = 1, the critically damped case. The larger of the two time constants, τ_1, is called the dominant time constant.

32 32 Chapter 7 Figure 7.6 Step response for several overdamped second-order systems.

33 33 Smith’s Method 1.Determine t 20 and t 60 from the step response. 2.Find ζ and t 60 /  from Fig. 7.7. 3.Find t 60 /  from Fig. 7.7 and then calculate  (since t 60 is known)  Chapter 7 Assumed model: Procedure:

34 34 Chapter 7 (Figure 7.7, used in Smith's method to find ζ and t_60/τ)

35 35 Chapter 7 2.2 Fitting an Integrator Model to Step Response Data
In Chapter 5 we considered the response of a first-order process to a step change in input of magnitude M:

y(t) = KM(1 − e^(−t/τ))

For short times, t < τ, the exponential term can be approximated by

e^(−t/τ) ≈ 1 − t/τ

so that the approximate response is:

y(t) ≈ KM t/τ

36 36 Chapter 7 This approximate response, y(t) ≈ (KM/τ) t, is virtually indistinguishable from the step response of the integrating element

G(s) = K*/s,   with K* = K/τ

In the time domain, the step response of an integrator is

y(t) = K* M t

Hence an approximate way of modeling a first-order process is to find the single parameter K* = K/τ that matches the early ramp-like response to a step change in input.

37 37 Chapter 7 If the original process transfer function contains a time delay (cf. Eq. 7-16), the approximate short-term response to a step input of magnitude M would be

y(t) ≈ (KM/τ)(t − θ) S(t − θ)

where S(t − θ) denotes a delayed unit step function that starts at t = θ.

38 38 Chapter 7 Figure 7.10. Comparison of step responses for a FOPTD model (solid line) and the approximate integrator plus time delay model (dashed line).
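The comparison in Figure 7.10 can be reproduced numerically for short times. A sketch in Python/NumPy; the parameter values (K = 3, τ = 10, θ = 2, M = 1) are illustrative only.

```python
import numpy as np

K, tau, theta, M = 3.0, 10.0, 2.0, 1.0

t = np.linspace(0.0, 5.0, 6)                   # short times, t - theta << tau
step = (t >= theta).astype(float)              # delayed unit step S(t - theta)
y_foptd = K * M * (1.0 - np.exp(-(t - theta) / tau)) * step
y_integ = (K * M / tau) * (t - theta) * step   # ramp of slope KM/tau

for ti, yf, yi in zip(t, y_foptd, y_integ):
    print(f"t = {ti:4.1f}   FOPTD = {yf:.3f}   integrator = {yi:.3f}")
```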

39 39 Chapter 7 3. Neural Network Models
Neural networks (NN), or artificial neural networks, are an important class of empirical nonlinear models. They have been used extensively to model a wide range of physical and chemical phenomena, as well as nonengineering problems such as stock market analysis, speech recognition, and medical diagnosis. Figure 7.12: Multilayer neural network with three layers. Neural networks will not be discussed further in this chapter.

40 40 Chapter 7 4. Development of Discrete-Time Dynamic Models
A digital computer by its very nature deals internally with discrete-time data, that is, numerical values of functions at equally spaced intervals determined by the sampling period. Thus, discrete-time models such as difference equations are widely used in computer control applications. One way a continuous-time dynamic model can be converted to discrete-time form is by employing a finite difference approximation. Consider a nonlinear differential equation

dy/dt = f(y, u)

where y is the output variable and u is the input variable.

41 41 Chapter 7 This equation can be numerically integrated (though with some error) by introducing a finite difference approximation for the derivative. For example, the first-order, backward-difference approximation to the derivative at t = kΔt is

dy/dt ≈ [y(k) − y(k−1)] / Δt   (7-26)

where Δt is the integration interval specified by the user and y(k) denotes the value of y(t) at t = kΔt. Substituting Eq. 7-26 into the differential equation and evaluating f(y, u) at the previous values of y and u (i.e., y(k−1) and u(k−1)) gives:

y(k) = y(k−1) + Δt f(y(k−1), u(k−1))   (7-27)
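A sketch of this recursion for a generic f(y, u), assuming Python/NumPy; the cubic right-hand side used as a test case is hypothetical.

```python
import numpy as np

def finite_difference_sim(f, y0, u_seq, dt):
    """Integrate dy/dt = f(y, u) with the recursion above:
    y(k) = y(k-1) + dt * f(y(k-1), u(k-1))."""
    y = np.zeros(len(u_seq) + 1)
    y[0] = y0
    for k in range(1, len(y)):
        y[k] = y[k - 1] + dt * f(y[k - 1], u_seq[k - 1])
    return y

# Hypothetical nonlinear example: dy/dt = -y**3 + u with a unit step input
y = finite_difference_sim(lambda y, u: -y**3 + u, y0=0.0,
                          u_seq=np.ones(50), dt=0.1)
print(y[-1])   # approaches the steady state where y**3 = u, i.e., y = 1
```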

42 42 Chapter 7 Example 7.4
For the first-order differential equation

τ dy/dt + y = K u(t)   (7-30)

derive a recursive relation for y(k) using a first-order backward difference for dy/dt.

SOLUTION
The corresponding difference equation, after approximating the first derivative, is

τ [y(k) − y(k−1)] / Δt + y(k) = K u(k−1)   (7-31)

Rearranging gives

y(k) = [τ / (τ + Δt)] y(k−1) + [KΔt / (τ + Δt)] u(k−1)   (7-32)

The new value y(k) is a weighted sum of the previous value y(k−1) and the previous input u(k−1).
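A short sketch (Python/NumPy) that iterates Eq. 7-32 for illustrative values of K, τ, and Δt and compares the result with the exact continuous-time step response.

```python
import numpy as np

K, tau, dt = 2.0, 5.0, 0.5    # illustrative model and sampling parameters
a = tau / (tau + dt)          # weight on y(k-1)
b = K * dt / (tau + dt)       # weight on u(k-1)

n_steps = 40
y = np.zeros(n_steps + 1)     # y(0) = 0
u = np.ones(n_steps + 1)      # unit step input applied at k = 0

for k in range(1, n_steps + 1):
    y[k] = a * y[k - 1] + b * u[k - 1]      # Eq. 7-32 recursion

t = dt * np.arange(n_steps + 1)
y_exact = K * (1.0 - np.exp(-t / tau))      # analytical step response
print(f"max abs error vs. exact response: {np.max(np.abs(y - y_exact)):.4f}")
```

The discrepancy shrinks as Δt is made small relative to τ, which is the usual guideline for choosing the integration interval.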

43 43 Chapter 7 Second-Order Difference Equation Models
Parameters in a discrete-time model can be estimated directly from input-output data based on linear regression. This approach is an example of system identification (Ljung, 1999). As a specific example, consider the second-order difference equation

y(k) = a_1 y(k−1) + a_2 y(k−2) + b_1 u(k−1) + b_2 u(k−2)   (7-36)

It can be used to predict y(k) from data available at times (k − 1) and (k − 2). In developing a discrete-time model, the model parameters a_1, a_2, b_1, and b_2 are considered to be unknown.
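A sketch of the regression step (Python/NumPy). The "true" parameter values and the random input sequence are purely illustrative; in practice the regressors would be built from measured input-output data, usually corrupted by noise.

```python
import numpy as np

# y(k) = a1*y(k-1) + a2*y(k-2) + b1*u(k-1) + b2*u(k-2)   (Eq. 7-36)
a1, a2, b1, b2 = 1.5, -0.6, 0.1, 0.05   # assumed "true" parameters
rng = np.random.default_rng(0)

N = 200
u = rng.standard_normal(N)              # exciting input sequence
y = np.zeros(N)
for k in range(2, N):
    y[k] = a1 * y[k - 1] + a2 * y[k - 2] + b1 * u[k - 1] + b2 * u[k - 2]

# One regression row per prediction of y(k), k = 2, ..., N-1
X = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
Y = y[2:]
params, *_ = np.linalg.lstsq(X, Y, rcond=None)
print("estimated [a1, a2, b1, b2]:", params)   # should recover the true values
```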

44 44 Chapter 7 For more details, try analyzing Examples 7.5 and 7.6.

45 45 Chapter 7 THANK YOU

