Basic Concepts in Data Reconciliation
© North Carolina State University, USA; © University of Ottawa, Canada, 2003

Chapter 6: Steady-State Data Reconciliation with Model Uncertainties

6.1 Models with Uncertainties

In the previous chapters, the models employed in DR were considered exact; that is, the DR algorithms force the reconciled data to satisfy the models exactly. However, perfect models rarely exist, and the reconciliation of measured values is more likely a compromise between inaccuracies in both the measurements and the process models. In such cases, the reconciled values of the process variables should be obtained by:

minimize (y − ŷ)ᵀ V⁻¹ (y − ŷ)   (6.1)
subject to f(ŷ, ẑ) = ε

where ε is a C × 1 vector of model residuals that are the estimates of the random model errors.

The process models employed in the DR problem (6.1) have uncertainties that account for inaccuracies introduced by approximations, and thus reflect the modeling error. The variances and covariances of the model errors, ε, indicate the accuracy of the models. For linear models, the variances and covariances of the model errors can be obtained analytically. For nonlinear models, they are calculated either by a Monte Carlo simulation or, neglecting correlations between the model errors, the variances can be approximated using a first-order Taylor series expansion around the measurement values to give:

V(ε_i) ≈ Σ_j (∂f_i/∂y_j)² σ_j²   (6.2)

where σ_j² is the variance of the measurement y_j.
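The approximation (6.2) is easy to evaluate numerically. A small Python sketch using central differences for the partial derivatives; the residual function and the variances are illustrative, not from the text:

```python
import numpy as np

def residual_variance(f, y, meas_var, h=1e-6):
    """Approximate V(eps) = sum_j (df/dy_j)^2 * var(y_j), as in Equation 6.2.

    Assumes independent measurement errors (no correlations); the partial
    derivatives are estimated by central differences.
    """
    y = np.asarray(y, dtype=float)
    var = 0.0
    for j in range(len(y)):
        yp, ym = y.copy(), y.copy()
        yp[j] += h
        ym[j] -= h
        dfdy = (f(yp) - f(ym)) / (2 * h)   # numerical partial derivative
        var += dfdy**2 * meas_var[j]
    return var

# For a linear residual f = y1 - y2 the approximation is exact:
f = lambda y: y[0] - y[1]
v = residual_variance(f, y=[2.0, 1.0], meas_var=[2.0, 2.2])
# v = 2.0 + 2.2 = 4.2
```

For linear residuals the result matches the analytical value exactly; for nonlinear residuals it is only a first-order approximation around the measured point.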

In the Monte Carlo simulation, a series of normally distributed noise values for each variable y_j, having the same variance as the corresponding measurement, is generated by a computer program. The functions f(y, z) are then evaluated for each noisy sample, and the variances and covariances of the resulting model errors are calculated.

[Figure: generated noise vectors → calculated model errors ε = f(y, z) → compute V(ε_1), V(ε_2), Cov(ε_1, ε_2), …]
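The procedure can be sketched in a few lines of Python; the residual functions, nominal values, and variances below are illustrative assumptions:

```python
import numpy as np

# Monte Carlo estimate of the model-error variances and covariances.
rng = np.random.default_rng(0)

y_nom = np.array([2.0, 1.0])            # nominal measured values (illustrative)
sigma = np.sqrt(np.array([2.0, 2.2]))   # measurement standard deviations

# Generate noisy samples of the measured variables.
samples = rng.normal(y_nom, sigma, size=(200_000, 2))

# Evaluate two hypothetical model residuals for every sample.
eps1 = samples[:, 0] - samples[:, 1]        # linear residual y1 - y2
eps2 = samples[:, 0] ** 2 - samples[:, 1]   # nonlinear residual y1^2 - y2

# Variance-covariance matrix of the model errors.
cov = np.cov(np.vstack([eps1, eps2]))
```

For the linear residual, V(ε_1) approaches the analytical value 2.0 + 2.2 = 4.2; for the nonlinear residual, the Monte Carlo estimate captures the variance that the linearization (6.2) only approximates.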

6.2 Data Reconciliation Algorithm with Uncertain Models

When the models are uncertain, the reconciled data should be obtained such that both the measurement errors and the model errors are minimized simultaneously. The data reconciliation problem (6.1) therefore becomes:

minimize (y − ŷ)ᵀ V⁻¹ (y − ŷ) + εᵀ Ω⁻¹ ε   (6.3)
subject to f(ŷ, ẑ) = ε

where Ω is the covariance matrix of the model errors. The data reconciliation algorithm formulated by (6.3) treats the reconciled data as a compromise between the measurements and the models. The variances of the models can also be treated as tuning parameters of the algorithm: if we have high confidence in the models, we set the variances to small values; otherwise we set them to large values. There are two limiting cases:

(i) if the variance of the models is very large compared to the variance of the measurements, the reconciled data found by (6.3) will be the actual measurements;
(ii) if the variance of the models is very small compared to the variance of the measurements, the reconciled data found by (6.3) will behave as if the models were exact, which reduces to the conventional data reconciliation algorithm formulated by Equation 1.2.
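These limiting cases can be checked numerically with the linear solution ŷ = (AᵀΩ⁻¹A + V⁻¹)⁻¹ V⁻¹ y derived in Section 6.3; the two-stream balance and the variances here are illustrative:

```python
import numpy as np

# A hypothetical single balance y1 - y2 = eps between two measured streams.
y = np.array([10.3, 9.7])
V = np.diag([1.0, 1.0])        # measurement variances (assumed)
A = np.array([[1.0, -1.0]])    # balance: stream 1 - stream 2 = 0

def reconcile(model_var):
    """Linear DR solution with uncertain model variance Omega."""
    Omega = np.array([[model_var]])
    M = A.T @ np.linalg.inv(Omega) @ A + np.linalg.inv(V)
    return np.linalg.solve(M, np.linalg.inv(V) @ y)

loose = reconcile(1e6)    # huge model variance -> trust the measurements
tight = reconcile(1e-6)   # tiny model variance -> enforce the balance
```

With a very large model variance the reconciled flows stay at the raw values; with a very small one they are pulled together so the balance closes, as in conventional DR.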

6.3 Solutions to Data Reconciliation with Uncertain Models

Note that the uncertain model functions in Equation 6.3 can be linear or nonlinear. For the simplest case, all the variables are measured and the model functions are linear, Aŷ = ε, so the data reconciliation problem becomes the unconstrained problem:

minimize (y − ŷ)ᵀ V⁻¹ (y − ŷ) + (Aŷ)ᵀ Ω⁻¹ (Aŷ)   (6.4)

The condition for the minimum of (6.4) is that the first derivative of the objective function with respect to ŷ vanishes. Taking the first derivative and setting it to zero gives:

−2 V⁻¹ (y − ŷ) + 2 Aᵀ Ω⁻¹ A ŷ = 0   (6.5)

Rearranging Equation 6.5 gives:

ŷ = (Aᵀ Ω⁻¹ A + V⁻¹)⁻¹ V⁻¹ y   (6.6)

which is the solution to DR with linear uncertain models. For the cooling water network illustrated in Figure 1.3, suppose the true values of the flows satisfy the mass balances around each node only approximately, because of fluctuations in the plant. The variances of the node imbalances are obtained as Ω = diag(1.165, 0.785, 0.414, 2.147).

Using Equation 6.6, the reconciled data constrained by the uncertain mass balances can be obtained by the following MATLAB code:

**********************************************
y = [110.5; 60.8; 35.0; 68.9; 38.6; 101.4];
V = [0.6724 0 0 0 0 0; 0 0.2809 0 0 0 0; 0 0 0.2116 0 0 0;
     0 0 0 0.5041 0 0; 0 0 0 0 0.2025 0; 0 0 0 0 0 1.44];
A = [1 -1 -1 0 0 0; 0 1 0 -1 0 0; 0 0 1 0 -1 0; 0 0 0 1 1 -1];
O = [1.165 0 0 0; 0 0.785 0 0; 0 0 0.414 0; 0 0 0 2.147];
yhat = inv(A'*inv(O)*A + inv(V))*inv(V)*y
**********************************************
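For readers without MATLAB, an equivalent NumPy translation of the code above (an adaptation, not part of the original text):

```python
import numpy as np

# yhat = (A' O^-1 A + V^-1)^-1 V^-1 y for the cooling water network.
y = np.array([110.5, 60.8, 35.0, 68.9, 38.6, 101.4])          # raw flows, kt/h
V = np.diag([0.6724, 0.2809, 0.2116, 0.5041, 0.2025, 1.44])   # measurement variances
A = np.array([[1, -1, -1,  0,  0,  0],                        # node balance matrix
              [0,  1,  0, -1,  0,  0],
              [0,  0,  1,  0, -1,  0],
              [0,  0,  0,  1,  1, -1]], dtype=float)
O = np.diag([1.165, 0.785, 0.414, 2.147])                     # imbalance variances

M = A.T @ np.linalg.inv(O) @ A + np.linalg.inv(V)
yhat = np.linalg.solve(M, np.linalg.inv(V) @ y)               # reconciled flows
adjustment = yhat - y
```

Using `np.linalg.solve` instead of an explicit matrix inverse for the outer system is numerically preferable; the result is the same.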

The calculation results are listed in Table 6.1.

Table 6.1: Results of data reconciliation with imbalanced models for a cooling water network

Stream No.   Raw (kt/h)   Reconciled (kt/h)   Adjustment (kt/h)
1            110.5        106.73              -3.77
2            60.8         63.46               2.66
3            35.0         36.76               1.76
4            68.9         66.52               -2.38
5            38.6         37.88               -0.72
6            101.4        102.60              1.20

[Figure 1.3: the cooling water network, streams 1-6 between the cooling water supply (CWS) and return (CWR), with flow indicators (FI).]

It is worth noting that the reconciled flows in Table 6.1 don't satisfy the mass balances at each node. If we want to force the reconciled data to satisfy the mass balances exactly, we can artificially use very small values for the variances of the imbalances; then, using Equation 6.6, the reconciled flows will exactly close the mass balance at each node. On the other hand, if we have no confidence in the mass balances, we can use very large values for the variances of the imbalances, and the reconciled data will be equal to the raw measurements.

The variances of the model errors are treated as tuning parameters of the DR algorithm. The change in the reconciled data as the tuning parameters vary is illustrated in Figure 6.1.

Figure 6.1: Change in reconciled data with the change of the model variance. [As the model variance increases, the reconciled data move from the values reconciled by exact models toward the raw measurements.]

If there are unmeasured variables in the constraints, or if the uncertain models used in the DR are nonlinear, the problem can be formulated by Equation 6.3 and solved by a nonlinear programming method such as the quasi-Newton method. As mentioned in Chapter 1, data reconciliation techniques can also be used to estimate model parameters: the unknowns in the vector ẑ in (6.3) can be unmeasured variables or model parameters. It is worth noting that the number of model equations used in (6.3) must be larger than the number of unmeasured variables and model parameters, so that the DR algorithm has sufficient redundancy for the estimation.

6.4 Quiz

Question 1: The reconciled data for DR constrained by uncertain models
(a) will satisfy exact models.
(b) will be equal to the raw measurements.
(c) is a compromise between the measurements and models.
(d) all of the above.

Question 2: If the DR has infinite variance for its model uncertainties, the reconciled data
(a) will satisfy exact models.
(b) will be equal to the raw measurements.
(c) is a compromise between the measurements and models.
(d) all of the above.

Question 3: For the model y − x = ε, the variances of y and x are 2.0 and 2.2 respectively. The variance of the model error ε is
(a) 2.2 − 2.0 = 0.2.
(b) 2.0 − 2.2 = −0.2.
(c) 2.2.
(d) 2.0 + 2.2 = 4.2.

Question 4: For the model y² − x = ε, the variances of y and x are 2.0 and 2.2 respectively. The variance of ε is
(a) 2.0² + 2.2 = 6.2.
(b) 2.0² − 2.2 = 1.8.
(c) calculated by Monte Carlo simulation.
(d) approximated by linearizing the function y² − x.

6.5 Suggested Readings

Kao, E.P.C. (1997). An Introduction to Stochastic Processes. Duxbury Press, Belmont, Calif., USA.
Maquin, D.; Adrot, O. and Ragot, J. (2000). Data reconciliation with uncertain models. ISA Transactions, 39, 35-45.
Mandel, D.; Abdollahzadeh, A.; Maquin, D. and Ragot, J. (1998). Data reconciliation by inequality balance equilibration: a LMI approach. International Journal of Mineral Processing, 53, 157-171.

Chapter 7: Dynamic Data Reconciliation

7.1 Formulation of Dynamic Data Reconciliation

In the previous chapters, the data reconciliation problem was discussed for processes at steady state. However, many chemical processes are intrinsically dynamic, and disturbances occur frequently. Consequently, it is desirable to develop dynamic data reconciliation (DDR) strategies for dynamic processes, so that at every sampling instant more accurate and reliable process data are available for real-time control and optimization. In steady-state data reconciliation, only one set of data, taken at the current sampling time, is used; it exploits the spatial redundancy of the measurements, and no previous information is used to estimate the current state of the process.

Along with the spatial redundancy of measurements, when a process is continuously sampled at discrete points in time we also have temporal redundancy. The temporal redundancy of measurements is typically used for three types of state estimation, as shown in Figure 7.1.

Figure 7.1: Types of estimation using the temporal redundancy of measurements: filtering, prediction, and smoothing, distinguished by which portion of the data relative to time t is used for the estimate.

When the measurements up to and including time t are used to estimate the current state of the process, the estimation is called filtering. When the measurements prior to time t are used to predict the process variables at times t, t+1, t+2, etc., the estimation is called prediction. When the measurements both before and after time t are used to estimate the process variables at time t, the estimation is called smoothing.

Filtering and prediction can be applied on-line, whereas smoothing can only be used off-line. Dynamic data reconciliation uses both spatial and temporal redundancy to estimate the state of a process, so that at every sampling instant more accurate and reliable process data are available. For discretely sampled measurements, the DDR problem is typically formulated as:

minimize Σ_{t=0}^{tp} (y_t − ŷ_t)ᵀ V⁻¹ (y_t − ŷ_t)   (7.1)
subject to f(dŷ/dt, ŷ, ẑ) = 0
           g(ŷ, ẑ) = 0
           h(ŷ, ẑ) ≥ 0

where y_t is an M × 1 vector of observed values of the measured process variables at time t (t = 0, 1, 2, …, tp), ŷ_t is the vector of reconciled values of the measured process variables at time t, ẑ_t is an N × 1 vector of estimates of unmeasured variables or model parameters at time t, V is an M × M covariance matrix of the measurements, f is a C × 1 vector of differential model equality constraints, g is an S × 1 vector of algebraic model equality constraints, and h is an E × 1 vector of inequality model constraints, including simple upper and lower bounds.

The dynamic data reconciliation formulated by Equation 7.1 makes the reconciled data satisfy exact models. It applies a moving-window approach, where tp represents the window width. At each sampling time, only the measurements within the window, t = 0, 1, 2, …, tp, are reconciled, and only the reconciled values at the current time are used for on-line monitoring and control; the reconciled values of measurements prior to the current time are discarded. The window then moves forward for the next time step.

[Figure: a moving window of width 10 sliding along the time axis from 0 to tp.]

7.2 Linear Dynamic Data Reconciliation

Often, process dynamics can be described or approximated by linear differential equations. For a discretely sampled system, it is convenient to use difference equations. The dynamics of a linear time-invariant (LTI) process can be written as:

x_t = A x_{t−1} + B u_{t−1} + w   (7.2)

where x_t is an M × 1 vector of the true values of the state variables at time t, u_{t−1} is a P × 1 vector of manipulated inputs or disturbances to the process at time t−1, and w is an M × 1 vector of random variables that represent model errors (accounting for model mismatch and unknown disturbances corrupting the process). A and B are matrices of appropriate dimensions whose coefficients are known at all times.
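Equation 7.2 can be simulated directly. In the sketch below the matrices and input are illustrative, and the noise w is set to zero so that the deterministic response settles at the steady state x* = (I − A)⁻¹ B u:

```python
import numpy as np

# Illustrative stable LTI system x_t = A x_{t-1} + B u_{t-1} + w  (here w = 0).
A = np.array([[0.9, 0.0],
              [0.1, 0.8]])
B = np.array([[1.0],
              [0.0]])
u = np.array([1.0])      # constant input
x = np.zeros(2)          # initial state

for _ in range(500):     # propagate the deterministic model
    x = A @ x + B @ u

# Steady state of the deterministic model: x* = (I - A)^-1 B u
x_ss = np.linalg.solve(np.eye(2) - A, B @ u)
```

Adding a zero-mean Gaussian w at each step turns this into the stochastic model discussed in Section 7.3.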

In dynamic DR, at each sampling time both the model-predicted values found by Equation 7.2 and the measurements are available. Under the assumption that the measurement and model errors are normally distributed with zero mean and known variance, the estimates (reconciled or adjusted values) of the measured process variables can be obtained by solving an optimization problem that simultaneously minimizes the weighted sum of squared measurement and model errors:

minimize (y_t − x̂_t)ᵀ V⁻¹ (y_t − x̂_t) + (x̂_t − x_m,t)ᵀ V_m⁻¹ (x̂_t − x_m,t)   (7.3)
subject to x_m,t = A x̂_{t−1} + B u_{t−1}

where x_m,t denotes the M × 1 vector of model-predicted values. The manipulated inputs or disturbances u_{t−1} in (7.2) are assumed to always be exactly known. In Equation 7.3, V_m is an M × M variance-covariance matrix of the model predictions. The reconciled values at each sampling time t can be obtained analytically from Equation 7.3: taking the partial derivative of the objective function with respect to x̂_t and setting it equal to zero results in:

−2 V⁻¹ (y_t − x̂_t) + 2 V_m⁻¹ (x̂_t − x_m,t) = 0

Rearranging the above equation gives:

x̂_t = (V⁻¹ + V_m⁻¹)⁻¹ (V⁻¹ y_t + V_m⁻¹ x_m,t)   (7.4)

In Equation 7.4, the variances and covariances of the measurements in V can be obtained from the raw measurements. However, it is sometimes difficult to obtain the variances and covariances of the model predictions in V_m, not only because of model inadequacies but also because of the various unknown disturbances corrupting the process. As a result, we can treat the ratio of the variance of the model predictions, σ_m², to that of the process measurements, σ², as a tuning parameter of the DDR algorithm.

When σ_m²/σ² is used as a tuning parameter for the DDR algorithm, the locus of the reconciled values typically follows an asymptotic curve as a function of σ_m²/σ². Figure 7.2, for example, illustrates the locus of a reconciled value as σ_m²/σ² changes. The measured value of the liquid level of a tank is 60.0 cm, and the model-predicted value of the liquid level is 60.5 cm at time t. The reconciled value is a point on the curve whose location depends on σ_m²/σ², and is bounded by the measured and predicted values. There are two limiting cases: (i) σ_m²/σ² = 0, which corresponds to an exact model, making the reconciled value equal to the model-predicted value; and (ii) σ_m²/σ² = ∞, indicating that the confidence in the model predictions is poor, so the reconciled values approach the measured values.
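For a single variable, Equation 7.4 reduces to a weighted average. A sketch for the tank-level numbers quoted above, with r = σ_m²/σ² as the tuning parameter:

```python
# Scalar form of Equation 7.4: with r = sigma_m^2 / sigma^2, the reconciled
# value is a weighted average of the measurement y and the model prediction x_m.
def reconcile(y, x_m, r):
    return (r * y + x_m) / (1.0 + r)

level = reconcile(60.0, 60.5, 0.25)        # lies between 60.0 and 60.5 cm
exact_model = reconcile(60.0, 60.5, 0.0)   # r = 0: trust the model -> 60.5
no_model = reconcile(60.0, 60.5, 1e9)      # r -> infinity: trust the measurement
```

The two extreme calls reproduce limiting cases (i) and (ii); any finite nonzero r gives a compromise bounded by the measured and predicted values.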

Figure 7.2: Locus of reconciled data as a function of the ratio of model to measurement uncertainties in the DDR algorithm

In process control, measurement noise often deteriorates controller performance, so it is desirable to attenuate the noise before it reaches the controllers. The linear DDR strategy introduced above can be applied inside a control loop as a filter to reduce the propagation of measurement noise. The DDR algorithm as an integral part of a conventional PID control loop is illustrated in Figure 7.3: the raw measurements of the controlled variables are first reconciled by the DDR algorithm and then used by the controller to calculate the control moves.

Figure 7.3: Application of a DDR algorithm inside a PID control loop. [Block diagram: process outputs plus noise form the raw measurements; dynamic data reconciliation produces reconciled data for the digital controllers, which drive the control valve and the manipulated variables.]

To illustrate dynamic DR strategies inside a control loop, consider a liquid storage tank process with a PI controller, shown in Figure 7.3.

Figure 7.3: Schematic diagram of a storage tank process. [The tank has a measured feed (FI), an outlet, and a level controller (LC) acting on the outlet flow.]

The diameter and height of the tank are 1.0 m and 1.2 m, respectively. A PI controller regulates the liquid level of the tank by manipulating the outlet flow. The feed flow to the tank is measured but not controlled. The sampling interval is 1 minute. The measurement errors are assumed to be Gaussian white noise, and the standard deviations for the measurements of the feed flow rate and the liquid level are 0.036 m³/h and 0.012 m, respectively. The PI controller parameters are Kc = −1.87 m²/h and τI = 30.0 min. The dynamics of the storage tank corresponding to a 20% step change in feed flow rate were simulated, and the results are presented in Figure 7.4.

Figure 7.4: Dynamics of a storage tank for a 20% step change in feed flow at time t = 90 min

As shown in Figure 7.4, the measurements of the liquid level are noisy, and the manipulated variable, the outlet flow, exhibits considerable variation because the noisy measurements affect the calculated control moves. To attenuate the impact of the measurement noise, a DDR algorithm was embedded in the control loop to reconcile the liquid level before calculating the control moves. The reconciled liquid level at each sampling time is obtained by:

minimize (y_t − ĥ_t)²/σ² + (ĥ_t − h_m,t)²/σ_m²   (7.5)
subject to h_m,t = ĥ_{t−1} + (Δt/A)(F_in,t−1 − F_out,t−1)

where the model constraint is the mass balance around the tank in discrete form, A is the cross-sectional area of the tank, and Δt is the sampling interval. The ratio of the variance of the model-predicted values to the variance of the measurements, σ_m²/σ², is set to 0.25 in testing the performance of the DDR algorithm. The same feed flow perturbation is used to evaluate the controller performance with the DDR algorithm embedded in the control loop. As shown in Figure 7.5, the reconciled liquid level is less noisy than the raw measurements and closer to the true values, and the manipulated variable displays much smaller variations when the reconciled data are used to calculate the controller action.
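A minimal sketch of the level filter implied by (7.5): at each step the discrete mass-balance prediction is blended with the noisy measurement using r = σ_m²/σ² = 0.25. The flows and the initial level are illustrative assumptions; only the level is reconciled:

```python
import numpy as np

rng = np.random.default_rng(1)

area = np.pi * 0.5**2    # cross-section of a 1.0 m diameter tank, m^2
dt = 1.0 / 60.0          # 1 min sampling interval, h
r = 0.25                 # variance ratio sigma_m^2 / sigma^2
sigma = 0.012            # level measurement noise std dev, m

h_true, h_hat = 0.6, 0.6         # true and reconciled levels (illustrative start)
raw, reconciled = [], []
for t in range(500):
    f_in, f_out = 2.0, 2.0       # balanced flows, m^3/h -> level stays constant
    h_true += dt / area * (f_in - f_out)
    y = h_true + rng.normal(0.0, sigma)            # noisy level measurement
    h_model = h_hat + dt / area * (f_in - f_out)   # mass-balance prediction
    h_hat = (r * y + h_model) / (1.0 + r)          # reconciled level (Eq. 7.4)
    raw.append(y)
    reconciled.append(h_hat)
```

Comparing the scatter of `reconciled` to `raw` shows the noise attenuation: feeding the reconciled level to the controller is what reduces the variation of the manipulated variable.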

Figure 7.5: Dynamics of the storage tank with DDR embedded inside a PI control loop, for a 20% step change in feed flow at time t = 90 min

This example demonstrates that the DDR algorithm can efficiently attenuate measurement noise, resulting in enhanced controller performance. In applying DDR, dynamic models must be developed. For a simple process such as the storage tank, a fundamental model, the mass balance, was used. For complex processes, for example a distillation column, it is often difficult or impractical to develop fundamental dynamic models, and empirical input-output models have to be identified. Various techniques for developing empirical dynamic models are described under model identification in several textbooks (Ljung, 1987).

7.3 Dynamic Data Reconciliation with Kalman Filter

For convenience, we rewrite the linear time-invariant, discrete, dynamic model (7.2) as:

x_t = A x_{t−1} + B u_{t−1} + w_t   (7.6)

where x_t is the M × 1 vector of the true values of the process variables at time t, u_{t−1} is the P × 1 vector of process input variables (manipulated variables and disturbances) that are assumed to be exactly known at time t−1, A is an M × M matrix whose coefficients are known at all times, B is an M × P matrix whose coefficients are also known at all times, and w is an M × 1 vector of random variables assumed to be Gaussian white noise with zero mean and known variance. Equation 7.6 describes the dynamic evolution of the stochastic process variables.

The random vector w in this equation accounts for the following factors affecting the process: (i) random fluctuations of external disturbances, (ii) random errors in the manipulated inputs resulting from electrical noise during control signal transmission and imprecise actuator positioning, and (iii) modeling error introduced during model identification. If all the state variables are measured, the measurement model can be written as:

y_t = x_t + v_t   (7.7)

where v_t is the M × 1 vector of random measurement errors, also assumed to be Gaussian white noise with zero mean and known variance.

The two random vectors, w and v, are assumed to be uncorrelated. The statistical properties of these two random vectors are summarized as:

E(w_t) = 0, Cov(w_t) = R;  E(v_t) = 0, Cov(v_t) = V;  Cov(w_t, v_s) = 0

Given Equations 7.6 and 7.7, the optimal estimate of the process variables is given by the Kalman filter.

The detailed derivation of the Kalman filter can be found in the textbook Introduction to Optimal Estimation (Kamen and Su, 1999). The Kalman filter has a recursive form, and its equations are given by:

x̂_t = x̂_t⁻ + K_t (y_t − x̂_t⁻)   (7.8)

where x̂_t⁻ is the vector of model-predicted values given by the deterministic model:

x̂_t⁻ = A x̂_{t−1} + B u_{t−1}   (7.9)

K_t is called the Kalman gain and is given as:

K_t = P_t⁻ (P_t⁻ + V)⁻¹   (7.10)

The matrix P_t⁻ in (7.10) is the covariance matrix of the prediction error x_t − x̂_t⁻. It is given by:

P_t⁻ = A P_{t−1} Aᵀ + R   (7.11)

where P_{t−1} is the covariance matrix of the estimation error x_{t−1} − x̂_{t−1}, which is updated by:

P_t = (I − K_t) P_t⁻   (7.12)

The calculations of the recursive Equations 7.8–7.12 can be summarized as:

Step 1. Filter initialization: choose x̂_0 and P_0, and set t = 1.
Step 2. Time update: compute x̂_t⁻ from (7.9) and P_t⁻ from (7.11).

Step 3. Measurement update: compute K_t from (7.10), then x̂_t from (7.8) and P_t from (7.12).
Step 4. Time increment: increment t and repeat from Step 2.

It is important to note that in the LTI case the estimates generated by a Kalman filter have been proven to be unbiased and to have minimum mean squared error.

In addition, the matrices K_t, P_t⁻, and P_t are independent of the measurements y_t, but depend on the time index t. They become constant as time increases, and can therefore be calculated off-line. Recall the example of the storage tank process. If we neglect the reconciling of the feed flow and solve the optimization problem (7.5), the reconciled liquid level at each sampling time t can be obtained by the explicit equation:

ĥ_t = (σ_m² y_t + σ² h_m,t) / (σ² + σ_m²)   (7.13)

Equation 7.13 can be written in the form:

ĥ_t = h_m,t + [σ_m² / (σ² + σ_m²)] (y_t − h_m,t)   (7.14)

Comparing Equation 7.14 with Equation 7.8, the term σ_m²/(σ² + σ_m²) in (7.14) is equivalent to the Kalman gain in (7.8). If we model the dynamics of the storage tank as a stochastic difference equation, we obtain:

h_t = h_{t−1} + (Δt/A)(F_in,t−1 − F_out,t−1) + w_t   (7.15)

where w is Gaussian white noise with zero mean and known variance R = 3.6 × 10⁻⁵ m². The measurement model of the liquid level is:

y_t = h_t + v_t   (7.16)

where v_t is the white noise of the liquid level measurements, with zero mean and known variance V = 1.44 × 10⁻⁴ m². The optimal estimate of the liquid level of the tank, given the two models (7.15) and (7.16), is obtained by the Kalman filter:

ĥ_t = ĥ_t⁻ + K (y_t − ĥ_t⁻)

where ĥ_t⁻ is the model-predicted value given by:

ĥ_t⁻ = ĥ_{t−1} + (Δt/A)(F_in,t−1 − F_out,t−1)

and K is the Kalman gain. The value of the Kalman gain in this case can be calculated recursively through Steps 1 to 4, as shown in Table 7.1.

Table 7.1: Calculation of the Kalman gain for the storage tank

Iteration   P_t           P_t⁻          K_t
0           1.0 × 10⁻⁵    (initial guess)
1           3.49 × 10⁻⁵   4.60 × 10⁻⁵   0.242
2           4.75 × 10⁻⁵   8.35 × 10⁻⁵   0.367
3           5.29 × 10⁻⁵   8.89 × 10⁻⁵   0.382
4           5.50 × 10⁻⁵   9.10 × 10⁻⁵   0.387
5           5.58 × 10⁻⁵   9.17 × 10⁻⁵   0.389
6           5.60 × 10⁻⁵   9.20 × 10⁻⁵   0.390
7           5.61 × 10⁻⁵   9.21 × 10⁻⁵   0.390 (convergence)

The Kalman gain converges at K = 0.39, so the filter can be written as:

ĥ_t = ĥ_t⁻ + 0.39 (y_t − ĥ_t⁻)
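The recursion behind Table 7.1 is easy to reproduce. A scalar Python sketch using the variances given in the text (for the tank level, A = 1 and the recursion reduces to scalar arithmetic):

```python
# Scalar Kalman-gain recursion for the storage tank level:
#   time update:        P_minus = P + R       (A = 1)
#   gain:               K = P_minus / (P_minus + V)
#   measurement update: P = (1 - K) * P_minus
R = 3.6e-5     # model (process) noise variance, m^2
V = 1.44e-4    # measurement noise variance, m^2
P = 1.0e-5     # initial error covariance (a guess, as in Table 7.1)

for _ in range(20):
    P_minus = P + R
    K = P_minus / (P_minus + V)
    P = (1.0 - K) * P_minus
```

Because the gain is independent of the measurements, this whole computation can be done off-line; the recursion converges to K ≈ 0.39 within a handful of iterations, matching Table 7.1.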

7.4 Nonlinear Dynamic Data Reconciliation

When nonlinear dynamic models are used as the constraints of Equation 7.1, it is generally not possible to obtain an analytical discrete form of the differential constraint equations to represent the process. Liebman et al. (1992) proposed an approach, called nonlinear dynamic data reconciliation (NDDR), to solve this problem. The NDDR method discretizes the nonlinear differential equations by orthogonal collocation on finite elements, so that the differential equations are transformed into algebraic equations at each sampling time. After the discretization, the remaining problem is to minimize the least-squares objective function subject to algebraic equality and inequality constraints, which can be solved efficiently by nonlinear programming techniques at each sampling time.

An alternative method for solving the nonlinear dynamic data reconciliation problem is the extended Kalman filter (EKF). This extension typically replaces the nonlinear model equations with first-order approximations around the current estimates. Detailed descriptions of the EKF method can be found in the textbook Introduction to Optimal Estimation (Kamen and Su, 1999).

7.5 Quiz

Question 1: If a process is described by a stochastic dynamic model, the term w in Equation 7.2 accounts for
(a) modeling error.
(b) random external perturbations.
(c) errors in input variables.
(d) all of the above.

Question 2: Measurement noise in a control loop
(a) deteriorates the performance of the controller.
(b) causes high-frequency oscillations of the manipulated variables.
(c) masks the true values of the controlled variables.
(d) all of the above.

Question 3: In the application of a Kalman filter, if K → I, then
(a) the output of the filter approaches the model-predicted values.
(b) the output of the filter approaches the raw measurements.
(c) the variance of the model errors is very large compared to the measurement variance.
(d) all of the above.

Question 4: The Kalman gain in a Kalman filter
(a) is calculated recursively.
(b) can be calculated off-line.
(c) converges after a certain number of iterations.
(d) all of the above.

7.6 Suggested Readings

Ljung, L. (1987). System Identification: Theory for the User. Prentice-Hall, Englewood Cliffs, N.J.
Kamen, E.W. and Su, J.K. (1999). Introduction to Optimal Estimation. Springer, London.
Bai, S. (2003). Assessment of Controller Performance with Embedded Dynamic Data Reconciliation. Master's thesis, University of Ottawa, Canada.
Liebman, M.J.; Edgar, T.F. and Lasdon, L.S. (1992). Efficient data reconciliation and estimation for dynamic processes using nonlinear programming techniques. Comp. Chem. Engng, 16, 963-986.
