
1 Chapter 3: Parameter Identification

2 Table of Contents
- One-Parameter Case
- Two Parameters
- Persistence of Excitation and Sufficiently Rich Inputs
- Gradient Algorithms Based on the Linear Model
- Least-Squares Algorithms
- Parameter Identification Based on DPM
- Parameter Identification Based on B-SPM
- Parameter Projection
- Robust Parameter Identification
- Robust Adaptive Laws
- State-Space Identifiers
- Adaptive Observers

3 Introduction
The purpose of this chapter is to present the design, analysis, and simulation of algorithms that can be used for online parameter identification. This involves three steps:
Step 1 (Parametric Model). Express the plant in the form of a parametric model: SPM, DPM, B-SPM, or B-DPM.
Step 2 (Parameter Identification Algorithm). The estimation error ε is used to drive the adaptive law that generates the parameter estimate θ(t) online. The adaptive law is a differential equation of the form θ̇ = H(t)ε, where H(t) is a time-varying gain that depends on measured signals.
Step 3 (Stability and Parameter Convergence). Establish conditions that guarantee that the estimates remain bounded and that θ(t) converges to the unknown parameter vector θ*.

4 Example: One-Parameter Case
Consider a first-order plant model with a single unknown parameter θ*.
Step 1: Parametric Model. The plant is rewritten in the SPM form z = θ*φ, where the signals z and φ are available for measurement.

5 Example: One-Parameter Case
Step 2: Parameter Identification Algorithm. Let θ(t) denote the online estimate of θ* and θ̃ = θ − θ* the parameter error.
Adaptive Law: the simplest way to obtain θ is algebraic: in scalar form, θ(t) = z(t)/φ(t), provided φ(t) ≠ 0. In practice, the effect of noise, especially when φ(t) is close to zero, may lead to erroneous parameter estimates.

6 Example: One-Parameter Case
Step 2: Parameter Identification Algorithm (continued). Another approach is to update θ(t) in a direction that minimizes a certain cost of the estimation error ε = z − θφ. As an example, consider the cost criterion J(θ) = (z − θφ)²/2. Moving along the negative gradient gives the adaptive law θ̇ = −γ∇J(θ) = γεφ, where γ > 0 is a scaling constant or step size which we refer to as the adaptive gain, and ∇J(θ) = −(z − θφ)φ is the gradient of J with respect to θ.
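A minimal simulation sketch of this scalar gradient law, not taken from the slides: the true parameter θ* = 2, the regressor φ(t) = sin t, the adaptive gain γ = 5, and the Euler step are all illustrative assumptions.

```python
# Minimal sketch of the scalar gradient adaptive law  theta_dot = gamma * eps * phi,
# where eps = z - theta*phi and z = theta_star * phi (one-parameter SPM).
# theta_star, phi(t), gamma, and the step size are illustrative assumptions.
import numpy as np

theta_star = 2.0          # unknown "true" parameter (assumed for the demo)
gamma = 5.0               # adaptive gain
dt, T = 1e-3, 10.0        # Euler step and simulation horizon

theta = 0.0               # initial parameter estimate
for t in np.arange(0.0, T, dt):
    phi = np.sin(t)               # regressor (non-vanishing, so theta converges)
    z = theta_star * phi          # measured signal of the SPM  z = theta* * phi
    eps = z - theta * phi         # estimation error
    theta += dt * gamma * eps * phi   # gradient adaptive law (forward Euler)

print(f"estimate after {T:.0f} s: {theta:.4f} (true value {theta_star})")
```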

7 Example: One-Parameter Case
Step 3: Stability and Parameter Convergence. The adaptive law should guarantee that the parameter estimate θ(t) and the speed of adaptation θ̇(t) are bounded, and that the estimation error ε(t) gets smaller and smaller with time. Note that these conditions still do not imply that θ(t) converges to θ* unless certain conditions are imposed on the signal φ(t), referred to as the regressor vector.

8 Example: One-Parameter Case
Step 3: Stability and Parameter Convergence. The analysis can be carried out in two ways:
1. Solving the parameter error differential equation explicitly.
2. Using a Lyapunov function argument.

9 Example: One-Parameter Case
Step 3: Stability and Parameter Convergence. Solving the parameter error equation shows that θ̃(t), and therefore θ(t), is always bounded for any regressor φ; since ε = −θ̃φ, the estimation error ε is bounded whenever φ is bounded.

10 Example: One-Parameter Case
Step 3: Stability and Parameter Convergence. Analysis by Lyapunov: choose V(θ̃) = θ̃²/(2γ); then along the trajectories of the adaptive law V̇ = −φ²θ̃², or equivalently V̇ = −ε² ≤ 0.
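The slide equations did not survive the transcript; the following is a standard reconstruction (not verbatim from the slides) of the two analysis routes for the unnormalized scalar gradient law θ̇ = γεφ with ε = z − θφ and z = θ*φ.

```latex
% Reconstruction of the one-parameter analysis (standard derivation, assumed).
\begin{align*}
\tilde\theta &:= \theta - \theta^*, \qquad
\varepsilon = -\tilde\theta\varphi, \qquad
\dot{\tilde\theta} = -\gamma\,\varphi^2(t)\,\tilde\theta \\
\text{(i) explicit solution:}\quad
\tilde\theta(t) &= e^{-\gamma\int_0^t \varphi^2(\tau)\,d\tau}\,\tilde\theta(0)
\;\Rightarrow\; \tilde\theta \text{ is bounded, and } \tilde\theta(t)\to 0
\text{ iff } \int_0^\infty \varphi^2(\tau)\,d\tau = \infty \\
\text{(ii) Lyapunov:}\quad
V &= \frac{\tilde\theta^{\,2}}{2\gamma}, \qquad
\dot V = \frac{\tilde\theta\,\dot{\tilde\theta}}{\gamma}
       = -\varphi^2\tilde\theta^{\,2} = -\varepsilon^2 \le 0
\end{align*}
```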

11 Example: One-Parameter Case
V̇ ≤ 0 only implies that the equilibrium θ̃ = 0 is uniformly stable (u.s.) and that θ̃(t) is uniformly bounded (u.b.); it does not imply asymptotic stability. So, we need additional properties of the regressor φ to obtain asymptotic stability.

12 Example: One-Parameter Case

13 Example: One-Parameter Case
Adaptive law summary: the gradient law θ̇ = γεφ guarantees (i) that θ, θ̇, and ε are bounded and ε becomes small with time, and (ii) that θ(t) converges to θ* when the regressor φ is persistently exciting (PE).

14 Example: One-Parameter Case
The PE property of φ is guaranteed by choosing the input u appropriately. Appropriate choices include, for example, a nonzero constant, a sinusoid, and any bounded input u that is not vanishing with time.

15 Example: One-Parameter Case: Summary

16 Example: Two-Parameter Case
Consider a first-order plant model with two unknown parameters (e.g., ẏ = −a y + b u with a and b unknown).
Step 1: Parametric Model. Rewrite the plant in the SPM form z = θ*ᵀφ, where θ* contains the two unknown parameters and φ is a measurable regressor vector.
Step 2: Parameter Identification Algorithm.
Estimation Model: ẑ = θᵀφ.
Estimation Error: ε = (z − ẑ)/mₛ².
A straightforward choice for the normalizing signal is mₛ² = 1 + φᵀφ, such that φ/mₛ is bounded.

17 Example: Two-Parameter Case
Adaptive Law: use the gradient method to minimize the cost J(θ) = (z − θᵀφ)²/(2mₛ²), which yields θ̇ = Γεφ, where Γ = Γᵀ > 0 is the adaptive gain matrix.
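A sketch of the normalized gradient identifier for this case, under illustrative assumptions: the plant ẏ = −a y + b u with a = 2, b = 1.5, the filter 1/(s+λ) with λ = 1 used to build the SPM, a single sinusoidal input, and the gain matrix are all chosen for the demo, not taken from the slides.

```python
# Sketch of the normalized gradient identifier for y_dot = -a*y + b*u with unknown a, b.
# SPM used here: z = theta*^T phi with theta* = [b, a]^T, z = (s/(s+lam)) y = y - lam*y_f,
# phi = [u_f, -y_f], u_f = u/(s+lam), y_f = y/(s+lam).  All numbers are illustrative.
import numpy as np

a_true, b_true = 2.0, 1.5      # unknown plant parameters (assumed)
lam = 1.0                      # pole of the stable filter 1/(s + lam)
Gamma = np.diag([10.0, 10.0])  # adaptive gain matrix (Gamma = Gamma^T > 0)
dt, T = 1e-3, 50.0

y = y_f = u_f = 0.0            # plant state and filter states
theta = np.zeros(2)            # estimate of theta* = [b, a]^T

for t in np.arange(0.0, T, dt):
    u = np.sin(2.0 * t)                        # sufficiently rich input for 2 parameters
    y   += dt * (-a_true * y + b_true * u)     # plant (forward Euler)
    u_f += dt * (-lam * u_f + u)               # u_f = u/(s+lam)
    y_f += dt * (-lam * y_f + y)               # y_f = y/(s+lam)
    z   = y - lam * y_f                        # z = (s/(s+lam)) y
    phi = np.array([u_f, -y_f])
    m2  = 1.0 + phi @ phi                      # normalizing signal m_s^2
    eps = (z - theta @ phi) / m2               # normalized estimation error
    theta += dt * (Gamma @ (eps * phi))        # gradient adaptive law

print("theta =", np.round(theta, 3), " (true [b, a] =", [b_true, a_true], ")")
```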

18 Example: Two-Parameter Case
Step 3: Stability and Parameter Convergence. The parameter error satisfies a linear time-varying differential equation, and stability of the equilibrium θ̃ = 0 will very much depend on the properties of the time-varying matrix φφᵀ/mₛ², which in turn depend on the properties of the regressor φ.

19 Example: Two-Parameter Case
For simplicity, let us assume that the plant is stable, i.e., a > 0. If we choose a constant input u, then at steady state the regressor φ is constant and the matrix φφᵀ is singular, so the parameter error equation is only marginally stable; θ̃(t) is bounded but does not necessarily converge to 0. In other words, a constant input does not guarantee exponential stability in the two-parameter case.

20 Persistence of Excitation and Sufficiently Rich Inputs
Definition (PE): a vector signal φ(t) is persistently exciting with level α₀ > 0 if there exist constants α₀, T₀ > 0 such that (1/T₀)∫ₜ^(t+T₀) φ(τ)φᵀ(τ) dτ ≥ α₀I for all t ≥ 0. Since φφᵀ is always positive semi-definite, the PE condition requires that its integral over any interval of time of length T₀ be a positive definite matrix.
Definition (sufficiently rich): a signal u is called sufficiently rich of order n if it contains at least n/2 distinct nonzero frequencies.

21 Persistence of Excitation and Sufficiently Rich Inputs
Let us consider the signal vector φ generated as φ = H(s)u, where u is a scalar input and H(s) is a vector whose elements are strictly proper transfer functions with stable poles.
Theorem: if the vectors H(jω₁), ..., H(jωₙ) are linearly independent for all distinct frequencies ω₁, ..., ωₙ, then φ is PE if and only if u is sufficiently rich of order n.

22 Persistence of Excitation and Sufficiently Rich Inputs
Example: in the two-parameter example we had φ = H(s)u, where the two elements of H(s) are the stable filters mapping u to the two components of the regressor. In this case n = 2 and the matrix [H(jω), H(−jω)] is nonsingular for ω ≠ 0. Therefore, for u = sin(ωt) with ω ≠ 0, φ is PE.

23 Persistence of Excitation and Sufficiently Rich Inputs
Example: for an n-dimensional regressor, the input u must be sufficiently rich of order n. Possible choices of u therefore include a sum of at least n/2 sinusoids with distinct frequencies.
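The PE definition on slide 20 can be checked numerically by integrating φφᵀ over sliding windows and inspecting the smallest eigenvalue. The regressor below (driven by one sinusoid), the window length, and the other constants are illustrative assumptions.

```python
# Numerical check of the PE condition: (1/T0) * integral of phi*phi^T over windows of
# length T0 should have a smallest eigenvalue bounded away from zero (the PE level alpha_0).
import numpy as np

dt, T, T0 = 1e-3, 40.0, 5.0
t = np.arange(0.0, T, dt)
phi = np.vstack([np.sin(2.0 * t), np.cos(2.0 * t)])   # a 2-dim regressor that is PE (assumed)

win = int(T0 / dt)
min_eigs = []
for k in range(0, phi.shape[1] - win, win):
    block = phi[:, k:k + win]
    M = (block @ block.T) * dt / T0                   # windowed average of phi*phi^T
    min_eigs.append(np.linalg.eigvalsh(M).min())

print("smallest windowed eigenvalue:", round(min(min_eigs), 4))   # > 0  =>  phi is PE
```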

24 Example: Vector Case
Consider the SISO plant model y = (Z(s)/R(s))u, where Z(s) and R(s) are polynomials with unknown coefficients (deg R = n, deg Z = m < n).
Parametric Model: the plant can be written as z = θ*ᵀφ, where θ* contains the coefficients of Z(s) and R(s).

25 Example: Vector Case
Filtering both sides with 1/Λ(s), where Λ(s) is a monic Hurwitz polynomial, yields measurable signals z and φ and the SPM z = θ*ᵀφ.

26 Example: Vector Case
If Z(s) is Hurwitz, a bilinear model (B-SPM) can be obtained as follows: consider two polynomials which, together with Z(s) and R(s), satisfy a Diophantine equation whose right-hand side involves a monic Hurwitz polynomial of order 2n − m − 1.

27 Example: Vector Case
Filtering the resulting equation with a stable filter yields the B-SPM model.

28 Example: Vector Case
Note that in this case θ* contains not the coefficients of the plant transfer function but the coefficients of the polynomials in the Diophantine equation. In certain adaptive control systems such as MRAC, these coefficients are the controller parameters, and the above parameterization allows the direct estimation of the controller parameters.

29 Example: Vector Case
If some of the coefficients of the plant transfer function are known, then the dimension of the vector θ* can be reduced accordingly: the known terms are moved into the measured signal z, and only the unknown coefficients remain in θ*.

30 Gradient Algorithms Based on the Linear Model
Different choices of the cost function lead to different algorithms. As before, the normalized estimation error is ε = (z − θᵀφ)/mₛ².
Instantaneous Cost Function: J(θ) = (z − θᵀφ)²/(2mₛ²), which leads to the gradient algorithm θ̇ = Γεφ, where Γ = Γᵀ > 0 is referred to as the adaptive gain.

31 Gradient Algorithms Based on the Linear Model
Theorem (instantaneous cost): the gradient algorithm θ̇ = Γεφ guarantees that ε, εmₛ, θ, and θ̇ are bounded and that ε, εmₛ, and θ̇ are square-integrable; if, in addition, φ/mₛ is PE, then θ(t) converges to θ* exponentially fast.

32 Gradient Algorithms Based on the Linear Model
Integral Cost Function: J(θ, t) = (1/2)∫₀ᵗ e^(−β(t−τ)) [z(τ) − θᵀ(t)φ(τ)]²/mₛ²(τ) dτ, where β > 0 is a design constant acting as a forgetting factor.

33 Gradient Algorithms Based on the Linear Model
Applying the gradient method to the integral cost leads to the adaptive law θ̇ = −Γ(Rθ + Q), where Ṙ = −βR + φφᵀ/mₛ² and Q̇ = −βQ − zφ/mₛ², with R(0) = 0 and Q(0) = 0.

34 Gradient Algorithms Based on the Linear Model
Theorem (integral cost): the adaptive law based on the integral cost guarantees the same boundedness and square-integrability properties as the instantaneous-cost algorithm, and if φ/mₛ is PE, then θ(t) converges to θ* exponentially fast.
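A sketch of the integral-cost gradient law as stated above; the true parameters, the regressor, β, and Γ below are illustrative assumptions, not values from the slides.

```python
# Sketch of the gradient algorithm with integral cost for the SPM z = theta*^T phi:
#   theta_dot = -Gamma (R theta + Q),  R_dot = -beta R + phi phi^T/m^2,  Q_dot = -beta Q - z phi/m^2.
import numpy as np

theta_star = np.array([1.5, -0.7])   # unknown parameters (assumed)
Gamma, beta = 5.0 * np.eye(2), 1.0
dt, T = 1e-3, 30.0

theta = np.zeros(2)
R = np.zeros((2, 2))
Q = np.zeros(2)

for t in np.arange(0.0, T, dt):
    phi = np.array([np.sin(t), np.cos(2.0 * t)])   # PE regressor (assumed)
    z = theta_star @ phi
    m2 = 1.0 + phi @ phi
    R += dt * (-beta * R + np.outer(phi, phi) / m2)
    Q += dt * (-beta * Q - z * phi / m2)
    theta += dt * (-Gamma @ (R @ theta + Q))        # integral-cost adaptive law

print("theta =", np.round(theta, 3), " true =", theta_star)
```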

35 Least-Squares Algorithms
LS problem: minimize a cost that penalizes the accumulated squared estimation error over the available data. Let us now extend this classical problem to the online estimation of θ* in the SPM. We present different versions of the LS algorithm, which correspond to different choices of the LS cost function.

36 Least-Squares Algorithms
Recursive LS Algorithm with Forgetting Factor: the cost is J(θ) = (1/2)∫₀ᵗ e^(−β(t−τ)) [z(τ) − θᵀ(t)φ(τ)]²/mₛ²(τ) dτ + (1/2) e^(−βt) (θ − θ₀)ᵀQ₀(θ − θ₀), where Q₀ = Q₀ᵀ > 0 and β ≥ 0 are design constants and θ₀ is the initial parameter estimate.

37 Least-Squares Algorithms
Recursive LS Algorithm with Forgetting Factor: setting ∇J(θ) = 0 gives the non-recursive LS algorithm, in which θ(t) is expressed in terms of the matrix P(t) = [e^(−βt)Q₀ + ∫₀ᵗ e^(−β(t−τ)) φ(τ)φᵀ(τ)/mₛ²(τ) dτ]⁻¹, called the covariance matrix.

38 Least-Squares Algorithms
Using the identity d/dt(PP⁻¹) = ṖP⁻¹ + P d/dt(P⁻¹) = 0, the non-recursive expression can be converted into the recursive LS algorithm with forgetting factor: θ̇ = Pεφ, Ṗ = βP − P(φφᵀ/mₛ²)P, with P(0) = Q₀⁻¹.
Theorem: the recursive LS algorithm with forgetting factor guarantees bounded estimates, and if φ/mₛ is PE, then θ(t) converges to θ* exponentially fast.
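A simulation sketch of the recursive LS law just stated; the true parameters, regressor, forgetting factor β, and P(0) are illustrative assumptions.

```python
# Sketch of the recursive LS algorithm with forgetting factor for the SPM z = theta*^T phi:
#   theta_dot = P*eps*phi,   P_dot = beta*P - P (phi phi^T/m^2) P,   eps = (z - theta^T phi)/m^2.
import numpy as np

theta_star = np.array([1.5, -0.7])   # unknown parameters (assumed)
beta = 0.5                           # forgetting factor
dt, T = 1e-3, 30.0

theta = np.zeros(2)
P = 100.0 * np.eye(2)                # initial covariance P(0) > 0

for t in np.arange(0.0, T, dt):
    phi = np.array([np.sin(t), np.cos(2.0 * t)])    # PE regressor (assumed)
    z = theta_star @ phi
    m2 = 1.0 + phi @ phi
    eps = (z - theta @ phi) / m2
    theta += dt * (P @ phi) * eps                   # parameter update
    P += dt * (beta * P - P @ np.outer(phi, phi) @ P / m2)   # covariance update

print("theta =", np.round(theta, 3), " true =", theta_star)
```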

39 Least-Squares Algorithms
Pure LS Algorithm: when β = 0, the above algorithm reduces to θ̇ = Pεφ, Ṗ = −P(φφᵀ/mₛ²)P, which is referred to as the pure LS algorithm.
Theorem: the pure LS algorithm guarantees bounded estimates that converge to constant values.

40 Least-Squares Algorithms
Pure LS Algorithm: the pure LS algorithm guarantees that θ(t) converges to a constant vector without any restriction on the regressor. If, however, φ/mₛ is PE, then θ(t) converges to θ*. Convergence of the estimated parameters to constant values is a unique property of the pure LS algorithm.

41 Least-Squares Algorithms
Pure LS Algorithm: one of the drawbacks of the pure LS algorithm is that the covariance matrix P may become arbitrarily small and slow down adaptation in some directions. This is due to the fact that Ṗ = −P(φφᵀ/mₛ²)P ≤ 0, so P can only decrease. This is the so-called covariance wind-up problem. Another drawback of the pure LS algorithm is that parameter convergence cannot be guaranteed to be exponential.

42 Least-Squares Algorithms
Modified LS Algorithms: one way to avoid the covariance wind-up problem is to use the covariance resetting modification: whenever the smallest eigenvalue of P(t) drops to a threshold ρ₁, P is reset to ρ₀I, where tᵣ is the time at which λmin(P(tᵣ)) ≤ ρ₁ and ρ₀ > ρ₁ > 0 are some design scalars. Due to covariance resetting, P(t) ≥ ρ₁I for all t.

43 Least-Squares Algorithms
Modified LS Algorithms: therefore, P is guaranteed to be positive definite for all t > 0. In fact, the pure LS algorithm with covariance resetting can be viewed as a gradient algorithm with time-varying adaptive gain P, and its properties are very similar to those of a gradient algorithm. Another modification is the LS algorithm with forgetting factor and bounded covariance, in which the covariance update is switched off whenever ‖P(t)‖ reaches a constant that serves as an upper bound for ‖P‖.

44 Least-Squares Algorithms
Theorem: the modified LS algorithms (covariance resetting, or forgetting factor with bounded covariance) retain the boundedness properties of the gradient algorithms, and if φ/mₛ is PE, then θ(t) converges to θ* exponentially fast.
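A sketch of the pure LS algorithm with covariance resetting; the reset thresholds ρ₀, ρ₁, the true parameters, and the regressor are illustrative assumptions.

```python
# Sketch of the pure LS algorithm (beta = 0) with covariance resetting:
# whenever lambda_min(P) <= rho1, reset P to rho0*I so adaptation never stalls.
import numpy as np

theta_star = np.array([1.5, -0.7])   # unknown parameters (assumed)
rho0, rho1 = 10.0, 0.1               # resetting design scalars (assumed)
dt, T = 1e-3, 30.0

theta = np.zeros(2)
P = rho0 * np.eye(2)

for t in np.arange(0.0, T, dt):
    phi = np.array([np.sin(t), np.cos(2.0 * t)])
    z = theta_star @ phi
    m2 = 1.0 + phi @ phi
    eps = (z - theta @ phi) / m2
    theta += dt * (P @ phi) * eps
    P += dt * (-P @ np.outer(phi, phi) @ P / m2)     # pure LS covariance update
    if np.linalg.eigvalsh(P).min() <= rho1:          # covariance resetting
        P = rho0 * np.eye(2)

print("theta =", np.round(theta, 3), " true =", theta_star)
```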

45 Parameter Identification Based on DPM
Consider the DPM z = W(s)[θ*ᵀψ]. It may be rewritten as z = W(s)L(s)[θ*ᵀφ] with φ = L⁻¹(s)ψ, where L(s) is chosen so that L⁻¹(s) is a proper stable transfer function and W(s)L(s) is a proper strictly positive real (SPR) transfer function. The normalized estimation error is then formed as in the SPM case using a normalizing signal mₛ.

46 Parameter Identification Based on DPM
The estimation error equation admits a minimal state-space representation whose transfer function is W(s)L(s); since W(s)L(s) is SPR, by the Kalman-Yakubovich lemma there exist matrices P = Pᵀ > 0 and associated quantities such that the Lyapunov-like conditions of the lemma hold.

47 Parameter Identification Based on DPM
Theorem: the resulting adaptive law is referred to as the adaptive law based on the SPR-Lyapunov synthesis approach. It has the same form as the gradient algorithm and guarantees analogous boundedness and convergence properties.

48 Parameter Identification Based on B-SPM
Consider the B-SPM, in which the unknowns are the vector θ* and the scalar ρ*. The estimation error is generated as in the SPM case with estimates θ and ρ, both of which are driven by gradient adaptive laws obtained from a quadratic cost, where Γ = Γᵀ > 0 and γ > 0 are the adaptive gains; all signals involved are available for measurement.

49 Parameter Identification Based on B-SPM
Since ρ* is unknown, this adaptive law cannot be implemented directly. We bypass this problem by employing the equality ρ*Γ = Γ′ sgn(ρ*), where Γ′ = Γ|ρ*|. Since Γ is arbitrary, any Γ′ = Γ′ᵀ > 0 can be selected without having to know |ρ*|; only the sign of ρ* is needed. Therefore, the adaptive laws may be written in an implementable form.

50 Parameter Identification Based on B-SPM. Theorem:

51 Parameter Projection
In many practical problems, we may have some a priori knowledge of where θ* is located. This knowledge usually comes in terms of upper and/or lower bounds for the elements of θ*, or in terms of its location in a convex subset of the parameter space. If such a priori information is available, we want to constrain the online estimate to lie within the set where the unknown parameters are located. For this purpose we modify the gradient algorithms, which are based on the unconstrained minimization of certain costs, using the gradient projection method: minimize the cost subject to θ ∈ S, where S is a convex set with smooth boundary.

52 Parameter Projection
The adaptive laws based on the gradient method can be modified to guarantee that θ(t) ∈ S for all t by solving the constrained optimization problem given above. The resulting law leaves the unconstrained update unchanged whenever θ is in the interior of S, or on the boundary with the update pointing inward; otherwise the update is projected onto the tangent plane of the boundary. Here δ(S) and S⁰ denote the boundary and the interior of S, respectively, and Pr(·) is the projection operator.

53 Parameter Projection
The gradient algorithm based on the instantaneous cost function with projection is obtained by substituting the unconstrained update Γεφ into the projection operator defined above.

54 Parameter Projection
The pure LS algorithm with projection is obtained in the same way, with the projection operator applied to the update Pεφ.

55 Parameter Projection
Theorem: the gradient adaptive laws and the LS adaptive laws with the projection modification retain all the properties established in the absence of projection and, in addition, guarantee that θ(t) ∈ S for all t ≥ 0, provided θ(0) ∈ S and θ* ∈ S.

56 Parameter Projection
Example: consider a first-order plant model where a, b are unknown constants that satisfy the known bounds b ≥ 1 and 20 ≥ a ≥ −2. Writing the plant in SPM form, the gradient adaptive law in the unconstrained case is θ̇ = Γεφ. Now apply the projection method by defining one convex set for each parameter from the given bounds.

57 Parameter Projection
Applying the projection algorithm to each set, we obtain adaptive laws in which each estimate is updated by the unconstrained law whenever it lies strictly inside its bounds, or on a bound with the update pointing inward, and the update is set to zero otherwise.
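A sketch of this element-wise projection for the bounds b ≥ 1 and −2 ≤ a ≤ 20 from the example; the true plant values, the filter, the input, and the gain are illustrative assumptions, and proj_interval is a hypothetical helper implementing the stop-on-boundary rule described above.

```python
# Sketch of the gradient adaptive law with interval projection (b >= 1, -2 <= a <= 20).
import numpy as np

a_true, b_true = 5.0, 2.0          # unknown parameters, inside the known bounds (assumed)
gamma, lam = 10.0, 1.0
dt, T = 1e-3, 40.0

def proj_interval(x, dx, lo, hi):
    """Zero the update when it would push the estimate out of [lo, hi]."""
    if (x <= lo and dx < 0.0) or (x >= hi and dx > 0.0):
        return 0.0
    return dx

# first-order plant y_dot = -a*y + b*u rewritten as SPM (same filtering as the two-parameter example)
y = y_f = u_f = 0.0
b_hat, a_hat = 1.0, 0.0            # initial estimates chosen inside the bounds

for t in np.arange(0.0, T, dt):
    u = np.sin(2.0 * t)
    y   += dt * (-a_true * y + b_true * u)
    u_f += dt * (-lam * u_f + u)
    y_f += dt * (-lam * y_f + y)
    z, phi = y - lam * y_f, np.array([u_f, -y_f])
    m2 = 1.0 + phi @ phi
    eps = (z - np.array([b_hat, a_hat]) @ phi) / m2
    db, da = gamma * eps * phi[0], gamma * eps * phi[1]
    b_hat += dt * proj_interval(b_hat, db, 1.0, np.inf)    # enforce b >= 1
    a_hat += dt * proj_interval(a_hat, da, -2.0, 20.0)     # enforce -2 <= a <= 20

print("b_hat, a_hat =", round(b_hat, 3), round(a_hat, 3), " (true:", b_true, a_true, ")")
```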

58 Parameter Projection
Example: let us consider the gradient adaptive law for the SPM with the a priori knowledge that ‖θ*‖ ≤ M for some known bound M. In most applications, we have such a priori information. We define the convex set S = {θ : θᵀθ ≤ M²} and use the projection method with this S to obtain the adaptive law: the unconstrained update is used inside the ball, and on the boundary ‖θ‖ = M the outward component of the update is removed.

59 Robust Parameter Identification
In the previous sections we designed and analyzed a wide class of PI algorithms based on parametric models that are assumed to be free of disturbances, noise, unmodeled dynamics, time delays, and other frequently encountered uncertainties. In the presence of plant uncertainties we are no longer able to express the unknown parameter vector θ* in the form of the SPM or DPM, where all signals are measured and θ* is the only unknown term. In this case, the SPM or DPM takes the form z = θ*ᵀφ + η, where η is an unknown function that represents the modeling error terms. The following examples show how this form arises for different plant uncertainties.

60 Robust Parameter Identification
Example: consider a system with a small input delay. The actual plant includes the delay term, whereas the nominal plant used for the parametric model neglects it; the difference between the two appears as the modeling error term η in the SPM.

61 Robust Parameter Identification
Instability Example: consider the scalar constant-gain system y = θ*u + d, where d is a bounded unknown disturbance and θ* is the unknown constant to be estimated. The adaptive law for estimating θ*, derived for d = 0, is θ̇ = γεu with ε = y − θu, where the normalizing signal is taken to be 1. The corresponding parameter error equation for d = 0 is θ̃̇ = −γu²θ̃. Now consider d ≠ 0; we have θ̃̇ = −γu²θ̃ + γdu.

62 Robust Parameter Identification
Instability Example: in this case we cannot guarantee that the parameter estimate θ(t) is bounded for every bounded input u and disturbance d. For example, for an input u that vanishes fast enough and a disturbance d that vanishes more slowly, the estimated parameter drifts to infinity even though the disturbance disappears with time. This instability phenomenon is known as parameter drift. It is mainly due to the pure integral action of the adaptive law, which, in addition to integrating the "good" signals, integrates the disturbance term as well, leading to the parameter drift phenomenon.
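A simulation sketch of this parameter-drift mechanism; the particular vanishing signals below are illustrative assumptions chosen so that the integral of u² stays finite while the integral of d·u diverges (the slides' exact signals were not preserved in the transcript).

```python
# Sketch of the parameter-drift instability example: plant y = theta*u + d, adaptive law
# theta_dot = gamma*(y - theta*u)*u designed for d = 0.  The signals are illustrative.
import numpy as np

theta_star, gamma = 2.0, 1.0
dt, T = 1e-2, 1e4
theta = 0.0

for t in np.arange(0.0, T, dt):
    u = (1.0 + t) ** (-0.75)        # integral of u^2 is finite -> no lasting stabilizing action
    d = (1.0 + t) ** (-0.125)       # disturbance that disappears with time, but slowly
    y = theta_star * u + d
    theta += dt * gamma * (y - theta * u) * u   # pure-integral adaptive law

print("theta after t = 1e4 s:", round(theta, 2), " (true value is", theta_star, ")")
# The estimate keeps growing as the horizon is extended: parameter drift.
```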

63 Robust Adaptive Laws
Consider the general plant y = G₀(s)u plus perturbation terms, where G₀(s) is the dominant part, the multiplicative and/or additive perturbations are strictly proper with stable poles, and d is a bounded disturbance. The plant can then be written as an SPM with a modeling error term, z = θ*ᵀφ + η.

64 Robust Adaptive Laws
For robustness, we need to use the following modifications:
- Design the normalizing signal to bound the modeling error in addition to bounding the regressor vector.
- Modify the "pure" integral action of the adaptive laws to prevent parameter drift.

65 Robust Adaptive Laws
Dynamic Normalization: assume that the modeling-error transfer functions are analytic in a known right-half-plane region, i.e., for some known stability margin δ₀ > 0. Then a dynamic normalizing signal mₛ can be generated from u and y through a stable first-order filter so that both φ/mₛ and η/mₛ are bounded.

66 Robust Adaptive Laws
σ-Modification: a class of robust modifications involves the use of a small feedback around the "pure" integrator in the adaptive law, leading to the adaptive law structure θ̇ = Γεφ − Γw(t)θ, where w(t) ≥ 0 is a small design signal and Γ is the adaptive gain, which in the case of LS is equal to the covariance matrix P. The above modification is referred to as the σ-modification or as leakage. Different choices of w(t) lead to different robust adaptive laws with different properties.

67 Robust Adaptive Laws
Fixed σ-Modification: choose w(t) = σ, where σ is a small positive design constant. The gradient adaptive law then takes the form θ̇ = Γεφ − σΓθ. If some a priori estimate θ₀ of θ* is available, then the leakage term σθ may be replaced with σ(θ − θ₀), so that the leakage becomes larger for larger deviations of θ from θ₀ rather than from zero.
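A sketch of the fixed σ-modification applied to the drift example above; σ, γ, and the signals are illustrative assumptions. With the same vanishing input and disturbance, the leakage keeps θ bounded, and it also illustrates the drawback noted on the next slide, since θ is pulled toward zero once the regressor vanishes.

```python
# Sketch of the gradient adaptive law with fixed sigma-modification (leakage):
#   theta_dot = gamma*eps*u - sigma*gamma*theta.
import numpy as np

theta_star, gamma, sigma = 2.0, 1.0, 0.05
dt, T = 1e-2, 1e4
theta = 0.0

for t in np.arange(0.0, T, dt):
    u = (1.0 + t) ** (-0.75)                   # same vanishing input as the drift example
    d = (1.0 + t) ** (-0.125)                  # same slowly vanishing disturbance
    y = theta_star * u + d
    eps = y - theta * u
    theta += dt * (gamma * eps * u - sigma * gamma * theta)   # leakage prevents drift

print("theta with sigma-modification:", round(theta, 3))      # stays bounded (no drift)
```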

68 Robust Adaptive Laws: Fixed σ-Modification. Theorem:

69 Robust Adaptive Laws
Fixed σ-Modification.
Main drawback: if the modeling error is removed, i.e., η = 0, the fixed σ-modification does not recover the ideal properties of the adaptive law, since it introduces a disturbance of the order of the design constant σ.
Advantage: no assumption about bounds or the location of the unknown θ* is made.

70 Robust Adaptive Laws
Switching σ-Modification: the leakage term is switched off when the estimate lies inside a ball known to contain θ* and switched on, growing to a constant value, when the estimate exceeds that bound. This removes the bias of the fixed σ-modification whenever a bound on the size of θ* is known.

71 Robust Adaptive Laws
Switching σ-Modification. Theorem:

72 Robust Adaptive Laws
Switching σ-Modification. Theorem:

73 Robust Adaptive Laws
ε-Modification: another class of leakage modification makes the leakage depend on the estimation error ε, i.e., w(t) = |εmₛ|ν₀, where ν₀ > 0 is a design constant. This modification is referred to as the ε-modification and has properties similar to those of the fixed σ-modification, in the sense that it cannot guarantee the ideal properties of the adaptive law in the absence of modeling errors.

74 Robust Adaptive Laws
Parameter Projection: for the parametric model with modeling error, in order to avoid parameter drift we constrain θ(t) to lie inside a bounded convex set that contains θ*. As an example, consider the set S = {θ : θᵀθ ≤ M₀²}, where M₀ is chosen so that ‖θ*‖ ≤ M₀. Following the earlier discussion of projection, we obtain the same projected adaptive law, now driven by the robust (normalized) estimation error.

75 Robust Adaptive Laws: Parameter Projection. Theorem:

76 Robust Adaptive Laws: Parameter Projection

77 Robust Adaptive Laws
Parameter Projection: the parameter projection has properties identical to those of the switching σ-modification, as both modifications aim at keeping ‖θ(t)‖ ≤ M₀. In the case of the switching σ-modification, ‖θ(t)‖ may exceed M₀ but remains bounded, whereas in the case of projection ‖θ(t)‖ ≤ M₀ for all t, provided θ(0) ∈ S.

78 Robust Adaptive Laws
Dead Zone: the principal idea behind the dead zone is to monitor the size of the estimation error and adapt only when the estimation error is large relative to the modeling error. Adaptation is stopped when |εmₛ| ≤ g₀, where g₀ is a known upper bound of the normalized modeling error. In other words, we move in the direction of steepest descent only when the estimation error is large relative to the modeling error, i.e., when |εmₛ| > g₀.

79 Robust Adaptive Laws
Dead Zone: to remove the discontinuity of the adaptive law at the dead-zone boundary, the dead zone function is made continuous: it returns zero inside the dead zone and the error shifted by the threshold outside it.

80 Robust Adaptive Laws: Dead Zone (normalized dead zone function)
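A sketch of the continuous dead zone on the scalar constant-gain example, kept unnormalized (mₛ = 1) for simplicity; θ*, the disturbance, g₀, and the gain are illustrative assumptions, and dead_zone is a hypothetical helper implementing the continuous dead-zone function described above.

```python
# Sketch of the gradient adaptive law with a continuous dead zone:
# adapt only when |eps| exceeds a known bound g0 on the disturbance.
import numpy as np

theta_star, gamma, g0 = 2.0, 2.0, 0.15
dt, T = 1e-2, 2000.0
theta = 0.0

def dead_zone(e, g):
    """Continuous dead zone: zero inside |e| <= g, error shifted toward zero outside."""
    if e > g:
        return e - g
    if e < -g:
        return e + g
    return 0.0

for t in np.arange(0.0, T, dt):
    u = np.sin(0.5 * t)                     # non-vanishing input
    d = 0.1 * np.sin(3.0 * t)               # bounded disturbance with |d| <= g0
    y = theta_star * u + d
    eps = y - theta * u                     # estimation error
    theta += dt * gamma * dead_zone(eps, g0) * u

print("theta with dead zone:", round(theta, 3), " (true value", theta_star, ")")
# theta settles to a constant in a neighborhood of theta_star instead of drifting.
```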

81 Robust Adaptive Laws: Dead Zone. Theorem:

82 Robust Adaptive Laws
Dead Zone:
- The dead zone modification guarantees that the estimated parameters always converge to a constant.
- As in the case of the fixed σ-modification, the ideal properties of the adaptive law are destroyed in an effort to achieve robustness.
The robust modifications that include leakage, projection, and dead zone are analyzed for the case of the gradient algorithm for the SPM with modeling error. The same modifications can be used in the case of LS and of the DPM, B-SPM, and B-DPM with modeling errors.

83 State-Space Identifiers
Consider the state-space plant model (SSPM) ẋ = Ax + Bu, where the state x and the input u are measured and A, B are unknown. A straightforward choice for the estimation model is x̂̇ = Aₘ(x̂ − x) + Âx + B̂u, where Aₘ is a stable design matrix and Â, B̂ are the online estimates of A, B. The above estimation model has been referred to as the series-parallel model in the literature. The estimation error vector is defined as ε = x − x̂.

84 State-Space Identifiers
The estimation error dynamics are ε̇ = Aₘε − Ãx − B̃u, where Ã = Â − A and B̃ = B̂ − B are the parameter errors. Adaptive laws: Â̇ = γ₁εxᵀ, B̂̇ = γ₂εuᵀ, where γ₁, γ₂ > 0 are constant scalar adaptive gains.
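A simulation sketch of this series-parallel identifier; the true A, B, the choice Aₘ = −aₘI, the input, and the gains are illustrative assumptions, not values from the slides.

```python
# Sketch of the series-parallel state-space identifier for x_dot = A x + B u (x measured):
#   xhat_dot = Am*(xhat - x) + Ahat*x + Bhat*u,   eps = x - xhat,
#   Ahat_dot = g1*eps*x^T,   Bhat_dot = g2*eps*u^T,   with Am = -am*I stable.
import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -3.0]])     # unknown "true" plant matrices (assumed)
B = np.array([[0.0], [1.0]])
am, g1, g2 = 2.0, 20.0, 20.0
dt, T = 1e-3, 60.0

x = np.zeros((2, 1))
xhat = np.zeros((2, 1))
Ahat = np.zeros((2, 2))
Bhat = np.zeros((2, 1))
Am = -am * np.eye(2)

for t in np.arange(0.0, T, dt):
    u = np.array([[np.sin(t) + np.sin(2.5 * t)]])        # sufficiently rich input (assumed)
    x    += dt * (A @ x + B @ u)                          # plant
    xhat += dt * (Am @ (xhat - x) + Ahat @ x + Bhat @ u)  # series-parallel estimation model
    eps = x - xhat                                        # state estimation error
    Ahat += dt * g1 * (eps @ x.T)                         # adaptive laws
    Bhat += dt * g2 * (eps @ u.T)

print("Ahat =\n", np.round(Ahat, 2), "\nBhat =\n", np.round(Bhat, 2))
```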

85 State-Space Identifiers. Theorem:

86 Adaptive Observers
Consider the LTI SISO plant ẋ = Ax + Bu, y = Cᵀx. Assume that u is a piecewise continuous, bounded function of time and that A is a stable matrix. In addition, we assume that the plant is completely controllable and completely observable. The problem is to construct a scheme that estimates both the plant parameters, i.e., A, B, C, and the state vector x using only I/O measurements. We refer to such a scheme as an adaptive observer.

87 Adaptive Observers
A good starting point for designing an adaptive observer is the Luenberger observer used in the case where A, B, C are known. The Luenberger observer is of the form x̂̇ = Ax̂ + Bu + K(y − Cᵀx̂), where K is chosen so that A − KCᵀ is a stable matrix, which guarantees that x̂(t) → x(t) exponentially fast for any initial condition and any input u. The existence of such a K is guaranteed by the observability of (A, C).
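A sketch of this (non-adaptive) Luenberger observer baseline; the plant matrices, the observer pole locations, and the use of SciPy's pole-placement routine are illustrative assumptions.

```python
# Sketch of a Luenberger observer, the starting point for the adaptive observer:
#   xhat_dot = A xhat + B u + K (y - C xhat),  with K chosen so that A - K C is stable.
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = place_poles(A.T, C.T, [-5.0, -6.0]).gain_matrix.T   # observer gain via duality

dt, T = 1e-3, 10.0
x = np.array([[1.0], [-1.0]])       # unknown initial plant state
xhat = np.zeros((2, 1))

for t in np.arange(0.0, T, dt):
    u = np.array([[np.sin(t)]])
    y = C @ x                                            # measured output
    x    += dt * (A @ x + B @ u)                         # plant
    xhat += dt * (A @ xhat + B @ u + K @ (y - C @ xhat)) # Luenberger observer

print("state error after 10 s:", np.round((x - xhat).ravel(), 5))   # decays exponentially
```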

88 Adaptive Observers
A straightforward procedure for choosing the structure of the adaptive observer is to use the same equation as the Luenberger observer but replace the unknown parameters A, B, C with their estimates, generated by some adaptive law. The problem we face with this procedure is the inability to estimate uniquely the n² + 2n parameters of A, B, C from the I/O data. The best we can do in this case is to estimate the 2n parameters of the plant transfer function and use them to calculate A, B, C. These calculations, however, are not always possible, because the mapping of the 2n estimated parameters of the transfer function to the n² + 2n parameters of A, B, C is not unique unless (A, B, C) satisfies certain structural constraints. One such constraint is that (A, B, C) is in the observer form.

89 Adaptive Observers
In the observer form, the unknown plant parameters appear linearly in the state-space representation, so we can use the parameter identification techniques presented in the previous sections to estimate them.

90 Adaptive Observers
The disadvantage is that in a practical situation x may represent physical variables of interest, whereas the state of the observer form may be an artificial state vector. The adaptive observer is nevertheless motivated from the Luenberger observer structure: it has the same form, with A, B, C replaced by their online estimates.

91 Adaptive Observers
Here the design matrix is a stable matrix that contains the desired eigenvalues of the observer. A wide class of adaptive laws may be used to generate the parameter estimates online. As in the previous sections, one develops a parametric model for the unknown plant parameters and applies any of the adaptive laws of this chapter.

92 THE END

