
Chapter 3: Parameter Identification

Table of Contents
- One-Parameter Case
- Two Parameters
- Persistence of Excitation and Sufficiently Rich Inputs
- Gradient Algorithms Based on the Linear Model
- Least-Squares Algorithms
- Parameter Identification Based on DPM
- Parameter Identification Based on B-SPM
- Parameter Projection
- Robust Parameter Identification
- Robust Adaptive Laws
- State-Space Identifiers
- Adaptive Observers

Introduction

The purpose of this chapter is to present the design, analysis, and simulation of algorithms that can be used for online parameter identification. This involves three steps:

Step 1 (Parametric Model). Express the plant in the form of a parametric model: SPM, DPM, B-SPM, or B-DPM.

Step 2 (Parameter Identification Algorithm). The estimation error ε is used to drive the adaptive law that generates the parameter estimate θ(t) online. The adaptive law is a differential equation whose right-hand side is the estimation error multiplied by a time-varying gain vector that depends on measured signals.

Step 3 (Stability and Parameter Convergence). Establish conditions that guarantee that the estimates remain bounded and converge to the unknown parameters.

Example: One-Parameter Case

Consider a first-order plant model with a single unknown parameter.

Step 1: Parametric Model. The plant is expressed in the scalar SPM form z = θ*φ, where z and φ are signals available for measurement and θ* is the unknown parameter.

Example: One-Parameter Case

Step 2: Parameter Identification Algorithm. The parameter error is the difference between the estimate θ(t) and the unknown value θ*. The simplest adaptive law solves the scalar parametric model directly, θ(t) = z(t)/φ(t), provided φ(t) ≠ 0. In practice the effect of noise, especially when φ is close to zero, may lead to erroneous parameter estimates.

Example: One-Parameter Case

Step 2: Parameter Identification Algorithm. Another approach is to update θ(t) in a direction that minimizes a certain cost of the estimation error. As an example, consider the cost criterion J(θ) = (z − θφ)²/2 = ε²/2, where ε = z − θφ is the estimation error. The gradient method gives the adaptive law dθ/dt = −γ∇J(θ) = γεφ, where γ > 0 is a scaling constant or step size which we refer to as the adaptive gain and ∇J(θ) = −εφ is the gradient of J with respect to θ.

Example: One-Parameter Case

Step 3: Stability and Parameter Convergence. The adaptive law should guarantee that the parameter estimate and the speed of adaptation are bounded and that the estimation error becomes smaller with time. Note that these conditions still do not imply that θ(t) converges to θ* unless additional conditions are imposed on the signal φ, referred to as the regressor.

Example: One-Parameter Case

Step 3: Stability and Parameter Convergence. Two analysis routes are used: (1) solving the parameter-error differential equation explicitly, and (2) a Lyapunov argument based on a quadratic function of the parameter error.

Example: One-Parameter Case

Step 3: Stability and Parameter Convergence. From the explicit solution, the parameter error is always bounded for any regressor φ, and the estimation error is bounded as well.

Example: One-Parameter Case

Step 3: Stability and Parameter Convergence. Analysis by Lyapunov: choosing a quadratic Lyapunov function of the parameter error, its derivative along the adaptive law is negative semidefinite, which again establishes boundedness.

Example: One-Parameter Case

The Lyapunov analysis shows that the equilibrium of the parameter-error equation is uniformly stable (u.s.) and the solution is uniformly bounded (u.b.), but it does not establish asymptotic stability. So, we need additional properties of the regressor to obtain asymptotic stability.

Example: One-Parameter Case

Example: One-Parameter Case

Adaptive law summary: the gradient law guarantees (i) bounded parameter estimates and estimation error and (ii) an estimation error that converges to zero when the regressor and its derivative are bounded.

Example: One-Parameter Case

The PE property of the regressor φ is guaranteed by choosing the input u appropriately. Appropriate choices include constant or sinusoidal inputs and, more generally, any bounded input u that does not vanish with time.

Example: One-Parameter Case (Summary)
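As a concrete illustration of the scalar gradient law dθ/dt = γεφ for the SPM z = θ*φ, the short simulation below integrates it with forward Euler. The value of θ*, the gain γ, the input signal, and the step size are illustrative assumptions, not values taken from the slides.

```python
import numpy as np

# Minimal sketch of the scalar gradient adaptive law  d(theta)/dt = gamma * eps * phi
# for the SPM z = theta_star * phi.  All numerical values are illustrative assumptions.
theta_star = 2.0           # unknown "true" parameter (assumed for the simulation)
gamma = 5.0                # adaptive gain
theta = 0.0                # initial estimate
dt, T = 1e-3, 10.0         # forward-Euler step and horizon

t = 0.0
while t < T:
    phi = np.sin(t) + 1.5             # bounded regressor that does not vanish (assumption)
    z = theta_star * phi              # measured signal from the SPM
    eps = z - theta * phi             # estimation error
    theta += dt * gamma * eps * phi   # Euler step of the adaptive law
    t += dt

print(f"final estimate {theta:.4f}, true value {theta_star}")
```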

Example: Two-Parameter Case

Consider the first-order plant model dy/dt = −ay + bu, where a and b are unknown constants.

Step 1: Parametric Model. The plant is expressed in the SPM form z = θ*ᵀφ, where θ* collects the unknown parameters and z, φ are measurable signals.

Step 2: Parameter Identification Algorithm. Estimation model: ẑ = θᵀφ. Estimation error: ε = (z − ẑ)/m_s², where m_s is the normalizing signal; a straightforward choice is m_s² = 1 + φᵀφ, which guarantees that φ/m_s is bounded.

Example: Two-Parameter Case

Adaptive Law: use the gradient method to minimize the cost J(θ) = (z − θᵀφ)²/(2m_s²), which yields dθ/dt = Γεφ, where Γ = Γᵀ > 0 is the adaptive gain matrix.

Example: Two-Parameter Case

Step 3: Stability and Parameter Convergence. Stability of the equilibrium of the parameter-error equation depends very much on the properties of the time-varying matrix φ(t)φᵀ(t)/m_s², which in turn depend on the properties of the regressor φ.

Example: Two-Parameter Case

For simplicity, let us assume that the plant is stable, i.e., a > 0. If we choose a constant input, then at steady state the regressor becomes a constant vector, the equilibrium is only marginally stable, and the parameter error is bounded but does not necessarily converge to 0. In other words, a constant input does not guarantee exponential stability in the two-parameter case.

Persistence of Excitation and Sufficiently Rich Inputs

Definition (persistence of excitation): a vector signal φ is persistently exciting (PE) if there exist constants α₀ > 0 and T₀ > 0 such that the integral of φ(τ)φᵀ(τ) over every interval [t, t + T₀], scaled by 1/T₀, is bounded below by α₀I. Since φφᵀ is always positive semidefinite, the PE condition requires that its integral over any interval of time of length T₀ be a positive definite matrix.

Definition (sufficiently rich input): an input u is sufficiently rich of order n if it contains at least n/2 distinct nonzero frequencies.

Persistence of Excitation and Sufficiently Rich Inputs

Let us consider the signal vector φ generated as φ = H(s)u, where u is the input and H(s) is a vector whose elements are strictly proper transfer functions with stable poles.

Theorem: φ is PE if and only if u is sufficiently rich of order n, provided the vectors H(jω₁), ..., H(jωₙ) are linearly independent for every set of distinct frequencies ω₁, ..., ωₙ.

Persistence of Excitation and Sufficiently Rich Inputs

Example: in the last example (the two-parameter case) we had φ = H(s)u with n = 2, and the matrix formed from the frequency-response vectors is nonsingular. For a single nonzero frequency, e.g., u = sin ω₀t with ω₀ ≠ 0, the input is sufficiently rich of order 2 and φ is PE.

Persistence of Excitation and Sufficiently Rich Inputs

Example: possible choices of a sufficiently rich input u include a nonzero-frequency sinusoid or a sum of sinusoids with distinct frequencies.
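One way to make the PE definition concrete is to evaluate the windowed matrix (1/T₀) times the integral of φφᵀ numerically and check that its smallest eigenvalue stays away from zero. The sketch below does this for a two-dimensional regressor generated by two assumed first-order filters driven by a single sinusoid; the filters, the input frequency, and the window length T₀ are all illustrative choices.

```python
import numpy as np

# Sketch: check the PE condition for a 2-dimensional regressor by computing the
# smallest eigenvalue of (1/T0) * integral over [t, t+T0] of phi * phi^T.
# The filters generating phi, the input frequency, and T0 are illustrative assumptions.
dt, T = 1e-3, 40.0
ts = np.arange(0.0, T, dt)
u = np.sin(ts)                              # a single nonzero frequency

phi = np.zeros((ts.size, 2))
x1, x2 = 0.0, 0.0
for k in range(ts.size):
    x1 += dt * (-2.0 * x1 + u[k])           # x1 = [1/(s+2)] u   (assumed element of H(s))
    x2 += dt * (-1.0 * x2 + u[k])           # x2 = [1/(s+1)] u
    phi[k] = [x1, x2]

T0 = 2.0 * np.pi                            # window length
n0 = int(T0 / dt)
min_eigs = []
for start in range(0, ts.size - n0, n0):
    window = phi[start:start + n0]
    gram = (window.T @ window) * dt / T0    # (1/T0) * integral of phi phi^T
    min_eigs.append(np.linalg.eigvalsh(gram)[0])

# A smallest eigenvalue bounded away from zero over all windows indicates PE.
print("smallest windowed eigenvalue:", min(min_eigs))
```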

Example: Vector Case

Consider the plant model whose unknown coefficients are collected into a parameter vector θ*.

Parametric Model: the plant is expressed in the SPM form z = θ*ᵀφ.

Example: Vector Case

Filtering both sides with 1/Λ(s), where Λ(s) is a monic Hurwitz polynomial of appropriate degree, makes all signals in the parametric model proper and available for measurement.

Example: Vector Case

If the plant numerator polynomial is Hurwitz, a bilinear model (B-SPM) can be obtained as follows: consider the polynomials that satisfy a Diophantine equation whose right-hand side is a monic Hurwitz polynomial of order 2n − m − 1.

Example: Vector Case

Filtering the resulting equation with a stable filter yields the B-SPM model.

Example: Vector Case

Note that in this case θ* contains not the coefficients of the plant transfer function but the coefficients of the Diophantine-equation polynomials. In certain adaptive control systems, such as MRAC, these coefficients are the controller parameters, and the above parameterization allows the direct estimation of the controller parameters.

Example: Vector Case

If some of the coefficients of the plant transfer function are known, then the dimension of the parameter vector θ* can be reduced. For example, if some coefficients are known, their contribution is moved into the measured signal z and only the remaining unknown coefficients are estimated.

Gradient Algorithms Based on the Linear Model

Different choices of the cost function lead to different algorithms. As before, the normalized estimation error is ε = (z − θᵀφ)/m_s².

Instantaneous Cost Function: J(θ) = (z − θᵀφ)²/(2m_s²). The gradient method yields dθ/dt = Γεφ, where Γ = Γᵀ > 0 is referred to as the adaptive gain.

Gradient Algorithms Based on the Linear Model

Instantaneous Cost Function. Theorem: the gradient adaptive law guarantees bounded parameter estimates and estimation error, an estimation error that is square-integrable, and exponential convergence of θ(t) to θ* when φ is PE.
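The following sketch simulates the normalized gradient law dθ/dt = Γεφ with ε = (z − θᵀφ)/m_s² for a two-parameter SPM. The true parameters, the gain matrix, the regressor, and the step size are assumptions chosen only to exercise the algorithm.

```python
import numpy as np

# Sketch of the normalized gradient algorithm for the SPM  z = theta_star^T phi:
#   eps = (z - theta^T phi) / m_s^2,   d(theta)/dt = Gamma * eps * phi.
# The true parameters, regressor, gain matrix, and step size are illustrative assumptions.
theta_star = np.array([1.5, -0.7])
Gamma = np.diag([10.0, 10.0])
theta = np.zeros(2)
dt, T = 1e-3, 20.0

t = 0.0
while t < T:
    phi = np.array([np.sin(t), np.sin(2.0 * t)])   # PE regressor (two distinct frequencies)
    z = theta_star @ phi
    ms2 = 1.0 + phi @ phi                          # normalizing signal m_s^2
    eps = (z - theta @ phi) / ms2                  # normalized estimation error
    theta = theta + dt * (Gamma @ (eps * phi))     # gradient adaptive law
    t += dt

print("estimate:", theta, " true:", theta_star)
```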

Gradient Algorithms Based on the Linear Model

Integral Cost Function: the cost penalizes past estimation errors with an exponentially decaying weight, where β > 0 is a design constant acting as a forgetting factor.

Gradient Algorithms Based on the Linear Model

Integral Cost Function (continued): applying the gradient method to the integral cost leads to an adaptive law driven by filtered versions of the regressor and measured signals.

Gradient Algorithms Based on the Linear Model

Integral Cost Function. Theorem: the integral-cost adaptive law has the same boundedness properties as the instantaneous-cost gradient law and guarantees parameter convergence when φ is PE.

Least-Squares Algorithms

LS problem: minimize a cost that penalizes the accumulated squared estimation error over time. Let us now extend this classical problem to online estimation. We present different versions of the LS algorithm, which correspond to different choices of the LS cost function.

Least-Squares Algorithms

Recursive LS Algorithm with Forgetting Factor: dθ/dt = Pεφ and dP/dt = βP − PφφᵀP/m_s², with P(0) = P₀ = P₀ᵀ > 0, where β ≥ 0 and P₀ are design constants and θ(0) = θ₀ is the initial parameter estimate.

Least-Squares Algorithms

Recursive LS Algorithm with Forgetting Factor: the matrix P is called the covariance matrix. A non-recursive LS algorithm expresses θ(t) explicitly in terms of P(t) and the accumulated data; differentiating that expression yields the recursive form.

Least-Squares Algorithms

Recursive LS Algorithm with Forgetting Factor: using the identity d(PP⁻¹)/dt = (dP/dt)P⁻¹ + P d(P⁻¹)/dt = 0, the non-recursive solution is converted into the recursive LS algorithm with forgetting factor. Theorem: the algorithm guarantees bounded estimates and, when φ is PE, exponential convergence of θ(t) to θ*.
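A minimal simulation of the recursive LS law dθ/dt = Pεφ, dP/dt = βP − PφφᵀP/m_s² is sketched below; the plant parameters, the forgetting factor, the initial covariance, and the regressor are illustrative assumptions.

```python
import numpy as np

# Sketch of the recursive LS algorithm with forgetting factor:
#   d(theta)/dt = P * eps * phi,   dP/dt = beta * P - P * phi * phi^T * P / m_s^2.
# Plant parameters, beta, P0, and the regressor are illustrative assumptions.
theta_star = np.array([1.0, 2.0])
theta = np.zeros(2)
P = 10.0 * np.eye(2)                         # initial covariance P0
beta = 0.5                                   # forgetting factor
dt, T = 1e-3, 20.0

t = 0.0
while t < T:
    phi = np.array([np.sin(t), np.cos(3.0 * t)])
    z = theta_star @ phi
    ms2 = 1.0 + phi @ phi
    eps = (z - theta @ phi) / ms2
    theta = theta + dt * (P @ (eps * phi))
    P = P + dt * (beta * P - P @ np.outer(phi, phi) @ P / ms2)
    t += dt

print("estimate:", theta)
```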

Least-Squares Algorithms

Pure LS Algorithm: when β = 0, the above algorithm reduces to dθ/dt = Pεφ, dP/dt = −PφφᵀP/m_s², which is referred to as the pure LS algorithm. Theorem: the pure LS algorithm guarantees bounded estimates and convergence of θ(t) to a constant vector.

Least-Squares Algorithms

Pure LS Algorithm: the pure LS algorithm guarantees that θ(t) converges to a constant vector without any restriction on the regressor. If, however, φ is PE, then θ(t) converges to θ*. Convergence of the estimated parameters to constant values is a unique property of the pure LS algorithm.

Least-Squares Algorithms

Pure LS Algorithm: one of the drawbacks of the pure LS algorithm is that the covariance matrix P may become arbitrarily small and slow down adaptation in some directions. This is due to the fact that dP/dt ≤ 0, so P can only decrease. This is the so-called covariance wind-up problem. Another drawback of the pure LS algorithm is that parameter convergence cannot be guaranteed to be exponential.

Least-Squares Algorithms

Modified LS Algorithms: one way to avoid the covariance wind-up problem is the covariance resetting modification: whenever the smallest eigenvalue of P(t) drops to a threshold ρ₁, P is reset to ρ₀I, where ρ₀ ≥ ρ₁ > 0 are design scalars and t_r denotes the resetting time. Due to covariance resetting, P(t) ≥ ρ₁I for all t.

Least-Squares Algorithms

Modified LS Algorithms: therefore, P is guaranteed to be positive definite for all t > 0. In fact, the pure LS algorithm with covariance resetting can be viewed as a gradient algorithm with time-varying adaptive gain P, and its properties are very similar to those of a gradient algorithm. In the modified LS algorithm with forgetting factor, the covariance update is switched off whenever the norm of P reaches a prescribed constant R₀ that serves as an upper bound for it.
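The sketch below implements the pure LS algorithm with covariance resetting; the resetting thresholds ρ₀, ρ₁, the plant parameters, and the regressor are illustrative assumptions.

```python
import numpy as np

# Sketch of the pure LS algorithm (beta = 0) with covariance resetting:
# whenever the smallest eigenvalue of P reaches rho1, P is reset to rho0 * I.
# All numerical values are illustrative assumptions.
theta_star = np.array([0.5, -1.0])
theta = np.zeros(2)
rho0, rho1 = 5.0, 0.1                        # resetting design scalars, rho0 >= rho1 > 0
P = rho0 * np.eye(2)
dt, T = 1e-3, 30.0

t = 0.0
while t < T:
    phi = np.array([np.sin(t), np.sin(0.5 * t)])
    z = theta_star @ phi
    ms2 = 1.0 + phi @ phi
    eps = (z - theta @ phi) / ms2
    theta = theta + dt * (P @ (eps * phi))
    P = P + dt * (-P @ np.outer(phi, phi) @ P / ms2)   # pure LS covariance update
    if np.linalg.eigvalsh(P)[0] <= rho1:               # covariance resetting
        P = rho0 * np.eye(2)
    t += dt

print("estimate:", theta)
```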

Least-Squares Algorithms

Modified LS Algorithms. Theorem: the modified LS algorithms retain the properties of the pure LS algorithm and, when φ is PE, guarantee exponential convergence of θ(t) to θ*.

Parameter Identification Based on DPM

Consider the DPM. It may be rewritten by introducing a transfer function L(s) chosen so that L⁻¹(s) is a proper stable transfer function and the resulting W(s)L(s) is a proper strictly positive real (SPR) transfer function. A normalized estimation error is then defined.

Parameter Identification Based on DPM

A state-space representation of the SPR transfer function is used; because it is SPR, there exist matrices satisfying the Kalman-Yakubovich lemma conditions, which are used in the Lyapunov analysis.

Parameter Identification Based on DPM

Theorem: the adaptive law obtained in this way is referred to as the adaptive law based on the SPR-Lyapunov synthesis approach. It has the same form as the gradient algorithm.

Parameter Identification Based on B-SPM

Consider the B-SPM z = ρ*(θ*ᵀφ + z₁), where z₁ is available for measurement. The estimation error is generated from the estimates of both ρ* and θ*, and a quadratic cost in this error is minimized by the gradient method, where γ > 0 and Γ = Γᵀ > 0 are the adaptive gains.

Parameter Identification Based on B-SPM

Since ρ* is unknown, this adaptive law cannot be implemented. We bypass this problem by writing ρ*Γ = |ρ*|Γ sgn(ρ*); since Γ is an arbitrary design matrix, the factor |ρ*|Γ can be replaced by any positive definite gain without having to know |ρ*|, so only the sign of ρ* is needed. Therefore, the adaptive laws may be written in an implementable form.

Parameter Identification Based on B-SPM

Theorem: the resulting adaptive laws have properties analogous to those of the gradient algorithm for the SPM.

Parameter Projection

In many practical problems, we may have some a priori knowledge of where θ* is located in Rⁿ. This knowledge usually comes in terms of upper and/or lower bounds for the elements of θ*, or of a location inside a convex subset of Rⁿ. If such a priori information is available, we want to constrain the online estimates to lie within the set where the unknown parameters are located. For this purpose, we modify the gradient algorithms, which are based on the unconstrained minimization of certain costs, using the gradient projection method, where the constraint set S is a convex subset of Rⁿ with a smooth boundary.

Parameter Projection

The adaptive laws based on the gradient method can be modified to guarantee that θ(t) remains in S by solving the constrained optimization problem given above: the unconstrained update is applied in the interior of S, and it is projected onto the tangent plane of the boundary whenever θ is on the boundary and the update points outward. Here δ(S) and S° denote the boundary and the interior of S, respectively, and Pr(·) is the projection operator.

Parameter Projection

The gradient algorithm based on the instantaneous cost function with projection is obtained by substituting the unconstrained update Γεφ into the projection operator.

Parameter Projection

The pure LS algorithm with projection is obtained in the same way, by applying the projection operator to the LS update Pεφ.

Parameter Projection

Theorem: the gradient adaptive laws and the LS adaptive laws with the projection modification retain all the properties that are established in the absence of projection and, in addition, guarantee that θ(t) remains in S for all t, provided θ(0) and θ* belong to S.

Parameter Projection

Example: consider the plant model with unknown constants a, b that satisfy the known bounds b ≥ 1 and 20 ≥ a ≥ −2. The plant is expressed as an SPM, and the gradient adaptive law in the unconstrained case is dθ/dt = Γεφ. Now apply the projection method by defining one convex constraint set for each parameter bound.

Parameter Projection

Applying the projection algorithm for each set, we obtain adaptive laws in which each component update is set to zero whenever the corresponding estimate sits on its bound and the unconstrained update would push it outside the admissible interval.

Parameter Projection

Example: let us consider the gradient adaptive law for the SPM with the a priori knowledge that |θ*| ≤ M₀ for some known bound M₀. In most applications we have such a priori information. We define S = {θ : θᵀθ ≤ M₀²} and use the projection method with g(θ) = θᵀθ − M₀² to obtain the adaptive law: the update Γεφ is used whenever θ is inside S or the update points inward, and its outward component normal to the boundary is removed otherwise.
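For the ball constraint θᵀθ ≤ M₀², the projection simply removes, on the boundary, the component of the update along Γ∇g. The sketch below implements this for a two-parameter SPM; the bound, the gains, the regressor, and the initial condition (placed on the boundary so that the projection branch is exercised) are illustrative assumptions.

```python
import numpy as np

# Sketch of the gradient law with projection onto S = { theta : theta^T theta <= M0^2 }.
# The bound M0, gains, regressor, and boundary initial condition are illustrative.
theta_star = np.array([1.0, 1.5])             # assumed true parameters, inside S
M0 = 2.0                                      # known bound, |theta_star| <= M0
Gamma = 5.0 * np.eye(2)
theta = np.array([2.0, 0.0])                  # initial estimate on the boundary of S
dt, T = 1e-3, 20.0

t = 0.0
while t < T:
    phi = np.array([np.sin(t), np.cos(2.0 * t)])
    z = theta_star @ phi
    ms2 = 1.0 + phi @ phi
    eps = (z - theta @ phi) / ms2
    update = Gamma @ (eps * phi)              # unconstrained update Gamma * eps * phi
    if theta @ theta >= M0**2 and theta @ update > 0.0:
        # On the boundary with the update pointing outward: remove the component
        # along Gamma * grad(g), where grad(g) = 2*theta (the factors of 2 cancel).
        update = update - Gamma @ theta * (theta @ update) / (theta @ (Gamma @ theta))
    theta = theta + dt * update
    t += dt

print("estimate:", theta, " norm:", np.linalg.norm(theta))
```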

Robust Parameter Identification

In the previous sections we designed and analyzed a wide class of PI algorithms based on parametric models that are assumed to be free of disturbances, noise, unmodeled dynamics, time delays, and other frequently encountered uncertainties. In the presence of plant uncertainties, we are no longer able to express the unknown parameter vector θ* in the form of the SPM or DPM where all signals are measured and θ* is the only unknown term. In this case, the SPM or DPM takes the perturbed form z = θ*ᵀφ + η, where η is an unknown function that represents the modeling error terms. The following examples show how the above form arises for different plant uncertainties.

Robust Parameter Identification

Example: consider a system with a small input delay. The actual plant contains the delay, while the nominal plant neglects it; the difference between the two appears as the modeling error term η in the parametric model.

Robust Parameter Identification

Instability Example: consider the scalar constant-gain system y = θ*u + d, where d is a bounded unknown disturbance and θ* is the unknown constant. The adaptive law for estimating θ*, derived for d = 0, is dθ/dt = γεu with ε = y − θu, where γ > 0 and the normalizing signal is taken to be 1. For d ≠ 0, the parameter error equation becomes a first-order system driven by the disturbance term γud.

Robust Parameter Identification

Instability Example: in this case we cannot guarantee that the parameter estimate is bounded for every bounded input u and disturbance d. For particular choices of u and d that decay to zero at suitable rates, the estimated parameter drifts to infinity even though the disturbance disappears with time. This instability phenomenon is known as parameter drift. It is mainly due to the pure integral action of the adaptive law, which, in addition to integrating the "good" signals, integrates the disturbance term as well, leading to the parameter drift phenomenon.

Robust Adaptive Laws

Consider the general plant whose transfer function consists of a dominant part G₀(s) together with unmodeled-dynamics perturbations that are strictly proper with stable poles, and a bounded disturbance d. The plant can then be expressed as a perturbed SPM z = θ*ᵀφ + η, where η accounts for the unmodeled dynamics and the disturbance.

Robust Adaptive Laws

For robustness, we need to use the following modifications:
- Design the normalizing signal to bound the modeling error in addition to bounding the regressor vector.
- Modify the "pure" integral action of the adaptive laws to prevent parameter drift.

Robust Adaptive Laws

Dynamic Normalization: assume that the unmodeled-dynamics transfer functions are analytic in the region Re[s] ≥ −δ₀/2 for some known δ₀ > 0. A dynamic normalizing signal is then generated by a stable first-order filter driven by the plant input and output, and it guarantees that the normalized modeling error η/m_s is bounded.

Robust Adaptive Laws

σ-Modification: a class of robust modifications involves the use of a small feedback around the "pure" integrator in the adaptive law, leading to the adaptive law structure dθ/dt = Γεφ − σ_s Γθ, where σ_s ≥ 0 is a small design parameter and Γ is the adaptive gain, which in the case of LS is equal to the covariance matrix P. The above modification is referred to as the σ-modification or as leakage. Different choices of σ_s lead to different robust adaptive laws with different properties.

Robust Adaptive Laws

Fixed σ-Modification: σ_s = σ, where σ is a small positive design constant, so the gradient adaptive law takes the form dθ/dt = Γεφ − σΓθ. If some a priori estimate θ₀ of θ* is available, then the leakage term σθ may be replaced with σ(θ − θ₀), so that the leakage becomes larger for larger deviations of θ from θ₀ rather than from zero.
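The sketch below adds the fixed σ-modification to the normalized gradient law and runs it on a perturbed SPM z = θ*ᵀφ + η; the plant parameters, the disturbance, σ, and the gains are illustrative assumptions, and the final estimate shows the small bias that the leakage and the modeling error introduce.

```python
import numpy as np

# Sketch of the normalized gradient law with fixed sigma-modification,
#   d(theta)/dt = Gamma * eps * phi - sigma * Gamma * theta,
# applied to a perturbed SPM  z = theta_star^T phi + eta.  Values are illustrative.
theta_star = np.array([1.0, -2.0])
Gamma = 2.0 * np.eye(2)
sigma = 0.05                                  # small leakage constant
theta = np.zeros(2)
dt, T = 1e-3, 40.0

t = 0.0
while t < T:
    phi = np.array([np.sin(t), np.cos(t)])
    eta = 0.1 * np.sin(7.0 * t)               # bounded modeling error / disturbance
    z = theta_star @ phi + eta
    ms2 = 1.0 + phi @ phi
    eps = (z - theta @ phi) / ms2
    theta = theta + dt * (Gamma @ (eps * phi) - sigma * (Gamma @ theta))
    t += dt

print("estimate (slightly biased by leakage and eta):", theta)
```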

Robust Adaptive Laws

Fixed σ-Modification. Theorem: the fixed σ-modification guarantees bounded signals and convergence of the estimation and parameter errors to a residual set whose size depends on the modeling error bound and on σ.

Robust Adaptive Laws

Fixed σ-Modification. Main drawback: if the modeling error is removed, i.e., η = 0, the ideal properties of the adaptive law are not recovered, since the modification introduces a disturbance of the order of the design constant σ. Advantage: no assumption about bounds for, or the location of, the unknown θ* is made.

Robust Adaptive Laws

Switching σ-Modification: the leakage is switched on only when the norm of the parameter estimate exceeds a prescribed bound chosen larger than |θ*|, so that the leakage is inactive in the region where the true parameter is known to lie.

Robust Adaptive Laws

Switching σ-Modification. Theorem: the switching σ-modification guarantees bounded signals and, in the absence of modeling error, retains the ideal properties of the unmodified adaptive law.

Robust Adaptive Laws

Switching σ-Modification. Theorem (continued): if, in addition, φ is PE, the parameter error converges exponentially to a residual set whose size depends on the modeling error bound.

Robust Adaptive Laws

ε-Modification: another class of leakage modifications makes the leakage depend on the estimation error ε, i.e., σ_s = |εm_s|ν₀, where ν₀ > 0 is a design constant. This modification is referred to as the ε-modification and has properties similar to those of the fixed σ-modification, in the sense that it cannot guarantee the ideal properties of the adaptive law in the absence of modeling errors.

Robust Adaptive Laws

Parameter Projection: for the perturbed parametric model, in order to avoid parameter drift we constrain θ(t) to lie inside a bounded convex set that contains θ*. As an example, consider the set S = {θ : |θ| ≤ M₀}, where M₀ is chosen so that |θ*| ≤ M₀. Following the earlier discussion of projection, we obtain the projected robust adaptive law.

Robust Adaptive Laws

Parameter Projection. Theorem: the projection-based robust adaptive law guarantees that θ(t) remains in S and retains the remaining properties of the unmodified law up to terms due to the modeling error.

Robust Adaptive Laws: Parameter Projection (continued)

Robust Adaptive Laws

Parameter Projection: the parameter projection has properties identical to those of the switching σ-modification, as both modifications aim at keeping the estimate within a bounded region around θ*. In the case of the switching σ-modification, the estimate may exceed the bound M₀ but remains bounded, whereas in the case of projection |θ(t)| ≤ M₀ for all t, provided |θ(0)| ≤ M₀.

Robust Adaptive Laws

Dead Zone: the principal idea behind the dead zone is to monitor the size of the estimation error and adapt only when the estimation error is large relative to the modeling error. Here g₀ denotes a known upper bound on the normalized modeling error. In other words, we move in the direction of steepest descent only when the estimation error is large relative to the modeling error, i.e., when |εm_s| > g₀.

Robust Adaptive Laws

Dead Zone: to avoid the discontinuity in the adaptive law at the boundary of the dead zone, the dead-zone function is made continuous.

Robust Adaptive Laws

Dead Zone: normalized dead-zone function.
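A minimal sketch of a gradient law with a dead zone is given below: adaptation is active only when |εm_s| exceeds the assumed modeling-error bound g₀. The particular smooth fade-in used here is one common continuous dead-zone choice rather than necessarily the exact function on the slides, and all numerical values are illustrative.

```python
import numpy as np

# Sketch of a gradient law with a (continuous) dead zone: adaptation is active only
# when |eps * m_s| exceeds the assumed modeling-error bound g0.  The smooth fade-in
# below is one common continuous dead-zone choice; all numerical values are illustrative.
theta_star = np.array([0.8, -1.2])
Gamma = 3.0 * np.eye(2)
g0 = 0.05                                     # known bound on the normalized modeling error
theta = np.zeros(2)
dt, T = 1e-3, 40.0

t = 0.0
while t < T:
    phi = np.array([np.sin(t), np.sin(3.0 * t)])
    eta = 0.03 * np.sin(10.0 * t)             # modeling error within the dead-zone bound
    z = theta_star @ phi + eta
    ms = np.sqrt(1.0 + phi @ phi)
    eps = (z - theta @ phi) / ms**2
    if abs(eps * ms) > g0:                    # outside the dead zone: adapt
        scale = 1.0 - g0 / abs(eps * ms)      # continuous fade-in of the update
        theta = theta + dt * scale * (Gamma @ (eps * phi))
    # inside the dead zone the estimate is held constant
    t += dt

print("estimate:", theta)
```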

Robust Adaptive Laws

Dead Zone. Theorem: the adaptive law with dead zone guarantees that all signals are bounded and that the estimated parameters converge to constant values.

Robust Adaptive Laws

Dead Zone:
- The dead-zone modification guarantees that the estimated parameters always converge to a constant.
- As in the case of the fixed σ-modification, the ideal properties of the adaptive law are destroyed in an effort to achieve robustness.

The robust modifications that include leakage, projection, and dead zone are analyzed for the case of the gradient algorithm for the SPM with modeling error. The same modifications can be used in the case of LS and of the DPM, B-SPM, and B-DPM with modeling errors.

State-Space Identifiers

Consider the state-space plant model in SSPM form, dx/dt = Ax + Bu, where A and B are unknown and the state x is available for measurement. The estimation model generates a state estimate using a stable design matrix A_m, the measured state x, the input u, and the current parameter estimates of A and B; it has been referred to as the series-parallel model in the literature. The estimation error vector is defined as the difference between the measured and estimated states, and a straightforward choice of adaptive law follows from a Lyapunov analysis of the error dynamics.

State-Space Identifiers

The estimation error dynamics form a stable linear system (governed by A_m) driven by the parameter errors, i.e., the differences between the estimates of A and B and their true values, multiplied by x and u, respectively. The adaptive laws update the estimates of A and B in proportion to the outer products of the state estimation error with x and u, where the constant scalars γ₁, γ₂ > 0 are the adaptive gains.
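A compact simulation of such a series-parallel identifier is sketched below. The adaptive laws shown, with the Lyapunov weighting absorbed into the scalar gains because A_m is chosen as a scaled identity, as well as the plant, input, and gains, are illustrative assumptions.

```python
import numpy as np

# Sketch of a series-parallel state-space identifier for  dx/dt = A x + B u  with x measured.
# Because A_m is a scaled identity, the Lyapunov-equation weighting reduces to a scalar and
# is absorbed into the gains gamma1, gamma2.  All numerical values are illustrative.
A = np.array([[-1.0, 2.0], [0.0, -3.0]])      # unknown "true" plant matrices (stable A)
B = np.array([[1.0], [0.5]])
Am = -2.0 * np.eye(2)                         # stable design matrix of the estimation model
gamma1, gamma2 = 20.0, 20.0
Ahat = np.zeros((2, 2))
Bhat = np.zeros((2, 1))
x = np.zeros((2, 1))
xhat = np.zeros((2, 1))
dt, T = 1e-3, 60.0

t = 0.0
while t < T:
    u = np.array([[np.sin(t) + np.sin(2.5 * t)]])        # sufficiently rich input (assumed)
    eps = x - xhat                                        # state estimation error
    xdot = A @ x + B @ u
    xhatdot = Am @ xhat + (Ahat - Am) @ x + Bhat @ u      # series-parallel estimation model
    Ahat = Ahat + dt * gamma1 * (eps @ x.T)               # adaptive law for the estimate of A
    Bhat = Bhat + dt * gamma2 * (eps @ u.T)               # adaptive law for the estimate of B
    x = x + dt * xdot
    xhat = xhat + dt * xhatdot
    t += dt

print("A estimate:\n", Ahat, "\nB estimate:\n", Bhat)
```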

State-Space Identifiers

Theorem: the adaptive laws guarantee bounded estimates and a state estimation error that converges to zero; the estimates of A and B converge to the true values if the signals x and u are sufficiently rich (PE).

Adaptive Observers

Consider the LTI SISO plant dx/dt = Ax + Bu, y = Cᵀx, where x ∈ Rⁿ. Assume that u is a piecewise continuous, bounded function of time and that A is a stable matrix. In addition, we assume that the plant is completely controllable and completely observable. The problem is to construct a scheme that estimates both the plant parameters, i.e., A, B, C, and the state vector x using only I/O measurements. We refer to such a scheme as an adaptive observer.

Adaptive Observers

A good starting point for designing an adaptive observer is the Luenberger observer used when A, B, C are known. The Luenberger observer feeds back the output estimation error through a gain K chosen so that A − KCᵀ is a stable matrix, which guarantees that the state estimate converges to x exponentially fast for any initial condition and any input u. The existence of such a K is guaranteed by the observability of (A, C).
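For reference, a minimal (non-adaptive) Luenberger observer with known A, B, C is sketched below; the plant matrices, the observer pole locations, and the hand-computed gain K are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of a (non-adaptive) Luenberger observer
#   d(xhat)/dt = A xhat + B u + K (y - C xhat)
# for a plant with known A, B, C.  Plant values and observer poles are illustrative.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Hand-computed K = [8, 4]^T places the eigenvalues of A - K C at -5 and -6:
# det(sI - (A - K C)) = s^2 + (3 + k1) s + (3 k1 + 2 + k2) = (s + 5)(s + 6).
K = np.array([[8.0], [4.0]])

x = np.array([[1.0], [-1.0]])     # true state (unknown to the observer)
xhat = np.zeros((2, 1))           # observer state
dt, T = 1e-3, 5.0

t = 0.0
while t < T:
    u = np.array([[np.sin(t)]])
    y = C @ x                                              # measured output
    x = x + dt * (A @ x + B @ u)
    xhat = xhat + dt * (A @ xhat + B @ u + K @ (y - C @ xhat))
    t += dt

print("state estimation error:", (x - xhat).ravel())
```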

Adaptive Observers

A straightforward procedure for choosing the structure of the adaptive observer is to use the same equation as the Luenberger observer but replace the unknown parameters A, B, C with their estimates, generated by some adaptive law. The problem we face with this procedure is the inability to estimate uniquely the n² + 2n parameters of A, B, C from the I/O data. The best we can do in this case is to estimate the 2n parameters of the plant transfer function and use them to calculate the estimates of A, B, C. These calculations, however, are not always possible, because the mapping from the 2n estimated transfer-function parameters to the n² + 2n parameters of (A, B, C) is not unique unless (A, B, C) satisfies certain structural constraints. One such constraint is that (A, B, C) is in the observer (canonical) form, i.e., the plant is represented in that form.

Adaptive Observers

With the plant in observer form, the unknown parameters appear linearly in a parametric model, and we can use the estimation techniques presented in the previous sections.

Adaptive Observers

The disadvantage is that in a practical situation x may represent some physical variables of interest, whereas the state of the observer-form representation may be an artificial state vector. The adaptive observer is nevertheless motivated by the Luenberger observer structure, with the unknown parameters replaced by their online estimates.

Adaptive Observers

In the adaptive observer, the observer gain is chosen so that the corresponding closed-loop matrix is stable and assigns the eigenvalues of the observer. A wide class of adaptive laws may be used to generate the parameter estimates online. As in the previous chapter, a parametric model is developed for the plant, and the adaptive laws of this chapter are applied to it.

THE END