# System Identification


Ali Karimpour, Assistant Professor, Ferdowsi University of Mashhad
Reference: "System Identification: Theory for the User," Lennart Ljung

Models of linear time-invariant systems
Lecture 4: Models of linear time-invariant systems. Topics to be covered include:
- Linear models and sets of linear models.
- A family of transfer function models.
- State space models.
- Identifiability of some model structures.

Linear models and sets of linear models
Topics to be covered include:
- Linear models and sets of linear models.
- A family of transfer function models.
- State space models.
- Identifiability of some model structures.

Linear models and sets of linear models
A complete model is given by

$$y(t) = G(q)u(t) + H(q)e(t)$$

with $f_e(\cdot)$, the PDF of $e$. A particular model thus corresponds to a specification of the functions $G$, $H$ and $f_e$. Most often $f_e$ is not specified as a function; instead the first and second moments are specified as

$$E\,e(t) = 0, \qquad E\,e^2(t) = \lambda.$$

It is also common to assume that $e(t)$ is Gaussian.

Linear models and sets of linear models
A particular model corresponds to a specification of $G$, $H$ and $f_e$. We try to parameterize the coefficients, which gives a set of models

$$y(t) = G(q,\theta)u(t) + H(q,\theta)e(t),$$

where $\theta$ is a vector ranging over a set $D_\mathcal{M} \subset \mathbf{R}^d$. We thus have a model structure $\mathcal{M}$: a mapping from the parameter $\theta$ to the model $\mathcal{M}(\theta)$.

A family of transfer function models
Topics to be covered include:
- Linear models and sets of linear models.
- A family of transfer function models.
- State space models.
- Identifiability of some model structures.

A family of transfer function models
Equation error model structure. The AR part and the X part together give

$$A(q)y(t) = B(q)u(t) + e(t)$$

where

$$A(q) = 1 + a_1 q^{-1} + \dots + a_{n_a} q^{-n_a}, \qquad B(q) = b_1 q^{-1} + \dots + b_{n_b} q^{-n_b}.$$

The adjustable parameters in this case are

$$\theta = [a_1\; \dots\; a_{n_a}\; b_1\; \dots\; b_{n_b}]^T.$$

This defines the ARX model structure, so we have

$$G(q,\theta) = \frac{B(q)}{A(q)}, \qquad H(q,\theta) = \frac{1}{A(q)}.$$

A family of transfer function models
Equation error model structure. For the ARX model we have the one-step-ahead predictor

$$\hat{y}(t|\theta) = B(q)u(t) + [1 - A(q)]y(t).$$

Now if we introduce the regression vector

$$\varphi(t) = [-y(t-1)\; \dots\; -y(t-n_a)\;\; u(t-1)\; \dots\; u(t-n_b)]^T,$$

the predictor becomes

$$\hat{y}(t|\theta) = \varphi^T(t)\theta,$$

which is known as linear regression in statistics.
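The practical payoff of the linear-regression form can be sketched numerically. The first-order model, the test input and the helper names below are my own illustrative assumptions, not from the slides; with noise-free data, ordinary least squares recovers the ARX parameters exactly.

```python
# A minimal sketch (hypothetical example): with noise-free data, ordinary
# least squares over phi(t) = [-y(t-1), u(t-1)] recovers the parameters of a
# first-order ARX model  y(t) + a1*y(t-1) = b1*u(t-1) + e(t),  theta = [a1, b1].

def simulate_arx(a1, b1, u):
    """Simulate the noise-free ARX model from a zero initial condition."""
    y = [0.0]
    for t in range(1, len(u)):
        y.append(-a1 * y[t - 1] + b1 * u[t - 1])
    return y

def fit_arx(y, u):
    """Solve the 2x2 normal equations (Phi^T Phi) theta = Phi^T y."""
    s11 = s12 = s22 = r1 = r2 = 0.0
    for t in range(1, len(y)):
        p1, p2 = -y[t - 1], u[t - 1]          # regression vector phi(t)
        s11 += p1 * p1; s12 += p1 * p2; s22 += p2 * p2
        r1 += p1 * y[t]; r2 += p2 * y[t]
    det = s11 * s22 - s12 * s12
    return (s22 * r1 - s12 * r2) / det, (s11 * r2 - s12 * r1) / det

u = [1.0, -0.5, 2.0, 0.3, -1.0, 0.8, 1.5, -0.2, 0.6, 1.1]
y = simulate_arx(0.7, 0.5, u)
a1_hat, b1_hat = fit_arx(y, u)   # recovers (0.7, 0.5) since there is no noise
```

With noisy data the same normal equations still apply; only the estimate's variance changes.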

A family of transfer function models
Exercise (4E.1): Consider the ARX model structure above, where $b_1$ is known to be 0.5. Write the corresponding predictor in the linear regression form

$$\hat{y}(t|\theta) = \varphi^T(t)\theta + \mu(t),$$

where $\mu(t)$ is a known term.

A family of transfer function models
ARMAX model structure. The AR part with the X part and an MA part:

$$A(q)y(t) = B(q)u(t) + C(q)e(t),$$

where

$$C(q) = 1 + c_1 q^{-1} + \dots + c_{n_c} q^{-n_c}.$$

So we have

$$G(q,\theta) = \frac{B(q)}{A(q)}, \qquad H(q,\theta) = \frac{C(q)}{A(q)},$$

now with

$$\theta = [a_1\; \dots\; a_{n_a}\; b_1\; \dots\; b_{n_b}\; c_1\; \dots\; c_{n_c}]^T.$$

A family of transfer function models
Then the predictor satisfies

$$C(q)\hat{y}(t|\theta) = B(q)u(t) + [C(q) - A(q)]y(t),$$

or, equivalently,

$$\hat{y}(t|\theta) = B(q)u(t) + [1 - A(q)]y(t) + [C(q) - 1]\left[y(t) - \hat{y}(t|\theta)\right].$$

To start this recursion at time $t = 0$ and predict $y(1)$ requires knowledge of past values of $y$, $u$ and $\hat{y}$. One can take these to be zero, but the resulting error relative to the true initial conditions decays as $c\mu^t$, where $\mu$ is the maximum magnitude of the zeros of $C(z)$.

Exercise (4G.1): Show that the effect of an erroneous initial condition in the predictor is bounded by $c\mu^t$.
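The decay of an erroneous initial condition can be observed directly. The first-order coefficients and the data series below are my own arbitrary choices for illustration: two copies of the predictor recursion, started from different initial values, differ by exactly $|c|^t$ times the initial discrepancy.

```python
# Sketch of the first-order ARMAX predictor recursion (hypothetical data):
#   yhat(t|theta) = -c*yhat(t-1) + b*u(t-1) + (c - a)*y(t-1),
# i.e. C(q) yhat = B(q) u + [C(q) - A(q)] y.  Running it from two different
# initial conditions shows the discrepancy decaying like |c|^t (zero of C).

def armax_predict(y, u, a, b, c, yhat0=0.0):
    yhat = [yhat0]
    for t in range(1, len(y)):
        yhat.append(-c * yhat[t - 1] + b * u[t - 1] + (c - a) * y[t - 1])
    return yhat

u = [1.0, 0.2, -0.7, 1.3, 0.5, -0.4, 0.9, 0.1]   # arbitrary input
y = [0.0, 0.8, 0.3, -0.2, 1.0, 0.6, -0.1, 0.7]   # arbitrary output data
p_good = armax_predict(y, u, a=0.9, b=1.0, c=0.5, yhat0=0.0)
p_bad = armax_predict(y, u, a=0.9, b=1.0, c=0.5, yhat0=2.0)
diffs = [abs(g - w) for g, w in zip(p_good, p_bad)]  # diffs[t] == 2 * 0.5**t
```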

A family of transfer function models
We saw that

$$C(q)\hat{y}(t|\theta) = B(q)u(t) + [C(q) - A(q)]y(t).$$

To start this recursion at time $t = 0$ and predict $y(1)$ requires knowledge of the initial conditions. It is also possible to start the recursion at time $\max(n_a, n_b)$ and include the unknown initial conditions $\varepsilon(k|\theta)$, $k = 1, 2, \dots, n_c$, in the vector $\theta$. Now if we introduce

$$\varphi(t,\theta) = [-y(t-1)\; \dots\; -y(t-n_a)\;\; u(t-1)\; \dots\; u(t-n_b)\;\; \varepsilon(t-1,\theta)\; \dots\; \varepsilon(t-n_c,\theta)]^T,$$

then

$$\hat{y}(t|\theta) = \varphi^T(t,\theta)\theta.$$

Since $\varphi(t,\theta)$ itself depends on $\theta$, this is called a pseudolinear regression.

A family of transfer function models
Other equation-error-type model structures. Keeping the AR part and the X part, we could use an AR description for the error:

$$A(q)y(t) = B(q)u(t) + \frac{1}{D(q)}e(t)$$

is the ARARX model. Using an ARMA description for the error instead,

$$A(q)y(t) = B(q)u(t) + \frac{C(q)}{D(q)}e(t)$$

is the ARARMAX model. Together these structures form the equation error model family.

A family of transfer function models
Output error model structure. If we suppose that the relation between the input and the undisturbed output $w$ can be written as

$$F(q)w(t) = B(q)u(t),$$

then, with the noise entering at the output,

$$y(t) = \frac{B(q)}{F(q)}u(t) + e(t).$$

This is the output error (OE) model structure, with $\theta = [b_1\; \dots\; b_{n_b}\; f_1\; \dots\; f_{n_f}]^T$.

A family of transfer function models
Output error model structure. Note that $w(t)$ is never observed; instead it is constructed from $u$ alone:

$$w(t,\theta) = \frac{B(q)}{F(q)}u(t),$$

so the predictor is

$$\hat{y}(t|\theta) = w(t,\theta).$$
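The construction of $w$ from $u$ alone can be sketched as a simple recursion. The first-order coefficients below are hypothetical, chosen so the impulse response is easy to check by hand.

```python
# OE predictor sketch (first-order, hypothetical coefficients): the noise-free
# output w is reconstructed from u alone via F(q) w(t) = B(q) u(t), and the
# predictor is simply yhat(t|theta) = w(t, theta).

def oe_predict(u, f1, b1):
    w = [0.0]
    for t in range(1, len(u)):
        w.append(-f1 * w[t - 1] + b1 * u[t - 1])   # F(q) w = B(q) u, rearranged
    return w

u = [1.0, 0.0, 0.0, 0.0, 0.0]                      # unit impulse input
w = oe_predict(u, f1=-0.5, b1=1.0)                 # w == [0.0, 1.0, 0.5, 0.25, 0.125]
```

Unlike the ARX predictor, past measured outputs $y$ never enter the recursion; this is what makes the OE predictor robust against output noise.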

A family of transfer function models
Box-Jenkins model structure. A natural development of the output error model is to further model the properties of the output error. Describing the output error with an ARMA model gives

$$y(t) = \frac{B(q)}{F(q)}u(t) + \frac{C(q)}{D(q)}e(t).$$

This is the Box-Jenkins (BJ) model (1970).

A family of transfer function models
A general family of model structures. The structures we have discussed in this section may give rise to 32 different model sets, depending on which of the five polynomials $A$, $B$, $C$, $D$, $F$ are used. For convenience, we shall therefore use the generalized model structure

$$A(q)y(t) = \frac{B(q)}{F(q)}u(t) + \frac{C(q)}{D(q)}e(t).$$

A family of transfer function models
Sometimes the dynamics from $u$ to $y$ contains a delay of $n_k$ samples, so

$$B(q) = b_{n_k}q^{-n_k} + b_{n_k+1}q^{-n_k-1} + \dots + b_{n_k+n_b-1}q^{-n_k-n_b+1}.$$

But for simplicity we take $n_k = 1$; a longer delay can always be expressed by setting the leading $b$-coefficients to zero.

A family of transfer function models
Some common black-box SISO models as special cases of the generalized model structure:

| Polynomials used | Name of model structure |
|---|---|
| B | FIR (finite impulse response) |
| AB | ARX |
| ABC | ARMAX |
| AC | ARMA |
| ABD | ARARX |
| ABCD | ARARMAX |
| BF | OE (output error) |
| BFCD | BJ (Box-Jenkins) |

A family of transfer function models
The predictor for the general model structure is

$$\hat{y}(t|\theta) = \frac{D(q)B(q)}{C(q)F(q)}u(t) + \left[1 - \frac{D(q)A(q)}{C(q)}\right]y(t),$$

which can be written in the pseudolinear form $\hat{y}(t|\theta) = \varphi^T(t,\theta)\theta$. The prediction error is

$$\varepsilon(t,\theta) = \frac{D(q)}{C(q)}\left[A(q)y(t) - \frac{B(q)}{F(q)}u(t)\right].$$

A family of transfer function models
Other model structures. Consider the FIR model

$$y(t) = B(q)u(t) + e(t) = b_1u(t-1) + \dots + b_{n_b}u(t-n_b) + e(t).$$

It is a linear regression (being a special case of ARX), so the model can be estimated effectively. It is also an output error model (being a special case of OE), so it is robust against noise. The basic disadvantage is that many parameters may be needed if the system has a slowly decaying impulse response. The question is whether it is possible to retain the linear regression and output error features while offering better possibilities to treat slowly decaying impulse responses.
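The parameter-count disadvantage is easy to quantify. The first-order impulse response $g(k) \sim p^k$ and the 1% tolerance below are my own hypothetical choices; the point is only how quickly the required FIR order grows as the pole approaches the unit circle.

```python
# Numeric illustration of the FIR drawback (hypothetical first-order impulse
# response g(k) ~ p**k): the slower the decay, the more coefficients a
# truncated FIR model needs to reach a given truncation tolerance.

def fir_taps_needed(pole, tol):
    """Smallest n with pole**n < tol, i.e. taps needed before truncation."""
    n = 0
    while pole ** n >= tol:
        n += 1
    return n

fast = fir_taps_needed(0.5, 0.01)    # quickly decaying response: 7 taps
slow = fir_taps_needed(0.95, 0.01)   # slowly decaying response: 90 taps
```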

State space models
Topics to be covered include:
- Linear models and sets of linear models.
- A family of transfer function models.
- State space models.
- Identifiability of some model structures.

State space models
For most physical systems it is easier to construct models with physical insight in continuous time:

$$\dot{x}(t) = A(\theta)x(t) + B(\theta)u(t),$$

where $\theta$ is a vector of parameters that typically correspond to unknown values of physical coefficients, material constants, and the like. Let $\eta(t)$ be the measurements that would be obtained with ideal, noise-free sensors:

$$\eta(t) = C(\theta)x(t).$$

We can derive the transfer operator from $u$ to $\eta$ (with $p$ the differentiation operator):

$$\eta(t) = G_c(p,\theta)u(t), \qquad G_c(p,\theta) = C(\theta)\left[pI - A(\theta)\right]^{-1}B(\theta).$$

State space models
Sampling the transfer function. Let the input be constant over each sampling interval of length $T$ (zero-order hold). Then

$$x(kT+t) = e^{A(\theta)t}x(kT) + \int_0^{t} e^{A(\theta)\tau}\,d\tau\, B(\theta)\,u(kT), \qquad 0 \le t \le T,$$

so

$$x(kT+T) = A_T(\theta)x(kT) + B_T(\theta)u(kT),$$

with

$$A_T(\theta) = e^{A(\theta)T}, \qquad B_T(\theta) = \int_0^{T} e^{A(\theta)\tau}\,d\tau\, B(\theta).$$

We can then derive the discrete-time transfer operator from $u$ to $\eta$:

$$\eta(t) = C(\theta)\left[qI - A_T(\theta)\right]^{-1}B_T(\theta)u(t).$$
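For the scalar case the sampling formulas reduce to closed-form expressions; the coefficients below are hypothetical, chosen only to make the result easy to verify.

```python
# Zero-order-hold sampling sketch for the scalar case (hypothetical numbers):
#   xdot = a*x + b*u  with u held constant over [kT, kT+T) gives
#   x(kT+T) = F x(kT) + G u(kT),   F = exp(a*T),   G = b*(exp(a*T) - 1)/a
# (the integral of exp(a*tau) over one sampling interval, assuming a != 0).
import math

def zoh_sample(a, b, T):
    F = math.exp(a * T)
    G = b * (F - 1.0) / a
    return F, G

F, G = zoh_sample(a=-2.0, b=1.0, T=0.5)   # F = e**-1, G = (1 - e**-1)/2
```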

State space models
Example 4.1: DC servomotor. (The circuit diagram and the derivation of the state equations appeared as figures.) Neglecting the armature inductance, $L_a \approx 0$, the motor reduces to a second-order model from applied voltage to shaft angle, parameterized by two physical parameters.
Assume that the actual measurement is made with a certain noise,

$$y(t) = \eta(t) + v(t),$$

with $v$ being white noise. The natural predictor follows from the sampled model. This predictor is parameterized using only two parameters, whereas an ARX or OE model of the same order contains four adjustable parameters. On the other hand, the two-parameter method is far more complicated than ARX or OE.

State space models
A standard discrete-time state-space model:

$$x(t+1) = A(\theta)x(t) + B(\theta)u(t), \qquad y(t) = C(\theta)x(t) + v(t),$$

corresponding to

$$y(t) = G(q,\theta)u(t) + v(t), \qquad G(q,\theta) = C(\theta)\left[qI - A(\theta)\right]^{-1}B(\theta).$$

Although sampling a continuous-time model is a natural way to obtain the discrete-time model, for certain applications a direct discrete-time parameterization is better, since the matrices $A$, $B$ and $C$ are then parameterized directly in terms of $\theta$.

State space models
Noise representation and the time-invariant Kalman filter. A straightforward but entirely valid approach would be

$$y(t) = G(q,\theta)u(t) + H(q,\theta)e(t),$$

with $\{e(t)\}$ white noise with variance $\lambda$. Note: the $\theta$-parameters in $H(q,\theta)$ could be partly in common with those in $G(q,\theta)$ or be extra. A more detailed description splits the disturbances into process noise $w(t)$ and measurement noise $v(t)$:

$$x(t+1) = A(\theta)x(t) + B(\theta)u(t) + w(t), \qquad y(t) = C(\theta)x(t) + v(t),$$

where $\{w(t)\}$ and $\{v(t)\}$ are assumed to be sequences of independent random variables with zero mean and covariances

$$E\,w(t)w^T(t) = R_1(\theta), \qquad E\,v(t)v^T(t) = R_2(\theta), \qquad E\,w(t)v^T(t) = R_{12}(\theta).$$

State space models
$\{w(t)\}$ and $\{v(t)\}$ may often be signals whose physical origins are known. In the DC-servomotor example, the load variation $T_l(t)$ was a "process noise", and the inaccuracy of the potentiometer angular sensor is the "measurement noise". In such cases it may of course not always be realistic to assume that these signals are white noise.

State space models
Exercise (4G.2): Colored measurement noise.

State space models
For state-space descriptions, the conditional expectation of $y(t)$, given data $y(s), u(s),\, s \le t-1$, is

$$\hat{y}(t|\theta) = C(\theta)\hat{x}(t,\theta).$$

The conditional expectation $\hat{x}(t,\theta)$ of $x(t)$ is given by the Kalman filter:

$$\hat{x}(t+1,\theta) = A(\theta)\hat{x}(t,\theta) + B(\theta)u(t) + K(\theta)\left[y(t) - C(\theta)\hat{x}(t,\theta)\right].$$

Here $K(\theta)$ is given by

$$K(\theta) = \left[A(\theta)\bar{P}(\theta)C^T(\theta) + R_{12}(\theta)\right]\left[C(\theta)\bar{P}(\theta)C^T(\theta) + R_2(\theta)\right]^{-1},$$

where $\bar{P}(\theta)$ is obtained as the positive semidefinite solution of the stationary Riccati equation

$$\bar{P}(\theta) = A(\theta)\bar{P}(\theta)A^T(\theta) + R_1(\theta) - K(\theta)\left[C(\theta)\bar{P}(\theta)C^T(\theta) + R_2(\theta)\right]K^T(\theta).$$
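In the scalar case the stationary Riccati equation can be solved by simple fixed-point iteration. The numbers below are hypothetical (with $R_{12} = 0$); this is a sketch of the computation, not a general solver.

```python
# Scalar sketch of the stationary Kalman filter (hypothetical numbers,
# R12 = 0): iterate the stationary Riccati equation
#   P = a^2 P + r1 - (a P c)^2 / (c^2 P + r2)
# to its fixed point, then the gain is K = a P c / (c^2 P + r2).

def stationary_kalman(a, c, r1, r2, iters=200):
    P = r1                                   # any nonnegative start converges here
    for _ in range(iters):
        P = a * a * P + r1 - (a * P * c) ** 2 / (c * c * P + r2)
    K = a * P * c / (c * c * P + r2)
    return P, K

P, K = stationary_kalman(a=0.9, c=1.0, r1=0.1, r2=1.0)
```

For matrix-valued systems the same iteration applies with transposes and inverses in place of the scalar operations.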

State space models
The predictor filter can thus be written as

$$\hat{y}(t|\theta) = C(\theta)\left[qI - A(\theta) + K(\theta)C(\theta)\right]^{-1}\left\{B(\theta)u(t) + K(\theta)y(t)\right\}.$$

Exercise: Show that the covariance matrix of the state-estimation error $x(t) - \hat{x}(t,\theta)$ is $\bar{P}(\theta)$.

State space models
Innovation representation. The innovation is the part of $y(t)$ that cannot be predicted from past data. Let it be $e(t)$:

$$e(t) = y(t) - \hat{y}(t|\theta) = y(t) - C(\theta)\hat{x}(t,\theta).$$

Substituting into the Kalman filter gives the innovation form of the state-space description:

$$\hat{x}(t+1,\theta) = A(\theta)\hat{x}(t,\theta) + B(\theta)u(t) + K(\theta)e(t), \qquad y(t) = C(\theta)\hat{x}(t,\theta) + e(t).$$

Exercise: Show that the covariance of $e(t)$ is $\Lambda(\theta) = C(\theta)\bar{P}(\theta)C^T(\theta) + R_2(\theta)$.

State space models
Innovation representation. Instead of deriving $K(\theta)$ from $R_1$, $R_2$, $R_{12}$ via the Riccati equation, we can let the entries of $K$ be parameters themselves. This gives the directly parameterized innovations form

$$\hat{x}(t+1,\theta) = A(\theta)\hat{x}(t,\theta) + B(\theta)u(t) + K(\theta)e(t), \qquad y(t) = C(\theta)\hat{x}(t,\theta) + e(t).$$

Which of the two involves fewer parameters? Either one, depending on the situation.

State space models
Innovation representation. For a suitable (companion-form) parameterization, the innovation form is exactly an ARMAX model.

State space models
Example 4.2: Companion form parameterization. Let

$$A(\theta) = \begin{bmatrix} -a_1 & 1 & 0 & \cdots & 0 \\ -a_2 & 0 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \\ -a_{n-1} & 0 & 0 & \cdots & 1 \\ -a_n & 0 & 0 & \cdots & 0 \end{bmatrix}, \qquad B(\theta) = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}, \qquad K(\theta) = \begin{bmatrix} c_1 - a_1 \\ c_2 - a_2 \\ \vdots \\ c_n - a_n \end{bmatrix},$$

with $C = [1\; 0\; \cdots\; 0]$. So we have an ARMAX model with

$$A(q) = 1 + a_1q^{-1} + \dots + a_nq^{-n}, \quad B(q) = b_1q^{-1} + \dots + b_nq^{-n}, \quad C(q) = 1 + c_1q^{-1} + \dots + c_nq^{-n}.$$
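The deterministic part of the companion-form claim can be checked numerically. The second-order restriction and the coefficients below are my own test choices: the impulse response of the state-space model should coincide with that of $B(q)/A(q)$.

```python
# Quick numeric check of the companion-form parameterization, restricted to
# n = 2 with hypothetical coefficients: the state-space model with
#   A = [[-a1, 1], [-a2, 0]],  B = [b1, b2]^T,  C = [1, 0]
# has the same impulse response as B(q)/A(q).

def ss_impulse(a1, a2, b1, b2, n):
    x, g = [0.0, 0.0], []
    for t in range(n):
        g.append(x[0])                               # y(t) = C x(t) = x1(t)
        u = 1.0 if t == 0 else 0.0                   # unit impulse input
        x = [-a1 * x[0] + x[1] + b1 * u, -a2 * x[0] + b2 * u]
    return g

def tf_impulse(a1, a2, b1, b2, n):
    g = []
    for t in range(n):                               # A(q) g = B(q) delta
        val = b1 if t == 1 else (b2 if t == 2 else 0.0)
        if t >= 1:
            val -= a1 * g[t - 1]
        if t >= 2:
            val -= a2 * g[t - 2]
        g.append(val)
    return g

g_ss = ss_impulse(-0.8, 0.15, 1.0, 0.3, 12)
g_tf = tf_impulse(-0.8, 0.15, 1.0, 0.3, 12)          # the two lists coincide
```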

Identifiability of some model structures
Topics to be covered include:
- Linear models and sets of linear models.
- A family of transfer function models.
- State space models.
- Identifiability of some model structures.

Identifiability of some model structures
Some notation. It is convenient to introduce more compact notation for a model:

$$T(q,\theta) = [G(q,\theta)\;\; H(q,\theta)].$$

The one-step-ahead predictor is

$$\hat{y}(t|\theta) = W_u(q,\theta)u(t) + W_y(q,\theta)y(t), \qquad W_u = H^{-1}G, \quad W_y = 1 - H^{-1},$$

collected as $W(q,\theta) = [W_u(q,\theta)\;\; W_y(q,\theta)]$.

Identifiability of some model structures
Definition 4.1. A predictor model of a linear, time-invariant system is a stable filter $W(q)$.

Definition 4.2. A complete probabilistic model of a linear, time-invariant system is a pair $(W(q), f_e(x))$ of a predictor model $W(q)$ and the PDF $f_e(x)$ of the associated errors.

Clearly, we can also have models where the PDFs are only partially specified (e.g., by the variance of $e$). We shall say that two models $W_1(q)$ and $W_2(q)$ are equal if

$$W_1(e^{i\omega}) = W_2(e^{i\omega}) \quad \text{for almost all } \omega.$$

Identifiability of some model structures
Identifiability properties. The problem is whether the identification procedure will yield a unique value of the parameter $\theta$, and/or whether the resulting model is equal to the true system.

Definition 4.6. A model structure $\mathcal{M}$ is globally identifiable at $\theta^*$ if

$$\mathcal{M}(\theta) = \mathcal{M}(\theta^*),\;\; \theta \in D_\mathcal{M} \implies \theta = \theta^*.$$

Definition 4.7. A model structure $\mathcal{M}$ is strictly globally identifiable if it is globally identifiable at all $\theta^* \in D_\mathcal{M}$.

This definition is quite demanding. A weaker and more realistic property is:

Definition 4.8. A model structure $\mathcal{M}$ is globally identifiable if it is globally identifiable at almost all $\theta^* \in D_\mathcal{M}$.

For the corresponding local property, the most natural definition of local identifiability of $\mathcal{M}$ at $\theta^*$ is to require that there exist an $\varepsilon > 0$ such that

$$\mathcal{M}(\theta) = \mathcal{M}(\theta^*),\;\; \|\theta - \theta^*\| \le \varepsilon \implies \theta = \theta^*.$$
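These definitions can be made concrete with a small numeric experiment. The output error structure, the orders and the coefficients below are my own hypothetical example: a pole-zero cancellation lets two different parameter vectors define the same model, so the structure fails global identifiability at such points (consistent with Definition 4.8, which only requires identifiability at almost all $\theta$).

```python
# Hypothetical illustration of non-identifiability: in an output error
# structure y = [B(q)/F(q)] u + e with n_b = n_f = 2, the two parameter
# vectors below share a pole-zero cancellation and define the SAME model
# q^-1 / (1 - 0.5 q^-1), so the structure is not globally identifiable there.

def impulse_response(f, b, n):
    """g(t) from F(q) g = B(q) delta, F = 1 + f1 q^-1 + f2 q^-2, B = b1 q^-1 + b2 q^-2."""
    g = []
    for t in range(n):
        val = b[t - 1] if 1 <= t <= len(b) else 0.0
        for k, fk in enumerate(f, start=1):
            if t - k >= 0:
                val -= fk * g[t - k]
        g.append(val)
    return g

theta1 = ([-0.8, 0.15], [1.0, -0.3])   # F = (1-0.5q^-1)(1-0.3q^-1), B = q^-1(1-0.3q^-1)
theta2 = ([-0.7, 0.10], [1.0, -0.2])   # F = (1-0.5q^-1)(1-0.2q^-1), B = q^-1(1-0.2q^-1)
g1 = impulse_response(*theta1, 15)
g2 = impulse_response(*theta2, 15)     # identical to g1, although theta1 != theta2
```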

Identifiability of some model structures
Use of the identifiability concept. The identifiability concept concerns the unique representation of a given system description in a model structure. Let such a description be

$$\mathcal{S}: \quad y(t) = G_0(q)u(t) + H_0(q)e_0(t).$$

Let $\mathcal{M}$ be a model structure based on one-step-ahead predictors for

$$y(t) = G(q,\theta)u(t) + H(q,\theta)e(t).$$

Then define the set $D_T(\mathcal{S},\mathcal{M})$ as those $\theta$-values in $D_\mathcal{M}$ for which $\mathcal{S} = \mathcal{M}(\theta)$:

$$D_T(\mathcal{S},\mathcal{M}) = \{\theta \in D_\mathcal{M} \mid \mathcal{S} = \mathcal{M}(\theta)\}.$$

The set is empty in case $\mathcal{S} \notin \mathcal{M}$. Now suppose that $\mathcal{S} \in \mathcal{M}$, so that $\mathcal{S} = \mathcal{M}(\theta_0)$ for some $\theta_0$.

Identifiability of some model structures
A model structure $\mathcal{M}$ is globally identifiable at $\theta_0$ if and only if the set $D_T(\mathcal{M}(\theta_0), \mathcal{M})$ consists of the single point $\theta_0$.

Parameterization in terms of physical parameters
