
1 Distributed state estimation with moving horizon observers. Marcello Farina, Giancarlo Ferrari-Trecate, Riccardo Scattolini. Dipartimento di Elettronica e Informazione, Politecnico di Milano. K.U. Leuven, Optimization in Engineering Center, 23rd June 2009.

2 Outline: Introduction; Statement of the problem; State of the art; The DMHE algorithm; Some improvements; Conclusions.

3 Outline: Introduction; Statement of the problem; State of the art; The DMHE algorithm; Some improvements; Conclusions.

4 Introduction. Advances in wireless communications and in electronics have enabled the development of sensor nodes with sensing, data-processing, and communication components: low cost, small size, and low power consumption (+), but also limited memory and low computational power (-). A sensor network is composed of a large number of such sensor nodes. Features of the sensor nodes in a sensor network: (1) they make a cooperative effort; (2) they locally carry out computations and transmit only required and partially processed data; (3) sensor network algorithms and protocols must possess self-organization capabilities.

5 Introduction. Applications: health (sensor nodes can be deployed to monitor patients); environmental monitoring (preventing forest fires, forecasting pollutant distribution over regions); home (improving quality and energy efficiency of environmental controls such as air conditioning and ventilation systems, while allowing reconfiguration and customization, besides saving wiring costs). Advantages (and challenges) of sensor networks: building large-scale networks; implementing sophisticated protocols; reducing the amount of communication required to perform tasks through distributed and/or local computations; implementing complex power-saving modes of operation. Distributed state estimation in sensor networks is therefore a key problem.

6 Introduction. Introductory example [3]: temperature measurement with 4 sensors (i = 1,..., 4) reporting to a base station. The temperature has its own dynamic evolution and is measured by the 4 sensors through their sensing models (equations shown on the slide); the measurement noises have the same variance. To obtain a more reliable temperature estimate, data fusion is used. CASE I: centralized estimator at the base station, where l is the optimal Kalman gain.
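A plausible reconstruction of the equations shown only as images on this slide, assuming the standard scalar random-walk temperature model commonly used in consensus-filtering examples; the symbols w_k, v_k^i and the gain l are named on the slide, everything else is an assumption:

```latex
% Hypothetical reconstruction of the slide's temperature example.
% Dynamics (random walk) and sensing models for i = 1,...,4:
\begin{align*}
  T_{k+1} &= T_k + w_k, \\
  y_k^i   &= T_k + v_k^i, \qquad \operatorname{var}(v_k^i) = r \ \ \text{(same variance for all } i\text{)}.
\end{align*}
% Centralized estimator (CASE I): the base station fuses all four measurements
% and corrects with the optimal Kalman gain l:
\begin{align*}
  \hat T_{k+1} = \hat T_k + l \left( \tfrac{1}{4}\sum_{i=1}^{4} y_k^i - \hat T_k \right).
\end{align*}
```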

7 Introduction. CASE II: distributed estimation. The sensors are arranged in a communication graph, and the measurements cannot be sent simultaneously and instantaneously to a base station. Each sensor computes a local estimate of T_k based on the available information, according to an update equation (shown on the slide) that takes a mean of "regional" quantities, both measurements and estimates, i.e. an average of regional values obtained with consensus. Increasing the number of transmissions N_T among neighbors within a sampling time, the local filters become "optimal", recovering the performance of the centralized filter.
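A hedged sketch of the local update described in words on this slide ("mean over regional measures and estimates"); the neighbor set V_i and consensus weights k_ij follow the notation introduced later in the talk, and the exact form is an assumption:

```latex
% Sensor i averages the estimates and measurements of its neighborhood V_i
% (weights k_ij from the consensus matrix K), then applies the gain l:
\begin{align*}
  \hat T_{k+1}^{\,i} \;=\; \sum_{j \in \mathcal{V}_i} k_{ij}\,\hat T_k^{\,j}
  \;+\; l\!\left( \sum_{j \in \mathcal{V}_i} k_{ij}\, y_k^{\,j}
  \;-\; \sum_{j \in \mathcal{V}_i} k_{ij}\,\hat T_k^{\,j} \right).
\end{align*}
```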

8 Introduction. Consensus provides agreement among local variables. Take x_0 = y (the vector of measurements). A "static" consensus (on measurements) algorithm performs, at each consensus step, x_{s+1} = K x_s, s >= 0. Averaging is carried out by means of the matrix K, which must be stochastic (nonnegative entries, rows summing to one) and compatible with the graph (k_ij = 0 whenever j is not a neighbor of i); the graph is described by its vertices and edges. Note: K is doubly stochastic if, in addition, its columns also sum to one.
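A minimal runnable sketch of the static consensus iteration x_{s+1} = K x_s; the 4-node ring graph and the Metropolis weights used to build a doubly stochastic K are assumptions, not the slide's actual graph:

```python
import numpy as np

# Assumed 4-node undirected ring (node indices 0..3 here).
edges = [(0, 1), (1, 3), (3, 2), (2, 0)]
n = 4

# Metropolis weights give a doubly stochastic K compatible with the graph.
deg = np.zeros(n)
for a, b in edges:
    deg[a] += 1
    deg[b] += 1

K = np.zeros((n, n))
for a, b in edges:
    w = 1.0 / (1.0 + max(deg[a], deg[b]))
    K[a, b] = K[b, a] = w
K += np.diag(1.0 - K.sum(axis=1))          # rows (and columns) sum to 1

y = np.array([20.1, 19.8, 20.4, 20.0])      # local measurements
x = y.copy()                                # x_0 = y
for s in range(50):                         # x_{s+1} = K x_s
    x = K @ x

print(x, "->", y.mean())                    # each entry converges to the average
```

Because K is doubly stochastic, the iteration preserves the sum of the entries and every node converges to the network-wide average of the measurements.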

9 Outline: Introduction; Statement of the problem; State of the art; The DMHE algorithm; Some improvements; Conclusions.

10 Statement of the problem. The measured system evolves according to linear dynamics (equation shown on the slide), with constrained state and constrained disturbance; the initial state x_0 is a random variable with mean x̄_0 and var(x_0) = Π_0, and w_t is white noise with known covariance. The system is sensed by M nodes, with sensing models whose measurement noise is white with known covariance.
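The process and sensing equations appear on the slide only as images; the following is the standard linear form consistent with the rest of the talk, given here as an assumed reconstruction (the covariance symbols Q and R^i are assumptions):

```latex
\begin{align*}
  x_{t+1} &= A\,x_t + w_t, & x_t &\in \mathcal{X}, \ \ w_t \in \mathcal{W}, \\
  y_t^i   &= C^i x_t + v_t^i, & i &= 1,\dots,M,
\end{align*}
% x_0 random with mean \bar{x}_0 and variance \Pi_0;
% w_t white with covariance Q; v_t^i white with covariance R^i.
```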

11 Statement of the problem. The communication network is described by a directed graph G = (V, E), where V is the set of vertices and E is the set of edges. The neighbors of node i form the set V_i of nodes that can transmit to i, including i itself. Example: the 4-node graph shown on the slide. We associate to the graph a matrix K that is stochastic and compatible with the graph. N_T denotes the number of transmissions occurring in a sampling interval.

12 Statement of the problem. Local, regional, and collective quantities. Assumption: at a generic time instant t, sensor i collects its own measurements and those of its neighbors. Example (N_T = 2): node 4 has a local measurement and a regional measurement, and the network as a whole has a collective measurement. Notation: a local quantity (w.r.t. i) refers to node i only (indicated with z^i); a regional quantity (w.r.t. i) refers to the neighborhood V_i (indicated on the slide with a dedicated symbol); a collective quantity refers to the whole network. Definitions: the system is locally observable by node i if the pair built from its own output matrix is observable; regionally observable by node i if the pair built from the stacked output matrices of its neighborhood is observable; and collectively observable if the pair built from all output matrices is observable (see the formalization below).
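A compact formalization of the local/regional/collective definitions above; the stacking notation (col) is an assumption, the content follows the slide:

```latex
% Regional measurement of node i: stack the outputs of i and of its neighbors.
\begin{align*}
  \tilde{y}_t^{\,i} = \operatorname{col}\!\big(y_t^{\,j},\ j \in \mathcal{V}_i\big),
  \qquad
  \tilde{C}^i = \operatorname{col}\!\big(C^j,\ j \in \mathcal{V}_i\big).
\end{align*}
% Then:  locally observable by i     <=>  (A, C^i)               observable
%        regionally observable by i  <=>  (A, \tilde{C}^i)        observable
%        collectively observable     <=>  (A, col(C^1,...,C^M))   observable
```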

13 Statement of the problem. Further definitions: the i-th sensor regional observability matrix (built from A and the stacked regional output matrix); the i-th sensor regionally unobservable subspace (its kernel); and the orthogonal projection matrix onto that subspace. Example: if the system is regionally observable by all nodes of the graph, then all the unobservable subspaces are trivial and the corresponding projection matrices are zero.
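A short numerical sketch of these definitions; the matrices A and C^j and the neighbor set are placeholders chosen for illustration, not the slide's data:

```python
import numpy as np
from scipy.linalg import null_space

def regional_observability_matrix(A, C_list, neighbors_i):
    """Stack C^j for j in V_i and build the regional observability matrix O_i."""
    C_i = np.vstack([C_list[j] for j in neighbors_i])
    n = A.shape[0]
    return np.vstack([C_i @ np.linalg.matrix_power(A, k) for k in range(n)])

# Placeholder data: node 1 measures only x2, and A is upper triangular,
# so x1 is regionally unobservable to node 1.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
C_list = [np.array([[0.0, 1.0]]), np.array([[1.0, 0.0]])]
V_1 = [0]                       # node 1 "sees" only its own output here

O_1 = regional_observability_matrix(A, C_list, V_1)
unobs = null_space(O_1)         # basis of the regionally unobservable subspace
# Orthogonal projection onto the unobservable subspace (empty basis -> zero matrix).
P_1 = unobs @ unobs.T if unobs.size else np.zeros((A.shape[0],) * 2)

print("rank O_1 =", np.linalg.matrix_rank(O_1))
print("projection onto unobservable subspace:\n", P_1)
```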

14 Statement of the problem. Isolated and strongly connected subgraphs. In the example graph on the slide, the overall graph is not strongly connected; it contains strongly connected subgraphs, and only one isolated subgraph, i.e. a subgraph with no incoming paths from the rest of the network.
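A sketch of how the subgraph classification on this slide could be checked programmatically; the example graph is hypothetical, not the one drawn on the slide:

```python
import networkx as nx

# Hypothetical directed communication graph.
G = nx.DiGraph([(1, 2), (2, 1), (2, 3), (3, 4), (4, 3)])

# Strongly connected subgraphs (components).
sccs = list(nx.strongly_connected_components(G))          # here {1, 2} and {3, 4}

# The condensation is a DAG whose nodes are the SCCs; an SCC with no incoming
# edges in the condensation is an "isolated" subgraph (no incoming paths).
C = nx.condensation(G)
isolated = [C.nodes[c]["members"] for c in C.nodes if C.in_degree(c) == 0]

print("strongly connected subgraphs:", sccs)
print("isolated subgraphs (no incoming paths):", isolated)
```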

15 Outline: Introduction; Statement of the problem; State of the art; The DMHE algorithm; Some improvements; Conclusions.

16 State of the art: distributed Kalman filters. Starting point: the CENTRALIZED Kalman filter (at the base station), written as an information filter with a predictor step and a corrector step. To carry out the corrector step, one needs collective data, i.e. quantities involving the measurements of all sensors.
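A standard information-filter form of the corrector, which makes explicit why this step needs collective data (the sums run over all M sensors); the covariance symbol P and the weighting matrices follow the assumed model notation above:

```latex
% Corrector in information form: P is the error covariance, \hat{x} the estimate.
\begin{align*}
  P_{t/t}^{-1}              &= P_{t/t-1}^{-1} + \sum_{i=1}^{M} (C^i)^{\!\top} (R^i)^{-1} C^i, \\
  P_{t/t}^{-1}\hat{x}_{t/t} &= P_{t/t-1}^{-1}\hat{x}_{t/t-1} + \sum_{i=1}^{M} (C^i)^{\!\top} (R^i)^{-1} y_t^i .
\end{align*}
% Predictor: \hat{x}_{t+1/t} = A \hat{x}_{t/t},  P_{t+1/t} = A P_{t/t} A^\top + Q.
```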

17 State of the art: distributed Kalman filters. DECENTRALIZED Kalman filter (DKF) [7,8,10]: the predictor and corrector steps are carried out locally by the M sensors, where the slide's notation denotes the estimate of x_{k_1} carried out by sensor i at instant k_2. Local estimates are obtained with consensus filters on the basis of regional measurements. If regional observability does not hold, the algorithm does not give reliable results; the approximation is good for N_T >> 1. Th. 1: stated on the slide as an equation. Th. 2: if A is stable, then the local state estimates converge in mean to the centralized state estimate.

18 State of the art: distributed Kalman filters. Transmission of measurements and estimates [10]: predictor and corrector steps, plus consensus on estimates and consensus on measurements. Note that: (i) the proof of convergence of the estimate is given for a similar (continuous-time) algorithm, under collective observability; this does not guarantee convergence of the discrete-time algorithm; (ii) the optimal gain, in the case of a distributed algorithm, is not the Kalman gain (Carli et al. [3]); therefore, optimality is not guaranteed by this algorithm.

19 State of the art: distributed Kalman filters. To guarantee optimality of the DKF, the Kalman gain and the weights of the sensors' estimates should result from an optimization [1,2]: an information update (consensus on measurements), a consensus on estimates (regional), and a prediction step, with G_i and K = {k_ij} optimized to minimize the steady-state error covariance matrices. Note that: (i) the minimization problem over G_i and K = {k_ij} is not convex [3]; (ii) a bootstrap (iterative two-step) algorithm is proposed in [1,2], which does not necessarily guarantee optimality; (iii) convergence of the algorithm has not been addressed.

20 State of the art: moving horizon estimators. OUR starting point: moving horizon estimators [11,12]. Given the measurements collected up to time t, the optimal state estimate is obtained from the solution of a constrained least-squares problem (1), shown on the slide; its optimizers are the initial state and the disturbance sequence, and the estimated state sequence can then be obtained from the dynamic constraints given by the model equations.
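A reconstruction of problem (1), hedged to the standard constrained least-squares form of [11,12]; the slide's exact weighting symbols are not visible, so Π_0, Q, R are assumptions:

```latex
% Full-information problem over the data y_1,...,y_t (centralized case):
\begin{align*}
  \min_{\hat{x}_0,\,\{\hat{w}_k\}}\ \
  &\|\hat{x}_0 - \bar{x}_0\|^2_{\Pi_0^{-1}}
   + \sum_{k=0}^{t-1} \|\hat{w}_k\|^2_{Q^{-1}}
   + \sum_{k=1}^{t} \|y_k - C\hat{x}_k\|^2_{R^{-1}} \\
  \text{s.t.}\ \
  &\hat{x}_{k+1} = A\hat{x}_k + \hat{w}_k, \qquad
   \hat{x}_k \in \mathcal{X},\ \ \hat{w}_k \in \mathcal{W}.
\end{align*}
% The optimizers \hat{x}_0^*, \{\hat{w}_k^*\} generate the whole estimated
% state sequence through the model equations.
```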

21 State of the art: moving horizon estimators. We rearrange the problem by breaking the interval [1, t] into [1, t-N] and [t-N+1, t]. Problem (1) is then equivalent to a problem over the last N samples plus an arrival cost, which summarizes the discarded data. In the unconstrained case, the arrival cost Θ_{t-N/t-N}(x) is quadratic and is built from Π_{t-N/t-N}, the Kalman filter error covariance matrix.
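The corresponding moving-horizon reformulation, again in the standard notation of [11,12]; the arrival-cost symbol on the slide is not readable, so Θ is used here as an assumed name:

```latex
% Split [1,t] into [1,t-N] and [t-N+1,t]: only the last N data are kept and the
% discarded part is summarized by the arrival cost:
\begin{align*}
  \min_{\hat{x}_{t-N},\,\{\hat{w}_k\}}\
  \Theta_{t-N/t-N}\big(\hat{x}_{t-N}\big)
  + \sum_{k=t-N}^{t-1} \|\hat{w}_k\|^2_{Q^{-1}}
  + \sum_{k=t-N+1}^{t} \|y_k - C\hat{x}_k\|^2_{R^{-1}} .
\end{align*}
% Unconstrained case: the arrival cost is exactly quadratic,
\begin{align*}
  \Theta_{t-N/t-N}(x) = \|x - \hat{x}_{t-N/t-N}\|^2_{\Pi_{t-N/t-N}^{-1}} + \text{const},
\end{align*}
% with \Pi_{t-N/t-N} the Kalman filter error covariance matrix.
```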

22 State of the art: moving horizon estimators. In the constrained case, only an approximation of Θ_{t-N/t-N}(x) can be given, through an initial penalty term Γ_{t-N}(·). A suitable choice of Γ_{t-N} is sufficient to make the MHE convergent. With this choice, the MHE cost function takes the form shown on the slide.

23 State of the art: moving horizon estimators. An alternative approach is also proposed: the smoothing update [12]. In the standard (filtering) update, the estimate and the transit cost are passed between adjacent data windows ([t-N, t] and [t, t+N]); in the smoothing update, the estimate is passed between overlapping data windows ([t-N, t] and [t-N+1, t+1]).

24 Outline: Introduction; Statement of the problem; State of the art; The DMHE algorithm; Some improvements; Conclusions.

25 The DMHE algorithm. The basic algorithm has been designed for N_T = 1 [5]. For N >= 1, each sensor solves the local MHE-i constrained minimization problem, built from the regional measurements (consensus on measurements), under the dynamic and state/disturbance constraints. The initial penalty and consensus-on-estimates term Γ^i_{t-N} is centered on the "center of mass" of the estimates of x_{t-N} provided by the i-th node's neighbors at time t-1 (a smoothing update, so that recent information is used for the consensus).
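A sketch of the MHE-i cost assembled from the ingredients listed on this slide (regional measurements, local disturbance, initial penalty with consensus term); the precise weights are not readable from the slide, so this is a structural reconstruction rather than the exact formulation of [5]:

```latex
% Local MHE-i problem for sensor i at time t (regional measurements j in V_i):
\begin{align*}
  \min_{\hat{x}^i_{t-N},\,\{\hat{w}^i_k\}}\ \
  & \Gamma^i_{t-N}\!\big(\hat{x}^i_{t-N}\big)
    + \sum_{k=t-N}^{t-1} \|\hat{w}^i_k\|^2_{Q^{-1}}
    + \sum_{k=t-N+1}^{t} \sum_{j\in\mathcal{V}_i}
        \|y^j_k - C^j\hat{x}^i_k\|^2_{(R^j)^{-1}} \\
  \text{s.t.}\ \
  & \hat{x}^i_{k+1} = A\hat{x}^i_k + \hat{w}^i_k, \qquad
    \hat{x}^i_k \in \mathcal{X},\ \ \hat{w}^i_k \in \mathcal{W}.
\end{align*}
% The initial penalty / consensus-on-estimates term is centered on the weighted
% "center of mass" of the neighbors' estimates of x_{t-N} (smoothing update), e.g.
\begin{align*}
  \Gamma^i_{t-N}(x) = \big\|x - \bar{x}^i_{t-N}\big\|^2_{(\Pi^i_{t-N/t-1})^{-1}},
  \qquad
  \bar{x}^i_{t-N} = \sum_{j\in\mathcal{V}_i} k_{ij}\,\hat{x}^{\,j}_{t-N/t-1}.
\end{align*}
```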

26 The DMHE algorithm. How is the weight Π^i_{t-N/t-1} of the initial penalty computed? (I) a LOCAL UPDATE (Riccati-like update), followed by (II) a GLOBAL UPDATE (consensus on estimates). In the original formulation, Π_{t-N/t-1} is the result of an optimization problem subject to an LMI constraint; in that form it is not a distributed problem. Proposition: the distributed update shown on the slide satisfies the LMI for all i.
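A sketch, under stated assumptions, of the two-step covariance update: the local step is the usual Riccati-like (Kalman) update built from the regional output matrix, while the global consensus step is approximated here by a K-weighted combination in information form. The exact LMI-based update of [5] appears on the slide only as an image, so that combination rule is an assumption:

```python
import numpy as np

def local_riccati_update(A, Q, C_reg, R_reg, Pi):
    """Riccati-like (Kalman) update of the covariance using regional data."""
    S = C_reg @ Pi @ C_reg.T + R_reg
    L = Pi @ C_reg.T @ np.linalg.inv(S)
    Pi_corr = Pi - L @ C_reg @ Pi                 # measurement correction
    return A @ Pi_corr @ A.T + Q                  # time update

def consensus_update(Pis, K, i):
    """ASSUMPTION: combine neighbors' matrices with weights k_ij in information
    (inverse-covariance) form; not necessarily the LMI-based update of [5]."""
    info = sum(K[i, j] * np.linalg.inv(Pis[j]) for j in range(len(Pis)))
    return np.linalg.inv(info)

# Tiny illustrative data (2 sensors, placeholder matrices).
A = np.array([[0.9, 0.1], [0.0, 0.8]]); Q = 0.01 * np.eye(2)
C_reg = [np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]])]
R_reg = [np.eye(1), np.eye(1)]
K = np.array([[0.5, 0.5], [0.5, 0.5]])
Pis = [np.eye(2), np.eye(2)]

Pis = [local_riccati_update(A, Q, C_reg[i], R_reg[i], Pis[i]) for i in range(2)]
Pis = [consensus_update(Pis, K, i) for i in range(2)]
print(Pis[0])
```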

27 The DMHE algorithm. Can we choose K such that the matrices Π^i_{t-N/t-1} remain bounded? Theorem 1: if the set of regionally observable nodes is non-empty and, for every node i in the set of unobservable nodes, there exists a path to i from an observable sensor, then there exists K, compatible with the graph, such that the Π^i_{t-N/t-1} are bounded. A conservative choice of K is obtained with Algorithm 1 (on the slide); terms on the lower-triangular part of K can then be "re-added", since Theorem 1 states that one can choose non-zero upper-triangular terms and still guarantee boundedness.

28 The DMHE algorithm. Convergence issues. Definition: consider the system with w_t = 0 and initial condition x_0, and denote by x(t, x_0) its solution (assumed feasible); DMHE is convergent if the collective estimation error tends to zero for this noise-free system. We also define the orthogonal projection matrix onto the relevant unobservable subspace. Lemma 1: if the Π^i_{t-N/t-1} are chosen as previously stated and are bounded, then the collective estimation error satisfies the recursion shown on the slide, governed by a matrix Φ. Theorem 2: if the Π^i_{t-N/t-1} are chosen as previously stated and are bounded, then DMHE is convergent if the matrix Φ is Schur.
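Checking the Schur condition of Theorem 2 numerically is straightforward; the matrix below is only a placeholder for the error-transition matrix Φ built from A, K, and the projection matrices:

```python
import numpy as np

def is_schur(Phi, tol=1e-9):
    """A matrix is Schur if all eigenvalues lie strictly inside the unit circle."""
    return np.max(np.abs(np.linalg.eigvals(Phi))) < 1.0 - tol

Phi = np.array([[0.5, 0.2], [0.0, 0.7]])   # placeholder error-transition matrix
print(is_schur(Phi))                       # True -> DMHE convergence (Theorem 2)
```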

29 The DMHE algorithm. Example: a fourth-order system with eigenvalues λ_1 = 0.9264, λ_2 = 0.4517, λ_{3,4} = 0.99 ± 0.4795i (an unstable system). Regional measurements: x_3 and x_4 are not regionally observable to node 2 (the unstable subsystem); x_1 and x_2 are not regionally observable to node 4; the full state is regionally observable to node 1 and to node 3.

30 The DMHE algorithm. With the choice of K shown on the slide, the eigenvalues of Π^i_{t-N/t-1} remain bounded, even though x_3 and x_4 are not regionally observable to node 2 (the unstable subsystem) and x_1 and x_2 are not regionally observable to node 4 (the stable subsystem).

31 The DMHE algorithm. Simulation: let e_t be a white noise signal with the covariance matrix given on the slide. The plots on the slide compare the unconstrained case and the constrained case for sensor 2's and sensor 4's estimates.

32 Outline: Introduction; Statement of the problem; State of the art; The DMHE algorithm; Some improvements; Conclusions.

33 Some improvements. If state constraints are not required [4], Lemma 1 and Theorem 2 also hold when Π^i_{t-N/t-1} is not a bounded sequence. Lemma 1*: if the Π^i_{t-N/t-1} are chosen as previously stated, then the error recursion of Lemma 1 still holds. Theorem 2*: if the Π^i_{t-N/t-1} are chosen as previously stated, then DMHE is convergent if the matrix Φ is Schur. This result is obtained by defining an alternative initial penalty term that treats the observable and unobservable parts of the state separately.

34 Some improvements. How can state estimation be enhanced [6] by exploiting a better transmission of measurements? (1) If N_T >= 1, the set of regional measurements becomes larger as N_T increases. (2) If N_T = 1, past measurements can be used (as also suggested in [1], through an augmented system approach; see the sketch below). Two transmission protocols are introduced: P1) N_T >= 1, with the sets of regional measurements collected at time t growing with N_T; P2) N_T = 1, with the sets of regional measurements collected at time t also including delayed data relayed by neighbors (example on the slide).
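The "augmented system approach" of [1] is not spelled out on the slide; the sketch below is a standard state-augmentation construction that matches the idea (re-using one-step-delayed measurements with N_T = 1), and should be read as an assumption rather than the exact construction used in [1]/[6]:

```latex
% Augment the state with one delayed copy, so that a measurement of x_{t-1}
% received at time t becomes an ordinary output of the augmented state z_t:
\begin{align*}
  z_t = \begin{bmatrix} x_t \\ x_{t-1} \end{bmatrix}, \qquad
  z_{t+1} = \begin{bmatrix} A & 0 \\ I & 0 \end{bmatrix} z_t
            + \begin{bmatrix} I \\ 0 \end{bmatrix} w_t, \qquad
  y^{\,j}_{t-1} = \begin{bmatrix} 0 & C^j \end{bmatrix} z_t + v^{\,j}_{t-1}.
\end{align*}
```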

35 Some improvements. The basic algorithm collects the regional measurements of a given set, whose size p^i_{t-k} is non-decreasing as (t-k) increases; in general, the resulting estimation problem is time-varying. Consequences: (1) the dynamic constraints change; (2) the consensus-on-estimates matrix changes: with N_T communication steps, the 1-step consensus matrix K is replaced by a matrix K* compatible with the graph generated by K^{N_T} (K* = K^{N_T} is a possible choice); (3) the regional observability properties change: an s-step observability matrix is defined, and the system is regionally observable by sensor i on horizon N if this matrix has full column rank.

36 Some improvements. Accordingly, the following are redefined: the i-th sensor unobservable subspace on horizon N and the orthogonal projection matrix onto it; the transition matrix of the error dynamics also changes. If the graph is strongly connected and the system is collectively observable, then convergence can always be enforced: as N (and/or N_T) increases, more measurements become available regionally, and as N_T increases the eigenvalues of K* decrease in modulus. For example, with K* = K^{N_T}: if λ_j = eig(K) (with |λ_j| <= 1 by definition), then λ_j^{N_T} = eig(K*).
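A quick numerical illustration of the last point (the eigenvalues of K* = K^{N_T} shrink as N_T grows); the consensus matrix is the Metropolis one from the earlier sketch and remains an assumption:

```python
import numpy as np

# Doubly stochastic consensus matrix on a 4-node ring (Metropolis weights).
K = np.array([[1/3, 1/3, 1/3, 0.0],
              [1/3, 1/3, 0.0, 1/3],
              [1/3, 0.0, 1/3, 1/3],
              [0.0, 1/3, 1/3, 1/3]])

for N_T in (1, 2, 5):
    K_star = np.linalg.matrix_power(K, N_T)            # K* = K^{N_T}
    lams = np.sort(np.abs(np.linalg.eigvals(K_star)))[::-1]
    # The eigenvalue at 1 stays; all others decay as |lambda_j|^{N_T}.
    print(N_T, lams)
```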

37 Outline: Introduction; Statement of the problem; State of the art; The DMHE algorithm; Some improvements; Conclusions.

38 Conclusions. The problem of state estimation in distributed systems is a challenging task (convergence, optimality). The problem must be rigorously stated: local, regional, and collective variables and observability notions; graphs and subgraphs. We analyzed the available distributed estimation algorithms (DKF), with their advantages and limitations. We rely on the moving horizon approach to state estimation: the DMHE algorithm for constrained systems, with convergence results. Some improvements: the case where state constraints are not given; and, if the graph is strongly connected and the system is collectively observable, we can always design a convergent DMHE observer.

39 References
1. Alriksson, P. and Rantzer, A. (2006). Distributed Kalman filtering using weighted averaging. In Proc. of the 17th International Symposium on Mathematical Theory of Networks and Systems.
2. Alriksson, P. and Rantzer, A. (2008). In Proc. IFAC World Congress.
3. Carli, R., Chiuso, A., Schenato, L. and Zampieri, S. (2008). Distributed Kalman filtering based on consensus strategies. IEEE Journal on Selected Areas in Communications, 4.
4. Farina, M., Ferrari-Trecate, G. and Scattolini, R. (2009). A moving horizon scheme for distributed state estimation. Submitted.
5. Farina, M., Ferrari-Trecate, G. and Scattolini, R. (2009). Distributed moving horizon estimation for linear constrained systems. Submitted.
6. Farina, M., Ferrari-Trecate, G. and Scattolini, R. (2009). Distributed moving horizon estimation for sensor networks. IFAC Workshop on Estimation and Control of Networked Systems (NecSys'09), 24-26 September 2009, Venice, Italy. To appear.
7. Kamgarpour, M. and Tomlin, C. (2008). Convergence properties of a decentralized Kalman filter. Proc. 47th IEEE CDC.
8. Olfati-Saber, R. (2005). Distributed Kalman filter with embedded consensus filters. Proc. 44th IEEE CDC.
9. Olfati-Saber, R. and Shamma, J. (2005). Consensus filters for sensor networks and distributed sensor fusion. Proc. 44th IEEE CDC.
10. Olfati-Saber, R. (2007). Distributed Kalman filtering for sensor networks. Proc. 46th IEEE CDC.
11. Rao, C. V., Rawlings, J. and Lee, J. (1999). Stability of constrained linear moving horizon estimation. Proc. ACC.
12. Rao, C. V., Rawlings, J. and Lee, J. (2001). Constrained linear state estimation - a moving horizon approach. Automatica, 37.

