Fault Detection and Diagnosis in Engineering Systems
Basic concepts with simple examples
Janos Gertler
George Mason University, Fairfax, Virginia

Outline
- What is a fault
- What is diagnosis
- Diagnostic approaches
  - Model-free methods
  - Principal component approach
  - Model-based methods
  - Systems identification
- Application example: car engine diagnosis

What is a fault
Fault: malfunction of a system component
  - sensor fault: bias
  - actuator fault: parameter change
  - plant fault: leak, etc.
Symptom: an observable effect of a fault
Noise and disturbance: nuisances that may affect the symptoms

What is a fault
[Figure: a plant driven by an actuator command through an actuator, with a leak in the plant and sensors producing readings; fault locations are marked at the actuator, in the plant and at the sensors.]
Sensor fault: reading is different from the true value
Actuator fault: valve position is different from the command
Plant fault: leak

What is fault diagnosis
Fault detection: indicating if there is a fault
Fault isolation: determining where the fault is
Detection + Isolation = Diagnosis
Fault identification:
  - determining the size of the fault
  - determining the time of onset of the fault

Model-free methods
Fault-tree analysis: cause-effect trees analysed backwards
Spectrum analysis: fault-specific frequencies in sound, vibration, etc.
Limit checking: checking measurements against preset limits

Limit checking
[Figure: a pipe carrying the flow, with leak points l1, l2, l3 and sensors s1, s2, s3 giving readings y_1, y_2, y_3.]

                 y_1       y_2       y_3
S1 fault         off       normal    normal
Leak 3           normal    normal    off
Leak 2           normal    off       off
Leak 1           off       off       off
High/low flow    off       off       off
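
A minimal sketch (in Python, assumed here and for the later examples) of how the table above could be turned into a limit-checking diagnoser; the threshold band and the symptom-to-fault dictionary are illustrative assumptions, not values from the slides:

```python
# Limit-checking sketch for the three-sensor flow example above.
LOW, HIGH = 0.8, 1.2   # assumed "normal" band for each flow reading

# Symptom patterns from the table: (y1, y2, y3) -> candidate fault(s)
FAULT_TABLE = {
    ("off", "normal", "normal"): ["S1 fault"],
    ("normal", "normal", "off"): ["Leak 3"],
    ("normal", "off", "off"):    ["Leak 2"],
    ("off", "off", "off"):       ["Leak 1", "High/low flow"],  # same pattern: limited specificity
}

def symptom(y):
    """Classify one reading as 'normal' or 'off' (outside the preset limits)."""
    return "normal" if LOW <= y <= HIGH else "off"

def diagnose(y1, y2, y3):
    pattern = (symptom(y1), symptom(y2), symptom(y3))
    return FAULT_TABLE.get(pattern, ["no fault / unknown pattern"])

if __name__ == "__main__":
    print(diagnose(1.0, 1.0, 0.5))   # expected: ['Leak 3']
    print(diagnose(0.5, 0.5, 0.5))   # expected: ['Leak 1', 'High/low flow']
```

Note that Leak 1 and a high/low flow share the same symptom pattern, which is exactly the limited specificity discussed on the next slide.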

Limit checking
- Easy to implement
- Requires no design
BUT
- To accommodate "normal" variations, it must have limited fault sensitivity
- Has limited fault specificity (symptom explosion)

Principal Component Approach
Modeling phase: based on normal data
  - determine the subspace where the normal data exist (representation space, RepS)
  - determine the spread (variances) of the data in the RepS
Monitoring phase: compare observations to the representation space
  - if outside the RepS, there is a fault
  - if inside the RepS but outside the thresholds, abnormal operating conditions

Principal Component Approach
[Figure: two-sensor flow example, y_1 = u and y_2 = u; in the (y_1, y_2) plane the normal data spread along the line y_1 = y_2, which is the representation space, while a fault moves the observation away from this line.]

Principal component modeling
Centered, normalised measurements: x(t) = [x_1(t) … x_n(t)]'
Data matrix: X = [x(1) x(2) … x(N)]
Covariance matrix: R = XX'/N
Compute the eigenvalues λ_1 … λ_n and eigenvectors q_1 … q_n of R
q_1 … q_k, k ≤ n, belonging to the nonzero eigenvalues λ_1 … λ_k, span the RepS
λ_1 … λ_k are the variances in the respective directions
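
The modeling phase can be sketched in a few lines of Python/NumPy; the synthetic "normal" data (built from the two-sensor flow example) and the eigenvalue tolerance are assumptions for illustration:

```python
import numpy as np

# Sketch of the PCA modeling phase on synthetic normal data (y1 = u, y2 = u plus noise).
rng = np.random.default_rng(0)
u = rng.normal(1.0, 0.2, size=500)                               # nominal flow
Y = np.vstack([u, u]) + 0.02 * rng.standard_normal((2, 500))     # two sensor readings

X = Y - Y.mean(axis=1, keepdims=True)        # centered measurements, one column per sample
R = X @ X.T / X.shape[1]                     # covariance matrix R = XX'/N
lam, Q = np.linalg.eigh(R)                   # eigenvalues and eigenvectors of R
order = np.argsort(lam)[::-1]                # reorder in decreasing eigenvalue order
lam, Q = lam[order], Q[:, order]

k = int(np.sum(lam > 1e-3 * lam[0]))         # keep eigenvalues well above zero (assumed tolerance)
Q_rep = Q[:, :k]                             # spans the representation space (RepS)
Q_res = Q[:, k:]                             # spans the residual space (ResS)
print("RepS dimension:", k, "variances:", lam[:k])
```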

Principal Components – Residual Space
Residual space (ResS): the complement of the representation space, spanned by the eigenvectors q_{k+1} … q_n belonging to the (near-)zero eigenvalues
Residual = (observation) – (its projection on the RepS)
Residuals exist in the ResS
The ResS provides isolation information
  - directional property (fault-specific response directions)
  - structural property (fault-specific Boolean structures)
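
A sketch of the corresponding monitoring step, computing the residual as the observation minus its projection on the RepS; here the RepS of the two-sensor flow example (the line y_1 = y_2) is hand-coded rather than identified from data, and the test readings are made up:

```python
import numpy as np

# Monitoring-phase sketch: residual = observation - its projection on RepS.
q1 = np.array([1.0, 1.0]) / np.sqrt(2)        # assumed RepS direction (line y1 = y2)
Q_rep = q1.reshape(2, 1)
P_rep = Q_rep @ Q_rep.T                        # projector onto the RepS
P_res = np.eye(2) - P_rep                      # projector onto the residual space

def residual(y):
    """Part of the observation lying outside the representation space."""
    return P_res @ np.asarray(y, dtype=float)

print(residual([1.0, 1.0]))    # normal reading: residual ~ [0, 0]
print(residual([1.0, 0.6]))    # sensor-2 fault: nonzero residual along [1, -1]/sqrt(2)
```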

Residual Space – Directional Property
[Figure: two-sensor flow example in the (y_1, y_2) plane; the representation space is spanned by q_1, and the observation is split into its projection on the RepS and the residual, which lies in the residual space along a fault-specific direction (one direction each for Δu, Δy_1 and Δy_2, marked by q_2 and q_3).]

Residual Space – Structural Property
[Figure: r_1, r_2, r_3 are residuals obtained by projection; each fault Δu, Δy_1, Δy_2 affects a different subset of them.]

Structure matrix (columns are the fault codes):
           Δu    Δy_1    Δy_2
r_1         1      1       0
r_2         1      0       1
r_3         0      1       1

Model-Based Methods
[Figure: the plant maps the inputs u(t), faults f(t), disturbances d(t) and noise n(t) into the outputs y(t), with parameters θ.]
Complete model: y(t) = f[u(·), f(·), d(·), n(·), θ]
Nominal model:  ŷ(t) = f[u(·), θ]
Models are: static/dynamic, linear/nonlinear

Obtaining Models
First-principles models
Empirical models
  - "classical" systems identification
  - principal component approach
  - neuronets

Analytical Redundancy
[Figure: the plant, driven by u(t), f(t), d(t) and n(t), produces y(t); the model output ŷ(t), driven by the same input, is subtracted from y(t) to give e(t), which is passed through residual processing to give r(t).]
Primary residuals: e(t) = y(t) – ŷ(t)
Processed residuals: r(t)
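
A minimal sketch of this primary-residual computation for an assumed first-order discrete-time model; the model coefficients, noise level and injected sensor bias are all illustrative:

```python
import numpy as np

# Analytical-redundancy sketch: primary residual e(t) = y(t) - y_hat(t).
a, b = 0.9, 0.5                     # assumed nominal model: x(t) = a*x(t-1) + b*u(t-1)
N, rng = 200, np.random.default_rng(1)
u = np.ones(N)                      # step input

x = np.zeros(N)                     # "true" plant output (simulated here)
for t in range(1, N):
    x[t] = a * x[t-1] + b * u[t-1] + 0.01 * rng.standard_normal()     # small process noise
y = x + np.where(np.arange(N) >= 100, 0.3, 0.0)    # sensor bias fault injected at t = 100

y_hat = np.zeros(N)                 # nominal model output driven by the same input
for t in range(1, N):
    y_hat[t] = a * y_hat[t-1] + b * u[t-1]

e = y - y_hat                       # primary residual
print("mean |e| before fault:", round(float(np.abs(e[:100]).mean()), 3))
print("mean |e| after fault: ", round(float(np.abs(e[100:]).mean()), 3))
```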

Analytical redundancy
[Figure: the plant, driven by u(t), f(t), d(t) and n(t), produces y(t); a residual generator processes u(t) and y(t) into the residual r(t).]

Residual Properties
Detection properties
  - sensitive to faults
  - insensitive to disturbances (disturbance decoupling)
  - insensitive to model errors (model-error robustness)
    - perfect decoupling under limited circumstances
    - "optimal" decoupling
  - insensitive to noise
    - noise filtering
    - statistical testing

Residual Properties
Isolation properties
  - selectively sensitive to faults
    - structured residuals   (perfect decoupling)
    - directional residuals  (perfect decoupling)
    - "optimal" residuals

Residual Generation
Two-sensor flow example. Model:
  y_1 = u + Δu + Δy_1
  y_2 = u + Δu + Δy_2
Primary residuals:
  e_1 = y_1 – u = Δu + Δy_1
  e_2 = y_2 – u = Δu + Δy_2
Processed residuals:
  r_1 = e_1 = Δu + Δy_1
  r_2 = e_2 = Δu + Δy_2
  r_3 = e_2 – e_1 = Δy_2 – Δy_1
Structured residuals: each fault affects a different subset of {r_1, r_2, r_3} (cf. the structure matrix above)
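
These structured residuals can be coded directly; the fault size, noise level and threshold below are illustrative assumptions:

```python
import numpy as np

# Structured-residual sketch for the two-sensor flow example above.
rng = np.random.default_rng(2)

def residuals(u, y1, y2):
    e1, e2 = y1 - u, y2 - u                   # primary residuals
    return np.array([e1, e2, e2 - e1])        # r1, r2, r3

# Fault codes from the structure matrix: entries are (r1, r2, r3)
S = {"du": (1, 1, 0), "dy1": (1, 0, 1), "dy2": (0, 1, 1)}

def isolate(r, threshold=0.05):
    fired = tuple(int(abs(ri) > threshold) for ri in r)
    return [f for f, code in S.items() if code == fired], fired

u = 1.0
y1 = u + 0.2 + 0.01 * rng.standard_normal()   # sensor-1 bias, dy1 = 0.2
y2 = u + 0.01 * rng.standard_normal()
print(isolate(residuals(u, y1, y2)))          # expected: (['dy1'], (1, 0, 1))
```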

Residual Generation
Two-sensor flow example. Model:
  y_1 = u + Δu + Δy_1
  y_2 = u + Δu + Δy_2
Primary residuals:
  e_1 = y_1 – u = Δu + Δy_1
  e_2 = y_2 – u = Δu + Δy_2
Processed residuals:
  r_1 = e_1 = Δu + Δy_1
  r_2 = e_2 = Δu + Δy_2
  r_3 = e_1 – e_2 = Δy_1 – Δy_2
Directional residuals: the residual vector lies along a fault-specific direction for each of Δu, Δy_1 and Δy_2

Linear Residual Generation Methods
Perfect decoupling
  - direct consistency relations
  - parity relations from the state-space model
  - Luenberger observer
  - unknown input observer
Approximate decoupling
  - the above with singular value decomposition
  - constrained least-squares
  - H-infinity optimization

Linear Residual Generation Methods
Under identical conditions (same plant, same response specification) the various methods lead to identical residual generators

Dynamic Consistency Relations
System description: y(t) = M(q)u(t) + S_f(q)f(t) + S_d(q)d(t),   q: shift operator
Primary residuals: e(t) = y(t) – M(q)u(t) = S_f(q)f(t) + S_d(q)d(t)
Residual transformation: r(t) = W(q)e(t) = W(q)[S_f(q)f(t) + S_d(q)d(t)]

Dynamic Consistency Relations
Response specification: r(t) = Φ_f(q)f(t) + Φ_d(q)d(t)
  Φ_f(q): specified fault response (structured or directional)
  Φ_d(q): specified disturbance response (decoupling)
  ⇒ W(q)[S_f(q) S_d(q)] = [Φ_f(q) Φ_d(q)]
Solution for a square system: W(q) = [Φ_f(q) Φ_d(q)] [S_f(q) S_d(q)]^(-1)
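
A static numerical sketch of this design equation; in the dynamic case S_f, S_d, Φ_f, Φ_d are transfer-function matrices in q, but here they are constant matrices chosen purely for illustration:

```python
import numpy as np

# Static sketch of the consistency-relation design W = [Phi_f Phi_d] [Sf Sd]^(-1).
Sf = np.array([[1.0, 0.0],      # assumed fault entry matrix: 3 outputs, 2 faults
               [0.0, 1.0],
               [1.0, 1.0]])
Sd = np.array([[1.0],           # assumed disturbance entry matrix: 1 disturbance
               [2.0],
               [0.5]])

Phi_f = np.array([[1.0, 0.0],   # specified (structured) fault response
                  [0.0, 1.0],
                  [1.0, 1.0]])
Phi_d = np.zeros((3, 1))        # specified disturbance response: full decoupling

S = np.hstack([Sf, Sd])         # square system: 3 residuals = 2 faults + 1 disturbance
W = np.hstack([Phi_f, Phi_d]) @ np.linalg.inv(S)

# Check the specification W [Sf Sd] = [Phi_f Phi_d]
print(np.allclose(W @ S, np.hstack([Phi_f, Phi_d])))   # True

# Residual response to a unit fault 1 plus a disturbance d = 0.7: r = W(Sf f + Sd d)
f, d = np.array([1.0, 0.0]), np.array([0.7])
r = W @ (Sf @ f + Sd @ d)
print(np.round(r, 6))           # ~ [1, 0, 1]: responds to fault 1, decoupled from d
```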

Dynamic Consistency Relations
Realization: the residual generator W(q) must be causal and stable; [S_f(q) S_d(q)]^(-1) usually is not
Modified specification: W(q) = [Φ_f(q) Φ_d(q)] Ψ(q) [S_f(q) S_d(q)]^(-1)
  Ψ(q): response modifier, to provide causality and stability without interfering with the specification
Implementation: the inverse is computed via the fault system matrix

Diagnosis via Systems Identification
Approach:
  - create a reference model by identification
  - re-identify the system on-line; a discrepancy indicates a parametric fault
Difficulty: discrete-time model parameters are nonlinear functions of the plant parameters
  - for small faults: fault-effect linearization
  - continuous-time model identification (noise sensitive, or requires initialization)
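
A sketch of the identification-based approach, using recursive least squares to re-identify an assumed first-order ARX model on-line and flagging a discrepancy from the reference parameters; the model, the injected parameter change and the tuning values are all illustrative:

```python
import numpy as np

# Identification-based diagnosis sketch with recursive least squares (RLS).
rng = np.random.default_rng(3)
N = 400
a_true, b_true = 0.8, 0.4
u = rng.standard_normal(N)
y = np.zeros(N)
for t in range(1, N):
    a = 0.6 if t >= 200 else a_true          # parametric fault: pole change at t = 200
    y[t] = a * y[t-1] + b_true * u[t-1] + 0.01 * rng.standard_normal()

theta_ref = np.array([a_true, b_true])       # reference model (from prior identification)
theta = theta_ref.copy()                     # on-line estimate [a, b]
P = 100.0 * np.eye(2)                        # RLS covariance
lam = 0.98                                   # forgetting factor

for t in range(1, N):
    phi = np.array([y[t-1], u[t-1]])         # regressor
    k = P @ phi / (lam + phi @ P @ phi)      # RLS gain
    theta = theta + k * (y[t] - phi @ theta)
    P = (P - np.outer(k, phi) @ P) / lam
    if t > 50 and np.linalg.norm(theta - theta_ref) > 0.1:
        print(f"parametric fault flagged at t = {t}, estimate = {np.round(theta, 3)}")
        break
```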

Applications
Very large systems
  - Principal Components are widely used in chemical plants
  - a reliable numerical package is available
An intermediate-size system: rain-gauge network in Barcelona, Spain (structured parity relations)
Aerospace: traditionally Kalman filtering

Applications
Mass-produced small systems: on-board car-engine diagnosis
  - car-to-car variation (model variation robustness)
  - GM: parity relations
  - Ford: neuronets
  - Daimler: parity relations + identification
Many published papers "with application to …" are just simulation studies

GM – GMU On-Board Diagnosis Project
OBD-II: any component fault causing emissions (HC, CO, NOx) to go 50% over the limit must be detected on-line
Pilot project: intake manifold subsystem (THR, MAP, MAF, EGR)
Structured parity relations based on direct identification
After further in-house development, this is being gradually introduced on GM cars

Filtered and integrated residual with fault

On-board report – MAP fault

GM fleet experiment
Fleet of "identical" vehicles (Chevy Blazer) available at GM
  - collect data from 25 vehicles
  - identify models from the combined data of 5 vehicles
  - test on data from all 25 vehicles
Residual means and variances vary → increase the thresholds (sacrificing sensitivity)
Only a 50% increase is necessary

Fault sensitivities – GM fleet experiment
Critical fault sizes for detection and diagnosis (fleet experiment)

             Thr    Iac    Egr    Map    Maf
detection    2%     10%    12%    5%     2%
diagnosis    6%     20%    17%    7%     8%