Analysis and Interpretation of Sensor Data

Presentation on theme: "Analysis and Interpretation of Sensor Data"— Presentation transcript:

1 Analysis and Interpretation of Sensor Data
FEM4: Analysis and Interpretation of Sensor Data 1

2 Purpose
To expand your knowledge base on the topic of SHM data analysis. To develop your knowledge of technical terms related to data handling while understanding the steps that acquired SHM data go through.

3 Expected Learning Outcomes
At the completion of this module you will be able to:
- explain the various possible sources of errors in SHM and the means used to minimize such errors.
- identify factors/conditions that affect SHM sensor performance/data and explain methods to minimize the effects of these factors/conditions.
- interpret and analyze SHM sensor data for the purpose of identifying anomalies.

4 Your Assignment
After you have read and reviewed the content provided, you will be required to:
- Take an online examination to determine if you have achieved a satisfactory mastery of these learning outcomes.
- Submit an online response to several questions related to the content of this module and list any questions you might have concerning anything you might not understand about the material. The online responses will be discussed in an interactive manner in a classroom setting on the date indicated.

5 Deadlines
Deadline for completion of assignment: TBD
Date of classroom discussion of online responses: TBD

6 The Statistical Pattern Recognition Paradigm (1)
One approach to the selection, analysis, and interpretation of SHM data is termed the “statistical pattern recognition paradigm” (Farrar and Worden, 2007). This approach encompasses the following four steps:
- Operational evaluation
- Data acquisition, normalization, and cleansing
- Feature selection and information condensation
- Statistical development for future discrimination
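As a purely illustrative sketch (all function and variable names here are hypothetical, not from Farrar and Worden), the four steps above can be expressed as a minimal Python pipeline:

```python
# Illustrative sketch of the four-step statistical pattern recognition
# paradigm. Every name and rule here is a deliberately trivial placeholder.

def operational_evaluation(config):
    # Step 1: decide what to monitor and under which constraints.
    return {"monitored_quantity": config["quantity"], "limits": config["limits"]}

def acquire_normalize_cleanse(raw_readings, limits):
    # Step 2: keep only readings inside the plausible physical range.
    return [r for r in raw_readings if limits[0] <= r <= limits[1]]

def select_features(cleansed):
    # Step 3: condense the cleansed record into a single feature
    # (here, the mean -- a deliberately trivial choice).
    return sum(cleansed) / len(cleansed)

def statistical_model(feature, baseline, threshold):
    # Step 4: flag a possible anomaly when the feature departs from baseline.
    return abs(feature - baseline) > threshold

plan = operational_evaluation({"quantity": "strain", "limits": (-50.0, 50.0)})
data = acquire_normalize_cleanse([1.0, 2.0, 999.0, 1.5], plan["limits"])
feature = select_features(data)
print(statistical_model(feature, baseline=1.4, threshold=5.0))  # False: no anomaly
```

Each function stands in for a whole project phase; the point is only the flow of data from acquisition through cleansing and condensation to a statistical decision.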

7 The Statistical Pattern Recognition Paradigm (2)
Data Mining Diagram2

8 Operational Evaluation (1)
As presented by Farrar and Worden (2007), this phase of the SHM process consists of answering the following four questions:
- What are the life safety and/or economic justifications for performing SHM?
- How is damage defined for the system being investigated and, for multiple damage possibilities, which cases are of the most concern?
- What are the conditions, operational and environmental, under which the system to be monitored functions?
- What are the limitations on acquiring data in the operational environment?

9 Operational Evaluation (2)
Operational Evaluation Steps3

10 Operational Evaluation (3)
Operational evaluation begins to set the limitations on what will be monitored and how the monitoring will be accomplished. This evaluation starts to tailor the damage identification process to features that are unique to the system being monitored and tries to take advantage of unique features of the damage that is to be detected. Whereas the specific approach herein discussed is targeted to assess damage, not all SHM installations are implemented for that purpose (see Section B of FEM1).

11 Data Acquisition, Normalization and Cleansing
FEM3 was devoted to data acquisition and included a brief discussion of data normalization and cleansing. Since good quality, reliable data are fundamental to the SHM process, additional discussion of SHM data processing is warranted.

12 Data Acquisition, Normalization and Cleansing: Signal Conditioning (1)
“To achieve best-in-class quality for measurements, you need to consider the several types of conditioning required for sensor measurements as well as the several types of analog components used in the instrumentation including analog-to-digital converters (ADCs)” ( ).

13 Data Acquisition, Normalization and Cleansing: Signal Conditioning (2)
Signal Conditioning Diagram4

14 Data Acquisition, Normalization and Cleansing: Signal Conditioning (3)
An ADC takes an analog signal and turns it into a binary number. Each binary number from the ADC represents a certain voltage level, and the ADC returns the highest possible level without going over the actual voltage level of the analog signal.

15 Data Acquisition, Normalization and Cleansing: Signal Conditioning (4)
Example of a Flash ADC5

16 Data Acquisition, Normalization and Cleansing: Signal Conditioning (5)
Resolution refers to the number of binary levels the ADC can use to represent a signal. To determine the number of binary levels available based on the resolution, simply compute 2^Resolution; a 12-bit ADC, for example, provides 2^12 = 4,096 levels.
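A minimal Python sketch of the level count and the "highest level without going over" behavior described above (function names and the reference voltage are illustrative):

```python
# Sketch of ADC quantization as described in the slides above.
# An N-bit ADC has 2**N levels; it returns the highest level whose
# voltage does not exceed the input ("without going over").

def adc_levels(resolution_bits):
    return 2 ** resolution_bits

def quantize(voltage, v_ref, resolution_bits):
    levels = adc_levels(resolution_bits)
    step = v_ref / levels                 # voltage width of one level
    code = int(voltage // step)           # highest level not exceeding input
    return min(max(code, 0), levels - 1)  # clamp to the valid code range

print(adc_levels(12))          # 4096 levels for a 12-bit ADC
print(quantize(2.5, 5.0, 12))  # 2048: mid-scale code for a 5 V reference
```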

17 Data Acquisition, Normalization and Cleansing: Signal Conditioning (6)
The following figure shows a digital representation of signals by 12-, 16-, and 24-bit ADCs. 24-bit technology, which allows for extremely accurate measurements, is now available for static as well as dynamic applications.

18 Data Acquisition, Normalization and Cleansing: Signal Conditioning (7)
Digital Representation of Signals4

19 Data Acquisition, Normalization and Cleansing: Data Normalization (1)
This is the process whereby changes in sensor readings caused by damage are separated from those caused by operational and environmental conditions. The term normalization refers to a procedure whereby the sensor outputs (measured responses) are referenced to the measured inputs (e.g., variations in load, temperature, wind direction and velocity).
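As a hedged illustration of this referencing step, the sketch below fits a simple linear model of the measured response against one measured input (temperature) and removes the predicted component. Real SHM normalization may use far richer models; all names here are assumptions:

```python
# Illustrative normalization: reference the sensor output to a measured
# input (temperature) by fitting a linear trend and subtracting it.

def linear_fit(x, y):
    # Ordinary least squares for y = a + b*x.
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    b = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) \
        / sum((xi - mean_x) ** 2 for xi in x)
    a = mean_y - b * mean_x
    return a, b

def normalize(responses, temperatures):
    # Subtract the temperature-driven component from each reading.
    a, b = linear_fit(temperatures, responses)
    return [r - (a + b * t) for r, t in zip(responses, temperatures)]

temps = [10.0, 20.0, 30.0, 40.0]
strain = [100.0, 120.0, 140.0, 160.0]  # purely temperature-driven here
print(normalize(strain, temps))        # residuals near zero
```

Whatever remains after subtraction is the part of the response not explained by the measured input, which is where damage-related change would show up.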

20 Data Acquisition, Normalization and Cleansing: Data Normalization (2)
Un-normalized vs. Normalized Data6

21 Data Acquisition, Normalization and Cleansing: Data Normalization (3)
In circumstances of significant environmental or operational variability, it may be necessary to normalize the data within a defined time frame so that data can be compared at similar points of an environmental or operational cycle. During the early stages of data acquisition, data variability and its sources need to be identified and monitored. Efforts should be made to either eliminate or minimize the factors contributing to data variability.

22 Data Acquisition, Normalization and Cleansing: Data Normalization (4)
Example of Data Variability7

23 Data Acquisition, Normalization and Cleansing: Data Normalization (5)
Any analysis of data variability must be conducted under reasonably consistent environmental and operational conditions. Thus, the sensor network must include an adequate array of sensors to measure environmental conditions (temperature; wind speed and direction; and, possibly, rainfall/snowfall) and operational conditions (load).

24 Data Acquisition, Normalization and Cleansing: Data Cleansing (1)
Most data acquisition systems allow the user to selectively choose to pass on or reject data. This process is generally under the control of an experienced individual who is directly involved with the SHM project, particularly the data acquisition component.

25 Data Acquisition, Normalization and Cleansing: Data Cleansing (2)
Data from sensors that are obviously relaying unreliable and/or inaccurate data should be selectively deleted from the data set. Care should be taken to ensure that elimination of readings from a given sensor is technically justified.
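One technically justified rejection rule can be sketched as follows (an illustrative example only: the range check and flat-line check are assumed criteria, not prescribed ones):

```python
# Illustrative sensor-rejection sketch: drop a channel when its readings
# sit outside the plausible physical range, or when the channel has
# flat-lined (no variation at all, suggesting a dead sensor).

def is_unreliable(readings, valid_range):
    lo, hi = valid_range
    out_of_range = any(r < lo or r > hi for r in readings)
    flatlined = len(set(readings)) == 1 and len(readings) > 1
    return out_of_range or flatlined

channels = {
    "S1": [1.2, 1.3, 1.1, 1.4],    # healthy
    "S2": [0.0, 0.0, 0.0, 0.0],    # flat-lined
    "S3": [1.0, 950.0, 1.1, 1.2],  # spike outside physical range
}
kept = {name: data for name, data in channels.items()
        if not is_unreliable(data, valid_range=(-10.0, 10.0))}
print(sorted(kept))  # ['S1']
```

In practice the decision would rest with an experienced engineer, as the slides note; an automated rule like this only flags candidates for review.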

26 Data Acquisition, Normalization and Cleansing: Data Cleansing (3)
Whereas the data cleansing process generally takes place in the early stages of implementation of the SHM project, it, along with other aspects of data acquisition and processing, should not be static. That is, the data should be continually monitored to ensure their reliability and accuracy. Data anomalies should be investigated and resolved immediately.

27 Data Acquisition, Normalization and Cleansing: Data Cleansing (4)
Data Cleansing Diagrams8

28 Data Acquisition, Normalization and Cleansing: Data Cleansing (5)
Signal processing techniques such as filtering and resampling can also be thought of as data cleansing procedures. Thus, the SHM design and implementation team should include professionals with skills and experience in signal processing.
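The two techniques named above can be sketched in pure Python (simplified stand-ins for real DSP routines; production code would use a proper filter design, and would low-pass filter before decimating to avoid aliasing):

```python
# Sketches of two common signal-processing cleanups: a moving-average
# low-pass filter and decimation (resampling by keeping every k-th sample).

def moving_average(signal, window):
    # Average each sample with up to `window - 1` of its predecessors.
    out = []
    for i in range(len(signal)):
        lo = max(0, i - window + 1)
        chunk = signal[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def decimate(signal, factor):
    # Keep every `factor`-th sample. Real code filters first to
    # prevent aliasing; this sketch omits that step.
    return signal[::factor]

noisy = [0.0, 10.0, 0.0, 10.0, 0.0, 10.0]
print(moving_average(noisy, 2))  # [0.0, 5.0, 5.0, 5.0, 5.0, 5.0]
print(decimate(noisy, 2))        # [0.0, 0.0, 0.0]
```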

29 Data Acquisition, Normalization and Cleansing: Data Cleansing (6)
Example of Data Resampling9

30 Data Acquisition, Normalization and Cleansing: Data Cleansing (7)
Alternatively, data cleansing can be performed during the post-processing stage. In this case, almost all raw data are acquired and stored with minimal exclusions during the data acquisition stage. Different algorithms can then be applied to the stored data to remove anomalies or erroneous readings. Again, an experienced engineer with knowledge of both structural engineering and SHM should perform this task.

31 Data Acquisition, Normalization and Cleansing: Data Cleansing (8)
The idea is not to discard any data that may reflect actual structural behavior. For example, crack initiation may cause a sudden increase in a sensor's reading if the crack passes through the sensor's gage length, or a sudden drop in the reading if the crack is just outside the gage length. Such changes represent actual structural behavior and should not be removed during the data cleansing stage.

32 Data Acquisition, Normalization and Cleansing: Data Cleansing (9)
Example of Crack Initiation: A structural behavior that should not be taken out during the Data Cleansing Cycle. 10

33 Feature Selection and Information Condensation (1)
This stage of the SHM process is concerned with the identification of data features that allow one to distinguish between the undamaged and damaged structure. In many cases, this requires establishing a 'baseline' for subsequent comparisons between damaged and undamaged conditions.

34 Feature Selection and Information Condensation (2)
Example of Feature Selection and Information Condensation11

35 Feature Selection and Information Condensation (3)
The response of the structure prior to damage occurrence is used as the baseline. Inherent in this feature selection process is the condensation of the data. The best features for damage identification are, again, application specific.
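A minimal sketch of baseline comparison, assuming RMS amplitude as the condensed feature (an illustrative choice; as the slides note, the best feature is application specific):

```python
# Sketch of baseline-referenced feature comparison: condense each record
# to one feature (RMS amplitude, an assumed choice) and compare new
# records against the pre-damage baseline value.

def rms(signal):
    # Root-mean-square amplitude: one number summarizing a whole record.
    return (sum(x * x for x in signal) / len(signal)) ** 0.5

def deviates_from_baseline(signal, baseline_rms, tolerance):
    # Flag the record when its feature departs from the baseline value.
    return abs(rms(signal) - baseline_rms) > tolerance

baseline = rms([1.0, -1.0, 1.0, -1.0])  # undamaged response: RMS = 1.0
print(deviates_from_baseline([3.0, -3.0, 3.0, -3.0], baseline, 0.5))  # True
print(deviates_from_baseline([1.1, -0.9, 1.0, -1.0], baseline, 0.5))  # False
```

Condensing each record to a single number is also the information-condensation step in miniature: comparisons over the structure's lifetime happen between features, not raw time series.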

36 Feature Selection and Information Condensation (4)
One of the most common feature extraction methods is based on correlating measured system response quantities, such as vibration amplitude or frequency, with the first-hand observations of the degrading system. Another method of developing features for damage identification is to apply engineered flaws, similar to ones expected in actual operating conditions, to systems and develop an initial understanding of the parameters that are sensitive to the expected damage.

37 Feature Selection and Information Condensation (5)
The flawed system can also be used to validate that the diagnostic measurements are sensitive enough to distinguish between features identified from the undamaged and damaged system. The use of analytical tools such as experimentally validated finite element models can be a great asset in this process. In many cases, the analytical tools are used to perform numerical experiments where the flaws are introduced through computer simulation.

38 Feature Selection and Information Condensation (6)
Example of SHM Finite Element Analysis12

39 Feature Selection and Information Condensation (7)
Damage accumulation testing, during which significant structural components of the system under study are degraded by subjecting them to realistic loading conditions, can also be used to identify appropriate features. This process may involve induced-damage testing, fatigue testing, corrosion growth or temperature cycling to accumulate certain types of damage in an accelerated fashion.

40 Feature Selection and Information Condensation (8)
Examples of various damage accumulation tests.13

41 Feature Selection and Information Condensation (9)
Insight into the appropriate features can be gained from several types of analytical and experimental studies as previously described and is usually the result of information obtained from some combination of these studies.

42 Feature Selection and Information Condensation (10)
The operational implementation and diagnostic measurement technologies needed to perform SHM produce an excessive amount of data which complicates analysis. Farrar and Worden (2007) suggested the following analysis strategies to help deal with this circumstance: “A condensation of the data is advantageous and necessary when comparisons of many feature sets obtained over the lifetime of the structure are envisioned.”

43 Feature Selection and Information Condensation (11)
Because data will be acquired from a structure over an extended period of time and in an operational environment, robust data reduction techniques must be developed to retain feature sensitivity to the structural changes of interest in the presence of environmental and operational variability. To further aid in the extraction and recording of quality data needed to perform SHM, the statistical significance of the features should be characterized and used in the condensation process.
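One simple way to bring statistical significance into the condensation process is to summarize each feature's baseline history by its mean and standard deviation, then judge new values by their distance from the mean in standard deviations. The sketch below is illustrative only; the summary statistics and threshold are assumptions:

```python
# Sketch of statistically informed condensation: a feature's whole
# history is condensed to two numbers (mean, standard deviation), and
# new values are judged by how many standard deviations they sit away.

def summarize(history):
    # Condense a feature's history to its mean and population std. dev.
    n = len(history)
    mean = sum(history) / n
    var = sum((x - mean) ** 2 for x in history) / n
    return mean, var ** 0.5

def z_score(value, mean, std):
    return (value - mean) / std

mean, std = summarize([10.0, 10.2, 9.8, 10.1, 9.9])
print(round(abs(z_score(10.05, mean, std)), 2))  # small: consistent with baseline
print(abs(z_score(12.0, mean, std)) > 3.0)       # large: statistically significant
```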

44 Feature Selection and Information Condensation (12)
Diagram of Data Reduction Techniques14

45 Statistical Development for Future Discrimination (1)
According to Farrar and Worden (2007): “Statistical model development is concerned with the implementation of the algorithms that operate on the extracted features to quantify the damage state of the structure. The algorithms used in statistical model development usually fall into three categories.

46 Statistical Development for Future Discrimination (2)
When data are available from both the undamaged and damaged structure, the statistical pattern recognition algorithms fall into the general classification referred to as supervised learning.  Group classification and regression analysis are categories of the supervised learning algorithms. 

47 Statistical Development for Future Discrimination (3)
Unsupervised learning refers to algorithms that are applied to data not containing examples from the damaged structure.  Outlier or novelty detection is the primary class of algorithms applied in unsupervised learning applications. All of the algorithms analyze statistical distributions of the measured or derived features to enhance the damage identification process.”
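The contrast between the two learning modes can be sketched as follows (illustrative names and deliberately simple rules: a nearest-centroid classifier stands in for supervised learning, and a standard-deviation threshold stands in for outlier/novelty detection):

```python
# Supervised learning uses labelled examples from both the undamaged and
# damaged states; unsupervised novelty detection trains on undamaged
# data alone. Both rules below are minimal illustrative stand-ins.

def supervised_classify(feature, undamaged_examples, damaged_examples):
    # Nearest-centroid rule built from labelled data of both classes.
    c_ok = sum(undamaged_examples) / len(undamaged_examples)
    c_bad = sum(damaged_examples) / len(damaged_examples)
    return "damaged" if abs(feature - c_bad) < abs(feature - c_ok) else "undamaged"

def novelty_detect(feature, undamaged_examples, k=3.0):
    # Outlier rule built from undamaged data only: flag values more than
    # k standard deviations from the undamaged mean.
    n = len(undamaged_examples)
    mean = sum(undamaged_examples) / n
    std = (sum((x - mean) ** 2 for x in undamaged_examples) / n) ** 0.5
    return abs(feature - mean) > k * std

ok = [1.0, 1.1, 0.9, 1.0]
bad = [2.0, 2.2, 1.9]
print(supervised_classify(2.1, ok, bad))  # 'damaged'
print(novelty_detect(2.1, ok))            # True: novel relative to baseline
print(novelty_detect(1.05, ok))           # False
```

Note that the novelty detector never sees damaged examples, which is exactly the situation most SHM deployments face in practice.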

48 Statistical Development for Future Discrimination (4)
Examples of Unsupervised and Supervised Learning15

49 Statistical Development for Future Discrimination (5)
“The damage state of a system can be described as a five-step process along the lines of the process discussed in Rytter (1993) to answer the following questions:
- Existence. Is there damage in the system?
- Location. Where is the damage in the system?
- Type. What kind of damage is present?
- Extent. How severe is the damage?
- Prognosis. How much useful life remains?
Answers to these questions in the order presented represent increasing knowledge of the damage state.

50 Statistical Development for Future Discrimination (6)
When applied in an unsupervised learning mode, statistical models are typically used to answer questions regarding the existence and location of damage. When applied in a supervised learning mode and coupled with analytical models, the statistical procedures can be used to better determine the type of damage, the extent of damage and remaining useful life of the structure.

51 Statistical Development for Future Discrimination (7)
The statistical models are also used to minimize false indications of damage. False indications of damage fall into two categories:
- false-positive damage indication (indication of damage when none is present)
- false-negative damage indication (no indication of damage when damage is present)
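Counting the two error types from paired predictions and ground truth can be sketched as follows (names are illustrative; True means damage indicated/present):

```python
# Sketch of tallying the two error categories defined above from paired
# predictions and ground truth (True = damage indicated / present).

def error_counts(predicted, actual):
    fp = sum(1 for p, a in zip(predicted, actual) if p and not a)  # false positives
    fn = sum(1 for p, a in zip(predicted, actual) if not p and a)  # false negatives
    return fp, fn

predicted = [True, False, True, False]
actual    = [True, False, False, True]
print(error_counts(predicted, actual))  # (1, 1)
```

Tracking these counts separately is what allows one type of error to be weighted above the other, as discussed in the slides that follow.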

52 Statistical Development for Future Discrimination (8)
Example of False Positives and False Negatives16

53 Statistical Development for Future Discrimination (9)
False-positive errors are undesirable because they cause unnecessary downtime and consequent loss of revenue, as well as loss of confidence in the monitoring system. More importantly, there are clear safety issues if false-negative misclassifications occur.

54 Statistical Development for Future Discrimination (10)
Many pattern recognition algorithms allow one to weight one type of error more heavily than the other; this weighting may be one of the factors decided at the operational evaluation stage. Articles appearing within this theme issue that focus on the statistical modelling portion of the SHM process include Hayton et al. (2007), Sohn (2007) and Worden & Manson (2007).”

55 Example of a Simplified SHM and Damage Mitigation System
This video illustrates a simplified example of the use of pattern recognition as a means to identify that damage has occurred. It further illustrates one possible means that could be used to repair and/or mitigate damage.

56 Discussion Question Response Form for FEM4
Your assignment: Answer the following question and save a copy of your response for discussion in a subsequent class period. SHM systems can be installed on in-service bridges or on newly constructed bridges. Given that the basic objective of SHM is to identify damage, discuss the ways in which the analysis and interpretation of the sensor data differ in these two circumstances, and identify the factors that may influence the analysis.

57 Sources (1):
http://www.mdpi.com/1424-8220/14/9/15861/htm
cy.aspx
accurately-predicts-failure-rates/

58 Sources (2):
http://benthamopen.com/FULLTEXT/TOBCTJ-10-136
areas/structural-health-monitoring/footbridge-monitoring-project-shm-finite-element-analysis
ed.org/EducationResources/CommunityCollege/Materials/Mechanical/Fatigue.htm
patterns
UnsupervisedlearningSupervisedlearningClusteringDBSCAN_on_a_toy_dataset
ClassicationSVM
aman-hussain

