# MSA Example: Attribute or Categorical Data


## MSA Operational Definitions

- **Accuracy:** Overall agreement of the measured value with the true value (which may be an “expert” value); bias plus precision.
- **Attribute Data:** Discrete, qualitative data.
- **Attribute Measurement System:** Compares parts to a specific set of criteria and accepts an item if the criteria are satisfied.
- **Bias:** A systematic difference from the true value, revealed in the difference between the observed average and the true value.
- **Precision:** Variation in the measurement process.
- **R&R:** Repeatability and Reproducibility, the two elements of precision.
- **Repeatability:** The variation observed when the same operator measures the same item repeatedly with the same device.
- **Reproducibility:** The variation observed when different operators measure the same parts using the same device (or, sometimes, when the same operator uses different devices).

This list provides a quick reference for the key terms used in Measurement System Analysis.

*MSA for Continuous Processes 2.PPT*

## The Fundamental MSA Question

“Is the variation (spread) of the measurement system too large to study the current level of process variation?”

    Total (Observed) Variability = Product/Process Variability + Measurement Variability

Quantifying Variation: like all processes, the measurement process has CTQs. MSA quantifies the amount of variation for:

- Accuracy
- Repeatability
- Reproducibility
- Stability (typically covered in the Black Belt workshop)
- Linearity (covered in Black Belt workshops)
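For continuous data, the identity above reflects the fact that variances of independent sources add. A minimal simulation sketch (hypothetical numbers: part standard deviation 2, measurement standard deviation 1) illustrates this:

```python
import random
import statistics

random.seed(0)

# Hypothetical illustration: true part values (product/process variation)
parts = [random.gauss(100.0, 2.0) for _ in range(10_000)]
# Measurement error layered on top (measurement-system variation)
observed = [p + random.gauss(0.0, 1.0) for p in parts]

var_parts = statistics.pvariance(parts)
var_obs = statistics.pvariance(observed)

# Observed variance is roughly part variance + measurement variance (4 + 1 = 5)
print(f"part variance ~ {var_parts:.2f}, observed variance ~ {var_obs:.2f}")
```

If the measurement term dominates the total, the measurement system cannot resolve the process variation, which is exactly the fundamental MSA question.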

## Possible Causes of Bias

Bias is the difference between the observed average of measurements and the true value (or standard). Possible causes of bias:

- Sensor not properly calibrated
- Improper use of the sensor
- Unclear procedures
- Human limitations

Validating accuracy is the process of quantifying the amount of bias in the measurement process. Experience has shown that bias and linearity are typically not major sources of measurement error for continuous data, but they can be. In service and transaction applications, evaluating bias most often involves testing the judgment of the people carrying out the measurements.

Example: A team wants to establish the accuracy of its process for measuring defects in invoices. First, the team gathers a “standard” group of invoices and has an “expert” panel establish the type and number of defects in the group. Next, the standard group of invoices is measured by the “normal” measurement process. The difference between the average defect level found by the measurement process and the known defect level established by the expert panel represents the bias of the measurement process.
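The invoice example boils down to one subtraction. A sketch with hypothetical numbers (the defect counts below are invented for illustration, not from the source):

```python
from statistics import mean

# Hypothetical: the expert panel found 12 defects in the standard group
# of invoices; three runs of the normal measurement process each found
# slightly fewer.
expert_defect_count = 12            # "true" value from the expert panel
measured_counts = [10, 11, 10]      # defects found by the normal process

# Bias = observed average minus the true value
bias = mean(measured_counts) - expert_defect_count
print(f"bias = {bias:.2f} defects")  # negative bias: the process under-counts
```

A bias near zero means the measurement process is accurate on average, even if individual runs vary (that variation is precision, not bias).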

## Possible Causes of Poor Repeatability

Repeatability is the variation in measurements obtained when one operator uses the same measurement process to measure the identical characteristics of the same parts or items. It is determined by taking one person (or one measurement device) and measuring the same units repeatedly; differences between the repeated measurements represent the ability of the person or device to be consistent. Possible causes of poor repeatability:

- Equipment: the gage or instrument needs maintenance; the gage needs to be more rigid
- People: environmental conditions (lighting, noise); physical conditions (eyesight)

## Possible Causes of Poor Reproducibility

Reproducibility is very similar to repeatability; the difference is that instead of looking at the consistency of one person, you are looking at the consistency between people. Reproducibility is the variation in the averages of measurements made by different operators using the same measurement process to measure identical characteristics of the same parts or items. Possible causes of poor reproducibility:

- Measurement procedure is not clear
- Operator is not properly trained in using and reading the gage
- Operational definitions are not established

## Attribute Measurement Systems Study

- Discrete, qualitative data
- Go/no-go basis, or a limited set of data categories
- Compares parts against specific criteria to accept/reject them or to place them in a category
- Must be screened for its effectiveness at discerning good parts from bad
- At least two appraisers, with at least two trials each
- If available, have a Quality Master rate the parts first

These are the key highlights of conducting an MSA for attribute or categorical data. The “parts” can be, for example, invoices, physical parts, or reason codes for customer returns.

## Attribute MSA Example

| Part | Appraiser A, Trial 1 | Appraiser A, Trial 2 | Appraiser B, Trial 1 | Appraiser B, Trial 2 | Master |
|------|------|------|------|------|--------|
| 1  | G  | G  | G  | G  | G  |
| 2  | G  | G  | G  | G  | G  |
| 3  | G  | NG | G  | G  | G  |
| 4  | NG | NG | NG | NG | NG |
| 5  | G  | G  | G  | G  | G  |
| 6  | G  | G  | G  | NG | G  |
| 7  | NG | NG | NG | G  | NG |
| 8  | NG | NG | NG | G  | G  |
| 9  | G  | G  | G  | G  | G  |
| 10 | G  | G  | G  | G  | G  |
| 11 | G  | G  | G  | G  | G  |
| 12 | G  | G  | G  | G  | G  |
| 13 | G  | G  | NG | G  | G  |
| 14 | G  | G  | G  | G  | G  |
| 15 | NG | G  | G  | G  | G  |
| 16 | G  | G  | G  | G  | G  |
| 17 | G  | G  | G  | G  | G  |
| 18 | G  | G  | G  | G  | G  |
| 19 | G  | G  | G  | G  | G  |
| 20 | NG | G  | G  | G  | G  |

G = Good, NG = Not Good. This shows the results of 2 trials by each of 2 appraisers, assessing the same 20 items, along with the Master (expert) rating.
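The agreement statistics an attribute MSA reports can be computed directly from this data. A minimal sketch (assuming the five ratings per part are A trial 1, A trial 2, B trial 1, B trial 2, Master, in that order):

```python
# Each row: (A trial 1, A trial 2, B trial 1, B trial 2, Master)
rows = [
    ("G","G","G","G","G"), ("G","G","G","G","G"), ("G","NG","G","G","G"),
    ("NG","NG","NG","NG","NG"), ("G","G","G","G","G"), ("G","G","G","NG","G"),
    ("NG","NG","NG","G","NG"), ("NG","NG","NG","G","G"), ("G","G","G","G","G"),
    ("G","G","G","G","G"), ("G","G","G","G","G"), ("G","G","G","G","G"),
    ("G","G","NG","G","G"), ("G","G","G","G","G"), ("NG","G","G","G","G"),
    ("G","G","G","G","G"), ("G","G","G","G","G"), ("G","G","G","G","G"),
    ("G","G","G","G","G"), ("NG","G","G","G","G"),
]
n = len(rows)

# Within-appraiser agreement (repeatability): both trials give the same rating
within_a = sum(a1 == a2 for a1, a2, _, _, _ in rows) / n
within_b = sum(b1 == b2 for _, _, b1, b2, _ in rows) / n

# Agreement with the standard: both of an appraiser's trials match the Master
vs_std_a = sum(a1 == a2 == m for a1, a2, _, _, m in rows) / n
vs_std_b = sum(b1 == b2 == m for _, _, b1, b2, m in rows) / n

# Overall agreement: every rating on the part matches the Master
overall = sum(len(set(r)) == 1 for r in rows) / n

print(f"Within A: {within_a:.0%}, Within B: {within_b:.0%}")
print(f"A vs standard: {vs_std_a:.0%}, B vs standard: {vs_std_b:.0%}")
print(f"All ratings vs standard: {overall:.0%}")
```

Under that column assumption, Appraiser A agrees with himself on 17 of 20 parts (85%), Appraiser B on 16 of 20 (80%), and all five ratings line up with the Master on only 13 of 20 parts (65%).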

## Challenges of Continuous Process MSA

An MSA study is an experiment. It:

- Requires two or more trials to calculate Repeatability
- Needs a way to present the inspection units to the appraiser multiple times
- Is not possible within the continuous process itself

When conducting an MSA for a continuously running process, parts should therefore be taken off-line to conduct the MSA study.

## Case Example: Visual Inspection of Glass

[Diagram of the glass line: catwalk, glass, inspector, cutter, packers.]

The example given here is the visual inspection of glass on a continuously moving line.

## Case Example: Challenges to Overcome

Bias relative to the standard could be evaluated on-line, but Repeatability and Reproducibility (R&R) could not. A method had to be devised to allow the inspectors to view the same pieces of glass repeatedly. The solution was an off-line conveyor that simulated the on-line condition as closely as possible.

## Case Example: Attribute MSA Method Employed

- 20 pieces of glass, including both good and bad samples, were selected from the process.
- A team of people well versed in the quality standard classified each piece of glass as either “pass” or “fail.”
- All regular inspectors independently evaluated each piece twice, in random order.
- The inspectors used a log sheet to record the data.
- Minitab® was used to analyze the data.

There were two outcomes in this inspection or measurement process: pass or fail. Twenty pieces, a team of inspectors, and two rounds (or trials) were used in the MSA.
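Attribute agreement analyses typically report a kappa statistic alongside percent agreement, because kappa corrects for the agreement two raters would reach by chance. A minimal sketch of Cohen's kappa between one inspector and the standard, using invented pass/fail data (not the study's actual ratings):

```python
from collections import Counter

def cohens_kappa(rater, standard):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(rater)
    p_observed = sum(r == s for r, s in zip(rater, standard)) / n
    # Chance agreement: sum over categories of the product of marginal rates
    rc, sc = Counter(rater), Counter(standard)
    p_chance = sum(rc[k] * sc[k] for k in set(rater) | set(standard)) / n**2
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical inspector ratings vs. the expert standard (P = pass, F = fail)
standard  = ["P", "P", "F", "P", "F", "P", "P", "F", "P", "P"]
inspector = ["P", "P", "F", "P", "P", "P", "P", "F", "P", "F"]
print(f"kappa = {cohens_kappa(inspector, standard):.2f}")
```

Here the raw agreement is 80%, but because “pass” dominates both sets of ratings, much of that agreement is expected by chance and kappa comes out near 0.52; a common rule of thumb treats kappa below about 0.7 as a measurement system needing improvement.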

## Case Example: Attribute MSA Study Data

[MINITAB® worksheet: an excerpt of the full data for 20 inspectors.] The “Standard” column documents the correct, or expert, answer for each piece of glass.

## Case Example: Attribute MSA Study Results

The graph on the left shows the agreement (or repeatability) of each appraiser between Trial 1 and Trial 2. The graph on the right shows the agreement of each appraiser with the Standard. On both graphs, the blue dots show the percent agreement and the red lines are the 95% confidence intervals.
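Those confidence intervals come from treating each appraiser's matches as binomial trials. Stats packages typically use exact binomial intervals; a pure-Python sketch using the Wilson score approximation instead (the 18-of-20 figure below is hypothetical):

```python
import math

def wilson_ci(successes, trials, z=1.96):
    """Wilson score interval for a binomial proportion: an approximation
    to the 95% CI a stats package draws around a percent-agreement dot."""
    p = successes / trials
    denom = 1 + z**2 / trials
    centre = (p + z**2 / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    return centre - half, centre + half

# Hypothetical appraiser who matched the Standard on 18 of 20 pieces
lo, hi = wilson_ci(18, 20)
print(f"90% agreement, 95% CI ~ ({lo:.0%}, {hi:.0%})")
```

Note how wide the interval is with only 20 pieces: an observed 90% agreement is statistically compatible with a true rate around 70%, which is why MSA studies need a reasonable number of parts before passing judgment on an appraiser.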

## Case Example: Attribute MSA Study Results (continued)

This slide shows the Within Appraiser agreement. For example, Larry scored 100%: his Trial 1 and Trial 2 ratings are in full agreement. Allen, on the other hand, scored only 50%: half of his Trial 2 ratings differ from his Trial 1 ratings. In other words, he disagrees with himself half the time!

## Case Example: Attribute MSA Study Results (continued)

This slide shows the agreement of each appraiser (across both trials) with the Standard. For example, Larry has 89% agreement with the Standard, but Allen has only 39%.

## Case Example: Attribute MSA Study Results (continued)

This slide shows the level of agreement across all appraisers: in this case, only 5.56%!
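Such a low all-appraiser figure is less surprising than it looks: for a piece to count, every one of the 40 ratings (20 inspectors, 2 trials each) must line up with the Standard. A hypothetical illustration, assuming each rating independently matches the Standard 95% of the time:

```python
# Hypothetical: probability that all 40 ratings on one piece match the
# Standard, if each rating matches independently with probability 0.95.
p_single = 0.95
ratings_per_piece = 20 * 2
p_all_agree = p_single ** ratings_per_piece
print(f"P(all ratings agree on a piece) ~ {p_all_agree:.1%}")
```

Even with 95% individual accuracy, joint agreement falls to roughly 13%, so the observed 5.56% points to individual agreement rates well below that.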

## Case Example: Attribute MSA Study Conclusions

- What could have caused the poor agreement?
- What was done to improve consistency?

Given the results of the MSA study, consider what could have caused the poor agreement and what should be done to improve the measurement system. The measurement system must be improved and tested again (with another MSA study) to reach at least 90% agreement before the data can be used for base-lining process performance or for further analysis.