
1 DSQR Training Attribute MSA
HEATING, COOLING & WATER HEATING PRODUCTS
DSQR Training: Attribute MSA
Fred Nunez, Corporate Quality

2 Improving a Discrete Measurement System
Assessing the accuracy, repeatability, and reproducibility of a discrete measurement system:
Discrete data are usually the result of human judgment ("Which category does this item belong in?").
When categorizing items (good/bad, type of call, reason for leaving), you need a high degree of agreement on which way an item should be categorized.
The best way to assess human judgment is to have all operators categorize several known test units.
Look for 100% agreement.
Use disagreements as opportunities to determine and eliminate problems.
(Slide table: Sue's, Mark's, Fred's, and Jim's categorizations of Calls 1-7 across call types such as Inquiry-Amt, Change of Addr, Inquiry-Status, Inquiry-Rate, Drop-Rate, Drop-Term, and Change-Bill date; the differing categorizations are flagged so the team can learn why.)

3 Attribute MSA
An attribute MSA study is the primary tool for assessing the reliability of a qualitative measurement system. Attribute data carry less information than variables data, but often they are all that is available, so it is still important to be diligent about the integrity of the measurement system.
Attribute inspection generally does one of three things:
Classifies an item as either conforming or nonconforming
Classifies an item into one of multiple categories
Counts the number of nonconformities per item inspected
Thus, a "perfect" attribute measurement system would:
Correctly classify every item
Always produce a correct count of an item's nonconformities

4 Attribute MSA Roadmap
The roadmap for planning an attribute MSA study, implementing it, and collecting the data follows; Steps 3-7 continue on the next slides.
Step 1. Identify the metric and agree within the team on its operational definition. Often the exact measurement terms are not immediately obvious. For example, in many transactional service processes the metric could be the initial writing of the line items on an order, the charging of the order to a specific account, or the translation of the charges into a bill. Each of these might involve a separate classification step.
Step 2. Define the defects and the classifications that make an item defective. These should be mutually exclusive (a defect cannot fall into two categories) and exhaustive (if an item is defective, it must fall into at least one defined category). Done correctly, every entity falls into one and only one category.

5 Attribute MSA Roadmap
Step 3. Select the samples to be used in the MSA. Use a sample size calculator; from 30 to 50 samples are typically necessary. The samples should span the normal extremes of the process with regard to the attribute being measured, and they should be measured independently of one another. The majority of the samples should come from the "gray" areas, with a few that are clearly good and a few that are clearly bad. For example, in a sample of 30 units, five units might be clearly defective and five clearly acceptable; the remaining samples would vary in the quantity and type of defects (a small sketch of this sample mix follows below).
Step 4. Select at least three appraisers to conduct the MSA. These should be people who normally conduct the assessment.
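As a minimal sketch (not from the deck) of the Step 3 sample mix, the plan could be tallied before the study starts; the category labels and the 5/5/20 split below are illustrative assumptions.

from collections import Counter

# Assumed 30-unit sample plan: 5 clearly acceptable, 5 clearly defective,
# and 20 borderline ("gray area") units spanning the normal process extremes.
sample_plan = ["clearly good"] * 5 + ["clearly bad"] * 5 + ["gray area"] * 20

print(Counter(sample_plan))  # Counter({'gray area': 20, 'clearly good': 5, 'clearly bad': 5})
print(len(sample_plan))      # 30 samples in total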

6 Attribute MSA Roadmap Steps 5-6
Step 5. Perform the appraisal. Present the samples to each appraiser in random order, without the appraiser knowing which sample is which and without the other appraisers witnessing the appraisal, and have the appraiser classify each item per the defect definitions. After the first appraiser has reviewed all items, repeat with the remaining appraisers. Appraisers must inspect and classify independently. After all appraisers have classified each item, repeat the whole process for one additional trial (a sketch of a randomized, blinded run order follows below).
Step 6. Conduct an expert appraisal or compare to a standard. In Step 5 the appraisers were compared to themselves (repeatability) and to one another (reproducibility). If the appraisers are not also compared to a standard, the team might gain a false sense of security in the measurement system.
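As a rough illustration of Step 5 (not part of the original deck), the randomized, blinded run order can be generated ahead of time; the appraiser names, the 30-sample count, and the two-trial count below are assumptions.

import random

random.seed(42)  # fix the seed so the run order can be reproduced

appraisers = ["Jane", "Bob", "Alex"]
samples = list(range(1, 31))   # 30 coded sample IDs
trials = 2                     # each appraiser sees every sample twice

run_order = []
for trial in range(1, trials + 1):
    for appraiser in appraisers:
        order = samples[:]
        random.shuffle(order)  # a fresh random order per appraiser per trial
        for sample in order:
            run_order.append({"Trial": trial, "Appraiser": appraiser, "Sample": sample})

# Each row tells the study coordinator which coded sample to hand to whom next;
# appraisers never see the sample IDs or one another's calls.
for row in run_order[:5]:
    print(row)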

7 Attribute MSA Roadmap Step 7.
Enter the data into a statistical software package such as Minitab and analyze it. Data are usually entered in columns (Appraiser, Sample, Response, and Expert). The analysis output typically includes:
Percentage overall agreement
Percentage agreement within each appraiser (repeatability)
Percentage agreement between appraisers (reproducibility)
Percentage agreement with a known standard (accuracy)
(A rough illustration of these agreement percentages follows below.)
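The deck uses Minitab for this step; purely as a hedged illustration of what the percentages mean, the sketch below recomputes them by hand on a tiny made-up data set laid out in the Appraiser / Sample / Response / Expert columns named above. The names, sample count, and ratings are invented.

from collections import defaultdict

# Tiny made-up data set: 3 samples, 2 appraisers, 2 trials.
rows = [
    # (Appraiser, Sample, Trial, Response, Expert)
    ("Jane", 1, 1, "Pass", "Pass"), ("Jane", 1, 2, "Pass", "Pass"),
    ("Jane", 2, 1, "Fail", "Fail"), ("Jane", 2, 2, "Pass", "Fail"),
    ("Jane", 3, 1, "Fail", "Fail"), ("Jane", 3, 2, "Fail", "Fail"),
    ("Bob",  1, 1, "Pass", "Pass"), ("Bob",  1, 2, "Pass", "Pass"),
    ("Bob",  2, 1, "Fail", "Fail"), ("Bob",  2, 2, "Fail", "Fail"),
    ("Bob",  3, 1, "Pass", "Fail"), ("Bob",  3, 2, "Pass", "Fail"),
]

# Group responses by (appraiser, sample) and record the expert call per sample.
by_appraiser_sample = defaultdict(list)
standard = {}
for appraiser, sample, trial, response, expert in rows:
    by_appraiser_sample[(appraiser, sample)].append(response)
    standard[sample] = expert

samples = sorted(standard)
appraisers = sorted({a for a, _ in by_appraiser_sample})

# Within appraiser (repeatability): both trials agree with each other.
for a in appraisers:
    matched = sum(len(set(by_appraiser_sample[(a, s)])) == 1 for s in samples)
    print(f"{a} within-appraiser agreement: {matched}/{len(samples)}")

# Appraiser vs standard (accuracy): both trials agree with the expert call.
for a in appraisers:
    matched = sum(set(by_appraiser_sample[(a, s)]) == {standard[s]} for s in samples)
    print(f"{a} vs standard: {matched}/{len(samples)}")

# Between appraisers (reproducibility): every trial of every appraiser agrees.
matched = sum(
    len({r for a in appraisers for r in by_appraiser_sample[(a, s)]}) == 1
    for s in samples
)
print(f"All appraisers agree: {matched}/{len(samples)}")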

8 Minitab Follow Along: Attribute Gage R&R
Data: C:\SixSigma\Data\Attributes.mtw
Conduct an attribute Gage R&R study: Stat > Quality Tools > Attribute Agreement Analysis…

9 Minitab Follow Along: Attribute Gage R&R, cont.
Columns containing the appraised attributes

10 Minitab Follow Along: Attribute Gage R&R, cont.
Session Window Output: Attribute Gage R&R Study for Jane1, Jane2, Bob1, Bob2, Alex1, Alex2

Within Appraiser
Assessment Agreement
Appraiser  # Inspected  # Matched  Percent (%)  95% CI
Jane       …            …          …            (57.7, 90.1)
Bob        …            …          …            (54.1, 87.7)
Alex       …            …          …            (40.6, 77.3)
# Matched: Appraiser agrees with him/herself across trials.

Each Appraiser vs Standard
Appraiser  # Inspected  # Matched  Percent (%)  95% CI
Jane       …            …          …            (43.9, 80.1)
Bob        …            …          …            (40.6, 77.3)
# Matched: Appraiser's assessment across trials agrees with standard.

11 Minitab Follow Along: Attribute Gage R&R, cont.
Session Window Output:

Between Appraisers
Assessment Agreement
# Inspected  # Matched  Percent (%)  95% CI
…            …          …            (9.9, 42.3)
# Matched: All appraisers' assessments agree with each other.

All Appraisers vs Standard
# Inspected  # Matched  Percent (%)  95% CI
…            …          …            …
# Matched: All appraisers' assessments agree with standard.

12 Minitab Follow Along: Attribute Gage R&R, cont.

13 The Kappa Statistic
Kappa = (Pobserved - Pchance) / (1 - Pchance)
Pobserved = proportion of units on which the raters agreed
Pchance = proportion of units on which one would expect agreement by chance
The Kappa statistic tells us how much better the measurement system is than random chance. If there is substantial agreement, there is the possibility that the ratings are accurate. If agreement is poor, the usefulness of the ratings is extremely limited.
The Kappa statistic always yields a number between -1 and +1. A value of 0 implies agreement no better than would be expected by chance; a value of +1 implies perfect agreement (negative values mean agreement worse than chance).
What Kappa value is considered good enough for a measurement system? That depends very much on the application of your measurement system. As a general rule of thumb, a Kappa value of 0.7 or higher should be good enough for investigation and improvement purposes.
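A minimal sketch (not from the deck) of the Kappa arithmetic for two raters on a pass/fail scale; the ratings below are made up purely for illustration.

from collections import Counter

rater_a = ["Pass", "Pass", "Fail", "Pass", "Fail", "Fail", "Pass", "Fail", "Pass", "Pass"]
rater_b = ["Pass", "Fail", "Fail", "Pass", "Fail", "Pass", "Pass", "Fail", "Pass", "Pass"]
n = len(rater_a)

# Observed agreement: proportion of items both raters classified the same way.
p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Chance agreement: for each category, the probability that both raters would
# pick it independently, summed over all categories.
counts_a = Counter(rater_a)
counts_b = Counter(rater_b)
p_chance = sum(
    (counts_a[c] / n) * (counts_b[c] / n) for c in set(counts_a) | set(counts_b)
)

kappa = (p_observed - p_chance) / (1 - p_chance)
print(f"P_observed={p_observed:.2f}  P_chance={p_chance:.2f}  kappa={kappa:.2f}")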

14 Attribute MSA - Example
GB D-Ring Cover Case Study
Objective: Analysis and interpretation of an attribute Gage R&R study.
Background: In addition to the Gage R&R study on the hole diameter, an attribute R&R study was conducted to check the visual assessment system used. Four appraisers looked at 30 units repeatedly.
Data: C:\SixSigma\Data\CSGRR
Instructions:
Part A: Analyze the initial data (column "assessment") and assess the quality of the measurement system. If necessary, recommend improvement actions.
Part B: The team made suggestions for improving the measurement system and used the same parts to conduct a second attribute Gage R&R study. Analyze the data after improvement (column "re-assessment").
Time: 10 min

15 Attribute MSA - Example
GB D-Ring Cover Case Study

16 Attribute MSA - Example
GB D-Ring Cover Case Study

Between Appraisers
Assessment Agreement
# Inspected  # Matched  Percent (%)  95% CI
…            …          …            (69.3, 96.2)
# Matched: All appraisers' assessments agree with each other.

All Appraisers vs Standard
# Inspected  # Matched  Percent (%)  95% CI
…            …          …            …
# Matched: All appraisers' assessments agree with standard.

The attribute R&R study showed that two of the inspectors consistently classified the parts good or bad in agreement with the standard and two did not. There was complete agreement on only 87% of the samples. This measurement system is not adequate. The situation was discussed with the inspectors, and it was agreed that the lighting level in the inspection area was poor, the area needed housekeeping, and there was no procedure for measuring the parts. The area was cleaned up, the lighting was improved, and a procedure was developed. The study was then repeated (re-assessment).

17 Attribute MSA - Example –Ans. B
GB D-Ring Cover Case Study

18 Attribute MSA - Example –Ans. B
GB D-Ring Cover Case Study

Between Appraisers
Assessment Agreement
# Inspected  # Matched  Percent (%)  95% CI
…            …          …            (82.8, 99.9)
# Matched: All appraisers' assessments agree with each other.

All Appraisers vs Standard
# Inspected  # Matched  Percent (%)  95% CI
…            …          …            …
# Matched: All appraisers' assessments agree with standard.

The new attribute R&R study showed more acceptable results; only one operator disagreed with the standard. There was complete agreement on 97% of the samples. This measurement system is adequate.

19 Reasons MSA Attribute Data Fails
Appraiser
Visual acuity (or lack of it)
Misinterpretation of the reject definitions

Appraisal
Defect probability. If this is very high, the appraiser tends to reduce the stringency of the test; the appraiser becomes numbed or hypnotized by the sheer monotony of repetition. If this is very low, the appraiser tends to get complacent and to see only what he expects to see.
Fault type. Some defects are far more obvious than others.
Number of faults occurring simultaneously. If several occur at once, the appraiser must judge the correct defect category.
Not enough time allowed for inspection.
Infrequent appraiser rest periods.
Poor illumination of the work area.
Poor inspection station layout.
Poor objectivity and clarity of conformance standards and test instructions.

20 Reasons MSA Attribute Data Fails
Organization and environment
Appraiser training and certification
Peer standards. Defectives are often deemed to reflect badly on coworkers.
Management standards
Knowledge of the operator or group producing the item
Proximity of inspectors

21 Notes AIAG describes a “Short Method” requiring:
20 samples
2 inspectors
100% agreement to standard (a small check of this rule is sketched below)
AIAG also has a "Long Method" for evaluating an attribute gage against known standards that could be measured with continuous data.
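As a hedged sketch of the acceptance rule the Short Method implies (20 samples, 2 inspectors, every call matching the standard); the inspector data below are made up for illustration.

# Sketch of the acceptance check: the gage is accepted only if every call
# from every inspector matches the standard decision on all 20 samples.
standard   = ["Good", "Bad"] * 10                    # reference decision for 20 samples
inspector1 = ["Good", "Bad"] * 10                    # matches the standard everywhere
inspector2 = ["Good", "Bad"] * 9 + ["Good", "Good"]  # one miss on the last sample

def short_method_pass(standard, *inspectors):
    """Return True only if every inspector agrees with the standard on every sample."""
    return all(call == ref
               for ratings in inspectors
               for call, ref in zip(ratings, standard))

print(short_method_pass(standard, inspector1, inspector2))  # False: inspector2 missed one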

