Prepared By: Mr. Prashant S. Kshirsagar (Sr. Manager, QA Dept.)

-: Quality Inspection :-

Objectives:
- Introduce the basic concepts of an attribute measurement systems analysis (MSA).
- Understand operational definitions for inspection and evaluation.
- Define attribute MSA terms.
- Define the procedure for conducting an attribute MSA.
- Demonstrate a trial run of an attribute MSA.

A measurement systems analysis is an evaluation of the efficacy of a measurement system. The purpose of a measurement system analysis is to qualify a measurement system for use by quantifying its accuracy, precision, and stability. It is applicable to both continuous and attribute data.

Most problematic measurement system issues arise from attribute data that rely on human judgment, such as good/bad or pass/fail. This is because it is very difficult for all inspectors to apply the same operational definition of what is "good" and what is "bad."

When no numeric measurement values are obtained, the tool used for this kind of analysis is called attribute gage R&R. The "R&R" stands for repeatability and reproducibility.
- Repeatability: the variation in measurements obtained with one measurement instrument when used several times by one appraiser while measuring the identical characteristic on the same part.
- Reproducibility: the variation in the average of the measurements made by different appraisers using the same measuring instrument when measuring the identical characteristic on the same part.
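For attribute data, repeatability and reproducibility can be expressed as simple agreement rates. The sketch below is illustrative only (the data and variable names are invented, not from the slides): repeatability is how often one appraiser agrees with their own repeated calls on a part, and reproducibility is how often all appraisers' calls on a part agree.

```python
# Illustrative attribute gage R&R agreement rates (1 = OK, 0 = not OK).
# results[appraiser] = list of trials; each trial lists one call per part.
results = {
    "A": [[1, 0, 1, 1], [1, 0, 1, 1], [1, 0, 1, 0]],
    "B": [[1, 0, 1, 1], [1, 0, 1, 1], [1, 0, 1, 1]],
}

def repeatability(trials):
    """Fraction of parts on which one appraiser agrees with themselves
    across all of their trials."""
    n_parts = len(trials[0])
    agree = sum(1 for p in range(n_parts)
                if len({trial[p] for trial in trials}) == 1)
    return agree / n_parts

def reproducibility(results):
    """Fraction of parts on which every appraiser's calls, across all
    trials, are identical."""
    all_trials = [t for trials in results.values() for t in trials]
    n_parts = len(all_trials[0])
    agree = sum(1 for p in range(n_parts)
                if len({trial[p] for trial in all_trials}) == 1)
    return agree / n_parts

for name, trials in results.items():
    print(name, repeatability(trials))       # A: 0.75, B: 1.0
print("between-appraisers:", reproducibility(results))   # 0.75
```

In this toy data, appraiser A disagrees with himself on the last part (repeatability 0.75), and that same part drags the between-appraiser agreement down to 0.75.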

Operational definitions are used to evaluate product features and make accept/reject decisions. Mandatory criteria for establishing and using operational definitions include:
A) Criteria that can be applied to an object (or a group of objects) and that precisely describe what is acceptable and unacceptable.
B) A written description of the data-collection process, including how accept/reject decisions will be made.
C) A review of the accept/reject criteria with the people who will perform the inspections, to ensure that the requirements are understood.

Procedure:
- Select at least 20 parts to be evaluated during the study.
- At least 5 of the parts should be defective in some way. If larger sample sizes are used, include at least 25% defective parts.
- Take care when selecting defective parts: if possible, select parts that are slightly beyond the specification limits or acceptance standards. Label each part with proper identification.
- Three inspectors will evaluate each part three times (three trials).
- A fourth person should record the data. Record each observation as 1 or 0: 1 is OK, 0 is not OK.
- Randomize the order of inspection after each group of inspections to minimize the risk that an inspector will remember previous accept/reject decisions. The inspectors must work independently and cannot discuss their accept/reject decisions with each other.
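The randomization step above can be sketched in a few lines. This is a minimal illustration, assuming the 20-part, 3-appraiser, 3-trial setup from the procedure; the part labels and appraiser names are placeholders.

```python
# Shuffle the part order independently for each appraiser and each trial,
# so no inspector can anticipate which part comes next.
import random

parts = [f"P{i:02d}" for i in range(1, 21)]   # 20 labeled parts
appraisers = ["A", "B", "C"]
n_trials = 3

run_orders = {}
for appraiser in appraisers:
    for trial in range(1, n_trials + 1):
        order = parts[:]          # copy the part list, then shuffle in place
        random.shuffle(order)
        run_orders[(appraiser, trial)] = order

# Every (appraiser, trial) combination sees all 20 parts,
# just in a freshly randomized order.
```

The fourth person (the data recorder) would hand the parts to each inspector in the order given by `run_orders`.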

The data recorder may use a table similar to the one below (1 = OK, 0 = not OK):

          Appraiser A      Appraiser B      Appraiser C
Part      i    ii   iii    i    ii   iii    i    ii   iii
 1
 2
 ...

Type 1 errors: a good part is rejected.
- Type 1 errors increase manufacturing costs: incremental labor and material expenses are needed to re-inspect, repair, or dispose of the suspect parts.
- Type 1 errors are also called "producer's risk" or alpha errors.
Type 2 errors: a bad part is accepted.
- Type 2 errors may occur because the inspector was poorly trained, or rushed through the inspection and inadvertently overlooked a small defect on the part.
- When Type 2 errors occur, defects slip through the containment net and are shipped to the customer.
- Because Type 2 errors put the customer at risk of receiving defective parts, the customer may raise a complaint!
- Type 2 errors are sometimes called "consumer's risk" or beta errors.

What is effectiveness?
The effectiveness of an inspection process is its rate of correct calls.
- Correct call (Cc): the number of times the operator(s) classify a sample correctly (a good sample as good, a bad sample as bad).

Effectiveness = number of correct evaluations / number of total opportunities
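The effectiveness formula can be sketched as follows. The reference standard and the appraiser's calls below are made-up example data, not taken from the slides.

```python
# Effectiveness = correct evaluations / total opportunities (1 = OK, 0 = not OK).
reference = [1, 0, 1, 1, 0]          # known-good standard for 5 parts
calls     = [[1, 0, 1, 1, 0],        # trial 1: all correct
             [1, 0, 0, 1, 0],        # trial 2: part 3 miscalled
             [1, 0, 1, 1, 1]]        # trial 3: part 5 miscalled

correct = sum(call == truth
              for trial in calls
              for call, truth in zip(trial, reference))
opportunities = len(calls) * len(reference)   # 3 trials x 5 parts = 15

print(correct, "/", opportunities)            # 13 / 15
print(correct / opportunities)                # about 0.867
```

In a full study the opportunities would be appraisers x parts x trials (e.g. 3 x 20 x 3 = 180), but the arithmetic is the same.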

What is a false alarm?
- False alarm (Fa): the number of times the operator(s) identify a good sample as a bad one.
- The false alarm rate, also known as the Type 1 error rate or producer's risk, is given by:

Fa (false alarm rate) = number of false alarms / number of non-defective items

What is the miss rate?
- A miss is a defective item that is classified as non-defective; that is, the operator(s) identify a bad sample as a good one.
- The miss rate (Mr), also known as the Type 2 error rate or consumer's risk, is given by:

Mr (miss rate) = number of misses / number of defective items
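The false alarm and miss rate formulas can be computed from the same kind of data. Again, the reference standard and calls below are illustrative, not from the slides.

```python
# False alarm rate = false alarms / non-defective items
# Miss rate        = misses / defective items        (1 = OK, 0 = not OK)
reference = [1, 1, 1, 0, 0]          # 3 good parts, 2 defective parts
calls     = [1, 0, 1, 0, 1]          # one appraiser's single trial

false_alarms = sum(1 for call, truth in zip(calls, reference)
                   if truth == 1 and call == 0)   # good part called bad
misses = sum(1 for call, truth in zip(calls, reference)
             if truth == 0 and call == 1)         # bad part called good

n_good = reference.count(1)
n_bad = reference.count(0)
print("false alarm rate:", false_alarms / n_good)   # 1/3
print("miss rate:", misses / n_bad)                 # 1/2
```

Note that the two rates use different denominators: false alarms are scored against the good parts only, misses against the defective parts only.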

Acceptability criteria: if all measurement results agree, the gage is acceptable. If the measurement results do not agree, the gage cannot be accepted; it must be improved and re-evaluated.
- Effectiveness: below 80% is not acceptable.
- Miss rate: above 5% is not acceptable.
- False alarm rate: above 10% is not acceptable.
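The three thresholds can be wrapped in a small helper. The function name is a placeholder chosen for this sketch; the numeric limits are the ones stated above.

```python
# Apply the slide's acceptance thresholds:
# effectiveness >= 80%, miss rate <= 5%, false alarm rate <= 10%.
def gage_acceptable(effectiveness, miss_rate, false_alarm_rate):
    return (effectiveness >= 0.80
            and miss_rate <= 0.05
            and false_alarm_rate <= 0.10)

print(gage_acceptable(0.95, 0.02, 0.05))   # True: all criteria met
print(gage_acceptable(0.95, 0.10, 0.05))   # False: miss rate above 5%
```

If any one criterion fails, the measurement system must be improved and the study repeated.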

If agreement is poor, ask:
- What could have caused the poor agreement?
- What should be done to improve the measurement system?
- What should be done to improve consistency?
Brainstorm these questions as a team!

If any of the decisions disagree, the measurement system may need improvement. Improvement actions include:
- Reworking the gage,
- Re-training the inspectors,
- Clarifying the accept/reject criteria,
- Adding more lighting.
After implementing the improvement actions, repeat the study. If the error cannot be eliminated, take appropriate corrective actions, such as switching to a new measurement system, adding redundant inspections, or conducting a more extensive study.

Exercise