DMAIC : The Breakthrough Strategy

What is Six Sigma? “It is a business process that allows companies to drastically improve their bottom line by designing and monitoring everyday business activities in ways that minimize waste and resources while increasing customer satisfaction.” (Mikel Harry, Richard Schroeder)

What Six Sigma Can Do For Your Company? (chart: sigma levels for the average company vs. MAIC vs. DFSS)

What Six Sigma Can Do For Your Company?

The Cost of Quality (COQ). Traditional cost of poor quality, 5-8% of sales: inspection, warranty, rejects, rework. Less obvious cost of quality, 15-20% of sales (lost opportunity): installation, declining sales, late deliveries, paperwork sent to the wrong place, long production lead times, expediting charges, excessive purchase orders, long set-up times, the time value of money, ordering more raw material than necessary, customer dissatisfaction, safety, freight charges, telephone charges, incorrect data, different ways of doing business. Group activity: map the flow that follows once rejected units have occurred in the process.

DMAIC : The Yellow Brick Road. Core phases of the Breakthrough Strategy: Define, then Measure and Analyze (characterization), then Improve and Control (optimization).

Breakthrough & People Champion Blackbelts Finance Rep.& Process Owner

Define What is my biggest problem? Customer complaints Low performance metrics Too much time consumed What needs to improve? Big budget items Poor performance Where are there opportunities to improve? How do I affect corporate and business group objectives? What’s in my budget?

Define : The Project Projects DIRECTLY tie to department and/or business unit objectives Projects are suitable in scope BBs are “fit” to the project Champions own and support project selection

Define : The Defect. Rework, High Defect Rates, Customer Complaints, Excessive Test and Inspection, Constrained Capacity with High Anticipated Capital Expenditures, Bottlenecks, Low Yields, Excessive Cycle Time, Excessive Machine Down Time, High Maintenance Costs, High Consumables Usage

Define : The Chronic Problem (chart of reject rate over time): a special cause appears as an occasional spike, while the chronic problem is the persistently high, deeply embedded baseline; the Breakthrough Strategy drives that baseline down toward the optimum level.

Define : The Persistent Problem Is process in control?

Define : Refine The Defect (breakdown of the defect into categories a1 through a7; Refined Defect = a1)

MAIC --> Identify Leveraged KPIV’s Tools Outputs Process Map 30 - 50 Inputs Variables C&E Matrix and FMEA Potential Key Process Measure 10 - 15 Gage R&R, Capability Input Variables (KPIVs) Multi- Vari Studies, Correlations KPIVs Analyze 8 - 10 T-Test, ANOM, ANOVA Screening DOE’s Improve 4-8 Optimized KPIVs DOE’s , RSM Quality Systems 3-6 Key Leverage KPIVs Control SPC, Control Plans

Measure The Measure phase serves to validate the problem, translate the practical problem to a statistical problem, and begin the search for root causes

Measure : Tools. To validate the problem: Measurement System Analysis. To translate the practical problem to a statistical problem: Process Capability Analysis. To search for the root cause: Process Map, Cause and Effect Analysis, Failure Mode and Effect Analysis.

Workshop #1: Our products are the distances resulting from the catapult. Product specs are +/- 4 cm for both the X and Y axes. Shoot the ball for at least 30 trials, then collect the yield. Prepare to report your results.

Measure : Measurement System Analysis Objectives: Validate the Measurement / Inspection System Quantify the effect of the Measurement System variability on the process variability

Measure : Measurement System Analysis Attribute GR&R : Purpose To determine if inspectors across all shifts, machines, lines, etc… use the same criteria to discriminate “good” from “bad” To quantify the ability of inspectors or gages to accurately repeat their inspection decisions To identify how well inspectors/gages conform to a known master (possibly defined by the customer) which includes: How often operators decide to over reject How often operators decide to over accept

Measure : Measurement System Analysis

Measure : Measurement System Analysis % Appraiser Score: % REPEATABILITY OF OPERATOR # 1 = 16/20 = 80%; % REPEATABILITY OF OPERATOR # 2 = 13/20 = 65%; % REPEATABILITY OF OPERATOR # 3 = 20/20 = 100%

Measure : Measurement System Analysis % Attribute Score: % UNBIAS OF OPERATOR # 1 = 12/20 = 60%; % UNBIAS OF OPERATOR # 2 = 12/20 = 60%; % UNBIAS OF OPERATOR # 3 = 17/20 = 85%. % Screen Effective Score: % REPEATABILITY OF INSPECTION = 11/20 = 55%. % Attribute Screen Effective Score: % UNBIAS OF INSPECTION = 10/20 = 50%
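A minimal sketch of how the appraiser and attribute scores above can be computed from raw attribute inspection data, assuming each operator inspects the same 20 parts twice and a known master classification exists. The arrays and helper names below are hypothetical, not taken from the course data.

import numpy as np

def appraiser_score(trial1, trial2):
    # % repeatability: how often an operator agrees with himself across two trials
    trial1, trial2 = np.asarray(trial1), np.asarray(trial2)
    return np.mean(trial1 == trial2) * 100

def attribute_score(trial1, trial2, master):
    # % unbias: how often both trials agree with the known master
    trial1, trial2, master = map(np.asarray, (trial1, trial2, master))
    return np.mean((trial1 == master) & (trial2 == master)) * 100

# Hypothetical inspection results: 1 = accept, 0 = reject, 20 parts
master = np.array([1] * 12 + [0] * 8)
op1_t1 = np.array([1] * 11 + [0] * 9)
op1_t2 = np.array([1] * 12 + [0] * 8)

print("Operator 1 % repeatability:", appraiser_score(op1_t1, op1_t2))
print("Operator 1 % unbias:", attribute_score(op1_t1, op1_t2, master))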

Measure : Measurement System Analysis Variable GR&R : Purpose Study of your measurement system will reveal the relative amount of variation in your data that results from measurement system error. It is also a great tool for comparing two or more measurement devices or two or more operators. MSA should be used as part of the criteria for accepting a new piece of measurement equipment to manufacturing. It should be the basis for evaluating a measurement system which is suspect of being deficient.

Measure : Measurement System Analysis

Measure : Measurement System Analysis Resolution? “Precision” (R&R) Calibration? Stability? Linearity? Bias?

Measurement System Metrics. Measurement system variance: σ²meas = σ²repeat + σ²reprod. To determine whether the measurement system is “good” or “bad” for a certain application, you need to compare the measurement variation to the product spec or the process variation. Comparing σ²meas with the tolerance: Precision-to-Tolerance Ratio (P/T). Comparing σ²meas with the total observed process variation (P/TV): % Repeatability and Reproducibility (%R&R), Discrimination Index.
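A minimal sketch of these metrics in Python, assuming the variance components (repeatability, reproducibility, part-to-part) are already available from a Gage R&R study. The 5.15 multiplier follows the 5.15*SD study-variation convention used in the Minitab output later in this section; the tolerance width of 1.5 used in the usage line is inferred from that same output (0.34306 / 22.87% ≈ 1.5) and is an assumption, as is the 1.41 factor for distinct categories (the usual AIAG convention).

import math

def msa_metrics(var_repeat, var_reprod, var_part, usl, lsl):
    var_meas = var_repeat + var_reprod            # sigma^2_meas = sigma^2_repeat + sigma^2_reprod
    var_total = var_meas + var_part               # total observed variation
    sd_meas, sd_part, sd_total = map(math.sqrt, (var_meas, var_part, var_total))
    p_to_t = 5.15 * sd_meas / (usl - lsl) * 100   # Precision-to-Tolerance ratio, %
    pct_rr = sd_meas / sd_total * 100             # %R&R (P/TV), %
    ndc = int(1.41 * sd_part / sd_meas)           # number of distinct categories (AIAG)
    return p_to_t, pct_rr, ndc

# Values taken from the Gage R&R example output further below
print(msa_metrics(var_repeat=0.001292, var_reprod=0.003146,
                  var_part=0.037164, usl=1.0, lsl=-0.5))
# Expected roughly: P/T ~ 22.9%, %R&R ~ 32.7%, ndc = 4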

Uses of P/T and P/TV (%R&R) The P/T ratio is the most common estimate of measurement system precision Evaluates how well the measurement system can perform with respect to the specifications The appropriate P/T ratio is strongly dependent on the process capability. If Cpk is not adequate, the P/T ratio may give a false sense of security. The P/TV (%R&R) is the best measure for Analysis Estimates how well the measurement system performs with respect to the overall process variation %R&R is the best estimate when performing process improvement studies. Care must be taken to use samples representing full process range.

Number of Distinct Categories. Automotive Industry Action Group (AIAG) recommendations: < 2 categories: the system cannot discern one part from another; = 2: the system can only divide the data into two groups, e.g. high and low; = 3: the system can only divide the data into three groups, e.g. low, middle and high; >= 4: the system is acceptable.

Measure : Measurement System Analysis Variable GR&R : Decision Criterion Note : Stability is analyzed by control chart

Example: Minitab. Enter the data and tolerance information into Minitab. Stat > Quality Tools > Gage R&R Study (Crossed). Enter Gage Info and Options (see next page). FN: Gageaiag.mtw. The ANOVA method is preferred.

Enter the data and tolerance information into Minitab. Stat > Quality Tools > Gage R&R Study Gage Info (see below) & Options

Gage R&R Output

Gage R&R Output

Gage R&R, Variation Components Variance due to the measurement system (broken down into repeatability and reproducibility) %Contribution Source VarComp (of VarComp) Total Gage R&R 0.004437 10.67 Repeatability 0.001292 3.10 Reproducibility 0.003146 7.56 Operator 0.000912 2.19 Operator*PartID 0.002234 5.37 Part-To-Part 0.037164 89.33 Total Variation 0.041602 100.00 StdDev Study Var %Study Var %Tolerance Source (SD) (5.15*SD) (%SV) (SV/Toler) Total Gage R&R 0.066615 0.34306 32.66 22.87 Repeatability 0.035940 0.18509 17.62 12.34 Reproducibility 0.056088 0.28885 27.50 19.26 Operator 0.030200 0.15553 14.81 10.37 Operator*PartID 0.047263 0.24340 23.17 16.23 Part-To-Part 0.192781 0.99282 94.52 66.19 Total Variation 0.203965 1.05042 100.00 70.03 Total variance Standard deviation for each variance component Variance due to the parts

Gage R&R, Results %Contribution Source VarComp (of VarComp) Total Gage R&R 0.004437 10.67 Repeatability 0.001292 3.10 Reproducibility 0.003146 7.56 Operator 0.000912 2.19 Operator*PartID 0.002234 5.37 Part-To-Part 0.037164 89.33 Total Variation 0.041602 100.00 StdDev Study Var %Study Var %Tolerance Source (SD) (5.15*SD) (%SV) (SV/Toler) Total Gage R&R 0.066615 0.34306 32.66 22.87 Repeatability 0.035940 0.18509 17.62 12.34 Reproducibility 0.056088 0.28885 27.50 19.26 Operator 0.030200 0.15553 14.81 10.37 Operator*PartID 0.047263 0.24340 23.17 16.23 Part-To-Part 0.192781 0.99282 94.52 66.19 Total Variation 0.203965 1.05042 100.00 70.03 Question: What is our conclusion about the measurement system?

Measure : Process Capability Analysis. Process capability is a measure of how well the process is currently behaving with respect to the output specification. Process capability is determined by the total variation that comes from common causes: the minimum variation that can be achieved after all special causes have been eliminated. Thus, capability represents the performance of the process itself, as demonstrated when the process is being operated in a state of statistical control.

Measure : Process Capability Analysis. Translate the practical problem to a statistical problem. Characterization (distributions shown against LSL/USL): variation too large, off-target, outliers.

Measure : Process Capability Analysis. Two measures of process capability: process potential (Cp) and process performance (Cpu, Cpl, Cpk, Cpm).

Measure : Process Capability Analysis Process Potential

Measure : Process Capability Analysis. The Cp index compares the allowable spread (USL - LSL) against the process spread (6σ). It fails to take into account whether the process is centered between the specification limits (illustration: process centered vs. process not centered).

Measure : Process Capability Analysis Process Performance The Cpk index relates the scaled distance between the process mean and the nearest specification limit.
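A minimal sketch of the two indices just described, assuming the process mean and (short-term) standard deviation are known; the spec limits and process values in the usage lines are hypothetical.

def cp(usl, lsl, sigma):
    # process potential: allowable spread over 6 sigma
    return (usl - lsl) / (6 * sigma)

def cpk(usl, lsl, mean, sigma):
    # process performance: scaled distance from the mean to the nearest spec limit
    return min(usl - mean, mean - lsl) / (3 * sigma)

# Hypothetical example: spec 10 +/- 4, process mean 11, sigma 1
print(cp(14, 6, 1.0))       # 1.33 -> the potential is adequate
print(cpk(14, 6, 11, 1.0))  # 1.00 -> process is off-center, so performance is lower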

Measure : Process Capability Analysis. There are two kinds of variation: short-term variation and long-term variation.

Measure : Process Capability Analysis. Short term vs. long term (Cp vs. Pp, or Cpk vs. Ppk).

Measure : Process Capability Analysis. Process potential vs. process performance (Cp vs. Cpk): 1. If Cp > 1.5, the standard deviation is adequate. 2. If Cp is not equal to Cpk, the process mean is off-center.

Workshop #3: Design an appropriate check sheet. Define the subgroup. Shoot the ball for at least 30 trials per subgroup. Perform a process capability analysis and translate Cp, Cpk, Pp and Ppk into the statistical problem. Report your results.

Measure : Process Map A process map is a graphical representation of the flow of an “as-is” process. It contains all the major steps and decision points in a process. It helps us understand the process better, identify the critical or problem areas, and identify where improvement can be made.

Measure : Process Map OPERATION All steps in the process where the object undergoes a change in form or condition. TRANSPORTATION All steps in a process where the object moves from one location to another, outside of the Operation. STORAGE All steps in the process where the object remains at rest, in a semi-permanent or storage condition. DELAY All incidences where the object stops or waits on an operation, transportation, or inspection. INSPECTION All steps in the process where the objects are checked for completeness or quality, outside of the Operation. DECISION

Measure : Process Map (example flowchart with Good/Bad/Scrap/Warehouse branches). How many Operational Steps are there? How many Decision Points? How many Measurement/Inspection Points? How many Re-work Loops? How many Control Points?

Measure : Process Map. High-level process map: for each major step, list the KPIVs and KPOVs. These KPIVs and KPOVs can then be used as inputs to the Cause and Effect Matrix.

Workshop #2 : Create the process map and report the process steps and the KPIVs that may be the cause

Measure : Cause and Effect Analysis A visual tool used to identify, explore and graphically display, in increasing detail, all the possible causes related to a problem or condition in order to discover root causes: to discover the most probable causes for further analysis; to visualize possible relationships between causes for any current or future problem; to pinpoint conditions causing customer complaints, process errors or non-conforming products; to provide focus for discussion; to aid in the development of technical or other standards or process improvements.

Measure : Cause and Effect Matrix There are two types of cause-and-effect tools: 1. Fishbone Diagram: the traditional approach to brainstorming and diagramming cause-effect relationships; a good tool when there is one primary effect being analyzed. 2. Cause-Effect Matrix: a diagram in table form showing the direct relationships between outputs (Y’s) and inputs (X’s).

Measure : Cause and Effect Matrix. Fishbone diagram (branches: Methods, Materials, Machinery, Manpower, leading to the Problem/Desired Improvement). Each cause is labeled C/N/X: C = Control Factor, N = Noise Factor, X = Factor for DOE (chosen later).

Measure : Cause and Effect Matrix

Workshop #4: Team brainstorming to create the fishbone diagram

Measure : Failure Mode and Effect Analysis FMEA is a systematic approach used to examine potential failures and prevent their occurrence. It enhances an engineer’s ability to predict problems and provides a system of ranking, or prioritization, so the most likely failure modes can be addressed.

Measure : Failure Mode and Effect Analysis

Measure : Failure Mode and Effect Analysis RPN = S x O x D: Severity x Occurrence x Detection
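A small sketch of RPN ranking for an FMEA worksheet, assuming the usual 1-10 scales for severity (S), occurrence (O) and detection (D). The failure modes listed are hypothetical placeholders.

failure_modes = [
    ("wrong material loaded", 8, 3, 4),   # (name, S, O, D)
    ("fixture not clamped",   6, 5, 2),
    ("sensor drift",          7, 4, 7),
]

# Rank failure modes by RPN = S x O x D, highest risk first
ranked = sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True)
for name, s, o, d in ranked:
    print(f"{name:25s} RPN = {s * o * d}")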

Measure : Failure Mode and Effect Analysis The vital few (few but important) versus the trivial many (many but minor).

Workshop # 5 : Team Brainstorming to create FMEA

Measure : Measure Phase’s Output. Check and fix the measurement system. Determine “where” you are: rolled throughput yield, DPPM, process capability, entitlement. Identify potential KPIVs: process mapping / cause & effect / FMEA. Determine their likely impact.
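A minimal sketch of two of the “where you are” metrics named above, rolled throughput yield and DPPM/DPMO. The step yields and defect counts in the usage lines are hypothetical.

import math

def rolled_throughput_yield(step_yields):
    # RTY is the product of the first-pass yields of the individual process steps
    return math.prod(step_yields)

def dpmo(defects, units, opportunities_per_unit):
    # defects per million opportunities
    return defects / (units * opportunities_per_unit) * 1_000_000

print(rolled_throughput_yield([0.98, 0.95, 0.99]))             # ~0.922
print(dpmo(defects=34, units=1000, opportunities_per_unit=5))  # 6800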

Analyze The Analyze phase serves to validate the KPIVs, and to study the statistical relationship between KPIVs and KPOVs

Analyze : Tools. To validate the KPIVs: hypothesis tests (2-sample t-test, Analysis of Variance, etc.). To reveal the relationship between KPIVs and KPOVs: regression analysis, correlation.

Analyze : Hypothesis Testing The Null Hypothesis Statement generally assumed to be true unless sufficient evidence is found to the contrary Often assumed to be the status quo, or the preferred outcome. However, it sometimes represents a state you strongly want to disprove. Designated as H0

Analyze : Hypothesis Testing The Alternative Hypothesis: statement generally held to be true if the null hypothesis is rejected. Can be based on a specific engineering difference in a characteristic value that one desires to detect. Designated as HA

Analyze : Hypothesis Testing NULL HYPOTHESIS (nothing has changed): for tests of the process mean, H0: μ = μ0; for tests of the process variance, H0: σ² = σ0². ALTERNATE HYPOTHESIS (change has occurred): inequality, Ha: μ ≠ μ0 or Ha: σ² ≠ σ0²; new > old, Ha: μ > μ0 or Ha: σ² > σ0²; new < old, Ha: μ < μ0 or Ha: σ² < σ0².

Analyze : Hypothesis Testing

Analyze : Hypothesis Testing See Hypothesis Testing Roadmap

Example: Single Mean Compared to Target. The example will include 10 measurements of a random sample: 55 57 58 54 53 56 55 54 54 53. The question is: is the mean of the sample representative of a target value of 54? The hypotheses: Ho: μ = 54, Ha: μ ≠ 54. Ho can be rejected if p < .05.

Single Mean to a Target - Using Minitab. Stat > Basic Statistics > 1-Sample t. The p-value is greater than 5%, so we say the sample mean is representative of 54.

One-Sample T: C1
Test of mu = 54 vs mu not = 54
Variable    N     Mean    StDev   SE Mean
C1         10   54.900    1.663     0.526
Variable   95.0% CI              T      P
C1         (53.710, 56.090)   1.71  0.121
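The same one-sample test can be reproduced outside Minitab; a minimal sketch with SciPy, using the ten measurements listed above, is shown here for comparison.

from scipy import stats

sample = [55, 57, 58, 54, 53, 56, 55, 54, 54, 53]
t_stat, p_value = stats.ttest_1samp(sample, popmean=54)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")   # t ~ 1.71, p ~ 0.121, matching Minitab
# p > 0.05, so we fail to reject Ho: the sample is consistent with a mean of 54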

Our Conclusion Statement Because the p value was greater than our critical confidence level (.05 in this case), or similarly, because the confidence interval on the mean contained our target value, we can make the following statement: “We have insufficient evidence to reject the null hypothesis.” Does this say that the null hypothesis is true (that the true population mean = 54)? No! However, we usually then choose to operate under the assumption that Ho is true.

Single Std Dev Compared to Standard. A study was performed in order to evaluate the effectiveness of two devices for improving the efficiency of gas home-heating systems. Energy consumption in houses was measured after one of the two devices (Damper = 1 or Damper = 2) was installed. The energy consumption data (BTU.In) are stacked in one column with a grouping column (Damper) containing identifiers or subscripts to denote the population. You are interested in comparing the variability of the population to the current standard (σ = 2.4). © 2000 Minitab, Inc. All Rights Reserved.

Example: Single Std Dev Compared to Standard (Data: Furnace.mtw, use “BTU_in”). Note: Minitab does not provide an individual χ² test for standard deviations. Instead, it is necessary to look at the confidence interval on the standard deviation and determine if the CI contains the claimed value.
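A minimal sketch of the chi-square confidence interval for a single standard deviation, assuming the BTU.In values are available as an array; the formula is the standard (n-1)s²/χ² interval, and the function name is hypothetical.

import numpy as np
from scipy import stats

def sigma_ci(data, confidence=0.95):
    # chi-square confidence interval for the population standard deviation
    data = np.asarray(data, dtype=float)
    n, s2 = len(data), np.var(data, ddof=1)
    alpha = 1 - confidence
    lower = np.sqrt((n - 1) * s2 / stats.chi2.ppf(1 - alpha / 2, n - 1))
    upper = np.sqrt((n - 1) * s2 / stats.chi2.ppf(alpha / 2, n - 1))
    return lower, upper

# Usage with the BTU.In column loaded into an array named btu_in:
#   lo, hi = sigma_ci(btu_in)
# The claimed sigma = 2.4 is plausible only if it falls inside (lo, hi).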

Example: Single Standard Deviation Stat > Basic Statistics > Display Descriptive Statistics

Running the Statistics….

Running the Statistics….

Two Parameter Testing. Step 1: State the practical problem (means: 2-sample t-test; sigmas: homogeneity of variance; medians: nonparametrics; failure rates: 2 proportions). Step 2: Are the data normally distributed? Step 3: State the null hypothesis: for σ, Ho: σpop1 = σpop2; for μ, Ho: μpop1 = μpop2 (normal data), or Ho: M1 = M2 for medians (non-normal data). State the alternative hypothesis: Ha: σpop1 ≠ σpop2; Ha: μpop1 ≠ μpop2; Ha: M1 ≠ M2 (non-normal data).

Two Parameter Testing (cont.). Step 4: Determine the appropriate test statistic: F(calc) to test Ho: σpop1 = σpop2; t(calc) to test Ho: μpop1 = μpop2 (normal data). Step 5: Find the critical value from the appropriate distribution and alpha. Step 6: If the calculated statistic > critical statistic, then REJECT Ho; or if the p-value < 0.05 (p-value < alpha), then REJECT Ho. Step 7: Translate the statistical conclusion into process terms.

Comparing Two Independent Sample Means. The example will make a comparison between two group means. Data in Furnace.mtw (BTU_in). Are the means of the two groups the same? The hypotheses: Ho: μ1 = μ2, Ha: μ1 ≠ μ2. Reject Ho if t > t(α/2) or t < -t(α/2) for n1 + n2 - 2 degrees of freedom.

t-test Using Stacked Data Stat >Basic Statistics > 2-Sample t

t-test Using Stacked Data. Descriptive statistics graph: BTU.In by Damper.

Two-Sample T-Test and CI: BTU.In, Damper
Two-sample T for BTU.In
Damper    N     Mean   StDev   SE Mean
1        40     9.91    3.02      0.48
2        50    10.14    2.77      0.39
Difference = mu (1) - mu (2)
Estimate for difference: -0.235
95% CI for difference: (-1.464, 0.993)
T-Test of difference = 0 (vs not =): T-Value = -0.38  P-Value = 0.704  DF = 80
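A sketch of the same two-sample comparison with SciPy, assuming the BTU.In values have been split into two arrays by the Damper column. The short arrays below are placeholders, not the Furnace.mtw data, so the printed result will differ from the Minitab output.

from scipy import stats

damper1 = [9.5, 10.2, 8.8, 11.0, 9.7]    # placeholder subset for Damper = 1
damper2 = [10.4, 9.9, 10.8, 10.1, 9.6]   # placeholder subset for Damper = 2

# equal_var=False gives Welch's t-test (no pooled-variance assumption)
t_stat, p_value = stats.ttest_ind(damper1, damper2, equal_var=False)
print(t_stat, p_value)

# With the full data set Minitab reports T = -0.38, p = 0.704: no evidence that
# the two damper types differ in mean energy consumption.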

2 variances test Stat >Basic Statistics > 2 variances

2 variances test
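A sketch of a two-variances comparison in Python, assuming the same two damper groups. Minitab's 2 Variances procedure reports an F-test (for normal data) alongside Levene's test (more robust to non-normality); SciPy provides Levene directly, and the small F-test helper below is written here for illustration. The data arrays are placeholders.

import numpy as np
from scipy import stats

def f_test(a, b):
    # two-sided F test of equal variances for two independent samples
    a, b = np.asarray(a, float), np.asarray(b, float)
    f = np.var(a, ddof=1) / np.var(b, ddof=1)
    df1, df2 = len(a) - 1, len(b) - 1
    p = 2 * min(stats.f.cdf(f, df1, df2), stats.f.sf(f, df1, df2))
    return f, p

damper1 = [9.5, 10.2, 8.8, 11.0, 9.7]    # placeholder data
damper2 = [10.4, 9.9, 10.8, 10.1, 9.6]

print(f_test(damper1, damper2))
print(stats.levene(damper1, damper2))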

Characteristics About Multiple Parameter Testing One type of analysis is called Analysis of Variance (ANOVA). Allows comparison of two or more process means. We can test statistically whether these samples represent a single population, or if the means are different. The OUTPUT variable (KPOV) is generally measured on a continuous scale (Yield, Temperature, Volts, % Impurities, etc...) The INPUT variables (KPIV’s) are known as FACTORS. In ANOVA, the levels of the FACTORS are treated as categorical in nature even though they may not be. When there is only one factor, the type of analysis used is called “One-Way ANOVA.” For 2 factors, the analysis is called “Two-Way ANOVA. And “n” factors entail “n-Way ANOVA.”

General Method. Step 1: State the practical problem. Step 2: Do the assumptions for the model hold? (Response means are independent and normally distributed; population variances are equal across all levels of the factor; run a homogeneity-of-variance analysis, by factor level, first.) Step 3: State the hypothesis. Step 4: Construct the ANOVA table. Step 5: Do the assumptions for the errors hold (residual analysis)? (Errors of the model are independent and normally distributed.) Step 6: Interpret the p-value (or the F-statistic) for the factor effect: if the p-value < 0.05, then REJECT Ho; otherwise, operate as if the null hypothesis is true. Step 7: Translate the statistical conclusion into process terms.

Step 2: Do the Assumptions for the Model Hold? Are the means independent and normally distributed? Randomize runs during the experiment, ensure adequate sample sizes, and run a normality test on the data by level (Minitab: Stat > Basic Stats > Normality Test). Are the population variances equal for each factor level (run a homogeneity-of-variance analysis first)? For σ: Ho: σpop1 = σpop2 = σpop3 = σpop4 = ...; Ha: at least two are different.

Step 3: State the Hypotheses. Mathematical hypotheses: Ho: all τ’s = 0; Ha: τk ≠ 0 for some k. Conventional hypotheses: Ho: μ1 = μ2 = μ3 = μ4; Ha: at least one μk is different.

Step 4: Construct the ANOVA Table.

One-Way Analysis of Variance
Analysis of Variance for Time
Source      DF      SS      MS      F       P
Operator     3   149.5    49.8   4.35   0.016
Error       20   229.2    11.5
Total       23   378.6

General form:
Source    SS            df     MS                                 Test Statistic
Between   SStreatment   g-1    MStreatment = SStreatment/(g-1)    F = MStreatment / MSerror
Within    SSerror       N-g    MSerror = SSerror/(N-g)
Total     SStotal       N-1

Where g = number of subgroups and n = number of readings per subgroup. What’s important is the probability that the observed variation in operator means could have happened by chance.

Steps 5 - 7. Step 5: Do the assumptions for the errors hold (residual analysis)? Errors of the model are independent and normally distributed: randomize runs during the experiment, ensure adequate sample size, plot a histogram of the error terms, run a normality check on the error terms, plot the errors against run order (I-chart), and plot the errors against the model fit. Step 6: Interpret the p-value (or the F-statistic) for the factor effect: if the p-value < 0.05, then REJECT Ho; otherwise, operate as if the null hypothesis is true. Step 7: Translate the statistical conclusion into process terms.

Example, Experimental Setup. Twenty-four animals receive one of four diets. The type of diet is the KPIV (factor of interest); blood coagulation time is the KPOV. During the experiment, diets were assigned randomly to animals, and blood samples were taken and tested in random order. Why?

DIET A: 62 60 63 59
DIET B: 63 67 71 64 65 66
DIET C: 68 66 71 67 68 68
DIET D: 56 62 60 61 63 64 63 59

Example, Step 2: Do the assumptions for the model hold? Populations by level are normally distributed (a normality test won’t show significance for a small number of samples). Variances are equal across all levels of the factor: Stat > ANOVA > Test for Equal Variances. Ho: _____________ Ha: _____________

Example, Step 3: State the null and alternate hypotheses. Ho: μdiet1 = μdiet2 = μdiet3 = μdiet4 (or Ho: all τ’s = 0). Ha: at least two diets differ from each other (or Ha: some τk ≠ 0). Interpretation of the null hypothesis: the average blood coagulation time of each diet is the same; in other words, what you eat will NOT affect your blood coagulation time. Interpretation of the alternate hypothesis: at least one diet will affect the average blood coagulation time differently than another; in other words, what type of diet you keep does affect blood coagulation time.

Example, Step 4 Construct the ANOVA Table (using Minitab): Stat > ANOVA > One-way ... Hint: Store Residuals & Fits for later use

Example, Step 4.

One-way Analysis of Variance
Analysis of Variance for Coag_Tim
Source       DF       SS      MS       F       P
Diet_Num      3   228.00   76.00   13.57   0.000
Error        20   112.00    5.60
Total        23   340.00

Level    N    Mean   StDev
1        4   61.00   1.826
2        6   66.00   2.828
3        6   68.00   1.673
4        8   61.00   2.619
Pooled StDev = 2.366
(Minitab also displays individual 95% CIs for each mean based on the pooled StDev.)
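The same one-way ANOVA can be run as a SciPy sketch, using the coagulation-time data grouped by diet as reconstructed in the setup slide above (group sizes and means match the Minitab output: 4/6/6/8 and 61/66/68/61).

from scipy import stats

diet_a = [62, 60, 63, 59]
diet_b = [63, 67, 71, 64, 65, 66]
diet_c = [68, 66, 71, 67, 68, 68]
diet_d = [56, 62, 60, 61, 63, 64, 63, 59]

f_stat, p_value = stats.f_oneway(diet_a, diet_b, diet_c, diet_d)
print(f"F = {f_stat:.2f}, p = {p_value:.5f}")   # F ~ 13.57, p ~ 0.00005
# p < 0.05, so at least one diet mean differs from the others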

Example, Step 5 Do the assumptions for the errors hold? Best way to check is through a “residual analysis” Stat > Regression > Residual Plots ... Determine if residuals are normally distributed Ascertain that the histogram of the residuals looks normal Make sure there are no trends in the residuals (it’s often best to graph these as a function of the time order in which the data was taken) The residuals should be evenly distributed about their expected (fitted) values

Example, Step 5 (residual plots). Individual residuals: any trends or outliers? How normal are the residuals? The run-order plot investigates how the residuals behave across the experiment; this is probably the most important graph, since it will signal that something outside the experiment may be operating, and nonrandom patterns are warnings. The residuals-versus-fits plot investigates whether the mathematical model fits equally well for low to high values of the fits. Histogram: bell curve? (Ignore for small data sets, < 30.) Residuals should be random about zero without trends.

Example, Step 6: Interpret the p-value (or the F-statistic) for the factor effect. Assuming the residual assumptions are satisfied: if the p-value < 0.05, then REJECT Ho; otherwise, operate as if the null hypothesis is true. Here (ANOVA table above: F = 13.57, p = 0.000) P is less than 5%, so at least one group mean is different: we reject the hypothesis that all the group means are equal, and conclude that at least one diet mean is different. An F-test this large could happen by chance, but less than one time out of 2000; this would be like getting 11 heads in a row from a fair coin. The F-test is close to 1.00 when group means are similar (and group sizes are equal); in this case, the F-test is much greater.

Workshop #6: Run hypothesis tests to validate your KPIVs from the Measure phase

Analyze : Analyze Phase’s output Refine: KPOV = F(KPIV’s) Which KPIV’s cause mean shifts? Which KPIV’s affect the standard deviation? Which KPIV’s affect yield or proportion? How did KPIV’s relate to KPOV’s?

Improve The Improve phase serves to optimize the KPIV’s and study the possible actions or ideas to achieve the goal

Improve : Tools To optimize KPIV’s in order to achieve the goal Design of Experiment Evolutionary Operation Response Surface Methodology

Improve : Design Of Experiment Factorial Experiments The GOAL is to obtain a mathematical relationship which characterizes: Y = F (X1, X2, X3, ...). Mathematical relationships allow us to identify the most important or critical factors in any experiment by calculating the effect of each. Factorial Experiments allow investigation of multiple factors at multiple levels. Factorial Experiments provide insight into potential “interactions” between factors. This is referred to as factorial efficiency.

Improve : Design Of Experiment Factors: A factor (or input) is one of the controlled or uncontrolled variables whose influence on a response (output) is being studied in the experiment. A factor may be quantitative, e.g., temperature in degrees, time in seconds. A factor may also be qualitative, e.g., different machines, different operator, clean or not clean.

Improve : Design Of Experiment Level: The “levels” of a factor are the values of the factor being studied in the experiment. For quantitative factors, each chosen value becomes a level, e.g., if the experiment is to be conducted at two different temperatures, then the factor of temperature has two “levels”. Qualitative factors can have levels as well, e.g., for cleanliness, clean vs. not clean; for a group of machines, machine identity. “Coded” levels are often used, e.g. +1 to indicate the “high level” and -1 to indicate the “low level”. Coding can be useful in both preparation and analysis of the experiment.

Improve : Design Of Experiment k1 x k2 x k3 ... factorial: description of the basic design. The number of “k’s” is the number of factors; the value of each “k” is the number of levels of interest for that factor. Example: a 2 x 3 x 3 design indicates three input variables; one input has two levels and the other two each have three levels. Test run (experimental run): a single combination of factor levels that yields one or more observations of the output variable.

Center Point. The center point is a method to check the linearity of the model. A center point is a treatment that sets all quantitative factors at their center values. The result is interpreted through the “curvature” term in the ANOVA table. If the center point’s p-value is greater than the α level, we can exclude the center point from the model and analyze it as a linear model. If the center point’s p-value is less than the α level, the model is non-linear and the equation reported by the software cannot be used as the model. There is no rule specifying how many center points to take per replicate; the decision is based on how difficult the setting is to set up and control.

Sample Size by Minitab. In Minitab, sample size is found under Stat > Power and Sample Size.

Sample Size by Minitab. Specify the number of factors in the experimental design and the number of runs per replicate. Enter the power value, 1-β (more than one value may be entered). The effect is the critical difference that you would like to detect (δ). Enter the process sigma.

Center Point Case. Exercise: DOECPT.mtw. A “0” indicates that these treatments are center-point treatments.

Center Point Case. H0: the model is linear; Ha: the model is non-linear.

Estimated Effects and Coefficients for Weight (coded units)
Term        Effect      Coef   StDev Coef        T       P
Constant             2506.25        12.77   196.29   0.000
A           123.75     61.87        12.77     4.85   0.017
B           -11.25     -5.62        12.77    -0.44   0.689
C           201.25    100.62        12.77     7.88   0.004
D             6.25      3.12        12.77     0.24   0.822
A*B         120.00     60.00        12.77     4.70   0.018
A*C          20.00     10.00        12.77     0.78   0.491
A*D         -17.50     -8.75        12.77    -0.69   0.542
B*C         -22.50    -11.25        12.77    -0.88   0.443
B*D           7.50      3.75        12.77     0.29   0.788
C*D          12.50      6.25        12.77     0.49   0.658
A*B*C        16.25      8.13        12.77     0.64   0.570
A*B*D       -11.25     -5.63        12.77    -0.44   0.689
A*C*D       -18.75     -9.38        12.77    -0.73   0.516
B*C*D         3.75      1.88        12.77     0.15   0.893
A*B*C*D     -22.50    -11.25        12.77    -0.88   0.443
Ct Pt                 -33.75        28.55    -1.18   0.322

The p-value of Ct Pt (center point) is greater than the α level, so we can exclude the center point from the model.

Reduced Model. Referring to the effects table, we can exclude the factors that show no statistical significance by removing those terms from the analysis. From the previous page, we can exclude the 3-way and 4-way interactions, because none of those terms has a p-value less than the α level. We can exclude the 2-way interactions except the term A*B, because the p-value of that term is less than the α level. For the main effects, we cannot remove B even though its p-value is greater than the α level, because we need to keep the term A*B in the analysis.

Center Point Case.

Fractional Factorial Fit: Weight versus A, B, C
Estimated Effects and Coefficients for Weight (coded units)
Term       Effect      Coef   SE Coef        T       P
Constant            2499.50     8.636   289.41   0.000
A          123.75     61.87     9.656     6.41   0.000
B          -11.25     -5.62     9.656    -0.58   0.569
C          201.25    100.62     9.656    10.42   0.000
A*B        120.00     60.00     9.656     6.21   0.000

The final equation for the model is:
Weight = 2499.5 + 61.87A - 5.62B + 100.62C + 60AB
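A small sketch that uses the fitted reduced model above (coded units) to predict the response at a chosen setting of the significant factors; the function name and the chosen levels are illustrative only.

def predict_weight(a, b, c):
    # coded levels: -1 (low) to +1 (high); coefficients from the reduced model above
    return 2499.5 + 61.87 * a - 5.62 * b + 100.62 * c + 60.0 * a * b

print(predict_weight(a=1, b=-1, c=1))   # high A, low B, high C -> ~2607.6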

DOE for Standard Deviations. The basic approach involves taking “n” replicates at each trial setting. The response of interest is the standard deviation (or the variance) of those n values, rather than the mean of those values. There are then three analysis approaches: a normal probability plot of log(s²) or log(s)*, a balanced ANOVA of log(s²) or log(s)*, or F-tests of the s² (not shown in this package). * The log transformation permits a normal-distribution analysis approach.

Standard Deviation Experiment. The following represents the results from two different 2³ experiments, where 24 replicates were run at each trial combination. The implication of this data set is that the standard deviations have already been calculated and that these columns represent the “crunched” data. (Some students may feel that you would have been better off starting from the original data, but it’s worthwhile to emphasize that the class is oriented to learning new techniques and not performing mundane analyses.) File: Sigma DOE.mtw

Std Dev Experiment Analysis Set-Up. After putting this into the proper format as a designed experiment: Stat > DOE > Factorial > Analyze Factorial Design. Under the Graph option / Effects Plots, select Normal; the response is ln(s²).

Normal Probability Plots. Plot all the effects of a 2³ design on a normal probability plot: three main effects (A, B and C), three 2-factor interactions (AB, AC and BC), and one 3-factor interaction (ABC). If no effects are important, all the points should lie approximately on a straight line; significant effects will lie off the line. A single significant effect should be easily detectable; multiple significant effects may make it hard to discern the line.
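A sketch of such an effects plot in Python, assuming the seven effect estimates of a 2³ design are already available; the effect values below are hypothetical placeholders for the ln(s²) effects.

import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

effects = {"A": 0.12, "B": 1.85, "C": -0.21, "AB": 0.05,
           "AC": -0.09, "BC": 0.14, "ABC": 0.02}     # hypothetical ln(s^2) effects

values = np.array(list(effects.values()))
names = np.array(list(effects.keys()))

# probplot returns the theoretical quantiles and the ordered (sorted) effects
(osm, osr), _ = stats.probplot(values, dist="norm")
plt.scatter(osm, osr)
for x, y, name in zip(osm, osr, names[np.argsort(values)]):
    plt.annotate(name, (x, y))
plt.xlabel("Normal quantile")
plt.ylabel("Effect estimate")
plt.show()
# Points that fall well off the straight line (here B) flag significant effects.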

Probability Plot: Experiment 1. Results from Experiment 1 using ln(s²). The plot shows one of the points, corresponding to the B main effect, lying outside the rest of the effects. Minitab does not identify these points unless they are very significant; you need to look at Minitab’s Session Window to identify them.

ANOVA Table: Experiment 1. Results from Experiment 1 using ln(s²).

Analysis of Variance for Expt 1
Source    DF       SS       MS      F       P
A          1   0.0414   0.0414   0.30   0.611
B          1   1.2828   1.2828   9.39   0.037
C          1   0.0996   0.0996   0.73   0.441
Error      4   0.5463   0.1366
Total      7   1.9701

Sample Size Considerations. The sample size computed for experiments involving standard deviations should be based on α and β, as well as the critical ratio that you want to detect, just as it is for hypothesis testing. The Excel program “Sample Sizes.xls” can be used for this purpose. If “m” is the sample size for each level (computed by the program), and the experiment has k treatment combinations, then the number of replicates, n, per treatment combination = 1 + 2(m-1). This sample-size methodology is valid for situations where the factor effects are multiplicative; that is, one level of a variable may increase the standard deviation by 1.3, while one level of another variable may increase the standard deviation by 1.5, so that when those two levels are combined in a treatment, the net effect is (1.3)(1.5) = 1.95. Of course, the sample size is valid only for the F-testing approach. The additive model is much more complex, and the theory is not defined, particularly with regard to appropriate sample size. (Presenter note: the sample size per treatment combination is based on the following. The sample-size program gives the size m; we need df = m-1 degrees of freedom spread across k/2 treatments, so each treatment contributes 2df/k degrees of freedom. Since the standard deviation takes 1 degree of freedom, the sample size per treatment combination is n = 1 + 2df/k = 1 + 2(m-1)/k.)
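A minimal sketch of one way to find the per-level sample size m for detecting a ratio of standard deviations with a one-sided F test. This stands in for the “Sample Sizes.xls” tool mentioned above (that tool’s exact method is not shown in the slides), and the function name, default values and target ratio are all assumptions made for illustration.

from scipy import stats

def n_per_level(sd_ratio, alpha=0.05, power=0.90, n_max=1000):
    """Smallest m per level so a one-sided F test detects sigma1/sigma2 = sd_ratio."""
    lam = sd_ratio ** 2                                  # variance ratio under H1
    for m in range(2, n_max):
        f_crit = stats.f.ppf(1 - alpha, m - 1, m - 1)    # rejection threshold under H0
        achieved = stats.f.sf(f_crit / lam, m - 1, m - 1)  # P(reject | H1) = power at this m
        if achieved >= power:
            return m
    return None

m = n_per_level(sd_ratio=1.5)
print(m, "replicates per level; the slide's rule then gives n = 1 + 2(m-1) per treatment")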

Workshop #7 : Run a DOE to optimize the validated KPIVs to get the desired KPOV

Improve : Improve Phase’s output Which KPIV’s cause mean shifts? Which KPIV’s affect the standard deviation? Levels of the KPIV’s that optimize process performance

Control The Control phase serves to establish the action to ensure that the process is monitored continuously for consistency in quality of the product or service.

Control: Tools To monitor and control the KPIV’s Error Proofing (Poka-Yoke) SPC Control Plan

Control: Poka-Yoke Why Poka-Yoke? Strives for zero defects Leads to Quality Inspection Elimination Respects the intelligence of workers Takes over repetitive tasks/actions that depend on one’s memory Frees an operator’s time and mind to pursue more creative and value added activities

Control: Poka-Yoke Benefit of Poka-Yoke? Enforces operational procedures or sequences Signals or stops a process if an error occurs or a defect is created Eliminates choices leading to incorrect actions Prevents product damage Prevents machine damage Prevents personal injury Eliminates inadvertent mistakes

Control: SPC SPC is the basic tool for observing variation and using statistical signals to monitor and/or improve performance. This tool can be applied to nearly any area. Performance characteristics of equipment Error rates of bookkeeping tasks Dollar figures of gross sales Scrap rates from waste analysis Transit times in material management systems SPC stands for Statistical Process Control. Unfortunately, most companies apply it to finished goods (Y’s) rather than process characteristics (X’s). Until the process inputs become the focus of our effort, the full power of SPC methods to improve quality, increase productivity, and reduce cost cannot be realized.

Types of Control Charts. The quality of a product or process may be assessed by means of: Variables: actual values measured on a continuous scale, e.g. length, weight, strength, resistance, etc. Attributes: discrete data that come from classifying units (accept/reject) or from counting the number of defects on a unit. If the quality characteristic is measurable, monitor its mean value and its variability (range or standard deviation). If the quality characteristic is not measurable, monitor the fraction (or number) of defectives, or monitor the number of defects.

Defectives vs Defects Defective or Nonconforming Unit a unit of product that does not satisfy one or more of the specifications for the product e.g. a scratched media, a cracked casing, a failed PCBA Defect or Nonconformity a specific point at which a specification is not satisfied e.g. a scratch, a crack, a defective IC

Shewhart Control Charts - Overview Walter A Shewhart

Shewhart Control Charts for Variables

Control: SPC. Choosing the correct control chart type (decision tree). First decide the type of data: attributes or variables. For attribute data, decide whether you are counting defects or defectives: for defects, use the c chart when the area of opportunity is constant from sample to sample, otherwise the u chart; for defectives, use the p or np chart when the subgroup size is constant, otherwise the p chart. For variables data, decide between individual measurements and subgroups: for individuals with normally distributed data and a primary interest in sudden shifts in the mean, use X, mR charts; MA, EWMA, or CUSUM charts are more effective in detecting gradual long-term changes. For subgroups, use X-bar, R or X-bar, s charts (the data tend to be normally distributed because of the central limit theorem, and the use of modified control chart rules is okay on the X-bar chart).
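A minimal sketch of the X-bar / R chart branch of that decision tree, computing control limits for subgrouped variables data with the standard Shewhart constants for subgroups of size n = 5 (A2 = 0.577, D3 = 0, D4 = 2.114). The subgroup measurements below are hypothetical.

import numpy as np

A2, D3, D4 = 0.577, 0.0, 2.114          # Shewhart constants for subgroup size n = 5

subgroups = np.array([                   # hypothetical subgroups of 5 measurements each
    [10.1, 9.8, 10.0, 10.3, 9.9],
    [10.2, 10.0, 9.7, 10.1, 10.4],
    [9.9, 10.1, 10.2, 9.8, 10.0],
])

xbar = subgroups.mean(axis=1)                            # subgroup means
r = subgroups.max(axis=1) - subgroups.min(axis=1)        # subgroup ranges
xbarbar, rbar = xbar.mean(), r.mean()                    # grand mean and average range

print("X-bar chart (LCL, CL, UCL):", xbarbar - A2 * rbar, xbarbar, xbarbar + A2 * rbar)
print("R chart     (LCL, CL, UCL):", D3 * rbar, rbar, D4 * rbar)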

Control: Control Phase’s output Y is monitored with suitable tools X is controlled by suitable tools Manage the INPUTS and good OUTPUTS will follow

Breakthrough Summary: Champion, Black Belts, Finance Rep. & Process Owner

Hard Savings Savings which flow to Net Profit Before Income Tax (NPBIT) Can be tracked and reported by the Finance organization Is usually a reduction in labor, material usage, material cost, or overhead Can also be cost of money for reduction in inventory or assets

Finance Guidelines - Savings Definitions Hard Savings Direct Improvement to Company Earnings Baseline is Current Spending Experience Directly Traceable to Project Can be Audited Hard Savings Example Process is Improved, resulting in lower scrap Scrap reduction can be linked directly to the successful completion of the project

Potential Savings. Savings opportunities which have been documented and validated, but require action before actual savings can be realized. An example is capital equipment that has become excess due to increased efficiencies in the process: the savings cannot be realized because we are still paying for the equipment, but it has the potential to generate savings if we could sell it or put it back into use should schedules increase. Some form of management decision or action is generally required to realize the savings.

Finance Guidelines - Savings Definitions Potential Savings Improve Capability of company Resource Potential Savings Example Process is Improved, resulting in reduced manpower requirement Headcount is not reduced or reduction cannot be traced to the project Potential Savings might turn into hard savings if the resource is productively utilized in the future

Identifying Soft Savings Dollars or other benefits exist but they are not directly traceable Projected benefits have a reasonable probability (TBD) that they will occur Some or all of the benefits may occur outside of the normal 12 month tracking window Assessment of the benefit could/should be viewed in terms of strategic value to the company and the amount of baseline shift accomplished

Finance Guidelines - Savings Definitions Soft Savings Benefit Expected from Process Improvement Benefit cannot be directly traced to Successful Completion of Project Benefit cannot be quantified Soft Savings Example Process is Improved; decreasing cycle time