
1 DMAIC : The Breakthrough Strategy

2 What is Six Sigma? It is a business process that allows companies to drastically improve their bottom line by designing and monitoring everyday business activities in ways that minimize waste and resources while increasing customer satisfaction. Mikel Harry, Richard Schroeder

3 What Six Sigma Can Do For Your Company?
(Chart comparing sigma levels: average company vs. MAIC vs. DFSS.)

4 What Six Sigma Can Do For Your Company?

5 The Cost of Quality (COQ)
Traditional Cost of Poor Quality (COQ), 5-8% of sales: inspection, warranty, rejects, rework. Less obvious Cost of Quality, 15-20% of sales: installation problems, lost sales, late deliveries, paperwork sent to the wrong place, long production lead times, expediting costs, excessive purchase orders, long setup times, the time value of money, ordering more raw material than necessary, customer dissatisfaction, safety issues, freight charges, telephone charges, incorrect data, a different way of doing business, lost opportunity. Group activity: create the flow that follows after rejected units have occurred in the process. Note: percentages are of sales.

6 DMAIC : The Yellow Brick Road
Define, Measure, Analyze, Improve, Control. The core phases: Define, Measure and Analyze (Characterization) plus Improve and Control (Optimization) together form the Breakthrough Strategy.

7 Breakthrough & People: Champion, Black Belts, Finance Rep. & Process Owner

8 Define What is my biggest problem? Customer complaints, low performance metrics, too much time consumed. What needs to improve? Big budget items, poor performance. Where are there opportunities to improve? How do I affect corporate and business group objectives? What's in my budget?

9 Define : The Project Projects tie DIRECTLY to department and/or business unit objectives. Projects are suitable in scope. BBs are "fit" to the project. Champions own and support project selection.

10 Define : The Defect Rework, high defect rates, customer complaints, excessive test and inspection, constrained capacity with high anticipated capital expenditures, bottlenecks, low yields, excessive cycle time, excessive machine down time, high maintenance costs, high consumables usage.

11 Define : The Chronic Problem
Special cause (a problem that appears only once in a while) versus the chronic, deeply embedded problem. (Chart: reject rate over time, with a chronic baseline punctuated by special-cause spikes; the Breakthrough Strategy drives the chronic level down to the optimum level.)

12 Define : The Persistent Problem
Is the process in control?

13 Define : Refine The Defect
(Pareto chart of defect categories a1 through a7.) Refined Defect = a1

14 MAIC --> Identify Leveraged KPIV’s
Tools and outputs by phase. Measure: Process Map (input variables); C&E Matrix and FMEA (potential key process input variables, KPIVs); Gage R&R and Capability; Multi-Vari Studies and Correlations, narrowing to 8-10 KPIVs. Analyze: t-test, ANOM, ANOVA, screening DOE's, narrowing to 4-8 KPIVs. Improve: DOE's and RSM, yielding 3-6 optimized KPIVs. Control: quality systems, SPC and control plans for the key leverage KPIVs.

15 Measure The Measure phase serves to validate the problem, translate the practical problem into a statistical problem, and begin the search for root causes

16 Measure : Tools To validate the problem: Measurement System Analysis. To translate the practical problem into a statistical problem: Process Capability Analysis. To search for the root cause: Process Map, Cause and Effect Analysis, Failure Mode and Effect Analysis.

17 Workshop #1: Our products are the distances resulting from the catapult. Product specs are +/- 4 cm for both the X and Y axes. Shoot the ball for at least 30 trials, then collect the yield. Prepare to report your results.

18 Measure : Measurement System Analysis
Objectives: Validate the Measurement / Inspection System Quantify the effect of the Measurement System variability on the process variability

19 Measure : Measurement System Analysis
Attribute GR&R : Purpose To determine if inspectors across all shifts, machines, lines, etc. use the same criteria to discriminate "good" from "bad". To quantify the ability of inspectors or gages to repeat their inspection decisions accurately. To identify how well inspectors/gages conform to a known master (possibly defined by the customer), which includes how often operators decide to over-reject and how often they decide to over-accept.

20 Measure : Measurement System Analysis

21 Measure : Measurement System Analysis
% Appraiser Score % REPEATABILITY OF OPERATOR # 1 = 16/20 = 80% % REPEATABILITY OF OPERATOR # 2 = 13/20 = 65% % REPEATABILITY OF OPERATOR # 3 = 20/20 = 100%

22 Measure : Measurement System Analysis
% Attribute Score % UNBIAS OF OPERATOR # 1 = 12/20 = 60% % UNBIAS OF OPERATOR # 2 = 12/20 = 60% % UNBIAS OF OPERATOR # 3 = 17/20 = 85% % Screen Effective Score % REPEATABILITY OF INSPECTION = 11/20 = 55% % Attribute Screen Effective Score % UNBIAS OF INSPECTION = 10/20 = 50%

23 Measure : Measurement System Analysis
Variable GR&R : Purpose A study of your measurement system will reveal the relative amount of variation in your data that results from measurement system error. It is also a great tool for comparing two or more measurement devices or two or more operators. MSA should be used as part of the criteria for accepting a new piece of measurement equipment into manufacturing. It should be the basis for evaluating a measurement system that is suspected of being deficient.

24 Measure : Measurement System Analysis

25 Measure : Measurement System Analysis
Resolution? “Precision” (R&R) Calibration? Stability? Linearity? Bias?

26 Measurement System Metrics
Measurement System Variance: σ²meas = σ²repeat + σ²reprod. To determine whether the measurement system is "good" or "bad" for a certain application, you need to compare the measurement variation to the product spec or the process variation. Comparing σ²meas with tolerance: Precision-to-Tolerance Ratio (P/T). Comparing σ²meas with total observed process variation (P/TV): % Repeatability and Reproducibility (%R&R), Discrimination Index.
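As a sketch of the two ratios just named, using the 5.15 x SD study-variation convention that appears in the Gage R&R output later in this deck:

    P/T = \frac{5.15\,\hat{\sigma}_{meas}}{USL - LSL},
    \qquad
    \%R\&R = 100 \times \frac{\hat{\sigma}_{meas}}{\hat{\sigma}_{total}},
    \qquad
    \hat{\sigma}^2_{total} = \hat{\sigma}^2_{meas} + \hat{\sigma}^2_{part}

A common rule of thumb treats ratios below about 10% as acceptable, 10-30% as marginal, and above 30% as unacceptable.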

27 Uses of P/T and P/TV (%R&R)
The P/T ratio is the most common estimate of measurement system precision Evaluates how well the measurement system can perform with respect to the specifications The appropriate P/T ratio is strongly dependent on the process capability. If Cpk is not adequate, the P/T ratio may give a false sense of security. The P/TV (%R&R) is the best measure for Analysis Estimates how well the measurement system performs with respect to the overall process variation %R&R is the best estimate when performing process improvement studies. Care must be taken to use samples representing full process range.

28 Number of Distinct Categories
Automotive Industry Action Group (AIAG) recommendations. Categories and remarks: < 2, the system cannot discern one part from another; = 2, the system can only divide the data into two groups, e.g. high and low; = 3, the system can only divide the data into three groups, e.g. low, middle and high; ≥ 4, the system is acceptable.
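The number of distinct categories in this table is commonly computed from the Gage R&R variance components; a sketch of the usual formula (truncated to an integer), with the sigmas as defined on the previous slides:

    ndc = \left\lfloor \sqrt{2} \times \frac{\hat{\sigma}_{part}}{\hat{\sigma}_{meas}} \right\rfloor
        \approx \left\lfloor 1.41 \times \frac{\hat{\sigma}_{part}}{\hat{\sigma}_{meas}} \right\rfloor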

29 Measure : Measurement System Analysis
Variable GR&R : Decision Criterion Note : Stability is analyzed by control chart

30 Enter the data and tolerance information into Minitab.
Example: Minitab Enter the data and tolerance information into Minitab. Stat > Quality Tools > Gage R&R Study (Crossed). Enter Gage Info and Options (see next page). File: Gageaiag.mtw. The ANOVA method is preferred.

31 Enter the data and tolerance information into Minitab.
Stat > Quality Tools > Gage R&R Study Gage Info (see below) & Options

32 Gage R&R Output

33 Gage R&R Output

34 Gage R&R, Variation Components
The session output breaks the total variation into variance due to the measurement system (split into repeatability and reproducibility), variance due to the parts, and the total variance, with the standard deviation for each component. Columns: Source, VarComp, %Contribution (of VarComp); then Source, StdDev (SD), Study Var (5.15*SD), %Study Var (%SV), %Tolerance (SV/Toler). Rows: Total Gage R&R, Repeatability, Reproducibility, Operator, Operator*PartID, Part-To-Part, Total Variation.

35 Gage R&R, Results
The same output table (Source, VarComp, %Contribution; StdDev, Study Var (5.15*SD), %Study Var, %Tolerance for Total Gage R&R, Repeatability, Reproducibility, Operator, Operator*PartID, Part-To-Part, Total Variation). Question: What is our conclusion about the measurement system?

36 Measure : Process Capability Analysis
Process capability is a measure of how well the process is currently behaving with respect to the output specification. Process capability is determined by the total variation that comes from common causes: the minimum variation that can be achieved after all special causes have been eliminated. Thus, capability represents the performance of the process itself, as demonstrated when the process is being operated in a state of statistical control.

37 Measure : Process Capability Analysis
Translate the practical problem into a statistical problem. Characterization of the distribution against the LSL and USL: variation too large, off-target, or outliers.

38 Measure : Process Capability Analysis
Two measures of process capability: Process Potential (Cp) and Process Performance (Cpu, Cpl, Cpk, Cpm).

39 Measure : Process Capability Analysis
Process Potential

40 Measure : Process Capability Analysis
The Cp index compares the allowable spread (USL - LSL) against the process spread (6σ). It fails to take into account whether the process is centered between the specification limits. (Illustrations: process centered vs. process not centered.)
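For reference, the process potential index described above is conventionally written as:

    C_p = \frac{USL - LSL}{6\hat{\sigma}}

so that a wider tolerance or a smaller process standard deviation both increase Cp.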

41 Measure : Process Capability Analysis
Process Performance The Cpk index measures the scaled distance between the process mean and the nearest specification limit.
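The conventional form of the index, combining the one-sided measures listed on slide 38:

    C_{pk} = \min(C_{pu}, C_{pl}) = \min\!\left(\frac{USL - \mu}{3\hat{\sigma}},\; \frac{\mu - LSL}{3\hat{\sigma}}\right)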

42 Measure : Process Capability Analysis
There are two kinds of variation: short-term variation and long-term variation.

43 Measure : Process Capability Analysis
Short Term vs. Long Term (Cp vs. Pp, or Cpk vs. Ppk)
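Pp and Ppk use the same formulas as Cp and Cpk but are computed from the overall (long-term) standard deviation rather than the within-subgroup (short-term) estimate; as a sketch:

    P_p = \frac{USL - LSL}{6 s_{overall}},
    \qquad
    P_{pk} = \min\!\left(\frac{USL - \bar{x}}{3 s_{overall}},\; \frac{\bar{x} - LSL}{3 s_{overall}}\right)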

44 Measure : Process Capability Analysis
Process Potential vs. Process Performance (Cp vs. Cpk). 1. If Cp > 1.5, the standard deviation is suitable. 2. If Cp is not equal to Cpk, the process mean is off-center.

45 Workshop #3 Design the appropriate check sheet. Define the subgroup. Shoot the ball for at least 30 trials per subgroup. Perform a process capability analysis and translate Cp, Cpk, Pp and Ppk into the statistical problem. Report your results.

46 Measure : Process Map A process map is a graphical representation of the flow of an "as-is" process. It contains all the major steps and decision points in a process. It helps us understand the process better, identify the critical or problem areas, and identify where improvement can be made.

47 Measure : Process Map OPERATION
All steps in the process where the object undergoes a change in form or condition. TRANSPORTATION: all steps in a process where the object moves from one location to another, outside of the operation. STORAGE: all steps in the process where the object remains at rest, in a semi-permanent or storage condition. DELAY: all incidences where the object stops or waits on an operation, transportation, or inspection. INSPECTION: all steps in the process where the objects are checked for completeness or quality, outside of the operation. DECISION

48 Measure : Process Map (Example map with a Good/Bad decision, scrap, and warehouse.) How many operational steps are there? How many decision points? How many measurement/inspection points? How many re-work loops? How many control points?

49 Measure : Process Map High Level Process Map
For each major step, list the KPIVs and KPOVs. These KPIVs and KPOVs can then be used as inputs to the Cause and Effect Matrix.

50 Workshop #2 : Create the process map and report the process steps and the KPIVs that may be the cause

51 Measure : Cause and Effect Analysis
A visual tool used to identify, explore and graphically display, in increasing detail, all the possible causes related to a problem or condition in order to discover root causes. To discover the most probable causes for further analysis. To visualize possible relationships between causes for any problem, current or future. To pinpoint conditions causing customer complaints, process errors or non-conforming products. To provide focus for discussion. To aid in the development of technical or other standards or process improvements.

52 Measure : Cause and Effect Matrix
There are two types of cause-and-effect tools. 1. Fishbone Diagram: the traditional approach to brainstorming and diagramming cause-effect relationships; a good tool when there is one primary effect being analyzed. 2. Cause-Effect Matrix: a diagram in table form showing the direct relationships between outputs (Y's) and inputs (X's).

53 Measure : Cause and Effect Matrix
Fishbone Diagram: branches for Methods, Materials, Machinery and Manpower feed into the problem / desired improvement, and each cause is labeled C/N/X. C = Control Factor, N = Noise Factor, X = Factor for DOE (chosen later).

54 Measure : Cause and Effect Matrix

55 Workshop #4: Team brainstorming to create the fishbone diagram

56 Measure : Failure Mode and Effect Analysis
FMEA is a systematic approach used to examine potential failures and prevent their occurrence. It enhances an engineer’s ability to predict problems and provides a system of ranking, or prioritization, so the most likely failure modes can be addressed.

57 Measure : Failure Mode and Effect Analysis

58 Measure : Failure Mode and Effect Analysis RPN = S x O x D: Severity x Occurrence (likelihood of occurrence) x Detection (ability to detect).
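With each of the three factors typically scored from 1 to 10, a worked example using illustrative (made-up) scores:

    RPN = S \times O \times D = 8 \times 5 \times 4 = 160

out of a possible 10 x 10 x 10 = 1000; higher RPNs are addressed first.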

59 Measure : Failure Mode and Effect Analysis
The vital few (the small number of important causes) versus the trivial many (the large number of minor ones).

60 Workshop # 5 : Team Brainstorming to create FMEA

61 Measure : Measure Phase’s Output
Check and fix the measurement system. Determine "where" you are: rolled throughput yield, DPPM, process capability, entitlement. Identify potential KPIVs: process mapping / cause & effect / FMEA. Determine their likely impact.

62 Analyze The Analyze phase serves to validate the KPIVs, and to study the statistical relationship between KPIVs and KPOVs

63 Analyze : Tools To validate the KPIVs: hypothesis tests (2-sample t-test, analysis of variance, etc.). To reveal the relationship between KPIVs and KPOVs: regression analysis, correlation.

64 Analyze : Hypothesis Testing
The Null Hypothesis Statement generally assumed to be true unless sufficient evidence is found to the contrary Often assumed to be the status quo, or the preferred outcome. However, it sometimes represents a state you strongly want to disprove. Designated as H0

65 The Alternative Hypothesis
Analyze : Hypothesis Testing The Alternative Hypothesis Statement generally held to be true if the null hypothesis is rejected Can be based on a specific engineering difference in a characteristic value that one desires to detect Designated as HA

66 Analyze : Hypothesis Testing
NULL HYPOTHESIS: Nothing has changed. For tests of the process mean: H0: μ = μ0. For tests of the process variance: H0: σ² = σ0². ALTERNATE HYPOTHESIS: Change has occurred. Inequality: Ha: μ ≠ μ0; Ha: σ² ≠ σ0². New > old: Ha: μ > μ0; Ha: σ² > σ0². New < old: Ha: μ < μ0; Ha: σ² < σ0².

67 Analyze : Hypothesis Testing

68 Analyze : Hypothesis Testing
See Hypothesis Testing Roadmap

69 Example: Single Mean Compared to Target
The example will include 10 measurements of a random sample. The question is: Is the mean of the sample representative of a target value of 54? The hypotheses: Ho: μ = 54, Ha: μ ≠ 54. Ho can be rejected if p < .05

70 Single Mean to a Target - Using Minitab
Stat > Basic Statistics > 1-Sample t. One-Sample T: C1. Test of mu = 54 vs mu not = 54; the output lists Variable, N, Mean, StDev, SE Mean, the 95% CI, T and P. The P-value is greater than 5%, so we say the sample mean is representative of 54.
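If Minitab is not at hand, the same one-sample t-test can be sketched in Python with scipy; the ten measurements below are made-up placeholders, not the values from the slide.

    from scipy import stats

    # Hypothetical sample of 10 measurements (placeholders, not the slide's data)
    sample = [53.8, 54.6, 55.1, 53.2, 54.9, 54.3, 53.7, 55.4, 54.0, 53.5]

    # Two-sided one-sample t-test of Ho: mu = 54 vs Ha: mu != 54
    t_stat, p_value = stats.ttest_1samp(sample, popmean=54)

    # Fail to reject Ho when p >= 0.05, matching the conclusion on the slide
    print(f"t = {t_stat:.3f}, p = {p_value:.3f}")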

71 Our Conclusion Statement
Because the p value was greater than our critical confidence level (.05 in this case), or similarly, because the confidence interval on the mean contained our target value, we can make the following statement: “We have insufficient evidence to reject the null hypothesis.” Does this say that the null hypothesis is true (that the true population mean = 54)? No! However, we usually then choose to operate under the assumption that Ho is true.

72 Single Std Dev Compared to Standard
A study was performed in order to evaluate the effectiveness of two devices for improving the efficiency of gas home-heating systems. Energy consumption in houses was measured after each of the two devices (Damper = 1 and Damper = 2) was installed. The energy consumption data (BTU.In) are stacked in one column with a grouping column (Damper) containing identifiers or subscripts to denote the population. You are interested in comparing the variances of the two populations to the current standard (σ = 2.4). © All Rights Reserved Minitab, Inc.

73 Example: Single Std Dev Compared to Standard
(Data: Furnace.mtw, use "BTU_in") Note: Minitab does not provide an individual χ² test for standard deviations. Instead, it is necessary to look at the confidence interval on the standard deviation and determine if the CI contains the claimed value.
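Since the test is not built in, the χ²-based confidence interval can be computed directly; a sketch in Python, assuming roughly normal data (the function name is ours, not Minitab's):

    import numpy as np
    from scipy import stats

    def stdev_confidence_interval(data, alpha=0.05):
        """Two-sided (1 - alpha) confidence interval for a population
        standard deviation, assuming approximately normal data."""
        data = np.asarray(data, dtype=float)
        n = data.size
        s2 = data.var(ddof=1)  # sample variance
        lower = np.sqrt((n - 1) * s2 / stats.chi2.ppf(1 - alpha / 2, n - 1))
        upper = np.sqrt((n - 1) * s2 / stats.chi2.ppf(alpha / 2, n - 1))
        return lower, upper

    # If the claimed value (e.g. sigma = 2.4) falls outside the interval,
    # reject Ho: sigma = 2.4 at the chosen alpha.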

74 Example: Single Standard Deviation
Stat > Basic Statistics > Display Descriptive Statistics

75 Running the Statistics….

76 Running the Statistics….

77 Two Parameter Testing
Means: 2-sample t-test. Sigmas: homogeneity of variance. Medians: nonparametrics. Failure rates: 2 proportions. Step 1: State the practical problem. Step 2: Are the data normally distributed? Step 3: State the null hypothesis. For σ: Ho: σpop1 = σpop2. For μ: Ho: μpop1 = μpop2 (normal data); Ho: M1 = M2 (non-normal data). State the alternative hypothesis: Ha: σpop1 ≠ σpop2; Ha: μpop1 ≠ μpop2; Ha: M1 ≠ M2 (non-normal data).

78 Two Parameter Testing (Cont.)
Step 4: Determine the appropriate test statistic: F (calc) to test Ho: σpop1 = σpop2; t (calc) to test Ho: μpop1 = μpop2 (normal data). Step 5: Find the critical value from the appropriate distribution and alpha. Step 6: If the calculated statistic > the critical statistic, then REJECT Ho; or, if P-Value < α (P-Value < Alpha), then REJECT Ho. Step 7: Translate the statistical conclusion into process terms.

79 Comparing Two Independent Sample Means
The example will make a comparison between two group means. Data in Furnace.mtw (BTU_in). Are the means of the two groups the same? The hypotheses are: Ho: μ1 = μ2, Ha: μ1 ≠ μ2. Reject Ho if t > tα/2 or t < -tα/2 for n1 + n2 - 2 degrees of freedom

80 t-test Using Stacked Data
Stat >Basic Statistics > 2-Sample t

81 t-test Using Stacked Data
Descriptive Statistics Graph: BTU.In by Damper. Two-Sample T-Test and CI: BTU.In, Damper. Two-sample T for BTU.In; the output lists N, Mean, StDev and SE Mean for each Damper group. Difference = mu (1) - mu (2). 95% CI for difference: (-1.464, 0.993). T-Test of difference = 0 (vs not =), DF = 80; the T-value and P-value are reported in the session window.
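The equivalent two-sample t-test can be sketched in Python; the two lists below are placeholders standing in for the BTU.In values split by Damper:

    from scipy import stats

    # Placeholder values; in practice these come from Furnace.mtw, split by Damper
    btu_damper1 = [9.5, 10.1, 8.7, 11.2, 9.9]
    btu_damper2 = [10.4, 9.8, 11.0, 10.7, 9.6]

    # Two-sided test of Ho: mu1 = mu2; equal_var=True mirrors the pooled t-test
    t_stat, p_value = stats.ttest_ind(btu_damper1, btu_damper2, equal_var=True)
    print(f"t = {t_stat:.3f}, p = {p_value:.3f}")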

82 2 variances test Stat >Basic Statistics > 2 variances

83 2 variances test
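A sketch of the F-test that underlies the 2 variances procedure, in Python; scipy has no single call for this test, so the ratio and two-sided p-value are computed directly (the function name is ours):

    import numpy as np
    from scipy import stats

    def f_test_two_variances(x, y):
        """Two-sided F-test of Ho: var(x) = var(y) for roughly normal data."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        f = x.var(ddof=1) / y.var(ddof=1)
        dfx, dfy = x.size - 1, y.size - 1
        # Convert a one-tailed F probability into a two-sided p-value
        p_one_sided = stats.f.sf(f, dfx, dfy) if f > 1 else stats.f.cdf(f, dfx, dfy)
        return f, min(1.0, 2 * p_one_sided)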

84 Characteristics About Multiple Parameter Testing
One type of analysis is called Analysis of Variance (ANOVA). It allows comparison of two or more process means. We can test statistically whether these samples represent a single population, or if the means are different. The OUTPUT variable (KPOV) is generally measured on a continuous scale (yield, temperature, volts, % impurities, etc.). The INPUT variables (KPIVs) are known as FACTORS. In ANOVA, the levels of the FACTORS are treated as categorical in nature even though they may not be. When there is only one factor, the type of analysis used is called "One-Way ANOVA." For 2 factors, the analysis is called "Two-Way ANOVA," and "n" factors entail "n-Way ANOVA."

85 General Method Step 1: State the Practical Problem
Step 2: Do the assumptions for the model hold? Response means are independent and normally distributed Population variances are equal across all levels of the factor Run a homogeneity of variance analysis--by factor level—first Step 3: State the hypothesis Step 4: Construct the ANOVA Table Step 5: Do the assumptions for the errors hold (residual analysis)? Errors of the model are independent and normally distributed Step 6: Interpret the P-Value (or the F-statistic) for the factor effect P-Value < 0.05, then REJECT Ho Otherwise, operate as if the null hypothesis is true Step 7: Translate the statistical conclusion into process terms

86 Step 2: Do the Assumptions for the Model Hold?
Are the means independent and normally distributed? Randomize runs during the experiment. Ensure adequate sample sizes. Run a normality test on the data by level (Minitab: Stat > Basic Stats > Normality Test). Population variances are equal for each factor level (run a homogeneity of variance analysis first). For σ: Ho: σpop1 = σpop2 = σpop3 = σpop4 = ...; Ha: at least two are different.

87 Step 3: State the Hypotheses
Mathematical hypotheses: Ho: τ's = 0; Ha: τk ≠ 0. Conventional hypotheses: Ho: μ1 = μ2 = μ3 = μ4; Ha: at least one μk is different.

88 Step 4: Construct the ANOVA Table
One-Way Analysis of Variance. Minitab output: Analysis of Variance for Time, with Source (Operator, Error, Total), DF, SS, MS, F and P. General table: Between: SS = SStreatment, df = g - 1, MStreatment = SStreatment / (g - 1), test statistic F = MStreatment / MSerror. Within: SS = SSerror, df = N - g, MSerror = SSerror / (N - g). Total: SS = SStotal, df = N - 1. Where g = number of subgroups and n = number of readings per subgroup. What's important is the probability that the operator variation in means could have happened by chance.
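The same table can be produced outside Minitab; a minimal one-way ANOVA sketch in Python, with made-up cycle times per operator:

    from scipy import stats

    # Placeholder response values, one list per operator (subgroup)
    op1 = [21.3, 22.1, 20.8, 21.7]
    op2 = [23.0, 22.6, 23.4, 22.9]
    op3 = [21.9, 22.2, 21.5, 22.0]

    # F = MS(treatment) / MS(error); p is the chance such operator-to-operator
    # differences in means could have happened by chance
    f_stat, p_value = stats.f_oneway(op1, op2, op3)
    print(f"F = {f_stat:.2f}, p = {p_value:.4f}")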

89 Steps 5 - 7 Residual Analysis
Step 5:Do the assumptions for the errors hold (residual analysis) ? Errors of the model are independent and normally distributed Randomize runs during the experiment Ensure adequate sample size Plot histogram of error terms Run a normality check on error terms Plot error against run order (I-Chart) Plot error against model fit Step 6:Interpret the P-Value (or the F-statistic) for the factor effect P-Value < 0.05, then REJECT Ho. Otherwise, operate as if the null hypothesis is true. Step 7:Translate the statistical conclusion into process terms Residual Analysis

90 Example, Experimental Setup
Twenty-four animals receive one of four diets. The type of diet is the KPIV (factor of interest); blood coagulation time is the KPOV. During the experiment, diets were assigned randomly to animals, and blood samples were taken and tested in random order. Why? (Data table: coagulation times for Diet A, Diet B, Diet C and Diet D.)

91 Example, Step 2 Do the assumptions for the model hold?
Population by level are normally distributed Won’t show significance for small # of samples Variances are equal across all levels of the factor Stat > ANOVA > Test for Equal Variances Ho: _____________ Ha :_____________

92 Example, Step 3 State the Null and Alternate Hypotheses
Ho: μdiet1 = μdiet2 = μdiet3 = μdiet4, or Ho: τ's = 0. Ha: at least two diets differ from each other, or Ha: τ's ≠ 0. Interpretation of the null hypothesis: the average blood coagulation time of each diet is the same; in other words, what you eat will NOT affect your blood coagulation time. Interpretation of the alternate hypothesis: at least one diet will affect the average blood coagulation time differently than another; what type of diet you keep does affect blood coagulation time.

93 Example, Step 4 Construct the ANOVA Table (using Minitab):
Stat > ANOVA > One-way ... Hint: Store Residuals & Fits for later use

94 Example, Step 4 One-way Analysis of Variance
Analysis of Variance for Coag_Tim: Source (Diet_Num, Error, Total), DF, SS, MS, F, P, followed by individual 95% CIs for each level mean based on the pooled StDev (Level, N, Mean, StDev) and the pooled StDev itself.

95 Example, Step 5 Do the assumptions for the errors hold?
Best way to check is through a “residual analysis” Stat > Regression > Residual Plots ... Determine if residuals are normally distributed Ascertain that the histogram of the residuals looks normal Make sure there are no trends in the residuals (it’s often best to graph these as a function of the time order in which the data was taken) The residuals should be evenly distributed about their expected (fitted) values

96 Example, Step 5 Individual residuals - trends? Or outliers?
How normal are the residuals? The run-order plot investigates how the residuals behave across the experiment; it is probably the most important graph, since it will signal that something outside the experiment may be operating, and nonrandom patterns are warnings. The residuals-versus-fits plot investigates whether the mathematical model fits equally well for low to high values of the fits. Histogram: bell curve? (Ignore for small data sets, < 30.) Are the residuals random about zero without trends?

97 Example, Step 6 Interpret the P-Value (or the F-statistic) for the factor effect. Assuming the residual assumptions are satisfied: if P-Value < 0.05, then REJECT Ho; otherwise, operate as if the null hypothesis is true. If P is less than 5%, then at least one group mean is different. In this case, we reject the hypothesis that all the group means are equal: at least one diet mean is different. An F-test this large could happen by chance, but less than one time out of 2000; this would be like getting 11 heads in a row from a fair coin. (ANOVA table for Coag_Tim: Source, DF, SS, MS, F, P for Diet_Num, Error, Total.) The F-statistic is close to 1.00 when group means are similar and group sizes are equal; in this case the F-statistic is much greater.

98 Workshop #6: Run hypothesis tests to validate your KPIVs from the Measure phase

99 Analyze : Analyze Phase’s output
Refine: KPOV = F(KPIV’s) Which KPIV’s cause mean shifts? Which KPIV’s affect the standard deviation? Which KPIV’s affect yield or proportion? How did KPIV’s relate to KPOV’s?

100 Improve The Improve phase serves to optimize the KPIV’s and study the possible actions or ideas to achieve the goal

101 Improve : Tools To optimize KPIV’s in order to achieve the goal Design of Experiment Evolutionary Operation Response Surface Methodology

102 Improve : Design Of Experiment
Factorial Experiments The GOAL is to obtain a mathematical relationship which characterizes: Y = F (X1, X2, X3, ...). Mathematical relationships allow us to identify the most important or critical factors in any experiment by calculating the effect of each. Factorial Experiments allow investigation of multiple factors at multiple levels. Factorial Experiments provide insight into potential “interactions” between factors. This is referred to as factorial efficiency.

103 Improve : Design Of Experiment
Factors: A factor (or input) is one of the controlled or uncontrolled variables whose influence on a response (output) is being studied in the experiment. A factor may be quantitative, e.g., temperature in degrees, time in seconds. A factor may also be qualitative, e.g., different machines, different operator, clean or not clean.

104 Improve : Design Of Experiment
Level: The "levels" of a factor are the values of the factor being studied in the experiment. For quantitative factors, each chosen value becomes a level, e.g., if the experiment is to be conducted at two different temperatures, then the factor of temperature has two "levels". Qualitative factors can have levels as well, e.g. for cleanliness, clean vs. not clean; for a group of machines, machine identity. "Coded" levels are often used, e.g. +1 to indicate the "high level" and -1 to indicate the "low level". Coding can be useful in both preparation and analysis of the experiment

105 Improve : Design Of Experiment
k1 x k2 x k3 ... Factorial: a description of the basic design. The number of "k's" is the number of factors. The value of each "k" is the number of levels of interest for that factor. Example: a 2 x 3 x 3 design indicates three input variables; one input has two levels and the other two each have three levels. Test Run (Experimental Run): a single combination of factor levels that yields one or more observations of the output variable.
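As an illustration of the k1 x k2 x k3 idea, the run list for a 2 x 3 x 3 design can be enumerated directly; the factor names and levels below are made up:

    from itertools import product

    # A 2 x 3 x 3 factorial: one two-level factor and two three-level factors
    temperature = [-1, +1]                 # coded low/high
    machine = ["M1", "M2", "M3"]           # qualitative, three levels
    cleanliness = ["low", "mid", "high"]   # qualitative, three levels

    runs = list(product(temperature, machine, cleanliness))
    print(len(runs))  # 2 * 3 * 3 = 18 test runs per replicate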

106 Center Point
The method used to check the linearity of the model is called the center point. A center point is a treatment that sets every quantitative factor at the center of its range. The result is interpreted through the "curvature" term in the ANOVA table. If the center point's P-value is greater than the α level, we can do the analysis with the center point excluded from the model (linear model). If the center point's P-value is less than the α level, we cannot use the equation from the software output as the model (non-linear). There is no rule specifying how many center points to take per replicate; the decision is based on how difficult the settings are to set and control.

107 Sample Size by Minitab In Minitab, sample size is found under Stat > Power and Sample Size.

108 Sample Size By Minitab Specify the number of factors in the experimental design.
Specify the number of runs per replicate. Enter the power value, 1-β (more than one value may be entered). The effect is the critical difference that you would like to detect (δ). Enter the process sigma.
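Minitab's dialog solves for whichever of sample size, power or detectable effect is left blank. A rough Python analogue for a single two-level factor, using statsmodels (the numeric values are illustrative):

    from statsmodels.stats.power import TTestIndPower

    delta = 2.0   # critical difference to detect, in the units of the response
    sigma = 1.5   # process sigma
    effect_size = delta / sigma

    # Runs needed per level for alpha = 0.05 and power (1 - beta) = 0.90
    n_per_level = TTestIndPower().solve_power(effect_size=effect_size,
                                              alpha=0.05, power=0.90,
                                              alternative='two-sided')
    print(round(n_per_level))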

109 Center Point case Exercise : DOECPT.mtw
"0" indicates that these treatments are center-point treatments.

110 Center Point Case H0: the model is linear. Ha: the model is non-linear.
Estimated Effects and Coefficients for Weight (coded units): Term, Effect, Coef, StDev Coef, T and P for Constant, A, B, C, D, the two-, three- and four-way interactions, and Ct Pt. The P-value of Ct Pt (the center point) is greater than the α level, so we can exclude the center point from the model.

111 Reduced Model Referring to the effects table, we can exclude factors that show no statistical significance by removing those terms from the analysis. From the previous page, we can exclude the three-way and four-way interactions because none of those terms has a P-value below the α level. We can exclude the two-way interactions except the term A*B, because the P-value of that term is below the α level. Among the main effects, we cannot remove B even though its P-value is greater than the α level, because we need to keep the term A*B in the analysis.

112 Center Point Case
Fractional Factorial Fit: Weight versus A, B, C. Estimated Effects and Coefficients for Weight (coded units): Term, Effect, Coef, SE Coef, T and P for Constant, A, B, C and A*B. The final equation we get for the model has the form Weight = (constant) + (coef)A - 5.62B + (coef)C + 60AB, with the remaining coefficients read from the Minitab output.

113 DOE for Standard Deviations
The basic approach involves taking "n" replicates at each trial setting. The response of interest is the standard deviation (or the variance) of those n values, rather than the mean of those values. There are then three analysis approaches: Normal probability plot of log(s²) or log(s)*; balanced ANOVA of log(s²) or log(s)*; F tests of the s² (not shown in this package). * The log transformation permits a normal-distribution analysis approach.
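A sketch of the response calculation just described: for each trial setting, the n replicate values are collapsed into ln(s²) before the factorial analysis (the replicate values shown are placeholders):

    import numpy as np

    # Replicate measurements for each trial setting of the design (made-up data)
    replicates_by_trial = [
        [4.1, 3.9, 4.4, 4.0],
        [5.2, 5.8, 4.9, 5.5],
        [3.7, 4.2, 3.9, 4.1],
    ]

    # ln(s^2) per trial; the log transform lets normal-theory DOE tools be used
    responses = [np.log(np.var(trial, ddof=1)) for trial in replicates_by_trial]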

114 Standard Deviation Experiment
The following represents the results from 2 different 2³ experiments, where 24 replicates were run at each trial combination. The implication of this data set is that the standard deviations have already been calculated and that these columns represent the "crunched" data. Some students may feel that you would have been better off starting from the original data, but it's worthwhile to emphasize that the class is oriented to learning new techniques and not performing mundane analyses. File: Sigma DOE.mtw *

115 Std Dev Experiment Analysis Set Up
After putting this into the proper format as a designed experiment: Stat > DOE > Factorial > Analyze Factorial Design. Under the Graph option, choose Effects Plots: Normal, with the response ln(s²).

116 Normal Probability Plots
Plot all the effects of a 2³ on a normal probability plot: three main effects (A, B and C), three 2-factor interactions (AB, AC and BC), and one 3-factor interaction (ABC). If no effects are important, all the points should lie approximately on a straight line. Significant effects will lie off the line. Single significant effects should be easily detectable; multiple significant effects may make it hard to discern the line.

117 Probability Plot: Experiment 1
Results from Experiment 1 using ln(s²). The plot shows one of the points, corresponding to the B main effect, outside of the rest of the effects. Minitab does not identify these points unless they are very significant; you need to look at Minitab's Session Window to identify them.

118 ANOVA Table: Experiment 1
Results from Experiment 1 using ln(s²). Analysis of Variance for Expt 1: Source (A, B, C, Error, Total), DF, SS, MS, F, P.

119 Sample Size Considerations
The sample size computed for experiments involving standard deviations should be based on α and β, as well as the critical ratio that you want to detect, just as it is for hypothesis testing. The Excel program "Sample Sizes.xls" can be used for this purpose. If "m" is the sample size for each level (computed by the program), and the experiment has k treatment combinations, then the number of replicates, n, per treatment combination = 1 + 2(m-1). This sample size methodology is valid for situations where the factor effects are multiplicative; that is, one level of a variable may increase the standard deviation by 1.3, while one level of another variable may increase the standard deviation by 1.5, so that when those two levels are combined in a treatment, the net effect is (1.3)(1.5) = 1.95. Of course, the sample size is valid only for the F-testing approach. The additive model is much more complex, and the theory is not defined, particularly with regard to appropriate sample size. In case someone asks about the sample size per treatment combination, it's based on the following: the sample size program gives the size m, and df = m - 1 degrees of freedom are needed across k/2 treatments, so each treatment contributes 2df/k degrees of freedom; since the standard deviation takes 1 degree of freedom, the sample size per treatment combination is n = 1 + 2df/k = 1 + 2(m-1)/k. *

120 Workshop # 7 : Run a DOE to optimize the validated KPIVs to get the desired KPOV

121 Improve : Improve Phase’s output
Which KPIV’s cause mean shifts? Which KPIV’s affect the standard deviation? Levels of the KPIV’s that optimize process performance

122 Control The Control phase serves to establish the action to ensure that the process is monitored continuously for consistency in quality of the product or service.

123 Control: Tools To monitor and control the KPIV’s Error Proofing (Poka-Yoke) SPC Control Plan

124 Control: Poka-Yoke Why Poka-Yoke? It strives for zero defects, leads to the elimination of quality inspection, respects the intelligence of workers, takes over repetitive tasks/actions that depend on one's memory, and frees an operator's time and mind to pursue more creative and value-added activities.

125 Control: Poka-Yoke Benefit of Poka-Yoke?
Enforces operational procedures or sequences Signals or stops a process if an error occurs or a defect is created Eliminates choices leading to incorrect actions Prevents product damage Prevents machine damage Prevents personal injury Eliminates inadvertent mistakes

126 Control: SPC SPC is the basic tool for observing variation and using statistical signals to monitor and/or improve performance. This tool can be applied to nearly any area. Performance characteristics of equipment Error rates of bookkeeping tasks Dollar figures of gross sales Scrap rates from waste analysis Transit times in material management systems SPC stands for Statistical Process Control. Unfortunately, most companies apply it to finished goods (Y’s) rather than process characteristics (X’s). Until the process inputs become the focus of our effort, the full power of SPC methods to improve quality, increase productivity, and reduce cost cannot be realized.

127 Types of Control Charts
The quality of a product or process may be assessed by means of variables (actual values measured on a continuous scale, e.g. length, weight, strength, resistance) or attributes (discrete data that come from classifying units as accept/reject or from counting the number of defects on a unit). If the quality characteristic is measurable, monitor its mean value and variability (range or standard deviation). If the quality characteristic is not measurable, monitor the fraction (or number) of defectives, or monitor the number of defects.
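For the measurable (variables) case, the Shewhart X-bar/R limits assumed in the following slides are typically computed as follows, with A2, D3 and D4 the standard control-chart constants for the subgroup size:

    UCL_{\bar{x}} = \bar{\bar{x}} + A_2\bar{R}, \quad LCL_{\bar{x}} = \bar{\bar{x}} - A_2\bar{R};
    \qquad UCL_R = D_4\bar{R}, \quad LCL_R = D_3\bar{R}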

128 Defectives vs Defects Defective or Nonconforming Unit
A unit of product that does not satisfy one or more of the specifications for the product, e.g. scratched media, a cracked casing, a failed PCBA. Defect or Nonconformity: a specific point at which a specification is not satisfied, e.g. a scratch, a crack, a defective IC.

129 Shewhart Control Charts - Overview
Walter A Shewhart

130 Shewhart Control Charts for Variables

131 Control: SPC Choosing The Correct Control Chart Type
Type of data: attributes or variables? Attributes: are you counting defects or defectives? For defects, use the c chart if the area of opportunity is constant from sample to sample, otherwise the u chart. For defectives, use p or np if the sub-group size is constant, otherwise p. Variables: individual measurements or sub-groups? For individuals with normally distributed data where the interest is primarily in sudden shifts in the mean, use X, mR; MA, EWMA or CUSUM charts are more effective in detecting gradual long-term changes. For sub-groups, use X-bar, R or X-bar, s; the data tend to be normally distributed because of the central limit theorem, and use of modified control chart rules is okay on the X-bar chart.

132 Control: Control Phase’s output
Y is monitored with suitable tools X is controlled by suitable tools Manage the INPUTS and good OUTPUTS will follow

133 Breakthrough Summary: Champion, Black Belts, Finance Rep. & Process Owner

134 Hard Savings Savings which flow to Net Profit Before Income Tax (NPBIT) Can be tracked and reported by the Finance organization Is usually a reduction in labor, material usage, material cost, or overhead Can also be cost of money for reduction in inventory or assets

135 Finance Guidelines - Savings Definitions
Hard Savings Direct Improvement to Company Earnings Baseline is Current Spending Experience Directly Traceable to Project Can be Audited Hard Savings Example Process is Improved, resulting in lower scrap Scrap reduction can be linked directly to the successful completion of the project

136 Potential Savings Savings opportunities which have been documented and validated, but which require action before actual savings can be realized. An example is capital equipment that has become excess due to increased efficiencies in the process: the savings cannot be realized because we are still paying for the equipment, but it has the potential for generating savings if we can sell it or put it back into use because of increases in schedules. Some form of management decision or action is generally required to realize the savings.

137 Finance Guidelines - Savings Definitions
Potential Savings Improve Capability of company Resource Potential Savings Example Process is Improved, resulting in reduced manpower requirement Headcount is not reduced or reduction cannot be traced to the project Potential Savings might turn into hard savings if the resource is productively utilized in the future

138 Identifying Soft Savings
Dollars or other benefits exist but they are not directly traceable Projected benefits have a reasonable probability (TBD) that they will occur Some or all of the benefits may occur outside of the normal 12 month tracking window Assessment of the benefit could/should be viewed in terms of strategic value to the company and the amount of baseline shift accomplished

139 Finance Guidelines - Savings Definitions
Soft Savings Benefit Expected from Process Improvement Benefit cannot be directly traced to Successful Completion of Project Benefit cannot be quantified Soft Savings Example Process is Improved; decreasing cycle time

