Prioritizing Test Cases for Regression Testing. Sebastian Elbaum, University of Nebraska, Lincoln; Alexey Malishevsky, Oregon State University; Gregg Rothermel, Oregon State University.


1 Prioritizing Test Cases for Regression Testing
Sebastian Elbaum, University of Nebraska, Lincoln
Alexey Malishevsky, Oregon State University
Gregg Rothermel, Oregon State University
ISSTA 2000

2 Defining Prioritization
Test scheduling, performed during the regression testing stage
Goal: maximize a criterion (or criteria)
–Increase rate of fault detection
–Increase rate of coverage
–Increase rate of fault-likelihood exposure

3 Prioritization Requirements
Definition of goal: increase rate of fault detection
Measurement criterion: % of faults detected over the life of the test suite
Prioritization technique:
–Random
–Total statement coverage
–Probability of exposing faults

4 Previous Work
Goal
–Increase rate of fault detection
Measurement
–APFD: weighted average of the percentage of faults detected over the life of the test suite
–Scale: 0–100 (higher means faster detection)
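The APFD measure above can be sketched in Python. This is an illustrative implementation, not the authors' tooling: the function name and data shapes are assumptions, and, as in the paper's examples, every fault is assumed to be detected by at least one test in the suite. The value is returned as a fraction; multiply by 100 for the 0–100 scale on the slide.

```python
def apfd(order, faults_detected):
    """Weighted average of the percentage of faults detected over the
    life of the test suite (higher = faster detection).

    order: test names in execution order.
    faults_detected: test name -> set of faults that test reveals.
    """
    n = len(order)
    all_faults = set().union(*faults_detected.values())
    m = len(all_faults)
    # TF_i: 1-based position of the first test that reveals fault i
    first_pos = {}
    for pos, test in enumerate(order, start=1):
        for fault in faults_detected.get(test, set()):
            first_pos.setdefault(fault, pos)
    return 1 - sum(first_pos.values()) / (n * m) + 1 / (2 * n)
```

Orderings that expose faults earlier score higher: running a test that reveals many faults first raises APFD, which is exactly what the prioritization techniques try to exploit.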

5 Previous Work (2)
Measuring Rate of Fault Detection
[Figure: fault-detection matrix (tests A–E vs. faults, X = fault exposed), compared across three test orderings: A-B-C-D-E, C-E-B-A-D, and E-D-C-B-A.]

6 Previous Work (3)
Prioritization Techniques
# Label         Prioritize on
1 random        randomized ordering
2 optimal       optimized rate of fault detection
3 st-total      coverage of statements
4 st-addtl      coverage of statements not yet covered
5 st-fep-total  probability of exposing faults
6 st-fep-addtl  probability of exposing faults, adjusted to consider previous test cases
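As a rough illustration (not the authors' implementation), the coverage-based techniques in the table can be sketched as greedy orderings over per-test statement coverage. The function names and the dictionary layout are assumptions:

```python
def st_total(coverage):
    """st-total: order tests by total statements covered, descending."""
    return sorted(coverage, key=lambda t: len(coverage[t]), reverse=True)

def st_addtl(coverage):
    """st-addtl: repeatedly pick the test covering the most statements
    not yet covered; once no test adds new coverage, reset and continue."""
    remaining = dict(coverage)
    covered, order = set(), []
    while remaining:
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        if covered and not remaining[best] - covered:
            covered = set()  # every coverable statement seen: restart accounting
            continue
        order.append(best)
        covered |= remaining.pop(best)
    return order
```

The difference matters when tests overlap: st-total can schedule two large, redundant tests back to back, while st-addtl defers a test whose statements are already covered.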

7 Summary of Previous Work
Performed an empirical evaluation of general prioritization techniques
–Even simple techniques generated gains
Used statement-level techniques
Still room to improve

8 Research Questions
1. Can version-specific test case prioritization (TCP) improve the rate of fault detection?
2. How does fine technique granularity compare with coarse granularity?
3. Can the use of fault proneness improve the rate of fault detection?

9 Addressing the Research Questions
New family of prioritization techniques
New series of experiments
1. Version-specific prioritization
–Statement level
–Function level
2. Granularity
3. Contribution of fault proneness
Practical implications

10 Additional Techniques
#  Label            Prioritize on
7  fn-total         coverage of functions
8  fn-addtl         coverage of functions not yet covered
9  fn-fep-total     probability of exposing faults
10 fn-fep-addtl     probability of exposing faults, adjusted to consider previous tests
11 fn-fi-total      probability of fault likelihood
12 fn-fi-addtl      probability of fault likelihood, adjusted to consider previous tests
13 fn-fi-fep-total  combined probabilities of fault existence and fault exposure
14 fn-fi-fep-addtl  combined probabilities of fault existence/exposure, adjusted on previous coverage
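One plausible reading of a combined technique such as fn-fi-fep-total is sketched below: each test's award value is the sum, over the functions it covers, of the function's fault index times its fault-exposing potential. The multiplicative combination and the data shapes are assumptions for illustration, not the paper's exact formulation:

```python
def fn_fi_fep_total(coverage, fep, fi):
    """Hypothetical fn-fi-fep-total sketch: score each test by the sum of
    fault index (fi) x fault-exposing potential (fep) over covered
    functions, then order tests by descending score."""
    def award(test):
        return sum(fi[f] * fep[f] for f in coverage[test])
    return sorted(coverage, key=award, reverse=True)
```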

11 Family of Experiments
8 programs
29 versions
50 test suites per program
–Branch-coverage adequate
14 techniques
–2 control “techniques”: optimal and random
–4 statement level
–8 function level

12 “Generic” Factorial Design
Factors: techniques, programs, 29 versions, 50 test suites
–Independence of code
–Independence of suite composition
–Independence of changes

13 Experiment 1a: Version Specific
RQ1: Prioritization works version-specifically at the statement level.
–ANOVA: different average APFD among statement-level techniques
–Bonferroni: St-fep-addtl significantly better

Group  Technique     Value
A      St-fep-addtl  78.88
B      St-fep-total  76.99
B      St-total      76.30
C      St-addtl      74.44
       Random        59.73

14 Experiment 1b: Version Specific
RQ1: Prioritization works version-specifically at the function level.
–ANOVA: different average APFD among function-level techniques
–Bonferroni: Fn-fep not significantly different from Fn-total

Group  Technique     Value
A      Fn-fep-addtl  75.59
A      Fn-fep-total  75.48
A      Fn-total      75.09
B      Fn-addtl      71.66

15 Experiment 2: Granularity
RQ2: Fine granularity has greater prioritization potential.
–Techniques at the statement level are significantly better than at the function level
–However, the “best” function-level techniques are better than the “worst” statement-level techniques

16 Experiment 3: Fault Proneness
RQ3: Incorporating fault likelihood did not significantly increase APFD.
–ANOVA: significant differences in average APFD values among all function-level techniques
–Bonferroni: “Surprise”: techniques using fault likelihood did not rank significantly better

Group  Technique        Value
A      Fn-fi-fep-addtl  76.34
A B    Fn-fi-fep-total  75.92
A B    Fn-fi-total      75.63
A B    Fn-fep-addtl     75.59
A B    Fn-fep-total     75.48
B      Fn-total         75.09
C      Fn-fi-addtl      72.62
C      Fn-addtl         71.66

Reasons:
–For small changes, fault likelihood does not seem to be worth it.
–We believe it will be worthwhile for larger changes; further exploration required.

17 Practical Implications
APFD:
–Optimal = 99%
–Fn-fi-fep-addtl = 98%
–Fn-total = 93%
–Random = 84%
Time:
–Optimal = 1.3
–Fn-fi-fep-addtl = 2.0 (+0.7)
–Fn-total = 11.9 (+10.6)
–Random = 16.5 (+15.2)

18 Conclusions
Version-specific techniques can significantly improve the rate of fault detection during regression testing
Technique granularity is noticeable:
–In general, statement level is more powerful, but
–Advanced function-level techniques are better than simple statement-level techniques
Fault likelihood may not be helpful

19 Working on…
Controlling the threats
–More subjects
–Extending the model
Discovery of additional factors
Development of guidelines to choose the “best” technique

20 Backup Slides

21 Threats
Representativeness
–Program
–Changes
–Tests and process
APFD as a test-efficiency measure
Tool correctness

22 Experiment Subjects

23 FEP Computation
Probability that a fault causes a failure
Computed with mutation analysis:
–Insert mutants
–Determine how many mutants are exposed by each test case
FEP(t,s) = (# of mutants of s exposed by t) / (# of mutants of s)
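Given a mutant kill matrix produced by mutation analysis, the FEP ratio above can be computed directly. The data layout (statement → mutant → killing tests) is an assumption for illustration:

```python
def fep(kill_matrix, test, stmt):
    """FEP(t, s): fraction of the mutants of statement s exposed by test t.

    kill_matrix: stmt -> {mutant id -> set of tests that kill that mutant}.
    """
    mutants = kill_matrix.get(stmt, {})
    if not mutants:
        return 0.0  # no mutants inserted for this statement
    exposed = sum(1 for killers in mutants.values() if test in killers)
    return exposed / len(mutants)
```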

24 FI Computation
Fault likelihood
Associated with measurable software attributes
Complexity metrics:
–Size, control flow, and coupling
Fault index generated via principal component analysis
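The paper derives the fault index with principal component analysis over complexity metrics. As a simplified stand-in for that combination, the sketch below standardizes each metric to a z-score and sums them per function; the metric names and the equal weighting are illustrative assumptions, not the paper's method:

```python
from statistics import mean, pstdev

def fault_index(metrics):
    """metrics: function name -> {metric name: value}.
    Returns a per-function index; higher suggests more fault-prone.
    Simplified stand-in for a PCA-derived fault index."""
    names = list(next(iter(metrics.values())))
    stats = {}
    for name in names:
        vals = [m[name] for m in metrics.values()]
        stats[name] = (mean(vals), pstdev(vals) or 1.0)  # guard zero spread
    return {
        fn: sum((m[n] - stats[n][0]) / stats[n][1] for n in names)
        for fn, m in metrics.items()
    }
```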

25 Overall
Group  Technique        Value
A      Optimal          94.24
B      St-fep-addtl     78.88
C      St-fep-total     76.99
D C    Fn-fi-fep-addtl  76.34
D C    St-total         76.30
D E    Fn-fi-fep-total  75.92
D E    Fn-fi-total      75.63
D E    Fn-fep-addtl     75.59
D E    Fn-fep-total     75.48
F E    Fn-total         75.09
F      St-addtl         74.44
G      Fn-fi-addtl      72.62
G      Fn-addtl         71.66
H      Random           59.73

