
1. Experimental Comparison of Code-Based and Model-Based Test Prioritization (AMOST 2009)
Bogdan Korel, Computer Science Department, Illinois Institute of Technology, Chicago, USA
George Koutsogiannakis, Computer Science Department, Illinois Institute of Technology, Chicago, USA

2. Outline
- Introduction
- Test prioritization
- Code-based test prioritization
- System Modeling
- Model-based test prioritization
- Measuring the effectiveness of test prioritization
- Experimental study
- Conclusions

4. Introduction
- During maintenance of evolving software systems, their specification and implementation are changed
- Regression testing validates changes made to the system
- Existing regression testing techniques:
  - Code-based
  - Specification-based

5. Introduction
- During regression testing, after testing the modified part of the system, the modified system needs to be retested using the existing test suite
- Retesting the system may be very expensive
- Testers are interested in detecting faults in the system as early as possible during the retesting process

6. Outline

7. Test Prioritization
We consider test prioritization with respect to early fault detection. The goal is to increase the likelihood of revealing faults earlier during execution of the prioritized test suite.

8. Test Prioritization
Let TS = {t1, t2, ..., tN} be a test suite.
Question: in which order should the tests be executed?
1. t1, t2, ..., tN-1, tN
2. tN, tN-1, ..., t2, t1
3. ...

9. Test Prioritization
Suppose test t2 is the only test in TS that fails.
1. t1, t2, ..., tN-1, tN (early fault detection)
2. tN, tN-1, ..., t2, t1 (late fault detection)

10. Prioritization Methods
Existing test prioritization methods:
- Random prioritization
- Code-based prioritization: order tests according to some criterion, e.g., so that code coverage is achieved at the fastest rate
- Model-based test prioritization: information about the system model is used to prioritize the test suite for system retesting

11. Prioritization Methods
Perform an experimental study to compare:
- Code-based prioritization
- Model-based test prioritization

12. Outline

13. Code-based Test Prioritization
The idea of code-based test prioritization is to use the source code of the system to prioritize the test suite.

14. Code-based Test Prioritization
- The original system is executed on the whole test suite
- Information about the execution of the original system is collected
- The collected information is used to prioritize the test suite for the modified system
- Executing the test suite on the original system may be expensive

15. System Retesting
[Diagram: the original implementation is modified into the modified implementation; the existing test suite is used to retest the modified implementation]

16. Code-based Test Prioritization
[Diagram: code-based test prioritization uses the original implementation and the test suite to produce a prioritized test suite for retesting the modified implementation]

17. Code-based Test Prioritization
[Diagram: the test suite is executed on the original implementation; the collected test execution information feeds the prioritization algorithm, which outputs the prioritized test suite]

18. Code-based Test Prioritization
Several code-based test prioritization methods:
- Total statement coverage
- Additional statement coverage
- Total function coverage
- Additional function coverage
- ...

19. Code-based Test Prioritization
Information collected for each test during execution of the original system:
- Total statement coverage: # of statements executed
- Additional statement coverage: a list of statements executed
- Total function coverage: # of functions executed
- Additional function coverage: a list of functions executed

20. Code-based Test Prioritization
Several code-based test prioritization methods:
- Total statement coverage
- Additional statement coverage (Heuristic #1)
- Total function coverage
- Additional function coverage
- ...

21. Additional Statement Coverage
- Gives each statement the same opportunity to be executed during software retesting
- A higher priority is assigned to the test that covers the highest number of not-yet-executed statements

22-30. Additional Statement Coverage (worked example)
Executed statements for each test:
t1: S1, S2, S3
t2: S1, S5, S8, S9
t3: S1, S5, S7
t4: S1, S5, S3, S4
t5: S1, S2, S7
t6: S1, S2
t7: S1, S2, S4
t8: S1, S2, S3, S4, S7
t9: S1, S6
t10: S1, S2

Step 1: t8 covers the most statements, so it is selected first.
S: t8; covered statements: S1, S2, S3, S4, S7
Step 2: among the remaining tests, t2 covers the most not-yet-covered statements (S5, S8, S9).
S: t8, t2; covered statements: S1, S2, S3, S4, S5, S7, S8, S9
Step 3: t9 is the only remaining test that covers a new statement (S6).
S: t8, t2, t9
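The greedy selection walked through above can be sketched in a few lines. This is an illustrative sketch of additional statement coverage, not the authors' implementation; the test names and statement sets below come from the example.

```python
def additional_coverage_prioritize(coverage):
    """Greedy 'additional statement coverage' prioritization sketch.

    coverage maps each test to the set of statements it executed on
    the original system. Repeatedly pick the test that covers the
    most not-yet-covered statements; when no remaining test adds new
    coverage, reset the covered set and continue with the rest.
    """
    remaining = dict(coverage)
    covered = set()
    ordered = []
    while remaining:
        # The test adding the most new statements (ties: first found).
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        if covered and not remaining[best] - covered:
            covered = set()  # coverage exhausted: reset and keep going
            continue
        ordered.append(best)
        covered |= remaining.pop(best)
    return ordered
```

On the example above this selects t8, t2, t9 first, matching the slides.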

31. Outline

32. System Modeling
- State-based modeling is used to model state-based systems, i.e., systems characterized by a set of states and transitions between states that are triggered by events
- System modeling is very popular for state-based systems such as control systems, communications systems, embedded systems, ...

33. System Modeling
Several modeling languages have been developed to model state-based software systems:
- EFSM: Extended Finite State Machine
- SDL: Specification and Description Language
- VFSM: Virtual Finite State Machine
- Statecharts
- ...

35. Extended Finite State Machine
An EFSM consists of:
- States
- Transitions

36. EFSM Transition
[Diagram: a transition from State 1 to State 2 labeled Event(p)[Condition]/Action(s); the transition fires when Event occurs with parameter p and Condition holds, and Action(s) is then executed]

41. Sample System Model
[Figure: a sample EFSM system model]

42. State-Based Models
We assume that models are executable, i.e., enough detail is provided in the model so that it can be executed. An input t (a test) to a model is a sequence of events with input values associated with these events.

43. System Model
Input events: On(), Set(50), SensorSpeed(50), Coast(70), Brake(), Resume(55), Accelerate(50), Off()

49. System Model
Input (test): On(), Set(50), SensorSpeed(50), Coast(70), Brake(), Resume(55), Accelerate(50), Off()
Transition sequence: T1, T2, T4, T7, T9, T11, T10, T15
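An executable EFSM of this kind can be represented as a transition table: each transition has a source state, a triggering event, a guard over the context variables and the event parameter, an action, and a target state. The sketch below uses a tiny illustrative two-state controller, not the model from the slides; all state, event, and variable names in it are made up for the example.

```python
def run_efsm(transitions, initial_state, test):
    """Execute a test (a sequence of (event, value) pairs) on an
    executable EFSM and return the names of the transitions fired."""
    state, ctx, fired = initial_state, {}, []
    for event, value in test:
        for t in transitions:
            # Fire the first transition whose source state, event,
            # and guard all match the current situation.
            if t["src"] == state and t["event"] == event and t["guard"](ctx, value):
                t["action"](ctx, value)
                state = t["dst"]
                fired.append(t["name"])
                break
    return fired

# Illustrative model: a hypothetical on/set/off controller.
model = [
    {"name": "T1", "src": "Off", "event": "On", "dst": "Idle",
     "guard": lambda c, v: True, "action": lambda c, v: c.update(speed=0)},
    {"name": "T2", "src": "Idle", "event": "Set", "dst": "Cruising",
     "guard": lambda c, v: v > 0, "action": lambda c, v: c.update(speed=v)},
    {"name": "T3", "src": "Cruising", "event": "Off", "dst": "Off",
     "guard": lambda c, v: True, "action": lambda c, v: None},
]
```

Running the test On(), Set(50), Off() on this model fires T1, T2, T3; a test in the model-based setting is exactly such an event sequence with input values.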

50. Outline

51. Model-based Test Prioritization
The idea of model-based test prioritization is to use the system model(s) to prioritize the test suite. Two cases are distinguished:
- The model is modified
- The model is not modified

52. System Retesting (Model Is Modified)
[Diagram: the original model is modified into the modified model; its implementation is correspondingly modified; the existing test suite is used to retest the modified implementation]

53. Model-based Test Prioritization
[Diagram: model-based test prioritization uses the original model, the modified model, and the test suite to produce a prioritized test suite for retesting the modified implementation]

54. Model-based Test Prioritization
(The second case: the model is not modified)

55. System Retesting (Model Is Not Modified)
[Diagram: the model stays unchanged while its implementation is modified; the existing test suite is used to retest the modified implementation]

56. Model-based Test Prioritization (Model Is Not Modified)
[Diagram: model-based test prioritization uses the unmodified model and the test suite to produce a prioritized test suite for retesting the modified implementation]

57. Model-based Test Prioritization
[Diagram: the test suite is executed on the model; the test execution information and the marked model elements feed the prioritization algorithm, which outputs the prioritized test suite]

58-63. Marking the Model
[Figures: a sample model and the corresponding source code; when the source code related to Brake is modified, the corresponding transitions in the model are marked; when the source code related to Coast is also modified, its transitions are marked as well]

64. Model-based Test Prioritization
- The model is executed on the whole test suite
- Information about the execution of the model is collected
- The collected information is used to prioritize the test suite
- Executing the test suite on the model is inexpensive (very fast) compared to executing the system

65. Model-based Test Prioritization
Several model-based test prioritization methods:
- Selective test prioritization
- Model-based prioritization based on:
  - the # of executed marked transitions
  - the list of executed marked transitions
- Model dependence-based test prioritization (uses the sequence of executed transitions)
- ...

67. Selective Test Prioritization
The idea of selective test prioritization:
- Assign high priority to tests that execute at least one marked transition in the model
- Assign low priority to tests that do not execute any marked transition in the model

68. Selective Test Prioritization
- During system retesting, tests with high priority are executed first; low-priority tests are executed later
- High-priority tests are ordered randomly among themselves; similarly, low-priority tests are ordered randomly
Test suite = TSH followed by TSL (TSH: high-priority tests, TSL: low-priority tests)
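Selective test prioritization as described above amounts to a two-bucket random ordering. A minimal sketch (function and variable names are mine, not from the paper):

```python
import random

def selective_prioritize(executed_marked, rng=random):
    """Selective test prioritization sketch: tests that execute at
    least one marked transition on the model get high priority and
    are randomly ordered first; the remaining tests are randomly
    ordered after them.

    executed_marked maps each test to the set of marked transitions
    it executes on the model.
    """
    high = [t for t, marked in executed_marked.items() if marked]
    low = [t for t, marked in executed_marked.items() if not marked]
    rng.shuffle(high)   # TSH: high-priority tests, random order
    rng.shuffle(low)    # TSL: low-priority tests, random order
    return high + low
```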

69. Model-based Test Prioritization
Several model-based test prioritization methods:
- Selective test prioritization
- Model-based prioritization based on:
  - the # of executed marked transitions
  - the list of executed marked transitions (Heuristic #2)
- Model dependence-based test prioritization (uses the sequence of executed transitions)
- ...

70. Heuristic #2
- Gives each marked transition the same opportunity to be executed during software retesting
- A higher priority is assigned to a test that executes the marked transition that has been executed the fewest times at the given point of system retesting
- The heuristic balances the number of executions of marked transitions by keeping an execution counter for each marked transition

71-80. Heuristic #2 (worked example)
Executed marked transitions for each test:
t1: T1, T2, T3
t2: T3, T4, T5
t3: T3, T4
t4: T5
t5: T1
t6: T1, T2
t7: T2, T4
t8: T2, T3, T4
t9: (no marked transitions)
t10: (no marked transitions)
Initially, count(T1) = count(T2) = count(T3) = count(T4) = count(T5) = 0.

Step 1: t8 is selected.
S: t8; count(T2) = count(T3) = count(T4) = 1, count(T1) = count(T5) = 0
Step 2: t2 is selected (it executes T5, which has not been executed yet).
S: t8, t2; count(T1) = 0, count(T2) = 1, count(T3) = 2, count(T4) = 2, count(T5) = 1
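A count-balancing heuristic of this kind can be sketched as follows. The slides do not spell out the exact selection rule, so the scoring below (prefer the test whose marked transitions have the lowest current count, breaking ties by the number of marked transitions executed) is one plausible reading, not the authors' exact algorithm.

```python
from collections import Counter

def heuristic2_prioritize(executed_marked):
    """Count-balancing model-based prioritization sketch.

    executed_marked maps each test to the list of marked transitions
    it executes on the model. Repeatedly select a test whose marked
    transitions have been executed the fewest times so far, then
    increment the execution counter of each marked transition the
    selected test executes. Tests executing no marked transition
    come last.
    """
    counts = Counter()
    remaining = dict(executed_marked)
    ordered = []
    while remaining:
        def score(t):
            trans = remaining[t]
            if not trans:                       # no marked transitions
                return (float("inf"), 0)
            # Lowest current count first; more transitions as tie-break.
            return (min(counts[x] for x in trans), -len(trans))
        best = min(remaining, key=score)
        ordered.append(best)
        counts.update(remaining.pop(best))
    return ordered
```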

81. Outline

82. Measuring the Effectiveness of Test Prioritization
Test prioritization methods may generate many different solutions (prioritized test sequences) for a given test suite. One factor that may influence the resulting prioritized test sequence is, for example, the order in which tests are processed during prioritization.

83. Measuring the Effectiveness of Test Prioritization
Different measures have been introduced to compare test prioritization methods. The rate of fault detection measures how rapidly a prioritized test sequence detects faults; it is a function of the percentage of faults detected against the fraction of the test suite executed.

84. Measuring the Effectiveness of Test Prioritization
In our study we concentrated on one fault d. To compare prioritization methods, we use the concept of the most likely position of the first failed test that detects fault d: the average (most likely) position, over all possible prioritized test sequences that a test prioritization method may generate (for a given system under test and test suite), of the first failed test that detects d.

85. Measuring Effectiveness of Early Fault Detection
The most likely (average) position of the first failed test that detects fault d:
MLP(d) = (1/M) * sum over i of i * R(i, d)
where:
R(i, d): number of prioritized test sequences for which the first failed test is in position i
M: number of all possible prioritized test sequences
d: fault

86. Measuring Effectiveness of Early Fault Detection
The most likely relative position in test suite TS of the first failed test that detects fault d:
RP(d) = MLP(d) / |TS|, where 0 < RP(d) <= 1

87. Measuring Effectiveness of Early Fault Detection
- For some heuristics (e.g., random prioritization), MLP can be computed precisely with an analytical approach
- However, for many test prioritization methods, deriving a precise formula for RP(d), the most likely relative position of the first failed test that detects fault d, may be very difficult
- Therefore, we have implemented a randomized approach to estimate RP(d) for all five heuristic methods

88. Measuring Effectiveness of Early Fault Detection
- The estimation randomly generates prioritized test sequences according to a given test prioritization heuristic
- For each generated sequence, the position of the first failed test in the sequence is determined
- After a large number of test sequences has been generated, the estimated most likely position is computed
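The randomized estimation described above can be sketched as a small Monte Carlo loop. Names here are illustrative; `prioritize` stands for any heuristic that returns one randomly generated prioritized sequence.

```python
def estimate_rp(prioritize, failing_tests, n_tests, runs=10000):
    """Monte Carlo estimate of RP(d) for a prioritization heuristic.

    prioritize() returns one prioritized test sequence (it may use
    randomness internally); failing_tests is the set of tests that
    detect fault d (assumed non-empty and contained in the suite);
    n_tests is the test-suite size |TS|.
    """
    total = 0
    for _ in range(runs):
        sequence = prioritize()
        # 1-based position of the first failed test in this sequence.
        first = next(i for i, t in enumerate(sequence, start=1)
                     if t in failing_tests)
        total += first
    mlp = total / runs       # estimated most likely position MLP(d)
    return mlp / n_tests     # relative position RP(d) = MLP(d) / |TS|
```

For random prioritization of 10 tests with a single failing test, the estimate converges to MLP = (10 + 1)/2 = 5.5, i.e., RP = 0.55.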

89. Outline

90. Experimental Study
- The goal of the experimental study is to compare the effectiveness of early fault detection of:
  - Code-based prioritization (Heuristic #1)
  - Model-based test prioritization (Heuristic #2)
- The most likely relative position RP(d) of the first failed test that detects fault d is used as the measure of the effectiveness of early fault detection

91. Experimental Study
[Table: the subject models and implementations used in the study]

92. Experimental Study
- We introduced incorrect modifications to the implementations
- For each modification we identified and marked the corresponding transitions
- For each implementation we created 7-22 incorrect versions
- Each incorrect version had 1-10 failed tests

93-97. RP Boxplots for the Experimental Study
R: random prioritization; H1: Heuristic #1 (code-based); H2: Heuristic #2 (model-based)
[Boxplots of RP for: ATM and Cruise Control; Fuel Pump and ISDN; TCP and Print-Tokens; Vending Machine; cumulative data for all models]

98. Conclusions
- Model-based test prioritization is less expensive than code-based prioritization: executing the model is inexpensive compared to executing the system
- Cost of model development

99. Conclusions
- Model-based prioritization is sensitive to correct marking of transitions when the model is not modified: correct identification of the transitions (marked transitions) related to source-code modifications is important
- This is not an issue when models are modified

100. Conclusions
- This small experimental study suggests that model-based prioritization may be as effective in early fault detection as code-based prioritization, if not better
- Code-based test prioritization uses information related to the original system

101. Future Work
- Automated mapping of source-code changes to a model
- Experimental study on larger models and systems with multiple faults
- Experimental study to compare more code-based methods with model-based methods

102. Questions?

