AMOST 2009: Experimental Comparison of Code-Based and Model-Based Test Prioritization. Bogdan Korel, Computer Science Department, Illinois Institute of Technology.


AMOST 2009. Experimental Comparison of Code-Based and Model-Based Test Prioritization. Bogdan Korel, Computer Science Department, Illinois Institute of Technology, Chicago, USA. George Koutsogiannakis, Computer Science Department, Illinois Institute of Technology, Chicago, USA

AMOST Outline  Introduction  Test prioritization  Code-based test prioritization  System Modeling  Model-based test prioritization  Measuring the effectiveness of test prioritization  Experimental study  Conclusions


AMOST Introduction  During maintenance of evolving software systems, their specification and implementation are changed  Regression testing validates changes made to the system  Existing regression testing techniques –Code-based –Specification-based

AMOST Introduction  During regression testing, after testing the modified part of the system, the modified system needs to be retested using the existing test suite  Retesting the system may be very expensive  Testers are interested in detecting faults in the system as early as possible during the retesting process

AMOST Outline  Introduction  Test prioritization  Code-based test prioritization  System Modeling  Model-based test prioritization  Measuring the effectiveness of test prioritization  Experimental study  Conclusions

AMOST Test Prioritization  We consider test prioritization with respect to early fault detection. The goal is to increase the likelihood of revealing faults earlier during execution of the prioritized test suite

AMOST Test Prioritization  Let TS = {t1, t2, …, tN} be a test suite. Question: in which order should the tests be executed? 1. t1, t2, …, tN-1, tN  2. tN, tN-1, …, t2, t1  3. …

AMOST Test Prioritization  Suppose test t2 is the only test in TS that fails. 1. t1, t2, …, tN-1, tN: early fault detection  2. tN, tN-1, …, t2, t1: late fault detection

AMOST 2009 Prioritization Methods  Existing test prioritization methods: –Random prioritization –Code-based prioritization: tests are ordered according to some criterion, e.g., so that code coverage is achieved at the fastest rate –Model-based test prioritization: information about the system model is used to prioritize the test suite for system retesting

AMOST 2009 Prioritization Methods  We perform an experimental study to compare: –Code-based prioritization –Model-based test prioritization

AMOST Outline  Introduction  Test prioritization  Code-based prioritization  System Modeling  Model-based test prioritization  Measuring the effectiveness of test prioritization  Experimental study  Conclusions

AMOST Code-based Test Prioritization  The idea of code-based test prioritization is to use the source code of the system to prioritize the test suite.

AMOST Code-based Test Prioritization  The original system is executed for the whole test suite  Information about execution of the original system is collected  The collected information is used to prioritize the test suite for the modified system  Execution of the test suite on the original system may be expensive

AMOST System Retesting [diagram: the original implementation is modified into the modified implementation; the existing test suite is run on the modified implementation]

AMOST Code-based Test Prioritization [diagram: code-based test prioritization uses the original implementation and the test suite to produce a prioritized test suite, which is run on the modified implementation]

AMOST Code-based Test Prioritization [diagram: the test suite is executed on the original implementation; the collected test execution information feeds the prioritization algorithm, which outputs the prioritized test suite]

AMOST Code-based Test Prioritization  Several code-based test prioritization methods: –Total statement coverage –Additional statement coverage –Total function coverage –Additional function coverage –…

AMOST Code-based Test Prioritization  Information collected for each test during execution of the original system: –Total statement coverage: # of statements executed –Additional statement coverage: a list of the statements executed –Total function coverage: # of functions executed –Additional function coverage: a list of the functions executed

AMOST Code-based Test Prioritization  Several code-based test prioritization methods: –Total statement coverage –Additional statement coverage (Heuristic #1) –Total function coverage –Additional function coverage –…

AMOST  Allows each statement to have the same opportunity to be executed during software retesting  A higher priority is assigned to a test that covers the higher number of not yet executed statements Additional Statement Coverage

AMOST Additional Statement Coverage
Executed statements for each test:
t1: S1, S2, S3
t2: S1, S5, S8, S9
t3: S1, S5, S7
t4: S1, S5, S3, S4
t5: S1, S2, S7
t6: S1, S2
t7: S1, S2, S4
t8: S1, S2, S3, S4, S7
t9: S1, S6
t10: S1, S2


AMOST Additional Statement Coverage
Executed statements for each test:
t1: S1, S2, S3
t2: S1, S5, S8, S9
t3: S1, S5, S7
t4: S1, S5, S3, S4
t5: S1, S2, S7
t6: S1, S2
t7: S1, S2, S4
t8: S1, S2, S3, S4, S7
t9: S1, S6
t10: S1, S2
S: t8 (S is the prioritized test sequence built so far)

AMOST Additional Statement Coverage
Executed statements for each test:
t1: S1, S2, S3
t2: S1, S5, S8, S9
t3: S1, S5, S7
t4: S1, S5, S3, S4
t5: S1, S2, S7
t6: S1, S2
t7: S1, S2, S4
t9: S1, S6
t10: S1, S2
Covered statements: S1, S2, S3, S4, S7
S: t8


AMOST Additional Statement Coverage
Executed statements for each test:
t1: S1, S2, S3
t2: S1, S5, S8, S9
t3: S1, S5, S7
t4: S1, S5, S3, S4
t5: S1, S2, S7
t6: S1, S2
t7: S1, S2, S4
t9: S1, S6
t10: S1, S2
Covered statements: S1, S2, S3, S4, S7
S: t8, t2

AMOST Additional Statement Coverage
Executed statements for each test:
t1: S1, S2, S3
t3: S1, S5, S7
t4: S1, S5, S3, S4
t5: S1, S2, S7
t6: S1, S2
t7: S1, S2, S4
t9: S1, S6
t10: S1, S2
Covered statements: S1, S2, S3, S4, S5, S7, S8, S9
S: t8, t2


AMOST Additional Statement Coverage
Executed statements for each test:
t1: S1, S2, S3
t3: S1, S5, S7
t4: S1, S5, S3, S4
t5: S1, S2, S7
t6: S1, S2
t7: S1, S2, S4
t9: S1, S6
t10: S1, S2
Covered statements: S1, S2, S3, S4, S5, S7, S8, S9
S: t8, t2, t9
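
The walkthrough above is a greedy loop: at each step, pick the test that covers the most statements not yet covered by the already selected tests. Below is a minimal sketch in Python using the example's coverage data (function and variable names are illustrative; how leftover tests that add no new coverage are ordered is an assumption, since the slides stop after t9):

```python
def additional_statement_coverage(coverage):
    """Greedy prioritization: repeatedly select the test covering the most
    statements not yet covered by the tests selected so far."""
    remaining = dict(coverage)
    covered = set()
    order = []
    while remaining:
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        if not remaining[best] - covered:
            # No test adds new coverage; append the rest in original order.
            # (The paper's handling of this case may differ, e.g. random.)
            order.extend(remaining)
            break
        order.append(best)
        covered |= remaining.pop(best)
    return order

coverage = {  # executed statements per test, from the slides
    "t1": {"S1", "S2", "S3"},     "t2": {"S1", "S5", "S8", "S9"},
    "t3": {"S1", "S5", "S7"},     "t4": {"S1", "S5", "S3", "S4"},
    "t5": {"S1", "S2", "S7"},     "t6": {"S1", "S2"},
    "t7": {"S1", "S2", "S4"},     "t8": {"S1", "S2", "S3", "S4", "S7"},
    "t9": {"S1", "S6"},           "t10": {"S1", "S2"},
}
print(additional_statement_coverage(coverage))
# -> ['t8', 't2', 't9', ...], matching S: t8, t2, t9 above
```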

AMOST Outline  Introduction  Test prioritization  Code-based prioritization  System Modeling  Model-based test prioritization  Measuring the effectiveness of test prioritization  Experimental study  Conclusions

AMOST System Modeling  State-based modeling is used to model state-based systems, i.e., systems characterized by a set of states and transitions between states that are triggered by events  System modeling is very popular for state-based systems such as: control systems, communications systems, embedded systems, …

AMOST System Modeling  Several modeling languages have been developed to model state-based software systems  EFSM: Extended Finite State Machine  SDL: Specification and Description Language  VFSM: Virtual Finite State Machine  Statecharts  …


AMOST Extended Finite State Machine  An EFSM consists of: –States –Transitions

AMOST EFSM Transition [diagram: State 1 → State 2, labeled Event(p) [Condition] / Action(s)]  A transition from State 1 to State 2 fires on event Event(p) when Condition holds, and executes Action(s)


AMOST Sample System Model [diagram: EFSM model of a cruise-control system]

AMOST State-Based Models  We assume that models are executable, i.e., enough detail is provided in the model so that it can be executed. An input t (a test) to a model is a sequence of events with input values associated with these events.

AMOST System Model Input events: On(), Set(50), SensorSpeed(50), Coast(70), Brake(), Resume(55), Accelerate(50), Off()


AMOST System Model  Input (test): On(), Set(50), SensorSpeed(50), Coast(70), Brake(), Resume(55), Accelerate(50), Off()  Transition sequence: T1, T2, T4, T7, T9, T11, T10, T15
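
As a rough illustration of what executing a model means here, the sketch below is a tiny generic EFSM interpreter; the two-transition fragment at the bottom (state names, guard, and action) is hypothetical, not the paper's cruise-control model or tool:

```python
class Transition:
    def __init__(self, name, src, dst, event, guard=None, action=None):
        self.name, self.src, self.dst, self.event = name, src, dst, event
        self.guard = guard or (lambda vars, arg: True)    # [Condition]
        self.action = action or (lambda vars, arg: None)  # Action(s)

class EFSM:
    def __init__(self, initial, transitions):
        self.state, self.transitions, self.vars = initial, transitions, {}

    def run(self, test):
        """Execute a test (a sequence of (event, value) pairs) and return
        the sequence of fired transitions."""
        fired = []
        for event, arg in test:
            for t in self.transitions:
                if t.src == self.state and t.event == event and t.guard(self.vars, arg):
                    t.action(self.vars, arg)
                    self.state = t.dst
                    fired.append(t.name)
                    break
        return fired

# Hypothetical fragment: On() then Set(speed), loosely echoing the slides.
m = EFSM("Off", [
    Transition("T1", "Off", "Idle", "On"),
    Transition("T2", "Idle", "Cruise", "Set",
               action=lambda v, arg: v.__setitem__("target", arg)),
])
print(m.run([("On", None), ("Set", 50)]))  # -> ['T1', 'T2']
```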

AMOST Outline  Introduction  Test prioritization  System Modeling  Model-based test prioritization  Measuring the effectiveness of test prioritization  Experimental study  Conclusions

AMOST Model-based Test Prioritization  The idea of model-based test prioritization is to use the system model(s) to prioritize the test suite –Model is modified –Model is not modified

AMOST System Retesting (model is modified) [diagram: the original model is modified into the modified model; the original implementation is modified into the modified implementation; the test suite is run on the modified implementation]

AMOST Model-based Test Prioritization [diagram: model-based test prioritization uses the original model, the modified model, and the test suite to produce a prioritized test suite, which is run on the modified implementation]

AMOST Model-based Test Prioritization  The idea of model-based test prioritization is to use the system model(s) to prioritize the test suite –Model is modified –Model is not modified

AMOST System Retesting (model is not modified) [diagram: the model stays unchanged while the implementation is modified; the test suite is run on the modified implementation]

AMOST Model-based Test Prioritization (model is not modified) [diagram: the unchanged model and the test suite feed model-based test prioritization, producing a prioritized test suite for the modified implementation]

AMOST Model-based Test Prioritization [diagram: the test suite is executed on the model; the test execution information and the marked model elements feed the prioritization algorithm, which outputs the prioritized test suite]


AMOST Source Code [listing of the implementation's source code]

AMOST Source code related to Brake is modified


AMOST Source code related to Brake is modified Source code related to Coast is modified


AMOST Model-based Test Prioritization  The model is executed for the whole test suite  Information about execution of the model is collected  The collected information is used to prioritize the test suite  Execution of the test suite on the model is inexpensive (very fast) as compared to execution of the system

AMOST Model-based Test Prioritization  Model-based test prioritization methods: –Selective test prioritization –Model-based prioritization based on: the # of executed marked transitions / the list of executed marked transitions –Model dependence-based test prioritization: sequence of executed transitions –…


AMOST Selective Test Prioritization  The idea of selective test prioritization is –Assign high priority to tests that execute at least one marked transition in the model –Assign low priority to tests that do not execute any marked transition in the model

AMOST Selective Test Prioritization  During system retesting, tests with high priority are executed first; low priority tests are executed later  High priority tests are ordered randomly; similarly, low priority tests are ordered randomly  Test suite = TSH followed by TSL (TSH: high priority tests, TSL: low priority tests)
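
A minimal sketch of this scheme (names are illustrative; marked_hits maps each test to the set of marked transitions it executes):

```python
import random

def selective_prioritization(tests, marked_hits, rng=None):
    """High priority: tests executing at least one marked transition.
    Each priority group is ordered randomly, TS_H before TS_L."""
    rng = rng or random.Random(0)
    high = [t for t in tests if marked_hits.get(t)]
    low = [t for t in tests if not marked_hits.get(t)]
    rng.shuffle(high)
    rng.shuffle(low)
    return high + low

tests = ["t1", "t2", "t3", "t4"]
hits = {"t1": {"T2"}, "t3": {"T1", "T4"}}  # t2, t4 hit no marked transition
print(selective_prioritization(tests, hits))  # e.g. ['t3', 't1', 't4', 't2']
```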

AMOST Model-based Test Prioritization  Several model-based test prioritization methods: –Selective test prioritization –Model-based prioritization based on: the # of executed marked transitions / the list of executed marked transitions (Heuristic #2) –Model dependence-based test prioritization: sequence of executed transitions –…

AMOST  Allows each marked transition to have the same opportunity to be executed during software retesting  A higher priority is assigned to a test that executes a marked transition that has been executed the least number of times at the given point of system retesting by keeping a count of transition executions  This heuristic tries to balance the number of executions of marked transitions by keeping counters for each marked transition Heuristics #2

AMOST Heuristic #2
Executed marked transitions:
t1: T1, T2, T3
t2: T3, T4, T5
t3: T3, T4
t4: T5
t5: T1
t6: T1, T2
t7: T2, T4
t8: T2, T3, T4
t9: (none)
t10: (none)
count(T1)=0 count(T2)=0 count(T3)=0 count(T4)=0 count(T5)=0


AMOST Heuristic #2
Executed marked transitions:
t1: T1, T2, T3
t2: T3, T4, T5
t3: T3, T4
t4: T5
t5: T1
t6: T1, T2
t7: T2, T4
t8: T2, T3, T4
t9: (none)
t10: (none)
count(T1)=0 count(T2)=1 count(T3)=1 count(T4)=1 count(T5)=0
S: t8

AMOST Heuristic #2
Executed marked transitions:
t1: T1, T2, T3
t2: T3, T4, T5
t3: T3, T4
t4: T5
t5: T1
t6: T1, T2
t7: T2, T4
t9: (none)
t10: (none)
count(T1)=0 count(T2)=1 count(T3)=1 count(T4)=1 count(T5)=0
S: t8


AMOST Heuristic #2
Executed marked transitions:
t1: T1, T2, T3
t2: T3, T4, T5
t3: T3, T4
t4: T5
t5: T1
t6: T1, T2
t7: T2, T4
t9: (none)
t10: (none)
count(T1)=0 count(T2)=1 count(T3)=2 count(T4)=2 count(T5)=1
S: t8, t2
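
In code, the count-balancing loop might look as follows (a sketch, not the authors' implementation; ties are broken randomly, so S: t8, t2, … above is one possible outcome):

```python
import random

def heuristic2(marked_hits, rng=None):
    """Pick, at each step, a test whose executed marked transitions include
    a transition with the current minimum execution count."""
    rng = rng or random.Random(0)
    counts = {T: 0 for hits in marked_hits.values() for T in hits}
    remaining = dict(marked_hits)
    order = []
    while remaining:
        def score(t):  # tests hitting no marked transition score worst
            return min((counts[T] for T in remaining[t]), default=float("inf"))
        best_score = min(score(t) for t in remaining)
        best = rng.choice([t for t in remaining if score(t) == best_score])
        order.append(best)
        for T in remaining.pop(best):
            counts[T] += 1  # balance executions of marked transitions
    return order

marked_hits = {  # executed marked transitions per test, from the slides
    "t1": {"T1", "T2", "T3"}, "t2": {"T3", "T4", "T5"}, "t3": {"T3", "T4"},
    "t4": {"T5"}, "t5": {"T1"}, "t6": {"T1", "T2"}, "t7": {"T2", "T4"},
    "t8": {"T2", "T3", "T4"}, "t9": set(), "t10": set(),
}
print(heuristic2(marked_hits))  # t9, t10 always end up last
```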

AMOST Outline  Introduction  Model-based testing  Test prioritization  Code-based test prioritization  Model-based test prioritization  Measuring the effectiveness of test prioritization  Experimental study  Conclusions

AMOST Measuring the Effectiveness of Test Prioritization  Test prioritization methods may generate many different solutions (prioritized test sequences) for a given test suite. One factor that may influence the resulting prioritized test sequence is, for example, the order in which tests are processed during prioritization

AMOST Measuring the Effectiveness of Test Prioritization  In order to compare different test prioritization methods, several measures have been introduced. The rate of fault detection is a measure of how rapidly a prioritized test sequence detects faults: it expresses the percentage of faults detected as a function of the fraction of the test suite executed

AMOST Measuring the Effectiveness of Test Prioritization  In our study we concentrate on a single fault d. In order to compare different prioritization methods, we use the concept of the most likely position of the first failed test that detects fault d. The most likely position is the average position of the first failed test that detects fault d over all possible prioritized test sequences that a test prioritization method may generate (for a given system under test and a given test suite)

AMOST Measuring Effectiveness of Early Fault Detection
The most likely (average) position of the first failed test that detects fault d:

    MLP(d) = (1/M) * Σ_i  i * R(i,d)

R(i,d): number of prioritized test sequences for which the first failed test is in position i
M: number of all possible prioritized test sequences
d: the fault

AMOST Measuring Effectiveness of Early Fault Detection
The most likely relative position in test suite TS of the first failed test that detects fault d:

    RP(d) = MLP(d) / |TS|,   with 0 < RP(d) ≤ 1

For example, MLP(d) = 2.5 in a suite of 10 tests gives RP(d) = 0.25.

AMOST  For some heuristics (e.g., random) an analytical approach to compute precisely MLP can be used  However, for many test prioritization methods derivation of a precise formula for RP(d), the most likely relative position of the first failed test that detects fault d, may be very difficult.  Therefore, we have implemented a randomized approach of estimation of RP(d) for all five heuristic methods Measuring Effectiveness of Early Fault Detection

AMOST  This estimation randomly generates prioritized test sequences according to a given test prioritization heuristic  For each generated sequence, the position of the first failed test is determined in the test sequence  After a large number of test sequences is generated, the estimated most likely position is computed Measuring Effectiveness of Early Fault Detection

AMOST Outline  Introduction  Model-based testing  Test prioritization  Code-based test prioritization  Model-based test prioritization  Measuring the effectiveness of test prioritization  Experimental study  Conclusions

AMOST Experimental Study  The goal of the experimental study is to compare the effectiveness of early fault detection of:  Code-based prioritization (Heuristic #1)  Model-based test prioritization (Heuristic #2)  The most likely relative position RP(d) of the first failed test that detects fault d is used as the measure of the effectiveness of early fault detection

AMOST Experimental Study [table of subject models and implementations]

AMOST Experimental Study  We introduced incorrect modifications into the implementations  For each modification we identified and marked the corresponding transitions  For each implementation we created 7-22 incorrect versions  Each incorrect version had 1-10 failed tests

AMOST RP Boxplots for the Experimental Study [boxplots: ATM, Cruise Control] R: Random prioritization; H1: Heuristic #1 (code-based); H2: Heuristic #2 (model-based)

AMOST RP Boxplots for the Experimental Study [boxplots: Fuel Pump, ISDN] R: Random prioritization; H1: Heuristic #1 (code-based); H2: Heuristic #2 (model-based)

AMOST RP Boxplots for the Experimental Study [boxplots: TCP, Print-Tokens] R: Random prioritization; H1: Heuristic #1 (code-based); H2: Heuristic #2 (model-based)

AMOST RP Boxplots for the Experimental Study [boxplot: Vending Machine] R: Random prioritization; H1: Heuristic #1 (code-based); H2: Heuristic #2 (model-based)

AMOST RP Boxplots for the Experimental Study [boxplot: cumulative data for all models] R: Random prioritization; H1: Heuristic #1 (code-based); H2: Heuristic #2 (model-based)

AMOST Conclusions  Model-based test prioritization is less expensive than code-based prioritization –Execution of the model is inexpensive as compared to the execution of the system  Cost of model development

AMOST Conclusions  Model based prioritization is sensitive to correct markings of transitions when a model is not modified –Correct identification of transitions (marked transitions) related to source code modifications is important  This is not an issue, when models are modified

AMOST Conclusions  The small experimental study suggests that model- based prioritization may be as effective in early fault detection as code-based prioritization, if not better  Code-based test prioritization uses information related to the original system

AMOST Future Work  Automated mapping of source code changes to a model  Experimental study on larger models and systems with multiple faults  Experimental study to compare more code-based methods with model-based methods

AMOST Questions?