Test coverage Tor Stålhane

What is test coverage
Let c denote the unit type that is considered – e.g. requirements or statements. We then have

C_c = (number of units of type c tested) / (total number of units of type c)
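As an illustration (not part of the original slides), a minimal Python sketch of this definition; the requirement list and test results are hypothetical:

```python
def coverage(tested_units, all_units):
    """Generic coverage: C_c = (units of type c tested) / (total units of type c)."""
    all_set = set(all_units)
    return len(set(tested_units) & all_set) / len(all_set)

# Hypothetical example: 3 of 4 requirements are exercised by the test suite.
requirements = ["R1", "R2", "R3", "R4"]
tested = ["R1", "R2", "R4"]
print(f"C_req = {coverage(tested, requirements):.0%}")  # prints: C_req = 75%
```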

Coverage categories
Broadly speaking, there are two categories of test coverage:
– Program based coverage, concerned with coverage related to the software code.
– Specification based coverage, concerned with coverage related to the specification or requirements.

Test coverage
For software code, we have three basic types of test coverage:
– Statement coverage – percentage of statements tested.
– Branch coverage – percentage of branches tested.
– Basis path coverage – percentage of basis paths tested.

Finite applicability – 1
That a test criterion has finite applicability means that it can be satisfied by a finite test set. In the general case, the test criteria that we will discuss are not finitely applicable. The main reason for this is the possibility of “dead code” – e.g. infeasible branches.

Finite applicability – 2 We will make the test coverage criteria that we use finitely applicable by relating them to only feasible code. Thus, when we later speak of all branches or all code statements, we will tacitly interpret this as all “feasible branches” or all “feasible code”.

Statement coverage
This is the simplest coverage measure:

C_stat = percentage of statements tested

Path diagram
[Figure: control-flow graph with predicates P1, P2, P3 guarding statements S1, S2, S3, followed by S4]

Predicates P1 P2 P3 | Path
            0  0  0 | S4
            0  0  1 | S3, S4
            0  1  0 | S2, S4
            0  1  1 | S2, S3, S4
            1  0  0 | S1, S4
            1  0  1 | S1, S3, S4
            1  1  0 | S1, S2, S4
            1  1  1 | S1, S2, S3, S4
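The table can be reproduced mechanically: statement S_i is executed exactly when predicate P_i is true, and S4 always executes. A small Python sketch of this enumeration (assuming that structure, which is how the table reads):

```python
from itertools import product

# S1 runs iff P1 holds, S2 iff P2, S3 iff P3; S4 always runs.
for p1, p2, p3 in product([0, 1], repeat=3):
    path = [s for s, taken in (("S1", p1), ("S2", p2), ("S3", p3)) if taken]
    path.append("S4")
    print(p1, p2, p3, "->", ", ".join(path))
```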

Branch coverage
Branch coverage tells us how many of the possible branches have been tested.

C_branch = percentage of branches tested

Basis path coverage
The basis set of paths is the smallest set of paths that can be combined to create every other path through the code. The size of this set is equal to v(G) – McCabe’s cyclomatic number.

C_basis = percentage of basis paths tested
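A sketch of how v(G) can be computed with the standard formula v(G) = E − N + 2 (for one connected flow graph); the graph encoding below is an assumed rendering of the path diagram above, not taken from the slides:

```python
# Control-flow graph assumed from the path diagram: three predicates P1..P3,
# each guarding one statement S1..S3, followed by the common statement S4.
edges = [
    ("P1", "S1"), ("P1", "P2"), ("S1", "P2"),
    ("P2", "S2"), ("P2", "P3"), ("S2", "P3"),
    ("P3", "S3"), ("P3", "S4"), ("S3", "S4"),
]
nodes = {n for e in edges for n in e}

# McCabe: v(G) = E - N + 2 for a connected flow graph.
v = len(edges) - len(nodes) + 2
print(f"v(G) = {v}")  # 4 basis paths, versus 2**3 = 8 total paths in the table
```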

Use of test coverage
There are several ways to use the coverage values. We will look at two of them – coverage used:
– as a test acceptance criterion
– for estimation of one or more quality factors, e.g. reliability

Test acceptance criteria
At a high level this is a simple acceptance criterion:
– Run a test suite.
– Have we reached our acceptance criterion – e.g. 95% branch coverage?
  – Yes – stop testing
  – No – write more tests
If we have a tool that shows us what has not been tested, this will help us in selecting the new test cases. A sketch of this loop follows below.
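As an illustration only, the loop could be sketched as below; the run_suite, measure_coverage and add_tests hooks are hypothetical stand-ins for a real test runner, coverage tool and test-writing step:

```python
def test_until_accepted(run_suite, measure_coverage, add_tests, target=95):
    """Loop until the coverage acceptance criterion (e.g. 95% branch coverage) is met."""
    while True:
        run_suite()                # execute the current test suite
        c = measure_coverage()     # query the (hypothetical) coverage tool
        if c >= target:
            return c               # criterion met - stop testing
        add_tests()                # use the coverage report to target untested code

# Toy usage with fake hooks: coverage grows by 10 points per added batch of tests.
state = {"c": 70}
print(test_until_accepted(lambda: None,
                          lambda: state["c"],
                          lambda: state.update(c=state["c"] + 10)))  # prints: 100
```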

Avoid redundancy
If we use a test coverage measure as an acceptance criterion, we will only get credit for tests that exercise new parts of the code. In this way, a test coverage measure will
– directly help us to identify untested code
– indirectly help us to identify new test cases

Fault seeding – 1
The concept “fault seeding” is used as follows:
– Insert a set of faults into the code.
– Run the current test set.
One out of two things can happen:
– All seeded faults are discovered, causing observable errors.
– One or more seeded faults are not discovered.

Fault seeding – 2
The fact that one or more seeded faults are not found by the current test set tells us which parts of the code have not yet been tested – e.g. which component, code chunk or domain. This info will help us to define the new test cases.

Fault seeding – 3
Fault seeding has one problem – where and how to seed the faults. There are at least two solutions to this:
– Save and seed faults identified during earlier project activities.
– Draw faults to seed from an experience database containing typical faults and their position in the code.

Fault seeding and estimation – 1
[Figure: an input domain containing real faults and seeded faults (marked X); the test domain covers part of the input domain and thus finds only some faults of each kind]

Fault seeding and estimation – 2
We will use the following notation:
– N_0: number of faults in the code
– N: number of faults found using a specified test set
– S_0: number of seeded faults
– S: number of seeded faults found using a specified test set

Fault seeding and estimation – 3
[Figure repeated: real and seeded faults in the input domain, partially covered by the test domain]
Assuming the test set finds the same proportion of real and seeded faults:

N_0 / N = S_0 / S, and thus N_0 = N · S_0 / S

or, guarding against S = 0, N_0 = N · S_0 / max{S, 0.5}
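A direct transcription of this estimator into Python, with hypothetical counts:

```python
def estimate_total_faults(real_found, seeded_total, seeded_found):
    """Fault-seeding estimate N0 = N * S0 / max{S, 0.5}.

    The max guard keeps the estimate defined when no seeded faults are found.
    """
    return real_found * seeded_total / max(seeded_found, 0.5)

# Hypothetical numbers: 12 real faults found, 8 of 10 seeded faults found.
print(estimate_total_faults(12, 10, 8))  # N0 estimate = 15.0
```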

Capture – recapture
One way to get around the problem of fault seeding is to use whatever errors are found in a capture – recapture model. This model requires that we use two test groups:
– The first group finds M errors.
– The second group finds n errors.
– m defects are found by both groups.

m / n = M / N  =>  N = M·n / m

Capture – recapture
[Table: columns No, Customer 1, Customer 2, Common, N – per-case fault counts from the two groups and the resulting estimate N]
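The same estimator as a Python sketch, with hypothetical counts for the two groups:

```python
def capture_recapture(M, n, m):
    """Estimate the total number of faults N from two independent test groups.

    M: faults found by group 1, n: faults found by group 2,
    m: faults found by both.  From m/n = M/N we get N = M*n/m.
    """
    if m == 0:
        raise ValueError("no overlap - the estimate N = M*n/m is undefined")
    return M * n / m

# Hypothetical example: group 1 finds 20, group 2 finds 15, 10 in common.
print(capture_recapture(20, 15, 10))  # N estimate = 30.0
```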

Output coverage – 1
All the coverage types that we have looked at so far have been related to input data. It is also possible to define coverage based on output data. The idea is as follows:
– Identify all output specifications.
– Run the current test set.
One out of two things can happen:
– All types of output have been generated.
– One or more types of output have not been generated.

Output coverage – 2
The fact that one or more types of output have not been generated by the current test set tells us which parts of the code have not yet been tested – e.g. which component, code chunk or domain. This info will help us to define the new test cases.

Output coverage – 3
The main challenge with using this type of coverage measure is that output can be defined at several levels of detail, e.g.:
– an account summary
– an account summary for a special type of customer
– an account summary for a special event – e.g. overdraft

Specification based coverage – 1
Specification based test coverage is in most cases requirements based test coverage. We face the same type of problem here as we do for output coverage – the level of detail considered in the requirements. In many cases, we do not even have a detailed list of requirements. This is for instance the case for the user stories frequently used in agile development.

Specification based coverage – 2
The situation is easiest for systems where there exists a detailed specification, e.g. as a set of textual use cases.

Use case name: (Re-)Schedule train
Use case actor: Control central operator
1. User action: request to enter schedule info.
2. System action: show the scheduling screen.
3. User action: enter the schedule (train-ID, start and stop place and time, as well as timing for intermediate points).
4. System action: check that the schedule does not conflict with other existing schedules; display the entered schedule for confirmation.
5. User action: confirm the schedule.

Quality factor estimation
The value of the coverage achieved can be used to estimate important quality characteristics like:
– the number of remaining faults
– the extra test time needed to reach a certain number of remaining faults
– system reliability

Basic assumptions
In order to use a test coverage value to estimate the number of remaining faults, we need to assume that:
– All faults are counted only once.
– Each fault will only give rise to one error.
– All test cases have the same size.

Choice of models – errors
We will use the notation:
– N(n): number of errors reported after n executions
– N_0: initial number of faults
There exist more than a dozen models of the form N(n) = f(N_0, n, β), where β is the model’s rate parameter. It can be shown that, when N(n) → N_0, we have

N(n) = N_0 · (1 – exp(–β·n))

Choice of models – coverage (1)
We will use the notation:
– C(n): the coverage achieved after n tests
– C_0: final coverage. We will assume this to be 1 – no “dead” code.
Furthermore, we will assume that

C(n) = 1 / [1 + A·exp(–a·n)]

Choice of models – coverage (2)
[Figure: C(n) plotted against n – a logistic curve starting at 1/(1 + A) for n = 0 and growing toward 1]

Parameters
We need the following parameters:
For the N(n) model we need
– N_0: total number of defects
– β: the rate parameter, related to the mean number of tests needed to find a defect
For the C(n) model we need
– A: determines the first-test coverage, 1/(1 + A)
– a: coverage growth parameter
All four parameters can be estimated from observations using the log-likelihood estimator. A fitting sketch follows below.
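A sketch of the estimation step, using SciPy’s least-squares curve fitting as a simpler stand-in for the log-likelihood estimation named on the slide; the observation arrays are hypothetical:

```python
import numpy as np
from scipy.optimize import curve_fit

def N_model(n, N0, beta):
    """Cumulative errors found after n executions: N(n) = N0 * (1 - exp(-beta*n))."""
    return N0 * (1.0 - np.exp(-beta * n))

def C_model(n, A, a):
    """Coverage after n tests: C(n) = 1 / (1 + A * exp(-a*n))."""
    return 1.0 / (1.0 + A * np.exp(-a * n))

# Hypothetical observations from a test campaign.
n = np.array([10, 20, 40, 80, 160, 320])
errors = np.array([4, 7, 12, 17, 21, 23])
cov = np.array([0.30, 0.45, 0.62, 0.80, 0.92, 0.98])

(N0, beta), _ = curve_fit(N_model, n, errors, p0=[25, 0.01])
(A, a), _ = curve_fit(C_model, n, cov, p0=[3, 0.01])
print(f"N0 = {N0:.1f}, beta = {beta:.4f}, A = {A:.2f}, a = {a:.4f}")
```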

Final model
We can use the C(n) expression to get an expression for n as a function of C(n). By substituting this into the N(n) expression we get an estimate for the number of remaining faults as a function of the coverage:
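The closing formula did not survive in this transcript; from the two models above it can be reconstructed as follows (a derivation sketch under the stated assumptions, not the original slide):

```latex
% Solve the coverage model for n:
%   C = 1/(1 + A e^{-an})  =>  e^{-an} = (1 - C)/(A C)
%   =>  n(C) = -(1/a) \ln\!\frac{1 - C}{A C}
% Substitute into N(n) = N_0 (1 - e^{-\beta n}), using e^{-\beta n} = (e^{-an})^{\beta/a}:
\[
  N(C) = N_0\left(1 - \left(\frac{1 - C}{A\,C}\right)^{\beta/a}\right),
  \qquad
  \underbrace{N_0 - N(C)}_{\text{remaining faults}}
    = N_0\left(\frac{1 - C}{A\,C}\right)^{\beta/a}.
\]
```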