
1 ECE 453 – CS 447 – SE 465: Software Testing & Quality Assurance. Instructor: Kostas Kontogiannis

2 Overview
Functional Testing
Boundary Value Testing (BVT)
Equivalence Class Testing
Decision Table Based Testing
Retrospective on Functional Testing

3 General We have seen three main types of functional testing: BVT, ECT, and DTT. The common thread among these techniques is that they all view a program as a mathematical function that maps its inputs to its outputs. In this lecture we look at questions related to testing effort, testing efficiency, and testing effectiveness.
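
To make the mapping from inputs to outputs concrete, here is a minimal sketch (in Python, which the slides do not prescribe) of the triangle program that the following test case tables exercise. The function name, the [1, 200] side range, and the exact output strings are assumptions inferred from the tables below, not the course's reference implementation.

def triangle_type(a: int, b: int, c: int) -> str:
    """Classify a triangle from three integer sides, each assumed to be in [1, 200]."""
    if not all(1 <= side <= 200 for side in (a, b, c)):
        return "Value out of range"
    # Triangle inequality: each side must be strictly less than the sum of the other two.
    if a >= b + c or b >= a + c or c >= a + b:
        return "Not a Triangle"
    if a == b == c:
        return "Equilateral"
    if a == b or b == c or a == c:
        return "Isosceles"
    return "Scalene"

print(triangle_type(100, 100, 1))    # Isosceles
print(triangle_type(5, 5, 5))        # Equilateral
print(triangle_type(100, 100, 200))  # Not a Triangle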

4 Boundary Value Functional Test Cases

Case   a    b    c    Expected Output
1      100  100  1    Isosceles
2      100  100  2    Isosceles
3      100  100  100  Equilateral
4      100  100  199  Isosceles
5      100  100  200  Not a Triangle
6      100  1    100  Isosceles
7      100  2    100  Isosceles
8      100  100  100  Equilateral
9      100  199  100  Isosceles
10     100  200  100  Not a Triangle
11     1    100  100  Isosceles
12     2    100  100  Isosceles
13     100  100  100  Equilateral
14     199  100  100  Isosceles
15     200  100  100  Not a Triangle
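
These boundary value cases can be generated mechanically: hold all variables but one at a nominal value and step the remaining variable through min, min+, nominal, max-, and max. A small sketch of that procedure, assuming the [1, 200] range with nominal value 100; the helper names are illustrative.

def boundary_values(lo: int, hi: int) -> list:
    """min, min+, nominal, max-, max for the range [lo, hi]."""
    nominal = (lo + hi) // 2
    return [lo, lo + 1, nominal, hi - 1, hi]

def bvt_cases(n_vars: int = 3, lo: int = 1, hi: int = 200) -> list:
    """Vary one variable at a time over its boundary values,
    holding the other variables at the nominal value."""
    nominal = (lo + hi) // 2
    cases = []
    for i in range(n_vars):
        for value in boundary_values(lo, hi):
            case = [nominal] * n_vars
            case[i] = value
            cases.append(tuple(case))
    return cases

print(len(bvt_cases()))  # 15, matching the table (the all-nominal case appears once per variable)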

5 Equivalence Class Functional Test Cases

[Table: equivalence class test cases for the triangle program, with columns Case, a, b, c, and Output; the eleven cases cover the expected outputs Equilateral, Isosceles, Scalene, and Not a Triangle.]
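
Equivalence class testing picks one representative from each class rather than stepping through boundaries. A sketch using the four output classes of the triangle program; the representative triples are illustrative choices, not necessarily the values in the original table, and triangle_type() refers to the sketch given earlier.

# One illustrative representative per output equivalence class.
representatives = {
    "Equilateral":    (5, 5, 5),
    "Isosceles":      (2, 2, 3),
    "Scalene":        (3, 4, 5),
    "Not a Triangle": (4, 1, 2),
}

for expected, (a, b, c) in representatives.items():
    actual = triangle_type(a, b, c)  # triangle_type() from the earlier sketch
    print(f"({a}, {b}, {c}) -> {actual} (expected {expected})")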

6 Decision Table Functional Test Cases

Case ID  a  b  c  Expected Output
DT1      4  1  2  Not a Triangle
DT2      1  4  2  Not a Triangle
DT3      1  2  4  Not a Triangle
DT4      5  5  5  Equilateral
DT5      ?  ?  ?  Impossible
DT6      ?  ?  ?  Impossible
DT7      2  2  3  Isosceles
DT8      ?  ?  ?  Impossible
DT9      2  3  2  Isosceles
DT10     3  2  2  Isosceles
DT11     3  4  5  Scalene
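
The decision table behind these cases combines the three triangle inequalities with the three pairwise equalities of the sides. Below is a condensed sketch of how a single action is selected for one input; the full table enumerates every rule explicitly, including the ones marked Impossible (for example, a = b and b = c but a != c cannot occur for any input). The function name is illustrative.

def triangle_decision(a: int, b: int, c: int) -> str:
    """Select the action of the triangle decision table for one input."""
    # Conditions: the three triangle inequalities ...
    c1, c2, c3 = a < b + c, b < a + c, c < a + b
    # ... and the three pairwise equalities of the sides.
    c4, c5, c6 = a == b, b == c, a == c
    if not (c1 and c2 and c3):
        return "Not a Triangle"
    if c4 and c5 and c6:
        return "Equilateral"
    if c4 or c5 or c6:
        return "Isosceles"
    return "Scalene"

print(triangle_decision(4, 1, 2))  # Not a Triangle (DT1)
print(triangle_decision(5, 5, 5))  # Equilateral    (DT4)
print(triangle_decision(2, 2, 3))  # Isosceles      (DT7)
print(triangle_decision(3, 4, 5))  # Scalene        (DT11)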

7 Testing Effort (1)
[Chart: number of test cases versus sophistication of the technique. Moving from boundary value to equivalence class to decision table testing (low to high sophistication), the number of test cases decreases.]

8 Testing Effort (2)
[Chart: effort to identify test cases versus sophistication of the technique. Moving from boundary value to equivalence class to decision table testing (low to high sophistication), the effort to identify the test cases increases.]

9 Testing Effort (3) The domain-based (boundary value) techniques have no recognition of data or logical dependencies; they are mechanical in generating test cases and are therefore easy to automate. The equivalence class techniques take data dependencies and the program logic into account; however, more thought and care are required to define the equivalence relation, partition the domain, and identify the test cases. The decision table technique is the most sophisticated, because it requires that we consider both data and logical dependencies. These techniques result in a tradeoff between test identification effort and test execution effort.

10 Test Case Trend Line

11 Testing Efficiency (1) The data above reveal the fundamental limitation of functional testing: the twin possibilities of gaps of untested functionality and redundant tests. For example: The decision table technique for the NextDate program generated 22 test cases (fairly complete). The worst-case boundary value analysis generated 125 test cases, which are fairly redundant (they check January 1 for five different years, include only a few February cases but none for February 28 or February 29, and do no real testing of leap years). The strong equivalence class testing generated 36 test cases, 11 of which are impossible. The bottom line is that there are gaps and redundancy in functional test cases, and these are reduced by using more sophisticated techniques (e.g., decision tables).
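
The 125 worst-case boundary value cases quoted above come from taking the cross product of the five boundary values of each of the three NextDate inputs (5^3 = 125). A small sketch; the concrete boundary values chosen for month, day, and year are illustrative assumptions, not the course's exact ranges.

from itertools import product

# Illustrative boundary values (min, min+, nominal, max-, max) per input.
month = [1, 2, 6, 11, 12]
day   = [1, 2, 15, 30, 31]
year  = [1900, 1901, 1960, 2019, 2020]  # assumed year range, for illustration only

# Worst-case boundary value testing: every combination of boundary values.
worst_case = list(product(month, day, year))
print(len(worst_case))  # 5 ** 3 = 125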

12 Testing Efficiency (2) The question is how we can quantify what we mean by the term testing efficiency. The intuitive notion of an efficient testing technique is that it produces a set of test cases that are “just right”, that is, with no gaps and no redundancy. We can even develop ratios of useful test cases (i.e., non-redundant and leaving no gaps) to the total number of test cases generated by method A versus method B. One way to identify redundancy is to annotate test cases with a brief purpose statement; test cases with the same purpose provide a rough measure of redundancy. Detecting gaps is harder and can be done only by comparing two different methods, even though there is no guarantee of complete gap detection. Overall, the structural methods (which we will see later in the course) support interesting and useful metrics, and these can provide a better quantification of testing efficiency.
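
One way to make the purpose-statement idea concrete is to group test cases by their stated purpose and count the duplicates; the test cases and purpose strings below are purely illustrative.

from collections import Counter

test_cases = [
    {"id": 1, "inputs": (100, 100, 1),   "purpose": "c at its minimum"},
    {"id": 2, "inputs": (100, 100, 2),   "purpose": "c just above its minimum"},
    {"id": 3, "inputs": (100, 100, 100), "purpose": "all sides nominal (equilateral)"},
    {"id": 8, "inputs": (100, 100, 100), "purpose": "all sides nominal (equilateral)"},
]

# Test cases sharing a purpose are treated as candidate redundancy.
purpose_counts = Counter(tc["purpose"] for tc in test_cases)
redundant = sum(count - 1 for count in purpose_counts.values())
useful_ratio = (len(test_cases) - redundant) / len(test_cases)
print(f"{redundant} redundant case(s); useful-to-total ratio = {useful_ratio:.2f}")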

13 Testing Effectiveness
Testing effectiveness is, essentially, how effective a method or a set of test cases is at finding the faults present in a program. However, this is problematic for two reasons. First, it presumes we know all the faults in a program. Second, proving that a program is free of faults is an impossible task (it is equivalent to solving the halting problem). The best we can do is to work backward from fault types: given a fault type, we can choose testing methods that are likely to reveal faults of that type. Extensions include: using knowledge of the kinds of faults most likely to occur, and tracking the kinds and frequencies of faults in the software applications we develop. Structural techniques (which we will see later in the course) can be assessed more easily with respect to their effectiveness, by considering how well they cover a specific criterion.

14 Guidelines The kinds of faults expected may give some pointers as to which testing method to use. If we do not know the kinds of faults that are likely to occur in the program, then the attributes most helpful in choosing functional testing methods are:
Whether the variables represent physical or logical quantities
Whether or not there are dependencies among the variables
Whether single or multiple faults are assumed
Whether exception handling is prominent

15 Guidelines for Functional Testing Technique Selection (1)
The following selection guidelines can be considered (a code sketch of this selection logic follows the decision table on the next slide):
If the variables refer to physical quantities and/or are independent, domain (boundary value) testing and equivalence class testing can be considered.
If the variables are dependent, decision table testing can be considered.
If the single-fault assumption is plausible, boundary value analysis and robustness testing can be considered.
If the multiple-fault assumption is plausible, worst-case testing, robust worst-case testing, and decision table testing can be considered.
If the program contains significant exception handling, robustness testing and decision table testing can be considered.
If the variables refer to logical quantities, equivalence class testing and decision table testing can be considered.

16 Guidelines for Functional Testing Technique Selection (2)
[Decision table for technique selection. Conditions: C1: are the variables physical (P) or logical (L)? C2: independent variables? (Y/N) C3: single-fault assumption? (Y/N/-) C4: exception handling? (Y/N/-). Actions: A1: boundary value analysis, A2: robustness testing, A3: worst-case testing, A4: robust worst-case testing, A5: traditional equivalence testing, A6: weak equivalence testing, A7: strong equivalence testing, A8: decision table testing; an X marks each technique that applies under a given combination of conditions.]
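
The guidelines on slides 15 and 16 can be read as a small decision procedure. Below is a rough sketch that maps the four attributes from slide 14 to candidate techniques; the attribute names, the boolean simplification of the single/multiple-fault assumption, and the returned strings are all assumptions for illustration, not the course's definitive selection rules.

def candidate_techniques(physical: bool, independent: bool,
                         single_fault: bool, exception_handling: bool) -> list:
    """Map the four attributes of slide 14 to candidate functional testing techniques."""
    techniques = set()
    # Physical and/or independent variables: domain (boundary value) and equivalence testing.
    if physical or independent:
        techniques |= {"boundary value analysis", "equivalence class testing"}
    # Logical variables: equivalence class and decision table testing.
    if not physical:
        techniques |= {"equivalence class testing", "decision table testing"}
    # Dependent variables: decision table testing.
    if not independent:
        techniques.add("decision table testing")
    # Single-fault versus multiple-fault assumption.
    if single_fault:
        techniques |= {"boundary value analysis", "robustness testing"}
    else:
        techniques |= {"worst case testing", "robust worst case testing",
                       "decision table testing"}
    # Significant exception handling: robustness and decision table testing.
    if exception_handling:
        techniques |= {"robustness testing", "decision table testing"}
    return sorted(techniques)

print(candidate_techniques(physical=True, independent=True,
                           single_fault=True, exception_handling=False))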

