Chapter 5 Retrospective on Functional Testing Software Testing

1 Chapter 5 Retrospective on Functional Testing
322 235 Software Testing (การทดสอบซอฟต์แวร์)
By Wararat Songpan (Rungworawut), Ph.D.
Department of Computer Science, Faculty of Science, Khon Kaen University

2 Functional Testing (Black Box)
We have seen three main types of functional testing: BVT, EC, and DT.
Boundary Value Testing (BVT):
- Boundary Value Analysis (BVA): 5 extreme values per variable, single-fault assumption
- Robustness Testing (RT): 7 extreme values per variable, single-fault assumption
- Worst-case Testing (WT): 5 extreme values, multiple-fault assumption
- Robust Worst-case Testing (RWT): 7 extreme values, multiple-fault assumption
Equivalence Class Testing (EC):
- Weak Normal (WN): valid ECs, each class covered at least once, single-fault assumption
- Strong Normal (SN): valid ECs, every combination, multiple-fault assumption
- Weak Robust (WR): valid and invalid ECs, single-fault assumption
- Strong Robust (SR): valid and invalid ECs, multiple-fault assumption
Decision Table-based Testing (DT):
- Identifies test cases that do not make sense (impossible test cases)
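A minimal sketch of how the BVT variants above differ in the values and combinations they generate, assuming integer input ranges (the function names are illustrative, not from the slides):

```python
from itertools import product

def bva_values(lo, hi):
    """The 5 BVA values for one variable: min, min+, nominal, max-, max."""
    return [lo, lo + 1, (lo + hi) // 2, hi - 1, hi]

def robustness_values(lo, hi):
    """The 7 robustness values: BVA values plus the invalid min- and max+."""
    return [lo - 1] + bva_values(lo, hi) + [hi + 1]

def single_fault_cases(ranges, values=bva_values):
    """Single-fault assumption: vary one variable at a time, others nominal."""
    nominals = [(lo + hi) // 2 for lo, hi in ranges]
    cases = []
    for i, (lo, hi) in enumerate(ranges):
        for v in values(lo, hi):
            case = list(nominals)
            case[i] = v          # only one variable leaves its nominal value
            cases.append(tuple(case))
    return cases

def worst_case(ranges, values=bva_values):
    """Multiple-fault assumption: Cartesian product of all boundary values."""
    return list(product(*(values(lo, hi) for lo, hi in ranges)))
```

For three variables in [1, 200], `single_fault_cases` yields 15 cases (counting the repeated all-nominal case, as on slide 8), `worst_case` yields 5^3 = 125, and `worst_case` with `robustness_values` yields 7^3 = 343.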

3 Test Effort (1)
Consider questions related to testing effort, testing efficiency, and testing effectiveness across the three techniques.
[Chart: Number of Test Cases (high to low) versus sophistication, comparing boundary value, equivalence class, and decision table testing]

4 Effort to Identify Test Cases
[Chart: Effort to Identify Test Cases (low to high) versus sophistication, comparing boundary value, equivalence class, and decision table testing]

5 Test Identification Effort vs. Test Execution Effort (BVT)
BVA: Test case generation is mechanical, and therefore easy to automate.

6 Test Identification Effort vs. Test Execution Effort (EC)
EC: The equivalence class techniques take data dependencies and the program logic into account; however, more thought and care are required to define the equivalence relation, partition the domain, and identify the test cases.

7 Test Identification Effort vs. Test Execution Effort (DT)
DT: The decision table technique is the most sophisticated, because it requires that we consider both data and logical dependencies.

8 Functional Test Cases: BVA (15 TCs)
Test case ID    a    b    c    Expected Result
 1            100  100    1    Isosceles
 2            100  100    2    Isosceles
 3            100  100  100    Equilateral
 4            100  100  199    Isosceles
 5            100  100  200    Not a Triangle
 6            100    1  100    Isosceles
 7            100    2  100    Isosceles
 8            100  100  100    Equilateral
 9            100  199  100    Isosceles
10            100  200  100    Not a Triangle
11              1  100  100    Isosceles
12              2  100  100    Isosceles
13            100  100  100    Equilateral
14            199  100  100    Isosceles
15            200  100  100    Not a Triangle
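The BVA cases above exercise a triangle classifier over the domain [1, 200]. A minimal sketch of such a classifier (the function name and exact structure are assumptions, not the course's reference implementation):

```python
def classify_triangle(a, b, c):
    """Classify sides a, b, c, with a range check for the [1, 200] domain."""
    if not all(1 <= s <= 200 for s in (a, b, c)):
        return "Out of range"
    # Triangle inequality: each side must be less than the sum of the others.
    if a + b <= c or a + c <= b or b + c <= a:
        return "Not a Triangle"
    if a == b == c:
        return "Equilateral"
    if a == b or b == c or a == c:
        return "Isosceles"
    return "Scalene"
```

For example, test case 5 above (100, 100, 200) fails the triangle inequality because 100 + 100 is not greater than 200, so the expected result is "Not a Triangle".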

9 Functional Test Cases: WN EC (11 TCs)
[Table: 11 weak normal equivalence class test cases for a, b, c with expected results Equilateral, Isosceles, Scalene, and Not a Triangle; most cell values were lost in the transcript]

10 Functional Test Cases: DT (8 TCs)
Test case ID    a    b    c    Expected Result
DT1             4    1    2    Not a Triangle
DT2             1    4    2    Not a Triangle
DT3             1    2    4    Not a Triangle
DT4             5    5    5    Equilateral
DT5             ?    ?    ?    Impossible
DT6             ?    ?    ?    Impossible
DT7             2    2    3    Isosceles
DT8             ?    ?    ?    Impossible
DT9             2    3    2    Isosceles
DT10            3    2    2    Isosceles
DT11            3    4    5    Scalene
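A decision table can itself be encoded as data and executed: each rule is a tuple of condition entries (True, False, or None for "don't care") paired with an action. A sketch using the triangle decision table above, with conditions ordered as in the standard formulation (a < b+c, b < a+c, c < a+b, a = b, a = c, b = c); the encoding is illustrative:

```python
# Rules DT1-DT11: condition entries -> action. None means "don't care".
RULES = [
    ((False, None, None, None, None, None), "Not a Triangle"),   # DT1
    ((True, False, None, None, None, None), "Not a Triangle"),   # DT2
    ((True, True, False, None, None, None), "Not a Triangle"),   # DT3
    ((True, True, True, True, True, True), "Equilateral"),       # DT4
    ((True, True, True, True, True, False), "Impossible"),       # DT5
    ((True, True, True, True, False, True), "Impossible"),       # DT6
    ((True, True, True, True, False, False), "Isosceles"),       # DT7
    ((True, True, True, False, True, True), "Impossible"),       # DT8
    ((True, True, True, False, True, False), "Isosceles"),       # DT9
    ((True, True, True, False, False, True), "Isosceles"),       # DT10
    ((True, True, True, False, False, False), "Scalene"),        # DT11
]

def conditions(a, b, c):
    """Evaluate the six conditions of the triangle decision table."""
    return (a < b + c, b < a + c, c < a + b, a == b, a == c, b == c)

def decide(a, b, c):
    """Return the action of the first rule whose entries match."""
    actual = conditions(a, b, c)
    for entry, action in RULES:
        if all(e is None or e == x for e, x in zip(entry, actual)):
            return action
    return "No rule matched"
```

Running `decide` on the non-impossible rows reproduces the expected results; the impossible rules (e.g. a = b and a = c but b != c) can never match any concrete input, which is exactly why they generate no test cases.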

11 The Number of Test Cases
[Chart/table not captured in the transcript]

12 The Number of Test Cases
[Chart/table not captured in the transcript]

13 The Number of Test Cases
[Chart/table not captured in the transcript]

14 Test Cases Trend Line: All Programs
[Chart: test cases per method, across all problems; not captured in the transcript]

15 Testing Efficiency (1)
The data above reveal the fundamental limitation of functional testing, the twin possibilities of:
- gaps of untested functionality
- redundant tests
For example:
- The decision table technique for the NextDate program generated 22 test cases (fairly complete).
- Worst-case boundary analysis generated 125 test cases. These are fairly redundant (January 1 is checked for five different years), yet leave gaps (only a few February cases, none for February 28 or February 29, and no serious testing of leap years).
- Strong equivalence class testing generated 36 test cases, 11 of which are impossible.
There are gaps and redundancy in functional test cases, and these are reduced by using more sophisticated techniques (e.g., decision tables).

16 Testing Efficiency (2)
The question is how we can quantify what we mean by the term testing efficiency. The intuitive notion of an efficient testing technique is that it produces a set of test cases that is "just right": no gaps and no redundancy. We can even develop ratios of useful test cases (i.e., not redundant and with no gaps) to the total number of test cases generated by method A and method B. One way to identify redundancy is to annotate test cases with a brief purpose statement; test cases with the same purpose provide a rough measure of redundancy. Detecting gaps is harder, and can be done only by comparing two different methods, and even then there is no guarantee that all gaps are detected. Overall, the structural methods (which we will see later in the course) support interesting and useful metrics, and these can provide a better quantification of testing efficiency.
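The purpose-annotation idea can be sketched directly: pair each test case with its purpose statement and report purposes shared by more than one test (the test IDs and purposes below are made-up examples):

```python
from collections import Counter

def redundancy_report(test_cases):
    """test_cases: list of (test_id, purpose) pairs.
    Purposes shared by more than one test hint at redundancy."""
    counts = Counter(purpose for _, purpose in test_cases)
    return {purpose: n for purpose, n in counts.items() if n > 1}

cases = [
    ("TC1", "upper boundary of day"),
    ("TC2", "upper boundary of day"),   # same purpose: candidate redundancy
    ("TC3", "leap-year February 29"),
]
```

Here `redundancy_report(cases)` flags "upper boundary of day" as covered twice; as the slide notes, this gives a sense of redundancy but says nothing about gaps.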

17 Guidelines
The kinds of faults likely to occur may point to which testing method to use. If we do not know the kinds of faults that are likely to occur in the program, then the attributes most helpful in choosing functional testing methods are:
- Whether the variables represent physical or logical quantities
- Whether or not there are dependencies among the variables
- Whether single or multiple faults are assumed
- Whether exception handling is prominent

18 Guidelines for Functional Testing Technique Selection (1)
The following selection guidelines can be considered:
- If the variables refer to physical quantities and/or are independent, domain testing and equivalence class testing can be considered.
- If the variables are dependent, decision table testing can be considered.
- If the single-fault assumption is plausible, boundary value analysis and robustness testing can be considered.
- If the multiple-fault assumption is plausible, worst-case testing, robust worst-case testing, and decision table testing can be considered.
- If the program contains significant exception handling, robustness testing and decision table testing can be considered.
- If the variables refer to logical quantities, equivalence class testing and decision table testing can be considered.
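The guidelines above can be sketched as a small selection function. The boolean parameters and the returned technique names are illustrative encodings of the bullets, not an official mapping:

```python
def candidate_techniques(physical, independent, single_fault, exceptions):
    """Suggest functional testing techniques per the selection guidelines.
    physical: variables are physical (True) or logical (False) quantities.
    single_fault: True for single-fault, False for multiple-fault assumption."""
    t = set()
    if physical or independent:
        t.update({"boundary value analysis", "equivalence class testing"})
    if not independent:                       # dependent variables
        t.add("decision table testing")
    if single_fault:
        t.update({"boundary value analysis", "robustness testing"})
    else:                                     # multiple-fault assumption
        t.update({"worst-case testing", "robust worst-case testing",
                  "decision table testing"})
    if exceptions:                            # significant exception handling
        t.update({"robustness testing", "decision table testing"})
    if not physical:                          # logical quantities
        t.update({"equivalence class testing", "decision table testing"})
    return sorted(t)
```

For independent physical variables under the single-fault assumption, this suggests boundary value analysis, equivalence class testing, and robustness testing, and notably not decision tables, matching the guidelines.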

19 Guidelines for Functional Testing Technique Selection (2)
Conditions:
- C1: Variables are physical (P) or logical (L) quantities
- C2: Independent variables? (Y/N)
- C3: Single-fault assumption? (Y/N/-)
- C4: Exception handling prominent? (Y/N/-)
Actions:
- A1: Boundary value analysis
- A2: Robustness testing
- A3: Worst-case testing
- A4: Robust worst-case testing
- A5: Weak equivalence testing
- A6: Strong equivalence testing
- A7: Decision table
[The rule columns marking which actions (X) apply to each combination of conditions were lost in the transcript]

20 Test Summary Report
[Report example not captured in the transcript]

21 Test Summary Report
[Report example not captured in the transcript]

22 Test Summary Report
[Report example not captured in the transcript]

23 Bug Life Cycle
BTT = Bug Tracking Tool
[Life cycle diagram not captured in the transcript]
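The slide's diagram is not in the transcript, but a typical bug life cycle as enforced by a bug tracking tool (BTT) can be sketched as a transition table. The states and transitions below follow a common convention and are not necessarily the slide's exact diagram:

```python
# Allowed transitions in a typical bug life cycle (common convention;
# the slide's own diagram may differ).
TRANSITIONS = {
    "New":      {"Assigned", "Rejected", "Deferred"},
    "Assigned": {"Open"},
    "Open":     {"Fixed", "Rejected", "Deferred"},
    "Fixed":    {"Retest"},
    "Retest":   {"Verified", "Reopened"},
    "Reopened": {"Assigned"},
    "Verified": {"Closed"},
}

def advance(state, new_state):
    """Move a bug to new_state, enforcing the allowed transitions."""
    if new_state not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state
```

For example, a bug that fails retest goes "Retest" to "Reopened" and back through "Assigned", while jumping straight from "New" to "Closed" is rejected.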

24 Basics of Testing Effectiveness Metrics
• Test Coverage
• Productivity
• Defect Detection Effectiveness
• Defect Acceptance Ratio
• Estimation Accuracy

25 Test Coverage
Definition: Percentage of specified requirements that are covered in test execution
Calculation: (Number of Requirements Covered in Test Execution / Number of Requirements Specified) x 100
Unit: Percentage (%)

26 Productivity
Definition: The number of test cases developed or executed per person-hour of effort
Calculation: Number of Test Cases Developed (or Executed) / Effort for Test Case Development (or Execution)
Unit: Test cases per hour

27 Defect Detection Effectiveness
Definition: Percentage of the total number of defects reported for the application that are reported during the testing stage
Calculation: (Number of Defects Reported During Testing and Accepted / (Number of Defects Reported During Testing and Accepted + Number of Defects Reported After the Testing Stage and Accepted)) x 100
Unit: Percentage (%)

28 Defect Acceptance Ratio
Definition: Percentage of reported defects that are accepted as valid
Calculation: (Number of Defects Accepted as Valid / Number of Defects Reported) x 100
Unit: Percentage (%)

29 Estimation Accuracy
Definition: Ratio of estimated effort to the actual effort for testing
Calculation: (Estimated Effort / Actual Effort) x 100
Unit: Percentage (%)

