Chapter-(Testing principles)



Presentation on theme: "Chapter-(Testing principles)"— Presentation transcript:

1 Chapter-(Testing principles)
Chapter 16 (Pressman)

2 What is Software Testing
Software testing is the process of executing a program with the intent of finding errors. A good test case is one that has a high probability of finding an as-yet-undiscovered error. A successful test is one that uncovers an as-yet-undiscovered error. TCS2411 Software Engineering

3 The software testing process

4 Testing Activities
[Diagram] Subsystem code is unit tested against the Requirements Analysis Document; the tested subsystems are combined in an integration test, guided by the System Design Document, into integrated subsystems; a functional test against the Requirements Analysis Document yields a functioning system. All of these tests are performed by the developer.

5 Testing Activities (continued)
[Diagram] The functioning system undergoes a performance test against the global requirements (by the developer) and an acceptance test against the client's understanding of the requirements (by the client), yielding an accepted system; an installation test in the user environment yields a validated, usable system; the system in use is then tested (?) by the user.

6 Software Testing Terms
Software testing is defined as the execution of a program to find its faults. While more time is typically spent on testing than on any other phase of software development, there is considerable confusion about its purpose. Many software professionals, for example, believe that tests are run to show that the program works rather than to learn about its faults. Myers has provided some useful testing definitions:
Testing: the process of executing a program (or part of a program) with the intention of finding errors.
Verification: an attempt to find errors by executing a program in a test or simulated environment; it is now preferable to view verification as the process of proving the program's correctness. Verification answers the question: am I building the product right?
Validation: an attempt to find errors by executing a program in a real environment; the process of evaluating software at the end of development to ensure compliance with customer needs and requirements. Validation answers the question: am I building the right product?
Debugging: diagnosing the precise nature of a known error and then correcting it (debugging is a correction activity, not a testing activity).
Verification and validation are sometimes confused. They are, in fact, different activities. The difference between them is succinctly summarized by Boehm: 'Validation: Are we building the right product?' 'Verification: Are we building the product right?'

7 Software Testability Checklist (software testability is simply how easily a computer program can be tested): Operability (the better it works, the more efficiently it can be tested). Observability (what you see is what you test). Controllability (the better the software can be controlled, the more testing can be automated and optimized). Decomposability (by controlling the scope of testing, problems can be isolated and retested intelligently more quickly; the software system is built from independent modules that can be tested independently). Simplicity (the less there is to test, the more quickly we can test). Stability (the fewer the changes, the fewer the disruptions to testing). Understandability (the more information known, the smarter the testing).

8 Good Test Attributes A good test has a high probability of finding an error. A good test is not redundant. A good test should be "best of breed": of a group of similar tests, use the one with the highest likelihood of uncovering a whole class of errors. A good test should be neither too simple nor too complex.

9 Testing Principles
All tests should be traceable to customer requirements. Tests should be planned before testing begins. Testing should begin with individual components and move toward integrated clusters of components. Exhaustive testing is not possible. The most effective testing is conducted by an independent third party.

10 Who Should Test The Software?
The developer tests individual units. An independent test group (ITG) removes the conflict of interest and reports to the SQA team.

11 Test data and test cases
Test data Inputs which have been devised to test the system Test cases Inputs to test the system and the predicted outputs from these inputs if the system operates according to its specification

12 Classification of testing techniques
Classification based on the source of information used to derive test cases: black-box testing (functional/behavioral, specification-based) and white-box testing (structural, program-based, glass-box). Coverage-based: e.g., how many statements or requirements have been tested so far. Fault-based: e.g., how many seeded faults are found. Error-based: focus on error-prone points, e.g., off-by-one points. Black-box: you do not look inside, but rely only on the specification/functional description. White-box: you do look inside, at the structure of the actual program/specification. This classification is mostly used at the module level.

13 White-box Testing

14 White-box/program based Testing
Based on knowledge of the internal logic of an application's code. Based on coverage of code statements, branches, and paths. Guarantee that all independent paths have been exercised at least once. Exercise all logical decisions on their true and false sides. Execute all loops at their boundaries and within their operational bounds.

15 Basis Path Testing steps
White-box technique usually based on the program flow graph. The cyclomatic complexity of the program is computed from its flow graph using the formula V(G) = E - N + 2, or V(G) = P + 1, or the number of regions. Determine the basis set of linearly independent paths (the cardinality of this set is the program's cyclomatic complexity). Prepare test cases that will force the execution of each path in the basis set.

16 Cyclomatic Complexity/ How Is Cyclomatic Complexity Computed?
Cyclomatic complexity is a software metric that provides a quantitative measure of the logical complexity of a program. Defines the number of independent paths in the basis set Number of regions V(G) = E – N + 2 E : Number of flow graph edges N : Number of flow graph nodes V(G) = P + 1 P : Number of predicate nodes

17 Example of a Control Flow Graph (CFG)
One plausible reading of the (scrambled) code fragment:
s := 0; d := 0;
while (x < y) {
  x := x + 3; y := y + 2;
  if (x + y < 100) s := s + x + y;
  else d := d + x - y;
}
(The statements map to flow-graph nodes 1-8.)

18 Flow Graph Notation
[Flow-graph figure: statements 1-11 collapse into 9 nodes (statements 2,3 and 4,5 each merge into one node), connected by edges and enclosing four regions R1-R4; the figure labels an example node, edge, and region.]

19 How Is Cyclomatic Complexity Computed?
Number of regions: the flow graph has 4 regions. V(G) = E - N + 2, where E is the number of flow graph edges and N is the number of flow graph nodes: V(G) = 11 edges - 9 nodes + 2 = 4. V(G) = P + 1, where P is the number of predicate nodes: V(G) = 3 predicate nodes + 1 = 4.

20 Independent Program Paths
Basis set for the flow graph on the previous slide. Path 1: 1-11. Path 2: Path 3: Path 4: The number of paths in the basis set is determined by the cyclomatic complexity.

21 EX1: Deriving Test Cases for average procedure/PDL
i = 1; total.input = total.valid = 0; sum = 0;
do while value[i] <> -999 and total.input < 100
  increment total.input by 1;
  if value[i] >= minimum AND value[i] <= maximum then
    increment total.valid by 1;
    sum = sum + value[i]
  else skip
  end if
  increment i by 1
end do
if total.valid > 0 then average = sum / total.valid;
else average = -999;
end if
(Statement numbers 1-13 label the corresponding flow-graph nodes.)

22 EX1: Deriving Test Cases for average procedure/PDL
STEP 1: Draw the flow graph from the design/code as a foundation. [Flow graph with nodes 1-13.]

23 Ex1…. (cont) Step2: Determine cyclomatic complexity
V(G) = 6 regions. V(G) = 17 edges - 13 nodes + 2 = 6. V(G) = 5 predicate nodes + 1 = 6. Step 3: Determine a basis set of linearly independent paths. path 1: path 2: path 3: path 4: path 5: path 6: The ellipsis (...) following paths 4, 5, and 6 indicates that any path through the remainder of the control structure is acceptable. It is often worthwhile to identify the predicate nodes as an aid in the derivation of test cases. In this case, nodes 2, 3, 5, 6, and 10 are predicate nodes.

24 EX: Binary search flow graph

25 Independent paths 1, 2, 3, 8, 9 1, 2, 3, 4, 6, 7, 2 1, 2, 3, 4, 5, 7, 2 1, 2, 3, 4, 6, 7, 2, 8, 9 Test cases should be derived so that all of these paths are executed A dynamic program analyser may be used to check that paths have been executed

26 White-box Testing Example: Determining the Paths
FindMean(FILE ScoreFile)
{ float SumOfScores = 0.0;
  int NumberOfScores = 0;
  float Mean = 0.0;
  float Score;
  Read(ScoreFile, Score);
  while (!EOF(ScoreFile)) {
    if (Score > 0.0) {
      SumOfScores = SumOfScores + Score;
      NumberOfScores++;
    }
    Read(ScoreFile, Score);
  }
  /* Compute the mean and print the result */
  if (NumberOfScores > 0) {
    Mean = SumOfScores / NumberOfScores;
    printf("The mean score is %f\n", Mean);
  } else
    printf("No scores found in file\n");
}
(Flow-graph node numbers 1-9.)

27 Constructing the Logic Flow Diagram

28 Determine McCabe’s Complexity Measure
The easiest formula to evaluate comes from reading the code: number of predicates (3) + 1 = 4. Check your answer by determining the number of regions: number of closed regions (3) + the outside region (always 1) = 4. Determine a basis (a linearly independent set of paths); it will contain 4 paths. Technique: take the shortest path, then add a single predicate to obtain each next path. Basis: Path 1: 1, 2, 7, 9, Exit (could have the number 10) // at end of file after reading. Path 2: 1, 2, 3, 5, 6, 2, 7, 9, Exit // reads a value that is negative or zero, then reaches end of file. Path 3: 1, 2, 3, 4, 5, 6, 2, 7, 8, Exit // reads a single positive value, so computes the mean. Path 4: 1, 2, 7, 8, Exit // infeasible path.

29 Graph Matrix for white box testing

30 White-box Testing Focus: Thoroughness (Coverage). Every statement in the component is executed at least once. Four types of white-box testing Statement Testing Branch Testing Path Testing Loop Testing

31 White-box Testing (Continued)
Statement Testing: every statement should be tested at least once. Branch Testing (Conditional Testing): make sure that each possible outcome of a condition is tested at least once, e.g. if (i == TRUE) printf("YES\n"); else printf("NO\n"); Test cases: 1) i = TRUE; 2) i = FALSE. Path Testing: make sure all paths in the program are executed. This is infeasible in general (combinatorial explosion).

32 Types of White-Box Testing
There exist several popular white-box testing methodologies: statement coverage, branch coverage, path coverage/basis path testing, and loop coverage.

33 Statement Coverage
Statement coverage methodology: design test cases so that every statement in a program is executed at least once.

34 Example: Euclid's GCD Algorithm
int f1(int x, int y) {
1  while (x != y) {
2    if (x > y)
3      x = x - y;
4    else y = y - x;
5  }
6  return x;
}

35 Euclid's GCD computation algorithm
By choosing the test set {(x=3,y=3), (x=4,y=3), (x=3,y=4)}, all statements are executed at least once.

36 Branch Coverage
Test cases are designed such that the different branch conditions are given true and false values in turn.

37 Branch Coverage
Branch testing guarantees statement coverage: it is a stronger criterion than statement coverage-based testing.

38 Example
int f1(int x, int y) {
1  while (x != y) {
2    if (x > y)
3      x = x - y;
4    else y = y - x;
5  }
6  return x;
}

39 Example
Test cases for branch coverage can be: {(x=3,y=3), (x=3,y=2), (x=4,y=3), (x=3,y=4)}

40 Path Coverage / Basis Path Testing
Design test cases such that all linearly independent paths in the program are executed at least once.

41 Example
int f1(int x, int y) {
1  while (x != y) {
2    if (x > y)
3      x = x - y;
4    else y = y - x;
5  }
6  return x;
}

42 Example: Control Flow Graph
[Control flow graph with nodes 1-6.]

43 McCabe's Cyclomatic Metric
Given a control flow graph G, the cyclomatic complexity V(G) = E - N + 2, where N is the number of nodes in G and E is the number of edges in G.

44 Example: Control Flow Graph
[Control flow graph with nodes 1-6.]

45 Example Cyclomatic complexity = 7 edges - 6 nodes + 2 = 3.

46 Cyclomatic Complexity
Another way of computing cyclomatic complexity: inspect the control flow graph and determine the number of bounded areas in the graph; V(G) = total number of bounded areas + 1.

47 Example: Control Flow Graph
[Control flow graph with nodes 1-6.]

48 Example
From a visual examination of the CFG, the number of bounded areas is 2, so the cyclomatic complexity = 2 + 1 = 3.

49 Cyclomatic Complexity
Defines the number of independent paths in a program. Provides a lower bound on the number of test cases needed for path coverage.

50 Cyclomatic Complexity
Knowing the number of test cases required does not make it any easier to derive them; it only gives an indication of the minimum number of test cases required.

51 Path Testing
The tester proposes an initial set of test data using experience and judgement.

52 Path Testing
A dynamic program analyzer is used to indicate which parts of the program have been tested; the output of the dynamic analysis guides the tester in selecting additional test cases.

53 Derivation of Test Cases
Let us discuss the steps to derive path coverage-based test cases for a program.

54 Derivation of Test Cases
Draw the control flow graph. Determine V(G). Determine the set of linearly independent paths. Prepare test cases to force execution along each path.

55 Example
int f1(int x, int y) {
1  while (x != y) {
2    if (x > y)
3      x = x - y;
4    else y = y - x;
5  }
6  return x;
}

56 Example: Control Flow Diagram
[Control flow graph with nodes 1-6.]

57 Derivation of Test Cases
Number of independent paths: 3. Path 1,6: test case (x=1, y=1). Path 1,2,3,5,1,6: test case (x=2, y=1). Path 1,2,4,5,1,6: test case (x=1, y=2).

58 Loop Testing A white-box testing technique that focuses exclusively on the validity of loop constructs. Four different classes of loops exist: simple loops, nested loops, concatenated loops, and unstructured loops.

59 Loop Testing
[Diagrams of the four loop classes: simple loop, nested loops, concatenated loops, unstructured loop.]

60 Discussion on White Box Testing
Advantages: finds errors at the code level; typically based on a very systematic approach covering the complete internal module structure. Constraints: does not find missing or additional functionality; does not really check the interface; difficult for large and complex modules.

61 Black-box Testing

62 Specification-Based Testing
[Diagram: the specification determines the expected output; an input is applied to the program to obtain the observed output; the observed output is validated against the expected output.]

63 Black-Box/Functional/Behavioral/Specification-Based/Closed-Box Testing
Test cases are derived from the program specification. Functional testing of a component or a system. Examines behaviour through inputs and the corresponding outputs. Used during the later stages of testing, after white-box testing has been performed. The tester identifies a set of input conditions that will fully exercise all functional requirements for a program.

64 Black-box testing

65 Black Box Testing (Continued)
Attempts to find the following errors: incorrect or missing functions; interface errors; errors in data structures or external database access; performance errors; initialisation and termination errors.

66 Black Box Testing Techniques
Equivalence Partitioning, Boundary Value Analysis, Comparison Testing, Orthogonal Array Testing, Graph-Based Testing Methods

67 Equivalence Partitioning
Black-box technique that divides the input domain into classes of data with common characteristics, from which test cases can be derived. Test cases should be chosen from each partition. Goal: reduce the number of test cases by equivalence partitioning.

68 Equivalence partitioning

69 Equivalence partitioning…….
Once you have identified a set of partitions, you choose test cases from each of these partitions. A good rule of thumb for test case selection is to choose test cases on the boundaries of the partitions, plus cases close to the midpoint of the partition.

70 Equivalence class guidelines
If input condition specifies a range, one valid and two invalid equivalence classes are defined If an input condition requires a specific value, one valid and two invalid equivalence classes are defined If an input condition specifies a member of a set, one valid and one invalid equivalence class is defined If an input condition is Boolean, one valid and one invalid equivalence class is defined

71 Black-box Testing (Continued)
Selection of equivalence classes (no rules, only guidelines):
Input is valid across a range of values. Select test cases from 3 equivalence classes: below the range, within the range, above the range.
Input is valid if it is from a discrete set. Select test cases from 2 equivalence classes: valid set member, invalid set member.
Input is valid if it is the Boolean value true or false. Select a true value and a false value.
Input is valid if it is an exact value of a particular data type. Select the value and any other value of that type.

72 Boundary Value Analysis
Focuses on the boundaries of the input domain rather than its center. Complements equivalence partitioning. Test both sides of each boundary. Test min, min-1, max, max+1, and typical values.

73 Guidelines for Boundary Value Analysis
1. If an input condition specifies a range bounded by values a and b, test cases should be designed with values a and b as well as values just above and just below a and b. 2. If an input condition specifies a number of values, test cases should be developed that exercise the minimum and maximum numbers; values just above and just below the minimum and maximum are also tested. 3. Apply guidelines 1 and 2 to output conditions: produce output that reflects the minimum and maximum values expected, and also test values just below and just above. 4. If internal program data structures have prescribed boundaries (e.g., an array), design a test case to exercise the data structure at its minimum and maximum boundaries.

74 Comparison Testing Black-box testing for safety critical systems in which independently developed implementations of redundant systems are tested for conformance to specifications When redundant software is developed, separate software engineering teams develop independent versions of an application using the same specification. In such situations, each version can be tested with the same test data All versions are executed in parallel with real-time comparison of results to ensure consistency.

75 Orthogonal Array Testing
Black-box technique that enables the design of a reasonably small set of test cases that provide maximum test coverage. The focus is on categories of faulty logic likely to be present in the software component (without examining the code). Priorities for assessing tests using an orthogonal array: detect and isolate all single-mode faults; detect all double-mode faults; multimode faults.

76 Test documentation (IEEE 829)
Test plan Test design specification Test case specification Test procedure specification Test item transmittal report Test log Test incident report Test summary report

77 Comparison of White & Black-box Testing
White-box Testing: a potentially infinite number of paths has to be tested; white-box testing often tests what is done instead of what should be done; cannot detect missing use cases. Black-box Testing: potential combinatorial explosion of test cases (valid and invalid data); often not clear whether the selected test cases uncover a particular error; does not discover extraneous use cases ("features").

78 The 4 Testing Steps
1. Select what has to be measured. Analysis: completeness of requirements. Design: tested for cohesion. Implementation: code tests.
2. Decide how the testing is done: code inspection, proofs (design by contract), black-box or white-box testing; select an integration testing strategy (big bang, bottom up, top down, sandwich).
3. Develop test cases. A test case is a set of test data or situations that will be used to exercise the unit (code, module, system) being tested or the attribute being measured.
4. Create the test oracle. An oracle contains the predicted results for a set of test cases; the test oracle has to be written down before the actual testing takes place.

79 Guidance for Test Case Selection
Use analysis knowledge about functional requirements (black-box testing): use cases, expected input data, invalid input data. Use design knowledge about system structure, algorithms, and data structures (white-box testing): control structures (test branches, loops, ...), data structures (test record fields, arrays, ...). Use implementation knowledge about algorithms: for example, force division by zero, or use a sequence of test cases for an interrupt handler.

80 Terminology
Reliability: the measure of success with which the observed behavior of a system conforms to some specification of its behavior. IEEE: the ability of a system or component to perform its required functions under stated conditions for a specified period of time; often expressed as a probability.
Failure: any deviation of the observed behavior from the specified behavior.
Error: the system is in a state such that further processing by the system will lead to a failure. Examples: stress or overload errors, capacity or boundary errors, timing errors, throughput or performance errors.
Fault (bug): the mechanical or algorithmic cause of an error.
Quality: IEEE: the degree to which a system, component, or process meets customer or user needs or expectations.
There are many different types of errors and different ways to deal with them.

81 Examples of Faults and Errors
Faults in the interface specification: mismatch between what the client needs and what the server offers; mismatch between requirements and implementation. Algorithmic faults: missing initialization; branching errors (too soon, too late); missing test for nil. Mechanical faults (very hard to find): documentation does not match actual conditions or operating procedures. Errors: stress or overload errors; capacity or boundary errors; timing errors; throughput or performance errors.

82 Dealing with Errors
Verification: assumes a hypothetical environment that may not match the real environment; the proof itself might be buggy (omits important constraints, or is simply wrong). Modular redundancy: expensive. Declaring a bug to be a "feature": bad practice. Patching: a patch (sometimes called a "fix") is a quick-repair job for a piece of programming. During a software product's beta test distribution or try-out period, and later after the product is formally released, problems (called bugs) will almost invariably be found. A patch is the immediate solution provided to users; it can sometimes be downloaded from the software maker's web site. The patch is not necessarily the best solution for the problem, and the product developers often find a better solution to provide when they package the product for its next release. A patch is usually developed and distributed as a replacement for, or an insertion in, compiled code (that is, in a binary file or object module). In larger operating systems, a special program is provided to manage and keep track of the installation of patches. Patching slows down performance. Testing (this lecture): testing is never good enough.

83 Another View on How to Deal with Errors
Error prevention (before the system is released): use good programming methodology to reduce complexity; use version control to prevent inconsistent systems; apply verification to prevent algorithmic bugs. Error detection (while the system is running): testing creates failures in a planned way; debugging starts with an unplanned failure; monitoring delivers information about state and finds performance bugs. Error recovery (recover from failure once the system is released): database systems (atomic transactions), modular redundancy, recovery blocks.

84 Fault Handling Techniques
[Taxonomy diagram] Fault avoidance: design methodology, verification, configuration management. Fault detection: reviews, testing (unit testing, integration testing, system testing), debugging (correctness debugging, performance debugging). Fault tolerance: atomic transactions, modular redundancy.

85 Quality Assurance Encompasses Testing
[Taxonomy diagram] Usability testing: scenario testing, prototype testing, product testing. Fault avoidance: verification, configuration management. Fault tolerance: atomic transactions, modular redundancy. Fault detection: reviews (walkthrough, inspection), testing (unit testing, integration testing, system testing), debugging (correctness debugging, performance debugging).

