
1 Model Based Software Testing Preliminaries Aditya P. Mathur Purdue University Fall 2005 Last update: August 18, 2005

2 Software Testing and Reliability © Aditya P. Mathur 2003 2 Learning Objectives: This course covers methods for test generation, methods for test assessment, the coverage principle and the saturation effect, and the software test process. Tools: AETG (test generation); xSUDS (test assessment, enhancement, minimization, debugging); CodeTest (test assessment, performance monitoring); VisualTest (GUI testing); Test RealTime (test assessment, performance monitoring); Ballista (robustness testing).

3 Software Testing and Reliability  Aditya P. Mathur 2003 3 Learning Objectives How and why does testing improve our confidence in program correctness? What is coverage and what role does it play in testing? What are the different types of testing? What is model-based testing? How does it differ from (formal) verification? What are the formalisms for specification and design used as source for test and oracle generation?

4 Software Testing and Reliability  Aditya P. Mathur 2003 4 Testing: Preliminaries What is testing? The act of checking if a part or a product performs as expected. Why test? Gain confidence in the correctness of a part or a product. Check if there are any errors in a part or a product.

5 Software Testing and Reliability © Aditya P. Mathur 2003 5 What to test? During the software lifecycle several products are generated. Examples: a requirements document, a design document, software subsystems, the software system.

6 Software Testing and Reliability  Aditya P. Mathur 2003 6 Test all! Each of these products needs testing. Methods for testing various products are different. Examples: Test a requirements document using scenario construction and simulation. Test a design document using simulation. Test a subsystem using functional testing.

7 Software Testing and Reliability  Aditya P. Mathur 2003 7 What is our focus? We focus on testing programs using formal models. Programs may be subsystems or complete systems. These are written in a formal programming language. There is a large collection of techniques and tools to test programs.

8 Software Testing and Reliability © Aditya P. Mathur 2003 8 An Abstraction of the MBT Process (flow diagram): sources of tests include raw requirements, formal specifications, finite state machines, state charts, sequence diagrams, code, etc. Tests are developed or added from these sources and run against the program's behavior; defects are debugged and removed, yielding a modified document. If the test set is not yet adequate, more tests are developed; otherwise proceed to the next step.

9 Software Testing and Reliability © Aditya P. Mathur 2003 9 A Few Terms Program: A collection of functions, as in C, or a collection of classes, as in Java. Specification: A description of the requirements for a program. This might be formal or informal.

10 Software Testing and Reliability © Aditya P. Mathur 2003 10 Few Terms (contd.) Test case (test input): A set of values of the input variables of a program; values of environment variables are also included. Test set: A set of test inputs. Program execution: Execution of a program on a test input.

11 Software Testing and Reliability © Aditya P. Mathur 2003 11 Few Terms (contd.) Oracle: A function that determines whether or not the results of executing a program under test are as per the program's specifications. Verification: Human examination of a product, such as a design document, code, or a user manual, to check for correctness. Inspections and walkthroughs are the generally used methods for verification. Validation: The process of evaluating a system or a subsystem to determine whether or not it satisfies the specified requirements.

12 Software Testing and Reliability © Aditya P. Mathur 2003 12 Correctness Let P be a program (say, an integer sort program) and let S denote the specification for P. A sample specification S for the sort program is given on the next slide.

13 Software Testing and Reliability © Aditya P. Mathur 2003 13 Sample Specification P takes as input an integer N>0 and a sequence of N integers, called the elements of the sequence; let K denote any element of this sequence. P sorts the input sequence in descending order and prints the sorted sequence.
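For concreteness, here is a minimal C sketch of a program meeting this specification. The selection-sort implementation and the fixed array bound are illustrative choices, not taken from the slides.

#include <stdio.h>

/* Sketch of P: read N > 0 followed by N integers, sort them in
   descending order, and print the sorted sequence. */
int main(void) {
    int a[1000];                 /* assumed upper bound on N for this sketch */
    int n;
    if (scanf("%d", &n) != 1 || n <= 0 || n > 1000)
        return 1;                /* input outside the valid input domain */
    for (int i = 0; i < n; i++)
        scanf("%d", &a[i]);
    /* simple selection sort, descending */
    for (int i = 0; i < n - 1; i++)
        for (int j = i + 1; j < n; j++)
            if (a[j] > a[i]) { int t = a[i]; a[i] = a[j]; a[j] = t; }
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);
    printf("\n");
    return 0;
}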

14 Software Testing and Reliability  Aditya P. Mathur 2003 14 Correctness again P is considered correct with respect to a specification S if and only if: For each valid input the output of P is in accordance with the specification S.

15 Software Testing and Reliability  Aditya P. Mathur 2003 15 Errors, defects, faults Error: A mistake made by a programmer Example: Misunderstood the requirements. Defect/fault: Manifestation of an error in a program. Example: Incorrect code: if (a<b) {foo(a,b);} Correct code: if (a>b) {foo(a,b);}

16 Software Testing and Reliability  Aditya P. Mathur 2003 16 Failure Incorrect program behavior due to a fault in the program. Failure can be determined only with respect to a set of requirement specifications. A necessary condition for a failure to occur is that execution of the program force the erroneous portion of the program to be executed. What is the sufficiency condition?
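The distinction between executing a faulty statement and observing a failure can be illustrated with the defect from the previous slide. In this hedged C sketch, foo and the two guard functions are hypothetical names introduced only for illustration.

#include <stdio.h>

/* Hypothetical routine guarded by the condition from the previous slide. */
static void foo(int a, int b) {
    printf("foo(%d, %d) called\n", a, b);
}

/* Faulty version: the guard should be (a > b). */
static void guard_faulty(int a, int b)  { if (a < b) { foo(a, b); } }

/* Correct version, per the specification assumed here. */
static void guard_correct(int a, int b) { if (a > b) { foo(a, b); } }

int main(void) {
    /* a == b: the faulty guard is executed, yet behavior matches the
       correct version -- no failure is observed on this input.        */
    guard_faulty(3, 3);
    guard_correct(3, 3);

    /* a < b: the same fault now produces a call that the correct
       version would not make -- an observable failure.                */
    guard_faulty(2, 5);
    guard_correct(2, 5);
    return 0;
}

Executing the erroneous portion is thus necessary but not, by itself, sufficient for a failure.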

17 Software Testing and Reliability © Aditya P. Mathur 2003 17 Errors and failure (diagram): inputs are applied to the program and outputs are observed; error-revealing inputs cause failure, and erroneous outputs indicate failure.

18 Software Testing and Reliability © Aditya P. Mathur 2003 18 Debugging Suppose that a failure is detected during the testing of P. The process of finding and removing the cause of this failure is known as debugging. The word bug is slang for fault. Testing usually leads to debugging; testing and debugging usually happen in a cycle.

19 Software Testing and Reliability © Aditya P. Mathur 2003 19 Test-debug Cycle (flow diagram): test the program; if a failure is observed, debug and test again; if no failure, check whether testing is complete; if not, continue testing, otherwise done.

20 Software Testing and Reliability  Aditya P. Mathur 2003 20 Testing and Code Inspection Code inspection is a technique whereby the source code is inspected for possible errors. Code inspection is generally considered complementary to testing. Neither is more important than the other. One is not likely to replace testing by code inspection or by verification.

21 Software Testing and Reliability  Aditya P. Mathur 2003 21 Testing for correctness? Identify the input domain of P. Execute P against each element of the input domain. For each execution of P, check if P generates the correct output as per its specification S.

22 Software Testing and Reliability © Aditya P. Mathur 2003 22 What is an input domain? The input domain of a program P is the set of all valid inputs that P can expect. The size of an input domain is the number of elements in it. An input domain could be finite or infinite. Finite input domains might be very large!

23 Software Testing and Reliability © Aditya P. Mathur 2003 23 Identifying the input domain For the sort program: N: size of the sequence; K: each element of the sequence; e: the number of distinct values each element may take. Example: For N<3 and e=3 (elements drawn from, say, {0, 1, 2}), some sequences in the input domain are: [0]: A sequence of size 1 (N=1). [2 1]: A sequence of size 2 (N=2). [ ]: An empty sequence (N=0).

24 Software Testing and Reliability © Aditya P. Mathur 2003 24 Size of an input domain Suppose that sequences have at most N elements and each element may take one of e distinct values. The size of the input domain is the number of all sequences of size 0, 1, 2, and so on up to N, i.e., 1 + e + e^2 + ... + e^N = (e^(N+1) - 1)/(e - 1). For e=3 and N=2 this is 1 + 3 + 9 = 13. Can you derive this formula?
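A small C sketch of this computation, using the running example e = 3 and N = 2 (the function name is an illustrative choice):

#include <stdio.h>

/* Number of sequences of length 0..N over e possible element values:
   1 + e + e^2 + ... + e^N = (e^(N+1) - 1) / (e - 1).                  */
static unsigned long domain_size(unsigned int e, unsigned int N) {
    unsigned long total = 0, power = 1;          /* power = e^i */
    for (unsigned int i = 0; i <= N; i++) {
        total += power;
        power *= e;
    }
    return total;
}

int main(void) {
    /* Running example from the slides: elements from 3 values, N <= 2. */
    printf("size = %lu\n", domain_size(3, 2));   /* prints 13 = 1+3+9 */
    return 0;
}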

25 Software Testing and Reliability © Aditya P. Mathur 2003 25 Testing for correctness? Sorry! To test for correctness, P needs to be executed on all inputs. For a realistic sort program, with large N and arbitrary integer elements, it would take an exorbitant amount of time to execute the sort program on all inputs, even on the most powerful computers of today!

26 Software Testing and Reliability  Aditya P. Mathur 2003 26 Exhaustive Testing This form of testing is also known as exhaustive testing as we execute P on all elements of the input domain. For most programs exhaustive testing is not feasible. What is the alternative?

27 Software Testing and Reliability  Aditya P. Mathur 2003 27 Formal Verification Formal verification (for correctness) is different from testing for correctness. There are techniques for formal verification of programs that we do not plan to discuss.

28 Software Testing and Reliability  Aditya P. Mathur 2003 28 Partition Testing In this form of testing the input domain is partitioned into a finite number of sub-domains. P is then executed on a few elements of each sub-domain. Let us return to the sort program.

29 Software Testing and Reliability © Aditya P. Mathur 2003 29 Sub-domains Suppose that 0<=N<=2 and e=3. We can divide the input domain into three sub-domains, one per sequence length; their sizes are 1, 3, and 9.

30 Software Testing and Reliability © Aditya P. Mathur 2003 30 Fewer test inputs Now sort can be tested on one element selected from each sub-domain. For example, one set of three inputs is: [ ]: Empty sequence from sub-domain 1. [2]: Sequence from sub-domain 2. [2 0]: Sequence from sub-domain 3. We have thus reduced the number of inputs used for testing from 13 to 3!

31 Software Testing and Reliability  Aditya P. Mathur 2003 31 Confidence Confidence is a measure of one’s belief in the correctness of the program. Correctness is often not measured in binary terms: a correct or an incorrect program. Instead, it is measured as the probability of correct operation of a program when used in various scenarios.

32 Software Testing and Reliability  Aditya P. Mathur 2003 32 Measures of Confidence Reliability: Probability that a program will function correctly in a given environment over a certain number of executions. Test completeness: The extent to which a program has been tested and errors found have been removed.

33 Software Testing and Reliability  Aditya P. Mathur 2003 33 Example: Increase in Confidence We consider a non-programming example to illustrate what is meant by “increase in confidence.” Example: A rectangular field has been prepared to certain specifications. One item in the specifications is: “There should be no stones remaining in the field.”

34 Software Testing and Reliability © Aditya P. Mathur 2003 34 Rectangular Field (figure): search for stones inside a rectangular field of length L and width W, laid out along X and Y axes.

35 Software Testing and Reliability  Aditya P. Mathur 2003 35 Testing the Rectangular Field The field has been prepared and our task is to test it to make sure that it has no stones. How should we organize our search?

36 Software Testing and Reliability  Aditya P. Mathur 2003 36 Partitioning the field We divide the entire field into smaller search rectangles. The length and breadth of each search rectangle is one half the expected length and breadth of the smallest stone one expects to find in the field.

37 Software Testing and Reliability © Aditya P. Mathur 2003 37 Partitioning into search rectangles (figure): the field is shown as a grid of numbered search rectangles along the X (length) and Y (width) axes; the figure marks a stone, another stone, two stones inside one rectangle, and a tiny stone.

38 Software Testing and Reliability  Aditya P. Mathur 2003 38 Input Domain Input domain is the set of all possible valid inputs to the search process. In our example this is the set of all points in the field. Thus, the input domain is infinite! To reduce the size of the input domain we partition the field into finite size rectangles.

39 Software Testing and Reliability  Aditya P. Mathur 2003 39 Rectangle size The length and breadth of each search rectangle is one half that of the smallest stone. This is an attempt to ensure that each stone covers at least one rectangle. (Is this always true?)

40 Software Testing and Reliability  Aditya P. Mathur 2003 40 Constraints Testing must be completed in less than H hours. Any stone found during testing is removed. Upon completion of testing the probability of finding a stone must be less than p.

41 Software Testing and Reliability  Aditya P. Mathur 2003 41 Number of search rectangles Let L: Length of the field W:Width of the field l:Expected length of the smallest stone w:Expected width of the smallest stone Size of each rectangle: l/2 x w/2 Number of search rectangles (R)=(L/l)*(W/w)*4 Assume that L/l and W/w are integers.

42 Software Testing and Reliability  Aditya P. Mathur 2003 42 Time to Test Let t be the time to peek inside one search rectangle. No rectangle is examined more than once. Let o be the overhead incurred in moving from one search rectangle to another. Total time to search T=R*t+(R-1)*o Testing with R rectangles is feasible only if T<H.
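A hedged C sketch of these two formulas, which also anticipates the limited search of Option 1 on the following slides; all numeric values are made-up placeholders, not figures from the slides.

#include <stdio.h>

int main(void) {
    /* All numeric values below are made-up placeholders. */
    double L = 100.0, W = 60.0;   /* field length and width             */
    double l = 0.5,   w = 0.5;    /* expected smallest stone (l x w)    */
    double t = 2.0;               /* seconds to peek into one rectangle */
    double o = 0.5;               /* seconds to move between rectangles */
    double H = 8.0 * 3600.0;      /* available testing time, seconds    */

    /* Rectangles of size (l/2) x (w/2):  R = (L/l) * (W/w) * 4 */
    double R = (L / l) * (W / w) * 4.0;

    /* Time to scan every rectangle once:  T = R*t + (R-1)*o */
    double T = R * t + (R - 1.0) * o;

    printf("R = %.0f rectangles, T = %.0f s (budget H = %.0f s)\n", R, T, H);

    if (T < H) {
        printf("scanning all rectangles is feasible\n");
    } else {
        /* Option 1: limited search over r rectangles, with r chosen
           so that t*r + o*(r-1) < H, i.e., r < (H + o) / (t + o).     */
        double r = (H + o) / (t + o);
        printf("limited search: fewer than %.0f rectangles fit in H\n", r);
    }
    return 0;
}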

43 Software Testing and Reliability  Aditya P. Mathur 2003 43 Partitioning the input domain This set consists of all search rectangles (R). Number of partitions of the input domain is finite (=R). However, if T>H then the number of partitions is too large and scanning each rectangle once is infeasible. What should we do in such a situation?

44 Software Testing and Reliability  Aditya P. Mathur 2003 44 Option 1: Do a limited search Of the R search rectangles we examine only r where r is such that (t*r+o*(r-1)) < H. This limited search will satisfy the time constraint. Will it satisfy the probability constraint? Question: What do the probability and time constraints correspond to in a commercial test ?

45 Software Testing and Reliability © Aditya P. Mathur 2003 45 Distribution of Stones To satisfy the probability constraint we must scan enough search rectangles so that the probability of finding a stone, after testing, remains less than p. Let us assume that there are S_i stones remaining after i test cycles.

46 Software Testing and Reliability © Aditya P. Mathur 2003 46 Distribution of Stones There are R_i search rectangles remaining after i test cycles. Stones are distributed uniformly over the field. An estimate of the probability of finding a stone in a randomly selected remaining search rectangle is p_i = S_i / R_i.

47 Software Testing and Reliability © Aditya P. Mathur 2003 47 Probability Constraint We will stop looking into rectangles if p_i <= p. Can we really apply this test method in practice?

48 Software Testing and Reliability  Aditya P. Mathur 2003 48 Confidence Number of stones in the field is not known in advance. Hence we cannot compute the probability of finding a stone after a certain number of rectangles have been examined. The best we can do is to scan as many rectangles as we can and remove the stones found.

49 Software Testing and Reliability  Aditya P. Mathur 2003 49 Coverage Suppose that r rectangles have been scanned from a total of R. Then we say that the (rectangle) coverage is r/R. After a rectangle has been scanned for a stone and any stone found has been removed, we say that the rectangle has been covered.

50 Software Testing and Reliability  Aditya P. Mathur 2003 50 Coverage and Confidence What happens when coverage increases? As coverage increases (and stones found are removed) so does our confidence in a “stone-free” field. In this example, when the coverage reaches 100%, (almost) all stones have been found and removed. Can you think of situations when this might not be true?

51 Software Testing and Reliability  Aditya P. Mathur 2003 51 Option 2 : Reduce number of partitions If the number of rectangles to scan is too large, we can increase the size of a rectangle. This reduces the number of rectangles. Increasing the size of a rectangle also implies that there might be more than one stone within a rectangle. Is this good for a tester?

52 Software Testing and Reliability  Aditya P. Mathur 2003 52 Rectangle Size As a stone may now be smaller than a rectangle, detecting a stone inside a rectangle is not guaranteed. Despite this fact our confidence in a “stone-free” field increases with coverage. However, when the coverage reaches 100% we cannot guarantee a “stone-free” field.

53 Software Testing and Reliability © Aditya P. Mathur 2003 53 Coverage vs. Confidence (plot): confidence grows as coverage increases from 0 to 1 (=100%), but reaching 100% coverage does not imply that the field is "stone-free".

54 Software Testing and Reliability © Aditya P. Mathur 2003 54 Rectangle Size (again!) p = probability of detecting a stone inside a rectangle, given that the stone is there; t = time to complete a test. (Figure: as the rectangle size grows from small to large, both t and p decrease.)

55 Software Testing and Reliability © Aditya P. Mathur 2003 55 Analogy Field : Program. Stone : Error. Scan a rectangle : Test the program on one input. Remove stone : Remove error. Partition : Subset of the input domain. Size of stone : Size of an error. Rectangle size : Size of a partition.

56 Software Testing and Reliability © Aditya P. Mathur 2003 56 Analogy (contd.) The size of an error is the number of inputs in the input domain each of which will cause a failure due to that error. (Figure: within the input domain, the set of inputs that cause failure due to Error 1 is larger than the set of inputs that cause failure due to Error 2; Error 1 is larger than Error 2.) Does this imply that failures due to Error 1 will occur more frequently than those due to Error 2?

57 Software Testing and Reliability © Aditya P. Mathur 2003 57 Confidence and Probability Increase in coverage increases our confidence in a "stone-free" field. It might not increase the probability that the field is "stone-free". Important: the increase in confidence is NOT justified if detected stones are not guaranteed to be removed!

58 Software Testing and Reliability © Aditya P. Mathur 2003 58 Types of Testing Basis for classification: (1) the source of clues for test input construction, and (2) the object under test. (All of the test-input construction methods can be applied to any object under test.)

59 Software Testing and Reliability © Aditya P. Mathur 2003 59 Testing: Based on Source of Test Inputs Functional testing/specification testing/black-box testing/conformance testing: Clues for test input generation come from the requirements. White-box testing/coverage testing/code-based testing: Clues come from the program text.

60 Software Testing and Reliability © Aditya P. Mathur 2003 60 Testing: Based on Source of Test Inputs Stress testing: Clues come from "load" requirements. For example, a telephone system must be able to handle 1000 calls over any 1-minute interval. What happens when the system is loaded or overloaded?

61 Software Testing and Reliability © Aditya P. Mathur 2003 61 Testing: Based on Source of Test Inputs Performance testing: Clues come from performance requirements. For example, each call must be processed in less than 5 seconds. Does the system process each call in less than 5 seconds? Fault- or error-based testing: Clues come from the faults that are injected into the program text or are hypothesized to be in the program.

62 Software Testing and Reliability © Aditya P. Mathur 2003 62 Testing: Based on Source of Test Inputs Random testing: Clues come from the requirements; tests are generated randomly using these clues. Robustness testing: Robustness is the degree to which a software component functions correctly in the presence of exceptional inputs or stressful environmental conditions. Clues come from the requirements; the goal is to test a program under scenarios not stipulated in the requirements.

63 Software Testing and Reliability © Aditya P. Mathur 2003 63 Testing: Based on Source of Test Inputs OO testing: Clues come from the requirements and the design of an OO program. Protocol testing: Clues come from the specification of a protocol, as, for example, when testing an implementation of a communication protocol.

64 Software Testing and Reliability © Aditya P. Mathur 2003 64 Testing: Based on Item Under Test Unit testing: Testing of a program unit. A unit is the smallest testable piece of a program; one or more units form a subsystem. Subsystem testing: Testing of a subsystem. A subsystem is a collection of units that cooperate to provide a part of the system functionality.

65 Software Testing and Reliability © Aditya P. Mathur 2003 65 Testing: Based on Item Under Test Integration testing: Testing of subsystems that are being integrated to form a larger subsystem or a complete system. System testing: Testing of a complete system.

66 Software Testing and Reliability © Aditya P. Mathur 2003 66 Testing: Based on Item Under Test Regression testing: Test a subsystem or a system on a subset of the set of existing test inputs to check if it continues to function correctly after changes have been made to an older version. And the list goes on and on!

67 Software Testing and Reliability © Aditya P. Mathur 2003 67 Test input construction and objects under test (table): test objects (unit, subsystem, system) versus sources of clues for test inputs (requirements, code).

68 Software Testing and Reliability © Aditya P. Mathur 2003 68 Combinatorial Design Context: A telephone switch. Problem: Determine what inputs to use to test the switch. Parameters and their values: Call Type (Local, Long Dist, Intl.); Billing (Caller, Collect, 800); Access (Loop, ISDN, PBX); Status (Available, Busy, Blocked). Sample inputs: (Local, Caller, Loop, Available); (Long Dist, Collect, ISDN, Busy); (Intl., 800, PBX, Blocked). Total parameters: 4. Values for each parameter: 3. Total number of scenarios: 3^4 = 81.

69 Software Testing and Reliability © Aditya P. Mathur 2003 69 Reducing the Input Space Suppose that 81 tests are too many for the telephone switch under test. An alternative is to select a default value for each parameter and then vary each parameter in turn until all values are covered.

70 Software Testing and Reliability © Aditya P. Mathur 2003 70 Test Plan with Default Parameter Values (Call Type, Billing, Access, Status): (Local, Caller, Loop, Available); (Long Dist, Caller, Loop, Available); (Intl., Caller, Loop, Available); (Local, Collect, Loop, Available); (Local, 800, Loop, Available); (Local, Caller, ISDN, Available); (Local, Caller, PBX, Available); (Local, Caller, Loop, Busy); (Local, Caller, Loop, Blocked). Total inputs: 9. Coverage: 30 of the 54 pair-wise interactions.
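The claim of 30 covered interactions can be checked mechanically. A C sketch under an assumed encoding (each parameter's values mapped to indices 0-2, tests listed in the order above); it reports 30 of 54:

#include <stdio.h>

#define K 4   /* parameters: Call Type, Billing, Access, Status */
#define N 3   /* values per parameter, encoded 0..2             */
#define T 9   /* tests in the default-value plan                */

/* Default-value plan from the slide: test 1 is all defaults (0,0,0,0);
   the remaining tests vary one parameter at a time.                    */
static const int plan[T][K] = {
    {0,0,0,0}, {1,0,0,0}, {2,0,0,0},
    {0,1,0,0}, {0,2,0,0},
    {0,0,1,0}, {0,0,2,0},
    {0,0,0,1}, {0,0,0,2}
};

int main(void) {
    static int seen[K][K][N][N];                 /* zero-initialized */
    int covered = 0;
    for (int t = 0; t < T; t++)
        for (int i = 0; i < K; i++)
            for (int j = i + 1; j < K; j++)
                if (!seen[i][j][plan[t][i]][plan[t][j]]) {
                    seen[i][j][plan[t][i]][plan[t][j]] = 1;
                    covered++;
                }
    int total = (K * (K - 1) / 2) * N * N;       /* 54 pairwise interactions */
    printf("covered %d of %d pairwise interactions\n", covered, total);
    return 0;
}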

71 Software Testing and Reliability © Aditya P. Mathur 2003 71 Another Test Plan (Call Type, Billing, Access, Status): (Local, Collect, PBX, Busy); (Long Dist, 800, Loop, Busy); (Intl., Caller, ISDN, Busy); (Local, 800, ISDN, Blocked); (Long Dist, Caller, PBX, Blocked); (Intl., Collect, Loop, Blocked); (Local, Caller, Loop, Available); (Long Dist, Collect, ISDN, Available); (Intl., 800, PBX, Available). Total inputs: 9. Coverage: all pair-wise interactions are covered.

72 Software Testing and Reliability © Aditya P. Mathur 2003 72 Combinatorial Explosion What if the program under test had 10 parameters, each with 3 values? Total parameter combinations = 3^10. Number of tests using the default value method = ? Number of pair-wise combinations = ? Number of pair-wise combinations covered = ?

73 Software Testing and Reliability © Aditya P. Mathur 2003 73 Answers to Questions For k parameters, each with n possible values: Tests with the default value method = n + (n-1) x (k-1). Pair-wise combinations = (k x (k-1)/2) x n^2. Pair-wise combinations covered by the default value method = (k-1) + n x (k-1) + (n-1) x (k-1)^2. Later we shall discuss how to handle the combinatorial explosion.
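A small C sketch that plugs k = 10 and n = 3 into these formulas to answer the questions on the previous slide:

#include <stdio.h>

int main(void) {
    int k = 10, n = 3;               /* 10 parameters, 3 values each */

    long combos = 1;                 /* total combinations: n^k      */
    for (int i = 0; i < k; i++) combos *= n;

    long defaults = n + (n - 1) * (k - 1);             /* default-value tests */
    long pairs    = (long)(k * (k - 1) / 2) * n * n;   /* all pairwise combos */
    long covered  = (k - 1) + n * (k - 1) + (n - 1) * (k - 1) * (k - 1);

    printf("total combinations            = %ld\n", combos);   /* 59049 */
    printf("tests (default-value method)  = %ld\n", defaults); /* 21    */
    printf("pairwise combinations         = %ld\n", pairs);    /* 405   */
    printf("pairwise combinations covered = %ld\n", covered);  /* 198   */
    return 0;
}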

74 Software Testing and Reliability © Aditya P. Mathur 2003 74 Finite State Machines (FSMs) A state machine is an abstract representation of actions taken by a program or anything else that functions! It is specified as a quintuple (A, Q, q0, T, F): A: a finite input alphabet. Q: a finite set of states. q0: the initial state, which is a member of Q.

75 Software Testing and Reliability  Aditya P. Mathur 2003 75 FSMs (contd.) T: state transitions which is a mapping Q x A--> Q F: A finite set of final states, F is a subset of Q. Example: Here is a finite state machine that recognizes integers ending with a carriage return character. A={0,1,2,3,4,5,6,7,8,9, CR} Q={q0,q1,q2} q0: initial state

76 Software Testing and Reliability © Aditya P. Mathur 2003 76 FSMs (contd.) T: {((q0,d),q1), ((q1,d),q1), ((q1,CR),q2)}, where d denotes any digit. F: {q2} A state diagram is an easier-to-understand specification of a state machine. For the above machine, the state diagram appears on the next slide.

77 Software Testing and Reliability © Aditya P. Mathur 2003 77 State diagram (figure): q0 --d--> q1, q1 --d--> q1, q1 --CR--> q2, where d denotes a digit. States are indicated by circles; the final state is indicated by concentric circles. State transitions are indicated by labeled arrows from one state to another (which could be the same state). Each label must be from the alphabet; it is also known as an event.

78 Software Testing and Reliability © Aditya P. Mathur 2003 78 State Diagram with Actions (figure): q0 --d / i := d--> q1; q1 --d / i := 10*i + d--> q1; q1 --CR / output i--> q2. In a label x/y, x is an element of the alphabet and y is an action: i is initialized to d when the machine moves from q0 to q1; i is updated to 10*i + d on each further digit read in q1; the current value of i is output when a CR is encountered. Can you describe what this machine computes? Can you construct a regular expression that describes all strings recognized by this state machine?
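A C sketch of this machine and its actions. The mapping of CR to '\r' (with '\n' also accepted for convenience) and the rejection behavior on undefined transitions are assumptions of the sketch, not part of the slides.

#include <stdio.h>

typedef enum { Q0, Q1, Q2 } State;

/* Recognizer for integers terminated by a carriage return (CR).
   Actions follow the state diagram: on q0 --d--> q1 set i = d,
   on q1 --d--> q1 set i = 10*i + d, on q1 --CR--> q2 output i.   */
int main(void) {
    State q = Q0;
    long i = 0;
    int c;
    while (q != Q2 && (c = getchar()) != EOF) {
        if (c >= '0' && c <= '9') {
            int d = c - '0';
            if (q == Q0)      { i = d;          q = Q1; }
            else if (q == Q1) { i = 10 * i + d; q = Q1; }
            else return 1;                 /* no transition defined    */
        } else if (c == '\r' || c == '\n') {
            if (q == Q1) { printf("%ld\n", i); q = Q2; }
            else return 1;                 /* CR in q0 is not accepted */
        } else {
            return 1;                      /* symbol outside the alphabet */
        }
    }
    return (q == Q2) ? 0 : 1;              /* 0 iff the input was accepted */
}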

79 Software Testing and Reliability © Aditya P. Mathur 2003 79 State Machine: Languages Each state machine recognizes a language. The language recognized by a state machine is the set S of all strings such that: when any string s in S is input to the state machine, the machine goes through a sequence of transitions and ends up in a final state after having scanned all elements of s. Testing state machines? Later!

80 Software Testing and Reliability  Aditya P. Mathur 2003 80 The Unified Modeling Language Unified Modeling Language (UML) is a notation to express requirements and designs of software systems. Requirements are represented using: a collection of use cases, each use case being a representative of a collection of scenarios. a collection of system sequence diagrams that explain the interaction between a user and the application for each use case.

81 Software Testing and Reliability  Aditya P. Mathur 2003 81 UML: Design Representation Design of an application is represented in UML by a collection of diagrams of the following types (not an exhaustive list): Sequence (or collaboration) diagrams depict the sequence of actions initiated due to an external event. This sequence is depicted in terms of messages sent from one object to another. Statecharts depict the relationship amongst various states of an object. Class diagrams capture the relationships amongst classes.

82 Software Testing and Reliability © Aditya P. Mathur 2003 82 ECG Monitor Use Case Diagram (partial, figure): actors Physician, Remote Display, and Service Personnel; use cases Display waveforms, Capture waveform, Process alarms, and Calibrate sensors.

83 Software Testing and Reliability © Aditya P. Mathur 2003 83 A Sequence Diagram (partial, figure): Passenger 1 is on floor 6 and Passenger 2 on floor 2. The diagram shows messages exchanged among Passenger 1, the Elevator Controller, and Passenger 2, including: Request UP elevator; Light UP indicator; Request DN elevator; Light DN indicator; Queue request; Door closes, elevator moves and passes floor 2; Arrives at floor 6; Door opens; Request floor 8; Light floor button. One sequence diagram is drawn for each use case.

84 Software Testing and Reliability © Aditya P. Mathur 2003 84 What else can one indicate on a sequence diagram? Broadcast messages sent by one object and received by more than one. Timing marks to show timing constraints between two events. Event identifiers can be attached to an event; an ID can be referenced in other parts of the diagram. State marks are placed on the object timing line to indicate state changes for that object.

85 Software Testing and Reliability © Aditya P. Mathur 2003 85 A State Diagram (figure): behavior of a Message Transaction object with states Idle, Transmitting, and Waiting. Transition labels include: Message Ready / Trans-count = 0; Done / Start-timer; ACK; Invalid ACK; Trans-count++; tm(wait-time) [Trans-count <= limit]; [else] / inform sender of failure.

86 Software Testing and Reliability © Aditya P. Mathur 2003 86 UML State Charts Similar to state diagrams. States can be nested within states; inner states are known as substates. A history connector allows the specification of a default initial state in a superstate.

87 Software Testing and Reliability © Aditya P. Mathur 2003 87 UML State Charts (figure): an example statechart with nested states (S1, S2, S3 and substates such as SS1, SS2, X1-X3, Y1, Y2, P, Q, R), entry/exit actions and activities (e.g., entry: f1(), do: f2(), exit: f3(); entry: g1(), do: g2(), exit: g3()), transitions T1-T5 (some with guards such as [G1] and actions such as T4/C1), and a history connector H. History connector: SS2 is the default initial state in the absence of history; otherwise the last active state is the default. Each state may have entry and exit actions as well as activities. Entry (exit) actions are executed in the (reverse) order of nesting.

88 Software Testing and Reliability  Aditya P. Mathur 2003 88 Transitions in UML Statecharts event name (parameters) [guard] / action list^ event list event name: name of the event triggering the transition parameters: List of parameters passed with the event signal. guard: Boolean expression that must evaluate to true for the transition to take place. action list: List of actions to be executed when the transition is taken. event list: List of events generated, and propagated to other state machines, when the transition is taken.
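To make the notation concrete, here is a heavily hedged C sketch of how transitions of the Message Transaction object (slide 85) might be written in this event [guard] / action form. The state and event names, the retry limit, and the choice of target states are one plausible reading of the diagram, not the authoritative design.

#include <stdio.h>

typedef enum { IDLE, TRANSMITTING, WAITING } State;
typedef enum { MESSAGE_READY, DONE, ACK, INVALID_ACK, TIMEOUT } Event;

static State state = IDLE;
static int   trans_count = 0;
static const int limit = 3;            /* illustrative retry limit */

/* One dispatch step: event [guard] / action-list */
static void dispatch(Event e) {
    switch (state) {
    case IDLE:
        if (e == MESSAGE_READY) { trans_count = 0; state = TRANSMITTING; }
        break;
    case TRANSMITTING:
        if (e == DONE) { /* start-timer action would go here */ state = WAITING; }
        break;
    case WAITING:
        if (e == ACK) { state = IDLE; }
        else if (e == INVALID_ACK) { trans_count++; state = TRANSMITTING; }
        else if (e == TIMEOUT && trans_count <= limit) { state = TRANSMITTING; }
        else if (e == TIMEOUT) { printf("inform sender of failure\n"); state = IDLE; }
        break;
    }
}

int main(void) {
    dispatch(MESSAGE_READY);
    dispatch(DONE);
    dispatch(INVALID_ACK);             /* retry path: trans_count++ */
    dispatch(DONE);
    dispatch(ACK);                     /* back to IDLE */
    printf("final state %d, retries %d\n", state, trans_count);
    return 0;
}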

89 Software Testing and Reliability  Aditya P. Mathur 2003 89 Summary Testing and debugging Specification Correctness versus confidence Input domain Exhaustive testing and combinatorial explosion UML artifacts: Use cases, FSM, State Charts, Sequence diagrams

90 Software Testing and Reliability  Aditya P. Mathur 2003 90 Summary: Terms Reliability Coverage Error, defect, fault, failure Debugging, test-debug cycle Types of testing, basis for classification

91 Software Testing and Reliability © Aditya P. Mathur 2003 91 Summary: Questions What is the effect of reducing the partition size on the probability of finding errors? How does coverage affect our confidence in program correctness? Does 100% coverage imply that a program is fault-free? What decides the type of testing?

