Lecture 9 Software Testing Techniques. OOD Case Study.


1 Lecture 9 Software Testing Techniques

2 OOD Case Study

3 Software Testing
Testing is the process of exercising a program with the specific intent of finding (and removing) errors prior to delivery to the end user.

4 What Testing Shows
- errors
- requirements conformance
- performance
- an indication of quality

5 Who Tests the Software?
Developer: understands the system, but will test "gently" and is driven by "delivery".
Independent tester: must learn about the system, but will attempt to break it and is driven by quality.

6 Testing Principles
- All tests should be traceable to customer requirements
- Tests should be planned long before testing begins
- The Pareto (80/20) principle applies to software testing
- Testing should begin "in the small" and progress toward testing "in the large"
- Exhaustive testing is not possible
- Testing should be conducted by an independent third party

7 Testability
- Operability: it operates cleanly
- Observability: the results of each test case are readily observed
- Controllability: the degree to which testing can be automated and optimized
- Decomposability: testing can be targeted
- Simplicity: reduce complex architecture and logic to simplify tests
- Stability: few changes are requested during testing
- Understandability: of the design

8 What is a good test?
- Has a high probability of finding an error: develop a classification of perceivable errors and design a test for each case
- Not redundant: no two tests intend to uncover the same error
- Representative: has the highest likelihood of uncovering an error
- Has the right level of complexity: not too simple and not too complex

9 Exhaustive Testing
There are 10^14 possible paths! If we execute one test per millisecond, it would take 3,170 years to test this program!!
[Figure: flow graph of a program containing a loop (loop < 20 X) that compounds the number of paths]

10 Selective Testing
[Figure: the same flow graph with a single selected path highlighted]

11 Software Testing Methods
Two strategies:
- white-box methods: test internal operations, i.e. procedural details
- black-box methods: test functions (at the software interface)

12 Test Case Design
"Bugs lurk in corners and congregate at boundaries..." (Boris Beizer)
OBJECTIVE: to uncover errors
CRITERIA: in a complete manner
CONSTRAINT: with a minimum of effort and time

13 White-Box Testing
... our goal is to ensure that all statements and conditions have been executed at least once ...
- paths
- decisions
- loops
- internal data

14 Why Cover?
- logic errors and incorrect assumptions are inversely proportional to a path's execution probability
- we often believe that a path is not likely to be executed; in fact, reality is often counterintuitive
- typographical errors are random; it's likely that untested paths will contain some

15 Flow Graph
[Figure: an 11-node flowchart and its corresponding flow graph; sequence nodes 2,3 and 4,5 are merged, and the graph encloses four regions R1-R4]

16 Flow Graph (cont.)
Each structured construct (sequence, if, while, until, case) has a corresponding flow graph symbol.
A compound condition produces one predicate node per simple condition. For example:
IF a OR b
  THEN procedure x
  ELSE procedure y
ENDIF
yields separate predicate nodes for a and for b, each with an edge leading to x or y.

17 Basis Path Testing
[Figure: the flow graph from the previous slide, with regions R1-R4]
Basis path set:
path 1: 1-11
path 2: 1-2-3-4-5-10-1-11
path 3: 1-2-3-6-8-9-10-1-11
path 4: 1-2-3-6-7-9-10-1-11

18 Cyclomatic Complexity
A number of industry studies have indicated that the higher V(G), the higher the probability of errors.
[Chart: error probability plotted against V(G); modules in the high-V(G) range are more error prone]
V(G) = number of regions = edges - nodes + 2 = predicate nodes + 1

19 Derive Test Cases
1. Draw a flow graph based on the design or code
2. Determine cyclomatic complexity
3. Determine the basis set of linearly independent paths
4. Prepare test cases for the basis set
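Step 2 above can be sketched in code. The snippet below computes V(G) two ways (edges - nodes + 2, and predicate nodes + 1) for the flow graph of the Basis Path Testing slide; the adjacency list is an assumed encoding of that graph, with the merged sequence nodes "2,3" and "4,5" kept as single nodes.

```python
def cyclomatic_complexity(edges, nodes):
    """V(G) = E - N + 2 for a connected flow graph."""
    return len(edges) - len(nodes) + 2

# Flow graph from the Basis Path Testing slide (an assumed encoding).
nodes = ["1", "2,3", "4,5", "6", "7", "8", "9", "10", "11"]
edges = [("1", "11"), ("1", "2,3"), ("2,3", "4,5"), ("2,3", "6"),
         ("4,5", "10"), ("6", "7"), ("6", "8"), ("7", "9"),
         ("8", "9"), ("9", "10"), ("10", "1")]

v = cyclomatic_complexity(edges, nodes)   # 11 - 9 + 2 = 4
# predicate nodes are those with more than one outgoing edge
predicates = [n for n in nodes
              if sum(1 for e in edges if e[0] == n) > 1]

print(v)                    # 4 independent paths, matching R1..R4
print(len(predicates) + 1)  # P + 1 gives the same answer: 4
```

Both formulas agree with the four regions R1-R4 and the four basis paths listed on the slide.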

20 Example
PROCEDURE average;
  INTERFACE RETURNS average, total.input, total.valid;
  INTERFACE ACCEPTS value, minimum, maximum;
  TYPE value[1:100] IS SCALAR ARRAY;
  TYPE average, total.input, total.valid, minimum, maximum, sum IS SCALAR;
  TYPE i IS INTEGER;
  i = 1;
  total.input = total.valid = 0;
  sum = 0;
  DO WHILE value[i] <> -999 AND total.input < 100
    increment total.input by 1;
    IF value[i] >= minimum AND value[i] <= maximum
      THEN increment total.valid by 1;
           sum = sum + value[i];
      ELSE skip
    ENDIF
    increment i by 1;
  ENDDO
  IF total.valid > 0
    THEN average = sum / total.valid;
    ELSE average = -999;
  ENDIF
END average
[Figure: the corresponding flow graph with nodes 1-13 and regions R1-R6]
nodes = 13, edges = 17, regions = 6

21 Example (cont.)
Predicate nodes: 2, 3, 5, 6, 10; so V(G) = 6
path 1: 1-2-10-11-13
path 2: 1-2-10-12-13
path 3: 1-2-3-10-11-13
path 4: 1-2-3-4-5-8-9-2-...
path 5: 1-2-3-4-5-6-8-9-2-...
path 6: 1-2-3-4-5-6-7-8-9-2-...
(... = loop back)
Path 1 test case:
  value(k) = valid input, where k < i for 2 <= i <= 100
  value(i) = -999, where 2 <= i <= 100
  Expected results: correct average based on k values and proper totals
  Note: must be tested as part of the path 4, 5, and 6 tests
Path 2 test case:
  value(1) = -999
  Expected results: average = -999

22 Example (cont.)
Path 3 test case:
  Attempt to process 101 or more values; the first 100 values should be valid
  Expected results: same as test case 1
Path 4 test case:
  value(i) = valid input where i < 100
  value(k) < minimum where k < i
  Expected results: correct average based on k values and proper totals
Path 5 test case:
  value(i) = valid input where i < 100
  value(k) > maximum where k <= i
  Expected results: correct average based on k values and proper totals
Path 6 test case:
  value(i) = valid input where i < 100
  Expected results: correct average based on k values and proper totals
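As a sketch (not the original PDL), the average procedure above can be rendered in Python so the basis-path test cases become executable; the list-based input and tuple return are assumptions for illustration.

```python
def average(values, minimum, maximum):
    """Python sketch of PROCEDURE average: read values until a -999
    sentinel or 100 inputs, averaging only those in [minimum, maximum]."""
    i, total_input, total_valid, total_sum = 0, 0, 0, 0
    while i < len(values) and values[i] != -999 and total_input < 100:
        total_input += 1
        if minimum <= values[i] <= maximum:
            total_valid += 1
            total_sum += values[i]
        i += 1
    avg = total_sum / total_valid if total_valid > 0 else -999
    return avg, total_input, total_valid

print(average([-999], 0, 100))            # path 2: (-999, 0, 0)
print(average([10, 20, 30, -999], 0, 100))  # (20.0, 3, 3)
print(average([10, -5, -999], 0, 100))    # path 4 shape: (10.0, 2, 1)
```

The first call exercises path 2 (loop skipped, average = -999); the third feeds one out-of-range value so total.input and total.valid diverge.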

23 Loop Testing
Loop classes: simple loops, nested loops, concatenated loops, unstructured loops

24 Simple Loops
Minimum conditions for simple loops, where n is the maximum number of allowable passes:
1. skip the loop entirely
2. only one pass through the loop
3. two passes through the loop
4. m passes through the loop, where m < n
5. (n-1), n, and (n+1) passes through the loop
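The five conditions above can be turned into a small helper that produces the pass counts to test. This is a sketch; the choice of m = n // 2 as the "typical" middle count is an assumption.

```python
def simple_loop_test_counts(n, m=None):
    """Pass counts suggested by the simple-loop guidelines, where n is
    the maximum number of allowable passes and m < n is some typical
    count (assumed to be n // 2 if not given)."""
    if m is None:
        m = n // 2
    counts = [0, 1, 2, m, n - 1, n, n + 1]
    # keep order, drop duplicates and negative counts (for tiny n)
    seen, result = set(), []
    for c in counts:
        if c >= 0 and c not in seen:
            seen.add(c)
            result.append(c)
    return result

print(simple_loop_test_counts(20))   # [0, 1, 2, 10, 19, 20, 21]
```

For the loop < 20 X example earlier in the lecture, this yields seven test runs instead of attempting every possible iteration count.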

25 Nested Loops
1. Start at the innermost loop. Set all outer loops to their minimum iteration parameter values.
2. Test the min+1, typical, max-1 and max values for the innermost loop, while holding the outer loops at their minimum values. Add other tests for out-of-range or excluded values.
3. Move out one loop and set it up as in step 2, holding all other loops at typical values. Continue this step until the outermost loop has been tested.
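The three steps above can be sketched as a generator of iteration-count settings. Each loop is described by an assumed (min, typical, max) triple; the loop under test also gets its minimum value here, in addition to the min+1/typical/max-1/max values named in step 2.

```python
def nested_loop_cases(loops):
    """Sketch of the nested-loop procedure: loops is a list of
    (min, typical, max) triples, outermost first. The innermost loop
    is tested with the outer loops at their minimums (step 2); each
    outer loop is then tested with all other loops at typical
    values (step 3)."""
    cases = []
    for i in range(len(loops) - 1, -1, -1):          # innermost first
        lo, typ, hi = loops[i]
        for value in (lo, lo + 1, typ, hi - 1, hi):
            if i == len(loops) - 1:                  # innermost: outers at min
                setting = [loops[j][0] for j in range(i)] + [value]
            else:                                    # others at typical values
                setting = [loops[j][1] for j in range(len(loops))]
                setting[i] = value
            cases.append(tuple(setting))
    return cases

# two nested loops: outer runs 0..10 (typical 5), inner 1..6 (typical 3)
for case in nested_loop_cases([(0, 5, 10), (1, 3, 6)]):
    print(case)
```

Ten settings result for two loops, versus 11 x 6 = 66 for exhaustive combination.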

26 Concatenated Loops
IF the loops are independent of one another
  THEN treat each as a simple loop
  ELSE treat as nested loops
ENDIF
Redesign unstructured loops if possible.

27 Black-Box Testing
[Figure: requirements and events drive tests against the black box; tests supply input and observe output]

28 Equivalence Partitioning
[Figure: input domains such as user queries, mouse clicks, output formats, prompts, FK input, data]
- An equivalence class represents a set of valid or invalid states for input conditions
- A test case can uncover classes of errors

29 Sample Equivalence Classes
Valid data:
- user supplied commands
- responses to system prompts
- file names
- computational data: physical parameters, bounding values, initiation values
- output data formatting
- responses to error messages
- graphical data (e.g., mouse clicks)
Invalid data:
- data outside the bounds of the program
- physically impossible data
- proper value supplied in the wrong place
If a condition needs:
- a range / a specific value -> one valid and two invalid equivalence classes
- a member of a set / a Boolean -> one valid and one invalid equivalence class

30 Bank ID Example
Area code: blank or three-digit number
  Boolean - code may or may not be present
Prefix: three-digit number not beginning with 0 or 1
  Range - 200-999, with specific exceptions
Suffix: four-digit number
  Value - four digits
Password: six-digit alphanumeric string
  Boolean - password may or may not be present
  Value - six-character string
Commands: check, deposit, bill payment, and the like
  Set - valid commands
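A checker for these input conditions might look like the sketch below. The function name, return style, and the exact command set are assumptions for illustration; the field formats follow the slide.

```python
def validate_bank_input(area, prefix, suffix, password, command):
    """Sketch: return the list of fields violating the bank-ID
    input conditions (empty list = all fields valid)."""
    errors = []
    # area code: blank, or a three-digit number
    if area != "" and not (area.isdigit() and len(area) == 3):
        errors.append("area")
    # prefix: three-digit number not beginning with 0 or 1
    if not (prefix.isdigit() and len(prefix) == 3 and prefix[0] not in "01"):
        errors.append("prefix")
    # suffix: four-digit number
    if not (suffix.isdigit() and len(suffix) == 4):
        errors.append("suffix")
    # password: blank, or a six-character alphanumeric string
    if password != "" and not (len(password) == 6 and password.isalnum()):
        errors.append("password")
    # command: member of the valid command set (assumed set)
    if command not in {"check", "deposit", "bill payment"}:
        errors.append("command")
    return errors

# one test per equivalence class, rather than one per possible input
print(validate_bank_input("212", "555", "1234", "ab12cd", "check"))  # []
print(validate_bank_input("", "095", "1234", "", "deposit"))  # ['prefix']
```

Each Boolean condition gets one valid and one invalid class (e.g. blank vs. malformed area code); each range condition gets one valid and two invalid classes (below 200, within range, above 999).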

31 Boundary Value Analysis
Concentrates on the boundary conditions!
[Figure: test cases clustered at the edges of the input domain and the output domain]

32 BVA Guidelines
- Range (e.g. bounded by a and b): test with a, b, and values just above and below a and b
- Values: test with min, max, and values just above and below min and max
- The above two guidelines also apply to output conditions
- Test data structures at their boundaries (e.g. the first and last elements of an array)
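For an integer-valued range the first guideline reduces to six test values, as in this sketch (the step of 1 assumes integer inputs):

```python
def boundary_values(a, b):
    """BVA range guideline: the boundaries of [a, b] plus the values
    just outside and just inside them (integer inputs assumed)."""
    return [a - 1, a, a + 1, b - 1, b, b + 1]

# the prefix range 200-999 from the bank ID example
print(boundary_values(200, 999))   # [199, 200, 201, 998, 999, 1000]
```

The two out-of-range values (199 and 1000) probe the invalid equivalence classes on either side of the boundary.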

33 Testing Strategy
- unit test (components)
- integration test (architecture)
- validation test (requirements)
- system test (interfaces with other systems)

34 Unit Testing
[Figure: the software engineer derives test cases for the module to be tested and examines the results (white box)]

35 Unit Test Environment
A driver applies test cases to the module under test; stubs (dummies) stand in for its subordinate modules, and results are collected. Test cases cover:
- interface
- local data structures
- boundary conditions
- independent paths
- error handling paths
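A minimal driver-and-stub harness might look like this sketch; every name here (send_alert, mail_stub, the alert-level rule) is invented for illustration.

```python
sent = []                                  # the stub records calls here

def mail_stub(address, message):           # stub: dummy subordinate module
    sent.append((address, message))
    return True

def send_alert(level, address, mailer):    # module under test
    """Sends a mail only for serious alerts (level >= 3)."""
    if level >= 3:
        return mailer(address, "ALERT level %d" % level)
    return False

# driver: applies test cases, checks the interface and the
# boundary condition at level 3, and inspects what the stub saw
assert send_alert(3, "ops@example.com", mail_stub) is True
assert send_alert(2, "ops@example.com", mail_stub) is False
assert sent == [("ops@example.com", "ALERT level 3")]
print("unit tests passed")
```

The stub lets the module be exercised in isolation: the driver can assert both the module's return values and the messages it sent to its (replaced) subordinate.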

36 Integration Testing
Options:
- the "big bang" approach
- an incremental construction strategy

37 Top Down Integration
- the top module is tested with stubs
- stubs are replaced one at a time, "depth first"; a "breadth first" approach would replace stubs of the same level first
- as new modules are integrated, some subset of tests is re-run
[Figure: module hierarchy with A at the top, B and C below, and D, E, F, G as leaves]
Disadvantages:
- incurs development overhead
- difficult to determine the real cause of an error

38 Bottom-Up Integration
- worker modules are grouped into builds (clusters) and integrated
- drivers are replaced one at a time, "depth first"
[Figure: the same module hierarchy, with leaf modules grouped into a cluster/build]
Disadvantage:
- a full product is not available until testing is finished

39 Sandwich Testing
- top modules are tested with stubs
- worker modules are grouped into builds (clusters) and integrated
[Figure: the same module hierarchy, combining top-down and bottom-up integration]

40 Regression Testing
Retest a corrected module, or retest whenever a new module is added as part of the integration test.
- Reuse part of the previously used test cases
- Can also use automated capture/playback tools
The regression test suite should contain:
- a representative sample of tests
- test cases for functions likely to be affected by the integration
- test cases that focus on the changed components

41 Validation Testing (black box)
- Aims to test conformity with requirements
- A test plan outlines the classes of tests to be conducted
- A test schedule defines specific test cases to be used
- Configuration review (audit): make sure the correct software configuration is developed
- User acceptance test: users "test drive" the software
  - Alpha test: conducted at the developer's site
  - Beta test: conducted at the customer's site

42 System Testing
Tests the software's interfaces with other systems.
- Recovery testing: force failures and verify that the system can recover (e.g. within the mean-time-to-repair)
- Security testing: verify that protection mechanisms are correctly implemented
- Stress testing: test with abnormal quantity, frequency, or volume of transactions
- Performance testing: test the run-time performance of the software within the context of an integrated system

43 The Debugging Process
[Figure: test cases produce results; debugging works from suspected causes to identified causes, applies corrections, then runs regression tests and new test cases]

44 Debugging Effort
- time required to diagnose the symptom and determine the cause
- time required to correct the error and conduct regression tests

45 Symptoms and Causes
- symptom and cause may be geographically separated
- symptom may disappear when another problem is fixed
- cause may be due to a combination of non-errors
- cause may be due to a system or compiler error
- cause may be due to assumptions that everyone believes
- symptom may be intermittent

46 Consequences of Bugs
Damage ranges from mild, annoying, disturbing, serious, extreme, and catastrophic to infectious, depending on the bug type.
Bug categories: function-related bugs, system-related bugs, data bugs, coding bugs, design bugs, documentation bugs, standards violations, etc.

47 Debugging Techniques
- brute force / testing
- backtracking
- induction
- deduction

48 Debugging: Final Thoughts
1. Don't run off half-cocked; think about the symptom you're seeing.
2. Use tools (e.g., a dynamic debugger) to gain more insight.
3. If at an impasse, get help from someone else.
4. Be absolutely sure to conduct regression tests when you do "fix" the bug.

49 Object-Oriented Testing

50 OO testing begins by evaluating the correctness and consistency of the OOA and OOD models.
The testing strategy changes:
- the concept of the 'unit' broadens due to encapsulation
- integration focuses on classes and their execution across a 'thread' or in the context of a usage scenario
- validation uses conventional black-box methods
Test case design draws on conventional methods, but also encompasses special features.
Encapsulation and inheritance make OOT difficult.

51 Testing the CRC Model
1. Revisit the CRC model and the object-relationship model.
2. Inspect the description of each CRC index card to determine if a delegated responsibility is part of the collaborator's definition.
3. Invert the connection to ensure that each collaborator that is asked for service is receiving requests from a reasonable source.
4. Using the inverted connections examined in step 3, determine whether other classes might be required or whether responsibilities are properly grouped among the classes.
5. Determine whether widely requested responsibilities might be combined into a single responsibility.
6. Steps 1 to 5 are applied iteratively to each class and through each evolution of the OOA model.

52 OOT Strategy
Class testing is the equivalent of unit testing:
- operations within the class are tested
- the state behavior of the class is examined
Integration applies three different strategies:
- thread-based testing: integrates the set of classes required to respond to one input or event
- use-based testing: integrates the set of classes required to respond to one use case
- cluster testing: integrates the set of classes required to demonstrate one collaboration
Validation testing uses use cases.

53 Test Case Design
1. Each test case should be uniquely identified and should be explicitly associated with the class to be tested.
2. The purpose of the test should be stated.
3. A list of testing steps should be developed for each test and should contain:
  a. a list of specified states for the object that is to be tested
  b. a list of messages and operations that will be exercised as a consequence of the test
  c. a list of exceptions that may occur as the object is tested
  d. a list of external conditions (i.e., changes in the environment external to the software that must exist in order to properly conduct the test)
  e. supplementary information that will aid in understanding or implementing the test. [BER93]

54 Random Testing
- identify operations applicable to a class
- define constraints on their use
- identify a minimum test sequence: an operation sequence that defines the minimum life history of the class (object)
- generate a variety of random (but valid) test sequences to exercise other (more complex) class instance life histories
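The steps above can be sketched against a hypothetical Account class (the class, its operations, and their constraints are all invented for illustration). The minimum test sequence is open -> deposit -> withdraw -> close; random valid sequences then exercise richer life histories.

```python
import random

class Account:
    def __init__(self):
        self.is_open, self.balance = False, 0
    def open(self):
        self.is_open = True
    def deposit(self, n):
        assert self.is_open                  # constraint: must be open
        self.balance += n
    def withdraw(self, n):
        assert self.is_open and n <= self.balance
        self.balance -= n
    def close(self):
        assert self.balance == 0             # constraint: empty before close
        self.is_open = False

def random_life_history(seed, steps=20):
    """Run one random (but valid) operation sequence on an Account."""
    rng = random.Random(seed)
    acct = Account()
    acct.open()                              # minimum sequence starts here
    for _ in range(rng.randint(1, steps)):
        if acct.balance > 0 and rng.random() < 0.5:
            acct.withdraw(rng.randint(1, acct.balance))
        else:
            acct.deposit(rng.randint(1, 100))
    acct.withdraw(acct.balance)              # re-establish the close constraint
    acct.close()
    return acct.balance

# many random life histories; every one must end closed with balance 0
assert all(random_life_history(s) == 0 for s in range(100))
print("random sequences passed")
```

The constraints are enforced inside the operations themselves, so any sequence that violates them fails loudly during the random run.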

55 Partition Testing
Reduces the number of test cases required to test a class, in much the same way as equivalence partitioning for conventional software.
- state-based partitioning: categorize and test operations based on their ability to change the state of a class
- attribute-based partitioning: categorize and test operations based on the attributes that they use
- category-based partitioning: categorize and test operations based on the generic function each performs

56 Inter-Class Testing
- For each client class, use the list of class operators to generate a series of random test sequences. The operators will send messages to other server classes.
- For each message that is generated, determine the collaborator class and the corresponding operator in the server object.
- For each operator in the server object (that has been invoked by messages sent from the client object), determine the messages that it transmits.
- For each of those messages, determine the next level of operators that are invoked and incorporate these into the test sequence.
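The message tracing above can be sketched with two invented classes: Printer as the client and Spooler as the server. A shared log records each message, so a test sequence applied to the client can be checked against the collaborations it is supposed to trigger in the server.

```python
log = []                             # records the message trace

class Spooler:                       # server class
    def enqueue(self, doc):
        log.append("Spooler.enqueue")

class Printer:                       # client class
    def __init__(self, spooler):
        self.spooler = spooler
    def print_doc(self, doc):        # client operator sends a message...
        log.append("Printer.print_doc")
        self.spooler.enqueue(doc)    # ...to the collaborator (server)

# apply a (short) test sequence to the client; the log shows which
# server operators were invoked, i.e. the next level of messages to trace
p = Printer(Spooler())
p.print_doc("report")
print(log)   # ['Printer.print_doc', 'Spooler.enqueue']
```

In a fuller harness the client operator sequence would be generated randomly, as in the random-testing slide, and the log compared against the expected collaborator/operator pairs.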

