Software Engineering Testing (Concepts and Principles)


1 Software Engineering Testing (Concepts and Principles)

2 Objectives

1. To introduce the concepts and principles of testing
2. To summarize the debugging process
3. To consider a variety of testing and debugging methods

[Diagram: analysis -> design -> code -> test]

3 Software Testing

Narrow view:
- Testing is the process of exercising a program with the specific intent of finding errors prior to delivery to the end user
- A good test case is one that has a high probability of finding an as-yet-undiscovered error
- A successful test is one that uncovers an as-yet-undiscovered error

Broad view:
- Testing is the process used to ensure that the software conforms to its specification and meets the user requirements
- Validation: "Are we building the right product?"
- Verification: "Are we building the product right?"
- Takes place at all stages of software engineering

4 What Testing Shows

- errors
- requirements conformance
- performance
- an indication of quality

5 Testing Principles

- All tests should be traceable to customer requirements
- Tests should be planned long before testing begins
- 80% of errors occur in 20% of classes
- Testing should begin "in the small" and progress toward testing "in the large"
- Exhaustive testing is not possible
- To be most effective, testing should be conducted by an independent third party

6 Who Tests the Software?

- Developer: understands the system but will test "gently" and is driven by delivery
- Independent tester: must learn about the system but will attempt to break it and is driven by quality

7 Software Testability

Software that is easy to test exhibits:

1. Operability: "the better it works, the more efficiently it can be tested". Bugs are easier to find in software that at least executes
2. Observability: "what you see is what you test". The results of each test case are readily observed
3. Controllability: "the better we can control the software, the more testing can be automated and optimized". Easier to set up test cases
4. Decomposability: "by controlling the scope of testing, we can more quickly isolate problems and perform smarter retesting". Testing can be targeted
5. Simplicity: "the less there is to test, the more quickly we can test it". Reduce complex architecture and logic to simplify tests
6. Stability: "the fewer the changes, the fewer the disruptions to testing". Changes disrupt test cases
7. Understandability: "the more information we have, the smarter we will test"

8 Test Case Design

- A test case is a controlled experiment that tests the system
- Process:
  - Objectives: to uncover errors
  - Criteria: in a complete manner
  - Constraints: with a minimum of effort and time
- Often badly designed in an ad hoc fashion
- "Bugs lurk in corners and congregate at boundaries." Good test case design applies this maxim

9 Exhaustive Testing (infeasible)

Consider two nested loops containing four if..then..else statements, where each loop can execute up to 20 times. There are 10^14 possible paths! If we execute one test per millisecond, it would take 3170 years to test this program.
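The slide's arithmetic can be checked directly in plain Python, using nothing beyond the figures quoted above:

```python
# Verify the slide's estimate: 10^14 paths at one test per millisecond.
paths = 10 ** 14
ms_per_year = 1000 * 60 * 60 * 24 * 365  # milliseconds in a (non-leap) year

years = paths / ms_per_year
print(round(years))  # prints 3171, i.e. roughly the slide's 3170 years
```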

10 Selective Testing (feasible)

Test a carefully selected execution path. Cannot be comprehensive.

11 Testing Methods

1. Black box: examines fundamental interface aspects without regard to internal structure
2. White (glass) box: closely examines the internal procedural detail of system components
3. Debugging: fixing errors identified during testing

12 [1] White-Box Testing

Goal: ensure that all statements and conditions have been executed at least once.

Derive test cases that:
1. Exercise all independent execution paths
2. Exercise all logical decisions on both their true and false sides
3. Execute all loops at their boundaries and within operational bounds
4. Exercise internal data structures to ensure validity
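Point 2 can be sketched with a minimal hypothetical function: one test case per side of its single decision (the function and its values are illustrative, not from the slides):

```python
def classify(n):
    """Hypothetical function under test: label an integer's sign."""
    if n < 0:
        return "negative"
    return "non-negative"

# White-box test cases: exercise both sides of the decision `n < 0`.
assert classify(-1) == "negative"      # true side
assert classify(0) == "non-negative"   # false side (also a boundary value)
print("both branches exercised")
```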

13 Why Cover All Paths?

- Logic errors and incorrect assumptions are inversely proportional to the probability that a program path will be executed
- We often believe that a logical path is not likely to be executed when, in fact, it may be executed on a regular basis
- Typographical errors are random; it is likely that untested paths will contain some

14 Basis Path Testing

Basis path testing provides a measure of the logical complexity of a method and a guide for defining a basis set of execution paths.

1. Represent control flow using flow graph notation: nodes represent processing, arrows represent control flow

[Diagram: flow graph notation for sequence, if, and while constructs]

15 Cyclomatic Complexity

2. Compute the cyclomatic complexity V(G) of a flow graph G, by any of:
   - number of simple predicates (decisions) + 1, or
   - V(G) = E - N + 2 (where E is the number of edges and N the number of nodes), or
   - number of enclosed areas + 1

In the example flow graph, V(G) = 4.

16 Cyclomatic Complexity and Errors

A number of industry studies have indicated that the higher V(G), the higher the probability of errors.

[Chart: number of modules plotted against V(G); modules above a V(G) threshold are more error prone]

17 Basis Path Testing

V(G) is the number of linearly independent paths through the program (each has at least one edge not covered by any other path).

3. Derive a basis set of V(G) independent paths:
   - Path 1: 1-2-3-8
   - Path 2: 1-2-3-8-1-2-3
   - Path 3: 1-2-4-5-7-8
   - Path 4: 1-2-4-6-7-8
4. Prepare test cases that will force the execution of each path in the basis set

[Diagram: flow graph with nodes 1-8]
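As a cross-check, the edges of this flow graph can be inferred from the four listed paths (including the 8 -> 1 loop-back edge implied by Path 2); the E - N + 2 formula then reproduces the slide's V(G) = 4. The edge set below is a reconstruction from the paths, not taken from the original figure:

```python
# Flow-graph edges inferred from the four basis paths on this slide.
edges = {(1, 2), (2, 3), (3, 8), (8, 1),   # paths 1 and 2 (loop back via 8 -> 1)
         (2, 4), (4, 5), (5, 7),           # path 3
         (4, 6), (6, 7), (7, 8)}           # path 4
nodes = {n for edge in edges for n in edge}

v_g = len(edges) - len(nodes) + 2          # V(G) = E - N + 2
print(v_g)  # prints 4, matching the slide
```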

18 Basis Path Tips

- You don't need a flow graph, but it helps in tracing program paths
- Count each simple logical test once; compound tests (e.g. switch statements) count as 2 or more
- Basis path testing should be applied to critical modules only
- When preparing test cases, use boundary values for the conditions

19 Other White-Box Methods

- Condition testing: exercises the logical (boolean) conditions in a program
- Data flow testing: selects test paths according to the location of the definition and use of variables in a program
- Loop testing: focuses on the validity of loop constructs

20 Loop Testing

Four classes of loops:
- Simple loops
- Nested loops
- Concatenated loops
- Unstructured loops

21 Simple Loops

Test cases for simple loops, where n is the maximum number of allowable passes:
1. Skip the loop entirely
2. Only one pass through the loop
3. Two passes through the loop
4. m passes through the loop (m < n)
5. (n-1), n and (n+1) passes through the loop
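The five cases can be driven mechanically. A sketch, assuming a hypothetical loop `sum_first` capped at n passes (function, n = 10 and m = 5 are all illustrative choices):

```python
def sum_first(values, n):
    """Hypothetical loop under test: sum at most n leading values."""
    total = 0
    for i, v in enumerate(values):
        if i >= n:          # loop capped at n passes
            break
        total += v
    return total

n = 10
# The slide's cases: 0, 1, 2, m (< n), n-1, n and n+1 attempted passes.
for passes in (0, 1, 2, 5, n - 1, n, n + 1):
    data = [1] * passes
    expected = min(passes, n)   # items beyond the n-th pass are ignored
    assert sum_first(data, n) == expected
print("simple-loop cases pass")
```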

22 Nested Loops

Test cases for nested loops:
1. Start at the innermost loop. Set all the outer loops to their minimum iteration parameter values
2. Test the min+1, typical, max-1 and max values for the innermost loop, while holding the outer loops at their minimum values
3. Move out one loop and set it up as in step 2, holding all other loops at typical values. Continue this step until the outermost loop has been tested

Test cases for concatenated loops:
- If the loops are independent of one another, treat each as a simple loop; otherwise treat them as nested loops
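Steps 1 and 2 can be sketched against a hypothetical pair of nested loops; the function and the min/typical/max values below are illustrative assumptions:

```python
def grid_sum(rows, cols):
    """Hypothetical nested loops under test: count rows x cols cells."""
    total = 0
    for r in range(rows):        # outer loop
        for c in range(cols):    # inner loop
            total += 1
    return total

MIN, TYPICAL, MAX = 1, 5, 10
# Step 2: sweep the innermost loop through min+1, typical, max-1 and max
# while the outer loop is held at its minimum iteration value (step 1).
for cols in (MIN + 1, TYPICAL, MAX - 1, MAX):
    assert grid_sum(MIN, cols) == MIN * cols
print("inner-loop sweep passes")
```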

23 [2] Black-Box Testing

Complementary to white-box testing. Derive external conditions that fully exercise all functional requirements.

[Diagram: requirements, events, input, output]

24 Black Box Strengths

Attempts to find errors in the following categories:
- Incorrect or missing functions
- Interface errors
- Errors in data structures or external database access
- Behaviour or performance errors
- Initialization or termination errors

Black box testing is performed during later stages of testing.

There are a variety of black box techniques:
- comparison testing (develop independent versions of the system)
- orthogonal array testing (sampling of an input domain which has several variables)

25 Black Box Methods

Equivalence partitioning:
- Divide the input domain into classes of data
- Each test case then uncovers whole classes of errors
- Examples: valid data (user-supplied commands, file names, graphical data such as mouse picks), invalid data (data outside the bounds of the program, physically impossible data, a proper value supplied in the wrong place)

Boundary value analysis:
- More errors tend to occur at the boundaries of the input domain
- Select test cases that exercise bounding values
- Example: an input condition specifies a range bounded by values a and b. Test cases should be designed with values a and b and just above and below a and b
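Both methods can be sketched against a hypothetical range check; the function `in_range` and the [1, 100] bounds are illustrative assumptions, not from the slides:

```python
def in_range(x, a=1, b=100):
    """Hypothetical input condition under test: accept values in [a, b]."""
    return a <= x <= b

a, b = 1, 100
# Boundary value analysis: test a and b, and values just outside them.
assert in_range(a) and in_range(b)                   # on the boundaries
assert not in_range(a - 1) and not in_range(b + 1)   # just outside
# Equivalence partitioning: one representative per class of data.
assert in_range(50)        # valid class
assert not in_range(-40)   # invalid class (below the range)
assert not in_range(140)   # invalid class (above the range)
print("boundary and partition cases pass")
```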

26 [3] Debugging

- Testing is a structured process that identifies an error's "symptoms"
- Debugging is a diagnostic process that identifies an error's "source"

[Diagram: test cases -> execution of cases -> results -> debugging -> suspected causes -> identified causes -> corrections -> regression tests -> new test cases]

27 Debugging Effort

Debugging effort splits into:
- time required to diagnose the symptom and determine the cause
- time required to correct the error and conduct regression tests

Definition (regression tests): re-execution of a subset of test cases to ensure that changes do not have unintended side effects.

28 Symptoms and Causes

- The symptom and cause may be geographically separated
- The symptom may disappear when another problem is fixed
- The cause may be due to a combination of non-errors
- The cause may be due to a system or compiler error
- The cause may be due to assumptions that everyone believes
- The symptom may be intermittent

29 Not All Bugs Are Equal

Damage ranges from mild, annoying and disturbing through serious, extreme and catastrophic to infectious.

Bug categories: function-related bugs, system-related bugs, data bugs, coding bugs, design bugs, documentation bugs, standards violations, etc.

30 Debugging Techniques

- Brute force:
  - Use when all else fails
  - Memory dumps and run-time traces
  - Produces a mass of information amongst which the error may be found
- Backtracking:
  - Works in small programs where there are few backward paths
  - Trace the source code backwards from the error to the source
- Cause elimination:
  - Create a set of "cause hypotheses"
  - Use error data (or further tests) to prove or disprove these hypotheses

But debugging is an art: some people have innate prowess and others don't.

31 Debugging Tips

- Don't immediately dive into the code; think about the symptom you are seeing
- Use tools (e.g. dynamic debuggers) to gain further insight
- If you are stuck, get help from someone else
- Ask these questions before "fixing" the bug:
  1. Is the cause of the bug reproduced in another part of the program?
  2. What bug might be introduced by the fix?
  3. What could have been done to prevent the bug in the first place?
- Be absolutely sure to conduct regression tests when you do "fix" the bug

