
1 Joe Meehean

2 "Testing is the process of executing a program with the intent of finding errors." -Glenford Myers

3
- Testing is done to find errors, not to prove there are none; successful tests find errors.
- Any useful program has errors. You won't be able to find them all, so find and fix the big ones.

4
- Treat the program as an opaque box: ignore its internal algorithms and structure, act like someone else wrote it, or give it to someone else to test.
- Provide a variety of inputs and check for correct output. It seems simple; it's not.

5
- Problem: even the simplest programs can have billions of possible inputs. We cannot possibly test them all; we must choose a useful subset.
- How do we choose that subset methodically?

6
- Equivalence partitioning: partition the input into sets and provide a test for each set. See the sketch below.
- Example: a month-number-to-month-name program.
  - Invalid partition 1: ..., -2, -1, 0
  - Valid partition: 1 through 12
  - Invalid partition 2: 13, 14, 15, ...
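A minimal sketch of the idea in C++. The monthName() function here is a hypothetical stand-in for the program under test (it is not from the slides); the test picks one representative input from each partition.

#include <cassert>
#include <string>

// Hypothetical month-number-to-month-name function standing in for the
// program under test; returns "" for out-of-range input.
std::string monthName(int month) {
    static const std::string names[12] = {
        "January", "February", "March", "April", "May", "June",
        "July", "August", "September", "October", "November", "December"};
    if (month < 1 || month > 12) {
        return "";
    }
    return names[month - 1];
}

// Equivalence partitioning: one representative test per partition.
int main() {
    assert(monthName(-5) == "");      // invalid partition 1 (below the valid range)
    assert(monthName(6) == "June");   // valid partition (1 through 12)
    assert(monthName(20) == "");      // invalid partition 2 (above the valid range)
    return 0;
}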

7
- Boundary-value analysis: select inputs from the boundaries of the equivalence partitions, where the inputs switch from one set to another.
- For the month program, the boundaries fall at 0/1 and 12/13. See the sketch below.
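A matching boundary-value sketch, reusing the hypothetical monthName() from the previous example (declared here, defined there): it probes one value on each side of the 0/1 and 12/13 boundaries.

#include <cassert>
#include <string>

std::string monthName(int month);  // same hypothetical function as in the previous sketch

// Boundary-value tests: one input on each side of every boundary
// between the partitions (0|1 and 12|13).
void testMonthNameBoundaries() {
    assert(monthName(0) == "");           // just below the valid range
    assert(monthName(1) == "January");    // lowest valid month
    assert(monthName(12) == "December");  // highest valid month
    assert(monthName(13) == "");          // just above the valid range
}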

8
- All-pairs testing: for each pair of input arguments, test all combinations of those two inputs, or all combinations of their boundary values. See the sketch below.
- Intuition: the most common errors involve one argument, the next most common involve two, and so on; two is the highest we can go without prohibitive cost.
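A sketch of the all-pairs idea for a two-argument function. The isValidDate() function and its simplified 31-day months are invented for illustration; the point is the nested loop that covers every combination of the boundary values chosen for each argument.

#include <iostream>

// Hypothetical two-argument function under test: is (month, day) a
// plausible calendar date? (Simplified: every month gets 31 days.)
bool isValidDate(int month, int day) {
    return month >= 1 && month <= 12 && day >= 1 && day <= 31;
}

// All-pairs over the two arguments: exercise every combination of the
// boundary values chosen for each argument.
int main() {
    const int monthValues[] = {0, 1, 12, 13};
    const int dayValues[]   = {0, 1, 31, 32};
    for (int m : monthValues) {
        for (int d : dayValues) {
            std::cout << "isValidDate(" << m << ", " << d << ") = "
                      << isValidDate(m, d) << "\n";
        }
    }
    return 0;
}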

9
- Fuzz testing: write a program that generates random inputs, and feed that random input into the program under test. See the sketch below.
- Monitor the tested program for crashes, unhandled exceptions, and wedging (hanging).
- The term was coined by Barton Miller.
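A bare-bones fuzz driver, again aimed at the hypothetical monthName() from the earlier sketches: it generates random integers and watches for unhandled exceptions. Detecting hangs would need a separate timeout mechanism not shown here.

#include <iostream>
#include <random>
#include <string>

std::string monthName(int month);  // hypothetical function under test (see earlier sketch)

// Minimal fuzz driver: feed many random integers into the function and
// report any input that escapes as an unhandled exception.
void fuzzMonthName(int trials) {
    std::mt19937 gen(std::random_device{}());
    std::uniform_int_distribution<int> anyInt(-1000000, 1000000);
    for (int i = 0; i < trials; ++i) {
        int input = anyInt(gen);
        try {
            monthName(input);
        } catch (...) {
            std::cout << "unhandled exception for input " << input << "\n";
        }
    }
}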

10
- Model-based testing: create a mathematical model of your program, create real test cases and abstract test cases, and compare the results of the real run with the abstract run.
- Diagram: a test idea yields a real input for the program and a model input for the model; the real output and the model output are then compared.

11
- Model-based testing: runtime complexity can serve as the model (it is not the only possible model).
- E.g., you expect your program to be O(N): run your program with input size X, use the model to calculate the expected runtime for size 2X, run the program with size 2X, and compare the results. A sketch follows.
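A sketch of the runtime-complexity model check. processData() is a placeholder for whatever O(N) function you are testing; the model simply predicts that doubling the input size should roughly double the measured time.

#include <chrono>
#include <iostream>
#include <vector>

void processData(const std::vector<int>& data);  // hypothetical O(N) function under test

// Time one run of the function on an input of size n.
double secondsFor(std::size_t n) {
    std::vector<int> data(n, 1);
    auto start = std::chrono::steady_clock::now();
    processData(data);
    auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration<double>(stop - start).count();
}

// Model: if the function is O(N), doubling the input size should
// roughly double the runtime.
void checkLinearModel(std::size_t x) {
    double baseline = secondsFor(x);
    double doubled  = secondsFor(2 * x);
    std::cout << "model predicts ~" << 2 * baseline << " s for size " << 2 * x
              << ", measured " << doubled << " s\n";
}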

12
- Exploratory testing: play with the program to see how it works; then, knowing how it works, try to break it.
- "It looks like it works like this." "What if I try this?"
- Make fun of a program until it cries.

13
- How and when should we use these techniques?
- Equivalence partitioning: ALWAYS.
- Boundary-value analysis: ALWAYS.
- All-pairs: as needed, if your program takes pairs of inputs.

14
- How and when should we use these techniques?
- Fuzz testing: in school, try this by hand (make up random inputs); at work, write software to support it.
- Model-based: when it makes sense. Is your program behaving strangely but still producing correct output?

15
- How and when should we use these techniques?
- Exploratory: ALWAYS.

16

17
- Create tests to evaluate the source code: specific functions, specific lines, specific conditions.
- This is the most common kind of testing done by developers.

18
- API testing: ensure that every public and private method of the Application Programming Interface (API) does what it should, using either black box techniques (the method is the black box) or other white box techniques.
- Testing code can be added directly to the program.

19
- API testing example: test LCVector::doubleCapacity(). A sketch follows.
  - Does it double the capacity?
  - Does it copy all of the items over to the new array?
  - Print the raw array before doubling the capacity and again after.
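A possible shape for that test, written as ordinary C++ that could be added to the program. LCVector's interface is assumed here (push_back(), size(), capacity(), get()); adjust the calls to match the real class.

#include <cassert>
#include <iostream>
#include "LCVector.h"  // course class; the interface used below is assumed

// Sketch of an API test for LCVector::doubleCapacity().
void testDoubleCapacity() {
    LCVector v;
    for (int i = 0; i < 5; ++i) {
        v.push_back(i);
    }
    int oldCapacity = v.capacity();

    v.doubleCapacity();

    assert(v.capacity() == 2 * oldCapacity);  // did the capacity double?
    for (int i = 0; i < v.size(); ++i) {
        assert(v.get(i) == i);                // were all items copied over?
    }
    std::cout << "doubleCapacity test passed\n";
}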

20
- Code coverage: a technique to evaluate black box tests. It ensures that every piece of code is executed at least once, and it requires software support.
- Types: function coverage, statement coverage, and condition coverage (every boolean sub-expression of a condition is tested for both true and false; see the sketch below).
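A small illustration of condition coverage. canRent() and its rule are invented for the example; the three calls are chosen so that each boolean sub-expression of the condition evaluates to both true and false somewhere in the set.

#include <cassert>

// Hypothetical function with a compound condition.
bool canRent(int age, bool hasLicense) {
    return age >= 25 && hasLicense;  // two boolean sub-expressions
}

// Condition coverage: each sub-expression (age >= 25, hasLicense)
// takes both true and false somewhere in the test set.
void testCanRentConditions() {
    assert(canRent(30, true));    // age >= 25: true,  hasLicense: true
    assert(!canRent(30, false));  // age >= 25: true,  hasLicense: false
    assert(!canRent(20, true));   // age >= 25: false, hasLicense: true
}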

21
- Fault injection.
- What is a fault? An unexpected event or condition; faults may cause errors (e.g., a disk drive fails, memory is corrupted, ...).
- What is an error? Output that does not meet the spec, a crash, or a hang.

22
- Fault injection: artificially cause faults at runtime and see what the program does.
- Try to inject faults at specific times, e.g., modify the OS to pretend the disk crashed during a file read, and observe the program's behavior. A sketch follows.
- It is kind of like extra-mean fuzz testing.
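A toy fault-injection sketch. readConfig() and its injectFault flag are hypothetical; a real setup would inject the failure below the program (e.g., in the OS or a stubbed library), but the structure of the test is the same: force the fault, then check that the program handles it instead of crashing.

#include <iostream>
#include <stdexcept>
#include <string>

// Hypothetical configuration reader with a fault-injection hook: when
// injectFault is true it behaves as if the disk failed mid-read.
std::string readConfig(const std::string& path, bool injectFault) {
    if (injectFault) {
        throw std::runtime_error("simulated disk failure while reading " + path);
    }
    return "contents of " + path;  // real file I/O omitted in this sketch
}

// Inject the fault at a specific point and check that the program
// reports it gracefully instead of crashing or hanging.
void testDiskFaultDuringRead() {
    try {
        readConfig("settings.cfg", /*injectFault=*/true);
        std::cout << "ERROR: fault went unnoticed\n";
    } catch (const std::runtime_error& e) {
        std::cout << "fault handled: " << e.what() << "\n";
    }
}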

23
- Testing all parts of your program does not guarantee it is bug-free.
- The program may be missing code.
- Testing will not find data-sensitivity errors: it covers all lines of code, not all possible inputs.

24
- How and when should we use these techniques?
- API testing: ALWAYS; test it as you build it.
- Code coverage: when tools are available and not time-prohibitive (this may be a requirement of your job).

25
- How and when should we use these techniques?
- Fault injection: when your program must work all the time, e.g., in graduate school or when designing robust, mission-critical systems such as autopilot software.

26

