
1 TCSS 360, Spring 2005 Lecture Notes: Testing. Relevant reading: Object-Oriented Software Engineering, Ch. 9, B. Bruegge and A. Dutoit.

2 Case study: Scrabble moves. Let's think about code to validate moves on a Scrabble board. Where can we start a move? Where can tiles be in relation to the starting tile? How do we compute scores for a move? How do we do word challenges?
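A minimal sketch of one possible starting point, in Java. The Placement and MoveValidator types and the simplified rules checked here (single row or column, first move covers the center, later moves touch existing tiles) are illustrative assumptions, not the course's actual design; real Scrabble validation also needs contiguity and word checks.

    import java.util.List;

    // Hypothetical types for illustration; a real project's design may differ.
    class Placement {
        final int row, col;
        final char letter;
        Placement(int row, int col, char letter) {
            this.row = row; this.col = col; this.letter = letter;
        }
    }

    class MoveValidator {
        static final int SIZE = 15, CENTER = 7;
        final char[][] board = new char[SIZE][SIZE];   // '\0' means an empty square

        // Simplified rules: tiles must share a row or column, land on empty
        // squares, and the move must cover the center square (first move) or
        // touch at least one tile already on the board.
        boolean isValid(List<Placement> move) {
            if (move.isEmpty()) return false;
            boolean sameRow = move.stream().allMatch(p -> p.row == move.get(0).row);
            boolean sameCol = move.stream().allMatch(p -> p.col == move.get(0).col);
            if (!sameRow && !sameCol) return false;
            for (Placement p : move) {
                if (p.row < 0 || p.row >= SIZE || p.col < 0 || p.col >= SIZE
                        || board[p.row][p.col] != '\0') return false;
            }
            boolean coversCenter = move.stream()
                    .anyMatch(p -> p.row == CENTER && p.col == CENTER);
            return coversCenter || move.stream().anyMatch(this::touchesExistingTile);
        }

        private boolean touchesExistingTile(Placement p) {
            int[][] deltas = {{-1, 0}, {1, 0}, {0, -1}, {0, 1}};
            for (int[] d : deltas) {
                int r = p.row + d[0], c = p.col + d[1];
                if (r >= 0 && r < SIZE && c >= 0 && c < SIZE && board[r][c] != '\0')
                    return true;
            }
            return false;
        }
    }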

3 Bugs. Even if we correctly discover all cases for placing words on the Scrabble board, it is very likely that we'll have some bugs when we code it. Bugs are inevitable in any complex software system. A bug can be very visible, or it can hide in your code until a much later date. We can hunt down the cause of a known bug using print statements or our IDE's debugger... but how do we discover all of the bugs in our system, even those with low visibility? Answer: testing and quality assurance practices.

4 Testing. What is the overall goal of testing? What claims can we make when testing "passes" or "fails"? Can we prove that our code has no bugs? Testing: a systematic attempt to reveal the presence of errors (to "falsify" the system), accomplished by exercising defects in the system and revealing problems. Failed test: an error was demonstrated. Passed test: no error was found, so far. Testing is not used to show the absence of errors in software, and it does not directly reveal the actual bugs in the code.
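As a concrete illustration (a plain-Java sketch, not an example from the reading; isLeapYear is a deliberately buggy hypothetical method under test), a failing check demonstrates an error, while a passing check only tells us that this particular input found no error:

    public class LeapYearTest {
        static boolean isLeapYear(int y) {       // deliberately buggy:
            return y % 4 == 0;                   // ignores the 100/400 rules
        }

        public static void main(String[] args) {
            check(isLeapYear(2004), "2004 is a leap year");       // passes
            check(!isLeapYear(1900), "1900 is not a leap year");  // fails: reveals the bug
            check(isLeapYear(2000), "2000 is a leap year");       // passes, by luck
        }

        static void check(boolean ok, String label) {
            System.out.println((ok ? "PASS: " : "FAIL: ") + label);
        }
    }

Note that the two passing checks do not show the code is correct; only the failing one tells us something definite.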

5 Difficulties of testing. Testing is seen as a beginner's job, often assigned to the least experienced team members. Testing is often done as an afterthought (if at all). Testing cannot be "conquered"; it is impossible to completely test a system. Example: the Space Shuttle Columbia launch, 1981, where a programmer had changed a 50 ms delay to 80 ms, which led to a 1-in-67 chance of launch failure. Top-down testing was dismissed by the shuttle's software engineers, because when an error is found that way, it is impossible to fix without a redesign or great expense.

6 Software reliability. What is software reliability? How do we measure reliability? Reliability: how closely the system conforms to its expected behavior. Software reliability: the probability that a software system will not cause failure for a specified time under specified conditions. Reliability is measured by uptime, MTTF (mean time to failure), and program crash data.
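For instance, MTTF can be estimated as the average of observed times between failures; a minimal sketch (the failure data below is invented for illustration):

    public class MttfEstimate {
        public static void main(String[] args) {
            // Hypothetical hours of operation between successive failures.
            double[] hoursBetweenFailures = {120.0, 340.0, 95.0, 410.0};
            double total = 0;
            for (double h : hoursBetweenFailures) total += h;
            double mttf = total / hoursBetweenFailures.length;
            System.out.printf("Estimated MTTF: %.1f hours%n", mttf);  // 241.3 hours
        }
    }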

7 Faults. What is the difference between a fault and an error? What are some kinds of faults? Error: incorrect software behavior. Example: a message box's text said "Welcome null." Fault: the mechanical or algorithmic cause of an error. Example: the account name field is not set properly. A fault is not an error, but it can lead to one. We need requirements to specify the desired behavior, and we need to see the system deviate from that behavior, to have a failure.
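A minimal sketch of the slide's example, showing how the fault (a name field that is never set) produces the visible error ("Welcome null."); the Account class is invented for illustration:

    public class Account {
        private String name;          // fault: never initialized, stays null

        // A setter exists, but the faulty code path never calls it.
        public void setName(String name) { this.name = name; }

        public String welcomeMessage() {
            return "Welcome " + name + ".";   // the error surfaces here
        }

        public static void main(String[] args) {
            Account a = new Account();
            // Because setName was never called, the fault leads to the error:
            System.out.println(a.welcomeMessage());  // prints "Welcome null."
        }
    }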

8 Some types of faults. Algorithmic faults: the design produces a poor algorithm; the implementation fails to match the spec; subsystems don't communicate properly. Mechanical faults: an earthquake; a virtual machine failure (why is this a "mechanical" fault?).

9 Quality control techniques. Any large system is bound to have faults. What are some quality control techniques we can use to deal with them? Fault avoidance: prevent errors by finding faults before the system is released. Fault detection: find existing faults without recovering from the errors. Fault tolerance: the system can recover from failures by itself.

10 Fault avoidance techniques. Development methodologies: use requirements and design to minimize the introduction of faults; get clear requirements; minimize coupling. Configuration management: don't allow changes to subsystem interfaces. Verification: find faults in system execution. Problems: verification is not yet mature, assumes the requirements are correct, and assumes pre/postconditions are adequate. Review: manual inspection of the system. Walkthrough: the developer presents code to the team. Inspection: the team looks at the code without the developer's guidance. Reviews have been shown to be effective at finding errors.

11 Fault detection techniques. Fault detection: find existing faults without recovering from the errors. Debugging: move through execution steps to reach the erroneous state. Testing: tries to expose errors in a planned way (we are here). A good test model has test cases and test data that identify errors. Ideally, every possible input to a system should be tested, but this is prohibitively time-consuming.

12 Kinds of testing. What is the difference between "unit" testing, "integration" testing, and so on? Why do we use many different kinds of tests? Unit testing: looks for errors in objects or subsystems. Integration testing: finds errors in connecting subsystems together. System structure testing: integration testing of all parts of the system together. System testing: tests the entire system's behavior as a whole, with respect to scenarios and requirements. Functional testing: tests whether the system meets its requirements. Performance testing: tests nonfunctional requirements and design goals. Acceptance / installation testing: done by the client.

13 Types of testing (Fig. 9-2, p. 335).

14 Fault tolerance techniques. Fault tolerance: recovery from failure by the system itself. Example: database transaction rollbacks. Modular redundancy: assumes failures usually occur at the subsystem level, and assigns more than one component to the same task. Example: a RAID-1 hard disk array uses more than one hard disk to store the same data, so that if one disk breaks down, the rest still contain the important data.
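A sketch of the transaction-rollback example using JDBC; the connection URL and the accounts table are placeholders, not a real configuration. If either update fails, rollback() undoes the partial work so the system recovers to a consistent state:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class TransferExample {
        public static void main(String[] args) throws SQLException {
            // Placeholder URL; a real program would point at an actual database.
            try (Connection conn = DriverManager.getConnection("jdbc:yourdb:...")) {
                conn.setAutoCommit(false);      // begin a transaction
                try (Statement stmt = conn.createStatement()) {
                    stmt.executeUpdate("UPDATE accounts SET balance = balance - 100 WHERE id = 1");
                    stmt.executeUpdate("UPDATE accounts SET balance = balance + 100 WHERE id = 2");
                    conn.commit();              // both updates succeed together
                } catch (SQLException e) {
                    conn.rollback();            // failure: undo the partial work,
                                                // leaving the data consistent
                }
            }
        }
    }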

15 Testing concepts. What is a test case? What is a failure? How are they related? Failure: a particular instance of a general error, which is caused by a fault. Test case: a set of inputs and expected outputs intended to cause failures.

16 Test cases. What are the five elements of a well-written test case, according to the authors? (Hint: one of these is an "oracle." What is this?) Name: a descriptive name for what is being tested. Location: the full path or URL of the test. Input: the arguments, commands, or input files to use, entered by the tester or a test driver. Oracle: the expected output. Log: the actual output produced.
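One way to picture those five elements in code (a sketch; the class and its field names simply mirror the slide's list and are not from the reading):

    // A simple record of the five elements the authors list for a test case.
    public class TestCase {
        String name;     // e.g. "testFirstMoveMustCoverCenter"
        String location; // full path or URL of the test, e.g. "tests/MoveTest.java"
        String input;    // arguments, commands, or input files the test uses
        String oracle;   // the expected output, written down before the run
        String log;      // the actual output produced by the run

        boolean passed() {
            // The test passes when the actual output matches the oracle.
            return oracle != null && oracle.equals(log);
        }
    }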

17 Black and white box testing. What is the difference between "black-box" and "white-box" testing? Black-box test: focuses on the input/output behavior of each component. White-box test: focuses on the internal states of objects; requires internal knowledge of the component to craft the input data. Example: knowing that the internal data structure for a spreadsheet program uses 256 rows and columns, choose test cases that test 255 or 257 to probe near that boundary.
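A sketch of that boundary idea: knowing the internal limit is 256, we probe just below, at, and just above it. The component under test (isValidRow) and its limit are hypothetical:

    public class SpreadsheetBoundaryTest {
        static final int MAX_ROWS = 256;   // known from the internal design

        // Hypothetical component under test: accepts row indexes 0..255.
        static boolean isValidRow(int row) { return row >= 0 && row < MAX_ROWS; }

        public static void main(String[] args) {
            // White-box boundary cases chosen around the internal 256-row limit.
            System.out.println("row 255: " + isValidRow(255));  // true  (last valid)
            System.out.println("row 256: " + isValidRow(256));  // false (first invalid)
            System.out.println("row 257: " + isValidRow(257));  // false
        }
    }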

18 Test stubs and drivers. Test stub: a partial implementation on which the tested component depends; it simulates the parts that are called by the tested component. Bridge pattern (Fig. 9-11, p. 342): isolate subsystems behind interfaces, to facilitate stubs. Test driver: code that depends on the test case (runs it) and on the tested component. Correction: a change made to fix faults; it can be a simple bug fix or a redesign, and it is likely to introduce new faults. Problem tracking: documenting each failure and its remedy.
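A minimal sketch of a stub and driver pair, assuming a hypothetical WordChecker component that normally depends on a dictionary subsystem; the interface decouples them in the spirit of the Bridge-style isolation the slide mentions:

    // The interface isolates the tested component from the real subsystem.
    interface Dictionary {
        boolean isWord(String s);
    }

    // Test stub: a partial implementation simulating the real dictionary.
    class DictionaryStub implements Dictionary {
        public boolean isWord(String s) {
            return s.equals("CAT") || s.equals("DOG");  // canned answers only
        }
    }

    // Component under test: depends only on the Dictionary interface.
    class WordChecker {
        private final Dictionary dict;
        WordChecker(Dictionary dict) { this.dict = dict; }
        boolean accept(String word) { return dict.isWord(word.toUpperCase()); }
    }

    // Test driver: wires the stub to the component and runs the test cases.
    public class WordCheckerDriver {
        public static void main(String[] args) {
            WordChecker checker = new WordChecker(new DictionaryStub());
            System.out.println(checker.accept("cat"));   // expected: true
            System.out.println(checker.accept("xyzzy")); // expected: false
        }
    }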

19 Regression testing. What is regression testing, and why is it important? Regression testing: re-executing all prior tests after a code change, often done with scripts and automated testing. It is used to ensure that old fixed bugs are still fixed: a new feature or a fix for one bug can cause a new bug or reintroduce an old one. It is especially important in evolving object-oriented systems.
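A sketch of the idea: once a bug is fixed, a test that reproduces it stays in the suite and is re-run after every change. The wordCount method and its old empty-input bug are invented for illustration:

    public class RegressionSuite {
        // Hypothetical method that once mishandled blank input (a fixed bug).
        static int wordCount(String s) {
            String trimmed = s.trim();
            return trimmed.isEmpty() ? 0 : trimmed.split("\\s+").length;
        }

        public static void main(String[] args) {
            // Regression test pinned to the old bug: blank input used to return 1.
            assertEquals(0, wordCount("   "));
            // Ordinary cases, re-run on every change to catch new breakage.
            assertEquals(2, wordCount("hello world"));
            System.out.println("All regression tests passed.");
        }

        static void assertEquals(int expected, int actual) {
            if (expected != actual)
                throw new AssertionError("expected " + expected + " but got " + actual);
        }
    }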

20 For next time... In the next part of this reading, we'll learn about: ways to conduct good unit tests that cover many representative cases of the program's usage, without covering every possible input; strategies for combining components in integration testing; useful types of system testing; how to plan and document a testing policy; and something about sandwiches (?).

