Presentation on theme: "CHAPTER 18 SOFTWARE TESTING STRATEGIES"— Presentation transcript:


2 TOPICS
A strategic approach to software testing
Unit Testing
Integration Testing
Validation Testing
System Testing
The ART of Debugging
Summary

3 Generic characteristics of software testing strategies:
Testing begins at the module level and works outward toward the integration of the entire computer-based system.
Different testing techniques are required at different points in time.
Testing is conducted by the software developer and, for large projects, by an ITG (Independent Test Group).
Testing and debugging are different activities, but debugging is essential in any testing strategy.

4 Verification and Validation
Verification -- Does the product meet its specifications? Are we building the product right?
Validation -- Does the product perform as desired? Are we building the right product?
You cannot test in quality; however, testing is an important part of achieving quality. Testing can expose poor designs as well as find bugs.
V&V is more than just testing:
It includes the process by which tests are designed.
It includes the signoff process by which test failures are certified to have been resolved.
It includes a reporting structure by which concerns and problems reach upper management without endangering the career of the person reporting them. This requires some form of independent reporting channel.
It includes the funding of the V&V activity. Project managers should not be able to cut V&V funding to get the product out sooner.
Possible homework problem: suppose you are the manager of a large software project, you are $1 million over budget, and the V&V budget is $4 million. What do you do?
Testers should be independent of builders -- builders have both a vested interest and may have a built-in limited perspective. The test group must still be closely tied to the project in order to understand it. The V&V group may follow the project and evaluate its processes as it goes.

5 Software Testing Strategy
A Software Testing Strategy

6 Software Testing Strategy

7 Software Error Model
f(t) = expected cumulative number of errors found by time t
λ0 = initial failure rate
p = exponential reduction in failure rate as errors are repaired
f(t) = (1/p) ln(λ0 p t + 1)
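The error model above can be evaluated directly. A minimal sketch in Python (the function name and sample parameter values are illustrative, not from the slides):

```python
import math

def errors_found(t, lam0, p):
    """Expected cumulative errors found by time t under the model
    f(t) = (1/p) * ln(lam0 * p * t + 1), where lam0 is the initial
    failure rate and p the exponential reduction factor."""
    return (1.0 / p) * math.log(lam0 * p * t + 1.0)

# At t = 0 no errors have been found yet; the curve then rises but flattens.
assert errors_found(0, 10.0, 0.5) == 0.0
```

Note that f(t) grows without bound but ever more slowly, which matches the intuition that each repaired error reduces the failure rate.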

8 STRATEGIC APPROACH
Issues to be addressed to develop a successful software testing strategy:
Specify product requirements in a quantifiable manner long before testing commences. Product requirements include portability (to which platforms?), usability, and so on.
State testing objectives explicitly: test coverage, test effectiveness, mean time to failure, remaining defect density, etc.
Understand the users of the software and develop a profile for each user category.
Develop a testing plan that emphasizes "rapid-cycle testing": test a little, fix problems, deliver a little, get feedback.

9 STRATEGIC APPROACH
Issues to be addressed to develop a successful software testing strategy (continued):
Build robust software that is designed to test itself.
Use effective formal technical reviews as a filter prior to testing.
Conduct formal technical reviews to assess the test strategy and test cases.
Develop a continuous-improvement approach to the testing process.

10 UNIT TESTING Unit testing -- focuses on the smallest element of software design, viz. the module. It corresponds to class testing in the OO context and makes heavy use of white-box testing.

11 UNIT TESTING Unit Test Generation Considerations:
Review design information to develop unit test cases covering:
interface
local data structures
boundary conditions
independent paths
error-handling paths
[Figure: a test driver feeds test cases to the module to be tested; stubs stand in for its subordinate modules.]
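The driver-and-stub arrangement in the figure can be sketched in a few lines. A minimal Python illustration (the `order_total` module and `tax_rate` subordinate are hypothetical, invented for the example):

```python
# Hypothetical module under test: computes an order total by calling
# a subordinate tax-rate module that may not be built yet.
def order_total(subtotal, tax_rate):
    return subtotal * (1 + tax_rate(subtotal))

# Stub standing in for the unbuilt subordinate: returns a fixed,
# predictable value so the test can focus on order_total itself.
def tax_rate_stub(subtotal):
    return 0.10

# Test driver: supplies the test-case data and checks the result.
def test_order_total():
    assert abs(order_total(100.0, tax_rate_stub) - 110.0) < 1e-9

test_order_total()
```

The driver exercises the module's interface and independent paths; the stub keeps the test isolated from modules not yet integrated.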

12 Unit Test Generation Interface considerations
Number of input parameters equals number of arguments?
Parameter and argument attributes match?
Parameter and argument units match?
Argument order correct (if important)?
Number and order of arguments for built-ins correct?
References to parameters not associated with the current entry point?
Attempts to modify input-only arguments?
Global variable definitions consistent?
Constraints passed as arguments?
A good choice of language can help with many of these: if the language can express parameter modes, argument types, and constraints, the compiler can check them. For example, see Ada.
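Language features can turn several of the interface checks above into compile- or call-time errors. A small sketch in Python (the `transfer` function is hypothetical), using keyword-only parameters to defend against argument-order mistakes:

```python
# Keyword-only parameters (everything after *) guard against the
# argument-order faults listed above: a positional call is rejected.
def transfer(*, amount_cents, from_account, to_account):
    return (from_account, to_account, amount_cents)

# Wrong-order positional call fails loudly instead of silently
# corrupting data:
try:
    transfer(100, "A", "B")
except TypeError:
    pass  # expected: positional arguments are not accepted

assert transfer(amount_cents=100, from_account="A", to_account="B") == ("A", "B", 100)
```

Statically typed languages go further, checking parameter counts and attribute matches at compile time, which is the point the slide makes about Ada.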

13 Unit Test Generation External I/O considerations
File attributes correct?
OPEN/CLOSE statements correct?
Format specification matches I/O statement?
Buffer size matches record size?
Files opened before use?
EOF handled correctly?
I/O errors handled?
Textual errors in output?
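The EOF item on this checklist is easy to probe with an in-memory file. A minimal sketch in Python (the `read_records` function is hypothetical):

```python
import io

def read_records(f):
    """Read newline-delimited records until EOF, skipping blank lines."""
    records = []
    for line in f:          # iteration stops cleanly at end of file
        line = line.strip()
        if line:
            records.append(line)
    return records

# EOF handled correctly? An empty file must yield no records, not an error.
assert read_records(io.StringIO("")) == []
# Normal case, including a trailing newline right before EOF.
assert read_records(io.StringIO("a\nb\n")) == ["a", "b"]
```

Using `io.StringIO` lets the boundary cases (empty file, missing final newline) be tested without touching the filesystem.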

14 Unit Test Generation Data structure considerations
Improper or inconsistent typing? Erroneous initialization or default values? Incorrect variable names? Inconsistent data types? Underflow, overflow and addressing exceptions? Again, some languages can help this a great deal.
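"Erroneous initialization or default values" is worth a concrete illustration. In Python, a mutable default argument is initialized once and shared across calls, a classic latent data-structure bug (the functions below are invented for the example):

```python
def append_item_buggy(item, items=[]):   # shared mutable default: init bug
    items.append(item)
    return items

def append_item_fixed(item, items=None): # correct: fresh list per call
    if items is None:
        items = []
    items.append(item)
    return items

# The buggy version accumulates state across calls:
assert append_item_buggy(1) == [1]
assert append_item_buggy(2) == [1, 2]    # previous call's data leaks in
# The fixed version initializes correctly on every call:
assert append_item_fixed(1) == [1]
assert append_item_fixed(2) == [2]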

15 Unit Test Generation Test cases must cover all execution paths
Common computational errors to be checked:
incorrect arithmetic precedence
mixed-mode operations
incorrect initialization
precision inaccuracy
incorrect symbolic representation of an expression
Comparison problems:
incompatible data types in comparisons
incorrect logical operators or precedence
equality comparisons on floating-point values
incorrect comparison of variables
Loop problems:
improper (or nonexistent) termination
improperly modified loop variables -- the best defense is for the programmer not to modify loop variables explicitly, and not to use them outside the loop.
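The floating-point equality item deserves a concrete demonstration, since it is one of the most common comparison faults. A short Python illustration:

```python
import math

# Equality on floats: 0.1 + 0.2 is not exactly 0.3 in binary floating
# point, so a direct == comparison silently fails.
assert (0.1 + 0.2 == 0.3) is False

# Compare with a tolerance instead:
assert math.isclose(0.1 + 0.2, 0.3, rel_tol=1e-9)
```

A test case that feeds values like 0.1 and 0.2 through arithmetic paths will catch code that uses `==` where a tolerance comparison is required.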

16 Unit Test Generation Error handling tests
Exception-handling is incorrect? Error description is unintelligible, insufficient or incorrect? Error condition causes system interrupt before error handling completed? The process of identifying all of the situations that must be tested aids the developer in understanding how the code should be written and what tests and handling need to be built into the code.
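An error-handling test checks both that the right exception fires and that its message is intelligible. A minimal sketch in Python (the `parse_age` function and its message are invented for the example):

```python
def parse_age(text):
    """Hypothetical function whose error handling we want to exercise."""
    value = int(text)                 # raises ValueError on non-numeric input
    if not 0 <= value <= 150:
        raise ValueError(f"age out of range: {value}")
    return value

def expect_value_error(arg):
    """Assert that parse_age rejects arg, returning the error message."""
    try:
        parse_age(arg)
    except ValueError as e:
        return str(e)
    raise AssertionError(f"no exception raised for {arg!r}")

# The exception fires, and the description is intelligible:
assert "age out of range" in expect_value_error("200")
expect_value_error("not-a-number")    # int() itself raises ValueError
assert parse_age("42") == 42
```

Each error path gets its own test, which is exactly the enumeration of situations the slide recommends.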

17 INTEGRATION TESTING A systematic approach for constructing the program structure while conducting tests to uncover errors associated with interfacing.
There is a tendency toward non-incremental integration -- the "big bang" approach -- which usually ends in chaos.
Incremental integration: the program is constructed and tested in small segments.
Top-down integration testing
Bottom-up integration testing


Top-down integration: begin construction and testing with the main module, with stubs substituted for all subordinate modules. Subordinate stubs are replaced one at a time by actual modules, and tests are conducted as each module is integrated. On completion of each set of tests, another stub is replaced with the real module. Regression testing may be conducted to ensure that new errors have not been introduced.
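The stub-replacement cycle can be sketched compactly. A hypothetical Python example (the `report` module and its subordinates are invented for illustration):

```python
# Hypothetical main module depending on two subordinates: one that
# fetches data and one that formats it.
def report(fetch, fmt):
    return fmt(fetch())

# Phase 1: both subordinates stubbed out; test the main module's control flow.
fetch_stub = lambda: [1, 2, 3]
fmt_stub = lambda data: str(data)
assert report(fetch_stub, fmt_stub) == "[1, 2, 3]"

# Phase 2: the real formatter replaces its stub; rerun the test with the
# integrated module to confirm the swap introduced no new errors.
def fmt_real(data):
    return ", ".join(str(x) for x in data)

assert report(fetch_stub, fmt_real) == "1, 2, 3"
```

Each phase integrates one real module and re-tests, which is the one-stub-at-a-time discipline the slide describes.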

20 Top Down Approach - Use Stubs

21 INTEGRATION TESTING Top-Down Approach
Advantages:
Verifies major control or decision points early in the test process.
With depth-first integration testing, a complete function of the software can be demonstrated -- a confidence builder for developer and customer.
Disadvantages:
Since stubs replace lower-level modules, no significant data can flow upward to the main module.

22 INTEGRATION TESTING Bottom Up Approach
This approach begins construction and testing with the modules at the lowest levels of the program structure.
Low-level modules are combined into clusters.
A driver is written to coordinate test-case input and output.
The cluster is tested.
Drivers are removed and clusters are combined, moving upward in the program hierarchy.
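A bottom-up cluster and its driver can be sketched as follows. The two low-level modules and the driver below are hypothetical, invented for the example:

```python
# Hypothetical low-level modules forming a cluster.
def tokenize(text):
    return text.split()

def count_words(tokens):
    counts = {}
    for t in tokens:
        counts[t] = counts.get(t, 0) + 1
    return counts

# Test driver: coordinates test-case input and output for the cluster.
def cluster_driver(text):
    return count_words(tokenize(text))

assert cluster_driver("a b a") == {"a": 2, "b": 1}
```

Once the cluster passes, the driver is discarded and the cluster is wired into the next level up, where a higher-level driver takes over.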

23 Bottom Up Approach

24 Bottom Up Approach
Advantages: easier test-case design, and no stubs are needed.
Disadvantages: the program as an entity does not exist until the last module is added.
Sandwich testing -- a combined approach: a top-down strategy for the upper levels and a bottom-up strategy for the subordinate levels.

25 INTEGRATION TESTING Regression Testing
Re-execution of some subset of tests already conducted, to ensure that new changes do not have unintended side effects.
The regression test suite should contain three different classes of test cases:
A representative sample of tests that exercise all software functions.
Additional tests that focus on functions likely to be affected by the change.
Tests that focus on the software components that have changed.
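The three classes of regression cases can be organized directly as a test suite. A minimal sketch in Python's `unittest` (the `discount` function is a hypothetical just-changed component, invented for the example):

```python
import unittest

# Hypothetical component that was just changed.
def discount(price, code):
    return price * 0.9 if code == "SAVE10" else price

class RegressionSuite(unittest.TestCase):
    def test_representative_sample(self):
        # Class 1: representative sample exercising ordinary function paths.
        self.assertEqual(discount(100.0, ""), 100.0)

    def test_likely_affected(self):
        # Class 2: functions likely to be affected by the change.
        self.assertAlmostEqual(discount(100.0, "SAVE10"), 90.0)

    def test_changed_component(self):
        # Class 3: tests focused directly on the changed component.
        self.assertEqual(discount(0.0, "SAVE10"), 0.0)

if __name__ == "__main__":
    unittest.main()
```

Rerunning this suite after every change is the "re-execution of a subset of tests" the slide calls for.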

26 INTEGRATION TESTING Integration Test Documentation
1. Scope of testing
2. Test plan: phases and builds, schedule, overhead software, environment/resources, order of integration
3. Test procedure n: unit test environment, test-case data, expected results for build n
4. Actual test results
5. References and appendix
Note: it is important to keep all of the test results, even for things that appear to be working. This is needed so that when failures occur later on, one can trace back to the tests that have been performed, and then add additional tests.

27 VALIDATION TESTING Provides final assurance that the software meets all functional, behavioral, and performance requirements. Black-box testing techniques are used exclusively.
After each validation test case, either the software conforms to specification, or a deviation from specification is detected and added to a deficiency list that must be worked off.
Alpha and beta testing:
Alpha test -- conducted at the developer's site by the customer.
Beta test -- conducted at the customer's site in a "live" environment.

28 SYSTEM TESTING A series of tests to verify that all system elements have been properly integrated.
Recovery testing: forces the software to fail in a variety of ways and verifies that recovery is properly performed.
Security testing: attempts to verify the software's protection mechanisms. The software designer tries to make the cost of penetration exceed the value of the information obtained by breaking in.

29 SYSTEM TESTING
Stress testing: executes the system in a manner that demands resources in abnormal quantity, frequency, or volume.
Performance testing: tests the run-time performance of software within the context of an integrated system.
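A performance test can be as simple as running an operation under heavier-than-normal load against a time budget. A minimal sketch in Python (the operation, volume, and 5-second budget are all assumptions for illustration):

```python
import time

def sort_numbers(data):
    return sorted(data)

def test_performance():
    # Abnormal volume for this example: 200,000 reverse-ordered items.
    data = list(range(200_000, 0, -1))
    start = time.perf_counter()
    result = sort_numbers(data)
    elapsed = time.perf_counter() - start
    # Correctness still matters under load...
    assert result[0] == 1 and result[-1] == 200_000
    # ...and the run must stay within the (generous, assumed) budget.
    assert elapsed < 5.0, f"too slow: {elapsed:.2f}s"

test_performance()
```

Real performance testing would measure against quantified requirements (as slide 8 recommends) rather than an arbitrary budget, but the shape of the test is the same.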

30 CLCS Test Approach
[Figure: test flow across groups (Developers; System Integration and Test Group; Validation Group; Application S/W IPT; Users) and environments: unit test and design in the Development Environment; early user evaluation, unit integration, and CSCI integration in the Integration Environment (COTS H/W on dock); system test, system S/W validation tests, and user acceptance tests of the user application S/W in the Operations Environment, ending in system delivery and acceptance.]

31 THE ART OF DEBUGGING Debugging is a consequence of successful testing -- when a test case uncovers an error, it is the debugging process that results in the removal of the error.
Debugging is an ART. The external manifestation of an error and the cause of the error normally do not share an obvious relationship.

32 THE ART OF DEBUGGING The Debugging Process
[Figure: test cases drive execution; results reveal suspected causes; debugging narrows these to identified causes; corrections are applied, followed by regression tests and additional tests.]

33 THE ART OF DEBUGGING Debugging Approaches
Brute force: take memory dumps and invoke run-time traces. Least efficient, yet quite common.
Backtracking: once an error is uncovered, trace your way back from the symptom to the cause of the error.
Cause elimination: isolate potential causes, devise cause hypotheses, and run tests to isolate the bug.
Use of debugging tools.

34 COMMENTS Should the software developer be involved with testing?
Developers have a vested interest in demonstrating that their software is error-free.
Developers (psychologically) feel that testing is destructive.
When are we done with testing?
"You are never done with testing; the burden simply shifts from you to the customer."
That last comment is a BAD philosophy. It can be used to justify reducing the amount of testing because of budget pressures. NEVER take this approach!

35 SUMMARY Software testing accounts for the largest percentage of technical effort in the software process.
The objective of software testing is to uncover errors and maintain software quality.
Steps: unit, integration, validation, and system testing.
Debugging is often an art, and the most valuable resource is often the counsel of other software engineers.

