Approximately 10 to 25 percent of a system's development and maintenance effort goes toward developing and maintaining documentation. It is important to ensure that the right documentation has been prepared, is complete and current, reflects the criticality of the system, and contains all necessary elements. If any part of the documentation is not current, the tester must assume that none of it may be current and look for other sources to substantiate what has been developed and how it works.

The testing of documentation should conform to other aspects of systems testing. Documentation is as prone to error as computer programs. The difference is that a defective program usually leads to defective results, whereas defective documentation may not. However, defective documentation is a time bomb: it can cause systems to be improperly changed or system output to be improperly used. Both of these errors can lead to incorrect system results.

Source: "Effective Methods for Software Testing" by William Perry
The concerns regarding computer systems documentation are that the documentation will fail to:
- Bring discipline to the performance of an IT function
- Assist in planning and managing resources
- Help in planning and implementing security procedures
- Assist auditors in evaluating application systems
- Help transfer knowledge of software development throughout the life cycle
- Promote common understanding and expectations about the system within the organization
- Define what is expected and verify that this is what is delivered
- Provide a basis for training individuals in how to maintain the software
- Provide managers with technical documents to determine that requirements have been met
Testing the adequacy of system documentation consists of the following tasks:
- Measure project documentation needs
- Determine what documents must be produced
- Determine the completeness of individual documents
- Determine how current project documents are
Measure project documentation needs
The formality, extent, and level of detail of the documentation to be prepared depend on the organization's management practices and the project's size, complexity, and risk. What is adequate for one project may be inadequate for another. The first task in testing documentation is to test the sufficiency or adequacy of the documentation produced. Too much documentation can also be wasteful. An important part of testing documentation is to determine first that the right documentation is prepared; there is little value in confirming that unneeded documentation is adequately prepared.
Determine what documents must be produced
A method for determining the level of documentation needed:
- Level 1 (minimal): These documentation guidelines apply to single-use programs of minimal complexity. A suggested cost criterion for level 1 is programs requiring less than one worker-month of effort.
- Level 2 (internal): This documentation applies to special-purpose programs that, after careful consideration, appear to have no sharing potential and to be designed for use only by the requesting department. Large programs with a short life expectancy also fall into this category. The effort spent on formal documentation for level 2 programs should be minimal.
- Level 3 (working document): This documentation applies to programs that are expected to be used by many people in the same organization or that may be transmitted on request to other organizations, contractors, etc. These documents should undergo a more stringent documentation review.
- Level 4 (formal publication): This documentation applies to programs that are of sufficient general interest and whose documentation is critical to the program's operation. These documents should be formally reviewed, tested, and subject to configuration control procedures.
Determine the completeness of individual documents
Testers must determine whether each document is adequately prepared, based on the following:
- Documentation content
- Document audience: the information must be presented with a level of detail appropriate to the audience
- Completeness: the information that should be included in each document type is complete
Testing the completeness of documentation
In testing the completeness of documentation, tests will reveal whether:
- The documentation is understandable to an independent person
- An independent person can use the documentation to correctly operate the system in an efficient, effective manner
Creating the software testing environment is not a trivial task
Creating the software testing environment is not a trivial task. Each phase of software development has a parallel testing activity. Testers create test cases from the documents produced at each development phase.

[Diagram: development phases paired with their testing activities]
- Requirements → define system tests → system test
- Design specification → define integration tests → integration test
- Unit specifications → define unit tests → unit test
- Implement the units
The requirements document provides input for defining system test cases and drives the design phase. The design phase consists of refining the design from a high level down to a detailed level. Each design level defines a part of the system and thus requires integration tests to ensure that each component works as an incremental element. The unit phase provides the specifications and eventually the code for each unit. Unit specifications are used to define unit tests.
Unit testing
Unit testing consists of verifying each individual unit in isolation by running tests in an artificial environment. Most people divide unit tests into two categories: white box and black box. Developers use the code's inner structure and control flow to construct white box tests. Black box tests derive from the requirements and other specifications, without any knowledge of the application's internal structure and control flow.

White box tests find bugs, but tests based on the code's internal structure pose the danger of verifying that the code works as written, without ensuring that the logic is correct. This is where black box tests come in: they ensure that specific inputs yield the correct expected outputs.

Unit testing is the first opportunity to exercise source code. By evaluating each unit in isolation, and ensuring that each works on its own, one can more easily pinpoint problems than if the unit were part of a larger system.
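The white box / black box distinction can be illustrated with a small sketch. The `discount` function below is a hypothetical unit invented for this example, not something from the lecture; the point is how the two kinds of tests are derived differently.

```python
# Hypothetical unit under test (illustrative only):
# spec: "10% off orders of 100 or more".
def discount(amount):
    if amount >= 100:          # branch 1: discount applies
        return amount * 0.9
    return amount              # branch 2: no discount

# White box tests: chosen by inspecting the code so that every branch executes.
assert discount(100) == 90.0   # exercises branch 1 at its boundary
assert discount(50) == 50      # exercises branch 2

# Black box tests: derived only from the stated specification,
# with no knowledge of the internal branching.
assert discount(200) == 180.0  # "10% off orders of 100 or more"
assert discount(99) == 99      # just below the threshold: no discount
```

Note that both sets of tests happen to pick the boundary value 100; for white box tests it falls out of the code's `>=` comparison, while for black box tests it falls out of the wording of the specification.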
Integration testing
A software integration strategy defines the order in which to merge individual units. Integration is a process that starts with a set of units, each individually tested in isolation, and ends when the entire application (or subsystem) has been built. Integration testing verifies that the combined units function together correctly. This can facilitate finding problems that occur at the interfaces or in the communication between the individual parts.

Integrating software and integration testing are typically parallel activities. As a component is added to the growing system, tests verify that the interim configuration works as expected before more components are added.
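A minimal sketch of the idea, using two made-up units (the function names and data format are assumptions for illustration): each unit would pass its own unit tests, and the integration test checks that the interface between them works end to end.

```python
# Unit A (hypothetical): parse a 'name,score' record into a (name, int) pair.
def parse_record(line):
    name, score = line.split(",")
    return name.strip(), int(score)

# Unit B (hypothetical): render (name, score) pairs as report lines.
def format_report(records):
    return [f"{name}: {score}" for name, score in records]

# Integration test: feed A's output into B and verify the combined behavior.
# A bug at the interface (e.g., B expecting a dict instead of tuples)
# would be caught here even though both unit test suites pass.
records = [parse_record("alice, 90"), parse_record("bob, 75")]
report = format_report(records)
assert report == ["alice: 90", "bob: 75"]
```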
System testing
System testing verifies the entire product and validates it against the original project requirements. Some of the major categories of system tests include:
- Compatibility testing
- Configuration testing
- Functional testing
- Installation testing
- Load testing
- Performance testing
- Recovery testing
- Reliability testing
- Security testing
- Serviceability testing
- Stress testing
- Usability testing
Regression testing
Regression testing consists of reusing a subset of previously executed tests on new versions of the application. The goal is to ensure that features that worked in previous versions still work as expected. Adding new features sometimes invalidates old regression tests, so testers may need to update existing tests to account for new product features. Running a regression test re-checks the integrity of the modified application. In an ideal test environment, the tester re-executes the regression tests every time the application changes.
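One common way to realize this is to keep the expected results of previously passing tests and replay them against each new version. The function and the input/expected pairs below are invented for illustration; the structure, not the data, is the point.

```python
# Hypothetical "new version" of a function under regression test.
def slugify(title):
    return title.lower().replace(" ", "-")

# Regression suite: (input, expected) pairs captured while the
# previous release was passing its tests.
regression_suite = [
    ("Hello World", "hello-world"),
    ("Software Testing", "software-testing"),
]

# Replay the suite against the new version; any mismatch is a regression.
failures = [(arg, expected, slugify(arg))
            for arg, expected in regression_suite
            if slugify(arg) != expected]
assert failures == []   # everything that worked before still works
```

If a new feature legitimately changes an output (say, slugs must now also strip punctuation), the corresponding pair in the suite is updated rather than treated as a failure, which is exactly the test-maintenance activity described above.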
Acceptance testing
Acceptance testing validates the system against the user's requirements and ensures that the application is ready for operational use. This phase of testing occurs after the completion of system testing. Acceptance tests consist of typical user scenarios focusing on the major functional requirements, and they are often executed at the customer site for final handoff. Acceptance testing is the final checkpoint prior to delivery. The end user often executes the acceptance tests, which are frequently a selected subset of the system test cases run in the real environment.
Test planning
Software testing requires more than simply creating and executing tests. Before beginning to test, the tester must devise the overall test strategy, including how to record problems found during testing. Ideally, the tester creates a test plan at every level of testing, from the system level through integration down to unit-level testing. A system test plan describes the requirements, resources, strategies, and schedule for testing an application. The information contained in the test plan aids the tester in acquiring the resources necessary for creating the test environment and in defining the approach for creating the tests.
Problem reporting system
Problem reporting is a process for initiating fixes, enhancements, and approvals, and for tracking the progress of change. It provides a method for managing change and for minimizing the impact of rework, both of which are essential for controlling quality. Many commercial problem reporting tools exist today; these tools provide metrics and reports to identify deficiencies and to monitor test status.

A problem reporting system is the primary means of communication between testers and developers. The tester records every problem found and provides detailed information for reproducing it, although some problems are not easily reproducible. Once the developer has fixed the problem and a new release is available for testing, the tester re-executes the associated test case to ensure that the problem has been fixed.
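The tester/developer workflow described above can be sketched as a minimal problem-report record. The field names and status values here are assumptions for illustration; real commercial tools use their own schemas and states.

```python
from dataclasses import dataclass, field

@dataclass
class ProblemReport:
    """Minimal sketch of a problem report (field names are illustrative)."""
    report_id: int
    summary: str
    steps_to_reproduce: list = field(default_factory=list)
    status: str = "open"              # lifecycle: open -> fixed -> verified

    def mark_fixed(self):
        # Developer fixes the problem in a new release.
        self.status = "fixed"

    def verify_fix(self, retest_passed):
        # Tester re-executes the associated test case against the new release.
        if self.status == "fixed" and retest_passed:
            self.status = "verified"

# One trip through the workflow: report, fix, re-test, close out.
pr = ProblemReport(42, "Crash on empty input",
                   ["start the app", "submit the form with no data"])
pr.mark_fixed()
pr.verify_fix(retest_passed=True)
assert pr.status == "verified"
```

Recording `steps_to_reproduce` at report time is what makes the later `verify_fix` re-test possible; for problems that are not easily reproducible, this field is exactly where the difficulty shows up.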
Test reporting
The primary goal of a test report is to describe what occurred during the test activities. A typical test report identifies the configurations and test environment used, and then enumerates the tests executed and their results. This document provides an audit trail of what the tester accomplished while testing the application. Metrics derived from the test statistics help determine the application's readiness for use.
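As a small sketch of deriving such a metric, the snippet below computes a pass rate from raw test counts. The counts and the report format are invented for illustration, not a prescribed layout.

```python
# Illustrative raw test statistics, as tallied from an executed test suite.
results = {"executed": 120, "passed": 111, "failed": 9}

# A simple readiness metric: the fraction of executed tests that passed.
pass_rate = results["passed"] / results["executed"]

# One summary line for the test report's audit trail.
report_line = (f"Executed {results['executed']} tests: "
               f"{results['passed']} passed, {results['failed']} failed "
               f"({pass_rate:.1%} pass rate)")

assert abs(pass_rate - 0.925) < 1e-9
assert report_line.endswith("(92.5% pass rate)")
```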
A good system test plan defines the following:
- Overview of the schedule for test activities
- Approach to testing, including the use of test tools
- Test tools, including how and when to obtain them
- Process by which to conduct tests and report results
- System test entry and exit criteria
- Personnel required to design, develop, and execute tests
- Equipment resources: what machines and test benches are needed
- Test coverage goals, where appropriate
- Special configurations of software and hardware needed for tests
- Strategies for testing the application
- Features that will and will not be tested
- Risks and contingency plans
We have now finished the "software testing" part of the lecture. Starting with the next lecture, we will discuss software configuration management, software maintenance, and how to assure software reliability.

Homework assignment (03/25/04): Please read chapters 13 and 14.