
1 Acceptance Testing

2 V Lifecycle Model The software development lifecycle begins with the identification of a requirement for software and ends with the formal verification of the developed software against that requirement. Traditionally, the models used for the software development lifecycle have been sequential, with development progressing through a number of well-defined phases. The sequential phases are usually represented by a V or waterfall diagram:
• The Requirements phase, in which the requirements for the software are gathered and analyzed to produce a complete and unambiguous specification of what the software is required to do.
• The Architectural Design phase, in which a software architecture for the implementation of the requirements is designed and specified, identifying the components within the software and the relationships between them.
• The Detailed Design phase, in which the detailed implementation of each component is specified.
• The Code and Unit Test phase, in which each component of the software is coded and tested to verify that it faithfully implements the detailed design.
• The Software Integration phase, in which progressively larger groups of tested software components are integrated and tested until the software works as a whole.
• The System Integration phase, in which the software is integrated into the overall product and tested.
• The Acceptance Testing phase, in which tests are applied and witnessed to validate that the software faithfully implements the specified requirements.
Software specifications are products of the first three phases of this lifecycle model. The remaining four phases all involve testing the software at various levels, and each requires test specifications against which the testing will be conducted.

3 Waterfall Model

4 Definition Acceptance testing consists of comparing a software system to its initial requirements and to the current needs of its end users or, in the case of a contracted program, to the original contract [Myers 1979]. User Acceptance Testing (UAT) is a common name for the type of testing that forms the basis for the business accepting the solution from its developers (whether internal or external). It is a common requirement in many organizations and in many contracts.

5 Purpose
Other testing: the intent is principally to reveal errors.
Acceptance testing: a crucial step that decides the fate of a software system. Its visible outcome provides an important quality indication for the customer to determine whether to accept or reject the software product.

6 Goal & Guide GOAL:
• verify the man–machine interactions
• validate the required functionality of the system
• verify that the system operates within the specified constraints
• check the system's external interfaces
The major guide: the system requirements document.
The primary focus: usability and reliability [Hetzel 1984].

7 Categories of AT Four categories of acceptance testing methods emerged from the survey:
• Traceability methods
• Formal methods
• Prototyping methods
• Other methods

8 Traceability methods Traceability methods work by creating a cross-reference between requirements and acceptance test cases, establishing explicit relationships between test cases and requirements. This gives a straightforward way to match test cases to requirements:
• it allows testers to easily check which requirements have not yet been tested, and
• it shows the customer which requirement is being exercised by which test case(s).
However, each cross-reference is created subjectively by the tester. Incorporating traceability into the requirements document is one of the most frequently discussed methods for acceptance testing.

9 Traceability Methods – cont.
A software requirements specification (SRS) is traceable if it is written in a manner that facilitates referencing each individual requirement stated in it. This can be done by:
• numbering every paragraph hierarchically and including only one requirement per paragraph,
• referencing each requirement by paragraph and sentence number,
• numbering every requirement in parentheses immediately after it appears in the SRS, or
• using a convention for indicating requirements, e.g., always using the word "shall" in the sentence containing the requirement.
This is sometimes referred to as requirements tagging. Once the SRS has been made traceable, a database can be used to help automate traceability tasks. The database holds a matrix, or checklist, which ensures that there is a test covering each requirement, and it keeps track of when tests have been completed (or failed).
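As a concrete illustration, the checklist can be as simple as a mapping from tagged requirement IDs to the test cases that exercise them. The sketch below is minimal and hypothetical: the requirement tags, test names, and statuses are invented, not taken from any cited tool.

```python
# Minimal sketch of a traceability matrix, assuming requirements have
# been tagged with hierarchical paragraph IDs as described above.

from collections import defaultdict

class TraceabilityMatrix:
    def __init__(self):
        self.coverage = defaultdict(set)  # requirement ID -> covering test cases
        self.status = {}                  # test case -> "passed"/"failed"/"pending"

    def link(self, requirement_id, test_case):
        """Record that a test case exercises a requirement."""
        self.coverage[requirement_id].add(test_case)
        self.status.setdefault(test_case, "pending")

    def record_result(self, test_case, passed):
        self.status[test_case] = "passed" if passed else "failed"

    def untested(self, all_requirements):
        """Requirements with no covering test case at all."""
        return [r for r in all_requirements if not self.coverage[r]]

    def unverified(self, all_requirements):
        """Requirements whose covering tests have not all passed."""
        return [r for r in all_requirements
                if not self.coverage[r]
                or any(self.status[t] != "passed" for t in self.coverage[r])]

# Usage with invented "3.1.x"-style requirement tags:
matrix = TraceabilityMatrix()
matrix.link("3.1.1", "test_login_accepts_valid_user")
matrix.link("3.1.2", "test_login_rejects_bad_password")
matrix.record_result("test_login_accepts_valid_user", passed=True)
print(matrix.untested(["3.1.1", "3.1.2", "3.1.3"]))    # ['3.1.3']
print(matrix.unverified(["3.1.1", "3.1.2", "3.1.3"]))  # ['3.1.2', '3.1.3']
```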

10 Figure of Traceability Methods
The figure shows how acceptance test cases and the traceability database are developed after the requirements analysis phase is complete. The database can be developed in tandem with the test cases or as a separate step after the test cases are finished.

11 Traceability Methods – samples
TEM: Test Evaluation Matrix [Krause & Diamant 1978]
The TEM shows the traceability from requirements to design to the test procedures allocated to verify the requirements. It first allocates requirements to three test levels. Requirements testing is then performed by selecting test scenarios that satisfy test objectives, represent test missions, and exercise the requirements allocated to that level of testing. Test cases are derived from the test scenarios by selecting the target configuration and user options necessary to guide the unit, subprocess, or process through the desired paths.

12 Traceability Methods – samples
REQUEST: REquirements QUEry and Specification Tool [Wardle 1991]
The REQUEST tool automates much of the routine information handling involved in providing traceability. It is a database tool consisting of a Contractual Requirements Database (CRD), a Test Requirements Database (TRD), and a Verification Cross Reference Index (VCRI). First, the Contractual Requirements Database is built by identifying the individual requirements. Then a Test Requirements Database is created, and REQUEST is used to generate a test specification document from it. With additional user input, such as the test level and the test method to be used, REQUEST then generates the VCRI. REQUEST ensures that there is a test covering each requirement, and it is the VCRI that provides full forward and reverse traceability between a requirement, its test, and its design.

13 Traceability Methods – samples
SVD: System Verification Diagram [Deutsch 1982]
The SVD tool also connects test procedures with requirements. An SVD contains threads, which are stimulus/response elements that represent an identifiable function or sub-function associated with a specific requirement. The threads represent the functions that must be tested, and the paths through the SVD are the sequences of functions to be tested. The cumulation of all the threads forms the acceptance test. The approach is similar to the scenario-based method presented later, but the threads are developed after the requirements analysis phase, the user is not involved in their development, threads are less elaborate than scenarios, and testers exercise more subjectivity when using them.
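To make the thread idea concrete, the sketch below models a thread as a stimulus/response pair tied to a requirement, and the acceptance test as the cumulation of all threads. This is an assumption-laden simplification: the system stand-in, requirement IDs, and events are all hypothetical.

```python
# Minimal sketch of SVD-style threads, assuming a thread can be reduced
# to (requirement ID, stimulus, expected response).

from dataclasses import dataclass

@dataclass(frozen=True)
class Thread:
    requirement_id: str    # requirement the thread verifies
    stimulus: str          # input event applied to the system
    expected_response: str

def run_acceptance_test(threads, system_under_test):
    """Apply every thread's stimulus; the cumulation of all threads
    forms the acceptance test."""
    failures = []
    for t in threads:
        actual = system_under_test(t.stimulus)
        if actual != t.expected_response:
            failures.append((t.requirement_id, t.stimulus, actual))
    return failures

# Hypothetical system and threads:
fake_system = {"insert_card": "prompt_pin", "enter_pin": "show_menu"}.get
threads = [
    Thread("R1", "insert_card", "prompt_pin"),
    Thread("R2", "enter_pin", "show_menu"),
]
print(run_acceptance_test(threads, fake_system))  # [] -> all threads pass
```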

14 Formal methods Formal methods apply mathematical modeling to test case generation. They include the application of formal languages, finite state machines, various graphing methods, and other techniques to facilitate the generation of test cases for acceptance testing.

15 Example A Requirements Language Processor (RLP) produces a finite state machine (FSM) model whose external behavior matches that of the specified system. Tests generated from the FSM are then used to test the actual system.

16 RLP & FSM example method
Another type of method used for acceptance testing centers on Requirements Language Processors (RLPs) and finite-state machines. An RLP produces a representation of a finite-state machine whose external behavior matches that of the specified system, so tests generated from the finite-state machine exercise the system at a level of fidelity similar to the completed system. Work in this area also includes the development of Test Plan Generators (TPGs) [Bauer et al. 1978; Bauer and Finger 1979; Chandrasekharan et al. 1989; Dasarathy and Chandrasekharan 1982; Davis 1980] and Automatic Test Executors (ATEs) [Bauer et al. 1978; Bauer and Finger 1979; Davis 1980; Worrest 1982]. A TPG is an interactive tool that analyzes the functional description produced by an RLP and produces a set of executable test scripts. An ATE reads these test scripts, runs the tests, and produces a test report. As with the traceability methods, the production of the test plan is a separate step from requirements analysis.
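The sketch below shows one simple way tests can be derived from such a finite-state machine: for each transition, find a path from the start state to the transition's source, then fire the transition and use the FSM's outputs as expected results. The two-state ATM-style machine is invented for illustration and is not from the cited work.

```python
# Minimal sketch of FSM-based test generation: cover every transition
# once, using BFS to reach each transition's source state.

from collections import deque

# transitions: (state, input) -> (next_state, expected_output); hypothetical
fsm = {
    ("idle", "insert_card"): ("await_pin", "prompt_pin"),
    ("await_pin", "enter_pin"): ("menu", "show_menu"),
    ("menu", "eject"): ("idle", "return_card"),
}

def path_to(fsm, start, target):
    """BFS for a sequence of (input, output) pairs from start to target."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, seq = queue.popleft()
        if state == target:
            return seq
        for (s, i), (nxt, out) in fsm.items():
            if s == state and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, seq + [(i, out)]))
    return None

def generate_tests(fsm, start="idle"):
    """One test per transition: reach its source state, then fire it."""
    tests = []
    for (src, inp), (nxt, out) in fsm.items():
        prefix = path_to(fsm, start, src)
        if prefix is not None:
            tests.append(prefix + [(inp, out)])
    return tests

for test in generate_tests(fsm):
    print(test)  # each test is a list of (input, expected_output) steps
```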

17 Prototyping methods Prototyping methods begin by developing a prototype of the target system. This prototype is then evaluated to see if it satisfies the requirements. Once the prototype does satisfy the requirements, the outputs from the prototype become the expected outputs from the final system. Therefore, prototypes are used as test oracles for acceptance testing.

18 Prototype Methodology
Using prototypes is another method being used for acceptance testing [Beizer 1984; Davis 1990; Lea et al. 1991]. With this method a prototype is developed and evaluated to verify whether it satisfies user requirements. If it does not, the requirement specifications are updated and a new prototype is built. Once the requirements are met, the corresponding specifications are used in the design and implementation stage of the target system. The prototypes are evaluated with input data generated from the specifications. These data are then used as input for acceptance testing of the final system (see Fig. 3), and the outputs from the prototype evaluation become the expected outputs of the acceptance testing.
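A minimal sketch of the oracle idea, assuming both the prototype and the final system can be driven as functions of the same inputs; the two "systems" and the specification-derived inputs are hypothetical stand-ins.

```python
# Minimal sketch of using a prototype as a test oracle: the prototype's
# outputs on specification-derived inputs become the expected outputs
# for the final system under test.

def prototype(x):
    return x * 2            # simplified model of the required behavior

def final_system(x):
    return x + x            # the delivered implementation under test

spec_inputs = [0, 1, 5, 100]                       # inputs from the specs
expected = {x: prototype(x) for x in spec_inputs}  # oracle outputs

mismatches = [(x, final_system(x), expected[x])
              for x in spec_inputs if final_system(x) != expected[x]]
print("accept" if not mismatches else f"investigate: {mismatches}")
```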

19 Prototype Methodology – cont.
Rapid prototyping is a software development scheme that puts primary emphasis on early construction of prototypes and early adjustment of the system functions from the user's verification feedback, before a large investment has been made in system design and implementation. User requirements are collected during the requirements analysis stage and translated into formal specifications, which are used to build a prototype. The prototype is then verified by the user to see whether it meets the requirements; if any mismatch is found, the specifications are modified and another prototype is built. The process is repeated until the requirements are satisfied.
Advantages:
• Fewer function-mismatching errors are found in the final test stage, saving considerable cost and time.
• The difficulty of generating test data for the system acceptance test can be greatly reduced: the test cases used to verify the prototype, and the prototype's corresponding behavior, are good references for generating test data for system functional tests. Because the prototype specification is in a formal form, the test data can even be generated automatically from the specifications.

20 Other Methods Empirical data: an acceptance testing method based on empirical data about the behavior of the internal states of a program. The process is carried out by defining test-bits within the application software and combining them into a pattern which reflects the internal state of the program, similar to instrumenting a piece of hardware to determine how it is functioning internally while it is operating. Empirical data about this pattern is collected by an oracle which stores the values and the frequency of the test-bit patterns in the form of a distribution. This distribution contains information about the correct program behavior and can be used as an acceptance test for the program. The results are evaluated by comparing the pattern observed for each run to the data stored in the distribution. If the observed pattern is not in the distribution, either an error has occurred in the software or a correct but unusual event has occurred; in either case a warning message is produced.
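A minimal sketch of that oracle, assuming the test-bits can be read out as a tuple after each run; the instrumentation layout and patterns below are invented for illustration.

```python
# Minimal sketch of the empirical-data oracle: learn a distribution of
# test-bit patterns from known-good runs, then warn on unseen patterns.

from collections import Counter

class PatternOracle:
    def __init__(self):
        self.distribution = Counter()   # pattern -> observed frequency

    def learn(self, pattern):
        """Record a test-bit pattern from a known-correct run."""
        self.distribution[pattern] += 1

    def check(self, pattern):
        """Warn if the pattern was never seen during learning: it is
        either a software error or a correct-but-unusual event."""
        if self.distribution[pattern] == 0:
            print(f"WARNING: unseen test-bit pattern {pattern}")
            return False
        return True

oracle = PatternOracle()
for good_run in [(1, 0, 1), (1, 1, 1), (1, 0, 1)]:
    oracle.learn(good_run)

oracle.check((1, 0, 1))   # True: pattern is in the distribution
oracle.check((0, 0, 0))   # False: warning produced
```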

21 Other Methods – cont. Structured analysis (SA) methods involve the use of SA techniques (entity-relationship diagrams, data flow diagrams, state transition diagrams, etc.) to aid the generation of test plans.

22 Other Methods – cont. Simulation is more a tool for acceptance testing than a stand-alone method, since it is not used in the development of test cases. It is used for testing real-time systems and systems where "real" tests are not practical.

23 Special characteristics of UAT
• Test objective: an identified set of software features or components to be evaluated under specified conditions, by comparing the actual behavior with the specified behavior as described in the software requirements.
• Test criteria: used to determine the correctness of the software component under test.
• Test strategy: the methods for generating test cases based on formally or informally defined criteria of test completeness.
• Test oracle: any program, process, or body of data that specifies the expected outcome of a set of tests.
• Test tool

24

25 Major problems of ad hoc testing
Currently, acceptance testing is often conducted in an ad hoc manner. The major problems of this approach are:
1) test plan generation relies heavily on the understanding of the software system,
2) there is a lack of formal test models representing the complete external behavior of the system from the user's perspective,
3) there is no methodology to produce all the different usage patterns that exercise the external behavior of the system,
4) there is a lack of rigorous acceptance criteria,
5) there are no techniques for verifying the correctness, consistency, and completeness of the test cases, and
6) there are no methods to check whether the acceptance testing plan matches the given requirements.

26 Reference Software Requirements and Acceptance Testing, Pei Hsia and David Kung, Annals of Software Engineering 3 (1997)

27 Strategies for UAT
• Behavior (scenario) based acceptance test
• Black-box approach for UAT
• Operation-based test strategy

28 Definition A scenario is a concrete system usage example consisting of an ordered sequence of events which accomplishes a given functional requirement of the system desired by the customer/user. A scenario starts and ends with the same state of the system as perceived by the user. A scenario schema is a template of scenarios consisting of the same ordered sequence of event types and accomplishing a given functional requirement of a system. A scenario instance is an instantiation of a scenario schema.

29 Scenario-based Acceptance Test
Acceptance testing is one of the critical phases in the software life cycle, and verifying the external behavior of a software system is an essential part of it. Unlike other types of testing (such as structural testing and integration testing), the major objective of acceptance testing is to demonstrate how well the constructed software system realizes the customer's requirements. This approach introduces a new acceptance test model: a formal scenario model that captures and represents the external behavior of a software system, together with a discussion of acceptance test errors, test criteria, and test generation.

30 Scenario-based AT The test model consists of three sub-models:
• A user view: a set of scenario schemas as seen by a certain group of users. It captures and represents the external behavior of the man–machine interactions for that user group, including user events and related inputs (such as commands and data) as well as system responses. The user view describes the interactive behavior of the user with the system.
• An external interface view: represents the behavior of an interface between the system and an external system it interfaces with, in terms of input (stimulus), output (response), triggered events, and reactive events. An instance of an external interface view consists of a set of scenarios as seen from the interface.
• An external system view: a set of scenario schemas which represents the external view of the system in terms of its user views and external interface views. An instance of an external system view consists of a set of concurrent, communicating scenarios in the system.

31 Scenario-based AT Procedure
Step 1: Scenario elicitation and specification – elicit and specify scenarios; identify the user views and external system views according to the requirements.
Step 2: Test model formalization – formalize each user view and external interface view as a scenario tree and construct the corresponding scenario-based finite state machine (FSM); then combine the FSMs to form a composite FSM (CFSM) that models the system view. The objective of this formalization is to use an FSM to model scenario schemas and instances so that test scenarios can be analyzed, verified, and generated in a systematic manner before acceptance testing. A composite FSM is a set of related scenario FSMs, each of which is a concurrent part of the composite.
Step 3: Test model verification – examine all generated FSMs and the CFSM in terms of determinism, consistency, correctness, and completeness, against their man–machine and external system interfaces.
Step 4: Test case generation – generate test cases to achieve a selected test criterion.
Step 5: Test execution – execute the test cases.
In the paper, a formal scenario-based acceptance test model is used to capture and represent the external behavior of a software system in terms of scenarios in different user views and external system views.

32 Scenario-based AT – cont.
Scenario tree: for a user view V, the scenario tree T(V) = (N, E, L) consists of a finite set of nodes N, a finite set of edges E, and a finite set of edge labels L. The node set consists of the states of the system as perceived by the user. A label l ∈ L on an edge e ∈ E between nodes N1, N2 ∈ N indicates that the state of the system changes from N1 to N2 due to the occurrence of an event of type l.
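A minimal sketch of this structure in code; beyond the T(V) = (N, E, L) definition above, everything here (the log-in states and event labels) is hypothetical.

```python
# Minimal sketch of a scenario tree T(V) = (N, E, L): nodes are
# user-perceived states, edge labels are event types.

class ScenarioTree:
    def __init__(self, root):
        self.root = root
        self.edges = {}          # (state, event_type) -> next state

    def add_edge(self, n1, event_type, n2):
        self.edges[(n1, event_type)] = n2

    def scenarios(self, node=None, prefix=()):
        """Enumerate root-to-leaf event sequences (scenario schemas)."""
        node = node if node is not None else self.root
        children = [(e, n2) for (n1, e), n2 in self.edges.items() if n1 == node]
        if not children:
            yield prefix
        for event, n2 in children:
            yield from self.scenarios(n2, prefix + (event,))

tree = ScenarioTree("logged_out")
tree.add_edge("logged_out", "enter_credentials", "authenticating")
tree.add_edge("authenticating", "auth_ok", "logged_in")
tree.add_edge("authenticating", "auth_fail", "login_error")

for schema in tree.scenarios():
    print(schema)
# ('enter_credentials', 'auth_ok')
# ('enter_credentials', 'auth_fail')
```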

33 Benefit Benefit: its ability to allow users, together with the requirements analysts, to specify and generate the acceptance test cases through a systematic approach.

34 Black-box approach for UAT
Source of test cases: the functional or external requirements specification of the system.
A functional test matrix is used to select a minimum set of test cases that cover the system functions.
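One way to read "minimum set of test cases that cover system functions" is as a set-cover problem over the functional test matrix. The sketch below uses a simple greedy approximation on an invented matrix; the test-case names and functions are hypothetical.

```python
# Minimal sketch of selecting a small covering set of test cases from a
# functional test matrix via greedy set cover.

test_matrix = {            # test case -> system functions it exercises
    "TC1": {"login", "logout"},
    "TC2": {"login", "transfer"},
    "TC3": {"transfer", "statement"},
    "TC4": {"statement"},
}

def greedy_cover(matrix):
    uncovered = set().union(*matrix.values())
    chosen = []
    while uncovered:
        # pick the test case covering the most still-uncovered functions
        best = max(matrix, key=lambda t: len(matrix[t] & uncovered))
        if not matrix[best] & uncovered:
            break               # remaining functions are uncoverable
        chosen.append(best)
        uncovered -= matrix[best]
    return chosen

print(greedy_cover(test_matrix))  # ['TC1', 'TC3'] covers all four functions
```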

35 Black-box approach – cont.
Steps to select requirements-based test cases:
1. Start with normal test cases from the specification and an identification of system functions.
2. Create a minimal set that exercises all inputs/outputs and covers the list of system functions.
3. Decompose any compound or multiple-condition cases into single-condition cases.
4. Examine the combinations of single-condition cases and add test cases for any plausible functional combinations.
5. Reduce the list by deleting any test cases dominated by others already in the set.

36 Black-box approach – cont.
Advantage: reduces the possibility of having untested procedures.
Difficulty: the acceptance criteria are not clearly defined.

37 Operation-based test The operation-based test (OBT) strategy consists of the following components:
• test selection based on operational profiles,
• well-defined acceptance criteria, and
• compliance with ISO 9001 requirements.

38 Test based on operational profile
An operation is a major task the system performs. An operational profile (OP) is a set of operations and their associated frequencies of occurrence in the expected field environment. Note that the operational profile is dependent on the users of the application: most applications have several targeted user classes, each with its own operational profile, and the occurrence probability of the same operation may differ between user classes. During UAT, the reliability of the application must be checked against each of these operational profiles to ensure that the application is acceptable to all user classes.
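A minimal sketch of operational profiles as data, assuming each user class maps operations to occurrence probabilities; the banking operations and numbers are invented.

```python
# Minimal sketch of per-user-class operational profiles: each class maps
# operations to occurrence probabilities that sum to 1.

import random

operational_profiles = {
    "teller":     {"deposit": 0.5, "withdraw": 0.4, "report": 0.1},
    "supervisor": {"deposit": 0.1, "withdraw": 0.2, "report": 0.7},
}

def sample_operations(profile, n, rng):
    """Draw n operations according to the profile's probabilities,
    so the test mix mirrors expected field usage."""
    ops, probs = zip(*profile.items())
    return rng.choices(ops, weights=probs, k=n)

rng = random.Random(0)
print(sample_operations(operational_profiles["teller"], 5, rng))
```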

39 Operational profiles example
Table 2 gives an example with three classes of users and four key operations.

40 Driving testing using OPs
Test cases are selected according to two factors:
1. The occurrence probability (relative frequency of use) of the operation. The amount of testing of an operation is based on its relative frequency of use: the most frequently used operations receive the most testing, and less frequently used operations receive less testing.
2. The criticality of the operation, which measures the severity of the effect when the system fails. The more critical operations should receive more testing.

41 Criticality
System operations may be classified according to their criticality. We assume that the criticality of each operational profile can be determined separately. For example, a banking application may have three user classes: teller, supervisor, and manager. In terms of the importance of the results of the application, the teller class may be the most critical, the supervisor less critical, and the manager the least critical.

42 Test Selection by Criticality
The number of test cases allocated to each operational profile is proportional to its weighted criticality:

W_i = (C_i · P_i) / Σ_{j=1}^{k} (C_j · P_j)

where C_i is the criticality of operational profile i, P_i is its occurrence probability, and k is the total number of operational profiles.
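A minimal sketch of that allocation, assuming the weighted-criticality formula above and a fixed test budget; the profiles and numbers are invented.

```python
# Minimal sketch of test-case allocation by weighted criticality:
# W_i = C_i * P_i / sum_j(C_j * P_j), and n_i ≈ W_i * total budget.

profiles = {             # OP -> (criticality C_i, occurrence probability P_i)
    "teller":     (3, 0.6),
    "supervisor": (2, 0.3),
    "manager":    (1, 0.1),
}

def allocate_tests(profiles, total_tests):
    norm = sum(c * p for c, p in profiles.values())
    weights = {op: (c * p) / norm for op, (c, p) in profiles.items()}
    return {op: round(w * total_tests) for op, w in weights.items()}

print(allocate_tests(profiles, 50))
# {'teller': 36, 'supervisor': 12, 'manager': 2}
```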

43 Example of test selection based on criticality
Table 3 illustrates the calculation of the weighted criticality for a hypothetical case and demonstrates the test case selection strategy. Assume there are four operational profiles. Column 2 gives the criticality of each class of operational profile. Higher numbers indicate higher criticality. Column 3 lists the occurrence probability. Column 5 shows the computed weighted criticality. The final column identifies the number of test cases for each operational profile, based on the assumption that a total of 50 test cases will be used for testing the software system.

44 Acceptance Criteria The OBT strategy uses the following acceptance criteria:
• no critical faults are detected, and
• the software reliability is at an acceptable level.

45 First acceptance criterion
Let T = {t_1, …, t_n} be the set of acceptance test cases, with the mapping t_i = (s_i, P(s_i)), where s_i is the input of test t_i and P(s_i) is the output of applying s_i to program P. The requirement of no critical fault implies that

P(s_i) ∉ C for all t_i ∈ T,

where C is the set of incorrect critical output values as defined by the user.

46 Second acceptance criterion
(a) The reliability of every OP class is acceptable when

R_i ≥ R_i^a for i = 1, …, k,

where R_i is the estimated reliability of OP i, and R_i^a is the acceptable reliability of OP i.
(b) The reliability of the whole application is acceptable when

R_0 ≥ R_0^a,

where R_0 is the estimated reliability of the whole application, and R_0^a is the acceptable reliability of the whole application.
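A minimal sketch combining both criteria, assuming estimated and acceptable reliabilities are available per operational profile; the thresholds and estimates below are invented.

```python
# Minimal sketch of the two OBT acceptance criteria: no critical faults,
# and estimated reliability >= acceptable reliability for every OP and
# for the whole application.

def meets_acceptance_criteria(critical_outputs_seen, estimated, acceptable):
    """estimated/acceptable map OP names (plus 'whole_app') to reliabilities."""
    if critical_outputs_seen:                       # first criterion
        return False
    return all(estimated[op] >= acceptable[op]      # second criterion
               for op in acceptable)

estimated  = {"teller": 0.995, "supervisor": 0.990, "whole_app": 0.992}
acceptable = {"teller": 0.990, "supervisor": 0.985, "whole_app": 0.990}
print(meets_acceptance_criteria(critical_outputs_seen=[],
                                estimated=estimated,
                                acceptable=acceptable))   # True
```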

47 Acceptance decisions
According to the OBT strategy, the user can decide to accept, provisionally accept, conditionally accept, send back for rework, or reject the software deliverable based on the results of UAT. Provisional or conditional acceptance occurs only when the first acceptance criterion is met and some non-critical defects are detected during UAT. The five possible acceptance decisions are:
1. Accept: when no defects are detected during UAT, the software deliverable should be accepted by the user.
2. Provisional acceptance (accepted without repair by concession): when the reliabilities of all the operational profiles and of the whole application are above the acceptance threshold limits, the software deliverable can be accepted without repair by concession if the user agrees.
3. Conditional acceptance (accepted with repair by concession): when the test results of the whole application are acceptable but the reliabilities of some operational profiles are below the acceptance threshold limits, the deliverable can be conditionally accepted with repair of the failed operational profiles, subject to the user's concession. The developer must fix the identified faults corresponding to the failed operational profiles according to their fixing priorities and within an agreed period of time; otherwise, the software deliverable should be rejected. The fixing priority of non-critical faults can be classified into three levels, high, medium, and low, with high-priority faults fixed before the others.
4. Rework to meet specified requirements: when too many faults are found for the whole application but the reliabilities of the individual operational profiles are acceptable, rework is needed. The software deliverable can be considered for acceptance only after it is retrofitted and retested until the total number of faults falls within the threshold limit for the whole application.
5. Rejected: when too many faults are found in both the whole application and the individual operational profiles, the software deliverable should be rejected, even if all detected faults are non-critical.
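A minimal sketch of that decision logic as a function. This encodes only the reliability outcomes; the real decision also depends on the user's concession, and the assumption that a detected critical fault forces rejection is mine, not stated in the source.

```python
# Minimal sketch of the five OBT acceptance decisions, driven by the
# outcomes of the two acceptance criteria.

def acceptance_decision(defects_found, critical_fault,
                        whole_app_ok, all_profiles_ok):
    if critical_fault:
        return "reject"                 # assumption: first criterion violated
    if not defects_found:
        return "accept"
    if whole_app_ok and all_profiles_ok:
        return "provisional acceptance" # accept without repair, by concession
    if whole_app_ok:
        return "conditional acceptance" # accept with repair of failed OPs
    if all_profiles_ok:
        return "rework"                 # too many faults overall
    return "reject"

print(acceptance_decision(defects_found=True, critical_fault=False,
                          whole_app_ok=True, all_profiles_ok=False))
# conditional acceptance
```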

48 Acceptance decisions

49 OBT strategy disadvantages
• It requires more analysis upfront, as the user needs to determine the different operational profiles.
• Test selection involves more work, as the frequency of occurrence becomes a factor in picking the test cases.

50 Comparing UAT strategies

51 References
Pei Hsia and David Kung. Software Requirements and Acceptance Testing. Annals of Software Engineering 3 (1997).
Hareton K.N. Leung and Peter W.L. Wong. A Study of User Acceptance Tests. Software Quality Journal 6 (1997).
P. Hsia, J. Gao, J. Samuel, D. Kung, Y. Toyoshima and C. Chen. Behavior-based Acceptance Testing of Software Systems: A Formal Scenario Approach. Proceedings of the 18th Annual International Computer Software and Applications Conference (COMPSAC), 1994.

