1 Verification, validation and testing Chapter 12, Storey.
2 Introduction (1)
-The development of a system can be seen as a series of transformations of its definition, from the customer's requirements to its complete implementation.
-Each phase takes a description of the system as its input and develops this to form the input to the next phase.
-To have confidence in the final system, it is necessary to confirm that each phase of the development work has been performed correctly.
-This is achieved through a process of verification:
-the process of determining whether the output of a lifecycle phase fulfils the requirements specified by the previous phase.
-The aim is to demonstrate that the output of a phase conforms to its input, not to show that the output is actually correct.
3 Introduction (2)
-If the input specification is wrong, the verification process will not necessarily detect this.
-To overcome this, verification is supplemented by validation:
-the process of confirming that the specification of a phase, or of the complete system, is appropriate and consistent with the customer requirements.
-Validation may be performed on individual phases, but is usually used to investigate the characteristics of the complete system.
-Often it looks at the behaviour of a prototype or a simulation, and determines whether this operates in a manner that satisfies the needs of the customer or user.
4 Testing (1)
-Verification and validation are achieved by performing various tests. Testing is the process used to verify or validate a system or its components.
-The results from testing may be used to assess the integrity of the system, to investigate specific characteristics such as safety, and to uncover faults (and thus increase the system's dependability).
-Testing can be performed at various stages during the development of a system.
5 Testing (2)
-The major activities in testing are:
-module testing: evaluates small, simple functions of software or hardware. Faults detected here are usually relatively straightforward to locate and remove.
-system integration testing: investigates the characteristics of a collection of modules and is generally aimed at establishing their correct interaction. Faults detected are likely to be more expensive to correct, because of the greater complexity involved.
-system validation testing: aims to demonstrate that the complete system satisfies its requirements. Faults detected at this stage are extremely costly to correct (they usually involve weaknesses in the customer requirements documents or in the specification), since the modifications must propagate back through the entire development process.
-The complexity and cost of correcting faults increase as we move from module testing to system testing.
-Faults should therefore be located as soon as possible.
6 Testing (3)
Testing may take a number of forms, and techniques may be broadly classified into:
-Dynamic testing: involves execution of a system or component in order to investigate its characteristics. Tests are carried out within the system's natural working environment or within a simulation of that environment.
-Static testing: investigates the characteristics of a system or component without operating it. Examples include reviews, inspections and design walkthroughs.
-Modelling: involves the use of a mathematical representation of the behaviour of a system or its environment. Animation of a formal specification is an example of modelling.
For a typical development programme:
-both static and dynamic testing will be used, as well as modelling
-the importance of each technique tends to vary throughout the lifecycle
-the choice of techniques will be affected by the safety integrity level.
7 Principal testing methods within the development lifecycle (CONTESSE, 1995)

Lifecycle phase                           Dynamic testing   Static testing   Modelling
Requirements analysis and specification          -                X              X
Top-level design                                 -                X              X
Detailed design                                  -                X              X
Implementation                                   X                X              -
Integration testing                              X                X              X
System validation                                X                -              X
8 Testing (4)
-Testing may also be divided into "black-box" and "white-box" techniques, depending on the amount of knowledge the test engineer has of the system being tested.
-black-box testing: the test engineer has no knowledge of the implementation of the system and relies simply on the information given in the specification (the tests simply check whether the system does what the specification says it should). Most commonly used on complete systems.
-white-box testing: the engineer has access to information concerning the implementation of the system and uses this to guide the work. Such techniques are applicable to testing at all stages of development.
-Dynamic testing can be performed using both black-box and white-box techniques, while static testing can only be performed using white-box techniques.
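The black-box idea above can be sketched in code. This is an illustrative example, not from the text: a saturating limiter is tested purely against its specification, with the test cases derived from the spec alone and no reference to how `limit` is implemented.

```python
def limit(value, low, high):
    """Implementation under test: clamp value into the range [low, high]."""
    return max(low, min(high, value))

def black_box_tests():
    # Each case comes from the specification alone:
    # (arguments, expected output according to the spec)
    cases = [
        ((5, 0, 10), 5),    # value inside the range passes through unchanged
        ((-3, 0, 10), 0),   # value below the range is raised to the lower bound
        ((42, 0, 10), 10),  # value above the range is cut to the upper bound
    ]
    for args, expected in cases:
        assert limit(*args) == expected, (args, expected)
    return "all black-box cases passed"
```

A white-box tester, by contrast, would read the body of `limit` and aim tests at each of the `max`/`min` branches.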
9 Planning for verification and validation
-The tasks of verification and validation represent a very large part of the effort required when developing safety-critical systems.
-Since verification and validation are based on testing, test planning is an essential part of the development process and should begin at an early stage.
-Validation of the complete system is one of the last stages of system development, but the planning of this activity should be performed early, where such plans may still affect the design.
10 Dynamic testing
-Involves the execution of a number of test cases, each of which investigates a particular aspect of the system.
-Each test case comprises a set of input data, a specification of the expected output and an explanation of the function being tested.
-Dynamic testing normally includes:
-Functional testing: identifies and tests all the functions of the system that are defined within its requirements. Requires no knowledge of the implementation of the system and is therefore an example of a black-box approach.
-Structural testing: uses detailed knowledge of the system's internal structure to investigate its characteristics. It is therefore an example of a white-box approach.
-Random testing: while functional and structural testing use inputs chosen to investigate particular characteristics of the system under test, random testing chooses inputs randomly from the entire input space. It tries to detect fault conditions that are usually missed by more systematic techniques.
-Dynamic testing therefore involves a mix of black-box and white-box techniques.
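The three-part test case described above (input data, expected output, explanation of the function tested) can be sketched as a small structure. The names here are illustrative, not from the text:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class TestCase:
    description: str   # explanation of the function being tested
    inputs: tuple      # the set of input data
    expected: Any      # specification of the expected output

def run_dynamic_tests(func: Callable, cases: list) -> list:
    """Execute the system (here, a function) on each case; report mismatches."""
    failures = []
    for case in cases:
        actual = func(*case.inputs)
        if actual != case.expected:
            failures.append(f"{case.description}: got {actual!r}, "
                            f"expected {case.expected!r}")
    return failures

# Usage: exercising the built-in abs() with two cases
cases = [
    TestCase("negative input is negated", (-4,), 4),
    TestCase("positive input is unchanged", (7,), 7),
]
assert run_dynamic_tests(abs, cases) == []
```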
11 Dynamic testing techniques
-Test cases based on equivalence partitioning
-Test cases based on boundary value analysis
-State transition testing
-Probabilistic testing
-Structure-based testing
-Process simulation
-Error guessing
-Error seeding
-Timing and memory tests
-Performance testing
-Stress testing
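The first two techniques in the list can be illustrated together. The speed-monitor check and its 120 km/h threshold below are invented for illustration:

```python
def overspeed_alarm(speed_kmh: int) -> bool:
    """System under test: raise the alarm when speed exceeds 120 km/h."""
    return speed_kmh > 120

# Equivalence partitioning: the input space splits into two partitions
# (legal speeds, excessive speeds); one representative from each suffices.
assert overspeed_alarm(60) is False    # partition: clearly legal
assert overspeed_alarm(200) is True    # partition: clearly excessive

# Boundary value analysis: test at and either side of the boundary,
# where off-by-one faults typically hide.
assert overspeed_alarm(119) is False
assert overspeed_alarm(120) is False   # the boundary itself is still legal
assert overspeed_alarm(121) is True
```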
12 Static testing
-Static testing methods investigate the properties of a system without operating it.
-Some of the techniques are performed manually (walkthroughs, design reviews, inspections and checklists); others use automated tools.
-Static testing requires an insight into the nature of the system and therefore always uses a white-box approach.
-Many software static testing packages come under the heading of static code analysis tools (formal verification, semantic analysis, etc.).
14 Modelling
-Involves the use of a mathematical or graphical representation of the behaviour of a system or its environment.
-The model is used to gain an insight into the likely characteristics of the system.
-Can be applied manually or by using computer-based tools.
-Used most extensively during the early phases of project development, and is of particular importance in the production of the specification and the top-level design.
-Also plays an important part in system validation.
-Modelling covers a wide range of methods, including some aspects of formal methods. Such techniques are neither black-box nor white-box.
15 Modelling techniques
-Formal methods
-Software prototyping / animation
-Performance modelling
-State transition diagrams
-Time Petri nets
-Data flow diagrams
-Structure diagrams
-Environmental modelling
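A state transition diagram, one of the techniques listed, can be explored as an executable model before any real system exists. The states and events below are invented for illustration: a modelled shutdown controller whose safety property (a fault must pass through the tripped state before the system can run again) is checked without operating the real system.

```python
# State transition model: (current state, event) -> next state
TRANSITIONS = {
    ("running", "fault_detected"): "tripped",
    ("running", "stop_request"):   "stopped",
    ("tripped", "reset"):          "stopped",
    ("stopped", "start_request"):  "running",
}

def step(state: str, event: str) -> str:
    # Modelling choice: an undefined (state, event) pair leaves the state unchanged.
    return TRANSITIONS.get((state, event), state)

# Walk a path through the model and check the safety property.
state = "running"
state = step(state, "fault_detected")
assert state == "tripped"
state = step(state, "start_request")   # ignored: no direct restart from tripped
assert state == "tripped"
state = step(state, "reset")
state = step(state, "start_request")
assert state == "running"
```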
16 Testing for safety
-Testing of non-critical systems is primarily concerned with investigating performance with respect to functional requirements.
-In safety-critical systems one must also show that the safety requirements are satisfied, and much of the testing is aimed at demonstrating the safety of the system.
-General safety requirements: include the achievement of appropriate levels of safety integrity, reliability and quality.
-Specific safety requirements: include mechanisms for dealing with the various hazards associated with the system.
-Validation of a system in respect of its specific safety requirements requires tests showing that each identified hazard has been effectively countered. It may be possible to demonstrate such properties by dynamic testing alone, although static testing and modelling may also be needed.
-Validation of the general safety requirements of a system will often require a combination of testing techniques.
17 Test strategies
-Several testing techniques can be used in the development of safety-critical systems, and their relative use differs considerably between the various lifecycle phases (see table 12.3).
-The choice of testing techniques is usually determined by a number of factors:
-in-house expertise
-available tools
-the integrity level of the unit being developed.
-International standards give guidance on techniques that might be suitable for systems of differing levels of integrity (see table 12.4).
-In addition to selecting appropriate techniques for the testing process, it is also necessary to demonstrate the effectiveness of the testing performed.
-The effectiveness of testing may be quantified through the use of measures of:
-test coverage
-test adequacy.
18 Test coverage (1)
-Test coverage analysis attempts to estimate the performance of the testing procedure as a percentage of some ideal value.
-Test coverage analysis may be applied to black-box testing by considering all the possible input states of a system. If the system is then tested by applying a certain number of test cases, the test coverage may be calculated by dividing the number of test vectors used by the size of the input space.
-An ideal test would provide complete input test coverage; this is called exhaustive testing.
-Exhaustive testing is, however, almost always impossible: a system with 40 binary inputs has an input space of 2^40 combinations. At one test per millisecond, testing them all would take about 35 years.
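The slide's arithmetic can be checked directly:

```python
# Exhaustive testing of 40 binary inputs at one test per millisecond.
input_space = 2 ** 40                     # number of input combinations
seconds = input_space / 1000              # 1 test/ms = 1000 tests/s
years = seconds / (60 * 60 * 24 * 365)
assert round(years) == 35                 # roughly 35 years, as the slide states
```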
19 Test coverage (2)
-Effective testing is therefore reliant on the skill of the test engineer in defining a programme of tests that will yield meaningful data.
-As all the properties of a system cannot be tested, it is necessary to identify the features of importance, determine an appropriate strategy and investigate these.
-Coverage-based testing: identifies a number of situations to be investigated and then attempts to test an appropriate number of these cases.
-Requirements test coverage: the percentage of the functions within the requirements document that are investigated.
-Structure-based testing: information on the internal structure of the system is used to perform tests. Program elements that may be tested include statements, branches, paths, etc.
-Because of the importance and high cost of testing, the needs of testing must be considered during the design stage.
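Coverage measures of this kind share one shape: the fraction of identified items (requirements, statements, branches, paths) that the test set actually exercises. A minimal sketch, with illustrative requirement names:

```python
def coverage_percent(exercised: set, identified: set) -> float:
    """Percentage of the identified items that the tests exercised."""
    return 100.0 * len(exercised & identified) / len(identified)

# Requirements test coverage: 3 of the 4 functions in a (hypothetical)
# requirements document are investigated by the test programme.
requirements = {"R1", "R2", "R3", "R4"}
investigated = {"R1", "R2", "R4"}
assert coverage_percent(investigated, requirements) == 75.0
```

The same function applies to structure-based measures by substituting statement or branch identifiers for requirement names.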
20 Test adequacy
-Test adequacy criteria determine the form and amount of testing for a given application, and also the manner in which the test results should be obtained and analysed.
-A typical set of criteria will require the use of several testing methods and will necessitate both black-box and white-box techniques.
-Criteria may be divided into two main categories:
-requirements-based criteria (associated with black-box testing; take their information from the definition of the system)
-structure-based criteria (require white-box techniques; use data on the structure of the system).
-An adequacy criterion is normally associated with an underlying testing technique that is required to satisfy it.
21 Development tools
-The development of any computer-based system requires the use of a range of hardware tools (logic analysers, timing analysers, personal computers) and software tools (compilers, debuggers, editors).
-Of special interest when developing safety-critical systems are the tools associated with dynamic and static testing.
-The effectiveness of testing will be greatly affected by the automated tools used.
-Since the verification of a system will be based on test results, it is important that the tools themselves are of high dependability.
-Unfortunately, few test tools are validated, and almost no tools have been developed to the integrity levels required for testing the most critical systems.
22 Environmental simulation
-When developing safety-critical systems it is often impossible, or inadvisable, to test a system fully within its operating environment (for example, nuclear shut-down systems).
-In such cases, systems are tested using some form of simulation of the system's environment.
-This not only guarantees safety during the testing process, but may also allow a more efficient and complete investigation of the system's performance.
-The correctness of the simulation is fundamental to the validity of the test results.
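A minimal sketch of the idea (all class and function names invented for illustration): the shutdown logic under test reads from a simulated plant rather than a real one, so a hazardous over-temperature condition can be injected safely.

```python
class SimulatedPlant:
    """Stands in for the real operating environment during testing."""
    def __init__(self, temperatures):
        self._temps = iter(temperatures)

    def read_temperature(self):
        # Replays a scripted sequence of sensor readings.
        return next(self._temps)

def shutdown_required(plant, limit_c=350.0):
    """System under test: demand a trip when temperature exceeds the limit."""
    return plant.read_temperature() > limit_c

# Inject a condition that would be dangerous to reproduce for real.
plant = SimulatedPlant([300.0, 360.0])
assert shutdown_required(plant) is False   # 300.0 C: within limits
assert shutdown_required(plant) is True    # 360.0 C: trip demanded
```

As the slide notes, the results are only as trustworthy as the simulation itself: here, the validity of the test hinges on the scripted readings resembling real plant behaviour.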
23 Independent verification and validation
-Testing is more effective when performed by staff who are independent of those responsible for the implementation.
-As the integrity requirements increase, the need for independence also increases (IEC 61508).

Degree of independence required for validation (HR = highly recommended, NR = not recommended):

                           SIL 1   SIL 2   SIL 3   SIL 4
Independent persons          HR      HR      NR      NR
Independent department       -       HR      HR      NR
Independent organization     -       -       HR      HR