©Ian Sommerville 2004, Software Engineering, 7th edition. Chapter 24, Slide 1: Critical Systems Validation

©Ian Sommerville 2004, Software Engineering, 7th edition. Chapter 24, Slide 2: Objectives
• To explain how system reliability can be measured and how reliability growth models can be used for reliability prediction
• To describe safety arguments and how these are used
• To discuss the problems of safety assurance
• To introduce safety cases and how these are used in safety validation

©Ian Sommerville 2004, Software Engineering, 7th edition. Chapter 24, Slide 3: Topics covered
• Reliability validation
• Safety assurance
• Security assessment
• Safety and dependability cases

©Ian Sommerville 2004, Software Engineering, 7th edition. Chapter 24, Slide 4: Validation of critical systems
• Verification and validation of critical systems involves additional validation processes and analyses beyond those used for non-critical systems:
  - The costs and consequences of failure are high, so it is cheaper to find and remove faults than to pay for system failure;
  - You may have to make a formal case to customers or to a regulator that the system meets its dependability requirements. This dependability case may require specific V & V activities to be carried out.

©Ian Sommerville 2004, Software Engineering, 7th edition. Chapter 24, Slide 5: Validation costs
• Because of the additional activities involved, the validation costs for critical systems are usually significantly higher than for non-critical systems.
• Normally, V & V costs take up more than 50% of the total system development costs.

©Ian Sommerville 2004, Software Engineering, 7th edition. Chapter 24, Slide 6: Reliability validation
• Reliability validation involves exercising the program to assess whether or not it has reached the required level of reliability.
• This cannot normally be included as part of a normal defect testing process because data for defect testing is (usually) atypical of actual usage data.
• Reliability measurement therefore requires a specially designed data set that replicates the pattern of inputs to be processed by the system.

©Ian Sommerville 2004, Software Engineering, 7th edition. Chapter 24, Slide 7: The reliability measurement process

©Ian Sommerville 2004, Software Engineering, 7th edition. Chapter 24, Slide 8: Reliability validation activities
• Establish the operational profile for the system.
• Construct test data reflecting the operational profile.
• Test the system and observe the number of failures and the times of these failures.
• Compute the reliability after a statistically significant number of failures have been observed.
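
The final activity, computing the reliability from observed failures, is not shown in the slides. The following is a minimal Python sketch, assuming hypothetical failure times from an operational-profile test run, of how two common reliability metrics (rate of occurrence of failure and mean time to failure) might be estimated; in practice the metric used would be the one named in the system's reliability specification.

```python
# Minimal sketch (not from the slides): estimating reliability metrics
# from failure times observed during statistical testing.

def reliability_metrics(failure_times, total_test_time):
    """failure_times: times (hours of operation) at which failures were
    observed; total_test_time: total hours covered by the test runs."""
    n = len(failure_times)
    if n == 0:
        raise ValueError("No failures observed - not enough data yet")

    # Rate of occurrence of failure (ROCOF): failures per unit time.
    rocof = n / total_test_time

    # Mean time to failure (MTTF): average interval between failures.
    intervals = [failure_times[0]] + [
        t2 - t1 for t1, t2 in zip(failure_times, failure_times[1:])
    ]
    mttf = sum(intervals) / len(intervals)
    return rocof, mttf

# Hypothetical failure data from a 1000-hour statistical test run.
rocof, mttf = reliability_metrics([120, 340, 430, 700, 910], 1000)
print(f"ROCOF = {rocof:.4f} failures/hour, MTTF = {mttf:.1f} hours")
```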

©Ian Sommerville 2004, Software Engineering, 7th edition. Chapter 24, Slide 9: Statistical testing
• Testing software for reliability rather than fault detection.
• Measuring the number of errors allows the reliability of the software to be predicted. Note that, for statistical reasons, more errors than are allowed for in the reliability specification must be induced.
• An acceptable level of reliability should be specified and the software tested and amended until that level of reliability is reached.

©Ian Sommerville 2004, Software Engineering, 7th edition. Chapter 24, Slide 10: Reliability measurement problems
• Operational profile uncertainty
  - The operational profile may not be an accurate reflection of the real use of the system.
• High costs of test data generation
  - Costs can be very high if the test data for the system cannot be generated automatically.
• Statistical uncertainty
  - You need a statistically significant number of failures to compute the reliability, but highly reliable systems will rarely fail.

©Ian Sommerville 2004, Software Engineering, 7th edition. Chapter 24, Slide 11: Operational profiles
• An operational profile is a set of test data whose input frequencies match the frequencies of those inputs in ‘normal’ usage of the system.
  - A close match with actual usage is necessary, otherwise the measured reliability will not reflect the reliability experienced in actual use of the system.
• It can be generated from real data collected from an existing system or (more often) depends on assumptions made about the pattern of usage of the system.

©Ian Sommerville 2004, Software Engineering, 7th edition. Chapter 24, Slide 12: An operational profile

©Ian Sommerville 2004, Software Engineering, 7th edition. Chapter 24, Slide 13: Operational profile generation
• Should be generated automatically whenever possible.
• Automatic profile generation is difficult for interactive systems.
• May be straightforward for ‘normal’ inputs but it is difficult to predict ‘unlikely’ inputs and to create test data for them.
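
As an illustration of profile-driven test-data generation, the sketch below draws input classes with frequencies that match an assumed operational profile. The input classes, probabilities and function name are hypothetical, not taken from the slides.

```python
import random

# Minimal sketch: draw test inputs so that their frequencies match an
# assumed operational profile.  Classes and probabilities are illustrative.
OPERATIONAL_PROFILE = {
    "normal_transaction": 0.82,
    "query": 0.12,
    "correction": 0.05,
    "unlikely_input": 0.01,   # the rare inputs are the hard part to predict
}

def generate_test_inputs(n):
    classes = list(OPERATIONAL_PROFILE)
    weights = list(OPERATIONAL_PROFILE.values())
    return random.choices(classes, weights=weights, k=n)

# In practice, each class would be mapped to a concrete test-data generator.
print(generate_test_inputs(10))
```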

©Ian Sommerville 2004, Software Engineering, 7th edition. Chapter 24, Slide 14: Reliability prediction
• A reliability growth model is a mathematical model of how system reliability changes as the system is tested and faults are removed.
• It is used as a means of reliability prediction by extrapolating from current data.
  - This simplifies test planning and customer negotiations: you can predict when testing will be completed and demonstrate to customers whether or not the required reliability will ever be achieved.
• Prediction depends on the use of statistical testing to measure the reliability of a system version.

©Ian Sommerville 2004, Software Engineering, 7th edition. Chapter 24, Slide 15: Equal-step reliability growth

©Ian Sommerville 2004, Software Engineering, 7th edition. Chapter 24, Slide 16: Observed reliability growth
• The equal-step growth model is simple but it does not normally reflect reality.
• Reliability does not necessarily increase with change, as the change can introduce new faults.
• The rate of reliability growth tends to slow down with time as frequently occurring faults are discovered and removed from the software.
• A random-growth model, where reliability changes fluctuate, may be a more accurate reflection of real changes to reliability.

©Ian Sommerville 2004, Software Engineering, 7th edition. Chapter 24, Slide 17: Random-step reliability growth

©Ian Sommerville 2004, Software Engineering, 7th edition. Chapter 24, Slide 18: Growth model selection
• Many different reliability growth models have been proposed.
• There is no universally applicable growth model.
• Reliability should be measured and observed data should be fitted to several models.
• The best-fit model can then be used for reliability prediction.
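
The model-selection idea can be illustrated with a small sketch: fit several candidate growth models to observed cumulative failure counts and keep the best fit for prediction. The two model forms, the data and the starting parameters below are illustrative assumptions, not values from the slides; scipy is used for the curve fitting.

```python
import numpy as np
from scipy.optimize import curve_fit

# Two candidate reliability growth models (illustrative forms).
def exponential_model(t, a, b):      # a * (1 - e^(-b*t))
    return a * (1.0 - np.exp(-b * t))

def logarithmic_model(t, a, b):      # a * ln(1 + b*t)
    return a * np.log(1.0 + b * t)

# Hypothetical observations: cumulative failures against test time (hours).
t = np.array([10, 20, 30, 40, 50, 60, 70, 80], dtype=float)
failures = np.array([9, 15, 20, 23, 25, 27, 28, 29], dtype=float)

best = None
for name, model in [("exponential", exponential_model),
                    ("logarithmic", logarithmic_model)]:
    params, _ = curve_fit(model, t, failures, p0=[30.0, 0.05], maxfev=10000)
    residual = np.sum((failures - model(t, *params)) ** 2)
    if best is None or residual < best[2]:
        best = (name, params, residual)

name, params, residual = best
print(f"Best-fit model: {name}, parameters = {params}, residual = {residual:.2f}")
# The best-fit model can then be extrapolated to estimate when the
# required reliability will be reached.
```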

©Ian Sommerville 2004, Software Engineering, 7th edition. Chapter 24, Slide 19: Reliability prediction

©Ian Sommerville 2004, Software Engineering, 7th edition. Chapter 24, Slide 20: Safety assurance
• Safety assurance and reliability measurement are quite different:
  - Within the limits of measurement error, you know whether or not a required level of reliability has been achieved;
  - However, quantitative measurement of safety is impossible. Safety assurance is concerned with establishing a confidence level in the system.

©Ian Sommerville 2004, Software Engineering, 7th edition. Chapter 24, Slide 21: Safety confidence
• Confidence in the safety of a system can vary from very low to very high.
• Confidence is developed through:
  - Past experience with the company developing the software;
  - The use of dependable processes and process activities geared to safety;
  - Extensive V & V including both static and dynamic validation techniques.

©Ian Sommerville 2004, Software Engineering, 7th edition. Chapter 24, Slide 22: Safety reviews
• Review for correct intended function.
• Review for maintainable, understandable structure.
• Review to verify algorithm and data structure design against specification.
• Review to check code consistency with algorithm and data structure design.
• Review adequacy of system testing.

©Ian Sommerville 2004, Software Engineering, 7th edition. Chapter 24, Slide 23: Review guidance
• Make software as simple as possible.
• Use simple techniques for software development, avoiding error-prone constructs such as pointers and recursion.
• Use information hiding to localise the effect of any data corruption.
• Make appropriate use of fault-tolerant techniques but do not be seduced into thinking that fault-tolerant software is necessarily safe.

©Ian Sommerville 2004, Software Engineering, 7th edition. Chapter 24, Slide 24: Safety arguments
• Safety arguments are intended to show that the system cannot reach an unsafe state.
• They are weaker than correctness arguments, which must show that the system code conforms to its specification.
• They are generally based on proof by contradiction:
  - Assume that an unsafe state can be reached;
  - Show that this is contradicted by the program code.
• A graphical model of the safety argument may be developed.

©Ian Sommerville 2004, Software Engineering, 7th edition. Chapter 24, Slide 25: Construction of a safety argument
• Establish the safe exit conditions for a component or a program.
• Starting from the END of the code, work backwards until you have identified all paths that lead to the exit of the code.
• Assume that the exit condition is false.
• Show that, for each path leading to the exit, the assignments made on that path contradict the assumption of an unsafe exit from the component.

©Ian Sommerville 2004, Software Engineering, 7th edition. Chapter 24, Slide 26: Insulin delivery code

©Ian Sommerville 2004, Software Engineering, 7th edition. Chapter 24, Slide 27: Safety argument model

©Ian Sommerville 2004, Software Engineering, 7th edition. Chapter 24, Slide 28: Program paths
• Neither branch of if-statement 2 is executed
  - Can only happen if currentDose is >= minimumDose and <= maxDose.
• then branch of if-statement 2 is executed
  - currentDose = 0.
• else branch of if-statement 2 is executed
  - currentDose = maxDose.
• In all cases, the post-conditions contradict the unsafe condition that the dose administered is greater than maxDose.
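
The slide refers to the insulin delivery code of the earlier figure, which is not reproduced in this transcript. The sketch below is a hedged reconstruction, in Python, of the kind of dose-limiting check the argument describes; the identifier names follow the slide (currentDose, minimumDose, maxDose), while the numeric limits and the limit_dose function are illustrative.

```python
# Hedged reconstruction (not the slide's actual listing) of the dose-limiting
# check analysed in the safety argument.

minimumDose = 1      # smallest dose worth administering (illustrative)
maxDose = 5          # maximum safe single dose (illustrative)

def limit_dose(currentDose):
    # ... currentDose has been computed from sensor readings before this point ...
    # if-statement 2: the safety check discussed in the program paths above.
    if currentDose < minimumDose:      # then branch
        currentDose = 0
    elif currentDose > maxDose:        # else branch
        currentDose = maxDose
    # On every exit path, currentDose <= maxDose, which contradicts the
    # assumed unsafe exit condition (currentDose > maxDose).
    return currentDose
```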

©Ian Sommerville 2004, Software Engineering, 7th edition. Chapter 24, Slide 29: Process assurance
• Process assurance involves defining a dependable process and ensuring that this process is followed during the system development.
• As discussed in Chapter 20, the use of a safe process is a mechanism for reducing the chances that errors are introduced into a system:
  - Accidents are rare events, so testing may not find all problems;
  - Safety requirements are sometimes ‘shall not’ requirements, so cannot be demonstrated through testing.

©Ian Sommerville 2004, Software Engineering, 7th edition. Chapter 24, Slide 30: Safety-related process activities
• Creation of a hazard logging and monitoring system.
• Appointment of project safety engineers.
• Extensive use of safety reviews.
• Creation of a safety certification system.
• Detailed configuration management (see Chapter 29).

©Ian Sommerville 2004, Software Engineering, 7th edition. Chapter 24, Slide 31: Hazard analysis
• Hazard analysis involves identifying hazards and their root causes.
• There should be clear traceability from identified hazards through their analysis to the actions taken during the process to ensure that these hazards have been covered.
• A hazard log may be used to track hazards throughout the process.

©Ian Sommerville 2004, Software Engineering, 7th edition. Chapter 24, Slide 32: Hazard log entry
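
The hazard log entry itself appears as a figure in the original slides and is not reproduced in this transcript. As a rough indication only, the sketch below shows the kind of fields such an entry might record; the field names and example values are assumptions, not the slide's layout.

```python
from dataclasses import dataclass, field
from typing import List

# Rough sketch of the information a hazard log entry might record.
@dataclass
class HazardLogEntry:
    hazard_id: str
    description: str
    identified_by: str
    criticality: str                      # e.g. "catastrophic", "critical"
    assessed_risk: str                    # e.g. "high", "medium", "low"
    analysis: str                         # e.g. a fault tree reference
    safety_requirements: List[str] = field(default_factory=list)

entry = HazardLogEntry(
    hazard_id="H-3",
    description="Insulin overdose delivered to patient",
    identified_by="Project safety engineer",
    criticality="catastrophic",
    assessed_risk="medium",
    analysis="Fault tree FT-3, version 1.2",
    safety_requirements=["A single dose must never exceed maxDose"],
)
```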

©Ian Sommerville 2004, Software Engineering, 7th edition. Chapter 24, Slide 33: Run-time safety checking
• During program execution, safety checks can be incorporated as assertions to check that the program is executing within a safe operating ‘envelope’.
• Assertions can be included as comments (or using an assert statement in some languages). Code can be generated automatically to check these assertions.

©Ian Sommerville 2004, Software Engineering, 7th edition. Chapter 24, Slide 34: Insulin administration with assertions
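
The assertion-annotated insulin code is likewise shown as a figure that is not reproduced in this transcript. The following is a minimal sketch, assuming illustrative dose limits and a hypothetical pump_deliver() interface, of how run-time checks on the safe operating envelope can be expressed with assert statements.

```python
# Minimal sketch, not the book's listing: run-time safety checks as assertions.

maxDose = 5             # maximum safe single dose (illustrative)
dailyLimit = 25         # maximum cumulative daily dose (illustrative)

def pump_deliver(dose):
    """Hypothetical hardware interface standing in for the real pump driver."""
    print(f"delivering {dose} units")

def administer_insulin(currentDose, cumulativeDailyDose):
    # The assertions define the safe operating 'envelope'; in languages without
    # an assert statement they could be written as structured comments and the
    # checking code generated from them, as the slide notes.
    assert 0 <= currentDose <= maxDose, "unsafe single dose computed"
    assert cumulativeDailyDose + currentDose <= dailyLimit, \
        "daily dose limit would be exceeded"
    pump_deliver(currentDose)
    return cumulativeDailyDose + currentDose

total = administer_insulin(3, 10)   # passes both checks, delivers 3 units
```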

©Ian Sommerville 2004, Software Engineering, 7th edition. Chapter 24, Slide 35: Security assessment
• Security assessment has something in common with safety assessment.
• It is intended to demonstrate that the system cannot enter some state (an unsafe or an insecure state) rather than to demonstrate that the system can do something.
• However, there are differences:
  - Safety problems are accidental; security problems are deliberate;
  - Security problems are more generic - many systems suffer from the same problems;
  - Safety problems are mostly related to the application domain.

©Ian Sommerville 2004, Software Engineering, 7th edition. Chapter 24, Slide 36: Security validation
• Experience-based validation
  - The system is reviewed and analysed against the types of attack that are known to the validation team.
• Tool-based validation
  - Various security tools such as password checkers are used to analyse the system in operation.
• Tiger teams
  - A team is established whose goal is to breach the security of the system by simulating attacks on the system.
• Formal verification
  - The system is verified against a formal security specification.
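
To make the tool-based approach concrete, here is a toy sketch of the sort of password-strength check such a tool might apply. The rules and the COMMON_PASSWORDS list are illustrative assumptions, not a recommended policy and not part of the slides.

```python
import re

# Toy sketch of a password-strength check used in tool-based validation.
COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein"}

def weak_password(password: str) -> bool:
    if password.lower() in COMMON_PASSWORDS:
        return True
    if len(password) < 10:
        return True
    # Require at least three of four character classes.
    classes = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^A-Za-z0-9]"]
    return sum(bool(re.search(c, password)) for c in classes) < 3

print(weak_password("letmein"))          # True - on the common-password list
print(weak_password("Tr4ce-ab!lity"))    # False - long, mixed character classes
```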

©Ian Sommerville 2004, Software Engineering, 7th edition. Chapter 24, Slide 37: Security checklist

©Ian Sommerville 2004, Software Engineering, 7th edition. Chapter 24, Slide 38: Safety and dependability cases
• Safety and dependability cases are structured documents that set out detailed arguments and evidence that a required level of safety or dependability has been achieved.
• They are normally required by regulators before a system can be certified for operational use.

©Ian Sommerville 2004, Software Engineering, 7th edition. Chapter 24, Slide 39: The system safety case
• It is now normal practice for a formal safety case to be required for all safety-critical computer-based systems, e.g. railway signalling, air traffic control, etc.
• A safety case is:
  - A documented body of evidence that provides a convincing and valid argument that a system is adequately safe for a given application in a given environment.
• Arguments in a safety or dependability case can be based on formal proof, design rationale, safety proofs, etc. Process factors may also be included.

©Ian Sommerville 2004, Software Engineering, 7th edition. Chapter 24, Slide 40: Components of a safety case

©Ian Sommerville 2004, Software Engineering, 7th edition. Chapter 24, Slide 41: Argument structure

©Ian Sommerville 2004, Software Engineering, 7th edition. Chapter 24, Slide 42: Insulin pump argument

©Ian Sommerville 2004, Software Engineering, 7th edition. Chapter 24, Slide 43: Claim hierarchy
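
The argument structure, insulin pump argument and claim hierarchy are shown as figures in the original slides and are not reproduced here. As a rough sketch only, such a hierarchy might be represented as nested claims, each carrying its supporting evidence and argument; the class and the example content below are assumptions for illustration, not the slides' diagrams.

```python
from dataclasses import dataclass, field
from typing import List

# Rough sketch: one way to represent a claim hierarchy for a safety case.
@dataclass
class Claim:
    statement: str
    argument: str = ""                            # why the evidence supports the claim
    evidence: List[str] = field(default_factory=list)
    subclaims: List["Claim"] = field(default_factory=list)

top_claim = Claim(
    statement="The insulin pump will not deliver an unsafe dose of insulin",
    argument="Safety argument over the dose-limiting code, plus test evidence",
    evidence=["Statistical test report", "Static analysis report"],
    subclaims=[
        Claim(statement="currentDose never exceeds maxDose",
              evidence=["Proof by contradiction over the program paths"]),
    ],
)
```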

©Ian Sommerville 2004, Software Engineering, 7th edition. Chapter 24, Slide 44: Key points
• Reliability measurement relies on exercising the system using an operational profile - a simulated input set which matches the actual usage of the system.
• Reliability growth modelling is concerned with modelling how the reliability of a software system improves as it is tested and faults are removed.
• Safety arguments or proofs are a way of demonstrating that a hazardous condition can never occur.

©Ian Sommerville 2004, Software Engineering, 7th edition. Chapter 24, Slide 45: Key points
• It is important to have a dependable process for safety-critical systems development. The process should include hazard identification and monitoring activities.
• Security validation may involve experience-based analysis, tool-based analysis or the use of ‘tiger teams’ to attack the system.
• Safety cases collect together the evidence that a system is safe.