
1 Software Engineering DKT 311 Lecture 11 Verification and critical system validation

2 Basic definitions
– A failure is an unacceptable behaviour exhibited by a system. The frequency of failures measures the reliability. An important design objective is to achieve a very low failure rate and hence high reliability. A failure can result from a violation of an explicit or implicit requirement.
– A defect is a flaw in any aspect of the system that contributes, or may potentially contribute, to the occurrence of one or more failures. It might take several defects to cause a particular failure.
– An error is a slip-up or inappropriate decision by a software developer that leads to the introduction of a defect.

3 Effective and Efficient Testing
– Testing is like detective work:
The tester must try to understand how programmers and designers think, so as to better find defects.
The tester must not leave anything uncovered, and must be suspicious of everything.
It does not pay to take an excessive amount of time; the tester has to be efficient.

4 Black-box testing
Testers provide the system with inputs and observe the outputs.
– They can see none of:
The source code
The internal data
Any of the design documentation describing the system’s internals
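To make this concrete, here is a minimal black-box test sketch in Java, assuming JUnit 4. The Calendars.isLeapYear method is a hypothetical unit under test; its body is included only so the sketch compiles, since a black-box tester would work from the specification alone.

    import org.junit.Test;
    import static org.junit.Assert.*;

    // The unit under test. In black-box testing the tester never reads
    // this code, only the specification; it is shown so the sketch compiles.
    class Calendars {
        static boolean isLeapYear(int year) {
            return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
        }
    }

    public class LeapYearBlackBoxTest {
        // Test cases derived purely from the specification: divisible by 4,
        // except centuries, unless divisible by 400.
        @Test
        public void handlesOrdinaryAndBoundaryYears() {
            assertTrue(Calendars.isLeapYear(2024));   // ordinary leap year
            assertFalse(Calendars.isLeapYear(2023));  // ordinary non-leap year
            assertFalse(Calendars.isLeapYear(1900));  // century not divisible by 400
            assertTrue(Calendars.isLeapYear(2000));   // century divisible by 400
        }
    }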

5 Glass-box testing
Also called ‘white-box’ or ‘structural’ testing.
Testers have access to the system design.
– They can:
Examine the design documents
View the code
Observe at run time the steps taken by algorithms and their internal data
– Individual programmers often informally employ glass-box testing to verify their own code.
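By contrast, a glass-box tester reads the source and picks inputs that exercise its structure. A minimal sketch with a hypothetical discount method; the inputs 100 and 101 are chosen from the code, not the specification, to cover both branches and the boundary between them:

    public class DiscountGlassBox {
        // Hypothetical method; a glass-box tester reads this source and
        // designs inputs to exercise both branches and the boundary.
        static int discount(int total) {
            if (total > 100) {
                return total - total / 10;  // branch A: 10% bulk discount
            }
            return total;                   // branch B: no discount
        }

        public static void main(String[] args) {
            System.out.println(discount(100));  // 100: branch B, at the boundary
            System.out.println(discount(101));  // 91: branch A, just over it
        }
    }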

6 Detecting specific categories of defects
A tester must try to uncover any defects the other software engineers might have introduced.
– This means designing tests that explicitly try to catch a range of specific types of defects that commonly occur.

7 Defects in Ordinary Algorithms
Incorrect logical conditions
Not terminating a loop or recursion
Not setting up the correct preconditions for an algorithm
Not handling null conditions
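A small Java sketch of two of these defect types, an incorrect loop condition and an unhandled null precondition, using a hypothetical sum function:

    import java.util.List;

    public class SumDefects {
        // Defective version: the loop condition is wrong (an incorrect
        // logical condition) and a null argument is not handled.
        static int sumBroken(List<Integer> values) {
            int total = 0;
            for (int i = 0; i <= values.size(); i++) {  // <= reads past the end
                total += values.get(i);
            }
            return total;
        }

        // Corrected version: null precondition checked, loop bound fixed.
        static int sum(List<Integer> values) {
            if (values == null) {
                return 0;  // or throw, depending on the specification
            }
            int total = 0;
            for (int i = 0; i < values.size(); i++) {
                total += values.get(i);
            }
            return total;
        }

        public static void main(String[] args) {
            System.out.println(sum(List.of(1, 2, 3)));  // 6
            System.out.println(sum(null));              // 0
        }
    }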

8 Defects in Numerical Algorithms
Not using enough bits or digits
Not using enough places after the decimal point or significant figures
Assuming a floating point value will be exactly equal to some other value

9 Example of a defect in testing floating-point value equality

Bad:
    for (double d = 0.0; d != 10.0; d += 2.0) {...}

Better:
    for (double d = 0.0; d < 10.0; d += 2.0) {...}
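When a floating-point comparison genuinely needs to test "equality", a common remedy is to test closeness within a tolerance. A minimal sketch; the tolerance 1e-9 is an illustrative assumption and should be chosen per application:

    public class ApproxEquals {
        // Compare floating-point values for closeness rather than
        // exact equality. The tolerance 1e-9 is an assumed example value.
        static boolean approxEquals(double a, double b) {
            return Math.abs(a - b) < 1e-9;
        }

        public static void main(String[] args) {
            double sum = 0.1 + 0.2;
            System.out.println(sum == 0.3);              // false: representation error
            System.out.println(approxEquals(sum, 0.3));  // true
        }
    }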

10 Defects in Timing and Co-ordination
Deadlock and livelock
– Defects:
A deadlock is a situation where two or more threads are stopped, waiting for each other to do something. The system is hung.
Livelock is similar, except that the system can still do some computation; it can never get out of a certain set of states.
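A minimal Java sketch of how a deadlock can arise: two threads acquire the same two locks in opposite orders. The sleeps only make the bad interleaving likely; the lock names are illustrative.

    public class DeadlockDemo {
        private static final Object lockA = new Object();
        private static final Object lockB = new Object();

        public static void main(String[] args) {
            // Thread 1 takes lockA then lockB; thread 2 takes them in the
            // opposite order. Once each holds its first lock, both wait
            // forever for the other's lock: deadlock.
            new Thread(() -> {
                synchronized (lockA) {
                    sleep(100);
                    synchronized (lockB) { System.out.println("t1 done"); }
                }
            }).start();
            new Thread(() -> {
                synchronized (lockB) {
                    sleep(100);
                    synchronized (lockA) { System.out.println("t2 done"); }
                }
            }).start();
        }

        private static void sleep(long ms) {
            try { Thread.sleep(ms); } catch (InterruptedException e) { }
        }
    }

A consistent lock ordering (every thread always takes lockA before lockB) removes the deadlock.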

11 Defects in Timing and Co-ordination
Critical races
– Defects:
One thread experiences a failure because another thread interferes with the ‘normal’ sequence of events.
– Testing strategies:
It is particularly hard to test for critical races using black-box testing alone.
One possible, although invasive, strategy is to deliberately slow down one of the threads.
Use inspection.
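A minimal sketch of a critical race in Java: two threads increment a shared counter without synchronisation, so read-modify-write updates are lost.

    public class RaceDemo {
        static int counter = 0;  // shared and unsynchronized

        public static void main(String[] args) throws InterruptedException {
            Runnable task = () -> {
                for (int i = 0; i < 100_000; i++) {
                    counter++;  // read-modify-write: not atomic
                }
            };
            Thread t1 = new Thread(task), t2 = new Thread(task);
            t1.start(); t2.start();
            t1.join(); t2.join();
            // Expected 200000, but lost updates usually make it smaller.
            System.out.println(counter);
        }
    }

The invasive slow-down strategy mentioned above could be simulated here by adding a Thread.sleep call inside one thread's loop to widen the window in which the interference occurs.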

12 Example of critical race
[Figure: a) Normal sequence of events; b) Abnormal sequence due to delay in thread A]

13 Defects in Handling Stress and Unusual Situations
Insufficient throughput or response time on minimal configurations
– Defect:
On a minimal configuration, the system’s throughput or response time fails to meet requirements.
– Testing strategy:
Perform testing using minimally configured platforms.
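One way such a test run can check a response-time requirement is with a simple timing harness, as in this sketch; handleRequest and the 200 ms budget are hypothetical placeholders:

    public class ResponseTimeCheck {
        public static void main(String[] args) {
            final long budgetMillis = 200;  // hypothetical requirement
            long worst = 0;
            for (int i = 0; i < 1_000; i++) {
                long start = System.nanoTime();
                handleRequest(i);  // hypothetical operation under test
                long elapsed = (System.nanoTime() - start) / 1_000_000;
                worst = Math.max(worst, elapsed);
            }
            System.out.println("worst case: " + worst
                    + " ms, budget: " + budgetMillis + " ms");
        }

        static void handleRequest(int i) { /* call the system under test */ }
    }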

14 Inspections
An inspection is an activity in which one or more people systematically examine source code or documentation, looking for defects.
– Normally, inspection involves a meeting, although participants can also inspect alone at their desks.

15 Inspecting compared to testing
– Testing and inspection rely on different aspects of human intelligence.
– Testing can find defects whose consequences are obvious but which are buried in complex code.
– Inspecting can find defects that relate to maintainability or efficiency.
– The chances of mistakes are reduced if both activities are performed.

16 Validation of critical systems
Verification and validation of critical systems involves additional validation processes and analyses, and hence higher costs, than for non-critical systems:
– The costs and consequences of failure are high, so it is cheaper to find and remove faults than to pay for system failure;
– You may have to make a formal case to customers or to a regulator that the system meets its dependability requirements. This dependability case may require specific V & V activities to be carried out.

17 Reliability validation Reliability validation involves exercising the program to assess whether or not it has reached the required level of reliability. This cannot normally be included as part of a normal defect testing process because data for defect testing is (usually) atypical of actual usage data. Reliability measurement therefore requires a specially designed data set that replicates the pattern of inputs to be processed by the system.

18 Reliability validation activities
– Establish the operational profile for the system.
– Construct test data reflecting the operational profile.
– Test the system and observe the number of failures and the times of these failures.
– Compute the reliability after a statistically significant number of failures have been observed.
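A minimal sketch of the last three activities, assuming a hypothetical operational profile (70% queries, 20% updates, 10% reports) and a stubbed execute call; the value computed is an estimate of the probability of failure on demand:

    import java.util.Random;

    public class ReliabilityRun {
        public static void main(String[] args) {
            // Hypothetical operational profile as cumulative probabilities.
            String[] classes = { "query", "update", "report" };
            double[] cumulative = { 0.7, 0.9, 1.0 };
            Random rnd = new Random(42);

            int runs = 100_000, failures = 0;
            for (int i = 0; i < runs; i++) {
                // Draw an input class with the profile's frequencies.
                double r = rnd.nextDouble();
                int k = 0;
                while (r > cumulative[k]) k++;
                if (!execute(classes[k])) failures++;  // run system, observe outcome
            }
            // Estimated probability of failure on demand: failures / runs.
            System.out.printf("POFOD estimate: %.6f%n", (double) failures / runs);
        }

        // Stub standing in for the system under test; always succeeds here.
        static boolean execute(String inputClass) { return true; }
    }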

19 Reliability measurement problems
Operational profile uncertainty
– The operational profile may not be an accurate reflection of the real use of the system.
High costs of test data generation
– Costs can be very high if the test data for the system cannot be generated automatically.
Statistical uncertainty
– You need a statistically significant number of failures to compute the reliability, but highly reliable systems will rarely fail.

20 Safety assurance
Safety assurance and reliability measurement are quite different:
– Within the limits of measurement error, you know whether or not a required level of reliability has been achieved;
– However, quantitative measurement of safety is impossible. Safety assurance is concerned with establishing a confidence level in the system.

21 Safety confidence
Confidence in the safety of a system can vary from very low to very high.
Confidence is developed through:
– Past experience with the company developing the software;
– The use of dependable processes and process activities geared to safety;
– Extensive V & V including both static and dynamic validation techniques.

22 Safety reviews
Review for correct intended function.
Review for maintainable, understandable structure.
Review to verify algorithm and data structure design against specification.
Review to check code consistency with algorithm and data structure design.
Review adequacy of system testing.

23 Security assessment
Security assessment has something in common with safety assessment.
It is intended to demonstrate that the system cannot enter some state (an unsafe or an insecure state) rather than to demonstrate that the system can do something.
However, there are differences:
– Safety problems are accidental; security problems are deliberate;
– Security problems are more generic - many systems suffer from the same problems; safety problems are mostly related to the application domain.

24 Security validation
Experience-based validation
– The system is reviewed and analysed against the types of attack that are known to the validation team.
Tool-based validation
– Various security tools such as password checkers are used to analyse the system in operation.
Tiger teams
– A team is established whose goal is to breach the security of the system by simulating attacks on the system.
Formal verification
– The system is verified against a formal security specification.
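As an illustration of tool-based validation, a password checker can be as simple as the following sketch; the strength rules here are illustrative assumptions, not a recognised standard:

    public class PasswordChecker {
        // Minimal sketch of a tool-based check: flag weak passwords.
        // The rules below are assumed for illustration only.
        static boolean isWeak(String password) {
            if (password.length() < 8) return true;
            boolean hasDigit = password.chars().anyMatch(Character::isDigit);
            boolean hasUpper = password.chars().anyMatch(Character::isUpperCase);
            return !(hasDigit && hasUpper);
        }

        public static void main(String[] args) {
            System.out.println(isWeak("letmein"));      // true: too short
            System.out.println(isWeak("Passw0rd2024")); // false under these rules
        }
    }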

25 Security checklist

26 Safety and dependability cases Safety and dependability cases are structured documents that set out detailed arguments and evidence that a required level of safety or dependability has been achieved. They are normally required by regulators before a system can be certified for operational use.

27 The system safety case
It is now normal practice for a formal safety case to be required for all safety-critical computer-based systems, e.g. railway signalling, air traffic control, etc.
A safety case is:
– A documented body of evidence that provides a convincing and valid argument that a system is adequately safe for a given application in a given environment.
Arguments in a safety or dependability case can be based on formal proof, design rationale, safety proofs, etc. Process factors may also be included.

28 Components of a safety case

29 Argument structure

30 Insulin pump argument

31 Claim hierarchy

