Ensuring the Dependability of Software Systems

1 Ensuring the Dependability of Software Systems
Dr. Lionel Briand, P.Eng., Canada Research Chair (Tier I), Software Quality Engineering Lab, Carleton University, Ottawa

2 Carleton SE Programs
Accredited B.Eng. in Software Engineering
Full course on verification and validation
Full course on software quality management
Graduate studies
SQUALL lab: supported by a CRC chair

3 Objectives
Overview of the main practical issues, with a focus on testing
Current solutions and research
Future research agenda

4 Outline Background Issues Test strategies Testability Test Automation
Conclusions and future work

5 Outline Background Issues Test strategies Testability Test Automation
Conclusions and future work

6 Dependability
Dependability: correctness, reliability, safety, robustness
Correct but not safe or robust: the specification is inadequate
Reliable but not correct: failures happen rarely
Safe but not correct: annoying failures may happen
Robust but not safe: catastrophic failures are possible

7 Improving Dependability
Fault handling comprises fault avoidance, fault detection, and fault tolerance:
Fault avoidance: design methodology, configuration management, verification
Fault detection: inspections; testing (component, integration, and system testing); debugging (correctness and performance debugging)
Fault tolerance: atomic transactions, modular redundancy

8 Testing Process Overview
[Figure: tests are derived from a software representation (model) and run against the code; actual results are compared with expected results by an oracle.]
Producing test cases and an oracle is difficult and time consuming
The concept of an oracle is a key concept in testing
Testing is also very dependent on the representation's form and content, e.g., the specification formalism

9 Many Causes of Failures
The specification may be wrong or have a missing requirement
The specification may contain a requirement that is impossible to implement given the prescribed software and hardware
The system design may contain a fault
The program code may be wrong
Different testing activities look at these different causes – see next slide

10 Testing in the Life Cycle
[Figure: typical test activities, inputs, and outputs in the life cycle, from Pfleeger, Software Engineering: Theory and Practice, 1998 – component code is unit-tested into tested components; integrated modules go through integration test; the functioning system through function and performance tests; verified, validated software through acceptance and installation tests, ending with the system in use. Inputs include design descriptions, system functional specifications, other software specifications, the user environment, and customer requirements.]
Test planning and preparation should happen early in the life cycle
Common terminology: system test = function + performance test

11 Practice
No systematic test strategies
Very basic tools (e.g., capture and replay of test executions)
No clear test processes with explicit objectives
Poor testability
But a substantial part of the development effort (between 30% and 50%) is spent on testing
Software engineering must become an engineering practice

12 Ariane 5 – ESA Launcher
This is the take-off of flight 501, French Guiana, 1996. That beautiful piece of engineering was destroyed by, let's be frank, engineering neglect. Software was assumed to be easy … This was a very visible software defect – most of them are not that visible … but they are common. There are many books filled with such stories.

13 Ariane 5 – Root Cause
Source: ARIANE 5 Flight 501 Failure, Report by the Inquiry Board
A program segment for converting a floating-point number to a signed 16-bit integer was executed with an input value outside the range representable by a signed 16-bit integer. This run-time error (out of range, overflow) arose in both the active and the backup computers at about the same time; it was detected, and both computers shut themselves down. This resulted in the total loss of attitude control: the Ariane 5 turned uncontrollably and aerodynamic forces broke the vehicle apart. The breakup was detected by an on-board monitor, which ignited the explosive charges to destroy the vehicle in the air. Ironically, the result of this format conversion was no longer needed after lift-off.
PRE: <= x <= , POST: y=int(x)
It would seem likely that the programmer of the conversion subprogram realized that the value of the floating-point number to be converted must lie within a restricted range, namely the range of values which (after truncating or rounding to an integer) can be represented as a signed 16-bit number. Such restrictions on input values (preconditions for subprograms) were, however, neither systematically derived, documented, nor followed back to determine corresponding restrictions on other values computed earlier [1].
The SRI was no longer needed, for Ariane 5, when it failed. Because there was no exception handling or backup procedure, when the flight system realized the data from the SRI were flawed, the whole flight system shut off.
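The faulty conversion is easy to reproduce. Below is a hedged Python sketch (not the original Ada code; names and structure are illustrative): an unchecked conversion that fails at run time, next to a variant that enforces the documented precondition by saturating. The constants are simply the signed 16-bit limits.

```python
# Illustrative sketch of the Ariane 5 conversion fault: a float is
# narrowed to a signed 16-bit integer without guaranteeing the
# precondition that it fits in the representable range.

INT16_MIN, INT16_MAX = -32768, 32767  # signed 16-bit range

def convert_unchecked(x: float) -> int:
    """Raises at run time on overflow, as on flight 501."""
    y = int(x)
    if not (INT16_MIN <= y <= INT16_MAX):
        raise OverflowError(f"{x} is outside the signed 16-bit range")
    return y

def convert_guarded(x: float) -> int:
    """Defensive variant: the precondition is enforced by saturating,
    so an out-of-range input degrades gracefully instead of failing."""
    return max(INT16_MIN, min(INT16_MAX, int(x)))
```

The point is not which variant is "right" – saturating may be wrong for a guidance value – but that the restriction must be documented and handled somewhere, not silently assumed.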

14 Ariane 5 – Lessons Learned
Rigorous reuse procedures, including usage-based testing (based on operational profiles)
Adequate exception handling strategies (backup, degraded procedures?)
Clear, complete, documented specifications (e.g., preconditions, postconditions)
Note this was not a complex computing problem, but a deficiency of the software engineering practices in place. There are several ways to look at the problem; it could have been prevented or detected in several ways.

15 Outline Background Issues Test strategies Testability Test Automation
Conclusions and future work

16 Software Characteristics
No matter how rigorous we are, software is going to be faulty
No exhaustive testing is possible: based on incomplete testing, we must gain confidence that the system has the desired behavior
Small differences in operating conditions may result in dramatically different behavior: software has no continuity property
Dependability needs vary

17 Testing Requirements
Effective at uncovering faults
Help locate faults for debugging
Repeatable, so that a precise understanding of the fault can be gained and corrections can be checked
Automated, so as to lower the cost and timescale
Systematic, so as to be predictable

18 Our Focus Test strategies: How to systematically test software?
Testability: What can be done to ease testing? Test Automation: What makes test automation possible?

19 Outline Background Issues Test strategies & Their Empirical Assessment
Testability Test Automation Conclusions and research

20 Test Coverage
A test strategy pairs a software representation (model) with associated coverage criteria: test cases must cover all the … in the model, and test data are derived accordingly
A representation of the specification → black-box testing
A representation of the implementation → white-box testing

21 Empirical Testing Principle
Impossible to determine consistent and complete test criteria from theory
Exhaustive testing cannot be performed in practice
Therefore we need test strategies that have been empirically investigated
A significant test case is a test case with high error-detection potential – it increases our confidence in the program's correctness
The goal is to run a sufficient number of significant test cases – that number should be as small as possible

22 Empirical Methods
Controlled experiments (e.g., in university settings): high control over the application of techniques; small systems & tasks
Case studies (e.g., on industrial projects): realism; practical issues, little control
Simulations: large numbers of test sets can be generated; more refined analysis (statistical variation); difficult to automate, validity?

23 Test Evaluation based on Mutant Programs
Take a program and test data generated for that program
Create a number of similar programs (mutants), each differing from the original in one small way, i.e., each possessing a fault – e.g., replace an addition operator by a multiplication operator
The test data are then run through the mutants
If the test data expose a difference between a mutant and the original, the mutant is said to be dead (killed); otherwise it is live
A mutant remains live either because it is equivalent to the original program (functionally identical though syntactically different – an equivalent mutant) or because the test set is inadequate to kill it
Evaluation is in terms of the mutation score: the proportion of mutants killed
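A minimal sketch of this process (toy program and mutants of my own, not from the talk): each mutant is run against the test data, and the mutation score is the fraction of mutants killed. Live mutants must still be inspected by hand to decide whether they are equivalent.

```python
# Mutation-testing sketch: an original program, three mutants (each a
# single small change), and a mutation score over a test set.

def original(a, b):
    return a + b

mutants = [
    lambda a, b: a * b,  # '+' replaced by '*'
    lambda a, b: a - b,  # '+' replaced by '-'
    lambda a, b: b + a,  # equivalent mutant: syntactically different,
                         # functionally identical -- can never be killed
]

def mutation_score(test_data):
    """Fraction of mutants whose output differs from the original on at
    least one test input, i.e., mutants killed by the test set."""
    killed = sum(
        1 for m in mutants
        if any(m(a, b) != original(a, b) for a, b in test_data)
    )
    return killed / len(mutants)
```

Note how the score exposes weak test data: the input (2, 2) kills only the subtraction mutant (since 2 + 2 == 2 * 2), while (2, 3) kills both non-equivalent mutants – a "significant test case" in the sense of the previous slide.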

24 Simulation Process 1

25 Simulation Process 2

26 Cruise Control System
This is the flattened statechart of a *system*, not a class. The techniques shown here are not specific to class testing; however, in OO systems you have many state-dependent classes that need to be tested independently using their statecharts. This statechart has events without parameters and no guard conditions, which makes it easy to use with the transition-tree technique. Flattened statecharts, though they can become very complex, are not meant to be visualized but analyzed for testing purposes.

27 Transition Tree: Cover all Round-trip paths
Transition tree for the flattened statechart
Breadth-first graph traversal
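The construction can be sketched as follows (a toy three-state machine of my own, not the cruise-control statechart): traverse the statechart breadth-first from the initial state, ending each branch as soon as it reaches a state that has already been expanded. The resulting paths are the round-trip test paths.

```python
from collections import deque

# Toy flattened statechart: state -> {event: target state}
transitions = {
    "Off":      {"power_on": "Idle"},
    "Idle":     {"engage": "Cruising", "power_off": "Off"},
    "Cruising": {"brake": "Idle", "power_off": "Off"},
}

def transition_tree(initial):
    """Breadth-first traversal; a branch ends when its state has
    already been expanded, yielding one round-trip test path."""
    paths, queue, expanded = [], deque([(initial, [initial])]), set()
    while queue:
        state, path = queue.popleft()
        if state in expanded:
            paths.append(path)  # branch ends here: one test case
            continue
        expanded.add(state)
        for event, target in transitions[state].items():
            queue.append((target, path + [f"-{event}->", target]))
    return paths
```

Each returned path is a sequence of events to fire from the initial state, with the expected states along the way serving as the oracle.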

28 Transition Tree: Simulation Results

29 Comparing Criteria

30 Outline Background Issues Test strategies Testability Test Automation
Conclusions and future work

31 Testability
Two dimensions to testability:
Controllability: the ability to put an object in a chosen state (e.g., by a test driver) and to exercise its operations with input data
Observability: the ability to observe the outputs produced in response to a supplied test input sequence (where outputs may denote not only the values returned by an operation, but also any other effect on the object's environment: calls to distant features, commands sent to actuators, deadlocks …)
These dimensions determine the cost, error-proneness, and effectiveness of testing
The definitions above are tailored to an OO design context

32 Basic Techniques
Get/set methods in class interfaces
Assertions checked at run time: state/class invariants, preconditions, postconditions
Equality methods: provide the ability to report whether two objects are equal – not as simple as it seems …
Message-sequence checking methods: detect run-time violations of the class's state specifications
Testability depends in part on coding standards, design practice, and the availability of code instrumentation and analysis tools
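Several of these techniques fit in a few lines. The sketch below (an illustrative class of my own, not from the slides) combines a state-inspection method for observability, a run-time class invariant, pre- and postconditions, and an equality method.

```python
# Testability sketch: a class instrumented with the basic techniques.

class Account:
    def __init__(self, balance: int = 0):
        self._balance = balance
        self._check_invariant()

    def _check_invariant(self):
        # Class invariant, checked at run time after each state change.
        assert self._balance >= 0, "invariant violated: negative balance"

    def get_balance(self) -> int:
        # Get method: makes internal state observable to a test driver.
        return self._balance

    def withdraw(self, amount: int):
        assert 0 < amount <= self._balance, "precondition violated"
        old = self._balance
        self._balance -= amount
        assert self._balance == old - amount, "postcondition violated"
        self._check_invariant()

    def __eq__(self, other):
        # Equality method: lets the oracle compare objects directly.
        return isinstance(other, Account) and self._balance == other._balance
```

With assertions enabled, a faulty test sequence fails at the infection point rather than propagating silently, which is exactly the early-detection benefit discussed on the next slide.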

33 Early Fault Detection and Diagnosis
[Figure, Baudry et al., 2001: compares classical software with software designed by contract – an infection point (the global program state is faulty after this point) propagates to a failure produced at the output; a contract can handle the exception earlier, narrowing the diagnosis scope (an execution thread, or concurrent execution threads) and enabling exception treatment (diagnosis and default mode).]

34 Ocl2j*: An AOP-based Approach
Stage 1 – contract code generation: the ocl2j tool generates an Ocl2j aspect from the UML model
Stage 2 – program instrumentation: the AspectJ compiler weaves the aspect into the program bytecode, producing instrumented bytecode
* Developed at Carleton University, SQUALL

35 Contract Assertions and Debugging

36 Outline Background Issues Test strategies Testability Test Automation
Conclusions and future work

37 Objectives
Test plans should be derived from specification & design documents
This helps avoid errors in the test planning process and helps uncover problems in the specification & design
With additional code analysis and suitable coding standards, test drivers can eventually be derived automatically
There is a direct link between the quality of specifications & design and the testability of the system
Test automation may be an additional motivation for model-driven development (e.g., UML-based)

38 Performance Stress Testing
Performance stress testing: to automate, based on the system task architecture, the derivation of test cases that maximize the chances of critical deadline misses within real-time systems
The method combines external aperiodic events (ones that are part of the interface of the software system under test, i.e., triggered by users, other software systems, or sensors) and internally generated system events (events triggered by external events and hidden from the outside of the software system) with a genetic algorithm, to automatically derive test cases – arrival times for the external aperiodic events – such that the chances of missing deadlines are maximized
[Figure: a timeline of periodic tasks and aperiodic event arrivals; a test case assigns arrival times to the aperiodic events, evolved by the genetic algorithm.]
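A heavily simplified sketch of the idea (the busy intervals, execution time, deadline, and fitness function below are toy stand-ins, not the real-time scheduling analysis from the talk): a genetic algorithm evolves the arrival times of two aperiodic events so that their simulated lateness – completion time minus deadline – is maximized.

```python
import random

random.seed(0)
BUSY = [(0, 4), (10, 14)]  # intervals where periodic tasks hold the CPU
EXEC, DEADLINE = 2, 5      # aperiodic execution time and relative deadline

def fitness(arrivals):
    """Total lateness; higher means closer to (or past) a deadline miss."""
    lateness = 0.0
    for t in arrivals:
        # An event arriving while the CPU is busy waits for the interval end.
        start = max((end for s, end in BUSY if s <= t < end), default=t)
        lateness += (start + EXEC) - (t + DEADLINE)
    return lateness

def evolve(pop_size=20, generations=60):
    """Tiny GA: elitist selection, uniform crossover, Gaussian mutation."""
    pop = [[random.uniform(0, 15) for _ in range(2)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        offspring = [
            [random.choice(survivors)[i] + random.gauss(0, 0.5)
             for i in range(2)]
            for _ in range(pop_size - len(survivors))
        ]
        pop = survivors + offspring
    return max(pop, key=fitness)
```

The search pushes the arrivals toward the starts of the busy intervals, where the queueing delay is longest – exactly the "worst-case arrival pattern" a stress test is after.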

39 Optimal Integration Orders
Briand and Labiche use genetic algorithms to identify optimal integration orders (minimizing stubbing effort) in OO systems
Most classes in OO systems have dependency cycles, sometimes many of them
The integration order has a huge impact on the integration cost: the cost of stubbing classes
How to decide on an optimal integration order? This is a combinatorial optimization (under constraints) problem; solutions for the TSP cannot be reused verbatim
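The objective being minimized can be made concrete with a toy dependency graph (illustrative classes, not the Jakarta ANT case study): integrating a class before its suppliers forces one stub per missing supplier. On three classes, exhaustive search stands in for the GA.

```python
import itertools

# Toy class-dependency graph: class -> classes it depends on.
deps = {
    "A": {"B", "C"},
    "B": {"C"},
    "C": {"A"},  # dependency cycle: A -> B -> C -> A
}

def stub_cost(order):
    """Stubs needed: for each class integrated, count its dependencies
    on classes that have not been integrated yet."""
    integrated, cost = set(), 0
    for cls in order:
        cost += len(deps[cls] - integrated)
        integrated.add(cls)
    return cost

# Exhaustive search over all orders; the GA replaces this when the
# number of classes makes enumeration infeasible.
best = min(itertools.permutations(deps), key=stub_cost)
```

With 45 classes there are 45! orders, which is why a meta-heuristic is needed; the fitness function, however, is exactly this stub count (possibly weighted by stub complexity).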

40 Example: Jakarta ANT

41 Results
We obtain, most of the time, (near-)optimal orders, i.e., orders that minimize stubbing effort
The GA can handle, with reasonable results, the most complex cases we have been able to find (e.g., 45 classes, 294 dependencies, > dependency cycles)
The GA approach is flexible in the sense that it is easy to tailor the objective/fitness function, add new constraints on the order, etc.
Evolver is the tool we used

42 Further Automation
Meta-heuristic algorithms: genetic algorithms, simulated annealing
Generate test data based on constraints: structural testing, fault-based testing, testing exception conditions
Analyze specifications (e.g., contracts) for specification flaws (e.g., inputs that satisfy the precondition and violate the postcondition)
SEMINAL: special issue of IST
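To give a flavor of constraint-driven test-data generation (a toy target branch of my own, not an example from the talk): a hill climb minimizes a "branch distance" until it finds an input that drives execution down a hard-to-reach branch.

```python
import random

random.seed(1)

def branch_distance(x: int) -> int:
    """0 iff the target branch `if x * 2 == 74:` would be taken."""
    return abs(x * 2 - 74)

def hill_climb(start: int = 0, steps: int = 500) -> int:
    """Accept a random unit-step neighbor whenever it does not increase
    the branch distance; stop as soon as the branch is covered."""
    x = start
    for _ in range(steps):
        if branch_distance(x) == 0:
            break
        neighbor = x + random.choice([-1, 1])
        if branch_distance(neighbor) <= branch_distance(x):
            x = neighbor
    return x
```

Genetic algorithms and simulated annealing swap in fancier neighborhood moves and acceptance rules, but the fitness-guided search loop is the same.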

43 Conclusions
There are many opportunities to apply optimization and search techniques to help test automation
Devising cost-effective testing techniques requires experimental research
Achieving high testability requires: good analysis and instrumentation tools; good specification and design practices

44 Thank you. Questions? (en français or in English)
