
1 CUSEC 2004: Ensuring the Dependability of Software Systems
Dr. Lionel Briand, P. Eng.
Canada Research Chair (Tier I)
Software Quality Engineering Lab., Carleton University, Ottawa

2 Carleton SE Programs
– Accredited B. Eng. in Software Engineering
  – Full course on verification and validation
  – Full course on software quality management
– Graduate studies
  – SQUALL lab
  – Supported by a CRC chair

3 Objectives
– Overview
– Main practical issues
– Focus on testing
– Current solutions and research
– Future research agenda

4 Outline
– Background
– Issues
– Test strategies
– Testability
– Test Automation
– Conclusions and future work

5 Outline
– Background
– Issues
– Test strategies
– Testability
– Test Automation
– Conclusions and future work

6 Dependability
Dependability: correctness, reliability, safety, robustness
– Correct but not safe or robust: the specification is inadequate
– Reliable but not correct: failures happen rarely
– Safe but not correct: annoying failures may happen
– Robust but not safe: catastrophic failures are possible

7 Improving Dependability
Fault handling breaks down into:
– Fault avoidance: design methodology, inspections, verification, configuration management
– Fault detection: testing (component testing, integration testing, system testing) and debugging (correctness debugging, performance debugging)
– Fault tolerance: atomic transactions, modular redundancy

8 Testing Process Overview
[Figure: tests are derived from a SW representation and run against the SW code; the actual results are compared with the expected results produced by an oracle]

9 Many Causes of Failures
– The specification may be wrong or have a missing requirement
– The specification may contain a requirement that is impossible to implement given the prescribed software and hardware
– The system design may contain a fault
– The program code may be wrong

10 Testing Phases (Pfleeger, 1998)
[Figure: component code goes through unit test (against design descriptions) to become tested components; integration test yields integrated modules; function test against system functional specifications yields a functioning system; performance test against other software specifications yields verified, validated software; acceptance test against customer requirements yields an accepted system; installation test in the user environment puts the SYSTEM IN USE!]

11 Practice
– No systematic test strategies
– Very basic tools (e.g., capture and replay of test executions)
– No clear test processes with explicit objectives
– Poor testability
– But a substantial part of the development effort (between 30% and 50%) is spent on testing
SE must become an engineering practice

12 Ariane 5 – ESA Launcher

13 Ariane 5 – Root Cause
Source: ARIANE 5 Flight 501 Failure, Report by the Inquiry Board
A program segment for converting a floating-point number to a signed 16-bit integer was executed with an input data value outside the range representable by a signed 16-bit integer. This run-time error (out of range, overflow) arose in both the active and the backup computers at about the same time; it was detected and both computers shut themselves down. This resulted in the total loss of attitude control. The Ariane 5 turned uncontrollably and aerodynamic forces broke the vehicle apart. The breakup was detected by an on-board monitor, which ignited the explosive charges to destroy the vehicle in the air. Ironically, the result of this format conversion was no longer needed after lift-off.
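To make the failure mode concrete, here is a minimal Java sketch. The flight software was actually written in Ada, where the unchecked conversion raised an exception; in Java the cast wraps silently instead. All names and values below are illustrative, not from the report:

```java
public class ConversionDemo {
    // Unchecked narrowing: values outside [-32768, 32767] wrap around
    // silently in Java; in Ada the equivalent conversion raised an error
    // that, unhandled, shut the computer down.
    static short toSigned16(double value) {
        return (short) value;
    }

    // A guarded version: an explicit precondition turns silent corruption
    // into a detectable, handleable error.
    static short toSigned16Checked(double value) {
        if (value < Short.MIN_VALUE || value > Short.MAX_VALUE) {
            throw new ArithmeticException("out of 16-bit range: " + value);
        }
        return (short) value;
    }

    public static void main(String[] args) {
        // Ariane 5's higher horizontal velocity produced values too large
        // for 16 bits; 40000.0 is an illustrative stand-in.
        System.out.println(toSigned16(40000.0));        // prints -25536
        System.out.println(toSigned16Checked(40000.0)); // throws
    }
}
```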

14 Ariane 5 – Lessons Learned
– Rigorous reuse procedures, including usage-based testing (based on operational profiles)
– Adequate exception handling strategies (backup, degraded procedures?)
– Clear, complete, documented specifications (e.g., preconditions, post-conditions)
Note this was not a complex computing problem, but a deficiency of the software engineering practices in place …

15 Outline
– Background
– Issues
– Test strategies
– Testability
– Test Automation
– Conclusions and future work

16 Software Characteristics
– No matter how rigorous we are, software is going to be faulty
– No exhaustive testing is possible: based on incomplete testing, we must gain confidence that the system has the desired behavior
– Small differences in operating conditions can result in dramatically different behavior: there is no continuity property
– Dependability needs vary

17 Testing Requirements
– Effective at uncovering faults
– Help locate faults for debugging
– Repeatable, so that a precise understanding of the fault can be gained and corrections can be checked
– Automated, so as to lower the cost and timescale
– Systematic, so as to be predictable

18 Our Focus
– Test strategies: How to systematically test software?
– Testability: What can be done to ease testing?
– Test Automation: What makes test automation possible?

19 Outline
– Background
– Issues
– Test strategies & Their Empirical Assessment
– Testability
– Test Automation
– Conclusions and research

20 Test Coverage
A software representation (model) and its associated criteria determine the test data: test cases must cover all the … in the model.
– If the model represents the specification: black-box testing
– If the model represents the implementation: white-box testing
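As a minimal white-box illustration (hypothetical code, not from the slides), take the implementation's control flow as the model and "cover all branches" as the criterion:

```java
public class CoverageDemo {
    // Hypothetical unit under test with three branches.
    static int clamp(int x, int lo, int hi) {
        if (x < lo) return lo;
        if (x > hi) return hi;
        return x;
    }

    public static void main(String[] args) {
        // Each input drives execution down a different branch of the
        // control-flow graph; together they satisfy branch coverage.
        assert clamp(-5, 0, 10) == 0;  // takes the x < lo branch
        assert clamp(15, 0, 10) == 10; // takes the x > hi branch
        assert clamp(5, 0, 10) == 5;   // takes the in-range branch
        System.out.println("all branches covered (run with java -ea)");
    }
}
```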

21 Empirical Testing Principle
– It is impossible to determine consistent and complete test criteria from theory
– Exhaustive testing cannot be performed in practice
– Therefore we need test strategies that have been empirically investigated
– A significant test case is a test case with high error detection potential: it increases our confidence in the program's correctness
– The goal is to run a sufficient number of significant test cases, and that number should be as small as possible

22 Empirical Methods
Controlled experiments (e.g., in university settings)
+ High control on application of techniques
– Small systems & tasks
Case studies (e.g., on industrial projects)
+ Realism
– Practical issues, little control
Simulations
+ Large number of test sets can be generated
+ More refined analysis (statistical variation)
– Difficult to automate, validity?

23 Test Evaluation Based on Mutant Programs
– Take a program and test data generated for that program
– Create a number of similar programs (mutants), each differing from the original in one small way, i.e., each possessing a fault (e.g., replace the addition operator by the multiplication operator)
– The test data are then run through the mutants
– If the test data detect differences in a mutant's behavior, the mutant is said to be dead; otherwise it is live
– A mutant remains live either because it is equivalent to the original program (functionally identical though syntactically different: an equivalent mutant) or because the test set is inadequate to kill it
– Evaluation is in terms of the mutation score
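A minimal Java sketch of the idea, using the slide's own operator swap (illustrative code, not from the presentation):

```java
public class MutationDemo {
    static int addOriginal(int a, int b) { return a + b; }
    static int addMutant(int a, int b)   { return a * b; } // seeded fault: + replaced by *

    public static void main(String[] args) {
        // Weak test data: 2 + 2 == 2 * 2, so this input cannot tell the
        // two programs apart and the mutant stays live.
        System.out.println(addOriginal(2, 2) == addMutant(2, 2)); // true: mutant live

        // Stronger test data: the outputs differ, so the mutant is killed.
        System.out.println(addOriginal(2, 3) == addMutant(2, 3)); // false: mutant killed
    }
}
```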

24 Simulation Process 1

25 Simulation Process 2

26 Cruise Control System

27 Transition Tree: Cover all Round-trip Paths

28 Transition Tree: Simulation Results

29 Comparing Criteria

30 Outline
– Background
– Issues
– Test strategies
– Testability
– Test Automation
– Conclusions and future work

31 Testability
– Controllability: the ability to put an object in a chosen state (e.g., by a test driver) and to exercise its operations with input data
– Observability: the ability to observe the outputs produced in response to a supplied test input sequence (where outputs may denote not only the values returned by an operation, but also any other effect on the object's environment: calls to distant features, commands sent to actuators, deadlocks …)
These dimensions determine the cost, error-proneness, and effectiveness of testing

32 Basic Techniques
– Get/set methods in class interfaces
– Assertions checked at run time
  – State / class invariants
  – Pre-conditions
  – Post-conditions
– Equality methods: provide the ability to report whether two objects are equal (not as simple as it seems …)
– Message sequence checking methods: detect run-time violations of the class's state specifications
Testability depends in part on
– Coding standards
– Design practice
– Availability of code instrumentation and analysis tools
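A minimal sketch combining several of these techniques on a hypothetical class (not from the slides; assertions require running the JVM with -ea):

```java
public class Account {
    private long balance; // in cents; class invariant: balance >= 0

    private void checkInvariant() {
        assert balance >= 0 : "class invariant violated: negative balance";
    }

    public void withdraw(long amount) {
        assert amount > 0 && amount <= balance : "precondition violated";
        long old = balance;
        balance -= amount;
        assert balance == old - amount : "postcondition violated";
        checkInvariant();
    }

    // Get/set methods give a test driver controllability and observability.
    public long getBalance() { return balance; }
    public void setBalance(long b) { balance = b; checkInvariant(); }

    // "Not as simple as it seems": equality must handle identity, null,
    // and type before comparing state, and stay consistent with hashCode.
    @Override public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Account)) return false;
        return balance == ((Account) o).balance;
    }
    @Override public int hashCode() { return Long.hashCode(balance); }

    public static void main(String[] args) {
        Account a = new Account();
        a.setBalance(500);  // controllability: put the object in a chosen state
        a.withdraw(200);
        System.out.println(a.getBalance()); // observability: prints 300
    }
}
```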

33 Early Fault Detection and Diagnosis (Baudry et al., 2001)
[Figure: in classical software, the global program state becomes faulty at an infection point but the failure is only produced at the output, so the diagnosis scope spans an execution thread, or even concurrent execution threads. In designed-by-contract software, a violated contract raises an exception close to the infection point, with diagnosis and a default mode as exception treatment, which narrows the diagnosis scope]

34 Ocl2j*: An AOP-based Approach
– Stage 1, contract code generation: the ocl2j tool derives contract code (an Ocl2jAspect) from the UML model
– Stage 2, program instrumentation: the AspectJ compiler weaves the aspect into the program bytecode, producing instrumented bytecode
* Developed at Carleton University, SQUALL
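To give a flavor of the approach, here is a hypothetical AspectJ sketch, reusing the Account class from the earlier testability example. The OCL constraint and all names are invented for illustration; this is not the tool's actual output:

```aspectj
// Assumed OCL constraint (illustrative):
//   context Account::withdraw(amount : Integer)
//   pre: amount > 0 and amount <= self.balance
public aspect WithdrawContract {
    before(Account acc, long amount):
            execution(void Account.withdraw(long)) && target(acc) && args(amount) {
        if (!(amount > 0 && amount <= acc.getBalance())) {
            // The contract violation surfaces at the call site, close to
            // the infection point, instead of at the program's output.
            throw new RuntimeException("precondition violated: Account::withdraw");
        }
    }
}
```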

35 Contract Assertions and Debugging

36 Outline
– Background
– Issues
– Test strategies
– Testability
– Test Automation
– Conclusions and future work

37 Objectives
– Test plans should be derived from specification & design documents
– This helps avoid errors in the test planning process and helps uncover problems in the specification & design
– With additional code analysis and suitable coding standards, test drivers can eventually be automatically derived
– There is a direct link between the quality of specifications & design and the testability of the system
– Test automation may be an additional motivation for model-driven development (e.g., UML-based)

38 Performance Stress Testing
Performance stress testing: automating, based on the system task architecture, the derivation of test cases that maximize the chances of critical deadline misses within real-time systems.
[Figure: the system handles periodic tasks and aperiodic events (Event 1, Event 2); a genetic algorithm searches for the event arrival times that form the most stressful test case]
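A toy sketch of the underlying idea; the task model, all numbers, and the use of random search in place of a full GA are assumptions for illustration. The search variable is the arrival times of the aperiodic events, and the fitness rewards schedules that push a periodic task toward, or past, its deadline:

```java
import java.util.Random;

public class StressSearch {
    // Toy model: a periodic task (period 10, execution time 4, deadline 10)
    // preempted by aperiodic events that each cost 5 time units.
    static final double PERIOD = 10, EXEC = 4, DEADLINE = 10, EVENT_COST = 5;

    // Completion time of the job released at 'release', accounting for
    // events that arrive while it is still running (fixpoint iteration).
    static double completion(double release, double[] events) {
        double c = release + EXEC;
        for (boolean changed = true; changed; ) {
            changed = false;
            double interference = 0;
            for (double e : events)
                if (e >= release && e < c) interference += EVENT_COST;
            double next = release + EXEC + interference;
            if (next > c) { c = next; changed = true; }
        }
        return c;
    }

    // Fitness: worst lateness across the first three jobs; > 0 is a miss.
    static double fitness(double[] events) {
        double worst = Double.NEGATIVE_INFINITY;
        for (int k = 0; k < 3; k++) {
            double release = k * PERIOD;
            worst = Math.max(worst, completion(release, events) - (release + DEADLINE));
        }
        return worst;
    }

    public static void main(String[] args) {
        Random rnd = new Random(1);
        double[] best = null;
        for (int i = 0; i < 10_000; i++) { // random search stands in for the GA
            double[] cand = { rnd.nextDouble() * 30, rnd.nextDouble() * 30 };
            if (best == null || fitness(cand) > fitness(best)) best = cand;
        }
        System.out.printf("worst lateness %.2f at events (%.2f, %.2f)%n",
                fitness(best), best[0], best[1]);
    }
}
```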

39 Optimal Integration Orders
– Briand and Labiche use genetic algorithms to identify optimal integration orders (minimizing stubbing effort) in OO systems
– Most classes in OO systems have dependency cycles, sometimes many of them
– The integration order has a huge impact on the integration cost: the cost of stubbing classes
– How to decide on an optimal integration order? This is a combinatorial optimization (under constraints) problem; solutions for the TSP cannot be reused verbatim (see the sketch below)
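A self-contained toy sketch of the cost function such a search optimizes. The four-class dependency graph is invented, and exhaustive enumeration stands in for the GA, which becomes necessary once there are dozens of classes and many cycles:

```java
import java.util.*;

public class IntegrationOrderSearch {
    // Toy dependency graph with a cycle (0 -> 1 -> 2 -> 0): DEPS[i] lists
    // the classes that class i depends on. Purely illustrative data.
    static final int[][] DEPS = { {1}, {2}, {0, 3}, {} };

    // Stubbing cost of an integration order: each dependency on a class
    // that has not yet been integrated requires writing a stub.
    static int stubCost(List<Integer> order) {
        Set<Integer> done = new HashSet<>();
        int cost = 0;
        for (int cls : order) {
            for (int dep : DEPS[cls]) if (!done.contains(dep)) cost++;
            done.add(cls);
        }
        return cost;
    }

    public static void main(String[] args) {
        List<Integer> best = null;
        for (List<Integer> order : permutations(List.of(0, 1, 2, 3))) {
            if (best == null || stubCost(order) < stubCost(best)) best = order;
        }
        // The cycle guarantees at least one stub in any order.
        System.out.println("best order " + best + ", stubs needed: " + stubCost(best));
    }

    static List<List<Integer>> permutations(List<Integer> items) {
        List<List<Integer>> out = new ArrayList<>();
        permute(new ArrayList<>(items), 0, out);
        return out;
    }
    static void permute(List<Integer> a, int k, List<List<Integer>> out) {
        if (k == a.size()) { out.add(new ArrayList<>(a)); return; }
        for (int i = k; i < a.size(); i++) {
            Collections.swap(a, k, i);
            permute(a, k + 1, out);
            Collections.swap(a, k, i);
        }
    }
}
```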

40 Example: Jakarta ANT

41 Results
– We obtain, most of the time, (near) optimal orders, i.e., orders that minimize stubbing effort
– The GA can handle, with reasonable results, the most complex cases we have been able to find (e.g., 45 classes, 294 dependencies, > dependency cycles)
– The GA approach is flexible in the sense that it is easy to tailor the objective/fitness function, add new constraints on the order, etc.

42 Further Automation
– Meta-heuristic algorithms: genetic algorithms, simulated annealing
– Generate test data based on constraints
  – Structural testing
  – Fault-based testing
  – Testing exception conditions
– Analyze specifications (e.g., contracts)
  – Specification flaws (satisfy the precondition and violate the postcondition)

43 Conclusions
– There are many opportunities to apply optimization and search techniques to help test automation
– Devising cost-effective testing techniques requires experimental research
– Achieving high testability requires:
  – Good analysis and instrumentation tools
  – Good specification and design practices

44 Thank you. Questions? (in French or English)
