Ensuring the Dependability of Software Systems

Ensuring the Dependability of Software Systems Dr. Lionel Briand, P. Eng. Canada Research Chair (Tier I) Software Quality Engineering Lab. Carleton University, Ottawa

Carleton SE Programs Accredited B. Eng. in Software Engineering Full course on verification and validation Full course on software quality management Graduate studies SQUALL lab: http://www.sce.carleton.ca/Squall/ Supported by a CRC chair

Objectives Overview Main practical issues Focus on testing Current solutions and research Future research agenda

Outline Background Issues Test strategies Testability Test Automation Conclusions and future work

Outline Background Issues Test strategies Testability Test Automation Conclusions and future work

Dependability Dependability: Correctness, reliability, safety, robustness Correct but not safe or robust: the specification is inadequate Reliable but not correct: failures happen rarely Safe but not correct: annoying failures may happen Robust but not safe: catastrophic failures are possible

Improving Dependability Fault Handling Fault Avoidance Fault Detection Fault Tolerance Design Methodology Inspections Atomic Transactions Modular Redundancy Verification Configuration Management Testing Debugging Component Testing Integration Testing System Testing Correctness Debugging Performance Debugging

Testing Process Overview Producing test cases and an oracle is difficult and time consuming. The oracle is a key concept in testing. Testing is also very dependent on the representation's form and content, e.g., the specification formalism. (Diagram: test cases are derived from a SW representation and run against the SW code; the oracle compares the actual results with the expected results.)
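The compare-with-oracle step on this slide can be sketched as follows. This is an illustrative Python sketch, not part of the original talk: `buggy_abs` is a hypothetical program under test with a seeded fault, and the perfect `oracle` is a stand-in, since in practice obtaining the oracle is precisely the hard part the slide points out.

```python
def buggy_abs(x):
    # hypothetical program under test, with a seeded fault:
    # wrong result for zero and negative inputs
    return x if x > 0 else x + 1

def oracle(x):
    # stand-in oracle giving the expected result; in practice
    # an oracle this cheap and reliable is rarely available
    return abs(x)

def run_tests(program, oracle, test_inputs):
    """Return the inputs for which actual and expected results differ."""
    return [x for x in test_inputs if program(x) != oracle(x)]
```

For example, `run_tests(buggy_abs, oracle, [5, 0, -3])` reports the failing inputs `0` and `-3`, while a correct program yields an empty failure list.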

Many Causes of Failures The specification may be wrong or have a missing requirement The specification may contain a requirement that is impossible to implement given the prescribed software and hardware The system design may contain a fault The program code may be wrong Different testing activities look at these different causes – see next slide

Typical test activities, inputs, and outputs in the life cycle (Pfleeger, Software Engineering: Theory and Practice, 1998). (Diagram: component code feeds unit tests, producing tested components; tested components feed integration tests, producing integrated modules; design descriptions, system functional specifications, other software specifications, the user environment, and customer requirements drive the function, performance, acceptance, and installation tests, producing a functioning system, verified and validated software, an accepted system, and finally the system in use.) Test planning and preparation should happen early in the life cycle. Common terminology: system test = function + performance test.

Practice No systematic test strategies. Very basic tools (e.g., capture and replay of test executions). No clear test processes with explicit objectives. Poor testability. Yet a substantial part of the development effort (between 30% and 50%) is spent on testing. SE must become an engineering practice.

Ariane 5 – ESA Launcher This is the take-off of flight 501, French Guyana, 1996. That beautiful piece of engineering was destroyed by, let's be frank, engineering neglect. Software was assumed to be easy … This was a very visible software defect – most defects are not that visible … but they are common. There are many books filled with such stories.

Ariane 5 – Root Cause Source: ARIANE 5 Flight 501 Failure, Report by the Inquiry Board A program segment for converting a floating point number to a signed 16-bit integer was executed with an input data value outside the range representable by a signed 16-bit integer. This run-time error (out of range, overflow), which arose in both the active and the backup computers at about the same time, was detected and both computers shut themselves down. This resulted in the total loss of attitude control. The Ariane 5 turned uncontrollably and aerodynamic forces broke the vehicle apart. This breakup was detected by an on-board monitor which ignited the explosive charges to destroy the vehicle in the air. Ironically, the result of this format conversion was no longer needed after lift-off. PRE: -32768 <= x <= +32767, POST: y=int(x) It would seem likely that the programmer of the subprogram for converting a floating point number to a signed 16-bit number realized that the value of the floating point number to be converted must lie within a restricted range, namely the range of values which (after truncating or rounding to an integer) can be represented as a signed 16-bit number. Such restrictions on input values (preconditions for subprograms) were, however, neither systematically derived, documented nor followed back to determine corresponding restrictions on other values computed earlier [1]. The SRI was no longer needed for Ariane 5 when it failed … As a result, because there was no exception handling or back-up procedure, when the flight system realized the data from the SRI were flawed, the whole flight system shut down.
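The offending conversion, with the slide's precondition made explicit and checked, might look like the following minimal sketch. The function name and the use of Python are illustrative only (the actual flight software was Ada); the point is that the precondition PRE: -32768 <= x <= +32767 is verified before conversion and a violation becomes a handled exception rather than an unhandled overflow.

```python
def to_int16(x: float) -> int:
    """Convert a float to a signed 16-bit integer.

    PRE:  -32768 <= x <= +32767 (checked here, unlike in the Ariane code)
    POST: result == int(x) (truncation toward zero)
    """
    if not (-32768.0 <= x <= 32767.0):
        # explicit, catchable violation instead of an unhandled overflow
        raise OverflowError(f"{x} is outside the signed 16-bit range")
    return int(x)
```

A caller can then decide on a backup or degraded procedure when the exception is raised, instead of shutting the whole system down.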

Ariane 5 – Lessons Learned Rigorous reuse procedures, including usage-based testing (based on operational profiles). Adequate exception handling strategies (backup, degraded procedures?). Clear, complete, documented specifications (e.g., preconditions, post-conditions). Note this was not a complex computing problem, but a deficiency of the software engineering practices in place … There are several ways to look at the problem: it could have been prevented or detected in several ways.

Outline Background Issues Test strategies Testability Test Automation Conclusions and future work

Software Characteristics No matter how rigorous we are, software is going to be faulty. No exhaustive testing is possible: based on incomplete testing, we must gain confidence that the system has the desired behavior. Small differences in operating conditions may result in dramatically different behavior: software has no continuity property. Dependability needs vary.

Testing Requirements Effective at uncovering faults Help locate faults for debugging Repeatable so that a precise understanding of the fault can be gained and corrections can be checked Automated so as to lower the cost and timescale Systematic so as to be predictable

Our Focus Test strategies: How to systematically test software? Testability: What can be done to ease testing? Test Automation: What makes test automation possible?

Outline Background Issues Test strategies & Their Empirical Assessment Testability Test Automation Conclusions and future work

Test Coverage A software representation (model) comes with associated criteria: test cases must cover all the … in the model, and the criteria drive the test data. A representation of the specification  black-box testing; a representation of the implementation  white-box testing.

Empirical Testing Principle Impossible to determine consistent and complete test criteria from theory Exhaustive testing cannot be performed in practice Therefore we need test strategies that have been empirically investigated A significant test case is a test case with high error detection potential – it increases our confidence in the program correctness The goal is to run a sufficient number of significant test cases – that number should be as small as possible

Empirical Methods Controlled Experiments (e.g., in university settings) High control on application of techniques Small systems & tasks Case studies (e.g., on industrial projects) Realism Practical issues, little control Simulations Large number of test sets can be generated More refined analysis (statistical variation) Difficult to automate, validity?

Test Evaluation based on Mutant Programs Take a program and test data generated for that program Create a number of similar programs (mutants), each differing from the original in one small way, i.e., each possessing a fault E.g., replace addition operator by multiplication operator The test data are then run through the mutants If test data detect differences in mutants, then the mutants are said to be dead, otherwise live. A mutant remains live either because it is equivalent to the original program (functionally identical though syntactically different – equivalent mutant) or the test set is inadequate to kill the mutant Evaluation in terms of mutation score
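The mutant evaluation described above can be sketched as follows, using the slide's own example of replacing an addition operator by a multiplication operator. This is an illustrative Python sketch (the talk itself shows no code): `original`, `mutant`, and `is_killed` are hypothetical names for the original program, one seeded mutant, and the dead/live check.

```python
def original(a, b):
    return a + b

def mutant(a, b):
    return a * b   # seeded fault: addition operator replaced by multiplication

def is_killed(test_set):
    """A mutant is dead if at least one test input distinguishes it from
    the original program; otherwise it remains live, either because the
    mutant is equivalent or because the test set is inadequate."""
    return any(original(a, b) != mutant(a, b) for a, b in test_set)
```

Note that the test set `[(2, 2)]` leaves this mutant live (2 + 2 == 2 * 2), while `[(2, 3)]` kills it; the mutation score is the fraction of non-equivalent mutants killed.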

Simulation Process 1

Simulation Process 2

Cruise Control System This is the flattened statechart of a *system*, not a class – the techniques shown here are not specific to class testing. However, in OO systems you have many state-dependent classes that need to be tested independently using their statecharts. This statechart has events without parameters and no guard conditions, which makes it easy to use with the transition tree technique. Flattened statecharts, though they can become very complex, are not meant to be visualized, but analyzed for testing purposes.

Transition Tree: Cover All Round-trip Paths Transition tree for the flattened statechart, built by breadth-first graph traversal.
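The breadth-first construction of the transition tree can be sketched as follows. This is an illustrative Python sketch under simplifying assumptions: the statechart is a map from each state to its outgoing (event, target state) pairs, a path is recorded when it reaches a state already placed in the tree (a round-trip), and paths record states only; the cruise-control-like state names in the usage example are hypothetical simplifications of the slide's system.

```python
from collections import deque

def transition_tree(transitions, initial):
    """Build round-trip paths by breadth-first traversal: a branch of the
    tree ends as soon as it reaches a state already included in the tree."""
    paths = []
    visited = {initial}
    queue = deque([(initial, [initial])])
    while queue:
        state, path = queue.popleft()
        for _event, target in transitions.get(state, []):
            if target in visited:
                paths.append(path + [target])   # round-trip path complete
            else:
                visited.add(target)
                queue.append((target, path + [target]))
    return paths
```

For instance, with transitions Off --start--> On, On --stop--> Off, On --set--> Cruise, Cruise --cancel--> On, the tree contains the two round-trip paths Off/On/Off and Off/On/Cruise/On; each path becomes one test case.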

Transition Tree: Simulation Results

  Comparing Criteria

Outline Background Issues Test strategies Testability Test Automation Conclusions and future work

Testability Two dimensions of testability: Controllability: the ability to put an object in a chosen state (e.g., by a test driver) and to exercise its operations with input data. Observability: the ability to observe the outputs produced in response to a supplied test input sequence (where outputs may denote not only the output values returned by one operation, but also any other effect on the object's environment: calls to distant features, commands sent to actuators, deadlocks …). These dimensions will determine the cost, error-proneness, and effectiveness of testing. The definitions above are tailored to an OO design context.

Basic Techniques Get/set methods in class interfaces. Assertions checked at run time: state/class invariants, pre-conditions, post-conditions. Equality methods: provide the ability to report whether two objects are equal – not as simple as it seems … Message sequence checking methods: detect run-time violations of the class's state specifications. Testability depends in part on coding standards, design practice, and the availability of code instrumentation and analysis tools.
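Several of these techniques can be combined in one small class, sketched below in Python for illustration (the class, its invariant, and its operations are hypothetical, not from the talk): a getter for observability, a setter for controllability, a run-time-checked invariant plus pre- and post-conditions, and an equality method usable by a test oracle.

```python
class Account:
    """Hypothetical testable class: invariant, contracts, get/set, equality."""

    def __init__(self, balance=0):
        self._balance = balance
        self._check_invariant()

    def _check_invariant(self):
        # state invariant checked at run time
        assert self._balance >= 0, "invariant violated: negative balance"

    def get_balance(self):
        # observability: lets a test driver inspect internal state
        return self._balance

    def set_balance(self, value):
        # controllability: lets a test driver put the object in a chosen state
        self._balance = value
        self._check_invariant()

    def withdraw(self, amount):
        assert 0 < amount <= self._balance, "precondition violated"
        old = self._balance
        self._balance -= amount
        assert self._balance == old - amount, "postcondition violated"
        self._check_invariant()

    def __eq__(self, other):
        # equality method: lets an oracle compare actual and expected objects
        return isinstance(other, Account) and self._balance == other._balance
```

A contract violation then surfaces at the faulty operation instead of propagating silently, which is exactly the early-detection benefit discussed on the next slide.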

Early Fault Detection and Diagnosis (Diagram, after Baudry et al., 2001: in classical software, the global program state is faulty after the infection point and the failure is only produced at the output, so the diagnosis scope spans an execution thread or several concurrent execution threads; in software designed by contract, the exception is handled by a contract near the infection point, with exception treatment for diagnosis and a default mode, which narrows the diagnosis scope.)

Ocl2j*: An AOP-based Approach Stage 1: Contract Code Generation – the ocl2j tool derives an Ocl2jAspect from the UML model. Stage 2: Program Instrumentation – the AspectJ compiler weaves the aspect into the program bytecode, producing instrumented bytecode. * Developed at Carleton University, SQUALL

Contract Assertions and Debugging

Outline Background Issues Test strategies Testability Test Automation Conclusions and future work

Objectives Test plans should be derived from specification & design documents This helps avoid errors in the test planning process and helps uncover problems in the specification & design With additional code analysis and suitable coding standards, test drivers can eventually be automatically derived There is a direct link between the quality of specifications & design and the testability of the system Test automation may be an additional motivation for model-driven development (e.g., UML-based)

Performance Stress Testing Goal: to automate, based on the system task architecture, the derivation of test cases that maximize the chances of critical deadline misses within real-time systems. The method combines external aperiodic events (those that are part of the interface of the software system under test, i.e., triggered by users, other software systems, or sensors) and internally generated system events (events triggered by external events and hidden from the outside of the software system) with a genetic algorithm, to automatically derive test cases – the arrival times of the external aperiodic events – such that the chances of missing deadlines are maximized. (Diagram: periodic tasks + aperiodic task arrival times = a test case.)

Optimal Integration Orders Briand and Labiche use genetic algorithms to identify optimal integration orders in OO systems (minimizing stubbing effort). Most classes in OO systems have dependency cycles, sometimes many of them. The integration order has a huge impact on the integration cost: the cost of stubbing classes. How do we decide on an optimal integration order? This is a combinatorial optimization (under constraints) problem; solutions for the TSP cannot be reused verbatim.
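The search problem can be sketched as follows. This is an illustrative Python sketch, not the actual approach from the work: a simple (1+1) evolutionary search with a swap mutation stands in for the full genetic algorithm, the fitness counts one stub per forward dependency in the order, and the dependency graphs are hypothetical.

```python
import random

def stub_cost(order, deps):
    """Number of stubs needed: class c requires a stub for each dependency d
    that is integrated after c in the given order."""
    pos = {c: i for i, c in enumerate(order)}
    return sum(1 for c in deps for d in deps[c] if pos[d] > pos[c])

def search_order(deps, iterations=2000, seed=0):
    """Minimize stubbing effort with a (1+1) evolutionary search:
    mutate by swapping two classes, keep the mutant if it is no worse."""
    rng = random.Random(seed)
    order = list(deps)
    rng.shuffle(order)
    best = stub_cost(order, deps)
    for _ in range(iterations):
        i, j = rng.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]       # mutate: swap two classes
        cost = stub_cost(order, deps)
        if cost <= best:
            best = cost                               # accept (also equal-cost moves)
        else:
            order[i], order[j] = order[j], order[i]   # revert worse mutant
    return order, best
```

For an acyclic dependency graph the search finds a zero-stub order (a topological order); with a cycle such as A depends on B and B depends on A, at least one stub is unavoidable, which is why integration order matters so much in OO systems.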

Example: Jakarta ANT

Results We obtain, most of the time, (near) optimal orders, i.e., orders that minimize stubbing effort. The GA can handle, with reasonable results, the most complex cases we have been able to find (e.g., 45 classes, 294 dependencies, > 400,000 dependency cycles). The GA approach is flexible in the sense that it is easy to tailor the objective/fitness function, add new constraints on the order, etc. Evolver is the tool we used.

Further Automation Meta-heuristic algorithms: Genetic algorithms, simulated annealing Generate test data based on constraints Structural testing Fault-based testing Testing exception conditions Analyze specifications (e.g., contracts) Specification flaws (satisfy precondition and violate postcondition) SEMINAL: Special issue of IST

Conclusions There are many opportunities to apply optimization and search techniques to help test automation Devising cost-effective testing techniques requires experimental research Achieving high testability requires: Good analysis and instrumentation tools Good specification and design practices

Thank you. Questions? (en français or in English)