Pseudo-Exhaustive Testing for Software
Rick Kuhn, Vadim Okun
National Institute of Standards and Technology, Gaithersburg, MD

What is NIST?
National Institute of Standards and Technology
● The nation's measurement and testing laboratory
● 3,000 scientists, engineers, and support staff, including 3 Nobel laureates
● Basic and applied research in physics, chemistry, materials, electronics, and computer science

Automated Testing Using Covering Arrays
● Project to combine automated test generation with combinatorial methods
● Goals – reduce testing cost, improve the cost-benefit ratio for formal methods

Problem: the usual...
● Too much to test
● Even with formal specs, we still need to test
● Take advantage of formal specs to produce tests as well – a better business case for FM
● Testing may exceed 50% of development cost
Example: 20 variables, 10 values each → 10^20 combinations. Which ones to test?

Solution: Combinatorial Testing
Suppose no failure requires more than a pair of settings to trigger. Then test all pairs – 180 test cases are sufficient to detect any failure.
How many interactions are required to detect flaws in real-world software? If we know, we can conduct "pseudo-exhaustive" testing.

Example: 5 parameters, 4 values each, 3-way combinations

Test | A B C D E
   1 | ...
   2 | ...
 etc.

All 3-way combinations of A,B,C values – but also all 3-way combinations of A,B,D; A,B,E; A,C,D; ... B,D,E; etc.
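To see what 3-way coverage demands, the sketch below (plain Python, our illustration rather than anything from the original slides) enumerates the parameter triples and counts the value combinations a 3-way test set must cover.

from itertools import combinations, product

params = ["A", "B", "C", "D", "E"]   # 5 parameters
values = range(4)                    # 4 values each (0..3)

triples = list(combinations(params, 3))   # C(5,3) = 10 parameter triples
print(triples[:3])   # [('A','B','C'), ('A','B','D'), ('A','B','E')]

# Each triple must be covered in all 4^3 = 64 value settings,
# so a 3-way covering array must hit 10 * 64 = 640 combinations.
per_triple = len(list(product(values, repeat=3)))
print(len(triples) * per_triple)   # 640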

Failure-Triggering Fault Interactions – data suggest few variables are involved
● Dalal et al. – effectiveness of pairwise testing; no higher-degree interactions examined
● Smith, Feather, Muscettola 2000 – NASA Deep Space 1 software – pairwise testing detected 88% and 50% of flaws for 2 subsystems; no higher-degree interactions examined
● Wallace, Kuhn 2001 – medical device software – 98% of flaws were pairwise interactions; no failure required > 4 conditions to trigger
● Kuhn, Reilly 2002 – web server, browser; no failure required > 6 conditions to trigger
● Kuhn, Wallace, Gallo 2004 – large NASA distributed database; no failure required > 4 conditions to trigger

FTFI numbers for 4 application domains – failures triggered by 1 to 6 conditions

Problem: Combinatorial Testing Requires a Lot of Tests
Number of combinations: for n variables with v values each, there are C(n,k) · v^k k-way value combinations.
Number of tests: suppose we want all 4-way combinations of 30 parameters, 5 values each → roughly 3,800 tests – too many to create manually.
A test set that covers all such combinations is a covering array.
Time to generate covering arrays: the problem is NP-hard.
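To make the count concrete, here is a short Python check of the slide's formula (our illustration, not part of the original):

from math import comb

def k_way_combinations(n: int, v: int, k: int) -> int:
    # C(n,k) * v**k: number of k-way value combinations
    # for n variables with v values each
    return comb(n, k) * v ** k

# The slide's example: 30 parameters, 5 values each, 4-way coverage
print(k_way_combinations(30, 5, 4))   # 27,405 * 625 = 17,128,125

Each single test covers C(30,4) = 27,405 of these combinations at once, which is why a covering array of only about 3,800 tests can cover all 17 million.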

Solution: Automated Testing
● Test data generation – easy
● Test oracle generation – hard
● Creating test oracles – model checking and other state exploration methods
Model-checker test production: if an assertion is not true, a counterexample is generated, which can be converted to a test case. (Black & Ammann, 1999)
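The conversion from counterexample to test case can be mechanical. The sketch below is a hypothetical illustration in Python – the trace format is a simplified stand-in, not NuSMV's actual output: the first state of a counterexample trace supplies the test inputs, and the following state supplies the expected result.

def counterexample_to_test(trace):
    # `trace` is a list of dicts mapping variable names to values,
    # a simplified stand-in for a model checker's counterexample trace.
    inputs = trace[0]                 # initial state = test inputs
    expected = trace[1]["alt_sep"]    # next state = expected output
    return {"inputs": inputs, "expected_alt_sep": expected}

# Hypothetical two-state trace for the TCAS example
trace = [
    {"enabled": True, "tcas_equipped": False, "intent_not_known": True,
     "need_upward_RA": True, "need_downward_RA": False, "alt_sep": "START_"},
    {"alt_sep": "UPWARD_RA"},
]
print(counterexample_to_test(trace))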

Proof-of-concept experiment
Traffic Collision Avoidance System (TCAS) module
– Small, practical example – 2 pages of SMV
– Used in other experiments on testing – Siemens testing experiments, Okun dissertation
– Suitable for model checking
12 variables: 7 boolean, two 3-value, one 4-value, two 10-value
Tests generated with Lei's "In Parameter Order" (IPO) algorithm, extended for > 2 parameters
Modeled in SMV, model checked with NuSMV
Test cases produced / tests generated:
– 2-way: 100 / 156
– 3-way: 405 / 461
– 4-way: 1,375 / 1,450
– 5-way: 4,220 / 4,309
– 6-way: 10,902 / 11,094

Model checking example

-- specification for a portion of tcas - altitude separation.
-- The corresponding C code is originally from Siemens Corp. Research
-- Vadim Okun 02/2002
MODULE main
VAR
  Cur_Vertical_Sep : { 299, 300, 601 };
  High_Confidence : boolean;
  ...
  init(alt_sep) := START_;
  next(alt_sep) := case
    enabled & (intent_not_known | !tcas_equipped) :
      case
        need_upward_RA & need_downward_RA : UNRESOLVED;
        need_upward_RA : UPWARD_RA;
        need_downward_RA : DOWNWARD_RA;
        1 : UNRESOLVED;
      esac;
    1 : UNRESOLVED;
  esac;
  ...
SPEC AG ((enabled & (intent_not_known | !tcas_equipped)
          & !need_downward_RA & need_upward_RA)
         -> AX (alt_sep = UPWARD_RA))

Computation Tree Logic
The usual logic operators, plus temporal operators over execution paths and the states on those paths:
A φ – All: φ holds on all paths starting from the current state.
E φ – Exists: φ holds on some paths starting from the current state.
G φ – Globally: φ has to hold on the entire subsequent path.
F φ – Finally: φ eventually has to hold.
X φ – Next: φ has to hold at the next state.
[others not listed]

SPEC AG ((enabled & (intent_not_known | !tcas_equipped) & !need_downward_RA & need_upward_RA) -> AX (alt_sep = UPWARD_RA))

reads as: "FOR ALL executions, IF enabled & (intent_not_known ...) THEN in the next state alt_sep = UPWARD_RA."

What is the most effective way to integrate combinatorial testing with model checking?
Given AG(P -> AX(R)) – "for all paths, in every state, if P then in the next state, R holds"
For k-way variable combinations: v1 & v2 & ... & vk, where vi abbreviates "vari = vali"
Now combine this constraint with the assertion to produce counterexamples. Some possibilities:
1. AG(v1 & v2 & ... & vk & P -> AX !(R))
2. AG(v1 & v2 & ... & vk -> AX !(1))
3. AG(v1 & v2 & ... & vk -> AX !(R))

What happens with these assertions?
1. AG(v1 & v2 & ... & vk & P -> AX !(R))
   P may contain the negation of one of the vi, so we get 0 -> AX !(R), which is always true: no counterexample, no test. This is too restrictive!
2. AG(v1 & v2 & ... & vk -> AX !(1))
   The model checker makes non-deterministic choices for variables not in v1..vk, so all R values may not be covered by a counterexample. This is too loose!
3. AG(v1 & v2 & ... & vk -> AX !(R))
   Forces production of a counterexample for each R. This is just right!
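Generating the form-3 trap properties is a few lines of string formatting. The sketch below is ours, not from the slides: one property is emitted per k-way combination, and each property's counterexample becomes one test.

def trap_property(combo, result_expr):
    # Build an SMV trap property of form 3:
    #   AG(v1 & v2 & ... & vk -> AX !(R))
    # `combo` is a list of (variable, value) pairs; `result_expr` is R.
    antecedent = " & ".join(f"{var} = {val}" for var, val in combo)
    return f"SPEC AG(({antecedent}) -> AX !({result_expr}))"

# Example: one 2-way combination for the TCAS model
print(trap_property([("Cur_Vertical_Sep", 299), ("High_Confidence", 1)],
                    "alt_sep = UPWARD_RA"))
# SPEC AG((Cur_Vertical_Sep = 299 & High_Confidence = 1) -> AX !(alt_sep = UPWARD_RA))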

Results
Roughly consistent with data on large systems, but the errors were harder to detect than in the real-world examples studied.

Problem: the usual – scaling up
Tests for the Personal Identity Verification (PIV) smartcard, to be used by all US Government employees and contractors
● Three modules: 25, 29, 43 parameters
● Tests currently being produced with the TVEC test generator
● Plan to experiment with both TVEC and the SMV model checker
● Generate 10^5 to 10^6 tests per module, probably up to 5-way combinations
Producing optimal covering arrays is NP-hard!

Solution: new covering array algorithms – tradeoffs:
FireEye (extended IPO) – Lei – optimal; can be used for most cases under 50 parameters
● Produces a minimal number of tests at the cost of long run time
● Outperforms other algorithms in minimizing tests
● Run time exponential in interaction level; not parallelized
Paintball – Kuhn – suboptimal; can be used for very large arrays (> 50 parameters) or higher interaction levels
● Simple algorithm: generates random tests with a few tunable parameters
● Suboptimal in number of tests by 3% to 40%, depending on run time
● Extremely fast and can be parallelized (still hits a wall, but the wall is further away than for other algorithms)
● Reduces test generation time from many days to one or two, at the cost of redundant tests
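The paintball idea can be sketched in a few lines: throw random tests at the space and keep the ones that hit the most uncovered combinations. The Python below is our simplified greedy-random reconstruction of that idea, not NIST's actual implementation.

import random
from itertools import combinations, product

def random_covering_array(n, v, k, candidates=50):
    # All k-way combinations to cover: each is a pair of
    # (parameter positions, value assignment for those positions).
    uncovered = {(cols, vals)
                 for cols in combinations(range(n), k)
                 for vals in product(range(v), repeat=k)}

    def hits(test):
        # Still-uncovered combinations that this test would cover
        return {(cols, vals) for (cols, vals) in uncovered
                if tuple(test[c] for c in cols) == vals}

    tests = []
    while uncovered:
        # "Paintball": throw random tests, keep the best of this round
        best = max((tuple(random.randrange(v) for _ in range(n))
                    for _ in range(candidates)),
                   key=lambda t: len(hits(t)))
        covered = hits(best)
        if not covered:
            # Rare near the end: build a test around one uncovered combo
            cols, vals = next(iter(uncovered))
            t = [random.randrange(v) for _ in range(n)]
            for c, val in zip(cols, vals):
                t[c] = val
            best = tuple(t)
            covered = hits(best)
        tests.append(best)
        uncovered -= covered
    return tests

# Pairwise (2-way) tests for 10 binary parameters
print(len(random_covering_array(n=10, v=2, k=2)))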

Chart: number of tests vs. run time, 4-way interactions, 4-value variables, as the number of variables grows (no optimization, single processor for the paintball algorithm). The increment in the number of tests gets smaller, while the increment in run time gets much bigger!

Will pseudo-exhaustive testing work in practice? The usual potential pitfalls:
● Faithfulness of the model to the actual code
  ● Always a problem
  ● Being able to generate tests from the specification helps make formal modeling more cost-effective
● Time cost of generating tests
  ● Model checking is very costly in run time
  ● Inherent limits on the number of variable values even with ideal covering array generation: at least C(n,k) · v^k combinations must be covered
  ● Abstraction is needed to make this tractable
● Equivalence classes for variable values may miss a lot that matters
● Not all software is suited to this scheme – e.g., good for code with lots of decisions, not so good for numerical functions

Summary and conclusions
● Proof of concept is promising – integrated with model checking
● Appears to be economically practical
● New covering array algorithms help make this approach more tractable
● Working on a cluster implementation of the covering array algorithm
● Many unanswered questions:
  ● Is it cost-effective?
  ● What kinds of software does it work best on?
  ● What kinds of errors does it miss?
  ● What failure-triggering fault interaction level is required for testing? 5-way? 6-way? More?
● A large real-world example will help answer these questions
Please contact us if you are interested!
Rick Kuhn, Vadim Okun