Theory of Testing and SATEL

2 Presentation Structure
Theory of testing
SATEL (Semi-Automatic TEsting Language)
–Test Intentions
–SATEL semantics
–CO-OPN/2c++

3 Exhaustive test set - Definition
The exhaustive set of tests for a given specification can be formalized as:
T_Exhaustive = { ⟨formula, result⟩ | formula ∈ compositions of ⟨input, output⟩ pairs,
  result = true if formula models a valid behavior,
  result = false if formula models an invalid behavior }
The exhaustive test set fully describes the expected semantics of the specification, including valid and invalid behaviors…
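A minimal Python sketch of this definition (the event names and the stand-in specification are hypothetical): it enumerates every composition of ⟨input, output⟩ pairs up to a length bound and pairs each formula with its expected verdict. The bound is only there to make the sketch terminate; the theoretical set is unbounded, which is exactly the practicability problem discussed below.

    from itertools import product

    # Toy universe of <input, output> event pairs (names hypothetical).
    EVENTS = [("insert_money(1)", "accept_money"),
              ("select_drink(Water)", "give_drink"),
              ("select_drink(Coke)", "not_enough_money")]

    def models_valid_behavior(formula):
        # Stand-in specification: a formula is valid iff no drink is
        # selected before at least one coin has been inserted.
        coins = 0
        for inp, _out in formula:
            if inp.startswith("insert_money"):
                coins += 1
            elif inp.startswith("select_drink") and coins == 0:
                return False
        return True

    def exhaustive_test_set(max_len):
        # T_Exhaustive restricted to formulas of length <= max_len:
        # every formula appears exactly once, paired with its verdict.
        return {(formula, models_valid_behavior(formula))
                for n in range(1, max_len + 1)
                for formula in product(EVENTS, repeat=n)}

    print(len(exhaustive_test_set(3)))   # 3 + 9 + 27 = 39 tests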

4 Specification-Based Test Generation
(Diagram: the Specification (SP) determines the Exhaustive Test Set (T_SP); the Program (P) is related to SP by ╞ and to T_SP by ╞_O)
╞ : program P satisfies (has the same semantics as) specification SP;
╞_O : program P reacts according to test set T_SP (as observed by an Oracle O).

5 Pertinence and Practicability
According to the previous slide, the following formula holds IF test set T_SP is pertinent, i.e. valid and unbiased:
–valid – no incorrect programs are accepted;
–unbiased – no correct programs are rejected;
(P ╞ SP) ⇔ (P ╞_O T_SP)
But exhaustive test sets are not practicable in the real world (infinite testing time)…
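The equivalence can be checked on a toy universe (everything here is a hypothetical stand-in): programs are functions on {0, 1, 2}, the "specification" is doubling, and a test is an ⟨input, output⟩ pair with a verdict. The exhaustive set is pertinent; a reduced set can lose validity by accepting an incorrect program.

    # Toy pertinence check (all names hypothetical).
    programs = [lambda x: 2 * x,      # correct program
                lambda x: x + 1]      # incorrect program (fails at 0 and 2)
    satisfies_spec = lambda p: all(p(x) == 2 * x for x in range(3))
    passes = lambda p, ts: all((p(x) == y) == verdict for (x, y), verdict in ts)

    def pertinent(test_set):
        # valid: every program accepted by the tests satisfies the spec;
        # unbiased: every program satisfying the spec is accepted.
        valid = all(satisfies_spec(p) for p in programs if passes(p, test_set))
        unbiased = all(passes(p, test_set) for p in programs if satisfies_spec(p))
        return valid and unbiased

    exhaustive = [((x, 2 * x), True) for x in range(3)]
    print(pertinent(exhaustive))            # True: here P |= SP iff P passes
    print(pertinent([((1, 2), True)]))      # False: x+1 is wrongly accepted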

6 Test Selection
We thus need a way of reducing the exhaustive (infinite) test set to a practicable test set while keeping pertinence… How do we do this? By stating hypotheses about the behavior of the program – the idea is to find good hypotheses that correctly generalize the behavior of the SUT!

7 Stating Hypotheses
Example: consider as SUT a (simplified) embedded controller for a drink vending machine:
(Diagram: Drink Vending Machine with inputs insert_money(Y), select_drink(X) and outputs accept_money, reject_money, give_drink, not_enough_money)
Drinks available: Coke (2 coins), Water (1 coin), Rivella (3 coins)

8 Stating Hypotheses (2)
Hypothesis 1: if the SUT works well for sequences of at most 3 operations, then the system works well (regularity); AND
Hypothesis 2: if the system works well while choosing one kind of drink, then it will work well for choosing all kinds (uniformity).
⟨…, true⟩  Example test 1
⟨…, false⟩  Example test 2
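A sketch of both hypotheses applied to the vending machine (Python, with hypothetical event names): uniformity picks one representative drink for the whole domain, and regularity bounds the tested sequences to short insert/select scenarios.

    import random

    DRINKS = {"Coke": 2, "Water": 1, "Rivella": 3}   # price in coins

    # Uniformity: behavior is assumed the same for every drink, so one
    # randomly drawn representative stands for the whole domain.
    drink = random.choice(list(DRINKS))

    # Regularity: behavior on short operation sequences (here two
    # operations, insert then select) is assumed to generalize.
    tests = []
    for coins in range(4):
        expected = "give_drink" if coins >= DRINKS[drink] else "not_enough_money"
        tests.append((["insert_money(%d)" % coins,
                       "select_drink(%s)" % drink], expected))

    for formula, verdict in tests:
        print(formula, "->", verdict)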

9 Where to find Hypotheses?
From the Test Engineer
–The knowledge of a test engineer about the functioning of the SUT should be used;
–He/She can provide truthful generalizations about the behavior of the SUT!
In the Specification
–The specification contains an abstraction of all possible behaviors of the SUT;
–It is possible to complement the user’s hypotheses automatically if the specification is formal!

10 Specification – Complementing human Hypotheses
Example: imagine the following scenario from the DVM: the user inserts 2 coins “insertMoney(2)” and then selects a drink “selectDrink(X)”. There are then 3 interesting behaviors:
–The buyer doesn’t insert enough coins for drink X and gets nothing (Rivella);
–The buyer inserts just enough coins, chooses X and gets it (Coke);
–The buyer inserts too many coins, chooses X, gets it and the change (Water).
Assuming the specification is precise enough, the points of choice stated in the operation selectDrink(X) of the specification can be used to add further behavior classifications that can be combined with the hypotheses stated by the test engineer. This is called sub-domain decomposition.
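A sketch of the decomposition (the drinks and prices come from the example; the code itself is a hypothetical illustration): each drink falls into the sub-domain given by the point of choice selectDrink(X) would take after insertMoney(2).

    DRINKS = {"Coke": 2, "Water": 1, "Rivella": 3}   # price in coins

    def subdomains(inserted):
        # One class per point of choice in selectDrink(X): not enough
        # coins, exactly enough, or too many (change due).
        classes = {"not_enough": [], "exact": [], "change_due": []}
        for drink, price in DRINKS.items():
            if inserted < price:
                classes["not_enough"].append(drink)
            elif inserted == price:
                classes["exact"].append(drink)
            else:
                classes["change_due"].append(drink)
        return classes

    # After insertMoney(2): Rivella -> not_enough, Coke -> exact,
    # Water -> change_due, matching the three behaviors above.
    print(subdomains(2))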

11 Applying Hypotheses
Our idea is to use a formal language to describe tests (HML) and to define a language for applying constraints (hypotheses) to those tests;
Of course, the final test set will be pertinent (valid and unbiased) only when all the hypotheses correspond to valid generalizations of the behavior of the program!
(Diagram: both the Test Engineer and the Specification (with behavior) contribute Hypotheses on Program behavior)

12 Oracle
The Oracle is a decision procedure that decides whether a test was successful or not…
(Diagram: a test ⟨…, false⟩ and the program (Drink Vending Machine) are fed to the Oracle, which answers Yes (test passes) or No (test doesn’t pass))
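One possible reading of the oracle in Python (the program stub and event names are hypothetical): replay the formula's inputs on the program, compare the observed outputs with the formula, and check that observation against the annotated verdict.

    def oracle(program, formula, expected):
        # The oracle replays the formula's inputs on the program and
        # checks whether the observed outputs match the formula; the
        # test passes iff that observation equals the annotated verdict.
        inputs = [inp for inp, _ in formula]
        observed = program(inputs)
        behaves_as_formula = observed == [out for _, out in formula]
        return behaves_as_formula == expected

    # Hypothetical stub standing in for the Drink Vending Machine.
    dvm = lambda inputs: ["reject_money" for _ in inputs]
    test = ([("insert_money(2)", "accept_money")], True)
    print("Yes (test passes)" if oracle(dvm, *test) else "No (test doesn't pass)")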

13 Presentation Structure
Theory of testing
SATEL (Semi-Automatic TEsting Language)
–Test Intentions
–SATEL semantics
–CO-OPN/2c++

14 State of the Art – Running Example
ATM System with the operations
–login(password) / logged, wrongPass, blocked
–logout
–withdraw(amount) / giveMoney(amount), notEnoughMoney
Following the second wrong login no more operations are allowed
The ATM distributes 20 or 100 CHF bills
Initial state
–There are 100 (CHF) in the account
–‘a’ is the right password, ‘b’ and ‘c’ are wrong passwords

15 SATEL What are Test Intentions?
A test intention defines both a subset of the SUT behavior and hypotheses about how to test it
(Diagram: example test intentions – loginLogout, < 4x, 1withdraw, reachBlocked)

16 SATEL Recursive Test Intentions and Composition
Variables f : PrimitiveHML
T in loginLogout;   (base case for the recursion – the empty test intention)
f in loginLogout => f . HML( { login(a) with logged } { logout } T ) in loginLogout;   (recursive definition)
Test intentions may be composed:
f in loginLogout & nbEvents( f ) < 4 => f in 4LessLoginLogout;   (regularity over the execution path + test intention reuse)
One test intention is defined by a set of axioms:
HML(T), true
HML({login(a) with logged} {logout} T), true
HML({login(a) with logged} {logout} {login(a) with logged} {logout} T), true
HML({login(a) with logged} {logout} {login(a) with logged} {logout} {login(a) with logged} {logout} T), true
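A sketch of the unfolding in Python (the representation is hypothetical: a formula is a tuple of ⟨event, observation⟩ pairs): the base case yields the empty formula, each recursive step extends the formula with a login/logout pair, and the composed intention filters by the nbEvents bound.

    def unfold_login_logout(depth):
        # T is in loginLogout (base case); if f is in loginLogout, so is
        # f . HML({login(a) with logged} {logout} T) (recursive case).
        formulas, f = [()], ()
        for _ in range(depth):
            f = f + (("login(a)", "logged"), ("logout", None))
            formulas.append(f)
        return formulas

    nb_events = len   # each event pair counts as one event

    # Composition: 4LessLoginLogout reuses loginLogout and keeps only
    # the formulas satisfying the regularity bound nbEvents(f) < 4.
    four_less = [f for f in unfold_login_logout(5) if nb_events(f) < 4]
    for f in four_less:
        print(f)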

17 SATEL Uniformity Axioms
Variables pass : password
uniformity( pass ) => HML( { login( pass ) with wrongPass } { login( pass ) with blocked } T ) in reachBlocked;   (uniformity predicate)
HML({login(b) with wrongPass} {login(b) with blocked} T), true
HML({login(a) with wrongPass} T), false
HML({login(b) with wrongPass} {login(a) with blocked} T), false
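A sketch of how a uniformity predicate could be solved (Python; the representation is hypothetical, and the verdict is hard-coded here whereas SATEL derives it by validating against the model): draw one random representative from the variable's domain and instantiate the axiom with it.

    import random

    PASSWORDS = ["a", "b", "c"]        # 'a' correct, 'b' and 'c' wrong

    def instantiate_reach_blocked():
        # uniformity(pass): a single randomly drawn password stands
        # for the whole domain when instantiating the axiom.
        pw = random.choice(PASSWORDS)
        formula = [("login(%s)" % pw, "wrongPass"),
                   ("login(%s)" % pw, "blocked")]
        verdict = pw != "a"            # the model only blocks wrong passwords
        return formula, verdict

    print(instantiate_reach_blocked())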

18 SATEL Regularity and subUniformity Axioms
Variables obs : primitiveObservation, am : natural
( am < … ), subUniformity( obs ) => HML( { login(a) with logged } { withdraw( am ) with obs } T ) in 1withdraw;   (regularity predicate on am; subUniformity predicate on obs)
HML({login(a) with logged} {withdraw(120) with notEnoughMoney} T), true
HML({login(a) with logged} {withdraw(80) with giveMoney(80)} T), true
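A sketch of the subUniformity idea on the withdraw amount (Python; the balance and bill granularity come from the running example, the code is a hypothetical illustration): instead of testing all bounded amounts, keep one representative per observation class the model distinguishes.

    ACCOUNT = 100                      # initial balance in CHF

    def outcome_class(am):
        # The point of choice in the model for withdraw(am).
        return "giveMoney" if am <= ACCOUNT else "notEnoughMoney"

    def sub_uniform_amounts(domain):
        # subUniformity(obs): keep one representative amount per
        # distinct observation class instead of the whole domain.
        reps = {}
        for am in domain:
            reps.setdefault(outcome_class(am), am)
        return reps

    print(sub_uniform_amounts(range(20, 201, 20)))
    # {'giveMoney': 20, 'notEnoughMoney': 120} -> one test per class,
    # in the spirit of the two instantiated tests above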

19 SATEL Test Intention Definition Mechanisms

20 Presentation Structure Theory of testing SATEL (Semi-Automatic TEsting Language) –Test Intentions –SATEL semantics –CO-OPN /2c++

21 SATEL Semantics
Semantics of a test intention:
Test intention unfolding (solve recursion)
Calculate the exhaustive test set
–Replace all variables exhaustively
–Generate oracles by validating against the model: positive tests are annotated with true; negative tests, extracted from test intentions but having an impossible last event, are annotated with false
Reduce the exhaustive test set by solving all predicates on the variables
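These three steps can be read as a pipeline; the sketch below (Python, every piece a hypothetical stub) takes an unfolding function, an exhaustive variable instantiation, a model-based oracle, and the selection predicates.

    def intention_semantics(unfold, instantiate, model_accepts, predicates):
        # Step 1: unfold the recursion into formula skeletons.
        # Step 2: replace variables exhaustively; the oracle annotates
        #         each instantiation true/false against the model.
        # Step 3: reduce the exhaustive set by solving the predicates.
        exhaustive = [(binding, formula, model_accepts(formula))
                      for skeleton in unfold()
                      for binding, formula in instantiate(skeleton)]
        return [(f, v) for b, f, v in exhaustive
                if all(pred(b) for pred in predicates)]

    # Tiny demo with hypothetical pieces: one skeleton, variable P over
    # {a, b, c}, a model accepting only login(a), one selection predicate.
    demo = intention_semantics(
        unfold=lambda: [["login(P)"]],
        instantiate=lambda s: [({"P": p}, ["login(%s)" % p]) for p in "abc"],
        model_accepts=lambda f: f == ["login(a)"],
        predicates=[lambda b: b["P"] in ("a", "b")])
    print(demo)   # [(['login(a)'], True), (['login(b)'], False)]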

22 SATEL Semantics Annotations
Annotations correspond to the conditions in the model allowing a state transition
In CO-OPN an annotation is a conjunction of conditions reflecting the hierarchy and dynamicity of the model

23 SATEL Semantics Equivalence Class Calculation
Variables obs : primitiveObservation, pass : password (AADT)
subUniformity( pass ) => HML( { login( pass ) with obs } T ) in login;
C1: correct password; C2: wrong password
HML({login(a) with logged} T), true
HML({login(b) with wrongPass} T), true
HML({login(a) with logged} T), true
HML({login(c) with wrongPass} T), true
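A sketch of the class calculation (Python; the model stub is hypothetical): group the password domain by the observation the model yields for login(pass), which gives exactly the classes C1 and C2; drawing one representative per class per run reproduces the two pairs of tests above.

    def password_classes(model_observation):
        # subUniformity(pass): group the password domain by the
        # observation the model yields for login(pass).
        classes = {}
        for pw in ["a", "b", "c"]:
            classes.setdefault(model_observation(pw), []).append(pw)
        return classes

    # Hypothetical model stub: only 'a' logs in.
    print(password_classes(lambda pw: "logged" if pw == "a" else "wrongPass"))
    # {'logged': ['a'], 'wrongPass': ['b', 'c']} -> classes C1 and C2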

24 SATEL Semantics Equivalence Class Calculation (cont)
Variables path : primitiveHml
subUniformity( path ), nbEvents( path ) < 4 => path in allUnder4;
C1: correct password; C2: wrong password; C3: two wrong logins; C4: true; C5: not enough money; C6: enough money

25 Presentation Structure
Theory of testing
SATEL (Semi-Automatic TEsting Language)
–Test Intentions
–SATEL semantics
–CO-OPN

26 CO-OPN/2c++ ATM Model
(Diagram of the model; annotation = context sync conditions and (object sync condition, object id) pairs; the loggedIn state is shown)

27 Tools
Developed an IDE for SATEL’s concrete syntax, integrated with CoopnBuilder
A case study with an industrial partner (CTI) allowed us to begin identifying the methodological capabilities of SATEL