Presentation on theme: "Theory of Testing and SATEL. 2 Presentation Structure Theory of testing SATEL (Semi-Automatic TEsting Language) –Test Intentions –SATEL semantics –CO-OPN."— Presentation transcript:

1 Theory of Testing and SATEL

2 Presentation Structure
Theory of testing
SATEL (Semi-Automatic TEsting Language)
–Test Intentions
–SATEL semantics
–CO-OPN/2c++

3 L.Lúcio — Exhaustive test set – Definition
The exhaustive set of tests for a given specification can be formalized as:
T_Exhaustive = { ⟨formula, result⟩ | formula is a composition of ⟨input, output⟩ pairs, result = true if formula models a valid behavior, result = false if formula models an invalid behavior }
The exhaustive test set fully describes the expected semantics of the specification, including valid and invalid behaviors…
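As an illustrative sketch (not from the slides), elements of the exhaustive test set can be modeled in Python as ⟨formula, result⟩ pairs, here using the drink vending machine that appears later in the deck; `PRICES` and `verdict` are assumed names.

```python
# Hypothetical sketch: exhaustive test set elements as <formula, result>
# pairs. The "specification" is a tiny price table for the DVM example.

PRICES = {"water": 1, "coke": 2, "rivella": 3}

def verdict(coins, drink, observed):
    """Result of one <input, output> formula: true iff `observed`
    is the behavior the specification prescribes."""
    expected = "give_drink" if coins >= PRICES[drink] else "not_enough_money"
    return observed == expected

# Two sample elements of the (conceptually infinite) exhaustive set:
test_valid = (("insert_money(2)", "select_drink(coke)", "give_drink"),
              verdict(2, "coke", "give_drink"))
test_invalid = (("insert_money(1)", "select_drink(coke)", "give_drink"),
                verdict(1, "coke", "give_drink"))
```

The full exhaustive set would enumerate every composition of input/output pairs, which is why it is only a theoretical object.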

4 Specification-Based Test Generation
Specification (SP), Program (P), Exhaustive Test Set (T_SP)
P ╞ SP : program P satisfies (has the same semantics as) specification SP;
P ╞_O T_SP : program P reacts according to test set T_SP (as observed by an Oracle O).

5 Pertinence and Practicability
According to the previous slide, the following formula holds IF test set T_SP is pertinent – valid and unbiased:
–valid – no incorrect programs are accepted;
–unbiased – no correct programs are rejected;
(P ╞ SP) ⟺ (P ╞_O T_SP)
But exhaustive test sets are not practicable in the real world (infinite testing time)…
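A toy sketch of validity and bias (not from the slides; all names are illustrative): a test set is valid if no incorrect program passes it, and unbiased if no correct program fails it.

```python
# Hypothetical sketch: checking programs against <input, output> test sets.

def passes(program, test_set):
    """P |=_O T: the program agrees with every <input, output> pair."""
    return all(program(inp) == out for inp, out in test_set)

# Intended behavior: double the input.
correct = lambda x: x + x                      # a correct implementation
incorrect = lambda x: 7 if x == 3 else x * 2   # wrong only on input 3

weak_tests = [(1, 2), (2, 4)]      # misses the defect on input 3
better_tests = [(1, 2), (3, 6)]

accepts_faulty = passes(incorrect, weak_tests)        # weak set is not valid
rejects_faulty = not passes(incorrect, better_tests)  # better set catches it
keeps_correct = passes(correct, better_tests)         # and stays unbiased here
```

A pertinent test set would have to reject every incorrect program and accept every correct one, which the exhaustive set does by construction.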

6 Test Selection
We thus need a way of reducing the exhaustive (infinite) test set to a test set that is practicable, while keeping pertinence…
How do we do this? By stating hypotheses about the behavior of the program – the idea is to find good hypotheses that correctly generalize the behavior of the SUT!

7 Stating Hypotheses
Example: consider as SUT a (simplified) embedded controller for a drink vending machine:
Operations: insert_money(Y), select_drink(X), accept_money, reject_money, give_drink, not_enough_money
Drinks available: Coke (2 coins), Water (1 coin), Rivella (3 coins)

8 Stating Hypotheses (2)
Hypothesis 1: if the SUT works well for sequences of up to 3 operations, then the system works well (regularity); AND
Hypothesis 2: if the system works well while choosing one kind of drink, then it will work well for choosing all kinds (uniformity).
⟨…, true⟩ – Example test 1
⟨…, false⟩ – Example test 2
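The two hypotheses above can be sketched in Python (all names are assumptions): regularity bounds the length of the tested operation sequences, and uniformity lets one representative value stand in for a whole domain.

```python
from itertools import product

# Hypothetical sketch of the regularity and uniformity hypotheses.

OPERATIONS = ["insert_money", "select_drink", "accept_money"]
DRINKS = ["coke", "water", "rivella"]

def regular_sequences(ops, max_len=3):
    """Regularity: only test sequences of up to `max_len` operations."""
    seqs = []
    for n in range(1, max_len + 1):
        seqs.extend(product(ops, repeat=n))
    return seqs

def uniform_choice(domain):
    """Uniformity: a single value stands in for the whole domain."""
    return domain[0]

seqs = regular_sequences(OPERATIONS)   # 3 + 9 + 27 = 39 finite sequences
drink = uniform_choice(DRINKS)         # one drink instead of all three
```

Together the two hypotheses cut the infinite exhaustive set down to a finite, practicable one; pertinence is preserved only if the generalizations actually hold for the SUT.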

9 Where to find Hypotheses?
From the Test Engineer
–The knowledge of a test engineer about the functioning of the SUT should be used;
–He/she can provide truthful generalizations about the behavior of the SUT!
In the Specification
–The specification contains an abstraction of all possible behaviors of the SUT;
–It is possible to complement the user's hypotheses automatically if the specification is formal!

10 Specification – Complementing Human Hypotheses
Example: imagine the following scenario from the DVM: the user inserts 2 coins ("insertMoney(2)") and then selects a drink ("selectDrink(X)"). There are then 3 interesting behaviors:
–The buyer doesn't insert enough coins for drink X and gets nothing (Rivella);
–The buyer inserts just enough coins, chooses X and gets it (Coke);
–The buyer inserts too many coins, chooses X, gets it and the change (Water).
Assuming the specification is precise enough, the points of choice stated in the operation selectDrink(X) of the specification can be used to add further behavior classifications that can be combined with the hypotheses stated by the test engineer. This is called sub-domain decomposition.
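A minimal Python sketch of this sub-domain decomposition (the function and class names are illustrative, not SATEL constructs): the inserted amount splits the input space into three behavior classes relative to the price of the chosen drink.

```python
# Hypothetical sketch of sub-domain decomposition for selectDrink(X).

PRICES = {"water": 1, "coke": 2, "rivella": 3}

def subdomain(inserted, drink):
    """Classify one <inserted, drink> input into a behavior class."""
    price = PRICES[drink]
    if inserted < price:
        return "not_enough"   # gets nothing
    if inserted == price:
        return "exact"        # gets the drink
    return "too_much"         # gets the drink and the change

# With 2 coins inserted, the three drinks cover the three sub-domains:
classes = {drink: subdomain(2, drink) for drink in PRICES}
```

Picking one uniformity representative per sub-domain, rather than one for the whole input space, is what makes the automatic complement of the engineer's hypotheses useful.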

11 Applying Hypotheses
Our idea is to use a formal language (HML) to describe tests and to define a language for applying constraints (hypotheses) to those tests;
Of course, the final test set will be pertinent (valid and unbiased) only when all the hypotheses correspond to valid generalizations of the behavior of the program!
Diagram: Test Engineer + Specification (with behavior) → Hypotheses on Program behavior

12 Oracle
The Oracle is a decision procedure that decides whether a test was successful or not…
Diagram: a test ⟨…, false⟩ and the Program are fed to the Oracle, which answers Yes (test passes) or No (test doesn't pass).
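An oracle for the DVM can be sketched as follows (an assumed design, not the SATEL oracle itself): run the test's input against an executable model and check whether the model's agreement with the observed output matches the test's annotated verdict.

```python
# Hypothetical sketch of an oracle for <formula, verdict> tests.

PRICES = {"water": 1, "coke": 2, "rivella": 3}

def model(inserted, drink):
    """Executable model of the DVM for one purchase attempt."""
    return "give_drink" if inserted >= PRICES[drink] else "not_enough_money"

def oracle(test):
    """A test is ((inserted, drink, observed), verdict): it passes
    iff the model's agreement with `observed` matches `verdict`."""
    (inserted, drink, observed), verdict = test
    return (model(inserted, drink) == observed) == verdict

yes = oracle(((3, "rivella", "give_drink"), True))   # test passes
no = oracle(((1, "coke", "give_drink"), True))       # test fails
```

Note that a test annotated false passes exactly when the model refuses the behavior, which is how negative tests are checked.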

13 Presentation Structure
Theory of testing
SATEL (Semi-Automatic TEsting Language)
–Test Intentions
–SATEL semantics
–CO-OPN/2c++

14 State of the Art Running Example
ATM system with the operations:
–login(password) / logged, wrongPass, blocked
–logout
–withdraw(amount) / giveMoney(amount), notEnoughMoney
Following the second wrong login, no more operations are allowed
The ATM distributes 20 or 100 CHF bills
Initial state:
–There are 100 CHF in the account
–'a' is the right password; 'b' and 'c' are wrong passwords
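The running example can be made executable as a small Python state machine under the stated rules (the class and method names are assumptions, not part of SATEL or CO-OPN):

```python
# Hypothetical executable sketch of the ATM running example:
# 'a' is correct, the machine blocks after the second wrong login,
# the balance starts at 100 CHF, and bills come in multiples of 20.

class ATM:
    def __init__(self):
        self.balance = 100
        self.wrong_logins = 0
        self.logged = False
        self.blocked = False

    def login(self, password):
        if self.blocked:
            return "blocked"
        if password == "a":
            self.logged = True
            return "logged"
        self.wrong_logins += 1
        if self.wrong_logins >= 2:
            self.blocked = True
            return "blocked"
        return "wrongPass"

    def logout(self):
        self.logged = False

    def withdraw(self, amount):
        if self.logged and amount <= self.balance and amount % 20 == 0:
            self.balance -= amount
            return f"giveMoney({amount})"
        return "notEnoughMoney"
```

Such a model is what the later oracle-generation step validates candidate test sequences against.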

15 SATEL – What are Test Intentions?
A test intention defines both a subset of the SUT behavior and hypotheses about how to test it
Example test intentions: loginLogout (< 4x), 1withdraw, reachBlocked

16 SATEL – Recursive Test Intentions and Composition
Variables f : primitiveHML
Base case for the recursion (empty test intention): T in loginLogout;
Recursive definition: f in loginLogout => f . HML({login(a) with logged} {logout} T) in loginLogout;
Test intentions may be composed (regularity over the execution path, test intention reuse): f in loginLogout & nbEvents(f) < 4 => f in 4LessLoginLogout;
One test intention is defined by a set of axioms:
HML(T), true
HML({login(a) with logged} {logout} T), true
HML({login(a) with logged} {logout} {login(a) with logged} {logout} T), true
HML({login(a) with logged} {logout} {login(a) with logged} {logout} {login(a) with logged} {logout} T), true
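The unfolding of the recursive loginLogout intention can be sketched in Python (`unfold` is an assumed name): the base case is the empty test, and each unfolding adds one {login(a) with logged} {logout} pair.

```python
# Hypothetical sketch: unfolding the recursive loginLogout intention,
# bounded by a maximum number of pairs (the regularity constraint).

def unfold(max_pairs):
    """All axioms of the intention up to `max_pairs` unfoldings."""
    axioms = [[]]                # base case: the empty test intention
    events = []
    for _ in range(max_pairs):
        events = events + ["login(a) with logged", "logout"]
        axioms.append(list(events))
    return axioms

axioms = unfold(3)               # 4 axioms, as listed on the slide
```

Without a bound such as nbEvents(f) < 4, the recursion would generate infinitely many axioms, which is exactly why the composed intention is needed.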

17 SATEL – Uniformity Axioms
Variables pass : password
Uniformity predicate: uniformity(pass) => HML({login(pass) with wrongPass} {login(pass) with blocked} T) in reachBlocked;
Resulting tests:
HML({login(b) with wrongPass} {login(b) with blocked} T), true
HML({login(a) with wrongPass} T), false
HML({login(b) with wrongPass} {login(a) with blocked} T), false

18 SATEL – Regularity and subUniformity Axioms
Variables obs : primitiveObservation, am : natural
Regularity and subUniformity predicates: regularity(am) & subUniformity(obs) => HML({login(a) with logged} {withdraw(am) with obs} T) in 1withdraw;
Resulting tests:
HML({login(a) with logged} {withdraw(120) with notEnoughMoney} T), true
HML({login(a) with logged} {withdraw(80) with giveMoney(80)} T), true

19 SATEL – Test Intention Definition Mechanisms

20 Presentation Structure
Theory of testing
SATEL (Semi-Automatic TEsting Language)
–Test Intentions
–SATEL semantics
–CO-OPN/2c++

21 SATEL Semantics – Semantics of a Test Intention
–Test intention unfolding (solve the recursion)
–Calculate the exhaustive test set:
–Replace all variables exhaustively
–Generate oracles by validating against the model: positive tests are annotated with true; negative tests, extracted from test intentions but having an impossible last event, are annotated with false
–Reduce the exhaustive test set by solving all predicates on the variables
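The oracle-generation step can be sketched for the reachBlocked intention (an illustrative Python rendering, with assumed names): substitute every password for the variable, validate each sequence against a model of the ATM, and annotate it true or false accordingly.

```python
# Hypothetical sketch of oracle generation for reachBlocked.

def login_result(password, wrong_before):
    """Model of one login on the slide-14 ATM."""
    if password == "a":
        return "logged"
    return "blocked" if wrong_before >= 1 else "wrongPass"

PASSWORDS = ["a", "b", "c"]

def exhaustive_reach_blocked():
    tests = []
    for p in PASSWORDS:
        # candidate: {login(p) with wrongPass} {login(p) with blocked}
        observed = [login_result(p, 0), login_result(p, 1)]
        expected = ["wrongPass", "blocked"]
        tests.append((p, observed == expected))
    return tests

tests = exhaustive_reach_blocked()
# 'a' yields a negative test (it actually logs in); 'b'/'c' positive
```

The final reduction step would then keep one representative per equivalence class instead of all three substitutions.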

22 SATEL Semantics – Annotations
Annotations correspond to the conditions in the model that allow a state transition
In CO-OPN, an annotation is a conjunction of conditions reflecting the hierarchy and dynamicity of the model

23 SATEL Semantics – Equivalence Class Calculation
Variables obs : primitiveObservation, pass : password (AADT)
subUniformity(pass) => HML({login(pass) with obs} T) in login;
Equivalence classes: C1: correct password; C2: wrong password
Resulting tests:
HML({login(a) with logged} T), true
HML({login(b) with wrongPass} T), true
HML({login(a) with logged} T), true
HML({login(c) with wrongPass} T), true

24 SATEL Semantics – Equivalence Class Calculation (cont.)
Variables path : primitiveHml
subUniformity(path) & nbEvents(path) < 4 => path in allUnder4;
Equivalence classes: C1: correct password; C2: wrong password; C3: two wrong logins; C4: true; C5: not enough money; C6: enough money

25 Presentation Structure
Theory of testing
SATEL (Semi-Automatic TEsting Language)
–Test Intentions
–SATEL semantics
–CO-OPN

26 CO-OPN/2c++ ATM Model
annotation = context sync conditions ∧ (object sync condition, object id) pairs
(model figure shows the loggedIn place)

27 Tools
–Developed an IDE for SATEL's concrete syntax, integrated with CoopnBuilder
–A case study with an industrial partner (CTI) allowed us to begin identifying the methodological capabilities of SATEL

