1 © Yves Le Traon 2003
OO System Testing: Behavioral test patterns
Automatic test synthesis from UML models

2 © Yves Le Traon 2003
Outline
- System testing
- Behavioral test patterns
- Generating behavioral test patterns

3 © Yves Le Traon 2003
Testing product lines
- Benefiting from the PL specificities
  - Testing commonalities
  - Deriving tests according to the variants
    - Specific tests
    - Reusing tests
- Building test assets
  - Defining tests independently from the products
  - Using generic scenarios
  - Deriving product-specific test cases from those generic scenarios

4 © Yves Le Traon 2003
System testing and UML
- Case study: a distributed meeting application

5 © Yves Le Traon 2003
The use case scenarios
- High level, simple, incomplete
- Wildcards for genericity
- Example: Enter use case scenario (x is a scenario parameter)
  (a) Nominal case: x:user -> :Server : enter(*, x) ; reply ok
  (b) Exceptional case: x:user -> :Server : enter(*, x) ; reply nok

6 © Yves Le Traon 2003
System testing and UML

| Use case            | Nominal scenarios | Rare exceptional scenarios | Failure exceptional scenarios |
|---------------------|-------------------|----------------------------|-------------------------------|
| A Plan              | N_A1, N_A2        |                            | E_A1, E_A2                    |
| B Open              | N_B1              |                            | E_B1, E_B2                    |
| I Close             | N_I1              | R_I1                       |                               |
| C Consult           | N_C1              |                            | E_C1                          |
| D Enter             | N_D1              | R_D1                       | E_D1, E_D2                    |
| E Ask for the floor | N_E1              |                            | E_E1                          |
| G Speak             | N_G1, N_G2        | R_G1                       | E_G1, E_G2                    |
| H Leave             | N_H1              |                            | E_H1                          |
| F Give the floor    | N_F1              |                            | E_F1, E_F2                    |

7 © Yves Le Traon 2003
System testing and UML
- Minimum criterion: cover each scenario with one test datum
  - Here, 27 test cases (the table above lists 27 scenarios in total: 4+3+2+2+4+2+5+2+3)
- Coverage criterion for use-case combinations
  - Prerequisite: an activity diagram of the use cases

8 © Yves Le Traon 2003
Activity diagrams
- A swimlane per actor
- Visually significant
- Within UML notations
- Suitable for applying algorithms
- Difficult to build
- Hard or impossible to express certain behaviors
- Not suitable for use cases shared by actors

9 © Yves Le Traon 2003
System testing and UML
[Figure: activity diagram of the use cases]

10 © Yves Le Traon 2003
System testing and UML
- Criterion: test each scenario of each use case in each elementary nominal sequence (one pass per loop)

11 © Yves Le Traon 2003
System testing and UML
- Test data to generate for the sequence A.B.I.H => 4 + 2x3 + 2x1x2 + 2x1x2x2 = 22 test cases must be generated via this "pattern"

| Test target | Test case combination                                |
|-------------|------------------------------------------------------|
| A           | {N_A1, N_A2, E_A1, E_A2}                             |
| B           | {N_A1, N_A2} x {N_B1, E_B1, E_B2}                    |
| I           | {N_A1, N_A2} x {N_B1} x {N_I1, R_I1}                 |
| H           | {N_A1, N_A2} x {N_B1} x {N_I1, R_I1} x {N_H1, E_H1}  |
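
The arithmetic can be checked mechanically. Below is a small illustrative sketch (not from the slides); the scenario counts come from the table on slide 6, and the assumption that prefixes combine the non-failure (nominal + rare) scenarios of the preceding use cases is inferred from the slide's own factors.

```java
/** Illustrative sketch: counting the test cases produced by the
 *  combination pattern for the use-case sequence A.B.I.H. */
public class CombinationCount {
    public static void main(String[] args) {
        // Per use case in the sequence A, B, I, H:
        int[] all        = {4, 3, 2, 2}; // every scenario of the target use case
        int[] nonFailure = {2, 1, 2, 2}; // scenarios usable as prefixes
                                         // (assumed: nominal + rare)
        int total = 0;
        for (int target = 0; target < all.length; target++) {
            int count = all[target];       // cross all scenarios of the target...
            for (int k = 0; k < target; k++)
                count *= nonFailure[k];    // ...with every prefix combination
            total += count;
        }
        System.out.println(total);         // 4 + 2*3 + 2*1*2 + 2*1*2*2 = 22
    }
}
```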

12 © Yves Le Traon 2003
System testing and UML
- Avoiding redundancy: order the "test targets" => 10 test cases must be generated via this "pattern"

| Test target | Test case combination                          |
|-------------|------------------------------------------------|
| H           | {N_A1, N_A2} x {N_B1} x {N_I1} x {N_H1, E_H1}  |
| I           | {N_A1, N_A2} x {N_B1} x {R_I1}                 |
| B           | {N_A1, N_A2} x {E_B1}                          |
| A           | {E_A1, E_A2}                                   |

13 © Yves Le Traon 2003
Behavioral Test Patterns
- Based on the use case scenarios
  - high level
  - generic (use of wildcards)
  - incomplete
  - nominal or exceptional
- A selection from among the scenarios:
  - An accept scenario (= the test objective)
  - Reject scenarios (optional)
  - Prefix scenarios (= initialisation, optional)

14 © Yves Le Traon 2003
Benefits from test patterns
- Generation of product-specific test cases from product-independent test patterns
- But test patterns are tedious to build, especially for "basis tests"
- Idea: automatically build significant sets of test patterns

15 © Yves Le Traon 2003
How to exploit use-case ordering?
- Generate pertinent paths of use cases
  - In order to reach a test criterion
- Issues:
  - An algorithm to assemble the use cases, taking the pre- and postconditions into account (a sketch follows)
  - Defining pertinent test criteria
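
A minimal sketch of such an assembly algorithm, assuming use cases carry pre- and postconditions as sets of facts over the system state; all names are illustrative, not the actual tool's API, and removal of facts (e.g. close cancelling opened) is omitted for brevity.

```java
import java.util.*;

class UseCase {
    final String name;
    final Set<String> pre;   // facts that must hold before the use case
    final Set<String> post;  // facts that hold afterwards
    UseCase(String name, Set<String> pre, Set<String> post) {
        this.name = name; this.pre = pre; this.post = post;
    }
}

public class PathAssembler {
    /** Enumerate the sequences of at most maxLen use cases whose
     *  preconditions are satisfied by the accumulated postconditions. */
    static void paths(List<UseCase> ucs, Set<String> state,
                      Deque<String> path, int maxLen, List<List<String>> out) {
        if (!path.isEmpty()) out.add(new ArrayList<>(path));
        if (path.size() == maxLen) return;
        for (UseCase uc : ucs) {
            if (state.containsAll(uc.pre)) {     // precondition satisfied
                Set<String> next = new HashSet<>(state);
                next.addAll(uc.post);            // accumulate postconditions
                path.addLast(uc.name);
                paths(ucs, next, path, maxLen, out);
                path.removeLast();
            }
        }
    }

    public static void main(String[] args) {
        List<UseCase> ucs = List.of(
            new UseCase("plan",  Set.of("connected"), Set.of("planned")),
            new UseCase("open",  Set.of("planned"),   Set.of("opened")),
            new UseCase("enter", Set.of("opened"),    Set.of("entered")),
            new UseCase("close", Set.of("opened"),    Set.of("closed")));
        List<List<String>> out = new ArrayList<>();
        paths(ucs, new HashSet<>(Set.of("connected")), new ArrayDeque<>(), 4, out);
        out.forEach(System.out::println);        // pertinent paths of use cases
    }
}
```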

16 © Yves Le Traon 2003
Conclusion
- From early modeling to test cases:
  - From reusable and generic test patterns
  - To concrete test cases, specific to each product
- Two ways of selecting test patterns:
  - manually (qualitative approach)
  - driven by the sequential dependencies of use cases (quantitative approach)

17 © Yves Le Traon 2003
From system-level test patterns to specific test cases: application to product-line architectures

18 © Yves Le Traon 2003
Product line architectures
- A product line: a set of systems which share a common software architecture and a set of reusable components.
- Building a product line aims at developing the common core of a set of products once, and reusing it for all the products.
  - Defining a product family
  - Variants and commonalities
  - Reuse of assets
- For our purpose: specify behavioural test patterns, which become reusable "test assets" of the product line

19 © Yves Le Traon 2003
Product line architectures: a key challenge
- Use case scenarios cannot be used directly for testing
  - Generic and incomplete.
  - Parameters are not known, nor are object instances (scenarios concern roles).
  - They specify the general system functionality without knowing, at that stage, the exact sequence of calls/answers.
- Generating test cases from such test patterns for a given UML specification is thus one of the key challenges in software testing today.

20 © Yves Le Traon 2003
PL variants
- optional, when a component can be present or not,
- alternative, when at a variation point, one and only one component can be chosen among a set of components,
- multiple, when at a variation point, several components can be chosen among a set of components.
- All the variants must appear in the architecture, but not all the possible combinations of variants
- Extracting a product from the global product line architecture: product instantiation (sketched below)
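
As an illustration, product instantiation can be seen as fixing every variation point. The sketch below models the multiple and optional variants of the virtual meeting example; the type and field names are assumptions, not the paper's notation.

```java
import java.util.*;

public class ProductInstantiation {
    enum MeetingKind { STANDARD, DEMOCRATIC, PRIVATE }

    /** A product = one choice per variation point of the product line. */
    record Product(String name,
                   Set<MeetingKind> meetings, // multiple variant: several chosen
                   boolean limited) {}        // optional variant: present or not

    public static void main(String[] args) {
        Product demo = new Product("Demonstration edition",
                EnumSet.of(MeetingKind.STANDARD), true);
        Product enterprise = new Product("Enterprise edition",
                EnumSet.allOf(MeetingKind.class), false);
        System.out.println(demo);
        System.out.println(enterprise);
    }
}
```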

21 © Yves Le Traon 2003
Product line architectures: example
- The Virtual Meeting Server PL offers simplified web conference services:
  - it aims at supporting several kinds of work meetings on a distributed platform (a generalisation of 'chat' software).
  - When connected to the server, a client can enter or exit a meeting, speak, or plan new meetings.
- Three types of meetings
  - standard meetings, where the client who has the floor is designated by a moderator (nominated by the organizer of the meeting)
  - democratic meetings, which are standard meetings where the moderator is a FIFO robot (the first client to ask for permission to speak is the first to speak)
  - private meetings, which are standard meetings with access limited to a defined set of clients.

22 © Yves Le Traon 2003
The Virtual Meeting example
- Connection to the server
- Planning of meetings
- Participation in meetings
- Moderation of meetings
[Figure: Virtual meeting use case diagram. Actors: user, manager, moderator. Use cases: connect, enter, plan, open, close, consult, leave, hand over, speak]

23 © Yves Le Traon 2003
Product line architectures: example
- Due to marketing constraints, the Virtual Meeting PL is derivable into three products
  - a demonstration edition: standard meetings only, limited
  - a personal edition: any type, but limited
  - an enterprise edition: any type, no limitations
- Two variants: meeting type (multiple) and participant limitation (optional)
  - (also OS, languages, interfaces, etc.)

24 © Yves Le Traon 2003
The Virtual Meeting example
- Two main variants:
  - the kinds of meetings available
  - the limitation of the number of participants
- Three products:

| Product               | Variant 1 {multiple}: available meetings | Variant 2 {optional}: meeting limitation |
|-----------------------|------------------------------------------|------------------------------------------|
| Demonstration edition | Standard                                 | true                                     |
| Personal edition      | Standard, private, democratic            | true                                     |
| Enterprise edition    | Standard, private, democratic            | false                                    |

25 © Yves Le Traon 2003
Testing product lines
- Benefiting from the PL specificities
  - Testing commonalities
  - Deriving tests according to the variants
    - Specific tests
    - Reusing tests
- Building test assets
  - Defining tests independently from the products
  - Using generic scenarios
  - Deriving product-specific test cases from those generic scenarios

26 © Yves Le Traon 2003
A contradiction
- Test scenarios must be expressed at a very high level
  - to be reusable
  - to be independent from the variants and the products
- Generic scenarios are too vague and incomplete
  - they cannot be directly used on a specific product
- Is it impossible to reuse generic test scenarios?

27 © Yves Le Traon 2003
Behavioral Test Patterns
- Based on the use case scenarios
  - high level
  - generic
  - product independent
  - nominal or exceptional
- A selection from among the scenarios:
  - An accept scenario
  - Reject scenarios
  - Prefix scenarios

28 © Yves Le Traon 2003
Testing a PL
- A Behavioral Test Pattern (or Test Objective) combines (a data-structure sketch follows):
  - an accept scenario: it expresses the behavior that has to be tested, e.g. the successful exit of a participant from a meeting (the "leave" use case),
  - one or several (optional) reject scenarios: they express the behaviors that are not significant for the tester, e.g. the consult function of a meeting state does not interact with the entering into a meeting,
  - one or several (optional) preamble (or prefix) scenarios that must precede the accept scenario. For example, a meeting must be opened before any participant can enter the virtual meeting.
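
A possible data-structure rendering of the pattern just described: one mandatory accept scenario, plus optional reject and prefix scenarios. This is a sketch with invented names, not the authors' implementation.

```java
import java.util.*;

class Scenario {
    final String name;
    final List<String> messages;  // e.g. "enter(*, x) -> ok"
    Scenario(String name, List<String> messages) {
        this.name = name; this.messages = messages;
    }
}

class BehavioralTestPattern {
    final Scenario accept;         // mandatory: the test objective
    final List<Scenario> rejects;  // optional: behaviors to rule out
    final List<Scenario> prefixes; // optional: initialisation (preamble)

    BehavioralTestPattern(Scenario accept,
                          List<Scenario> rejects, List<Scenario> prefixes) {
        this.accept   = Objects.requireNonNull(accept);
        this.rejects  = List.copyOf(rejects);
        this.prefixes = List.copyOf(prefixes);
    }
}
```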

29 © Yves Le Traon 2003
An example
- S- (reject): x:user -> :Server : enter(*, x) ; reply nok
- Prefix: x:user -> :Server : connect(x) ok ; plan(*, x) ok ; open(*, x) ok
- S- (reject): y:user -> :Server : close(*, y)
- S+ (accept): y:user -> :Server : leave(*, y)

30 © Yves Le Traon 2003
The reject scenarios
- Optional
- Reduce the "noise"
- Avoid calls irrelevant for the test
- Exclude interfering calls

31 © Yves Le Traon 2003
The prefix
- Describes the preamble part of the test case
- Guides the synthesis
- A composition of use-case scenarios
- Scenarios versus object diagram?
[Figures: prefix scenario (x:user -> :Server : connect(x) ok ; plan(*, x) ok ; open(*, x) ok) and an object diagram (server:demoServer with user1..user4:user)]

32 © Yves Le Traon 2003
Typical reject scenarios
- Some scenarios can be added automatically
- Use of a boolean dependency matrix over the use cases Plan, Open, Close, Consult, Enter, Speak, Leave (an X marks two use cases that interact)
- Scenarios independent from the enter use case: added as reject scenarios (see the sketch below)
[Table: the boolean dependency matrix; e.g. Enter interacts with four use cases, Speak with three]
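
A sketch of the automation: every use case marked as independent from the tested one in the matrix becomes a reject scenario. The method below assumes a boolean matrix indexed in use-case order; the exact cell values did not survive the transcript, so none are hard-coded here.

```java
import java.util.*;

public class RejectDerivation {
    static final String[] UC =
        {"Plan", "Open", "Close", "Consult", "Enter", "Speak", "Leave"};

    /** depends[i][j] == true iff use cases i and j interact. */
    static List<String> rejectsFor(String target, boolean[][] depends) {
        int t = Arrays.asList(UC).indexOf(target);
        List<String> rejects = new ArrayList<>();
        for (int j = 0; j < UC.length; j++)
            if (j != t && !depends[t][j])  // independent from the target...
                rejects.add(UC[j]);        // ...added as a reject scenario
        return rejects;
    }
}
```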

33 © Yves Le Traon 2003
Typical reject / prefix scenarios
- Use of the activity diagram
- Accept scenario = the targeted scenario in a use case
- Prefix = the previous scenarios in the path
- Reject = all the scenarios of the use cases that are not involved in the path

34 © Yves Le Traon 2003
Generating test patterns
[Figure: overall flow. Use cases (UC1, UC2, each with nominal and exceptional scenarios) feed, by manual or automated selection, the test pattern specification (test objective): accept scenario, reject scenarios (optional), prefix scenarios (optional). Together with the general design (main classes, interfaces...), the detailed design and product instantiation (P1, P2, P3), the test patterns TP1, TP2 drive the test case synthesis; evolution loops back into the models.]

35 © Yves Le Traon 2003
[Figure]

36 Compiling the Test Pattern
- Inputs from UML:
  - The detailed class diagram with, as far as possible, a statechart per active class of the system
  - An initial object diagram
  - The test pattern
  - The dynamic aspects are provided to TGV in the form of an API
- Output:
  - A detailed UML scenario describing all the precise calls to make on the system, and the expected verdicts, to observe the behavior specified in the pattern

37 © Yves Le Traon 2003
Compiling the Test Pattern
- accept+ = sequential composition of the prefix and the accept scenario
- Scenarios making up the test case =
  - accepted by accept+
  - rejected by none of the reject scenarios
- accept+ -> LTS S+
- reject scenarios {seq_j^-}, j in J -> LTSs {S_j^-}, j in J
- Test pattern -> LTS S+ ∩ ⋂_{j in J} ¬S_j^- (the traces of S+ accepted by no S_j^-)
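
The selection rule reads as a predicate on traces: keep a trace iff accept+ accepts it and no reject scenario does. The sketch below stands in for the real LTS machinery (TGV operates on labelled transition systems, not string lists) and uses a toy subsequence check as "acceptance".

```java
import java.util.List;

public class PatternFilter {
    /** Toy acceptance: the scenario occurs as a subsequence of the trace. */
    static boolean matches(List<String> trace, List<String> scenario) {
        int i = 0;
        for (String event : trace)
            if (i < scenario.size() && event.equals(scenario.get(i))) i++;
        return i == scenario.size();
    }

    /** A trace belongs to the test case iff accept+ matches it
     *  and none of the reject scenarios does. */
    static boolean selected(List<String> trace, List<String> acceptPlus,
                            List<List<String>> rejects) {
        if (!matches(trace, acceptPlus)) return false;
        for (List<String> reject : rejects)
            if (matches(trace, reject)) return false;
        return true;
    }
}
```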

38 © Yves Le Traon 2003
Synthesis of the test case
- Inputs of TGV:
  - Simulation API
  - LTS representing the test pattern
  - Which actions are internal?
  - Which actions are inputs? outputs?
- Output of TGV: an IOLTS representing a test case
- UML test case derivation

39 © Yves Le Traon 2003
Product line architectures: example
[Figures: (a) non-limited meetings, (b) limited meetings]

40 © Yves Le Traon 2003
An example
- S- (reject): x:user -> :Server : enter(*, x) ; reply nok
- Prefix: x:user -> :Server : connect(x) ok ; plan(*, x) ok ; open(*, x) ok
- S- (reject): y:user -> :Server : close(*, y)
- S+ (accept): y:user -> :Server : leave(*, y)

41 © Yves Le Traon 2003
Test patterns and test cases
- Objects: server:demoServer, user1..user4:user
- Preamble: connect(user1) ok ; plan(aMtg, user1) ok ; open(aMtg, user1) ok ; enter(aMtg, user1) ok ; enter(aMtg, user2) ok ; enter(aMtg, user3) ok
- Test objective: enter(aMtg, user4) nok

42 © Yves Le Traon 2003
Conclusion
- From early modeling to test cases:
  - From reusable and generic test patterns
  - To concrete test cases, specific to each product
- Methodology "not fully automated" ...

43 © Yves Le Traon 2003
Testability analysis of a UML class diagram

44 © Yves Le Traon 2003
Testability?
- A quality factor
  - The property of software to be easily tested
- A testability measurement
  - To estimate the testing effort
  - early in the software life-cycle
  - and hopefully make the design closer to a correct implementation

45 © Yves Le Traon 2003
Objectives
- Focus on OO-specific testing problems
  - In OO software, control is spread all over the design
  - Object interactions
- The UML class diagram is the main specification/design reference
  - The test criterion can be defined with respect to this reference

46 © Yves Le Traon 2003
Objectives
- The class diagram is often under-specified from a testing point of view
  - Many potential object interactions will never occur in the final software
- Make the class diagram more complete, to avoid hard-to-test interactions between objects

47-51 © Yves Le Traon 2003
Example
[Figures: a class-diagram example, developed over five slides]

52 © Yves Le Traon 2003
Informal analysis
- Two potential undesirable interactions:
  - Self usage

53 © Yves Le Traon 2003
Informal analysis
- Concurrent usage

54 © Yves Le Traon 2003
Capturing anti-patterns
- Interaction
- Self-usage
[Figures: class-diagram patterns over classes C, D, D', E with «use» (U) links and * multiplicities]

55 © Yves Le Traon 2003
Informal analysis
- Inheritance complexity
  - A, B, (B2, B21), C || A, C
  - A, B2, (B, B21), C || A, C
  - A, B21, (B, B2), C || A, C
- Complexity = 6

56 © Yves Le Traon 2003
Detecting testability issues
- Test criterion: cover those interactions
- Derive a graph from a UML class diagram
  - To identify hard-to-test interactions
  - To capture the complexity of interactions due to polymorphism

57-59 © Yves Le Traon 2003
Detecting testability issues
[Figures: the graph derived from the class diagram, highlighted step by step; nodes include C, B, BS, CS, DP, IP, Co, NCo, BICQ, BAIM, AIMDP, ICQDP, AIMIP, ICQIP]

60 © Yves Le Traon 2003
Improving design testability
- Clarifying the design, so that the code can be as close as possible to what the designer wants
- When possible:
  - Refactoring for testability: use interface classes ("empty" from an execution point of view)
  - Use of dedicated stereotypes
  - Specify the roles of the relationships in terms of creation, consultation, modification

61 © Yves Le Traon 2003
Refactoring for testability
- Example: transforming a class into an interface
[Figure: A -> B becomes A -> «interface» B]

62 © Yves Le Traon 2003
Improving design testability
- «create»: from class a to class b means that objects of type a call the creation method on objects of type b.
  - If no «use» stereotype is attached to the same link, only the creation method can be called.
- «use»: from class a to class b means that objects of type a can call any method, excluding the create one, on objects of type b.
  - It may be refined into the following stereotypes:

63 © Yves Le Traon 2003
Improving design testability
- «use_consult»: a specialization of the «use» stereotype where the called methods never modify attributes of the objects of type b.
- «use_def»: a specialization of the «use» stereotype where at least one of the called methods may modify attributes of the objects of type b.
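
At code level the distinction is whether the client can reach a state-modifying method of b. A minimal sketch with invented names:

```java
class B {
    private int attribute;
    int query()            { return attribute; }  // never modifies b's state
    void modify(int value) { attribute = value; } // modifies b's state
}

class ConsultingClient {   // a --«use_consult»--> b
    int read(B b) { return b.query(); }           // query methods only
}

class DefiningClient {     // a --«use_def»--> b
    void write(B b) { b.modify(42); }             // at least one modifying call
}
```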

64 © Yves Le Traon 2003
Improving design testability
- Anti-pattern iff:
  - e1 is the entry edge of end(P1), e2 is the entry edge of end(P2)
- In practice, most interactions may be deleted by adding the right stereotypes
[Figure: paths P1 and P2 from C to D, with e1 stereotyped «use_def» and e2 stereotyped «use»]

65-68 © Yves Le Traon 2003
Use of dedicated stereotypes
[Figures: the example class diagram progressively annotated with the stereotypes, e.g. «create»]

69 © Yves Le Traon 2003
Use of dedicated stereotypes
- Automating stereotype insertion for design patterns
  - Abstract model for design patterns
  - Meta-level specification
  - Describe testability constraints at the abstract level with UML collaborations
[Figure: collaboration roles /Factory -> /Product with «create»]

70 © Yves Le Traon 2003
More preciseness
[Figure: the "Abstract Factory" pattern; Factory -> Product with «create»]

71 © Yves Le Traon 2003
Improving design testability
- Feasibility
  - verification of a «use_consult» from a to b consists in verifying that:
    - a only calls query methods of b,
    - b's query methods never modify b's state (directly, or indirectly through the call of non-query methods).

72 © Yves Le Traon 2003
Conclusion
- Specific OO testability issues
- Test criterion
- Graph model
- Preciseness, or refactoring for a more testable design

73 © Yves Le Traon 2003
Testability of OO software and design by contract: robustness and diagnosability

74 © Yves Le Traon 2003
Design-by-contract™
- A design philosophy (B. Meyer)
- A component-based OO approach
- A component
  - is not responsible for the consistency of its inputs (caller responsibility); it may refuse to work if the caller breaks the contract
  - is responsible for its result
- The specification is derived into executable contracts

75 © Yves Le Traon 2003
Design-by-contract™
- Boolean assertions:
  - pre- and postconditions for offered services
  - invariants for general consistency properties
- A failure to meet the contract terms indicates the presence of a fault:
  - precondition violation -> the client broke the contract
  - postcondition violation -> a bug in the routine

76 © Yves Le Traon 2003
Design-by-contract™: Example
BankAccount {inv: balance >= overdraft}
  balance: Sum
  overdraft: Sum
  deposit(amount: Sum)
    {pre: amount > 0}
    {post: balance = balance@pre + amount}
  withdraw(amount: Sum)
    {pre: amount > 0 and amount <= balance - overdraft}
    {post: balance = balance@pre - amount}
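
Java has no native design-by-contract support, so a hand-rolled rendering of these contracts might look like the sketch below. Sum is modeled as long and ContractViolation is an invented helper; this is an illustration, not the Eiffel-style original.

```java
class ContractViolation extends RuntimeException {
    ContractViolation(String msg) { super(msg); }
}

class BankAccount {
    private long balance;
    private final long overdraft; // lower bound on balance (e.g. -500)

    BankAccount(long overdraft) { this.overdraft = overdraft; invariant(); }

    private void invariant() {
        if (balance < overdraft)
            throw new ContractViolation("inv: balance >= overdraft");
    }

    void deposit(long amount) {
        if (amount <= 0) throw new ContractViolation("pre: amount > 0");
        long old = balance;                       // balance@pre
        balance += amount;
        if (balance != old + amount)              // trivial here, but this is
            throw new ContractViolation("post");  // where the postcondition sits
        invariant();
    }

    void withdraw(long amount) {
        if (amount <= 0 || amount > balance - overdraft)
            throw new ContractViolation(
                "pre: amount > 0 and amount <= balance - overdraft");
        long old = balance;
        balance -= amount;
        if (balance != old - amount) throw new ContractViolation("post");
        invariant();
    }
}
```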

77 © Yves Le Traon 2003
Precondition: burden on the client
- A specification of what must be true for a client to be allowed to call a method
- Example: amount > 0

78 © Yves Le Traon 2003
Postcondition: burden on the implementor
- A specification of what must be true at completion of any successful call to a method
- Example: balance = balance@pre + amount

79 © Yves Le Traon 2003
The objectives
- A quantitative estimate of what contracts really improve in the software
  - trade-off between the cost of design by contract and the quality of the software
- We propose two measures
  - robustness
  - diagnosability

80 © Yves Le Traon 2003
Robustness
- The degree to which the software is able to recover from internal faults that would otherwise have provoked a failure.

81 © Yves Le Traon 2003
Robustness
- Local robustness depends on contract quality
- Combination is better than addition
- Global robustness
[Figure: components A, B, C with contracts]

82 © Yves Le Traon 2003
Robustness: test dependency
- A component plugged into a system has its robustness enhanced by its clients' contracts -> test dependency
[Figure: component A with components B, C, D carrying contracts]

83 © Yves Le Traon 2003
Robustness
- Local robustness
  - the probability that a component detects internal faults, or faults in other components of the system
- Global robustness
  - the probability that a fault is detected by any component

84 © Yves Le Traon 2003
Robustness: measures
- Axiomatisation
  - express clear properties about this measure
  - identify the main features for robustness computation
- Experimental calibration of the model, using mutation analysis to:
  - estimate the local robustness of components
  - estimate the impact of a coupling on the global robustness

85 © Yves Le Traon 2003
- Each component is assigned a robustness: Rob_ij is the probability that the contracts of component i detect a fault in component j
[Figure: components C1..C5]

86 © Yves Le Traon 2003
Robustness: measures
- We distinguish two other features to compute the robustness:
  - the robustness of a component embedded in a system (RobInS)
  - the probability that the failure comes from C_i: Prob_failure(i) = 1/#stat

87 © Yves Le Traon 2003
Robustness: measures
- Computing local and global robustness
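
The formulas on this slide did not survive the transcript. As a hedged guess at their shape only: combining the pairwise Rob_ij with the "combination is better than addition" rule (one minus the product of the miss probabilities) and a statement-proportional failure probability would give something like:

```java
public class Robustness {
    /** rob[i][j]: probability that the contracts of component i detect a
     *  fault in component j; statements[j]: #stat of component j.
     *  This formula is an assumption, not the paper's published measure. */
    static double global(double[][] rob, int[] statements) {
        int totalStat = 0;
        for (int s : statements) totalStat += s;
        double g = 0.0;
        for (int j = 0; j < rob.length; j++) {
            double miss = 1.0;
            for (int i = 0; i < rob.length; i++)
                miss *= 1.0 - rob[i][j];            // every component misses
            double probFaultInJ = (double) statements[j] / totalStat;
            g += probFaultInJ * (1.0 - miss);       // detected by at least one
        }
        return g;
    }
}
```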

88 © Yves Le Traon 2003
Robustness: measures
[Figure: mutation analysis loop. Component A -> mutant generation (mutantA1..mutantA6) -> test execution. If mutantAj is killed, the error is detected: test OK. If mutantAj stays alive, the error is not detected: (1) enhance the test, (2) add contracts to the specification.]
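
The loop in the figure, as a sketch; Mutant, TestSuite and killedBy are placeholders for whatever mutation tool is used, not a real framework API.

```java
import java.util.List;

interface TestSuite { }
interface Mutant { boolean killedBy(TestSuite tests); }

class MutationCalibration {
    /** Run the tests on each mutant and measure the kill ratio; surviving
     *  mutants call for enhanced tests or additional contracts, after
     *  which the loop is re-run. */
    static double killRatio(List<Mutant> mutants, TestSuite tests) {
        long killed = mutants.stream().filter(m -> m.killedBy(tests)).count();
        return (double) killed / mutants.size();
    }
}
```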

89 © Yves Le Traon 2003
Robustness: results

|                                              | Minimum | Maximum | Average |
|----------------------------------------------|---------|---------|---------|
| % mutants killed (initial contracts)         | 17%     | 83%     | 58.5%   |
| % mutants killed after contracts improvement | 72%     | 100%    | 87.5%   |

|                                                     | Min. | Max. | Average |
|-----------------------------------------------------|------|------|---------|
| % mutants of provider killed by client's contracts  | 50%  | 84%  | 69%     |

90 © Yves Le Traon 2003
Robustness: results
[Figure]

91 © Yves Le Traon 2003
Robustness: conclusion
- About the results:
  - no contracts -> the system is not robust
  - robustness improves rapidly with a few contracts
  - very high robustness is very expensive: almost 40% more contracts to go from 80% to 100% robustness
- Estimate the quantity of contracts needed for a given robustness level

92 © Yves Le Traon 2003
Diagnosability
- Diagnosability expresses the effort needed to localize a fault, as well as the preciseness allowed by a test strategy on a given system

93 © Yves Le Traon 2003
Diagnosability: the help of contracts
[Figure: diagnosis scope in classical software]

94 © Yves Le Traon 2003
Diagnosability: the help of contracts
[Figure: diagnosis scope in design-by-contract software, with exception treatment (diagnosis and default mode)]

95 © Yves Le Traon 2003
Diagnosability: measures
- Main features:
  - Absorption coefficient (written alpha here): if the first executed contract has a probability p of detecting a fault, the second has only alpha.p probability of detecting it, the third alpha^2.p
  - Indistinguishability set (IS): the probability that contract j detects a faulty statement in IS_i...

96 © Yves Le Traon 2003
Diagnosability: measures
- Assumptions:
  - the contracts are uniformly distributed in the flow: each IS has the same size ISsize (= #stat div #contracts),
  - the closer a contract is to the faulty statement i, the more probable it is that it detects the fault,
  - the contracts have an equal probability p of detecting a fault coming from the statements they directly follow,
  - each statement has the same probability of being faulty, equal to 1/#stat
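
Under these assumptions, the chance that some contract eventually detects the fault can be sketched as follows, reading the absorption model as: the k-th contract executed after the faulty statement fires with probability alpha^(k-1) * p. The slide's exact formula is lost, so this is a hedged reconstruction.

```java
public class Absorption {
    /** Probability that at least one of the first n contracts executed
     *  after the faulty statement detects the fault. */
    static double detectionProbability(double p, double alpha, int n) {
        double miss = 1.0, factor = p;   // factor = alpha^(k-1) * p
        for (int k = 0; k < n; k++) {
            miss *= 1.0 - factor;        // contract k+1 misses the fault
            factor *= alpha;             // absorption weakens the next one
        }
        return 1.0 - miss;
    }
}
```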

97 © Yves Le Traon 2003
Diagnosability: measures
- Computing diagnosability

98 © Yves Le Traon 2003
Diagnosability: results
[Plot: diagnosability (0 to 1000) as a function of contracts/assertions density (0 to 1), one curve per contracts/assertions efficiency value (0.2, 0.4, 0.6, 0.8)]

99 © Yves Le Traon 2003
Diagnosability: conclusion
- Diagnosability improves very rapidly when using contracts
- The quantity of contracts matters less than their quality for improving diagnosability

100 © Yves Le Traon 2003
Conclusion
- Definition of measures for two quality factors
- Experimental studies have been conducted
- The measures estimate the contribution of contract quality and density
- The quality of contracts is more important than their quantity

