
1 Development of Software Testing Ontology and Application to Test Automation
Prof. Hong Zhu, Department of Computing and Electronics, Oxford Brookes University, Oxford OX33 1HX, UK

2 Acknowledgement
- Mr. Yufeng Zhang, MSc and PhD student at the National University of Defence Technology, China
- Mr. Qingning Huo, PhD student at Oxford Brookes University, UK
- Dr. Sue Greenwood, Oxford Brookes University, UK

3 Context: Web Services
Web services are a distributed computing technique that offers more flexibility and looser coupling, built on the internet and web infrastructure. They are the dominant technique for program-to-program interactions.
- The components (service providers, requesters, registry) are:
  - Autonomous: they control their own resources and their own behaviours
  - Active: their execution is not triggered by messages
  - Persistent: they are computational entities that last a long time
- Interactions between components:
  - Social ability: components discover each other and establish interactions at runtime
  - Collaboration: as opposed to control, a component may refuse a service, follow a complicated protocol, etc.

4 WS Technique Stack
Basic standards:
- WSDL: for service description and publication
- UDDI: for service registration and retrieval
- SOAP: for service invocation and delivery
More advanced standards for collaborations between service providers and requesters:
- BPEL4WS: business process and workflow models
- OWL-S: ontology for the description of the semantics of services
[Diagram: the provider registers services with the registry; the requester searches the registry for registered services, requests a service from the provider, and the provider delivers the service.]

5 A Typical Scenario: Car Insurance Broker
[Diagram: end users access a GUI interface to the Car Insurance Broker's (CIB's) services; the CIB's service requester connects, via a WS registry, to Bank B's services and to the services of insurers A1, A2, ..., An; other service users connect likewise.]
- The bank's services could be statically integrated.
- The insurers' services should be dynamically integrated, for business flexibility and competitiveness, and lower operation and maintenance cost.

6 Challenges to Testing WS
- Testing own-side services:
  - Mostly similar to testing software components
  - Some special issues; much work has been reported
- Testing other-side services: some similarity to component testing. The differences are:
  - Lack of software artifacts
  - Lack of control over test executions
  - Lack of means of observation on system behaviour
- Testing service composition:
  - Static composition: mostly similar to integration testing
  - Dynamic composition: most challenging, because of
    - The need to deal with diversity
    - The need of testing on-the-fly
    - The need of non-intrusive testing
    - The need of full automation

7 The Proposed Approach
A WS should be accompanied by a testing service:
- Functional services: the services of the original functionality
- Testing services: the services that enable testing of the functional services
Testing services can be provided either:
- By the same vendor as the functional services, or
- By a third party
Independent testing services:
- Providers: testing tool vendors, companies specialized in software testing
- The services: to generate test cases, to measure test adequacy, to extract various types of diagrams from source code or from design and specification documents, etc.

8 Architecture

9 Illustration in the Typical Scenario

10 How Does the System Work? The Scenario
Suppose the car insurance broker wants to search for web services of insurers and to test each web service before making quotes for its customers.
[Diagram: the customer sends information about the car and the user to the Car Insurance Broker (CIB) and receives insurance quotes; the CIB tests its integration with the Insurer Web Service (IS).]

11 Collaboration Process in the Typical Scenario

12 Automating Test Services
The key technical issues:
- How to describe, publish and register test services at a WS registry;
- How to retrieve test services automatically for testing dynamically composed services;
- How to invoke test services by both a human tester and a program;
- How to report test results in forms that are suitable both for human beings to read and for machines to understand.
These issues can be resolved by the utilization of a software testing ontology.

13 STOWS: Software Testing Ontology for WS
STOWS is based on an ontology of software testing originally developed for agent-oriented software testing (Zhu & Huo 2003, 2005).
- The concepts of software testing are represented as classes
- Knowledge about software testing is represented as relations between concepts

14 Basic Concepts of Software Testing
- Tester: who carries out a testing activity.
- Activity: the actions performed in a testing process, e.g. test planning, test case generation, test execution, result validation, adequacy measurement and test report generation.
- Artifact: the entities used and/or produced by a testing activity, e.g. files, data, program code and documents.
- Location: where an artifact resides, expressed by a URL or a URI.
- Format: the format in which data are presented.
- Method: the method used to perform a test activity. Test methods can be classified in a number of different ways.
- Context: the context in which a testing activity is performed, e.g. the software development stage, to achieve various testing purposes.
- Environment: the hardware and software configurations in which a testing activity is to be performed.

15 Structure of Basic Concepts: Examples
- Test Activity: test planning, test case generation, test execution, result validation, adequacy measurement, report generation
- Tester: atomic service, composite service
A sketch of these two hierarchies in code follows below.
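The following is a minimal illustration, not part of STOWS itself (which defines these as OWL classes), of how the two example hierarchies could be rendered as types. It uses modern Java for brevity, whereas the 2011 prototype ran on JDK 1.5.

```java
import java.util.List;

// The example subclasses of Test Activity from the slide.
enum TestActivity {
    TEST_PLANNING, TEST_CASE_GENERATION, TEST_EXECUTION,
    RESULT_VALIDATION, ADEQUACY_MEASUREMENT, REPORT_GENERATION
}

// A tester is either a single (atomic) web service or a composition of testers.
sealed interface Tester permits AtomicService, CompositeService {}
record AtomicService(String serviceUri) implements Tester {}
record CompositeService(List<Tester> parts) implements Tester {}
```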

16 Compound Concepts
Capability: describes what a tester can do:
- the activities that the tester can perform
- the context in which to perform the activities
- the testing method used
- the environment in which to perform the testing
- the required resources (i.e. the input artefacts)
- the output that the tester can generate
[Diagram: Capability is composed of Activity, Method, Artefact (capability data), Context and Environment.]

17 Task: describes what testing service is requested:
- a testing activity to be performed
- how the activity is to be performed:
  - the context
  - the testing method to be used
  - the environment in which the activity must be carried out
  - the available resources
  - the expected outcomes
A data-model sketch of Capability and Task follows.
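To make the two compound concepts concrete, here is a minimal data-model sketch. The field names are illustrative assumptions mirroring the bullet points above; in STOWS these are OWL classes, not Java types, and again modern Java is used for brevity.

```java
import java.util.List;

// Capability: what a tester can do (illustrative sketch, field names assumed).
record Capability(
        List<String> activities, // testing activities the tester can perform
        String method,           // testing method used
        String context,          // context in which the activities are performed
        String environment,      // hardware/software configuration required
        List<String> inputs,     // required resources (input artefacts)
        List<String> outputs     // artefacts the tester can generate
) {}

// Task: what testing service is requested (illustrative sketch).
record Task(
        String activity,         // the testing activity to be performed
        String method,           // the testing method to be used
        String context,          // the context of the activity
        String environment,      // environment the activity must be carried out in
        List<String> inputs,     // available resources
        List<String> outputs     // expected outcomes
) {}
```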

18 Relations Between Concepts
Relationships between concepts are a very important part of the knowledge of software testing:
- Subsumption relation between testing methods
- Compatibility between artefact formats
- Enhancement relation between environments
- Inclusion relation between test activities
- Temporal ordering between test activities
How such knowledge is used:
- Instances of basic relations are stored in a knowledge-base as basic facts (see the sketch below)
- They are used by the testing broker to search for test services through compound relations
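A hedged sketch of such a fact store, assuming each basic relation instance is a simple triple of strings; the prototype's actual knowledge-base representation is not shown in the slides.

```java
import java.util.HashSet;
import java.util.Set;

// Basic relation instances stored as facts: (relation name, from, to).
class KnowledgeBase {
    record Fact(String relation, String from, String to) {}

    private final Set<Fact> facts = new HashSet<>();

    void assertFact(String relation, String from, String to) {
        facts.add(new Fact(relation, from, to));
    }

    boolean holds(String relation, String from, String to) {
        return facts.contains(new Fact(relation, from, to));
    }
}
```

For example, kb.assertFact("Subsumes", "decision coverage", "statement coverage") would record one subsumption fact between two testing methods (the method names here are invented examples).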

19 Compound Relations
- MorePowerful relation, between two capabilities: MorePowerful(c1, c2) means that a tester having capability c1 implies that the tester can do all the tasks that can be done by a tester who has capability c2.
- Contains relation, between two tasks: Contains(t1, t2) means that accomplishing task t1 implies accomplishing t2.
- Matches relation, between a capability and a task: Matches(c, t) means that a tester with capability c can fulfil the task t.
[Diagram: UML class model relating Tester, Capability and Task through the MorePowerful, Contains and Matches associations.]

20 Definition of the MorePowerful Relation
A capability C1 is more powerful than C2, written MorePowerful(C1, C2), if and only if:
- C2's activities are included in C1's activities;
- C1 and C2 have the same context;
- the environment of C1 is an enhancement of the environment of C2;
- the method of C2 is subsumed by the method of C1;
- for each input artefact of C1, there is a corresponding compatible input artefact of C2;
- for each output artefact of C2, there is a corresponding compatible output artefact of C1.

21 Definition of the Contains Relation
A task T1 contains task T2, written Contains(T1, T2), if and only if:
- T1 and T2 have the same context;
- T1's activities include T2's activities;
- the method of T1 subsumes the method of T2;
- the environment of T2 is an enhancement of the environment of T1;
- for each input artefact of T1, there is a corresponding compatible input artefact of T2;
- for each output artefact of T2, there is a corresponding compatible output artefact of T1.

22 Definition of the Matches Relation
A capability C matches a task T, written Matches(C, T), if and only if:
- C and T have the same context;
- C's activities include T's activity;
- the method of C subsumes the method of T;
- the environment of T is an enhancement of the environment of C;
- for each input artefact of T, there is a corresponding compatible input artefact of C;
- for each output artefact of C, there is a corresponding compatible output artefact of T.
An operational sketch of this definition is given below.
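Reading the definition operationally over the Capability and Task sketches from earlier, Matches can be rendered as a predicate. The helper predicates subsumes, enhances and compatible stand for lookups of basic facts in the knowledge-base; the equality-based bodies below are placeholder assumptions, not the prototype's reasoning.

```java
class RelationChecker {

    // Placeholder knowledge-base lookups; in the prototype these would be
    // answered from stored basic facts about methods, environments and formats.
    boolean subsumes(String method1, String method2) { return method1.equals(method2); }
    boolean enhances(String env1, String env2)       { return env1.equals(env2); }
    boolean compatible(String art1, String art2)     { return art1.equals(art2); }

    /** Matches(c, t): a tester with capability c can fulfil task t. */
    boolean matches(Capability c, Task t) {
        return c.context().equals(t.context())            // same context
            && c.activities().contains(t.activity())      // C's activities include T's activity
            && subsumes(c.method(), t.method())           // C's method subsumes T's method
            && enhances(t.environment(), c.environment()) // T's environment enhances C's
            // each input artefact of T has a compatible input artefact of C
            && t.inputs().stream().allMatch(
                   ti -> c.inputs().stream().anyMatch(ci -> compatible(ti, ci)))
            // each output artefact of C has a compatible output artefact of T
            && c.outputs().stream().allMatch(
                   co -> t.outputs().stream().anyMatch(to -> compatible(co, to)));
    }
}
```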

23 Properties of the Compound Relations
(1) The relations MorePowerful and Contains are reflexive and transitive.
(2) ∀c1, c2 ∈ Capability, ∀t ∈ Task: MorePowerful(c1, c2) ∧ Matches(c2, t) ⇒ Matches(c1, t).
(3) ∀c ∈ Capability, ∀t1, t2 ∈ Task: Contains(t1, t2) ∧ Matches(c, t1) ⇒ Matches(c, t2).

24 Prototype Implementation
Representation of STOWS in OWL:
- Both basic and compound concepts are classes in OWL and represented as XML data definitions
Use of STOWS in Semantic Web Services:
- Compound concepts represented in OWL are transformed into OWL-S Service Profiles for registration, discovery and invocation
- UDDI/OWL-S registry server: using the OWL-S/UDDI Matchmaker
The environment: Windows XP, Intel Core Duo CPU 2.16GHz, JDK 1.5, Tomcat 5.5 and MySQL 5.0.

25 Transformation of STOWS into OWL-S
A Capability is mapped onto an OWL-S service profile as follows:
- Activity → ServiceCategory
- Context → the ContextMark input parameter
- Environment → the EnvironmentMark input parameter
- Method → the MethodMark input parameter
- Input artefacts → the remaining INPUT PARAMETERS
- Output artefacts → the OUTPUT PARAMETERS
A sketch of this flattening appears below.
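A hedged sketch of the transformation, representing the service profile as plain maps rather than through a real OWL-S API. The marker names (ContextMark, EnvironmentMark, MethodMark) follow the slide, while the key=value encoding of the marked parameters is an assumption made purely for illustration.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

class ProfileBuilder {

    /** Flatten a Capability into a service-profile-like structure. */
    static Map<String, List<String>> toServiceProfile(Capability c) {
        Map<String, List<String>> profile = new LinkedHashMap<>();

        // Activity -> ServiceCategory
        profile.put("ServiceCategory", c.activities());

        // Context, Environment and Method become specially marked input
        // parameters, followed by the ordinary input artefacts.
        List<String> inputs = new ArrayList<>();
        inputs.add("ContextMark=" + c.context());
        inputs.add("EnvironmentMark=" + c.environment());
        inputs.add("MethodMark=" + c.method());
        inputs.addAll(c.inputs());
        profile.put("INPUT PARAMETERS", inputs);

        // Output artefacts map directly to the output parameters.
        profile.put("OUTPUT PARAMETERS", c.outputs());
        return profile;
    }
}
```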

26 Ontology Management: Motivation
All the terms used in capability descriptions for test service registration, discovery and invocation must first be defined in the ontology. However, it is impossible to build a complete ontology of software testing, because of:
- the huge volume of software testing knowledge
- the rapid development of new testing techniques, methods and tools
Therefore, the ontology must be extendable and open to the public for updating:
- Implement a framework, rather than a complete and fixed ontology
- Provide an ontology management mechanism to enable the population of the ontology

27 The Ontology Management Mechanism
It provides three services to users:
- AddClass: to add a new concept
- DeleteClass: to delete a concept
- UpdateClass: to revise a concept of the ontology
Restrictions on the manipulation of the data model:
- Authority Checker:
  - elementary classes form the framework of the ontology STOWS; none of them can be pruned
  - extended classes are attached to the elementary classes to define new concepts
  - instances of the concepts are added by the users and can be deleted from the hierarchy
- Conflict Checker:
  - the new class to be added must not already exist in the ontology
  - the class to be deleted must have no subclasses in the hierarchy
A sketch of these checks follows.
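An illustrative sketch of the two checkers, under the assumption that the ontology is held as a set of class names with a subclass map. The class names listed as elementary are taken from the basic-concepts slide; none of this is the prototype's actual code.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

class OntologyManager {

    // Elementary STOWS classes form the fixed framework (names taken from the
    // basic-concepts slide) and may never be deleted.
    static final Set<String> ELEMENTARY = Set.of(
            "Tester", "Activity", "Artifact", "Location",
            "Format", "Method", "Context", "Environment");

    private final Set<String> classes = new HashSet<>(ELEMENTARY);
    private final Map<String, Set<String>> subclasses = new HashMap<>();

    /** Conflict check for AddClass: the new class must not already exist. */
    boolean canAdd(String cls) {
        return !classes.contains(cls);
    }

    /** Authority and conflict checks for DeleteClass. */
    boolean canDelete(String cls) {
        return classes.contains(cls)
            && !ELEMENTARY.contains(cls)                         // authority check
            && subclasses.getOrDefault(cls, Set.of()).isEmpty(); // no subclasses
    }
}
```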

28 Structure of OMS

29 Test Brokers
A test broker is a test service that composes existing test services. It will:
- Decompose test tasks into subtasks
- Search for test services to carry out the subtasks
- Select test services from the candidates
- Coordinate the selected test services:
  - invoke them in the right order
  - pass data between them
  - collect test results, etc.
The broker is itself a test service as well, and there may be multiple test brokers owned by different vendors. A sketch of the coordination loop is given below.
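A minimal sketch of the coordination loop, assuming the task has already been decomposed into an ordered list of subtasks (in the prototype this draws on test plan templates in the knowledge-base). TestService and Registry are stand-in interfaces, and the take-the-first selection policy is an assumption.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Stand-in interfaces for registered test services and the matchmaking registry.
interface TestService {
    Map<String, Object> invoke(Map<String, Object> inputs);
}

interface Registry {
    List<TestService> search(Task subtask); // match the subtask against capabilities
}

class TestBroker {

    Map<String, Object> run(List<Task> subtasks, Registry registry,
                            Map<String, Object> initialData) {
        Map<String, Object> data = new HashMap<>(initialData);
        for (Task subtask : subtasks) {
            // Search for candidate testers, then select one of them.
            List<TestService> candidates = registry.search(subtask);
            if (candidates.isEmpty())
                throw new IllegalStateException("no tester found for subtask");
            TestService selected = candidates.get(0); // simplest selection policy

            // Invoke the tester and thread its outputs into the shared data,
            // so later subtasks can consume earlier results.
            data.putAll(selected.invoke(data));
        }
        return data; // the collected test results
    }
}
```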

30 Architecture of the Prototype Test Broker
We have developed a prototype test broker to demonstrate the feasibility of the approach.

31 Test broker process model

32 A Running Example
- CIB: Car Insurance Broker
- CIQS: Car Insurance Quote Service, the WS of the PingAn Insurance Company in China
- TCG: Test Case Generator
- TCE: Test Case Executor for CIQS
[Diagram: the CIB requests testing of the CIQS from the test broker; the broker searches for testers via the matchmaker, with which the testers are registered, and invokes the selected testers.]

33 Case Study: Dealing with Diversity
Aim: to evaluate the capability of dealing with diversity.
Method: to wrap a wide range of software tools into test services:
- CASCAT: a CASOCC-based test case generation tool
- Test Case Format Translator: translates the test cases generated by CASCAT into the format recognizable by the Calculator Test Case Executor
- Test Case Executor: executes test cases for a numeric calculator web service
- Klee: generates and executes test cases from C source code by symbolic execution
- Magic: checks conformance between component specifications and their implementations
- XML Comparator: compares XML files
- Java NCSS: measures two standard metrics for Java programs
- Findbugs: finds bugs in Java programs by static analysis
- PMD: a static analysis tool for finding potential bugs and other problems in Java source code
- WSDL Based Test Case Generator*: a WSDL based test case generation tool
- Web Service Test Case Executor*: executes the test cases generated by the WSDL Based Test Case Generator

34 Experiment 1: Dealing with Subtle Differences
Aim: to test the system's capability of accurately choosing an appropriate tester from candidates with subtle differences.
Method: application of the data mutation testing technique:
- Mutation operators: transformations of data, in this case of service profiles (4 types)
- Seeds: a set of original service profiles (11 seeds)
- Mutants: service profiles generated from the seeds by applying the mutation operators (167 mutants)
A hypothetical operator is sketched below.
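To illustrate the idea, here is a sketch of one invented mutation operator over the Capability sketch from earlier: it swaps the method field of a seed profile for a related method, producing mutants that differ from the seed only subtly. The four operator types actually used in the experiment are not detailed in the slides, so this operator is purely illustrative.

```java
import java.util.ArrayList;
import java.util.List;

// Invented mutation operator: derive mutants from a seed capability by
// replacing its testing method with each alternative method in turn.
class MethodSwapOperator {

    List<Capability> mutate(Capability seed, List<String> alternativeMethods) {
        List<Capability> mutants = new ArrayList<>();
        for (String method : alternativeMethods) {
            if (method.equals(seed.method())) continue; // mutants must differ from the seed
            mutants.add(new Capability(seed.activities(), method, seed.context(),
                    seed.environment(), seed.inputs(), seed.outputs()));
        }
        return mutants;
    }
}
```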

35 Experiment 2: Scalability
Aim: to evaluate the scalability of test brokers in terms of their efficiency in dealing with test problems of practical sizes. Problem sizes are measured in terms of:
- The number of testers in the registry
- The size of the knowledge-base in the test broker
- The complexity of the test task requested
Method:
- To run the system a number of times
- To calculate the average lengths of execution time spent by the various modules of the test broker

36 Experiment Results: The Effect of the Number of Testers
The average search time increases almost linearly with the number of testers in the registry.

37 The Effect of the Knowledge-Base Size
As the size of the knowledge-base (in terms of the number of test plan templates) increases, the time spent by the task analyzer module also increases, but at an almost linear rate.

38 The Effect of Task Complexity
The total execution time is a quadratic polynomial function of the number of different subtasks (with R² = 0.9984).

39 Conclusion
- The challenges imposed on testing web services can be met by employing an ontology of software testing to enable the collaboration of test services.
- Feasibility: tested by case studies with the prototype implementation.
- Practical usability: implementable without any change to the existing standards of Semantic WS.
- Motivation for wider adoption by industry: business opportunities for testing tool vendors and software testing companies to provide testing services online as web services.
- Scalable: test services are distributed and there is no extra burden on UDDI servers.

40 Future Work
- To populate the ontology of software testing (e.g. with the formats of the many different representations of testing-related artefacts)
- To devise a mechanism of certification and authentication for testing services
- Social challenges: for the above approach to be practically useful, it must be adopted by web service developers, testing tool vendors and software testing companies
- To improve the test broker, and even to generalise it to all service composition
- To address the limitations of OWL-S semantic web services

41 Related Works
Tsai et al. (2004): a framework to extend the function of UDDI to enable collaboration:
- Check-in and check-out services to UDDI servers: a service is added to the UDDI registry only if it passes a check-in test; a check-out test is performed every time the service is searched for, and it is recommended to a client only if it passes the check-out test. To facilitate such tests, they require test scripts to be included in the information registered for the WS on UDDI.
- Group testing: further investigation of the problem of how to select a service from a large number of candidates by testing, with a test case ranking technique to improve the efficiency of group testing.
Bertolino et al. (2005): the audition framework:
- an admission test when a WS is registered to UDDI
- runtime monitoring services on both functional and non-functional behaviours after a service is registered in a UDDI server
Service test governance (STG) (2009): incorporates testing into a wider context of quality assurance of WS by imposing a set of policies, procedures and documented standards on WS development, etc. Bertolino and Polini admitted (2009) that in a pure SOA-based scenario the framework is not applicable.
Both lines of work recognised the need for collaboration in testing WS, but the technical details of how to coordinate multiple parties in WS testing were left open.

42 Thank You

