
1 Development of Software Testing Ontology and Application to Test Automation
Prof. Hong Zhu, Department of Computing and Electronics, Oxford Brookes University, Oxford OX33 1HX, UK

2 Acknowledgement
Mr. Yufeng Zhang, MSc and PhD student at the National University of Defence Technology, China
Mr. Qingning Huo, PhD student at Oxford Brookes University, UK
Dr. Sue Greenwood, Oxford Brookes University, UK

3 Context: Web Services
Web services are a distributed computing technique that offers greater flexibility and looser coupling, built on the internet and web infrastructure. They are now the dominant technique for program-to-program interaction.
The components (service providers, requesters and registries) are:
Autonomous: they control their own resources and their own behaviours
Active: their execution is not merely triggered by messages
Persistent: they are computational entities that last a long time
Interactions between components feature:
Social ability: components discover each other and establish interactions at runtime
Collaboration: as opposed to control, a component may refuse a service, follow a complicated protocol, etc.

4 WS Technique Stack
Basic standards:
WSDL: service description and publication
UDDI: service registration and retrieval
SOAP: service invocation and delivery
More advanced standards support collaboration between service providers and requesters:
BPEL4WS: business process and workflow models
OWL-S: an ontology for describing the semantics of services
[Diagram: the SOA triangle — the provider registers its service with the registry; the requester searches the registry for registered services, requests the service from the provider, and the provider delivers it.]
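To make the stack concrete, here is a minimal sketch of invoking a WSDL-described service over SOAP with the standard JAX-WS API. The WSDL URL, namespace, service name and endpoint interface are hypothetical placeholders, not anything from the slides.

```java
import java.net.URL;
import javax.jws.WebService;
import javax.xml.namespace.QName;
import javax.xml.ws.Service;

// Hypothetical service endpoint interface; in practice it would be
// generated from the WSDL with a tool such as wsimport.
@WebService
interface InsuranceQuote {
    double quote(String carModel, int driverAge);
}

public class QuoteClient {
    public static void main(String[] args) throws Exception {
        // Hypothetical WSDL location and qualified service name.
        URL wsdl = new URL("http://example.com/ciqs?wsdl");
        QName name = new QName("http://example.com/ciqs", "QuoteService");

        // JAX-WS builds a dynamic proxy that exchanges SOAP messages
        // according to the binding declared in the WSDL.
        Service service = Service.create(wsdl, name);
        InsuranceQuote port = service.getPort(InsuranceQuote.class);
        System.out.println(port.quote("Ford Focus", 30));
    }
}
```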

5 A Typical Scenario: Car Insurance Broker
[Diagram: end users and other service users access the CIB's services through a GUI interface. The CIB's service requester uses a WS registry to find the services of Bank B and of insurers A1, A2, ..., An. The bank's services could be statically integrated; the insurers' services should be dynamically integrated, for business flexibility and competitiveness and for lower operation and maintenance costs.]

6 Challenges to Testing WS
Testing one's own services:
Mostly similar to testing software components
Some special issues; much work has been reported
Testing the other side's services:
Some similarity to component testing; the differences are:
Lack of software artifacts
Lack of control over test executions
Lack of means of observing system behaviour
Testing service composition:
Static composition: mostly similar to integration testing
Dynamic composition: the most challenging, because of:
The need to deal with diversity
The need to test on-the-fly
The need for non-intrusive testing
The need for full automation

7 The Proposed Approach
A WS should be accompanied by a testing service:
Functional services: the services providing the original functionality
Testing services: the services that enable testing of the functional services
Testing services can be provided either:
By the same vendor as the functional services, or
By a third party
Independent testing services:
Providers: testing tool vendors and companies specialised in software testing
The services: generate test cases, measure test adequacy, extract various types of diagrams from source code or from design and specification documents, etc.

8 Architecture

9 Illustration in the Typical Scenario

10 How Does the System Work?
The scenario: suppose the car insurance broker wants to search for the web services of insurers and to test a web service before using it to make quotes for its customers. This amounts to testing the integration of two services.
[Diagram: the customer supplies information about the car and the driver to the Car Insurance Broker (CIB), which obtains insurance quotes from the Insurer Web Service (IS).]

11 Collaboration Process in the Typical Scenario

12 Automating Test Services
The key technical issues:
How to describe, publish and register test services at a WS registry
How to retrieve test services automatically when testing dynamically composed services
How to invoke test services, by both a human tester and a program
How to report test results in forms suitable both for human beings to read and for machines to understand
These issues can be resolved by employing a software testing ontology.

13 STOWS: Software Testing Ontology for WS
STOWS is based on an ontology of software testing originally developed for agent-oriented software testing (Zhu & Huo 2003, 2005).
The concepts of software testing are represented as classes
Knowledge about software testing is represented as relations between concepts

14 Basic Concepts of Software Testing
Tester: whoever carries out a testing activity
Activity: the actions performed in the testing process, e.g. test planning, test case generation, test execution, result validation, adequacy measurement, test report generation, etc.
Artifact: the entities used and/or produced by a testing activity, e.g. files, data, program code, documents, etc.; each artifact has a Location (expressed by a URL or a URI) and a Format (the format in which the data are presented)
Method: the method used to perform a testing activity; test methods can be classified in a number of different ways
Context: the context in which a testing activity is performed, e.g. the software development stage, to achieve various testing purposes
Environment: the hardware and software configuration in which a testing activity is to be performed
A rough sketch of these concepts in code follows.
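As an illustration only, the basic concepts could be modelled as plain Java types; all names and fields below are assumptions made for the example, not the actual OWL classes of STOWS.

```java
// Minimal sketch of the STOWS basic concepts as Java types.
import java.net.URI;

enum Activity {
    TEST_PLANNING, TEST_CASE_GENERATION, TEST_EXECUTION,
    RESULT_VALIDATION, ADEQUACY_MEASUREMENT, REPORT_GENERATION
}

final class Artifact {
    final URI location;   // where the artifact can be fetched (URL/URI)
    final String format;  // the format in which the data are presented
    Artifact(URI location, String format) {
        this.location = location;
        this.format = format;
    }
}

final class Method {
    final String name;    // e.g. "data mutation testing"
    Method(String name) { this.name = name; }
}

final class Context {
    final String stage;   // e.g. "unit testing", "integration testing"
    Context(String stage) { this.stage = stage; }
}

final class Environment {
    final String description; // hardware/software configuration
    Environment(String description) { this.description = description; }
}
```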

15 Structure of Basic Concepts: Examples
Tester: atomic service or composite service
Test Activity: test planning, test case generation, test execution, result validation, adequacy measurement, report generation

16 Compound Concepts: Capability
Capability: describes what a tester can do:
the activities that the tester can perform
the context in which to perform the activity
the testing method used
the environment in which to perform the testing
the required resources (i.e. the input)
the output that the tester can generate
[Diagram: a Capability is composed of an Activity, a Method, a Context, an Environment, and input/output data Artefacts.]

17 Task: describes what testing service is requested
A task describes (see the sketch below):
The testing activity to be performed
How the activity is to be performed:
the context
the testing method to be used
the environment in which the activity must be carried out
the available resources
the expected outcomes
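A minimal sketch of the two compound concepts as Java classes, reusing the basic-concept types sketched above. The deliberately symmetric shape of Capability and Task is what makes the matching relations on the following slides possible; the field names are assumptions.

```java
import java.util.List;

// Sketch only: Capability and Task share the same structure, which is
// what allows a capability to be matched against a task (see the
// Matches relation later in the slides).
final class Capability {
    List<Activity> activities;   // what the tester can do
    Context context;             // the context it can work in
    Method method;               // the testing method used
    Environment environment;     // configuration it needs
    List<Artifact> inputs;       // resources the tester requires
    List<Artifact> outputs;      // results the tester can produce
}

final class Task {
    Activity activity;           // the testing activity requested
    Context context;             // required context
    Method method;               // method to be used
    Environment environment;     // environment available
    List<Artifact> inputs;       // resources available
    List<Artifact> outputs;      // expected outcomes
}
```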

18 Relations Between Concepts
Relationships between concepts are a very important part of the knowledge of software testing:
Subsumption relation between testing methods
Compatibility between artefacts' formats
Enhancement relation between environments
Inclusion relation between test activities
Temporal ordering between test activities
How such knowledge is used (a sketch follows):
Instances of the basic relations are stored in a knowledge base as basic facts
The testing broker uses them to search for test services through the compound relations
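As an illustration of how instances of one basic relation might be stored and queried (an assumption about representation, not the prototype's actual knowledge base), here is a tiny fact store for method subsumption with a transitive-closure query:

```java
import java.util.*;

// Sketch of a knowledge base holding instances of one basic relation:
// "method A subsumes method B". The other basic relations (format
// compatibility, environment enhancement, activity inclusion) could be
// stored the same way.
final class MethodSubsumption {
    // facts: general method -> the methods it directly subsumes
    private final Map<String, Set<String>> facts = new HashMap<>();

    void addFact(String general, String specific) {
        facts.computeIfAbsent(general, k -> new HashSet<>()).add(specific);
    }

    // Subsumption is reflexive and transitive, so the query walks the facts.
    boolean subsumes(String general, String specific) {
        if (general.equals(specific)) return true;
        Deque<String> work = new ArrayDeque<>(facts.getOrDefault(general, Set.of()));
        Set<String> seen = new HashSet<>();
        while (!work.isEmpty()) {
            String m = work.pop();
            if (!seen.add(m)) continue;
            if (m.equals(specific)) return true;
            work.addAll(facts.getOrDefault(m, Set.of()));
        }
        return false;
    }
}
```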

19 Compound Relations
MorePowerful relation: between two capabilities. MorePowerful(c1, c2) means that a tester having capability c1 can do all the tasks that can be done by a tester having capability c2.
Contains relation: between two tasks. Contains(t1, t2) means that accomplishing task t1 implies accomplishing task t2.
Matches relation: between a capability and a task. Matches(c, t) means that a tester with capability c can fulfil the task t.

20 Definition of the MorePowerful Relation
A capability C1 is more powerful than C2, written MorePowerful(C1, C2), if and only if:
C1 and C2 have the same context,
C2's activities are included in C1's activities,
The method of C2 is subsumed by the method of C1,
The environment of C1 is an enhancement of the environment of C2,
For each input artefact of C1, there is a corresponding compatible input artefact of C2,
For each output artefact of C2, there is a corresponding compatible output artefact of C1.

21 Definition of the Contains Relation
A task T1 contains task T2, written Contains(T1, T2), if and only if:
T1 and T2 have the same context,
T1's activities include T2's activities,
The method of T1 subsumes the method of T2,
The environment of T2 is an enhancement of the environment of T1,
For each input artefact of T1, there is a corresponding compatible input artefact of T2,
For each output artefact of T2, there is a corresponding compatible output artefact of T1.

22 Definition of the Matches Relation
A capability C matches a task T, written Matches(C, T), if and only if:
C and T have the same context,
C's activities include T's activity,
The method of C subsumes the method of T,
The environment of T is an enhancement of the environment of C,
For each input artefact of T, there is a corresponding compatible input artefact of C,
For each output artefact of C, there is a corresponding compatible output artefact of T.
A sketch of this check in code follows.
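Building on the Java types sketched earlier, the Matches check could be coded as below. The KnowledgeBase interface stands for lookups of the basic relation facts and is an assumption, as is every other name here; MorePowerful and Contains would follow the same clause-by-clause shape.

```java
import java.util.List;

// Assumed interface to the knowledge base of basic relation facts.
interface KnowledgeBase {
    boolean sameContext(Context a, Context b);
    boolean subsumes(Method general, Method specific);
    boolean enhances(Environment richer, Environment base);
    boolean compatible(Artifact a, Artifact b);
}

final class Matching {
    // Sketch: does capability c match task t? Each clause mirrors one
    // condition of the definition above.
    static boolean matches(Capability c, Task t, KnowledgeBase kb) {
        return kb.sameContext(c.context, t.context)
            // C's activities include T's activity
            && c.activities.contains(t.activity)
            // the method of C subsumes the method of T
            && kb.subsumes(c.method, t.method)
            // the environment of T is an enhancement of that of C
            && kb.enhances(t.environment, c.environment)
            // each input artefact of T has a compatible input artefact of C
            && t.inputs.stream().allMatch(ti ->
                   c.inputs.stream().anyMatch(ci -> kb.compatible(ti, ci)))
            // each output artefact of C has a compatible output artefact of T
            && c.outputs.stream().allMatch(co ->
                   t.outputs.stream().anyMatch(to -> kb.compatible(co, to)));
    }
}
```

With this shape, property (2) on the next slide is intuitive: substituting a more powerful capability can only preserve a successful match.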

23 Properties of the Compound Relations
(1) The relations MorePowerful and Contains are reflexive and transitive.
(2) ∀c1, c2 ∈ Capability, ∀t ∈ Task: MorePowerful(c1, c2) ∧ Matches(c2, t) ⇒ Matches(c1, t).
(3) ∀c ∈ Capability, ∀t1, t2 ∈ Task: Contains(t1, t2) ∧ Matches(c, t1) ⇒ Matches(c, t2).

24 Prototype Implementation
Representation of STOWS in OWL:
Both basic and compound concepts are classes in OWL, represented as XML data definitions
Use of STOWS in Semantic Web Services:
Compound concepts represented in OWL are transformed into OWL-S service profiles for registration, discovery and invocation
UDDI/OWL-S registry server: using the OWL-S/UDDI Matchmaker
The environment: Windows XP, Intel Core Duo CPU 2.16 GHz, JDK 1.5, Tomcat 5.5 and MySQL 5.0

25 Transformation of STOWS into OWL-S
[Diagram: a Capability is mapped onto an OWL-S service profile. The Activity becomes the ServiceCategory; the Context, Environment and Method are encoded as input parameters (ContextMark, EnvironmentMark, MethodMark) alongside the input artefacts; the output artefacts become the output parameters.]

26 Ontology Management: Motivation
All the terms used in capability descriptions for test service registration, discovery and invocation must first be defined in the ontology. However, it is impossible to build a complete ontology of software testing, because of:
the huge volume of software testing knowledge
the rapid development of new testing techniques, methods and tools
Therefore, the ontology must be extendable and open to the public for updating:
Implement a framework, rather than a complete and fixed ontology
Provide an ontology management mechanism to enable population of the ontology

27 The Ontology Management Mechanism
It provides three services to users:
AddClass: add a new concept
DeleteClass: delete a concept
UpdateClass: revise a concept of the ontology
Restrictions on the manipulation of the data model (sketched below):
Authority checker:
Elementary classes form the framework of the ontology STOWS; none of them may be pruned
Extended classes are attached to the elementary classes to define new concepts
Instances of the concepts are added by users and can be deleted from the hierarchy
Conflict checker:
A new class to be added must not already exist in the ontology
A class to be deleted must have no subclasses in the hierarchy
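A minimal sketch of the authority and conflict checks over a simple in-memory class hierarchy; the structure is illustrative and is not the OMS implementation.

```java
import java.util.*;

// Sketch of the checks guarding AddClass and DeleteClass.
final class OntologyManager {
    private final Set<String> elementary = new HashSet<>();      // fixed STOWS framework
    private final Map<String, String> parentOf = new HashMap<>(); // class -> superclass

    OntologyManager(Set<String> elementaryClasses) {
        elementary.addAll(elementaryClasses);
    }

    void addClass(String name, String parent) {
        // Conflict check: the new class must not already exist.
        if (elementary.contains(name) || parentOf.containsKey(name))
            throw new IllegalArgumentException("class already exists: " + name);
        // Extended classes must attach to an existing class.
        if (!elementary.contains(parent) && !parentOf.containsKey(parent))
            throw new IllegalArgumentException("unknown parent: " + parent);
        parentOf.put(name, parent);
    }

    void deleteClass(String name) {
        // Authority check: elementary classes may never be pruned.
        if (elementary.contains(name))
            throw new IllegalArgumentException("elementary class: " + name);
        // Conflict check: the class must have no subclasses.
        if (parentOf.containsValue(name))
            throw new IllegalArgumentException("class has subclasses: " + name);
        parentOf.remove(name);
    }
}
```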

28 Structure of OMS June, 2011 ONTOSE 2011, London

29 Test Brokers
A test broker is a test service that composes existing test services (see the sketch below). It:
Decomposes test tasks into subtasks
Searches for test services to carry out the subtasks
Selects test services from the candidates
Coordinates the selected test services:
invokes them in the right order
passes data between them
collects the test results, etc.
A test broker is itself a test service as well, and there may be multiple test brokers owned by different vendors.
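A rough sketch of the broker's cycle, building on the types sketched earlier (Task, Artifact and the KnowledgeBase interface); decomposition and selection are radically simplified, and all names are assumptions rather than the prototype's code.

```java
import java.util.*;

// Assumed interfaces for the services the broker coordinates.
interface TestService {
    List<Artifact> invoke(Task subtask, List<Artifact> availableInputs);
}
interface Registry {
    List<TestService> findMatching(Task subtask, KnowledgeBase kb);
}

final class TestBroker {
    // Sketch of the broker's cycle: decompose, search, select,
    // invoke in order, pass data along, and collect the results.
    List<Artifact> run(Task task, Registry registry, KnowledgeBase kb) {
        List<Artifact> results = new ArrayList<>();
        for (Task sub : decompose(task)) {            // temporal order preserved
            List<TestService> candidates = registry.findMatching(sub, kb);
            if (candidates.isEmpty())
                throw new IllegalStateException("no tester for subtask");
            TestService chosen = candidates.get(0);   // naive selection policy
            // Pass the artifacts produced so far as available inputs.
            results.addAll(chosen.invoke(sub, results));
        }
        return results;                               // the collected test results
    }

    // In the prototype, decomposition uses test plan templates from the
    // knowledge base; here it is stubbed out.
    private List<Task> decompose(Task task) { return List.of(task); }
}
```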

30 Architecture of the Prototype Test Broker
We have developed a prototype test broker to demonstrate the feasibility of the approach.

31 Test broker process model

32 A Running Example
CIQS: the Car Insurance Quote Service, the WS of the PingAn Insurance Company in China
CIB: the Car Insurance Broker
TCG: Test Case Generator
TCE: Test Case Executor for CIQS
[Diagram: TCG and TCE register with the matchmaker; the CIB requests the testing of CIQS from the test broker, which searches for testers via the matchmaker and invokes them.]

33 Case Study: Dealing with Diversity
Aim: to evaluate the capability of dealing with diversity
Method: to wrap a wide range of software testing tools into test services:
CASCAT: a CASOCC-based test case generation tool
Test Case Format Translator: translates the test cases generated by CASCAT into the format recognizable by the Calculator Test Case Executor
Test Case Executor: executes test cases for a numeric calculator web service
Klee: generates and executes test cases from C source code by symbolic execution
Magic: checks conformance between component specifications and their implementations
XML Comparator: compares XML files
Java NCSS: measures two standard metrics for Java programs
FindBugs: finds bugs in Java programs by static analysis
PMD: a static analysis tool for finding potential bugs and other problems in Java source code
WSDL-Based Test Case Generator*: a WSDL-based test case generation tool
Web Service Test Case Executor*: executes the test cases generated by the WSDL-Based Test Case Generator

34 Experiment 1: Dealing with Subtle Differences
Aim: to test the system's capability of accurately choosing an appropriate tester from candidates with subtle differences
Method: application of the data mutation testing technique (sketched below):
Mutation operators: transformations of the data, in this case service profiles (4 types)
Seeds: a set of original service profiles (11 seeds)
Mutants: service profiles generated from the seeds by applying the mutation operators (167 mutants)
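To make the method concrete, here is a generic sketch of data mutation testing as the slide describes it: operators transform seed profiles into mutants. The operator shown is a made-up example, not one of the four operator types actually used in the experiment.

```java
import java.util.*;
import java.util.function.UnaryOperator;

// Generic sketch of data mutation testing: apply each mutation
// operator to each seed to obtain the set of mutants used as inputs.
final class DataMutation {
    static List<String> mutants(List<String> seeds,
                                List<UnaryOperator<String>> operators) {
        List<String> mutants = new ArrayList<>();
        for (String seed : seeds)
            for (UnaryOperator<String> op : operators)
                mutants.add(op.apply(seed));
        return mutants;
    }

    public static void main(String[] args) {
        // Hypothetical operator: weaken the declared method of a profile.
        UnaryOperator<String> weakenMethod =
            p -> p.replace("method=data-mutation", "method=random");
        List<String> seeds = List.of("activity=test-gen;method=data-mutation");
        System.out.println(mutants(seeds, List.of(weakenMethod)));
    }
}
```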

35 Experiment 2: Scalability
Aim: to evaluate the scalability of the test broker in terms of its efficiency in dealing with test problems of practical sizes. Problem size is measured by:
The number of testers in the registry
The size of the knowledge base in the test broker
The complexity of the test task requested
Method:
Run the system a number of times
Calculate the average execution time spent in the various modules of the test broker

36 Experiment Results: The Effect of the Number of Testers
The average search time increases almost linearly with the number of testers in the registry.

37 The Effect of the Knowledge-Base Size
As the size of the knowledge base (in terms of the number of test plan templates) increases, the time spent by the task analyzer module also increases, at an almost linear rate.

38 The Effect of Task Complexity
The total execution time is a quadratic polynomial function of the number of different subtasks (with R² = 0.9984).

39 Conclusion
The challenges of testing web services can be met by employing an ontology of software testing to coordinate collaborating test services.
Feasibility: tested by case studies with the prototype implementation
Practical usability: implementable without any change to the existing standards of Semantic WS
Motivation for wider adoption by industry: business opportunities for testing tool vendors and software testing companies to provide testing services online as web services
Scalability: test services are distributed and there is no extra burden on UDDI servers

40 Future Work
To populate the ontology of software testing (e.g. with the formats of the many different representations of testing-related artefacts)
To devise a mechanism of certification and authentication for testing services
Social challenges: for the above approach to be practically useful, it must be adopted by web service developers, testing tool vendors and software testing companies
To improve the test broker, and even to generalise it to all service composition
To address the limitations of OWL-S semantic web services

41 Related Works
Tsai et al. (2004): a framework extending the function of UDDI to enable collaboration:
Check-in and check-out services on UDDI servers: a service is added to the UDDI registry only if it passes a check-in test; a check-out test is performed every time the service is searched for, and the service is recommended to a client only if it passes. To facilitate such tests, they require test scripts to be included in the information registered for the WS on UDDI.
Group testing: further investigation of the problem of how to select a service from a large number of candidates by testing, with a test case ranking technique to improve the efficiency of group testing.
Bertolino et al. (2005): the audition framework:
Admission testing when a WS is registered with UDDI
Runtime monitoring services of both functional and non-functional behaviour after a service is registered in a UDDI server
Bertolino and Polini later admitted (2009) that "on a pure SOA based scenario the framework is not applicable".
Service test governance (STG) (2009): incorporates testing into the wider context of quality assurance of WS by imposing a set of policies, procedures and documented standards on WS development, etc.
Both lines of work recognised the need for collaboration in testing WS, but the technical details of how to coordinate multiple parties in WS testing were left open.

42 Thank You

