
1 Benchmarking Ontology Technology. Raúl García-Castro, Asunción Gómez-Pérez. May 13th, 2004. © R. García-Castro, A. Gómez-Pérez

2 Table of contents
- Benchmarking
- Experimental Software Engineering
- Measurement
- Ontology Technology Evaluation
- Conclusions and Future Work

3 Benchmark vs Benchmarking
- Benchmark. IS A: test. PURPOSE: measure, evaluate. TARGET: method, system, product, service.
- Benchmarking. IS A: continuous process. PURPOSE: search for best practices (measure, evaluate) and improve. TARGET: process.

4 Benchmarking Classification (by participants involved)
- Internal benchmarking: one organization
- Competitive benchmarking: direct competitor
- Functional/industry benchmarking: competitors in the same industry
- Generic benchmarking: competitors in any industry

5 Experimental Software Engineering: Experiment vs Experimentation
- Experiment. IS A: test. PURPOSE: discover, demonstrate, clarify. TARGET: software process, software product.
- Experimentation. IS A: process. PURPOSE: evaluate, predict, understand, improve. TARGET: software process, software product.

6 Experiment Classification (by number of projects and number of teams per project)
- One team, one project: single project
- One team, more than one project: multi-project variation
- More than one team, one project: replicated project
- More than one team, more than one project: blocked subject-project

7 Measurement
Measurable entities: resource, product, process.
Attributes:
- Internal attributes: measured in terms of the entity itself.
- External attributes: measured with respect to how the entity relates to its environment.
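As a small illustration of this terminology, the following sketch records measurements over entities and attributes as plain data; the class and the example values are invented for illustration and are not part of the presentation.

```python
from dataclasses import dataclass

# Minimal sketch: a measurement ties an entity (resource, product, or process)
# to an attribute, which is either internal (about the entity itself) or
# external (about the entity in its environment).

@dataclass
class Measurement:
    entity: str          # what is measured
    entity_kind: str     # "resource" | "product" | "process"
    attribute: str
    attribute_kind: str  # "internal" | "external"
    value: float
    unit: str

measurements = [
    Measurement("ontology editor", "product", "size", "internal", 120_000, "lines of code"),
    Measurement("ontology editor", "product", "learnability", "external", 3.5, "survey score (1-5)"),
]
```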

8 Methodologies
Common phases: plan, design, implement, execute, analyze, inform, change.
Benchmarking:
- Planning, analysis, integration, action
- Plan, collect, analyze, adapt
Experimentation:
- Definition, planning, operation, interpretation
- Experiment context; experiment design; conducting the experiment and data collection; analysis; presentation of results; interpretation of results
Measurement:
- Define objectives, assign responsibilities, do research, define initial metrics, get tools for collection and analysis, create a metrics database, publicize the collection of the metrics, establish a training class in software metrics, establish a mechanism for changing
- Initialization, requirements definition, component design, component build, implementation

9 Table of contents
- Benchmarking
- Experimental Software Engineering
- Measurement
- Ontology Technology Evaluation
- Conclusions and Future Work

10 General framework for ontology tool evaluation
OntoWeb deliverable 1.3: tool comparison according to different criteria for:
- Ontology building tools
- Ontology merge and integration tools
- Ontology evaluation tools
- Ontology-based annotation tools
- Ontology storage and querying tools

11 Ontology building tool evaluation
- Duineveld et al., 1999. Tools: Ontolingua, WebOnto, ProtégéWin, Ontosaurus, ODE. Criteria: general properties that can also be found in other types of programs; ontology properties found in the tools; cooperation properties when constructing an ontology.
- Stojanovic and Motik, 2002. Tools: OilEd, OntoEdit, Protégé-2000. Criteria: ontology evolution requirements fulfilled by the tool.
- Sofia Pinto et al., 2002. Tool: Protégé-2000. Criteria: support provided in ontology reuse processes; time and effort for developing an ontology; usability.
- EON 2002. Tools: KAON, Loom, OilEd, OntoEdit, OpenKnoME, Protégé-2000, SemTalk, Terminae, WebODE. Criteria: expressiveness of the knowledge model attached to the tool; usability; reasoning mechanisms; scalability.
- Gómez-Pérez and Suárez-Figueroa, 2003. Tools: OilEd, OntoEdit, Protégé-2000, WebODE. Criteria: ability to detect taxonomic anomalies.
- Lambrix et al., 2003. Tools: Chimaera, DAG-Edit, OilEd, Protégé-2000. Criteria: general criteria (availability, functionality, multiple instance, data model, reasoning, sample ontologies, reuse, formats, visualization, help, shortcuts, stability, customization, extendibility, multiple users); user interface criteria (relevance, efficiency, attitude, and learnability).
- EON 2003. Tools: DOE, OilEd, Protégé-2000, SemTalk, WebODE. Criteria: interoperability; amount of knowledge lost during exports and imports.
- Corcho et al., 2004. Tool: WebODE. Criteria: temporal efficiency; stability.
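The temporal-efficiency criterion used in studies such as Corcho et al., 2004 amounts to timing the same tool operation over many repetitions. A minimal sketch of that idea follows; `insert_concept` is a hypothetical stand-in for the operation under test, not an actual WebODE API.

```python
import statistics
import time

def benchmark(operation, repetitions=100):
    """Run the operation repeatedly and summarise wall-clock execution times."""
    times = []
    for _ in range(repetitions):
        start = time.perf_counter()
        operation()
        times.append(time.perf_counter() - start)
    return {
        "median_s": statistics.median(times),
        "max_s": max(times),
        "stdev_s": statistics.stdev(times),
    }

def insert_concept():
    # Placeholder for a real tool operation (e.g. adding a concept to an ontology).
    sum(range(10_000))

print(benchmark(insert_concept))
```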

12 Other ontology tool evaluation
- Giboin et al., 2002. Ontology-based tools (CoMMA). Criteria: usability; utility.
- Sure and Iosif, 2002. Ontology-based search tools (QuizRDF and Spectacle). Criteria: information finding time; mistakes during a search.
- Noy and Musen, 2002. Ontology merging tools (Prompt). Criteria: precision and recall of the tool's suggestions; difference between result ontologies.
- Lambrix and Edberg, 2003. Ontology merging tools (Prompt and Chimaera). Criteria: general criteria (availability, stability); merging criteria (functionality, assistance, precision and recall of suggestions, time to merge); user interface criteria (relevance, efficiency, attitude, and learnability).
- Guo et al., 2003. Ontology repositories (DLDB). Criteria: load time; repository size; query response time; completeness.
- Euzenat, 2003. Ontology alignment methods. Criteria: distance between alignments; amount of resources consumed (time, memory, user input, etc.).
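The "precision and recall of the tool's suggestions" criterion (as in the Prompt evaluations) compares a tool's suggested correspondences with a human-built gold standard. A minimal sketch, with invented example data rather than data from the cited studies:

```python
def precision_recall(suggested, gold):
    """Precision and recall of suggested matches against a gold-standard set."""
    true_positives = len(suggested & gold)
    precision = true_positives / len(suggested) if suggested else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    return precision, recall

suggested = {("Person", "Human"), ("Author", "Writer"), ("Paper", "Book")}
gold = {("Person", "Human"), ("Author", "Writer"), ("Article", "Paper")}

p, r = precision_recall(suggested, gold)
print(f"precision={p:.2f} recall={r:.2f}")  # precision=0.67 recall=0.67
```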

13 Workload generation for ontology tools
- Tempich and Volz, 2003. Ontology classification (DAML ontology library): taxonomic nature; description logic style; database schema-like.
- OntoWeb D1.3, 2002. OntoGenerator.
- Guo et al., 2003. Univ-Bench artificial data generator.
- Corcho et al., 2004. Workload generated by test definition.
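An artificial data generator in the spirit of Univ-Bench (Guo et al., 2003) builds instances and relations randomly up to a requested size. The sketch below uses illustrative class and property names, not the actual Univ-Bench vocabulary or generator.

```python
import random

CLASSES = ["University", "Department", "Professor", "Student", "Course"]
PROPERTIES = ["memberOf", "teacherOf", "takesCourse"]

def generate_workload(num_instances, seed=0):
    """Generate a synthetic set of typed individuals plus one random relation each."""
    rng = random.Random(seed)
    instances = [(f"ind{i}", rng.choice(CLASSES)) for i in range(num_instances)]
    triples = [(name, "rdf:type", cls) for name, cls in instances]
    for name, _ in instances:
        target, _ = rng.choice(instances)
        triples.append((name, rng.choice(PROPERTIES), target))
    return triples

for triple in generate_workload(5):
    print(triple)
```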

14 RDF and OWL test suites
- W3C RDF Core Working Group: RDF test suite
- W3C Web Ontology Working Group: OWL test suite
These test suites check that tools implementing RDF and OWL knowledge bases behave correctly, and they illustrate the resolution of different issues considered by the Working Groups.
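As a sketch of how such a test case is checked against a concrete tool (here the rdflib Python library, assuming it is installed): parse the RDF/XML input document and compare it with the expected N-Triples document. The two documents below are a trivial invented example, not an actual case from the W3C suite.

```python
from rdflib import Graph
from rdflib.compare import isomorphic

rdf_xml = """<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:ex="http://example.org/">
  <rdf:Description rdf:about="http://example.org/doc">
    <ex:title>Benchmarking Ontology Technology</ex:title>
  </rdf:Description>
</rdf:RDF>"""

expected_nt = """<http://example.org/doc> <http://example.org/title> "Benchmarking Ontology Technology" .
"""

# Parse both serialisations and test graph isomorphism, as a test harness would.
parsed = Graph().parse(data=rdf_xml, format="xml")
expected = Graph().parse(data=expected_nt, format="nt")
print("test passed:", isomorphic(parsed, expected))
```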

15 Description Logics systems comparison
- DL'98: measure DL systems' performance.
- Haarslev and Möller, 1999: evaluate optimisation strategies for ABox reasoning.
- DL'99: evaluate generic DL systems' features (logic implemented, availability, future plans, etc.).
- Elhaik et al., 1998: random generation of TBoxes and ABoxes according to probability distributions.
- Ycart and Rousset, 2000: a natural probability distribution of ABoxes associated with a given TBox.
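A minimal sketch of ABox generation driven by a probability distribution over the concepts of a fixed TBox, in the spirit of the last two entries; the TBox and the weights are invented for illustration, whereas the cited work defines the distribution formally.

```python
import random

TBOX_CONCEPTS = ["Person", "Student", "Professor", "Course"]
CONCEPT_PROBABILITIES = [0.4, 0.3, 0.1, 0.2]  # must sum to 1

def generate_abox(num_assertions, seed=0):
    """Draw concept membership assertions according to the concept distribution."""
    rng = random.Random(seed)
    concepts = rng.choices(TBOX_CONCEPTS, weights=CONCEPT_PROBABILITIES, k=num_assertions)
    return [(f"ind{i}", concept) for i, concept in enumerate(concepts)]

for assertion in generate_abox(5):
    print(assertion)  # e.g. ('ind0', 'Student')
```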

16 Table of contents
- Benchmarking
- Experimental Software Engineering
- Measurement
- Ontology Technology Evaluation
- Conclusions and Future Work

17 Conclusions
- We present an overview of the main research areas involved in benchmarking.
- There is no single, commonly accepted benchmarking methodology, although the existing methodologies are general and similar to each other.
- Evaluation studies concerning ontology technology have been scarce, but in recent years the effort devoted to evaluating ontology technology has grown significantly.
- Most of the evaluation studies are qualitative in nature; few of them involve quantitative data and controlled environments.

18 Future Work
Goals (18 months):
1. State of the art (month 6)
2. First draft of a methodology (month 12)
3. Identification of criteria (month 12)
4. Identification of metrics (month 12)
5. Definition of test beds for benchmarking (month 12)
6. Development of first versions of prototypes of tools (month 18)
7. Benchmarking of ontology development tools according to the criteria and test beds produced (month 18)

19 Benchmarking Ontology Technology. Raúl García-Castro, Asunción Gómez-Pérez. May 13th, 2004

