Slide 1
Quality views: capturing and exploiting the user perspective on information quality
Describing the Quality of Curated e-Science Information Resources
Paolo Missier, Suzanne Embury, Mark Greenwood (School of Computer Science, University of Manchester)
Alun Preece, Binling Jin (Department of Computing Science, University of Aberdeen)
www.qurator.org
Slide 2
Outline
– Information and information quality (IQ) in e-science
– Quality views: a quality lens on data
– Semantic model for IQ
– Architectural framework for quality views
– State of the project and current research
Slide 3
Information and quality in e-science
Scientists are increasingly required to place more of their data in the public domain
Scientists use other scientists' experimental results as part of their own work
(Figure: the e-science experiment cycle, linking lab experiments, in silico (e.g. workflow-based) experiments, and public BioDBs)
Can I trust this data? What evidence do I have that it is suitable for my experiment?
– the quality of the shared data varies
– scientists have no control over the quality of public data
– quality awareness is low: quality is difficult to measure and assess, and there are no standards
Slide 4
A concrete scenario
Qualitative proteomics: identification of proteins in a cell sample
(Figure: the wet lab produces candidate data for matching (peptide peak lists); a match algorithm in the information service ("dry lab") queries reference DBs – MSDB, NCBI, SwissProt/UniProt – and returns a hit list: {ID, score, p-value, …})
– False negatives: incompleteness of reference DBs, pessimistic matching
– False positives: optimistic matching
Slide 5
The complete in silico workflow
Step 1: identify proteins; step 2: analyze their functions
– What is the quality of this processor's output? Is the processor introducing noise into the flow?
– GO = Gene Ontology, the reference controlled vocabulary for describing protein function (and more)
– How can a user rapidly test this and other hypotheses about quality?
Slide 6
The users' perception of quality
Scientists often have only a blurry notion of their quality requirements for the data
A "one-size-fits-all" approach to quality does not work:
– scientists tend to apply personal acceptability criteria to data
– driven mostly by prior personal experience and peers' experience
– based on the expected use of the data (e.g. what levels of false positives/negatives are acceptable?)
It is difficult for users to implement quality criteria and test them on the data
Slide 7
Quality views: making quality explicit
Our goals:
– support groups of users within a (scientific) community in understanding information quality on specific data domains
– foster reuse of quality definitions within the community
Approach:
– provide a conceptual model and architectural framework to capture user preferences on data quality
– let users populate the framework with custom definitions for indicators and personal decision criteria
– the framework allows users to rapidly test quality preferences and observe their effect on the data
– semi-automated integration into the data processing environment
Quality view: a specification of quality preferences and of how they apply to the data
Slide 8
Basic elements of information quality
1. Quality dimensions: a basic set of generic definitions for well-known non-functional properties of the data. Example – Accuracy: "how close the observed value is to the actual value"
2. Quality evidence: any measurable quantity that can be used to express formal quality criteria; evidence is not by itself a measure of quality. Example – "hit ratio in protein identification"
3. Quality assertions: decision procedures for data acceptability, based on the available evidence
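A minimal sketch in Python of how these three building blocks relate. This is an illustrative assumption, not Qurator's actual data model; all class and field names are invented for the example.

from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class QualityDimension:
    """A generic, well-known non-functional property of the data, e.g. Accuracy."""
    name: str
    definition: str

@dataclass
class QualityEvidence:
    """A measurable quantity; not by itself a measure of quality."""
    name: str
    value: float

@dataclass
class QualityAssertion:
    """A decision procedure that judges data acceptability from evidence values."""
    name: str
    decide: Callable[[Dict[str, float]], str]  # maps evidence values to a verdict

accuracy = QualityDimension("Accuracy",
                            "how close the observed value is to the actual value")
hit_ratio = QualityEvidence("HitRatio", 0.82)
assertion = QualityAssertion("SimpleThreshold",
                             lambda ev: "accept" if ev["HitRatio"] > 0.5 else "reject")
print(assertion.decide({"HitRatio": hit_ratio.value}))  # -> accept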
Slide 9
The nature of quality evidence
Direct evidence: indicators that represent some quality property
– algorithms may exist to determine the biological plausibility of an experiment's outcome
– they may be costly, not always available, and possibly inconclusive
Indirect evidence: inexpensive indicators that correlate with other, more expensive indicators
– e.g. some function of "hit ratio" and "sequence coverage" (see the sketch below)
– experimental evidence of the correlation is needed
Goals: design suitable functions to collect/compute evidence, and associate the evidence with the data (data quality annotation)
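A hedged sketch of an indirect evidence indicator combining hit ratio and sequence coverage. The linear form and the weights are assumptions for illustration; as the slide notes, the correlation with more expensive direct evidence would have to be established experimentally.

def indirect_evidence(hit_ratio: float, sequence_coverage: float,
                      w_hit: float = 0.6, w_cov: float = 0.4) -> float:
    """Combine two inexpensive indicators into a single score in [0, 1]."""
    assert 0.0 <= hit_ratio <= 1.0 and 0.0 <= sequence_coverage <= 1.0
    return w_hit * hit_ratio + w_cov * sequence_coverage

print(indirect_evidence(0.8, 0.55))  # -> 0.70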
Slide 10
Generic (e-science) evidence
Recency: how recently the experiment was performed, or its results published
– evidence: submission and publication dates (see the sketch below)
Submitter reputation: is the lab well known for its accuracy in carrying out this type of experiment?
– metric: lab ranking (subjective)
Publication prestige: are the experiment results presented in high-profile journal publications?
– metric: Impact Factor and others (official)
Collecting data provenance is the key to providing most of these types of evidence
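An illustrative sketch, under assumed provenance metadata, of turning a submission date into a "recency" evidence value; the function name and the days-based metric are not part of Qurator.

from datetime import date
from typing import Optional

def recency_in_days(submission_date: date, today: Optional[date] = None) -> int:
    """Age of a record in days; smaller values indicate more recent evidence."""
    today = today or date.today()
    return (today - submission_date).days

print(recency_in_days(date(2006, 3, 1), today=date(2006, 9, 1)))  # -> 184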
Slide 11
Semantic model for information quality
The key IQ concepts are captured in an ontology that provides shareable, formal definitions for:
– QualityProperties ("dimensions")
– QualityEvidence
– QualityAssertions
– DataAnalysisTools: describe how indicators are computed
The ontology is implemented in OWL DL
– expressive operators for defining concepts and their relationships
– support for subsumption reasoning
Slide 12
Top-level taxonomy of quality dimensions
(Figure labels: generic dimensions; domain-specific; user-oriented; concrete qualities)
Reference: Wang and Strong, "Beyond Accuracy: What Data Quality Means to Data Consumers", Journal of Management Information Systems, 1996
Slide 13
Main taxonomies and properties
Properties:
– is-evidence-for: QualityEvidence → DataEntity
– assertion-based-on-evidence: QualityAssertion → QualityEvidence
Class restrictions (examples):
– MassCoverage ⊑ ∃ is-evidence-for.ImprintHitEntry
– PIScoreClassifier ⊑ ∃ assertion-based-on-evidence.MassCoverage
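A minimal sketch, using the rdflib library, of how restrictions like the ones on this slide could be encoded in OWL. The namespace URI and the helper function are assumptions; the class and property names follow the slide, not necessarily the actual Qurator ontology.

from rdflib import Graph, Namespace, BNode, RDF, RDFS, OWL

Q = Namespace("http://www.qurator.org/iq#")  # hypothetical namespace
g = Graph()
g.bind("iq", Q)

def some_values_restriction(graph, cls, prop, filler):
    """Assert: cls subClassOf (prop some filler)."""
    r = BNode()
    graph.add((r, RDF.type, OWL.Restriction))
    graph.add((r, OWL.onProperty, prop))
    graph.add((r, OWL.someValuesFrom, filler))
    graph.add((cls, RDFS.subClassOf, r))

# MassCoverage ⊑ ∃ is-evidence-for.ImprintHitEntry
some_values_restriction(g, Q.MassCoverage, Q.isEvidenceFor, Q.ImprintHitEntry)
# PIScoreClassifier ⊑ ∃ assertion-based-on-evidence.MassCoverage
some_values_restriction(g, Q.PIScoreClassifier, Q.assertionBasedOnEvidence, Q.MassCoverage)

print(g.serialize(format="turtle"))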
Slide 14
Associating evidence with data
Annotation functions compute quality evidence values for datasets and associate them with the data
– defined in the DataAnalysisTool taxonomy, as part of the ontology
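A hedged sketch of an annotation function: it computes an evidence value for each item in a dataset and attaches it as an annotation. The record layout and the hit-ratio computation are illustrative assumptions, not Qurator's API.

from typing import Dict, List

def annotate_hit_ratio(dataset: List[Dict]) -> List[Dict]:
    """Attach a HitRatio evidence value to each protein-identification record."""
    annotated = []
    for record in dataset:
        matched = record["matched_peptides"]
        submitted = record["submitted_peptides"]
        evidence = {"HitRatio": matched / submitted if submitted else 0.0}
        annotated.append({**record, "quality_evidence": evidence})
    return annotated

sample = [{"protein_id": "P12345", "matched_peptides": 7, "submitted_peptides": 20}]
print(annotate_hit_ratio(sample)[0]["quality_evidence"])  # -> {'HitRatio': 0.35}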
Slide 15
Quality assertions
Defined as ranking or classification functions f(D, I)
Input: a dataset D and a vector I = [I_1, I_2, …, I_n] of indicator values
Possible outputs:
– a classification {(d, c_i)} for each d ∈ D
– a ranking {(d, r_i)} for each d ∈ D
The classification scheme C = {c_1, …, c_k} and the ranking interval [r, R] are themselves defined in the ontology
Assertions formalize the user's bias on evidence as computable decision models on that evidence
Example: PIScoreClassifier partitions the input dataset into three classes {low, avg, high} based on a function of [HitScore, MassCoverage]
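An illustrative sketch of a quality assertion as a classification function f(D, I), in the spirit of the PIScoreClassifier example above. The combination function, weights and thresholds are assumptions; only the class scheme {low, avg, high} and the two indicators come from the slide.

from typing import Dict, List, Tuple

def pi_score_classifier(dataset: List[str],
                        indicators: Dict[str, Dict[str, float]]) -> List[Tuple[str, str]]:
    """Return (data item, class) pairs; classes are drawn from {low, avg, high}."""
    result = []
    for d in dataset:
        ev = indicators[d]
        score = 0.5 * ev["HitScore"] + 0.5 * ev["MassCoverage"]  # assumed combination
        if score < 0.4:
            label = "low"
        elif score < 0.7:
            label = "avg"
        else:
            label = "high"
        result.append((d, label))
    return result

I = {"P12345": {"HitScore": 0.9, "MassCoverage": 0.8},
     "Q67890": {"HitScore": 0.3, "MassCoverage": 0.2}}
print(pi_score_classifier(list(I), I))  # -> [('P12345', 'high'), ('Q67890', 'low')]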
Slide 16
Quality views in practice
Quality views are declarative specifications for:
Desired data classification models and evidence
– I = [I_1, I_2, …, I_n]
– class_i(d), rank_i(d) for all d ∈ D
Condition-action pairs, e.g. "if <condition> then <action>", where the <action> depends on the data processing environment (see the sketch below):
– filter out d
– highlight d in a viewer
– send d to a designated process or repository
– …
Quality views are based on a small set of formal operators and are expressed using an XML syntax
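A minimal sketch of how a quality view's condition-action pairs might be applied once each item has been classified. The rule representation and the action names ("filter_out", "keep") are assumptions for illustration, not the XML syntax or operators used by Qurator.

from typing import Callable, Dict, List, Tuple

ConditionAction = Tuple[Callable[[str], bool], str]  # (condition on class label, action)

def apply_quality_view(classified: List[Tuple[str, str]],
                       rules: List[ConditionAction]) -> Dict[str, List[str]]:
    """Route each (item, class) pair to the action of the first matching rule."""
    routed: Dict[str, List[str]] = {}
    for item, label in classified:
        for condition, action in rules:
            if condition(label):
                routed.setdefault(action, []).append(item)
                break
    return routed

rules = [(lambda c: c == "low", "filter_out"),
         (lambda c: c in ("avg", "high"), "keep")]
print(apply_quality_view([("P12345", "high"), ("Q67890", "low")], rules))
# -> {'keep': ['P12345'], 'filter_out': ['Q67890']}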
Slide 17
Execution model for quality views
QVs can be embedded within specific data management host environments for runtime execution
– for static data: a query processor
– for dynamic data: a workflow engine
(Figure: a QV compiler turns a declarative (XML) QV into an executable QV embedded in the host environment; the embedded QV processes a dataset D into D', calling the quality assertion services of the Qurator quality framework, and yields a quality view on D')
Slide 18
User model
Implementing rapid testing of quality hypotheses:
(Figure: a cycle – compose a quality view (using the IQ ontology), compile and deploy it (bindings, XML), execute it on test data against the quality assertion services, view the results, assess them, optionally update the assertion models, and re-deploy)
Slide 19
The Qurator quality framework
(figure only)
Slide 20
Compiled quality workflow
(figure only)
Slide 21
Embedded quality workflow
(figure only)
Slide 22
Example effect of QV: noise reduction
(figure only)
Slide 23
Summary
A conceptual model and architecture for capturing the user's perception of information quality
– the formal, semantic model makes concepts shareable, reusable and machine-processable
Quality views are user-defined and compiled into (possibly multiple) data processing environments
The Qurator framework supports a runtime model for QVs
Current work:
– formal semantics for QVs
– exploiting semantic technology to support the QV specification task
– addressing more real use cases
Main paradigm: let scientists experiment with quality concepts in an easy and intuitive way, by observing the effect of their personal bias