Automatic evaluation of FAIRness

1 Automatic evaluation of FAIRness
WG Assessment of Data Fitness for Use – RDA 12 – Gaborone, Botswana – Nov 07
Luiz Bonino – International Technology Coordinator, GO FAIR

2 FAIR metrics

3 FAIR principles
Findable:
F1. (meta)data are assigned a globally unique and persistent identifier
F2. data are described with rich metadata
F3. metadata clearly and explicitly include the identifier of the data they describe
F4. (meta)data are registered or indexed in a searchable resource
Accessible:
A1. (meta)data are retrievable by their identifier using a standardized communications protocol
A1.1 the protocol is open, free, and universally implementable
A1.2 the protocol allows for an authentication and authorization procedure, where necessary
A2. metadata are accessible, even when the data are no longer available
Interoperable:
I1. (meta)data use a formal, accessible, shared, and broadly applicable language for knowledge representation
I2. (meta)data use vocabularies that follow FAIR principles
I3. (meta)data include qualified references to other (meta)data
Reusable:
R1. (meta)data are richly described with a plurality of accurate and relevant attributes
R1.1. (meta)data are released with a clear and accessible data usage license
R1.2. (meta)data are associated with detailed provenance
R1.3. (meta)data meet domain-relevant community standards
(a minimal metadata sketch illustrating several of these principles follows below)
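A minimal sketch of what a metadata record addressing several of these principles could look like, written here as JSON-style key/value pairs in Python; the identifier, vocabulary prefixes (dct:, dcat:) and values are illustrative assumptions, not a prescribed schema.

import json

# Illustrative metadata record touching several FAIR principles; the identifier,
# vocabulary prefixes and field values are placeholders, not a prescribed schema.
metadata_record = {
    "@id": "https://doi.org/10.5281/zenodo.0000000",   # F1: globally unique, persistent identifier
    "dct:title": "Example dataset",                     # F2: rich descriptive metadata
    "dct:description": "Small example dataset used to illustrate FAIR metadata.",
    "dcat:distribution": {
        "dcat:downloadURL": "https://example.org/data/example.csv"  # F3: identifier of the data described
    },
    "dct:license": "https://creativecommons.org/publicdomain/zero/1.0/",  # R1.1: clear usage license
    "dct:provenance": "Produced by the example project in 2018.",         # R1.2: provenance
}

# Serialising to JSON keeps the record machine-readable (I1: formal, shared representation).
print(json.dumps(metadata_record, indent=2))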

4 What is FAIRness?
FAIRness reflects the extent to which a digital resource addresses the FAIR principles, as per the expectations defined by a community of stakeholders. (Michel Dumontier)

5 How FAIR are things?
EC’s EOSC FAIR Metrics Group
FAIRness of repositories: IDCC17 practice paper "Are the FAIR Data Principles fair?" by Alastair Dunning, Madeleine de Smaele, Jasmin Böhmer
DANS FAIR metrics
NIH Commons Framework Working Group on FAIR Metrics
(Michel Dumontier)

6 FAIR Metrics Group
Michel Dumontier – Univ. Maastricht
Susanna-Assunta Sansone – Univ. Oxford
Peter Doorn – DANS
Mark Wilkinson – U.P. Madrid
Erik Schultes – GO FAIR
Luiz Bonino – GO FAIR/LUMC

7 Principles for FAIR metrics
Clear: it is easy to understand what is meant
Realistic: it is possible for resources to report on what is being asked of them
Discriminating: can distinguish the FAIRness of the resource
Measurable: assessment can be objective, quantitative, machine-interpretable, scalable and reproducible (see the sketch below)
Universal: applicable to all digital objects
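To illustrate the "Measurable" principle, a hypothetical structure for a machine-interpretable, reproducible metric result; the class, field names and identifiers are assumptions, not the FAIR Metrics Group's actual format.

from dataclasses import dataclass

# Hypothetical, machine-interpretable record of one metric assessment;
# field names and identifiers are illustrative only.
@dataclass
class MetricResult:
    metric_id: str   # which metric was assessed, e.g. a metric for principle R1.1
    resource: str    # identifier of the evaluated digital object
    passed: bool     # objective, quantitative outcome
    evidence: str    # what was checked, so the assessment can be reproduced

result = MetricResult(
    metric_id="R1.1-license",
    resource="https://doi.org/10.5281/zenodo.0000000",
    passed=True,
    evidence="Metadata record points to a resolvable license URL.",
)
print(result)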

8 FAIR Metrics
The current metrics are available for public discussion at the FAIR Metrics GitHub, with suggestions and comments being made through the GitHub comment submission system.
They are represented as i) nanopublications, ii) LaTeX documents and iii) PDF documents.
They are free to use for any purpose under the CC0 license.
Versioned releases will be made to Zenodo as the metrics evolve, with the first release already available for download.
(Michel Dumontier – IGAD, Susanna-Assunta Sansone – RDA)

9 FAIR metrics
Example metrics:
F1 – URL of the document containing the persistence policy of the identifier scheme
F3 – URL of the metadata record and the data identifier
R1.1 – URL of the license (see the sketch below)
R1.3 (currently) – URL(s) of the standards registry’s record of the used community standard(s)
R1.3 (next step) – digital certificate from a recognized body attesting the compliance of the resource with community standards
Extra metrics:
e.g. a requirement to use a selection of standards for clinical trial projects
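A minimal sketch of how such URL-based answers could be checked automatically; the url_resolves helper and the example answers are hypothetical, and a real evaluator would apply stricter, metric-specific tests.

import urllib.request

def url_resolves(url: str, timeout: float = 10.0) -> bool:
    """Rough stand-in for 'the answer is a resolvable URL'."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            return response.status < 400
    except Exception:
        return False

# Hypothetical answers a resource might provide for the example metrics above.
answers = {
    "F1 (identifier persistence policy)": "https://example.org/persistence-policy",
    "R1.1 (license)": "https://creativecommons.org/publicdomain/zero/1.0/",
}

for metric, url in answers.items():
    print(metric, "->", "pass" if url_resolves(url) else "fail")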

10 FAIR metrics paper

11 FAIR metrics paper (2)
The FAIR Evaluator framework is currently under review at SciData; a preprint is available.

12 Tooling based on the FAIR metrics

13 FAIR metrics evaluator
Based on the metrics defined by the FAIR Metrics Group (fairmetrics.org)
Two forms of evaluation:
Semi-automatic – the user answers a questionnaire and the Evaluator checks the answers
Automatic – given the URL of the resource’s evaluation metadata, the Evaluator performs the evaluation without user input (see the sketch below)
Support for core and extended metrics
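A rough sketch of the automatic mode, under the assumption that the evaluation metadata is a JSON document reachable at a URL; the URL and the fields checked here ("identifier", "license", "provenance") are illustrative, not the Evaluator's actual interface.

import json
import urllib.request

def fetch_evaluation_metadata(url: str) -> dict:
    """Fetch a machine-readable evaluation-metadata document (assumed here to be JSON)."""
    with urllib.request.urlopen(url, timeout=10) as response:
        return json.loads(response.read().decode("utf-8"))

def evaluate(metadata: dict) -> dict:
    """Run a few simple, illustrative checks against the fetched metadata."""
    return {
        "F1: has persistent identifier": bool(metadata.get("identifier")),
        "R1.1: has license": bool(metadata.get("license")),
        "R1.2: has provenance": bool(metadata.get("provenance")),
    }

if __name__ == "__main__":
    # Placeholder URL; a real run would point at the resource's evaluation metadata.
    metadata = fetch_evaluation_metadata("https://example.org/resource/evaluation-metadata.json")
    for check, passed in evaluate(metadata).items():
        print(check, "->", "pass" if passed else "fail")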

14 GO FAIR Foundation certification program
Based on the FAIR Metrics Evaluator
Certificates are also FAIR (persistent identifiers, metadata, …)
Detailed report providing information on what to improve

15 Metrics evaluator example (beta)

16 Data Stewardship Wizard – dsw.fairdata.solutions
Dynamic hierarchical questionnaire
Customisable data stewardship (DS) knowledge model
Integration with the Data Stewardship for Open Science book
(FAIR) metrics evaluation
Desirability of answers (see the sketch below)
Integration with other resources (FAIRSharing, Bio.tools, …) – upcoming
Machine-actionable DMPs – part of
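A hypothetical sketch of a hierarchical questionnaire node with "desirability of answers", loosely inspired by the wizard described above; the class names, fields and example question are assumptions, not the Data Stewardship Wizard's actual data model.

from dataclasses import dataclass, field
from typing import List

# Hypothetical questionnaire model: answers carry a desirability score and can
# open follow-up questions, giving the dynamic hierarchical structure.
@dataclass
class Answer:
    text: str
    desirability: float                                # e.g. 1.0 = most desirable, 0.0 = least
    follow_ups: List["Question"] = field(default_factory=list)

@dataclass
class Question:
    text: str
    answers: List["Answer"] = field(default_factory=list)

questionnaire = Question(
    text="Will the (meta)data get a globally unique and persistent identifier?",
    answers=[
        Answer("Yes, a DOI or similar PID", desirability=1.0,
               follow_ups=[Question("Which identifier scheme will be used?")]),
        Answer("No", desirability=0.0),
    ],
)
print(questionnaire.answers[0].follow_ups[0].text)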

17 Data Stewardship Wizard

18 Q&A – contact info
Luiz Bonino
International Technology Coordinator – GO FAIR
Associate Professor, BioSemantics – LUMC
Skype: luizolavobonino

