Our philosophy, process, and tools
To evaluate the FAIRness of a digital object we need metrics. We focus on developing metrics that assess compliance with each and every one of the FAIR principles. We believe that increasing the FAIRness of digital resources maximizes their reuse, and that an assessment gives content creators feedback on the degree to which they enable others to find and reuse their resources. However, compliance is different from impact. We do not believe that all digital resources are of equal quality; nonetheless, when such resources are created and published, they should be maximally discoverable and reusable, as per the FAIR principles. Abiding by the FAIR principles makes resources of unequal quality equally discoverable, which in turn aids the assessment of their quality. However, a metric that assesses the popularity of a digital resource is not a measure of its FAIRness.
We are guided by the following principles:
- Clear: so that anybody can understand what is meant
- Realistic: so that anybody can report on what is being asked of them
- Discriminating: so that we can distinguish the degree to which a resource meets the FAIR principles, and can provide instruction as to what would maximize that value
- Measurable: so that the assessment can be made in an objective, quantitative, machine-interpretable, scalable, and reproducible manner, with transparency about what is being measured and how (a sketch of one such machine-executable test follows this list)
- Universal: so that the metrics are applicable to all digital resources
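To make "measurable" concrete, here is a minimal sketch of what an objective, machine-executable metric test could look like: a hypothetical check that a resource's identifier uses a persistent scheme and actually resolves. The metric name, the list of persistent schemes, and the pass criteria are illustrative assumptions, not the working group's actual metrics.

```python
import re
import requests  # third-party: pip install requests

# Hypothetical persistent-identifier schemes, chosen here for illustration only.
PERSISTENT_SCHEMES = r"^https?://(doi\.org|dx\.doi\.org|hdl\.handle\.net)/"

def looks_persistent(identifier: str) -> bool:
    """Heuristic: does the identifier come from a recognized persistent scheme?"""
    return bool(re.match(PERSISTENT_SCHEMES, identifier))

def resolves(identifier: str) -> bool:
    """Objective, reproducible test: does the identifier resolve over HTTP?"""
    try:
        response = requests.head(identifier, allow_redirects=True, timeout=10)
        return response.status_code < 400
    except requests.RequestException:
        return False

def assess_identifier(identifier: str) -> dict:
    """Return a quantitative, machine-interpretable result rather than a judgment."""
    return {
        "metric": "hypothetical-identifier-persistence-check",
        "identifier": identifier,
        "persistent_scheme": looks_persistent(identifier),
        "resolvable": resolves(identifier),
    }

if __name__ == "__main__":
    # The FAIR principles paper's DOI, used here only as a test subject.
    print(assess_identifier("https://doi.org/10.1038/sdata.2016.18"))
```

Because each step is deterministic and the output is structured data, independent evaluators running the same test against the same resource should obtain the same result, which is what makes such a metric reproducible and scalable.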
The FAIRness of a resource depends on information about organizations, affiliations, policies, URLs, and other entities that are inherently dynamic. Hence, measures of FAIRness are likely to vary over time. The reports on the FAIRness of a digital resource must therefore themselves be FAIR, and must indicate when, and by what mechanism, the report was generated.
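As a minimal sketch of such a self-describing report, the snippet below stamps the evaluation output with its generation time and the generating mechanism. The field names (loosely borrowed from the W3C PROV vocabulary) and the evaluator identifier are illustrative assumptions, not a prescribed report format.

```python
import json
from datetime import datetime, timezone

def build_fairness_report(resource_iri: str, results: dict) -> str:
    """Wrap per-metric results in a report that says when and how it was made."""
    report = {
        "@context": {"prov": "http://www.w3.org/ns/prov#"},
        "subject": resource_iri,
        "results": results,  # per-metric outcomes from the evaluation run
        "prov:generatedAtTime": datetime.now(timezone.utc).isoformat(),
        "prov:wasGeneratedBy": "hypothetical-fair-evaluator/0.1",  # the mechanism
    }
    return json.dumps(report, indent=2)

# Example: a report about the FAIR principles paper, with made-up results.
print(build_fairness_report(
    "https://doi.org/10.1038/sdata.2016.18",
    {"identifier_persistence": True, "metadata_retrievable": True},
))
```

Publishing the assessment in a structured, timestamped form like this keeps the report itself findable and reusable, even as the underlying resource changes.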
Process for developing FAIRness metrics
- Establish charter/objectives (June 3-4, 2017)
- Establish guidelines for the development of the FAIR metrics (June 3-4, 2017)
- Establish a metric proposal form (June 3-4, 2017)
- Examine each FAIR principle (June-Sept 2017)
  - Propose one or more possible metrics
  - Fill out the metric proposal form
  - Discuss the merits of the proposal
    - does it conform to the principles for FAIR metrics (clear, realistic, discriminating, measurable, universal)?
    - is it atomic, or does it comprise a number of different or complementary aspects?
  - Iteratively refine and test the metric proposal until consensus is pragmatically achieved
- Prepare a public “release candidate” report detailing all proposed FAIR metrics (Sept 2017)
  - Hold a webinar, which will also be recorded
- Implementation phase to obtain feedback (Oct-Nov 2017)
- Clarification and/or revision and/or extension of the FAIR metrics where necessary to address public comments (Jan-Feb 2018)
- Final recommendation for the core set of FAIR metrics (March 2018)
  - Hold a webinar, which will also be recorded
Deliverables
- Framework Document (July 2017)
- FAIR Metric Proposal Form (July 2017)
- Draft recommendation for and webinar on the core set of FAIR metrics (Sept 2017)
- Computational framework for the Automated Evaluation of the FAIR Metrics (Sept 2017)
- Final recommendation for and webinar on the core set of FAIR metrics (March 2018)
- Reference Implementation for Automated Evaluation of the FAIR Metrics* (March 2018)

* The reference implementation will be a model implementation, but its completeness is subject to the availability of resources.