Indicators and Systems for Monitoring Open Science

EOSC Synergy partners are well aware that the studies performed and the reports written provide only snapshots of an Open Science landscape that is in full motion. It is also recognised elsewhere that monitoring systems and Open Science dashboards are needed to keep track of the changes that are taking place. Monitoring systems for EOSC readiness and implementation need regular updates, and can sometimes even reflect developments in real time (“live indicators”). Much depends, however, on which indicators are chosen, on their international comparability, and on their validity and robustness.

EOSC Synergy has attempted to use indicators for its landscape and gap analyses that were also used by other projects and bodies, such as those proposed or used by the EOSC Pillar project, the EOSC Landscape Task Force (TF), the EOSC Marketplace, the Sustainability Working Group (WG), the EOSC Strategic Research and Innovation Agenda (SRIA), and the Landscape WG.

To analyse the difference between the current situation and the EOSC goals described in the EOSC SRIA, the EOSC Synergy project focused on the following subject areas in the gap analysis: Open Science Strategy; Persistent Identifiers (PIDs); Funding; Access Provisioning; Awareness, Maturity and Interest. The underlying indicators and data derived from the country landscape reports were grouped into these subject areas.

The findings showed a positive correlation between the earlier adoption of Open Access and Open Science policies and the adoption of Open/FAIR Data strategies. The related recommendations propose bridging the identified gaps by strengthening and implementing RDM (research data management) and FAIR data policies and practices, and by raising awareness of these subjects.

Currently, we are observing a move from interest in one-off analyses towards more permanent monitoring systems, and several Proofs of Concept and prototypes of such systems have been developed. Moreover, several lists of indicators to be monitored have been discussed and proposed. Collaboration and coordination in this area are key to avoiding the establishment of a variety of Open Science monitoring systems, observatories and dashboards, each with its own disparate and peculiar list of indicators.

A second issue that we see, and would like to warn about, is the tendency towards unrealistically extensive indicator lists, visualised by multiple systems. The focus seems to be “more is better”, with too little attention to the quality, availability and (international) comparability of the indicators. What will we learn from monitoring dozens of metrics drawn from incomparable registration systems of doubtful reliability? The danger is that we will soon have multiple monitoring tools, all working differently, with diverging sets of low-quality indicators that are often available for just a few countries. There are already systems around that display the indicators attractively but hide the underlying chaos and lack of reliability; worse, the nicer the visualisation, the less transparent it is how the indicators are actually measured, or what the metrics really represent.

This is why we do not want to push forward yet another list of indicators (the EOSC Synergy list), but rather recommend the following:

  • Be selective in the choice of indicators: it is better to have 10 high-quality indicators than 50 that are unreliable.
  • Select those indicators that appear in multiple reports (including, of course, those in the EOSC Synergy reports).
  • Select indicators that are internationally comparable and available.
  • Make sure that the provenance of the indicators is clear and that the indicators are truly transparent, i.e. that the way in which an indicator is measured is documented and accountable, and that the source of information is accessible.
  • Set up a mechanism with clear responsibilities per country for the delivery and updating of the indicators.
  • Don’t waste money on developing competing systems for the visualisation of indicators. This is not where the real challenge lies: several open-source systems already exist and can simply be implemented.
  • Do pay attention to the complete pipeline from the source to the presentation of the indicators, including standard formats (or forms) for supplying the information; a minimal sketch of what such a record format could look like is given after this list.
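
To make the last two recommendations more concrete, the sketch below shows one possible shape for a standard, provenance-aware record in which a country could supply a single indicator value. It is a minimal illustration only: the field names, the example indicator "open_access_share", the placeholder URLs and the validation rules are our own assumptions, not a format defined by EOSC or EOSC Synergy.

    # Hypothetical sketch of a provenance-aware indicator record (Python).
    # Field names and validation rules are illustrative assumptions only.
    from dataclasses import dataclass, asdict, field
    from datetime import date
    import json

    @dataclass
    class IndicatorRecord:
        indicator_id: str        # e.g. "open_access_share" (hypothetical identifier)
        country: str             # ISO 3166-1 alpha-2 code, e.g. "NL"
        value: float             # the measured value
        unit: str                # e.g. "percent"
        reference_period: str    # period the value refers to, e.g. "2021"
        source_url: str          # where the underlying data can be inspected
        methodology_url: str     # public description of how the value is measured
        responsible_org: str     # organisation accountable for delivery and updates
        collected_on: date = field(default_factory=date.today)

        def validate(self) -> list:
            """Return a list of problems; an empty list means the record is usable."""
            problems = []
            if len(self.country) != 2 or not self.country.isalpha():
                problems.append("country must be an ISO 3166-1 alpha-2 code")
            if not self.source_url.startswith(("http://", "https://")):
                problems.append("source_url must be a resolvable URL (provenance)")
            if not self.methodology_url.startswith(("http://", "https://")):
                problems.append("methodology_url must point to a public method description")
            return problems

        def to_json(self) -> str:
            """Serialise to JSON so the same record can feed any visualisation tool."""
            d = asdict(self)
            d["collected_on"] = self.collected_on.isoformat()
            return json.dumps(d, indent=2)

    if __name__ == "__main__":
        record = IndicatorRecord(
            indicator_id="open_access_share",                        # hypothetical indicator
            country="NL",
            value=72.5,
            unit="percent",
            reference_period="2021",
            source_url="https://example.org/openaccess/nl/2021",     # placeholder URL
            methodology_url="https://example.org/methods/oa-share",  # placeholder URL
            responsible_org="Example National Open Science Desk",    # placeholder name
        )
        issues = record.validate()
        print(issues or record.to_json())

Whatever format is eventually agreed upon, the essential point illustrated here is that every value carries its own provenance (source, methodology, responsible organisation), so that any downstream dashboard can expose how the number was obtained rather than hide it behind an attractive visualisation.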

In short, the data on which an Open Science monitor or dashboard is to be based should be FAIR as well!