Automated Quality Assessment of Metadata across Open Data Portals
Top 1% of 2016 papers
Abstract
The Open Data movement has become a driver for publicly available data on the Web. More and more data, from governments and public institutions but also from the private sector, are made available online, mainly through so-called Open Data portals. However, with the increasing number of published resources, a number of concerns arise regarding the quality of the data sources and the corresponding metadata, which compromises the searchability, discoverability, and usability of resources. To obtain a more complete picture of the severity of these issues, the present work develops a generic metadata quality assessment framework for various Open Data portals: we treat data portals independently of the underlying portal software by mapping the specific metadata of three widely used portal software frameworks (CKAN, Socrata, OpenDataSoft) to the standardized Data Catalog Vocabulary (DCAT) metadata schema. We then define several quality metrics that can be evaluated automatically and efficiently. Finally, we report findings from monitoring a set of over 260 Open Data portals comprising 1.1M datasets, including a discussion of general quality issues, for example the retrievability of data, and an analysis of our quality metrics.
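The kind of automated metric described above can be illustrated with a small sketch. The following is a minimal, hypothetical completeness metric over DCAT-style metadata; the field list, dataset dictionaries, and scoring are illustrative assumptions for exposition, not the paper's actual framework or metric definitions.

```python
# Illustrative sketch (not the paper's implementation): a completeness
# metric over metadata that has been mapped to DCAT-like fields.
# The chosen field names loosely follow dcat:Dataset properties.

DCAT_FIELDS = ["title", "description", "license", "issued",
               "modified", "publisher", "distribution"]

def completeness(dataset: dict) -> float:
    """Fraction of the core DCAT fields that are present and non-empty."""
    filled = sum(1 for f in DCAT_FIELDS if dataset.get(f))
    return filled / len(DCAT_FIELDS)

def portal_completeness(datasets: list) -> float:
    """Average completeness over all datasets harvested from one portal."""
    if not datasets:
        return 0.0
    return sum(completeness(d) for d in datasets) / len(datasets)

# Example: one fully described dataset and one sparsely described one.
d1 = {"title": "Air quality", "description": "Hourly PM10 readings",
      "license": "CC-BY", "issued": "2016-01-01", "modified": "2016-06-01",
      "publisher": "City", "distribution": [{"downloadURL": "http://example.org/a.csv"}]}
d2 = {"title": "Budget"}

print(portal_completeness([d1, d2]))  # → 0.5714285714285714 (i.e. 4/7)
```

Because such a metric depends only on the presence of mapped fields, it can be evaluated automatically across portals regardless of which portal software originally produced the metadata.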